WO2002063479A1 - Distributed computing system - Google Patents

Distributed computing system

Info

Publication number
WO2002063479A1
WO2002063479A1 (PCT/US2002/003218)
Authority
WO
WIPO (PCT)
Prior art keywords
engine
engines
broker
job
message
Prior art date
Application number
PCT/US2002/003218
Other languages
French (fr)
Inventor
James Bernardin
Peter Lee
James Lewis
Original Assignee
Datasynapse, Inc.
Priority date
Filing date
Publication date
Priority claimed from US09/777,190 external-priority patent/US20020023117A1/en
Application filed by Datasynapse, Inc. filed Critical Datasynapse, Inc.
Publication of WO2002063479A1 publication Critical patent/WO2002063479A1/en
Priority to US10/222,337 priority Critical patent/US20030154284A1/en
Priority to US10/306,689 priority patent/US7093004B2/en
Priority to US10/728,732 priority patent/US7130891B2/en
Priority to US11/981,137 priority patent/US8195739B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/465Distributed object oriented systems

Definitions

  • the present invention relates generally to the field of high-performance computing ("HPC") and, more specifically, to systems and techniques for distributed and/or parallel processing.
  • HPC high-performance computing
  • HPC has long been a focus of both academic research and commercial development, and the field presents a bewildering array of standards, products, tools, and consortia. Any attempt at comparative analysis is complicated by the fact that many of these interrelate not as mutually exclusive alternatives, but as complementary components or overlapping standards.
  • Parallel Virtual Machine (PVM) provided a general mechanism, based on a standard API and messaging protocol, for parallel computation over networks of general-purpose processors. More recently, MPI (the Message Passing Interface) has gained ground. Although they differ in many particulars, both are essentially standards that specify an API for developing parallel algorithms and the behavioral requirements for participating processors. Libraries provide access to the API from C and/or Fortran, and client implementations are available for nearly every operating system and hardware configuration.
  • Grid Computing represents a more amorphous and broad-reaching initiative - in certain respects, it is more a philosophical movement than an engineering project.
  • the overarching objective of Grid Computing is to pool together heterogeneous resources of all types (e.g., storage, processors, instruments, displays, etc.), anywhere on the network, and make them available to all users. Key elements of this vision include decentralized control, shared data, and distributed, interactive collaboration.
  • Clusters provide HPC by aggregating commodity, off-the-shelf technology (COTS).
  • COTS off-the-shelf technology
  • Beowulf a loose confederation of researchers and developers focused on clusters of Linux-based PCs.
  • Berkeley NOW Network of Workstations
  • clusters and grid implementations share, and in many cases exacerbate, some of the most important weaknesses of supercomputing hardware solutions, particularly within a commercial enterprise environment.
  • Complex, low-level APIs necessitate protracted, costly development and integration efforts.
  • Administration, especially scheduling and management of distributed resources is burdensome and expensive.
  • elaborate custom development is needed to provide fault tolerance and reliability. Both developers and administrators require extensive training and special skills.
  • although clusters offer some advantages versus dedicated hardware with respect to scale, fragility and administrative complexity effectively impose hard limits on the number of nodes - commercial installations with as many as 50 nodes are rare, and only a handful support more than 100.
  • the invention provides an improved, Grid-like distributed computing system that addresses the practical needs of real-world commercial users, such as those in the financial services and energy industries.
  • BRIEF SUMMARY OF THE INVENTION The invention provides an off-the-shelf product solution to target the specific needs of commercial users with naturally parallel applications.
  • a top-level, public API provides a simple "compute server” or “task farm” model that dramatically accelerates integration and deployment.
  • by providing turnkey support for enterprise features like fault-tolerant scheduling, fail-over, load balancing, and remote, central administration, the invention eliminates the need for customized middleware and yields enormous, on-going savings in maintenance and administrative overhead.
  • P2P peer-to-peer
  • the invention supports effectively unlimited scaling over commoditized resource pools, so that end-users can add resources as needed, with no incremental development cost.
  • the invention seamlessly incorporates both dedicated and intermittently idle resources on multiple platforms (Windows™, Unix, Linux, etc.), and it provides true idle detection and automatic fault-tolerant rescheduling.
  • the invention thereby provides a system that can operate on user desktops during peak business hours without degrading performance or intruding on the user experience in any way.
  • one aspect of the invention relates to distributed computing systems comprising, for example: a plurality of engines; at least one broker; at least one client application, the client application having an associated driver; the driver being configured to enable communication between the client application and two or more of the engines via a peer-to-peer communication network; the system characterized in that (i) the driver is further configured to enable communication between the client application and the at least one broker over the peer-to-peer network and (ii) the broker is further configured to communicate with the engines over the peer-to-peer network, thereby enabling the broker to control and supervise the execution of tasks provided by the client application on the two or more engines.
  • the system may further include at least one failover broker configured to communicate with the driver and the engines, and, in the event of a broker failure, control and supervise the execution of tasks provided by the client application on the two or more engines.
  • the broker may further include an adaptive scheduler configured to selectively assign and control the execution of tasks provided by the client application on the engines.
  • the adaptive scheduler may be further configured to redundantly assign one or more of the task(s) provided by the client application to multiple engines, so as to ensure the timely completion of such redundantly assigned task(s) by at least one of the engines.
  • the tasks provided by the client application may have associated discriminators.
  • the broker may utilize parameters associated with such discriminators and the engines to determine the assignment of tasks to engines.
  • the system may control the timing of selected
  • the broker and the two or more engines may each include an associated propagator object that permits control over engine-to-engine propagation of data over the peer-to-peer network.
  • the propagator objects may enable an engine or broker node to perform at least three, four, five, six, seven or eight of the following operations: (i) broadcast a message to all nodes, except the current node; (ii) clear all message(s), and associated message state(s), on specified broker(s) and/or engine(s); (iii) get message(s) for the current node; (iv) get the message(s) from a specified node for the current node; (v) get the state of a specified node; (vi) get the total number of nodes; (vii) send a message to a specified node; and/or (viii) set the state of a specified node.
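  • By way of illustration only, these per-node operations might be captured in a Java interface along the following lines; the interface name, method names and signatures are assumptions for this sketch, not the literal LiveCluster API:

      import java.io.Serializable;

      // Hypothetical sketch of the propagator operations enumerated above.
      // Only the set of operations comes from the text; all names and types are assumed.
      public interface NodePropagator {
          void broadcast(Serializable message);               // (i) send to all nodes except the current node
          void clearMessagesAndStates(int[] nodeIds);         // (ii) clear messages and message states on specified nodes
          Serializable[] getMessages();                       // (iii) get messages for the current node
          Serializable[] getMessagesFrom(int nodeId);         // (iv) get messages from a specified node
          Serializable getState(int nodeId);                  // (v) get the state of a specified node
          int getNodeCount();                                 // (vi) get the total number of nodes
          void sendMessage(int nodeId, Serializable message); // (vii) send a message to a specified node
          void setState(int nodeId, Serializable state);      // (viii) set the state of a specified node
      }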
  • Still further aspects of the present invention relate to other system configurations, methods, software, encoded articles-of-manufacture and/or electronic data signals comprised of, or produced in accordance with, portions of the preferred LiveCluster embodiment, described in detail below.
  • FIGs. 1-2 depict data flows in the preferred LiveCluster embodiment of the invention
  • FIGs. 3-12 are code samples from the preferred LiveCluster embodiment of the invention
  • FIG. 13 depicts comparative data flows in connection with the preferred LiveCluster embodiment of the invention
  • FIGs. 14-31 are code samples from the preferred LiveCluster embodiment of the invention.
  • FIGs. 32-53 are screen shots from the preferred LiveCluster embodiment of the invention
  • FIGs. 33-70 are code samples from the preferred LiveCluster embodiment of the invention
  • FIG. 71 illustrates data propagation using propagators in accordance with the preferred LiveCluster embodiment of the invention
  • FIGs. 72-81 are code samples from the preferred LiveCluster embodiment of the invention.
  • FIGs. 82-87 depict various illustrative configurations of the preferred LiveCluster embodiment of the invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Broker A subcomponent of a Server that is responsible for maintaining a "job space" for managing Jobs and Tasks and the associated interactions with Drivers and Engines.
  • Daemon A process in Unix that runs in the background and performs specific actions or runs a server with little or no direct interaction.
  • Driver The component that maintains the interface between the LiveCluster Server and the client application.
  • Failover Broker A Broker configured to take on work when another Broker fails.
  • the Failover Broker will continue to accept Jobs until another
  • LiveCluster LiveCluster provides a flexible platform for distributing large computations to idle, underutilized and/or dedicated processors on any network.
  • the LiveCluster architecture includes a Driver, one or more Servers, and several Engines.
  • a Server typically contains a Driver and a Database
  • Task An atomic unit of work. Jobs are broken into Tasks and then distributed to Engines for computation.
  • Standalone Broker A Server that has been configured with a Broker, but no Director; its configured primary and secondary Directors are both in other Servers.
  • LiveCluster supports a simple but powerful model for distributed parallel processing.
  • the basic configuration incorporates three major components — Drivers, Servers, and Engines.
  • the LiveCluster model works as follows:
  • Client applications (via Drivers) submit messages with work requests to a central Server.
  • the Server distributes the work to a network of Engines, or individual CPUs with LiveCluster installed.
  • the Server collects the results and returns them to the Drivers.

Tasks and Jobs
  • a Job is a unit of work. Typically, this refers to one large problem that has a single solution.
  • a Job is split into a number of smaller units, each called a Task.
  • An application utilizing LiveCluster submits problems as Jobs, and LiveCluster breaks the Jobs into Tasks.
  • Other computers solve the Tasks and return their results, where they are added, combined, or collated into a solution for the Job.
  • the LiveCluster system is implemented almost entirely in Java. Except for background daemons and the installation program, each component is independent of the operating system under which it is installed. The components are designed to support interoperation across both wide and local area networks (WANs and LANs), so the design is very loosely coupled, based on asynchronous, message-driven interactions. Configurable settings govern message encryption and the underlying transport protocol.
  • WANs and LANs wide and local area networks
  • the Server is the most complex component in the system. Among other things, the Server:
  • the Server functionality is partitioned into two subcomponent entities: the Broker and the Director.
  • the Broker is responsible for maintaining a "job space" for managing Jobs and Tasks and the associated interactions with Drivers and Engines.
  • the primary function of the Director is to manage Brokers.
  • each Server instance imbeds a Broker/Director pair.
  • the simplest fault-tolerant configuration is obtained by deploying two Broker/Director pairs on separate processors, one as the primary, the other to support failover.
  • Brokers and Directors are isolated within separate Server instances to form a two-tiered Server network.
  • the Server is installed as a service (under Windows) or as a daemon (under Unix) — but it can also run "manually,” under a log-in shell, which is primarily useful for testing and debugging.
  • the Driver component maintains the interface between the LiveCluster Server and the client application.
  • the client application code imbeds an instance of the Driver.
  • in Java, the Driver (called JDriver) exists as a set of classes within the Java Virtual Machine (JVM).
  • JVM Java Virtual Machine
  • in C++, the Driver (called Driver++) is purely native, and exists as a set of classes within the application.
  • the client code submits work and administrative commands and retrieves computational results and status information through a simple API, which is available in both Java and C++.
  • Application code can also interact directly with the Server by exchanging XML messages over HTTP.
  • the Driver submits Jobs to the Server, and the Server returns the results of the individual component Tasks asynchronously to the Driver.
  • the Driver may exchange messages directly with the Engines within a transaction space maintained by the Server.
  • Engine Engines report to the Server for work when they are available, accept Tasks, and return the results.
  • Engines are invoked on desktop PCs, workstations, or on dedicated servers by a native daemon. Typically, there will be one Engine invoked per participating CPU. For example, four Engines might be invoked on a four-processor SMP.
  • LiveCluster platform provides reliable computations over networks of interruptible Engines, making it possible to utilize intermittently active resources when they would otherwise remain idle.
  • the Engine launches when it is determined that the computer is idle (or that a sufficient system capacity is available in a multi-CPU setting) and relinquishes the processor immediately in case it is interrupted (for example, by keyboard input on a desktop PC). It is also possible to launch one or more Engines on a given processor deterministically, so they run in competition with other processes (and with one another) as scheduled by the operating system. This mode is useful both for testing and for installing Engines on dedicated processors.
  • Engines are typically installed on network processors, where they utilize intermittently available processing capacity that would otherwise go unused. This is accomplished by running an extremely lightweight background process on the Engine. This invocation process monitors the operating system and launches an Engine when it detects an appropriate idle condition.
  • Fault-tolerant adaptive scheduling provides a simple, elegant mechanism for obtaining reliable computations from networks of varying numbers of Engines with different available CPU resources.
  • Engines report to the Server when they are "idle” — that is, when they are available to take work.
  • the Engine "logs in," initiating a login session.
  • the Engine polls the Server for work, accepts Task definitions and inputs, and returns results. If a computer is no longer idle, the Engine halts, and the task is rescheduled to another Engine. Meanwhile, the Server tracks the status of Tasks that have been submitted to the Engines, and reschedules tasks as needed to ensure that the Job (collection of Tasks) completes.
  • this scheme is called "adaptive" because the scheduling of Tasks on the Engines is demand-driven. So long as the maximum execution time for any Task is small relative to the average "idle window" (that is, the length of the average log-in session, between logging in and dropping out), adaptive scheduling provides a robust, scalable solution for load balancing. More capable Engines, or Engines that receive lighter Tasks, simply report more frequently for work. In case the Engine drops out because of a "clean" interruption — because it detects that the host processor is no longer "idle" — it sends a message to the Server before exiting.
  • Directory replication is a method to provide large files that change relatively infrequently. Instead of sending the files each time a Job is submitted and incurring the transfer overhead, the files are sent to each Engine once, where they are cached.
  • the Server monitors a master directory structure and maintains a synchronized replica of this directory on each Engine, by synchronizing each Engine with the files.
  • This method can be used on generic files, or platform-specific items, such as Java .jar files, DLLs, or object libraries.
  • LiveCluster API Before examining the various features and options provided by LiveCluster, it is appropriate to introduce the basic features of the LiveCluster API by means of several sample programs.
  • the basic LiveCluster API consists of the TaskInput, TaskOutput and Tasklet interfaces, and the Job class. LiveCluster is typically used to run computations on different inputs in parallel. The computation to be run is implemented in a Tasklet. A Tasklet takes a TaskInput, operates on it, and produces a TaskOutput. Using a Job object, the application submits a set of TaskInputs and processes the TaskOutputs as they arrive.
  • FIG. 1 illustrates the relationships among the basic API elements. Although it is helpful to think of a task as a combination of a Tasklet and one Tasklnput, there is no Task class in the API. To understand the basic API better, we will write a simple LiveCluster job.
  • the job generates a unique number for each task, which is given to the tasklet as its TaskInput.
  • the tasklet uses the number to return a TaskOutput consisting of a string.
  • the example requires writing a TaskInput class, a TaskOutput class, a Tasklet and a Job, plus one named Test that contains the main method for the program.
  • TaskInput class The basic API is found in the com.livecluster.tasklet package, so one should import that package (see FIG. 3).
  • the TaskInput interface contains no methods, so one need not implement any. Its only purpose is to mark one's class as a valid TaskInput.
  • the TaskInput interface also extends the Serializable interface of the java.io package, which means that all of the class's instance variables must be serializable (or transient). Serialization is used to send the TaskInput object from the Driver to an Engine over the network. As its name suggests, the SimpleTaskInput class is quite simple: it holds a single int representing the unique identifier for a task. For convenience, one need not make the instance variable private.
  • TaskOutput, like TaskInput, is an empty interface that extends Serializable, so the output class should not be surprising (see FIG. 4).

Writing a Tasklet
  • the Tasklet interface defines a single method: public TaskOutput service(TaskInput);
  • the service method performs the computation to be parallelized. For our Hello program, this involves taking the task identifier out of the TaskInput and returning it as part of the TaskOutput string (see FIG. 5).
  • the service method begins by extracting the task identifier from the TaskInput.
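  • Since FIGs. 3-5 are not reproduced here, the following is a minimal sketch of what SimpleTaskInput, SimpleTaskOutput and HelloTasklet might look like, based on the description above; any detail beyond what the text states (field names, the exact greeting string) is an assumption:

      import com.livecluster.tasklet.*;   // TaskInput, TaskOutput, Tasklet (package per the text)

      // Input: a marker-interface class holding the task's unique identifier (cf. FIG. 3).
      class SimpleTaskInput implements TaskInput {
          int taskId;                      // left non-private for convenience, as noted above
      }

      // Output: a string wrapped in a TaskOutput (cf. FIG. 4).
      class SimpleTaskOutput implements TaskOutput {
          String message;
      }

      // The tasklet turns the task identifier into a greeting string (cf. FIG. 5).
      class HelloTasklet implements Tasklet {
          public TaskOutput service(TaskInput input) {
              SimpleTaskInput in = (SimpleTaskInput) input;    // extract the task identifier
              SimpleTaskOutput out = new SimpleTaskOutput();
              out.message = "Hello from task " + in.taskId;
              return out;
          }
      }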
  • LiveCluster provides a high-performance, fault-tolerant, highly parallel way to repeatedly execute the line:
  • TaskOutput output = tasklet.service(input);
  • Provide a createTaskInputs method to create all of the TaskInput objects. Call the addTaskInput method on each TaskInput one creates to add it to the job. Each TaskInput one adds results in one task.
  • the Hello Job class is displayed in FIG. 7.
  • the constructor creates a single HelloTasklet and installs it into the job with the setTasklet method.
  • the createTaskInputs method creates ten instances of SimpleTaskInput, sets their taskIds to unique values, and adds each one to the job with the addTaskInput method.
  • the processTaskOutput method displays the string that is inside its argument.
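  • FIG. 7 is not reproduced here; a minimal sketch of a HelloJob class consistent with the description above might read as follows (method modifiers and other details of the actual FIG. 7 code may differ):

      import com.livecluster.tasklet.*;

      class HelloJob extends Job {
          public HelloJob() {
              setTasklet(new HelloTasklet());        // install the single tasklet for this job
          }

          public void createTaskInputs() {
              for (int i = 0; i < 10; i++) {         // ten tasks, each with a unique identifier
                  SimpleTaskInput input = new SimpleTaskInput();
                  input.taskId = i;
                  addTaskInput(input);               // each added input becomes one task
              }
          }

          public void processTaskOutput(TaskOutput output) {
              SimpleTaskOutput out = (SimpleTaskOutput) output;
              System.out.println(out.message);       // display the string inside the output
          }
      }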
  • the Test class (see FIG. 8) consists of a main method that runs the job.
  • the first line creates the job.
  • the second line has to do with distributing the necessary class files to the Engines.
  • the third line executes the job by submitting it to the LiveCluster Server, then waits until the job is finished. (The related executeInThread method runs the job in a separate thread, returning immediately.)
  • the JobOptions class allows one to configure many features of the job. For instance, one can use it to set a name for the job (useful when looking for a job in the Job List of the LiveCluster Administration tool), and to set the job's priority.
  • the JobOptions method setJarFile takes the name of a jar file. This jar file should contain all of the files that an Engine needs to run the tasklet. In this case, those are the class files for SimpleTaskInput, SimpleTaskOutput, and HelloTasklet.
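  • A sketch of the Test class consistent with the description of FIG. 8 (not reproduced); the jar file name is hypothetical, the getOptions accessor is an assumption, and the blocking execute method is inferred (only executeInThread is named explicitly in the text):

      class Test {
          public static void main(String[] args) throws Exception {
              HelloJob job = new HelloJob();                     // first line: create the job
              job.getOptions().setJarFile("hello-tasks.jar");    // second line: distribute the needed class files (accessor assumed)
              job.execute();                                     // third line: submit to the Server and wait for completion
              // job.executeInThread();                          // alternative: run the job in a separate thread, returning immediately
          }
      }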
  • the basic API consists of the TaskInput, TaskOutput and Tasklet interfaces and the Job class. Typically, one will write one class that implements TaskInput, one that implements TaskOutput, one that implements Tasklet, and one that extends Job.
  • a Tasklet's service method implements the computation that is to be performed in parallel. The service method takes a TaskInput as argument and returns a TaskOutput.
  • a Job object manages a single Tasklet and a set of TaskInputs. It is responsible for providing the TaskInputs, starting the job and processing the TaskOutputs as they arrive.
  • a tasklet embodies the common activity, and each TaskInput contains a portion of the data.
  • the Domain Classes Before looking at the LiveCluster classes, we will first discuss the classes related to the application domain. There are six of these: Deal, ZeroCouponBond, Valuation, DealProvider, PricingEnvironment and DateUtil.
  • Each deal is represented by a unique integer identifier.
  • Deals are retrieved from a database or other data source via the DealProvider.
  • Deal's value method takes a PricingEnvironment as an argument, computes the deal's value, and returns a Valuation.
  • ZeroCouponBond represents a type of deal that offers a single, fixed payment at a future time.
  • DateUtil contains a utility function for computing the time between two dates.
  • the Deal class is abstract, as is its value method (see FIG. 9).
  • the value method's argument is a PricingEnvironment, which has methods for retrieving the interest rates and the valuation date, the reference date from which the valuation is taking place.
  • the value method returns a Valuation, which is simply a pair of deal ID and value. Both Valuation and PricingEnvironment are serializable so they can be transmitted over the network between the Driver and Engines.
  • ZeroCouponBond is a subclass of Deal that computes the value of a bond with no interest, only a principal payment made at a maturity date (see FIG. 10).
  • the value method uses information from the PricingEnvironment to compute the present value of the bond's payment by discounting it by the appropriate interest rate.
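  • FIGs. 9-10 are not reproduced here; the following sketch conveys the shape of the Deal and ZeroCouponBond classes described above. The method names on PricingEnvironment, Valuation and DateUtil, and the discounting convention, are assumptions inferred from the text:

      import java.io.Serializable;
      import java.util.Date;

      // Pair of deal ID and value, serializable so it can travel between Driver and Engines.
      class Valuation implements Serializable {
          final int dealId;
          final double value;
          Valuation(int dealId, double value) { this.dealId = dealId; this.value = value; }
      }

      // Supplies interest rates and the valuation (reference) date; accessor names assumed.
      interface PricingEnvironment extends Serializable {
          Date getValuationDate();
          double getInterestRate(double years);
      }

      class DateUtil {
          // Utility for computing the time between two dates, here in fractional years.
          static double yearsBetween(Date from, Date to) {
              return (to.getTime() - from.getTime()) / (365.25 * 24 * 60 * 60 * 1000.0);
          }
      }

      abstract class Deal implements Serializable {
          private final int dealId;
          Deal(int dealId) { this.dealId = dealId; }
          int getDealId() { return dealId; }
          abstract Valuation value(PricingEnvironment env);   // compute the deal's value
      }

      class ZeroCouponBond extends Deal {
          private final double principal;
          private final Date maturity;

          ZeroCouponBond(int dealId, double principal, Date maturity) {
              super(dealId);
              this.principal = principal;
              this.maturity = maturity;
          }

          // Present value of the single principal payment, discounted from maturity
          // back to the valuation date at the appropriate interest rate.
          Valuation value(PricingEnvironment env) {
              double years = DateUtil.yearsBetween(env.getValuationDate(), maturity);
              double rate = env.getInterestRate(years);
              double presentValue = principal / Math.pow(1.0 + rate, years);
              return new Valuation(getDealId(), presentValue);
          }
      }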
  • the DealProvider class simulates retrieving deals from persistent storage.
  • the getDeal method accepts a deal ID and returns a Deal object.
  • Our version (see FIG. 11) caches deals in a map. If the deal ID is not in the map, a new ZeroCouponBond is created.
  • the first question is how to provide deals to the tasklet.
  • One choice is to load the deal on the Driver and send the Deal object in the Tasklnput; the other is to send just the deal ID, and let the tasklet load the deal itself.
  • the second way is likely to be much faster, for two reasons: reduced data movement and increased parallelism.
  • consider FIG. 13, the left portion of which illustrates the connections among the Driver, the Engines, and your data server, on which the deal data resides.
  • the left-hand diagram illustrates the data flow that occurs when the Driver loads deals and transmits them to the Engines.
  • the deal data travels across the network twice: once from the data server to the Driver, and again from the Driver to the Engine.
  • the right-hand diagram shows what happens when only the deal IDs are sent to the Engines.
  • the data travels over the network only once, from the data server to the Engine.
  • the third design decision for our illustrative LiveCluster portfolio valuation application concerns how many deals to include in each task. Placing a single deal in each task yields maximum parallelism, but it is unlikely to yield maximum performance. The reason is that there is some communication overhead for each task.
  • the granularity — amount of work — of a task should be large compared to the communication overhead. If it is too large, however, then two other factors come into play. First and most obviously, if one has too few tasks, one will not have much parallelism. The third row of the table illustrates this case. By placing 100 deals in each TaskInput, only ten of the 100 available Engines will be working. Second, a task may fail for a variety of reasons — the Engine may encounter hardware, software or network problems, or someone may begin using the machine on which the Engine is running, causing the Engine to stop immediately. When a task fails, it must be rescheduled, and will start from the beginning. Failed tasks waste time, and the longer the task, the more time is wasted. For these reasons, the granularity of a task should not be too large.
  • Task granularity is an important parameter to keep in mind when tuning an application's performance. We recommend that a task take between one and five minutes. To facilitate tuning, it is wise to make the task granularity a parameter of one's Job class.
  • the TaskInput will be a list of deal IDs, and the TaskOutput a list of corresponding Valuations. Since both are lists of objects, we can get away with a single class for both TaskInput and TaskOutput.
  • This general-purpose ArrayListTaskIO class contains a single ArrayList (see FIG. 14).
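  • FIG. 14 is not reproduced; a minimal sketch of the general-purpose ArrayListTaskIO class described above (the accessor name is an assumption):

      import com.livecluster.tasklet.*;
      import java.util.ArrayList;

      // One class serving as both TaskInput and TaskOutput by wrapping a single ArrayList:
      // as an input it carries deal IDs, as an output it carries Valuations.
      class ArrayListTaskIO implements TaskInput, TaskOutput {
          private ArrayList list = new ArrayList();
          ArrayList getList() { return list; }
      }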
  • FIG. 15 shows the entire tasklet class.
  • the constructor accepts a PricingEnvironment, which is stored in an instance variable for use by the service method. As discussed above, this is an optimization that can reduce data movement because tasklets are cached on participating Engines.
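  • FIG. 15 is not reproduced; a sketch of a ValuationTasklet consistent with the description (the class name, the use of a non-static DealProvider instance, and the loop details are assumptions):

      import com.livecluster.tasklet.*;
      import java.util.Iterator;

      class ValuationTasklet implements Tasklet {
          private PricingEnvironment _env;   // shared by every task in the job; travels once with the tasklet

          public ValuationTasklet(PricingEnvironment env) { _env = env; }

          public TaskOutput service(TaskInput input) {
              ArrayListTaskIO in = (ArrayListTaskIO) input;     // a list of deal IDs
              ArrayListTaskIO out = new ArrayListTaskIO();      // the corresponding Valuations
              DealProvider provider = new DealProvider();       // deals are loaded on the Engine, not the Driver
              for (Iterator it = in.getList().iterator(); it.hasNext(); ) {
                  int dealId = ((Integer) it.next()).intValue();
                  Deal deal = provider.getDeal(dealId);
                  out.getList().add(deal.value(_env));
              }
              return out;
          }
      }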
  • ValuationJob is the largest of the three LiveCluster classes. Its constructor takes the total number of deals as well as the number of deals to allocate to each task. In a real application, the first parameter would be replaced by a list of deal IDs, but the second would remain to allow for tuning of task granularity.
  • the createTaskInputs method uses the total number of deals and number of deals per task to divide the deals among several TaskInputs. The code is subtle and is worth a careful look. In the event that the number of deals per task does not evenly divide the total number of deals, the last TaskInput will contain all the remaining deals.
  • the processTaskOutput method simply adds the TaskOutput's ArrayList of Valuations to a master ArrayList. Thanks to the deal IDs stored within each Valuation, there is no risk of confusion due to TaskOutputs arriving out of order.
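  • The ValuationJob figure is not reproduced; here is a sketch of the class under stated assumptions (deal IDs are taken to run from 1 to totalDeals, and the constructor and field names are assumed):

      import com.livecluster.tasklet.*;
      import java.util.ArrayList;

      class ValuationJob extends Job {
          private int _totalDeals, _dealsPerTask;
          private ArrayList _valuations = new ArrayList();   // master list of results

          public ValuationJob(int totalDeals, int dealsPerTask, PricingEnvironment env) {
              _totalDeals = totalDeals;
              _dealsPerTask = dealsPerTask;
              setTasklet(new ValuationTasklet(env));          // the tasklet carries the shared PricingEnvironment
          }

          public void createTaskInputs() {
              ArrayListTaskIO input = new ArrayListTaskIO();
              for (int dealId = 1; dealId <= _totalDeals; dealId++) {
                  input.getList().add(new Integer(dealId));
                  boolean full = input.getList().size() == _dealsPerTask;
                  boolean enoughLeft = _totalDeals - dealId >= _dealsPerTask;
                  // flush a full input only if a further full task remains; otherwise the
                  // remaining deals accumulate into the final (slightly larger) input
                  if (full && enoughLeft) {
                      addTaskInput(input);
                      input = new ArrayListTaskIO();
                  }
              }
              if (!input.getList().isEmpty()) addTaskInput(input);
          }

          public void processTaskOutput(TaskOutput output) {
              // order of arrival does not matter: each Valuation carries its deal ID
              _valuations.addAll(((ArrayListTaskIO) output).getList());
          }
      }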
  • the Test class has a main method that will run the application (see FIG. 18).
  • the initial lines of main load the properties file for the valuation application and obtain the values for totalDeals and dealsPerTask.
  • LiveCluster is ideal for data-parallel applications, such as portfolio valuation.
  • Tasklet object Since the Tasklet object is serialized and sent to each Engine, it can and should contain data that does not vary from task to task within a job.
  • Task granularity the amount of work that each task performs — is a crucial performance parameter for LiveCluster. The right granularity will amortize communication overhead while preventing the loss of too much time due to tasklet failure or interruption. Aim for tasks that run in a few minutes.
  • the EnginePropertiesTasklet class uses LiveCluster's EngineSession class to obtain the Engine's properties. It then prints them to the standard output.
  • the method begins by calling EngineSession's getProperties method to obtain a Properties object containing the Engine's properties. Note that EngineSession resides in the com.livecluster.tasklet.util package. The tasklet then prints out the list of engine properties to System.out, using the convenient list method of the Properties class.
  • the EngineSession class has two other methods, setProperty and removeProperty, with the obvious meanings. Changes made to the Engine's properties using these methods will last for the Engine's session.
  • a session begins when an Engine first becomes available and logs on to the Server, and typically ends when the Engine's JVM terminates. (Thus, properties set by a tasklet are likely to remain even after the tasklet's job finishes.) Note that calling the setProperties method of the Properties object returned from EngineSession.getProperties will not change the Engine's properties.
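  • A sketch of the EnginePropertiesTasklet described above (the corresponding figure is not reproduced); whether getProperties is static and what the tasklet returns are assumptions:

      import com.livecluster.tasklet.*;
      import com.livecluster.tasklet.util.EngineSession;   // package location per the text
      import java.util.Properties;

      class EnginePropertiesTasklet implements Tasklet {
          public TaskOutput service(TaskInput input) {
              // Obtain the Engine's properties and list them on the Engine's standard output.
              Properties props = EngineSession.getProperties();   // shown as a static call (assumption)
              props.list(System.out);
              return null;   // no meaningful result for this illustration (assumption)
          }
      }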
  • Test class is similar to the previously-described Test classes.
  • the output from each Engine should be similar to that shown in FIG. 21. The meaning of some of these properties is obvious, but others deserve comment.
  • the cpuNo property is the number of CPUs in the Engine's computer.
  • the id property is unique for each Engine's computer, while multiple Engines running on the same machine are assigned different instance properties starting from 0.
  • the log files will be placed on the Engine's machine under the directory where the Engine was installed. On Windows machines, this is C:\Program Files\DataSynapse\Engine by default. In LiveCluster, the log file is stored under ./work/[name]-[instance]/log.

Summary
  • Engine properties describe particular features of each Engine in the LiveCluster.
  • Engine properties are set automatically; but one can create and set one's own properties in the Engine Properties page of the Administration Tool.
  • the EngineSession class provides access to Engine properties from within a tasklet.
  • Discrimination is a powerful feature of LiveCluster that allows one to exert dynamic control over the relationships among Drivers, Brokers and Engines.
  • LiveCluster supports two kinds of discrimination: • Broker Discrimination: One can specify which Engines and Drivers can log in to a particular Broker. Access this feature by choosing Broker Discrimination in the Configure section of the LiveCluster Administration Tool.
  • Engine Discrimination One can specify which Engines can accept a task. This is done in one's code, or in an XML file used to submit the job.
  • Engine Discrimination This section discusses only Engine Discrimination, which selects Engines for particular jobs or tasks. Engine Discrimination has many uses. The possibilities include: • limiting a job to run on Engines whose usernames come from a specified set, to confine the job to machines under one's jurisdiction;
  • This class uses a Java Properties object to determine how to perform the discrimination.
  • the Properties object can be created directly in one's code, as we will exemplify below, or can be read from a properties file.
  • PropertyDiscriminator When using PropertyDiscriminator, one encodes the conditions under which an Engine can take a task by writing properties with a particular syntax. For example, setting the property cpuMFlops.gt to the value 80 specifies that the CPU speed of the candidate Engine, in megaflops, must be greater than 80 for the Engine to be eligible.
  • the discriminator property is of the form engine_property.operator. There are operators for string and numerical equality,
  • a PropertyDiscriminator can specify any number of conditions. All must be true for the Engine to be eligible to accept the task.
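  • A sketch of building such a discriminator in code, under stated assumptions: the PropertyDiscriminator constructor taking a Properties object, the package locations, and the "os.equals" property name are not confirmed by the text; the engine_property.operator syntax and the cpuMFlops.gt example are:

      import com.livecluster.tasklet.*;   // IDiscriminator, PropertyDiscriminator (package assumed)
      import java.util.Properties;

      class DiscriminatorExample {
          // Builds a discriminator requiring, e.g., Engines faster than 80 megaflops running Windows.
          static IDiscriminator fastWindowsEngines() {
              Properties conditions = new Properties();
              conditions.setProperty("cpuMFlops.gt", "80");    // CPU speed must exceed 80 MFlops
              conditions.setProperty("os.equals", "windows");  // illustrative second condition (property name assumed)
              return new PropertyDiscriminator(conditions);    // constructor form assumed
          }
      }

      // Inside a Job's createTaskInputs, one would then use the two-argument form:
      //     addTaskInput(input, DiscriminatorExample.fastWindowsEngines());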
  • the native value method is a native method invoking a Windows DLL. Recall that the DealProvider class is responsible for fetching a Deal given its integer identifier. Its getDeal method returns either an OptionDeal object or a ZeroCouponBond object, depending on the deal ID it is given. For this example, we decree that deal IDs less than a certain number indicate OptionDeals, and all others are ZeroCouponBonds.
  • the ValuationJob class has changed significantly, because it must set up the discriminator and divide the TaskInputs into those with OptionDeals and those without (see FIG. 23).
  • the first three lines set up a PropertyDiscriminator to identify Engines
  • FIG. 24 shows the code for createDealInputs. This method takes the number of deals for which to create inputs, the deal identifier of the first deal, and a discriminator. (IDiscriminator is the interface that all discriminators must implement.) It uses the same algorithm previously discussed to place Deals into TaskInputs. It then calls the two-argument version of addTaskInput, passing in the discriminator along with the TaskInput.
  • a discriminator compares the properties of an Engine against one or more conditions to determine if the Engine is eligible to accept a particular task.
  • the PropertyDiscriminator class is the easiest way to set up a discriminator. It uses a Properties object or file to specify the conditions.
  • Discriminators can segregate tasks among Engines based on operating system, CPU speed, memory, or any other property.
  • the service method of a standard LiveCluster tasklet uses Java objects for both input and output. These TaskInput and TaskOutput objects are serialized and transmitted over the network from the Driver to the Engines.
  • for some applications it may be more efficient to use streams instead of objects for input and output. For example, applications involving large amounts of data that can process the data stream as it is being read may benefit from using streams instead of objects.
  • Streams increase concurrency by allowing the receiving machine to process data while the sending machine is still transmitting. They also avoid the memory overhead of deserializing a large object.
  • the StreamTasklet and StreamJob classes enable applications to use streams instead of objects for data transmission.
  • Our exemplary application will search a large text file for lines containing a particular string. It will be a parallel version of the Unix grep command, but for fixed strings only. Each task is given the string to search for, which we will call the target, as well as a portion of the file to search, and outputs all lines that contain the target.
  • Our SearchTasklet class extends the StreamTasklet class (see FIG. 25).
  • the service method for StreamTasklet takes two parameters: an InputStream from which it reads data, and an OutputStream to which it writes its results (see FIG. 26).
  • the method begins by wrapping those streams in a BufferedReader and a PrintWriter, for performing line-oriented I/O.
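  • FIGs. 25-26 are not reproduced; a sketch of the SearchTasklet along the lines described (the constructor, and the return type and exception signature of service, are assumptions):

      import com.livecluster.tasklet.StreamTasklet;   // package location assumed
      import java.io.*;

      class SearchTasklet extends StreamTasklet {
          private String _target;                     // the fixed search string, shared by every task in the job

          public SearchTasklet(String target) { _target = target; }

          public void service(InputStream in, OutputStream out) throws IOException {
              BufferedReader reader = new BufferedReader(new InputStreamReader(in));
              PrintWriter writer = new PrintWriter(new OutputStreamWriter(out));
              String line;
              while ((line = reader.readLine()) != null) {
                  if (line.indexOf(_target) >= 0)     // fixed-string match, as in grep -F
                      writer.println(line);
              }
              // per the text, we are responsible for closing the streams we are given
              reader.close();
              writer.close();
          }
      }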
  • StreamJob Users of StreamTasklet and StreamJob are responsible for closing all streams they are given.
  • Writing a StreamJob is similar to writing an ordinary Job.
  • One difference is in the creation of task inputs: instead of creating an object and adding it to the job, one obtains a stream, writes to it, and then closes it.
  • the SearchJob class's createTaskInputs method illustrates this (see FIG. 27; _linesPerTask and _file are instance variables set in the constructor). The method begins by opening the file to be searched. It writes each group of lines to an OutputStream obtained with the createTaskInput method. (To generate the input for a task, one calls the createTaskInput method, writes to the stream it returns, then closes that stream.)
  • a StreamJob has a processTaskOutput method (see FIG. 28) that is called with the output of each task.
  • the method's parameter is an InputStream instead of a TaskOutput object.
  • the InputStream contains lines that match the target. We print them to the standard output. Once again, it is our responsibility to close the stream we are given.
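  • A sketch of the SearchJob class consistent with FIGs. 27-28 (not reproduced); the constructor arguments, the availability of setTasklet on StreamJob, and the throws clauses on the overridden methods are assumptions:

      import com.livecluster.tasklet.StreamJob;   // package location assumed
      import java.io.*;

      class SearchJob extends StreamJob {
          private String _file;                   // file to be searched
          private int _linesPerTask;              // task granularity, in lines

          public SearchJob(String target, String file, int linesPerTask) {
              setTasklet(new SearchTasklet(target));
              _file = file;
              _linesPerTask = linesPerTask;
          }

          public void createTaskInputs() throws IOException {
              BufferedReader reader = new BufferedReader(new FileReader(_file));
              PrintWriter writer = null;
              int lines = 0;
              String line;
              while ((line = reader.readLine()) != null) {
                  if (writer == null)              // open one task-input stream per group of lines
                      writer = new PrintWriter(new OutputStreamWriter(createTaskInput()));
                  writer.println(line);
                  if (++lines == _linesPerTask) {  // closing the stream completes this task's input
                      writer.close();
                      writer = null;
                      lines = 0;
                  }
              }
              if (writer != null) writer.close();  // final, possibly shorter group
              reader.close();
          }

          public void processTaskOutput(InputStream output) throws IOException {
              BufferedReader results = new BufferedReader(new InputStreamReader(output));
              String match;
              while ((match = results.readLine()) != null)
                  System.out.println(match);       // lines containing the target
              results.close();                     // we must close the stream we are given
          }
      }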
  • the Test class for this example is similar to previous ones.

Improvements
  • LiveCluster ensures that calls to processTaskOutput are synchronized, so that only one call is active at a time.
  • a naive processTaskOutput implementation like the one above will read an entire InputStream to completion— a process which may involve considerable network I/O— before moving on to the next.
  • although the parallel string search program of the previous section will speed up searching of large files, it misses an opportunity in the case where the same file is searched, over time, for many different targets.
  • consider a web search company that keeps a list of all the questions all users have ever asked so that it can display related questions when a user asks a new one.
  • although the previous search program will work correctly, it will redistribute the list of previously asked questions to the Engines each time a search is done.
  • a more efficient solution would cache portions of the file to be searched on Engines to avoid repeatedly transmitting it. This is just what LiveCluster's data set feature does.
  • a data set is a persistent collection of task inputs (either TaskInput objects or streams) that can be used across jobs. The first time it is used, the data set distributes its inputs to Engines in the usual way. But when the data set is used subsequently, it attempts to give a task to an Engine that already has the input for that task stored locally. If all such Engines are unavailable, the task is given to some other available Engine, and the input is retransmitted. Data sets thus provide an important data movement optimization without interfering with LiveCluster's ability to work with dynamically changing resources.
  • TaskDataSet Since a TaskDataSet is a persistent object, it must have a name for future reference. One can choose any name:
  • TaskDataSet dataSet = new TaskDataSet("search"); or one can call the no-argument constructor, which will assign a name that one can access with the getName method.
  • addTaskInput for TaskInput objects
  • createTaskInput for streams
  • call the doneSubmitting method: dataSet.addTaskInput(t1); dataSet.addTaskInput(t2); dataSet.addTaskInput(t3); dataSet.doneSubmitting();
  • the data set and its inputs are now stored on the Server and can be used to provide inputs to a DataSetJob, as will be illustrated in the next section.
  • a data set can be retrieved in later runs by using the static getDataSet method:
  • TaskDataSet dataSet = TaskDataSet.getDataSet("search"); It can be removed with the destroy method: dataSet.destroy();
  • TaskDataSet and sets it into the Job.
  • the processTaskOutput method of this class is the same as that previously discussed.
  • the SearchTasklet class is also the same.
  • the main method (see FIG. 30) of the Test program creates a TaskDataSet and uses it to run several jobs.
  • the method begins by reading a properties file that contains a comma-separated list of target strings, as well as the data file name and number of lines per task.
  • createDataSetFromFile places the inputs into a TaskDataSet. Let's review the data movement that occurs when this program is run.
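  • FIG. 30 is not reproduced; the following sketch illustrates the flow just described. The DataSetSearchJob class name, its constructor, the execute call, and the createDataSetFromFile helper shown here are assumptions (the text says only that the job "sets [the TaskDataSet] into the Job"):

      import com.livecluster.tasklet.TaskDataSet;   // package location assumed
      import java.io.*;

      class DataSetSearchTest {
          public static void main(String[] args) throws Exception {
              // Build the named, persistent data set once; its inputs live on the Server
              // (and are cached on Engines) across all of the jobs below.
              TaskDataSet dataSet = new TaskDataSet("search");
              createDataSetFromFile(dataSet, "questions.txt", 10000);   // hypothetical file name and granularity
              dataSet.doneSubmitting();

              String[] targets = { "weather", "mortgage rates" };       // illustrative target strings
              for (int i = 0; i < targets.length; i++) {
                  DataSetSearchJob job = new DataSetSearchJob(targets[i], dataSet);
                  job.execute();                                        // later jobs reuse the Engines' cached inputs
              }
              // In a later run, the same data set could be fetched with
              // TaskDataSet.getDataSet("search") and discarded with dataSet.destroy().
          }

          // Splits the file into groups of lines, one task-input stream per group.
          static void createDataSetFromFile(TaskDataSet dataSet, String file, int linesPerTask)
                  throws IOException {
              BufferedReader reader = new BufferedReader(new FileReader(file));
              PrintWriter writer = null;
              int lines = 0;
              String line;
              while ((line = reader.readLine()) != null) {
                  if (writer == null)
                      writer = new PrintWriter(new OutputStreamWriter(dataSet.createTaskInput()));
                  writer.println(line);
                  if (++lines == linesPerTask) { writer.close(); writer = null; lines = 0; }
              }
              if (writer != null) writer.close();
              reader.close();
          }
      }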
  • when the first job is executed, Engines will pull both the tasklet and a task input stream from the Driver machine. Each Engine will cache its stream data on its local disk.
  • for subsequent jobs, the Server will attempt to assign an Engine the same task input that it used for the first job. Then the Engine will only need to download the tasklet, since the Engine has a local copy of the task input.
  • Data sets can improve the performance of applications that reuse the same task inputs for many jobs, by reducing the amount of data transmitted over the network.
  • a data set is a distributed cache: each Engine has a local copy of a task input.
  • the Server attempts to re-assign a task input to an Engine that had it previously.
  • TaskDataSet class allows the programmer to create, retrieve and destroy data sets.
  • the DataSetJob class extends Job to use a TaskDataSet.
  • the LiveCluster Server provides the LiveCluster Administration Tool, a set of web-based tools that allow the administrator to monitor and manage the Server, its cluster of Engines, and the associated job space.
  • the LiveCluster Administration Tool is accessed from a web-based interface, usable by authorized users from any compatible browser, anywhere on the network.
  • Administrative user accounts provide password-protected, role-based authorization.
  • All of the administrative screens are password-protected.
  • the site administrator creates new user accounts from the New User screen. Access control is organized according to the five functional areas that appear in the navigation bar.
  • the site administrator is the only user with access to the configuration screens (under Configure), except that each user has access to a single Edit Profile screen to edit his or her own profile.
  • the site administrator grants or denies access separately to each of the four remaining areas (Manage, View, Install, and Develop) from the View Users screen.
  • the Server installation script creates a single user account for the site administrator, with both user name and password admin. The site administrator should log in and change the password immediately after the Server is installed.

Navigating the Administration Tool
  • the administration tools are accessed through the navigation bar located on the left side of each screen (see FIG. 32). Click one of the links in the navigation bar to display options for that link. Click a link to navigate to the corresponding area of the site. (Note that the navigation bar displays only those areas that are accessible from the current account. If one is not using an administrative account with all privileges enabled, some options will not be visible.) At the bottom of the screen is the shortcut bar, containing the Logout tool, and shortcut links to other areas, such as Documentation and Product Information.
  • the Administration Tool is divided into five sections. Each section contains screens and tools that are explained in more detail in the next five chapters. The following tools are available in each of the sections.
  • the Configure section contains tools to manage user accounts, profiles, Engines, Brokers, and Directors.
  • the Manage Section enables one to administer Jobs or Tasks that have been submitted, administer data sets or batch jobs, submit a test Job, or retrieve log files.
  • the View section contains tools to list and examine Brokers, Engines, Jobs, and data sets. It's different from the Manage section in that tools focus on viewing information instead of modifying it, changing configuration, or killing Jobs. One can examine historical values to gauge performance, or troubleshoot one's configuration by watching the interaction between Brokers and Engines interactively.
  • Lists are similar to the listed displays found in the Manage section, which can be refreshed on demand and display more information. Views are graphs implemented in a Java applet that updates in real-time.
  • the Install section enables one to install Engines on one's Windows machine, or download the executable files and scripts needed to build installations distributable to Unix machines.
  • the Develop Section
  • the Develop section includes downloads and information such as Driver code, API Documentation, Documentation guides, Release Notes, and the Debug Engine.
  • The Configure Section
  • the Configure section contains tools to manage user accounts, profiles, Engines, Brokers, and Directors. To use any of the following tools, click Configure in the Navigation bar to display the list of tools. Then click a tool name to continue. View/Edit Users
  • To add a new user, click New User Signup. One will be presented with a screen similar to FIG. 34. Enter one's admin password and the information about the user, and click Submit. (Note that the Subject and Message fields for e-mail notification are already populated with a default message. The placeholders for username and password will be replaced with the actual username and password for the user when the message is sent.) Edit Profile
  • the Edit Profile tool enables you to make changes to the account with which you are currently logged in. It also enables the admin to configure the Server to email notifications of account changes to users. For accounts other than admin, one must click Edit Profile, enter one's password in the top box, and make any changes one wishes to make to one's profile. This includes one's first name, last name and email address. One can also change one's password.
  • the Engine Configuration tool (see FIG. 35) enables one to specify properties for each of the Engine types that one deploys. To configure an Engine, one must first choose the Engine type from the File list. Then, enter new values for properties in the list, and click Submit next to each property to enter these values. Click Save to commit all of the values to the Engine configuration. One can also click Revert at any time before clicking Save to return to the configuration saved in the original file. For more information on any of the properties in the Engine Configuration tool, one can click Help. Engine Properties
  • This tool displays properties associated with each Engine that has logged in to this Server. A list of Engine IDs is displayed, along with the corresponding Machine Names and properties that are currently assigned to that Engine. These properties are used for discrimination, either in the Broker or the Driver. Properties can be set with this tool, or when an Engine is installed with the 1-Click Install with Tracking link and a tracking profile is created, which is described below, in the Engine Tracking Editor tool.
  • Engines can be installed with optional tracking parameters, which can be used for discrimination.
  • when Engines are installed with the 1-Click Install with Tracking link, one is prompted for values for these parameters.
  • This tool enables one to define what parameters are given to Engines installed in this manner.
  • the parameters include MachineName, Group, Location, and Description.
  • One can add more parameters by entering the parameter name in the Property column, entering a description of the property type in the Description column, and clicking the Add button.
  • One can also remove parameters by clicking the Remove button next to the parameter one wants to remove.
  • Broker Configuration The Broker's attributes can be configured by clicking the Broker Configuration tool.
  • Each discriminator includes a property, a comparator, and a value.
  • the property is the property defined in the Engine or Driver, such as a group, OS or CPU type.
  • the value can be either a number (double) or string.
  • the comparator compares the property and value. If the comparison is true, the discriminator is matched, and the Engine or Driver can log in to the Broker. If it is not, the login is refused.
  • included with each discriminator is the Negate other Brokers box.
  • when this box is selected, an Engine or Driver will be considered only for this Broker, and no others. For example, if one has a property named state and sets a discriminator for when state equals NY and selects Negate other Brokers, an Engine with state set to NY will go to this Broker, because other Brokers won't accept its login.
  • Once one has entered a property, comparator, and value, click Add.
  • all Engines currently logged in will then log out and attempt to log back in. This enables one to set a discriminator to limit a number of Engines and immediately force them to log off.
  • the Client Diagnostics tool (see FIG. 40) generates tables or charts of information based on client messaging times.
  • To use client diagnostics, one must first select Client Diagnostics and then click the edit diagnostic options link. Set Enabled to true, click Submit, then click Save. This will enable statistics to be logged as the system runs. (Note that this can generate large amounts of diagnostic data, and it is recommended that one enable this feature only when debugging.) Click diagnostic statistics to return to the previous screen. Next, one must specify a time range for the analysis. Select a beginning and ending time range, or click Use all available times to analyze all information.
  • To add a subscriber to event notification, click Add a Subscriber.
  • To edit an existing subscriber, click their name in the list.
  • For each subscriber, enter a single email address in the Email box. This must be a full email address, in the form name@your.address.com.
  • click Submit. When each event occurs, the Server will send a short notification message to the specified email address.
  • the Manage section enables one to administer Jobs or Tasks that have been submitted, administer data sets or batch jobs, submit a test Job, or retrieve log files. To use any of the following tools, click Manage in the Navigation bar to display the list of tools.
  • Each Broker logged on to the Director is listed, along with the number of busy and idle Engines logged onto it.
  • Click on the Broker name in the Hostname column to display a list of the Engines currently logged in.
  • click the Create button in the Monitor column to create a Broker Monitor. One can specify the number of jobs to be displayed in the Broker Monitor by changing the number in the box to the left of the Create button.
  • Driver Weight boxes are used to set the ratio of Engines to Drivers that are sent to the Broker from the Director.
  • By default, Engine Weight and Driver Weight are both 1, so the Broker will handle Engines and Drivers equally.
  • This can also be changed so a Broker favors either Engines or Drivers. For example, changing Engine Weight to 10 and leaving Driver Weight at 1 will make the Broker handle Engines ten times more than Drivers.
  • To update the list and display the most current information, click the Refresh button.
  • This tool (see FIG. 43) enables one to view and control any Engines currently controlled by one's Server.
  • To update the list and display the most current information, click the Refresh button.
  • Engines are displayed by username, with 20 Engines per page by default. One can select a greater number of results per page, or display all of the Engines, by clicking a number or All next to Results Per Page on the top right of the screen. One can also find a specific Engine by entering the username in the box and clicking Search For Engines.
  • the Status column displays if an Engine is available for work. If "Available" is displayed, the Engine is logged on and is ready for work. Engines marked as "Logged off" are no longer available. "Busy" Engines are currently working on a Task. Engines shown as "Logging in" are in the login process and are possibly transferring files.
  • While a Job is running, one can change its priority by selecting a new value from the list in the Priority column. Possible values range from 10, the highest, to 0, the lowest.
  • Jobs are shown in rows with UserName, JobName, Submit Time, Tasks Completed, and Status.
  • To display information on a Job, point to the Job Name and a popup window containing statistics on the Job appears. For more information, click the Job Name and a graph will be displayed in a new window.
  • To kill Jobs, select one or more Jobs by clicking the check box in the Kill column, or click Select All to kill all Jobs, then click Submit.
  • Jobs can utilize a DataSet, which is a reusable set of TaskInputs. Repeated Jobs will result in caching TaskInputs on Engines, resulting in less transfer overhead.
  • Batch Jobs are items that have been registered with a Server, either by LiveDeveloper, by copying XML into a directory on the Server, or by a Driver. Unlike a Job, they don't immediately enter the queue for processing. Instead, they contain commands, and instructions to specify at what time the tools will execute. These events can remain on the Server and run more than once.
  • Typically a Batch Job is used to run a Job at a specific time or date, but it can be used to run any command.
  • the Batch Administration tool displays all Batch Jobs on the Server, and enables one to suspend, resume, or remove them. Each Batch Job is denoted with a name. A Type and Time specify when the Batch Job will start.
  • a Relative Batch Job is defined with a recurring time or a time relative to the current time, such as a Batch Job that runs every hour, or one defined in the cron format. Immediate jobs are already in the queue.
  • Job Name Name of the Job in the Job Admin.
  • User Name Name of the User in the Job Admin.
  • Tasks Number of Tasks in the Job.
  • Priority Job execution priority with 10 being the highest, and 0 the lowest.
  • Compression Compress input and output data.
  • Parallel Collection Start collecting results before all Tasks are submitted. After one has set the parameters, one clicks Submit to submit the Job. Once the Job is submitted, the Job Administration screen from the Manage section will be displayed. One can then view, update, or kill the Job.

Log Retrieval
  • the interface displayed below, enables one to select a type of log file, a date range, and how one would like to display the log file.
  • To view the current log file, click Current Server Log. The current log file is displayed (see FIG. 47), and any new log activity will be continuously added.
  • click Snapshot to freeze the current results and open them in a new window.
  • click Clear to clear the current results.
  • Click Past Logs to return to the original display.
  • To view a past log file, first choose what should be included in the file. Select one or more choices: HT Access Log, HT Error Log, Broker Log, Director Log, Broker.xml, Director.xml, Config.xml, and Engine Updates List. One can also click Select All to select all of the information. Next, select a date and time that the logs will end, and select the number of hours back from the end time that will be displayed. After one has chosen the data and a range, click one of the Submit buttons to display the data. One can choose to display data in the window below, in a new window, or in a zip file. One can also view any zip files made in the past.
  • the View Section contains tools to list and examine Brokers, Engines, Jobs, and data sets. It's different from the Manage section in that tools focus on viewing information instead of modifying it, changing configuration, or killing Jobs. One can examine historical values to gauge performance, or troubleshoot the configuration by watching the interaction between Brokers and Engines interactively.
  • Lists are similar to the listed displays found in the Manage section, which can be refreshed on demand and display more information. Views are graphs implemented in a Java applet that updates in real-time. The following tools are available: Broker List
  • the Broker List tool displays all Brokers currently logged in. It also gives a brief overview of the number of Engines handled by each Broker.
  • click the Refresh button to update the list.
  • Click a Broker's hostname to display its list of Engines.
  • Broker Monitor The Broker Monitor tool opens an interactive graph display (see FIG. 49) showing current statistics on a Broker.
  • the top graph is the Engine Monitor, a view of the Engines reporting to the Broker, and their statistics over time.
  • the total number of Engines is displayed in green.
  • the employed Engines (Engines currently completing work for the Broker) are displayed in blue, and Engines waiting for work are displayed in red.
  • the middle graph is the Job View, which displays what Jobs have been submitted, and the number of Tasks completed in each Job. Running Jobs are displayed as blue bars, completed Jobs are grey, and cancelled Jobs are purple.
  • the bottom graph, the Job Monitor, shows the current Job's statistics. Four lines are shown, each depicting Tasks in the Job. They are submitted (green), waiting (red), running (blue), and completed (grey) Tasks. If a newer Job has been submitted since you opened the Broker Monitor, click load latest job to display the newest Job.
  • Engine List shows the Job's statistics. Four lines are shown, each depicting Tasks in the Job. They are submitted (green), waiting (red), running (blue), and completed (grey) Tasks. If a newer Job has been submitted since you opened the Broker Monitor, cUck load latest job to display the newest Job.
  • the Engine List provides the same information as the Engine Administration tool in the Manage section, such as Engines and what Jobs they are running. The only difference is the Ust only aUows one to view the Engine Ust, while the Engine Administration tool also has controls that enable one to kiU Jobs. Engine View
  • The Engine View tool opens an interactive graph displaying Engines on the current Broker, similar to the Engine Monitor section of the Broker Monitor graph, described above. Job List
  • The Job List (see FIG. 50) provides the same information as the Job Administration tool in the Manage section. The only difference is the list only allows one to view Jobs, while the Job Administration tool also has controls that enable one to kill Jobs and change their priority.
  • The Data Set List (see FIG. 51) provides the same information as the Data Set Administration tool in the Manage section. The only difference is the list only allows one to view Data Sets, while the Data Set Administration tool also has controls that enable one to make Data Sets unavailable. Cluster Capacity
  • The Cluster Capacity tool displays the capabilities of Engines reporting to a Server. This includes number of CPUs, last login, CPU speed, free disk space, free memory, and total memory. All Engines, including those not currently online, are displayed.
  • The Install section contains tools used to install Engines on one or more machines.
  • The Install screen (see FIG. 53) enables one to install Engines on a Windows machine, or download the executable files and scripts needed to build installations distributable to Unix machines.
  • The remote Engine script is a Perl script written for Unix that enables one to install or start several DataSynapse Engines from a central Server on remote nodes. To use this script, download the file at the Remote Engine Script link by holding Shift and clicking the link, or by right-clicking the link and selecting Save File As....
  • ACTION can be either install, configure, start, or stop: install installs the DSEngine tree on the remote node and configures the Engine with parameters specified on the command line as listed above; configure configures the Engine with parameters specified on the command line as listed above; start starts the remote Engine; and stop stops the remote Engine.
  • The format of the resource file is: machine_name /path/to/install/dir
  • The Driver is available in Java and C++, and source code is available for developers to download from this page.
  • LiveCluster API
  • This link opens a new browser window containing notes pertaining to the current and previous releases.
  • A version of the Engine is available to provide debugging information for use with the Java Platform Debugger Architecture, or JPDA.
  • This Engine does not contain the full functionality of the regular Engine, but does provide information for remote debugging via JPDA.
  • The Broker is responsible for managing the job space: scheduling Jobs and Tasks on Engines and supervising interactions with Engines and Drivers. Overview: Most of the time, the scheduling of Jobs and Tasks on Engines is completely transparent and requires no administration - the "Darwinian" scheduling scheme provides dynamic load balancing and adapts automatically as Engines come and go. However, one needs a basic understanding of how the Broker manages the job space in order to understand the configuration parameters, to tune performance, or to diagnose and resolve problems. Recall that Drivers submit Jobs to the Broker. Each Job consists of one or more Tasks, which may be performed in any order. Conceptually, the Broker maintains a first-in/first-out (FIFO) queue for Tasks within each Job.
  • When the Driver submits the first Task within a Job, the Broker creates a waiting Task list for that Job, then adds this waiting list to the appropriate Job list, according to the Job's priority (see "Job-Based Prioritization," below). Additional Tasks within the Job are appended to the end of the waiting list as they arrive.
  • Whenever an Engine reports to the Broker to request Work, the Broker first determines which Job should receive service, then assigns the Task at the front of that Job's waiting list to the Engine. (The Engine may not be eligible to take the next Task, however - this is discussed in more detail below.) Once assigned, the Task moves from the waiting list to the pending list; the pending list contains all the Tasks that have been assigned to Engines.
  • When an Engine returns the result for a Task, the Broker searches both the pending and waiting lists. If it finds the Task on either list, it removes it from both, and adds it to the completed list. (The Broker may also restart any Engines that are currently processing redundant instances of the same Task. If the Task is not on either list, it was a redundant Task that completed before the Engine restarted, and the Broker ignores it.)
  • Tasks migrate from the pending list back to the waiting list when the corresponding Engine is interrupted or drops out.
  • The Broker appends the Task to the front, rather than the back, of the queue, so that Tasks that have been interrupted are rescheduled at a higher priority than other waiting Tasks within the same Job.
  • The Broker can be configured to append redundant instances of Tasks on the pending list to the waiting list; "Redundant Scheduling," below, provides a detailed discussion of this topic.
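  • By way of illustration, the following is a minimal, hypothetical sketch of the per-Job bookkeeping just described: a FIFO waiting list, a pending list for Tasks assigned to Engines, and a completed list, with interrupted Tasks returning to the front of the waiting list. The class and method names are illustrative assumptions, not part of the LiveCluster API.
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashSet;
        import java.util.Set;

        class JobTaskLists {
            private final Deque<String> waiting = new ArrayDeque<>();   // FIFO of waiting Task ids
            private final Set<String> pending = new HashSet<>();        // Tasks assigned to Engines
            private final Set<String> completed = new HashSet<>();

            void submit(String taskId) { waiting.addLast(taskId); }     // new Tasks join the back

            String assignToEngine() {                                    // an Engine polls for work
                String taskId = waiting.pollFirst();
                if (taskId != null) pending.add(taskId);
                return taskId;
            }

            void taskCompleted(String taskId) {                          // a TaskOutput arrives
                boolean known = pending.remove(taskId) | waiting.remove(taskId);
                if (known) completed.add(taskId);
                // otherwise a redundant instance already completed; the result is ignored
            }

            void engineDropped(String taskId) {                          // Engine interrupted or lost
                if (pending.remove(taskId)) waiting.addFirst(taskId);    // reschedule at the front
            }
        }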
  • Discriminators: Task-Specific Engine Eligibility Restrictions
  • The Discriminator API supports task discrimination based on Engine-specific attributes.
  • The application code attaches IDiscriminator objects to Tasks at runtime to restrict the class of Engines that are eligible to process them.
  • If the Engine is not eligible for the Task at the front of the waiting list, the Broker proceeds to the next Task, and so on, assigning the Engine the first Task it is eligible to take.
  • Discriminators establish hard limits; if the Engine doesn't meet the eligibility requirements for any of the Tasks, the Broker will send the Engine away empty-handed, even though Tasks may be waiting.
  • The Broker tracks a number of predefined properties, such as available memory or disk space, performance rating (megaflops), operating system, and so forth, that the Discriminator can use to define eligibility.
  • The site administrator can also establish additional Engine properties for use by Discriminators.
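  • A hedged sketch of the eligibility scan just described follows: the Broker walks the waiting list and hands the Engine the first Task whose Discriminator (if any) it satisfies, or sends it away empty-handed. The Discriminator interface shown here is a stand-in for the IDiscriminator concept; its signature and the property map are assumptions.
        import java.util.List;
        import java.util.Map;

        class EligibilityScan {
            interface Discriminator {                        // stand-in for IDiscriminator
                boolean isEligible(Map<String, String> engineProperties);
            }

            static class Task {
                Discriminator discriminator;                 // null means any Engine may take it
            }

            // Returns the first waiting Task this Engine may take, or null (empty-handed).
            static Task firstEligible(List<Task> waiting, Map<String, String> engineProps) {
                for (Task t : waiting) {
                    if (t.discriminator == null || t.discriminator.isEligible(engineProps)) {
                        return t;
                    }
                }
                return null;
            }
        }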
  • Every LiveCluster Job has an associated priority. Priorities can take any integer value between zero and ten, so that there are eleven priority levels in all; 0 is the lowest priority, 10 is the highest, and 5 is the default.
  • The LiveCluster API provides methods that allow the application code to attach priorities to Jobs at runtime, and priorities can be changed while a Job is running from the LiveCluster Administration Tool.
  • Two boolean configuration parameters determine the basic operating mode: Serial Priority Execution and Serial Job Execution.
  • When Serial Priority Execution is true, the Broker services the priority queues sequentially. That is, the Broker distributes higher-priority Jobs, then moves to lower-priority Jobs when the higher-priority Jobs are completed.
  • When Serial Priority Execution is false, the Broker provides interleaved service, so that lower-priority queues with Jobs will receive some level of service even when higher-priority Jobs are competing for resources.
  • Serial Job Execution has similar significance for Jobs of the same priority: when Serial Job Execution is true, Jobs of the same priority receive strict sequential service; the first Job to arrive is completed before the next begins. When Serial Job Execution is false, the Broker provides round-robin service to Jobs of the same priority, regardless of arrival time.
  • The Broker allocates resources among the competing priority queues based on the Priority Weights setting. Eleven integer weights determine the relative service rate for each of the eleven priority queues. For example, if the weight for priority 1 is 2, and the weight for priority 4 is 10, the Broker will distribute five priority-4 Tasks for every priority-1 Task whenever Jobs of these two priorities compete. (Priorities with weights less than or equal to zero receive no service when higher-priority Tasks are waiting.) The default setting for both Serial Execution flags is false, and the default Priority Weights scale linearly, from 1 for priority 0 up to 11 for priority 10.
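  • A small, hedged illustration of the weighted interleaving described above; the array layout and method are assumptions for demonstration, not Broker code.
        public class PriorityWeightsExample {
            // Relative number of Tasks served from a competing priority queue per cycle,
            // proportional to its weight (non-positive weights receive no service).
            static int serviceShare(int[] weights, int priority) {
                return Math.max(weights[priority], 0);
            }

            public static void main(String[] args) {
                int[] weights = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};  // default: linear scaling
                weights[1] = 2;    // weight for priority 1 (as in the example above)
                weights[4] = 10;   // weight for priority 4
                // Prints "10 : 2", i.e. five priority-4 Tasks for every priority-1 Task.
                System.out.println(serviceShare(weights, 4) + " : " + serviceShare(weights, 1));
            }
        }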
  • Job Space: In addition to the serial execution flags and the priority weights, there are four remaining parameters under Job Space that merit some discussion. These four parameters govern the polling frequencies for Engines and Drivers and the rate at which Drivers upload Tasks to the Server; occasionally, they may require some tuning. Engines constantly poll the Broker when they are available to take work. Likewise, Drivers poll the Broker periodically to collect results.
  • In each case, the Broker provides the polling entity with a target latency; that is, it tells the Engine or Driver approximately how long to wait before initiating the next transaction.
  • Total Engine Poll Frequency sets an approximate upper limit on the aggregate rate at which the available Engines poll the Broker for work.
  • The Broker computes a target latency for the individual Engines, based on the number of currently available Engines, so that the total number of Engine polling requests per second is approximately equal to the Total Engine Poll Frequency.
  • The integer parameter specifies the target rate in polls per second, with a default setting of 30.
  • The Result Found / Not Found Wait Time parameters limit the frequency with which Drivers poll the Broker for results.
  • Result Found Wait Time determines approximately how long a Driver waits, after it retrieves some results, before polling the Broker for more.
  • Result Not Found Wait Time determines approximately how long it waits after polling unsuccessfully.
  • Each parameter specifies a target value in milliseconds, and the default settings are 0 and 1000, respectively. That is, the default settings introduce no delay after transactions with results, and a one-second delay after transactions without results.
  • The Task Submission Wait Time limits the rate at which Drivers submit TaskInputs to the Server. Drivers buffer the TaskInput data, and this parameter determines the approximate waiting time between buffers.
  • The integer value specifies the target latency in milliseconds, and the default setting is 0.
  • The default settings are an appropriate starting point for most intranet deployments, and they may ordinarily be left unchanged. However, these latencies provide the primary mechanism for throttling transaction loads on the Server.
  • The Task Rescheduler addresses the situation in which a handful of Tasks, running on less-capable processors, might significantly delay or prevent Job completion. The basic idea is to launch redundant instances of long-running Tasks. The Broker accepts the first TaskOutput to return and cancels the remaining instances (by terminating and restarting the associated Engines). However, it's important to prevent "runaway" Tasks from consuming unlimited resources and delaying Job completion indefinitely. Therefore, a configurable parameter, Max Attempts, limits the number of times any given Task will be rescheduled.
  • If a Task reaches this limit, the Broker cancels all instances of that Task, removes it from the pending queue, and sends a FatalTaskOutput to the Driver.
  • Three separately configurable strategies govern rescheduling. The three strategies run in parallel, so that Tasks are rescheduled whenever one or more of the three corresponding criteria are satisfied. However, none of the rescheduling strategies comes into play for any Job until a certain percentage of Tasks within that Job have completed; the Strategy Effective Percent parameter determines this percentage. More precisely, the Driver notifies the Broker when the Job has submitted all of its Tasks, and the strategies take effect once that percentage of the submitted Tasks has completed.
  • The rescheduler scans the pending Task list for each Job at regular intervals, as determined by the Interval Millis parameter.
  • Each Job has an associated taskMaxTime, after which Tasks within that Job will be rescheduled.
  • The Broker tracks the mean and standard deviation of the (clock) times consumed by each completed Task within the Job.
  • Each strategy uses one or both of these statistics to define a strategy-specific time limit for rescheduling Tasks.
  • Each time the rescheduler scans the pending list, it checks the elapsed computation time for each pending Task. Initially, rescheduling is driven solely by the taskMaxTime for the Job; after enough Tasks complete, and the strategies are active, the rescheduler also compares the elapsed time for each pending Task against the three strategy-specific limits. If any of the limits is exceeded, it adds a redundant instance of the Task to the waiting list. (The Broker will reset the elapsed time for that Task when it gives the redundant instance to an Engine.)
  • The Reschedule First flag determines whether the redundant Task instance is placed at the front or the back of the waiting list; that is, if Reschedule First is true, rescheduled Task instances are placed at the front of the waiting list.
  • The default setting for Remaining Task Percent is 1, which means that this strategy becomes active after the Job is 99% completed.
  • The default setting for Average Limit is 3.0, which means that it reschedules Tasks after they take at least three times as long as average.
  • The default setting for Standard Dev Limit is 2.0, which means that it reschedules Tasks after they exceed the average by two standard deviations, or in other words, after they've taken longer than about 98% of the completed Tasks.
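  • The following hedged sketch condenses the criteria above into a single check against a pending Task's elapsed clock time; the method shape and names are illustrative, not the Broker's actual code.
        public class RescheduleCheck {
            // Defaults from the text: Average Limit 3.0, Standard Dev Limit 2.0.
            double averageLimit = 3.0;
            double stdDevLimit = 2.0;

            // True if a pending Task should get a redundant instance, given the per-Job
            // taskMaxTime and the mean/standard deviation of completed Tasks in the Job.
            boolean shouldReschedule(double elapsed, double taskMaxTime, double mean, double stdDev) {
                if (elapsed > taskMaxTime) return true;                  // per-Job hard limit
                if (elapsed > averageLimit * mean) return true;          // roughly 3x the average
                if (elapsed > mean + stdDevLimit * stdDev) return true;  // beyond about 98% of completed Tasks
                return false;
            }
        }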
  • TaskDataSet addresses applications in which a sequence of operations is to be performed on a common input dataset, which is distributed across the Engines.
  • A typical example would be a sequence of risk reports on a common portfolio, with each Engine responsible for processing a subset of the total portfolio.
  • A TaskDataSet corresponds to a sequence of Jobs, each of which shares the same collection of TaskInputs, but where the Tasklet varies from Job to Job.
  • The principal advantage of the TaskDataSet is that the scheduler makes a "best effort" to assign each TaskInput to the same Engine repeatedly, throughout the session. In other words, whenever possible, Engines are assigned TaskInputs that they have processed previously (as part of earlier Jobs within the session). If the TaskInputs contain data references, such as primary keys in a database table, the application developer can cache the reference data on an Engine and it will be retained.
  • The Broker minimizes data transfer by caching the TaskInputs on the Engines.
  • The Task Data Set Manager plug-in manages the distributed data. When Cache Type is set to 0, the Engines cache the TaskInputs in memory; when Cache Type is set to 1, the Engines cache the TaskInputs on the local file system.
  • Cache Max and Cache Percent set limits for the size of each Engine's cache. Cache Max determines an absolute limit, in megabytes.
  • The Data Transfer plug-in manages the transfer of TaskInput and Tasklet objects from the Broker to the Engines and the transfer of TaskOutput objects from the Broker to the Drivers.
  • By default, direct data transfer is configured, and the data transfer configuration specified in this plug-in is not used. However, if direct data transfer is disabled, these settings are used.
  • The Broker saves the serialized data to disk.
  • When the Broker assigns a Task to an Engine, the Engine picks up the input data at the location specified by the Base URL.
  • When the Broker notifies a polling Driver that output data is available, the Driver retrieves the data from the location specified by the Output URL. Both of these URLs must point to the same directory on the Server, as specified by the Data Directory setting. This directory is also used to transfer instructions (the Tasklet definitions) to the Engines.
  • The Broker can be configured to hold the data in memory and accomplish the transfer directly, by enclosing the data within messages.
  • Two flags, Store Input to Disk and Store Output to Disk, determine which method is used to transfer input data to Engines and output data to Drivers, respectively. (The default setting is true in each case; setting the corresponding flag to false selects direct transfer from memory.) This default configuration is appropriate for most situations.
  • The incremental performance cost of the round trip to disk and slight additional messaging burden is rarely significant, and saving the serialized Task data to disk reduces memory consumption on the Server.
  • The direct-transfer mode is feasible only when there is sufficient memory on the Server to accommodate all of the data. Note that in making this determination, it is important to account for peak loads. Running in direct-transfer mode with insufficient memory can result in java.lang.OutOfMemoryErrors from the Server process, unpredictable behavior, and severely degraded performance.
  • The Job Cleaner plug-in is responsible for Job-space housekeeping, such as cleaning up files and state history for Jobs that have been completed or canceled. This plug-in deletes data files associated with Jobs on a regular basis, and cleans the Job Manage and View pages. It uses the Data Transfer plug-in to find the data files. If a Job is finished or cancelled, the files are deleted on the next sweep. The plug-in sweeps the Server at regular intervals, as specified by the integer Attempts Per Day (the default setting of 2 corresponds to a sweep interval of every 12 hours). The length of time in hours that Jobs will remain on the Job Admin page after they are finished or cancelled is specified by the integer Expiration Hours.
  • The Driver and Engine Managers play analogous roles for Drivers and Engines, respectively. They maintain the server state for the corresponding client/server connections.
  • The Broker maintains a server-side proxy corresponding to each active session; there is one session corresponding to each Driver and Engine that is logged in.
  • The Driver Service plug-in is responsible for the Driver proxies.
  • Max Number of Proxies sets an upper limit on the number of Drivers that can log in concurrently. The default value is 100,000, and it is typically not modified.
  • The Employment Office plug-in maintains the Engine proxies.
  • For Engines, Max Number of Proxies is set by the license, and cannot be increased beyond the limit set by the license (although it can be set below that limit). Login Managers
  • Both the Driver and Engine Managers incorporate Login Managers.
  • The Login Managers maintain the HTTP connections with corresponding clients (Drivers and Engines), and monitor the heartbeats from active connections for timeouts.
  • User-configurable settings under the HTTP Connection Managers include the URL (on the Broker) for the connections, timeout periods for read and write operations, respectively, and the number of times a client will retry a read or write operation that times out before giving up and logging a fatal error.
  • The Server install script configures the URL settings, and ordinarily, they should never be modified thereafter.
  • The read/write timeout parameters are in seconds; their default values are 10 and 60, respectively.
  • Read operations for large blocks of data are generally accomplished by direct downloads from file, whereas uploads may utilize the connection, so the write timeout may be substantially longer.
  • The default retry limit is 3. These default settings are generally appropriate for most operating scenarios; they may, however, require some tuning for optimal performance, particularly in the presence of unusually large datasets or suboptimal network conditions.
  • The Driver and Engine Monitors track heartbeats from each active Driver and Engine, respectively, and end connections to Drivers and Engines which no longer respond.
  • The Checks Per Minute parameters within each plug-in determine the frequency with which the corresponding monitor sweeps its list of active clients for connection timeouts.
  • The heartbeat plug-ins determine the approximate target rate at which the corresponding clients (Drivers or Engines) send heartbeats to the Broker, and set the timeout period on the Broker as a multiple of the target rate. That is, the timeout period in milliseconds (which is displayed in the browser as well) is computed as the product of the Max Millis Per Heartbeat and the Timeout Factor. (It may be worth noting that the actual latencies may differ somewhat from these targets.)
  • The default setting for each maximum heartbeat period is 30,000 (30 seconds) and for each timeout factor, 3, so that the default timeout period for both Drivers and Engines is 90 seconds.
  • The Broker Manager checks for timeouts 10 times per minute, while the Engine Manager sweeps 4 times per minute. (Typically, there are many more Engines than Drivers, and Engine outages have a more immediate impact on application performance.)
  • Other Manager Components
  • The Engine File Update Server manages file updates on the Engines, including both the DataSynapse Engine code and configuration itself, and user files that are distributed via the directory replication mechanism.
  • The Native Job Adapter provides services to support applications that utilize the C++ or XML APIs.
  • The basic idea is that the Broker maintains a "pseudo Driver" corresponding to each C++ or XML Job, to track the connection state and perform some of the functions that would otherwise be performed by the Java Driver.
  • The Result Found and Result Not Found Wait Times have the same significance as the corresponding settings in the Job Space plug-in, except that they apply only to the pseudo Drivers.
  • The Base URL for connections with native Jobs is set by the install script, and should ordinarily never change thereafter.
  • The other settings within the Native Job Adapter plug-in govern logging for the Native Bridge Library, which is responsible for loading the native Driver on each Engine: a switch to turn logging on and off, the log level (1 for the minimum, 5 for the maximum), the name of the log file (which is placed within the Engine directory on each Engine that processes a native Task), and the maximum log size (after which the log rolls over).
  • By default, logging for the Native Bridge is disabled.
  • The Native Job Store plug-in comes into play for native Jobs that maintain persistence of TaskOutputs on the Broker. (Currently, these include Jobs that set a positive value for hoursToKeepData or are submitted via the JobSubmitter class.)
  • The Data Directory is the directory in the Broker's local file system where the TaskOutputs are stored.
  • The Information plug-in provides read-only access to the revision level and build date for each component associated with the Broker.
  • The License plug-in, together with its License Viewer component, provides similar access to the license settings.
  • The Log File plug-in maintains the primary log file for the Broker itself. Settings are available to determine whether log messages are written to file or only to the standard output and error streams, the location of the log file, whether to log debug information or errors only, the log level (when debug messages are enabled), the maximum length of the log file before it rolls over, and whether or not to include stack traces with error messages.
  • The Mail Server generates mail notifications for various events on the Broker.
  • The SMTP host can be set here, or from the Edit Profile screen for the site administrator. (If this field is blank or "not set," mail generation is disabled.)
  • The Garbage Collector monitors memory consumption on the Broker and forces garbage collection whenever the free memory falls below a threshold percentage of the total available memory on the host. Configuration settings are available to determine the threshold percentage (the default value is 20%) and the frequency of the checks (the default is once per minute).
  • The remaining utility plug-ins are responsible for cleaning up log and other temporary files on the Broker. Each specifies a directory or directories to sweep, the sweep frequency (per day), and the number of hours that each file should be maintained before it is deleted. There are also settings to determine whether or not the sweep should recurse through subdirectories and whether to clean out all pre-existing files on startup. Ordinarily, the only user modification to these settings might be to vary the sweep rate and expiration period during testing.
  • The LiveCluster system provides a simple, easy-to-use mechanism for distributing dynamic libraries (.dll or .so), Java class archives (.jar), or large data files that change relatively infrequently.
  • The basic idea is to place the files to be distributed within a reserved directory on the Server.
  • The system maintains a synchronized replica of the reserved directory structure for each Engine. Updates can be made automatically, or triggered manually. Also, an Engine file update watchdog can be configured to ensure updates only happen when the Broker is idle.
  • A directory system resides on the Server in which one can put files that will be mirrored to the Engines. The location of these directories is outlined below.
  • Server-side directories are located in the Server install location (usually C:\DataSynapse\Server) plus \livecluster\public_html\updates.
  • The datasynapse directory contains the actual code for the Engine and support binaries for each platform.
  • The resources directory contains four directories: shared, Win32, Solaris, and linux.
  • The shared directory is mirrored to all Engine types, and the other three are mirrored only to Engines running on the corresponding platform.
  • Server-side directories for Unix: For Servers installed under Unix, the structure is identical, but the location is the installation directory (usually /opt/datasynapse) plus
  • Engine-side directory locations: A similar directory structure resides in each Engine installation. This is where the files are mirrored. The locations are described below.
  • The corresponding Engine-side directory is located under the root directory for the Engine installation.
  • The corresponding Engine-side directory on Unix is the Engine install directory (for example, /usr/local) plus /DSEngine/resources, and contains the replicated directories shared and linux for Linux Engines or Solaris for Solaris Engines. Configuring directory replication
  • The system can be configured to trigger updates of the replicas in one of two modes:
  • The Server continuously polls the file signatures within the designated subdirectories and triggers Engine updates whenever it detects changes; to update the Engines, the system administrator need only add or overwrite files within the directories.
  • The Broker can be configured so that updates to the Engine files will only happen when the Broker is idle.
  • The Engine file update watchdog provides this function when enabled.
  • The watchdog ensures that Engine files are not updated unless there are no Jobs in progress. If a file update is requested (either automatically or manually), the watchdog does not allow any new Jobs to start, and waits for currently running Jobs to complete. When no Jobs are running or waiting, the update will occur.
  • This way, all of the Engines will be able to use the same files.
  • Unix Engines provide the ability to tune scheduling for multi-CPU platforms. This section explains the basic theory of Engine distribution on multi-CPU machines, and how one can configure CPU scheduling to run an optimal number of Engines per machine.
  • A feature of LiveCluster is that Engines completing work on PCs can be configured to avoid conflicts with regular use of the machine.
  • By configuring an Engine, one can specify at what point other tasks take greater importance, and when a machine is considered idle and ready to take on work. This is called adaptive scheduling, and it can be configured to adapt to one's computing environment, be it an office of PCs or a cluster of dedicated servers.
  • With nonincremental scheduling, minimum and maximum CPU utilization refer to the total system CPU utilization, and not individual CPU utilization. This total CPU utilization percentage is calculated by adding the CPU utilization for each CPU and dividing by the number of CPUs. For example, if a four-CPU computer has one CPU running at 50% utilization and the other three CPUs are idle, the total utilization for the computer is 12.5%.
  • A minimum CPU and maximum CPU are configured, but they refer to the total utilization. Also, they simultaneously apply to all Engines. So if the maximum CPU threshold is set at 25% on a four-CPU machine and four Engines are running, and a non-Engine program pushes the utilization of one CPU to 100%, all four Engines will exit. Note that even if the other three CPUs are idle, their Engines will still exit. In this example, if the minimum CPU threshold was set at 5%, all four Engines would restart when total utilization was below 5%. By default, the Unix Engine uses nonincremental scheduling. Also, Windows Engines always use this method.
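  • The arithmetic behind the example above is simple enough to show directly; this snippet is purely illustrative and not part of the Engine code.
        public class TotalUtilization {
            // Total system CPU utilization as defined above: the per-CPU utilizations averaged.
            static double totalUtilization(double[] perCpuPercent) {
                double sum = 0;
                for (double u : perCpuPercent) sum += u;
                return sum / perCpuPercent.length;
            }

            public static void main(String[] args) {
                // One CPU at 50%, three idle: prints 12.5, so with a 25% maximum CPU
                // threshold all four Engines on this machine would exit.
                System.out.println(totalUtilization(new double[] {50, 0, 0, 0}));
            }
        }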
  • Incremental Scheduling: Incremental scheduling is an alternate method implemented in Unix Engines to provide better scheduling of when Engines can run on multi-CPU computers.
  • To configure incremental scheduling, use the -I switch when running the configure script.
  • With incremental scheduling, minimum CPU and maximum CPU utilization refer to each CPU. For example, if there is an Engine running on each CPU of a multi-CPU system, and the maximum CPU threshold is set at 80%, and a non-Engine program raises the utilization of one CPU above that threshold, only the Engine on that CPU will exit.
  • The CPU scheduler takes the minimum and maximum per-CPU settings specified at Engine installation and normalizes the values relative to total system utilization. When these boundaries are crossed, an Engine is started or shut down and the boundaries are recalculated to reflect the change in running processes. This algorithm is used because, for example, a 50% total CPU load on an eight-processor system is typically due to four processes each using 100% of an individual CPU, rather than sixteen processes each using 25% of a CPU.
  • The normalized values are calculated with the following assumptions:
  • System processes will be scheduled such that a single CPU is at maximum load before other CPUs are utilized.
  • CPUs which do not have Engines running on them are taken to run at maximum capacity before usage encroaches onto a CPU being used by an Engine.
  • The normalized utilization of the computer is calculated by the following formulas.
  • The maximum normalized utilization (Unmax) equals:
  • The LiveCluster API is available in both C++, called Driver++, and Java, called JDriver. There is also an XML facility that can be used to configure or script Java-based Job implementations.
  • The Tasklet is analogous to the Servlet interface, part of the Enterprise Java platform.
  • A Servlet handles web requests, and returns dynamic content to the web user.
  • A Tasklet handles a task request given by a TaskInput, and returns the completed task with a TaskOutput.
  • The three Java interfaces (TaskInput, TaskOutput, and Tasklet) have corresponding pure abstract classes in C++.
  • The C++ API also introduces one additional class, Serializable, to support serialization of the C++ Task objects. How It Works: To write an application using LiveCluster, one's application should organize the computing problem into units of work, or Jobs. Each Job will be submitted from the Driver to the Server. To create a Job, the following steps take place:
  • 1. Each Job is associated with an instance of Tasklet.
  • 2. A TaskOutput is added to the Job to collect results.
  • 3. The unit of work represented by the Job is divided into Tasks. For each Task, a TaskInput is added to the Job.
  • 4. Each TaskInput is given as input to a Tasklet running on an Engine. The result is returned to a TaskOutput. Each TaskOutput is returned to the Job, where it is processed, stored, or otherwise used by the application. All other handling of the Job space, Engines, and other parts of the system is handled by the Server. The only classes one's program must implement are the Job, Tasklet, TaskInput, and TaskOutput. This section discusses each of these interfaces, and the corresponding C++ classes. TaskInput
  • TaskInput is a marker that represents all of the input data and context information specific to a Task.
  • In Java, TaskInput extends the java.io.Serializable interface:
        public interface TaskInput extends java.io.Serializable { }
  • In C++, TaskInput extends the class Serializable, so it must define methods to read and write from a stream (this is discussed in more detail below):
        class TaskInput : public Serializable {
        public:
            virtual ~TaskInput() {}
        };
  • TaskOutput is a marker that represents all of the output data and status information produced by the Task. (See FIGs. 56-57.)
  • In Java, TaskOutput extends the java.io.Serializable interface:
        public interface TaskOutput extends java.io.Serializable { }
  • In C++, the corresponding declaration is:
        class TaskOutput : public Serializable { ... };
  • The Tasklet defines the work to be done on the remote Engines. (See FIGs. 58 and 59.)
  • There is one command-style method, service, that must be implemented.
  • The Java Tasklet extends java.io.Serializable. This means that the Tasklet objects may contain one-time initialization data, which need only be transferred to each Engine once to support many Tasklets from the same Job. (The relationship between Tasklets and TaskInput/TaskOutput pairs is one-to-many.)
  • Shared input data that is common to every task invocation should be placed in the Tasklet, and only data that varies across invocations should be placed in the TaskInputs.
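  • To illustrate the split between shared Tasklet state and per-Task inputs, here is a hedged Java sketch. The service signature, and the SumTaskInput, SumTaskOutput, and SumTasklet classes, are assumptions based on the description above rather than verbatim LiveCluster code.
        class SumTaskInput implements TaskInput { double[] values; }     // per-Task data
        class SumTaskOutput implements TaskOutput { double sum; }        // per-Task result

        class SumTasklet implements Tasklet {
            private final double scale;               // one-time, shared initialization data
            SumTasklet(double scale) { this.scale = scale; }

            // Signature assumed: one TaskInput in, one TaskOutput back.
            public TaskOutput service(TaskInput input) {
                SumTaskInput in = (SumTaskInput) input;
                SumTaskOutput out = new SumTaskOutput();
                for (double v : in.values) out.sum += v * scale;
                return out;
            }
        }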
  • A Job is simply a collection of Tasks.
  • Implementations of createTaskInputs call addTaskInput to add Tasks to the queue. (See FIGs. 60-61.)
  • Job defines static methods for instantiating Job objects based on XML configuration scripts and call-backs to notify the application code when the Job is completed or encounters a fatal error.
  • A Job also implements processTaskOutput to read output from each Task and output, process, store, add, or otherwise utilize the results.
  • Both the C++ and Java versions provide both blocking (execute) and non-blocking (executeInThread) job execution methods, and executeLocally to run the job in the current process. This last function is useful for debugging prior to deployment.
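  • A hedged sketch of a Job built along these lines follows. The base class and the createTaskInputs, addTaskInput, processTaskOutput, and execute names come from the text; the exact signatures and the SumJob class are illustrative assumptions.
        class SumJob extends Job {
            private double total = 0;

            // Enumerate the work: one TaskInput per Task, queued via addTaskInput.
            public void createTaskInputs() {
                for (int i = 0; i < 100; i++) {
                    SumTaskInput in = new SumTaskInput();
                    in.values = new double[] { i, i + 1, i + 2 };
                    addTaskInput(in);
                }
            }

            // Called once per completed Task, in whatever order results arrive.
            public void processTaskOutput(TaskOutput output) {
                total += ((SumTaskOutput) output).sum;
            }

            double getTotal() { return total; }
        }

        // Typical Driver-side usage, per the text:
        //     SumJob job = new SumJob();
        //     job.execute();     // blocking; executeInThread() and executeLocally() also exist
        //     System.out.println(job.getTotal());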
  • JobOptions: Each Job is equipped with a JobOptions object, which contains various parameter settings. The getOptions method of the Job class can be used to get or set options in the JobOptions object for that Job.
  • A complete list of all the methods available on the JobOptions object is available in the API reference documentation.
  • Some commonly used methods include setJobName, setJarFile, and setDiscriminator. setJobName
  • By default, the name associated with a Job and displayed in the Administration Tool is a long containing a unique number.
  • The favored mechanism of code distribution involves distributing the Jar file containing the concrete class definitions to the Engines using the directory replication mechanism.
  • The C++ version supports this mechanism.
  • The dynamic library containing the implementation of the concrete classes must be distributed to the Engines using the native code distribution mechanism, and the corresponding Job implementation must define getLibraryName to specify the name of this library, for example picalc (for picalc.dll on Win32 or libpicalc.so on Unix).
  • A second method is also available, which can be used during development.
  • The other method of distributing concrete implementations for the Tasklet, TaskInput, and TaskOutput is to package them in a Jar file, which is typically placed in the working directory of the Driver application.
  • The corresponding Job implementation calls setJarFile with the name of this Jar file prior to calling one of the execute methods, and the Engines pull down a serialized copy of the file when they begin work on the corresponding Task.
  • This method requires the Engine to download the classes each time a Job is run. setDiscriminator
  • A discriminator is a method of controlling what Engines accept a Task.
  • FIG. 76 contains sample code that sets a simple property discriminator. Additional C++ Classes: Serializable
  • The C++ API incorporates a class Serializable, since object serialization is not a built-in feature of the C++ language.
  • This class (see FIG. 62) provides the mechanism by which the C++ application code and the LiveCluster middleware exchange object data. It contains two pure virtual methods that must be implemented in any class that derives from it (i.e., in TaskInput, TaskOutput, and Tasklet).
  • The LiveCluster API contains several extensions to these classes, providing specialized methods of handling data. These extensions can be used in special cases to improve performance or enable access to information in a database. DataSetJob and TaskDataSet
  • A TaskDataSet is a collection of TaskInputs that persist on the Server as the input for any subsequent DataSetJob.
  • The TaskInputs get cached on the Engine for subsequent use for the TaskDataSet.
  • This API is therefore appropriate for doing repeated calculations or queries on large datasets. All Jobs using the same DataSetJob will use the TaskInputs added to the TaskDataSet, even though their Tasklets may differ.
  • TaskInputs from a set are cached on Engines.
  • An Engine that requests a Task from a Job will first be asked to use input that already exists in its cache. If it has no input in its cache, or if other Engines have already taken the input in its cache, it will download a new input and cache it.
  • An ideal use of TaskDataSet would be when running many Jobs on a very large dataset. Normally, one would create TaskInputs with a new copy of the large dataset for each Job, and then send these large TaskInputs to Engines and incur a large amount of transfer overhead each time another Job is run. Instead, the TaskDataSet can be created once, like a database of TaskInputs. Then, small Tasklets can be created that use the TaskDataSet for input, like a query on a database. As more Jobs are run on this session, the inputs become cached among more Engines, increasing performance. Creating a TaskDataSet
  • To create a TaskDataSet, first construct a new TaskDataSet, then add inputs to it using the addTaskInput method. (See FIG. 63.) If one is using a stream, one can also use the createTaskInput method. After one has finished adding inputs, call the addTaskInput method.
  • Next, create the DataSetJob, which, like a standard Job, handles results with processTaskOutput. (See FIG. 64.) The main difference is that to run the Job, one must use setTaskDataSet to specify the dataset one created earlier. Note that the executeLocally method cannot be used with the DataSetJob.
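  • The flow just described might look like the following hedged sketch; TaskDataSet, addTaskInput, setTaskDataSet, and execute come from the text, while the constructor forms, PortfolioTaskInput, ReportJob, and RiskReportTasklet are illustrative assumptions.
        class PortfolioTaskInput implements TaskInput { int partition; }  // hypothetical slice of the data

        // Inside the Driver's submission code:
        // Build the shared dataset once; it persists on the Server for later Jobs in the session.
        TaskDataSet dataSet = new TaskDataSet();
        for (int i = 0; i < 10; i++) {
            PortfolioTaskInput in = new PortfolioTaskInput();
            in.partition = i;
            dataSet.addTaskInput(in);
        }

        // Each subsequent Job reuses the same cached inputs but supplies a different Tasklet.
        ReportJob job = new ReportJob(new RiskReportTasklet());   // hypothetical DataSetJob implementation
        job.setTaskDataSet(dataSet);
        job.execute();   // executeLocally() is not supported with a DataSetJob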
  • StreamJob and StreamTasklet: A StreamJob is a Job which allows one to create input and read output via streams rather than using defined objects.
  • A StreamTasklet reads data from an InputStream and writes to an OutputStream, instead of using a TaskInput and TaskOutput. When the StreamJob writes input to a stream, the data is written directly to the local file system, and given to Engines via a lightweight web server.
  • The Engine also streams the data in via the StreamTasklet. In this way, the memory overhead on the Driver, Broker, and Engine is reduced, since an entire TaskInput does not need to be loaded into memory for transfer or processing.
  • The StreamTasklet must be used with a StreamJob.
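  • A hedged illustration of the streaming idea follows; the StreamTasklet base class and the two-stream service signature are assumptions inferred from the description above, and the line-by-line processing is purely illustrative.
        import java.io.*;

        class UppercaseStreamTasklet extends StreamTasklet {
            // Read records as they arrive and write results as they are produced, so neither
            // side needs to hold an entire input or output in memory.
            public void service(InputStream in, OutputStream out) throws IOException {
                BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                PrintWriter writer = new PrintWriter(new OutputStreamWriter(out));
                String line;
                while ((line = reader.readLine()) != null) {
                    writer.println(line.toUpperCase());    // per-record processing
                }
                writer.flush();
            }
        }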
  • SQLDataSetJob and SQLTasklet: Engines can use information in an SQL database as input to complete a Task, by the use of SQL.
  • An SQLDataSetJob queries the database and receives a result set. Each SQLTasklet is given a subset of the result set as an input. This feature is only available from the Java Driver. Starting the database: To use an SQL database, one must first have a running database with a JDBC interface.
  • The sample code loads a properties file called sqltest.properties. It contains properties used by the database, plus the properties tasks and query, which are used in the Job. (See FIG. 67.) SQLDataSetJob
  • An SQLDataSetJob is created by implementing DataSetJob. (See FIG. 67.) Task inputs are not created, as they will come from the SQL database. (See FIG. 68.) SQLTasklet
  • An SQLTasklet is implemented similarly to a normal Tasklet, except the input is an SQL table. (See FIG. 69.) Running the Job
  • The Job can then be run.
  • The SQLDataSet is created on the Server and is prepared with setJDBCProperties, setMode, setQuery, and prepare. Then the Job is run. (See FIG. 70.) Note that in order to use the most recent information in the database, the SQLDataSet needs to be destroyed and created again. This may be important if one is using a frequently updated database.
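  • A hedged sketch of that sequence appears below. The setJDBCProperties, setMode, setQuery, and prepare calls are named in the text; the SQLDataSet constructor, the meaning of the arguments, and the PortfolioQueryJob class are assumptions for illustration.
        // Inside the Driver's submission code:
        java.util.Properties props = new java.util.Properties();
        props.load(new java.io.FileInputStream("sqltest.properties"));   // JDBC settings plus tasks and query

        // Create and prepare the SQLDataSet on the Server; each SQLTasklet is then handed
        // a subset of the query's result set as its input.
        SQLDataSet dataSet = new SQLDataSet();            // constructor form is an assumption
        dataSet.setJDBCProperties(props);
        dataSet.setMode(props.getProperty("tasks"));      // argument meaning assumed
        dataSet.setQuery(props.getProperty("query"));
        dataSet.prepare();

        SQLDataSetJob job = new PortfolioQueryJob();      // hypothetical SQLDataSetJob implementation
        job.setTaskDataSet(dataSet);                      // assumed to mirror DataSetJob usage
        job.execute();
        // To pick up newly added rows later, destroy the SQLDataSet and create it again.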
  • The Propagator API is a group of classes that can be used to distribute a problem over a variable number of compute Engines instead of a fixed-node cluster. It is an appropriate alternative to MPI for running parallel codes which require inter-node communication. Unlike most MPI parallel codes, Propagator implementations can run over heterogeneous resources, including interruptible desktop PCs.
  • A Propagator application is divided into steps, with steps sent to nodes.
  • The number of nodes can vary, even changing during a problem's computation.
  • A node can communicate with other nodes, propagating results and collecting information from nodes that have completed earlier steps. This checkpointing allows for fault-tolerant computations.
  • FIG. 71 illustrates how nodes communicate at barrier synchronization points when each step of an algorithm is completed. Using the Propagator API
  • The Propagator API consists of three components: the classes GroupPropagator and NodePropagator, and the interface GroupCommunicator.
  • The GroupPropagator is used as the controller.
  • A GroupPropagator is created, and it is used to create the nodes and the messaging system used between nodes.
  • The NodePropagator contains the actual code that each node will execute at each step. It also contains whatever code each node will need to send and receive messages, and send and receive the node state.
  • The GroupCommunicator is the interface used by the nodes to send and receive messages, and to get and set node state.
  • The GroupPropagator is the controlling class of the NodePropagators and GroupCommunicator. One should initially create a GroupPropagator as the first step in running a Propagator Job.
  • After creating a GroupPropagator, one can access the GroupCommunicator, like this:
        GroupCommunicator gc = gp.getGroupCommunicator();
  • This will enable one to communicate with nodes, and get or set their state.
  • The NodePropagator contains the actual code run on each node.
  • The NodePropagator code is run on each step, and it communicates with the GroupCommunicator to send and receive messages, and set its state.
  • Its propagate method will be run when propagate is run in the GroupPropagator, and it contains the code which the node actually runs.
  • The NodePropagator will vary depending on the problem. But several possibilities include getting the state of a node to populate variables with partial solutions, broadcasting a partial solution so that other nodes can use it, or sending messages to other nodes to relay work status or other information. All of this is done using the GroupCommunicator. GroupCommunicator
  • The GroupCommunicator communicates messages and states between nodes and the GroupPropagator. It can also transfer the states of nodes. It's like the bus or conduit between all of the nodes.
  • The GroupCommunicator exists after one creates the GroupPropagator. It's passed to each NodePropagator through the propagate method. Several methods enable communication (there are also variations available to delay methods until a specified step or to execute them immediately); some of them appear in the sketch below.
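  • The following hedged sketch shows how the pieces fit together from the controller's side, in the spirit of the heat-equation example below. GroupPropagator, getGroupCommunicator, setNodeState, and propagate are named in the text; the constructor arguments, the setNodePropagator name, and the helper classes are assumptions.
        // Controller (Driver) side: create the group, seed node state, and drive the steps.
        GroupPropagator gp = new GroupPropagator("heat-demo", 4);     // name and node count assumed
        GroupCommunicator gc = gp.getGroupCommunicator();
        for (int node = 0; node < 4; node++) {
            gc.setNodeState(node, initialStateFor(node));             // initialStateFor is a hypothetical helper
        }
        gp.setNodePropagator(new MyNodePropagator());                 // per-node step code (setter name assumed)
        for (int step = 0; step < 10; step++) {
            gp.propagate(step);   // nodes run the step, exchanging messages via the GroupCommunicator
        }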
  • The example consists of three files: Test.java, which contains the main class; HeatEqnSolver.java, which implements the GroupPropagator; and HeatPropagator, which implements the NodePropagator. Test.java
  • Properties are loaded from disk, and variables needed for the calculations are initialized, either from the properties file or to a default value. If anything fails, an exception will be thrown.
  • The GroupPropagator is created. It's passed all of the variables it will need to do its calculations. Also, a message is printed to System.out, displaying the variables used to run the equation.
  • The solve method for the HeatEqnSolver object, which will run the equation, is called (see FIG. 72D), and the program ends. HeatEqnSolver.java
  • The class HeatEqnSolver is defined with a constructor that is passed the values used to calculate the heat equation. It has a single public method, solve, which is called by Test to run the program (solver.solve();). (See FIG. 73A.) This creates the GroupPropagator, which controls the calculation on the nodes.
  • A GroupPropagator gp is created (see FIG. 73B) with the name "heat2d" and the number of nodes specified in the properties. Then, a GroupCommunicator gc is assigned with the GroupPropagator method getGroupCommunicator. A new HeatPropagator is created, which is the code for the NodePropagator, described in the next section.
  • The HeatPropagator is set as the NodePropagator for gp. It will now be used as the NodePropagator, and will have access to the GroupCommunicator.
  • A Jar file is set for the GroupPropagator.
  • The code (see FIG. 73C) then defines a matrix of random values and a mirror of the matrix for use by the nodes.
  • The i loop uses setNodeState to push the value of the matrix to the nodes. Now, all of the nodes will be using the same starting condition for their calculations.
  • The main iteration loop uses the propagate method to send the steps to the nodes.
  • This will cause _iters iterations by the nodes using their code.
  • The HeatPropagator class (see FIG. 74) implements the NodePropagator, and is the code that will actually run on each node. When created, it is given lastIter, fax, and facy. It obtains the boundary information as a message from the last step that was completed. It completes its equations, then broadcasts the results so the next node that runs can continue.
  • The first thing propagate does is use getNodeState to initialize its own copy of the matrix. (See FIG. 75A.)
  • Boundary calculations are obtained. (See FIG. 75B.) These are results that are on the boundary of what this node will calculate. If this is the first node, there aren't any boundaries, and nothing is done. But if this isn't step 0, there will be a message waiting from the last node, and it's obtained with getMessagesFromSender.
  • The scheduling of Tasks to Engines may not be linear, and sometimes a specific Job may require special handling to ensure the optimal resources are available for it. Also, in some cases it is useful to control which Drivers and Engines a given Broker accepts.
  • A discriminator enables one to specify what Engines can be assigned to a Task, what Drivers can submit Tasks to a Broker, and what Engines can report to a Broker. These limitations are set based on properties given to Engines or Drivers.
  • Task discrimination is set in the Driver properties, and controls what Engines can be assigned to a Task.
  • Broker discrimination is set in the LiveCluster Administration Tool, and controls what Drivers and Engines use that Broker.
  • For example, when the application sends a complex Job with the LiveCluster API, it attaches a Task discriminator specifying not to send any Tasks from the Job to any Engine with the department property set to Marketing.
  • The large Job's Tasks will only go to Engines outside of Marketing, and smaller Jobs with no Task discriminator set will have Tasks processed by any Engine in the company, including those in Marketing.
  • Configuring Engines with Properties: Default Properties: An Engine has several properties set by default, with values corresponding to the configuration of the PC running the Engine. One can use these properties to set discriminators. The default properties, available in all Engines, are as follows:
  • Broker discrimination can be configured to work on either Engines or Drivers. For discrimination on Drivers, one can add or modify properties in the driver.properties file included in the top-level directory of the Driver distribution. Configuring Broker Discriminators
  • A discriminator is set in the Driver properties, and it prevents Tasks from a defined group of Drivers from being taken by this Broker.
  • A discriminator prevents the Engine from being able to log in to a Broker and take Tasks from it.
  • Each discriminator includes a property, a comparator, and a value.
  • The property is the property defined in the Engine or Driver, such as a group, OS, or CPU type.
  • The value can be either a number (double) or a string.
  • The comparator compares the property and the value. If the comparison is true, the discriminator is matched, and the Engine can accept a Task, or the Driver can submit a Job. If it is false, the Task is returned to the Driver, or in the case of an Engine, the Broker will try to send the Task to another Engine.
  • The following comparators are available:
  • Also included with each discriminator is the Negate other Brokers box.
  • When it is selected, an Engine or Driver will be considered only for this Broker, and no others. For example, if one has a property named state and one sets a discriminator for when state equals NY and selects Negate other Brokers, any Engine with state set to NY will only go to this Broker and not others.
  • Task discriminators are set by the Driver, either in Java or in XML. (See FIG. 76.)
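  • A hedged Java sketch of attaching a property discriminator to a Job, in the spirit of the Marketing example and the sample in FIG. 76: IDiscriminator and setDiscriminator are named in the text, while the isEligible signature, the property map, and the NotEqualDiscriminator class are assumptions for illustration.
        import java.io.Serializable;
        import java.util.Map;

        // Hypothetical discriminator: an Engine is eligible only if a named property
        // does NOT equal a given value (e.g. department != Marketing).
        class NotEqualDiscriminator implements IDiscriminator, Serializable {
            private final String property;
            private final String excludedValue;

            NotEqualDiscriminator(String property, String excludedValue) {
                this.property = property;
                this.excludedValue = excludedValue;
            }

            // Signature assumed: the Broker supplies the candidate Engine's properties.
            public boolean isEligible(Map<String, String> engineProperties) {
                return !excludedValue.equals(engineProperties.get(property));
            }
        }

        // Usage (assumed): keep the large Job's Tasks off Marketing desktops.
        //     job.getOptions().setDiscriminator(new NotEqualDiscriminator("department", "Marketing"));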
  • JNI (Java Native Interface)
  • FIGs. 77-79 provide an example of a JNI for the previously-discussed Pi calculation program. Submitting a LiveCluster Job
  • Jobs can be submitted to a LiveCluster Server in any of three ways:
  • For example: java -cp DSDriver.jar MyApp picalc.xml. This method uses properties from the driver.properties file located in the same directory as the Driver. One can also specify command-line properties.
  • Properties can be defined in the driver.properties file.
  • Properties specified on the command line are overwritten by properties specified in the driver.properties file. If one wants to set a property already defined in driver.properties, one must first edit driver.properties and comment out the property. Using the Direct Data Transfer Property
  • Jobs can be scheduled to run on a regular basis.
  • With XML scripting, one can submit a Job with specific scheduling instructions. Instead of immediately entering the queue, the Job will wait until the time and date specified in the instructions given.
  • Batch Jobs can be submitted to run at a specific absolute time, or a relative time, such as every hour. Also, a Batch Job can remain active, resubmitting a Job on a regular basis.
  • The LiveCluster system provides a simple, easy-to-use mechanism for distributing linked libraries (.dll or .so), Java class archives (.jar), or large data files that change relatively infrequently. The basic idea is to place the files to be distributed within a reserved directory associated with the Server.
  • The system maintains a synchronized replica of the reserved directory structure for each Engine. This is called directory replication.
  • Four directories are replicated to Engines: the Win32, Solaris, and linux directories are mirrored to Engines run on the respective operating systems, and shared is mirrored to all Engines.
  • These paths are relative to one's installation directory. For example, if one installs LiveCluster at C:\DataSynapse, one should append these paths to C:\DataSynapse\Server\livecluster on the Server.
  • The default installation in Windows puts the shared and Win32 directories in C:\Program Files\DataSynapse\Engine\resources.
  • Auto Update Enabled: When Auto Update Enabled is set to true (the default), the shared directories will automatically be mirrored to any Engine upon login to the Broker. Also, the Server will check for file changes in these directories at the time interval specified in Minutes Per Check. If changes are found, all Engines are signaled to make an update. One can force all Engines to update immediately by setting Update All Now to true.
  • The LiveCluster Server architecture can be deployed to give varying degrees of redundancy and load sharing, depending on the computing resources available. Before installation, it's important to ascertain how LiveCluster will be used, estimate the volume and frequency of jobs, and survey what hardware and networking will be used for the installation.
  • The LiveCluster Server consists of two entities: the LiveCluster Director and the LiveCluster Broker.
  • A LiveCluster installation includes a Broker responsible for managing jobs by assigning tasks to Engines. Every LiveCluster installation must have at least one Broker, often located on the same system as the primary Director. If more than one Broker is installed, then a Broker may be designated as a Failover Broker; it accepts Engines and Drivers only if all other Brokers fail.
  • A minimal configuration of LiveCluster would consist of a single Server configured as a Primary Director, with a single Broker. Additional Servers containing more Brokers or Directors can be added to address three primary concerns: redundancy, volume, and other considerations. Redundancy
  • FIG. 82 shows an exemplary implementation with two Servers.
  • A Broker can also have a backup on a second Server.
  • A Broker can be designated a Failover Broker on a second Server during installation.
  • Directors will only route Drivers and Engines to Failover Brokers if no other regular Brokers are available. When regular Brokers then become available, nothing further is routed to the Failover Broker.
  • FIG. 82 shows a Failover Broker on the second Server.
  • To handle more volume, additional Brokers can be added to other Servers at installation.
  • FIG. 83 shows a two-Server system with two Brokers. Drivers and Engines will be routed to these Brokers in round-robin fashion.
  • Other Considerations: Several other factors may influence how one may integrate LiveCluster with an existing computing environment. These include:
  • One's network may dictate how the Server environment should be planned. For example, if one has offices in two parts of the country and a relatively slow extranet but a fast intranet in each location, one could install a Server in each location.
  • Different Servers can support data used for different job types. For example, one Server can be used for Jobs accessing a SQL database, and a different Server can be used for jobs that don't access the database. With this flexibility, it's possible to architect a Server model to provide a job space that will facilitate job traffic. Configuring a network
  • Because LiveCluster is a distributed computing application, successful deployment will depend on one's network configuration. LiveCluster has many configuration options to help it work with existing networks. LiveCluster Servers should be treated the same way one treats other mission-critical file and application servers: assign LiveCluster Servers static IP addresses and resolvable DNS hostnames. LiveCluster Engines and Drivers can be configured in several different ways. To receive the full benefit of peer-to-peer communication, one will need to enable communication between Engines and Drivers (the default), but LiveCluster can also be configured to work with a hub-and-spoke architecture by disabling Direct Data Transfer.
  • LiveCluster Servers should run on systems with static IP addresses and resolvable DNS hostnames. In a pure Windows environment, it is possible to run LiveCluster using just WINS name resolution, but this mode is not recommended for larger deployments or heterogeneous environments.
  • LiveCluster uses the Internet Protocol (IP).
  • All Engine-Server, Driver-Server, and Engine-Driver communication is via the HTTP protocol.
  • Server components, Engines, and Drivers can be configured to use port 80 or any other available TCP port that is convenient for one's network configuration.
  • All Director-Broker communication is via TCP.
  • the default Broker login TCP port is 2000, but another port can be specified at installation time. By default, after the Broker logs in, another pair of ephemeral ports is assigned for further communication.
  • the Broker and Director can also be configured to use static ports for post-login communication.
  • All communication between Engines and Servers (Directors and Brokers) and between Drivers and Servers is via the HTTP protocol, with the Engine or Driver acting as HTTP client and the Server acting as HTTP server. (See FIG. 84.)
  • the Server can be configured to work with a NAT device between the Server and the Engines or Drivers. To do this, specify the external (translated) address of the NAT device when referring to the Server address in Driver and Engine installation.
  • Win32 LiveCluster Engines can also support an HTTP proxy for communication between the Engine and the Broker. If the default HTML browser is configured with an HTTP proxy, the Win32 Engine will detect the proxy configuration and use it. However, since all LiveCluster communication is dynamic, the HTTP proxy is effectively useless, and for this reason it is preferred not to use an HTTP proxy.
Broker-Director Communication
  • LiveCluster uses Direct Data Transfer, or peer-to-peer communication, to optimize data throughput between Drivers and Engines. (See FIGs. 86-87.) Without Direct Data Transfer, all task inputs and outputs must be sent through the Server. Sending the inputs and outputs through the Server will result in higher memory and disk use on the Server, and lower throughput overall.
  • Direct Data Transfer With Direct Data Transfer, only lightweight messages are sent through the Server, and the "heavy lifting" is done by the Driver and Engine nodes themselves. Direct Data Transfer requires that each peer knows the IP address that it presents to other peers. In most cases, therefore, Direct Data Transfer precludes the use of NAT between the peers. Likewise, Direct Data Transfer does not support proxies.
  • NAT For LiveCluster deployments where NAT is already in effect, NAT between Drivers and Engines can be supported by disabling peer-to-peer communication as follows:

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The invention provides an off-the-shelf product solution to target the specific needs of commercial users with naturally parallel applications. A top-level, public API provides a simple 'compute server' or 'task farm' model that dramatically accelerates integration and deployment. By providing built-in, turnkey support for enterprise features like fault-tolerant scheduling, fail-over, load balancing, and remote, central administration, the invention eliminates the need for customized middleware and yields enormous, on-going savings in maintenance and administrative overhead.

Description

DISTRIBUTED COMPUTING SYSTEM
FIELD OF THE INVENTION
The present invention relates generally to the field of high-performance computing ("HPC") and, more specifically, to systems and techniques for distributed and/or parallel processing. CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority from the following co-pending U.S. Patent Applications: (i) S/N 09/583,244, Methods, Apparatus, and Articles-of-Manufacture for Network Based Distributed Computing, filed May 31, 2000; (ii) S/N 09/711,634, Methods, Apparatus and Articles-of-Manufacture for Providing Always-Live Distributed Computing, filed November 13, 2000; (iii) S/N 09/777,190, Redundancy-Based Methods, Apparatus and Articles-of-Manufacture for Providing Improved Quality-of-Service in an Always-Live Distributed Computing Environment, filed February 2, 2001; (iv) S/N 60/266,185, Methods, Apparatus and Articles-of-Manufacture for Network-Based Distributed Computing, filed February 2, 2001, now published as WO 01/88708. Each of the aforementioned co-pending applications (i)-(iv) is hereby incorporated by reference herein.
BACKGROUND OF THE INVENTION
HPC has long been a focus of both academic research and commercial development, and the field presents a bewildering array of standards, products, tools, and consortia. Any attempt at comparative analysis is complicated by the fact that many of these interrelate not as mutually exclusive alternatives, but as complementary component or overlapping standards.
Probably the most familiar, and certainly the oldest, approach is based on dedicated supercomputing hardware. The earliest supercomputers included vector-based array processors, whose defining feature was the capability to perform numerical operations on very large data arrays, and other SIMD (Single-Instruction, Multiple-Data) architectures, which essentially performed an identical sequence of instructions on multiple datasets simultaneously. More recently, multiple-instruction architectures, and especially SMPs (Symmetric Multi-Processors), have tended to predominate, although the most powerful supercomputers generally combine features of both. With dramatic improvements in the processing power and storage capacity of "commodity" hardware and burgeoning network bandwidth, much of the focus has shifted toward parallel computing based on loosely-coupled clusters of general-purpose processors, including clusters of network workstations. Indeed, many of the commercially available high-performance hardware platforms are essentially networks of more or less generic processors with access to shared memory and a high-speed, low-latency communications bus. Moreover, many of the available tools and standards for developing parallel code are explicitly designed to present a uniform interface to both multi-processor hardware and network clusters. Despite this blurring around the edges, however, it is convenient to draw a broad dichotomy between conventional hardware and clustering solutions, and the discussion below is structured accordingly.
Conventional hardware solutions
Typical commercial end-users faced with performance bottlenecks consider hardware solutions ranging from mid- to high-end SMP server configurations to true "supercomputers." In practice, they often follow a tortuous, incremental migration path, as they purchase and outgrow successively more powerful hardware solutions.
The most obvious shortcoming of this approach is the visible, direct hardware cost, but even more important are the indirect costs of integration, development, administration, and maintenance. For example, manufacturers and resellers generally provide support at an annual rate equal to approximately 20-30% of the initial hardware cost. Moreover, the increase in physical infrastructure requirements and the administrative burden is much more than linear to the number of CPUs.
But by far the most important issue is that each incremental hardware migration necessitates a major redevelopment effort. Even when the upgrade retains the same operating system (e.g., from one Sun Solaris™ platform to another), most applications require substantial modification to take advantage of the capabilities of the new target architecture. For migrating from one operating system to another (e.g., from NT™ or Solaris™ to Irix™), the redevelopment cost is typically comparable to that of new development, but with the additional burden of establishing and maintaining an alternative development environment, installing and testing new tools, etc. Both development and administration require specialized skill sets and dedicated personnel.
In sum, other indirect costs often total 7 to 9x direct hardware costs, when personnel, time-to-market, and application redevelopment costs are taken into account.
Clusters, grids, and virtual supercomputers
The basic idea of bundling together groups of general-purpose processors to attack large-scale computations has been around for a long time. Practical implementation efforts, primarily within academic computer science departments and government research laboratories, began in earnest in the early 1980s. Among the oldest and most widely recognized of these was the Linda project at Yale University, which resulted in a suite of libraries and tools for distributed parallel processing centered around a distributed, shared memory model. More elaborate and at a somewhat higher level than Linda, but similar in spirit, PVM (for
Parallel Virtual Machine) provided a general mechanism-based on a standard API and messaging protocol for parallel computation over networks of general-purposes processors. More recently, MPI (the Message Passing Interface) has gained ground. Although they differ in many particulars, both are essentially standards that specify an API for developing parallel algorithms and the behavioral requirements for participating processors. By now, libraries provide access to the API from C and/or Fortran. Client implementations are available for nearly every operating system and hardware configuration.
Grid Computing represents a more amorphous and broad-reaching initiative - in certain respects, it is more a philosophical movement than an engineering project. The overarching objective of Grid Computing is to pool together heterogeneous resources of all types (e.g., storage, processors, instruments, displays, etc.), anywhere on the network, and make them available to all users. Key elements of this vision include decentralized control, shared data, and distributed, interactive collaboration.
A third stream of development within high-performance distributed computing is loosely characterized as "clustering." Clusters provide HPC by aggregating commodity, off-the-shelf technology (COTS). By far the most prominent clustering initiative is Beowulf, a loose confederation of researchers and developers focused on clusters of Linux-based PCs. Another widely recognized project is Berkeley NOW (Network of Workstations), which has constructed a distributed supercomputer by linking together a heterogeneous collection of Unix and NT workstations over a high-speed switched network at the University of California.
There is considerable overlap among these approaches. For example, both Grid implementations and clusters frequently employ PVM, MPI, and/or other tools, many of which were developed initially to target dedicated parallel hardware. Nor is the terminology particularly well defined; there is no clear division between "grids" and "clusters," and some authors draw a distinction between "clusters" of dedicated processors, as opposed to "NOWs" (Networks of Workstations), which enlist part-time or intermittently available resources.
Clusters and grids as enterprise solutions
The vast majority of clusters and Grid implementations are deployed within large universities and Government research laboratories. These implementations were specifically developed as alternatives to dedicated supercomputing hardware, to address the kinds of research problems that formed the traditional domain of supercomputing. Consequently, much of the development has focused on emulating some of the more complex features of the parallel hardware that are essential to address these research problems.
The earliest commercial deployments also targeted traditional supercomputing applications. Examples include: hydrodynamics and fluid-flow, optics, and manufacturing process control. In both research and commercial settings, clustering technologies provide at least a partial solution for two of the most serious shortcomings of traditional supercomputing: (1) up-front hardware cost, and (2) chronic software obsolescence (since the system software to support distributed computing over loosely coupled networks must, out of necessity, provide substantial abstraction of the underlying hardware implementation).
However, clusters and grid implementations share, and in many cases exacerbate, some of the most important weaknesses of supercomputing hardware solutions, particularly within a commercial enterprise environment. Complex, low-level APIs necessitate protracted, costly development and integration efforts. Administration, especially scheduling and management of distributed resources, is burdensome and expensive. In many cases, elaborate custom development is needed to provide fault tolerance and reliability. Both developers and administrators require extensive training and special skills. And although clusters offer some advantages versus dedicated hardware with respect to scale, fragility and administrative complexity effectively impose hard limits on the number of nodes - commercial installations with as many as 50 nodes are rare, and only a handful support more than 100.
These weaknesses have become increasingly apparent as commercial deployments have moved beyond traditional supercomputing applications. Many of the most important commercial applications, including the vast majority of process-intensive financial applications, are "naturally parallel." That is, the computation is readily partitioned into a number of more or less independent sub-computations. Within financial services, the two most common sources of natural parallelism are portfolios, which are partitioned by instrument or counterparty, and simulations, which are partitioned by sample point. For these applications, complex features to support process synchronization, distributed shared memory, and inter-process communication are irrelevant - a basic "compute server" or "task farm" provides the ideal solution. The features that are essential, especially for time-sensitive, business-critical applications, are fault-tolerance, reliability, and ease-of-use. Unnecessary complexity drives up development and administration costs, undermines reliability, and limits scale.
HPC in the financial services industry
The history of HPC within financial services has been characterized by inappropriate technology. One of the earliest supercomputing applications on Wall Street was Monte Carlo valuation of mortgage-backed securities (MBS) - a prototypical example of "naturally parallel" computation. With deep pockets and an overwhelming need for computing power, the MBS trading groups adopted an obvious, well-established solution: supercomputing hardware, specifically MPPs (Massively Parallel Processors). Although this approach solved the immediate problem, it was enormously inefficient.
The MPP hardware that they purchased was developed for research applications with intricate inter-process synchronization and communication requirements, not for naturally parallel applications within a commercial enterprise. Consequently, it came loaded with complex features that were completely irrelevant for the Monte Carlo calculations that the MBS applications required, but failed to provide many of the turnkey administrative and reliability features that are typically associated with enterprise computing. Protracted in-house development efforts focused largely on customized middleware that had nothing to do with the specific application area and resulted in fragile implementations that imposed an enormous administrative burden. Growing portfolios and shrinking spreads continued to increase the demand for computing power, and MPP solutions wouldn't scale, so most of these development efforts have been repeated many times over.
As computing requirements have expanded throughout the enterprise, the same story has played out again and again - fixed-income and equity derivatives desks, global credit and market risk, treasury and Asset-Liability Management (ALM), etc., all have been locked in an accelerating cycle of hardware obsolescence and software redevelopment. More recently, clustering and grid technologies have offered a partial solution, in that they reduce the upfront hardware cost and eliminate some of the redevelopment associated with incremental upgrades. But they continue to suffer from the same basic defect - as an outgrowth of traditional supercomputing, they are loaded with irrelevant features and low-level APIs that drive up cost and complexity, while failing to provide turnkey support for basic enterprise requirements like fault-tolerance and central administration.
The invention, as described below, provides an improved, Grid-like distributed computing system that addresses the practical needs of real-world commercial users, such as those in the financial services and energy industries.
BRIEF SUMMARY OF THE INVENTION
The invention provides an off-the-shelf product solution to target the specific needs of commercial users with naturally parallel applications. A top-level, public API provides a simple "compute server" or "task farm" model that dramatically accelerates integration and deployment. By providing built-in, turnkey support for enterprise features like fault-tolerant scheduling, fail-over, load balancing, and remote, central administration, the invention eliminates the need for customized middleware and yields enormous, on-going savings in maintenance and administrative overhead.
Behind the public API is a layered, peer-to-peer (P2P) messaging implementation that provides tremendous flexibility to configure data transport and overcome bottlenecks, and a powerful underlying SDK based on pluggable components and equipped with a run-time XML scripting facility that provides a robust migration path for future enhancements.
Utilizing the techniques described in detail below, the invention supports effectively unlimited scaling over commoditized resource pools, so that end-users can add resources as needed, with no incremental development cost. The invention seamlessly incorporates both dedicated and intermittently idle resources on multiple platforms (Windows™, Unix, Linux, etc.). And it provides true idle detection and automatic fault-tolerant rescheduling, thereby
harnessing discrete pockets of idle capacity without sacrificing guaranteed service levels. (In contrast, previous efforts to harness idle capacity have run low-priority background jobs, restricted utilization to overnight idle periods, or imposed intrusive measures, such as checkpointing.) The invention provides a system that can operate on user desktops during peak business hours without degrading performance or intruding on the user experience in any way.
While the above discussion outlines some of the important features and advantages of the invention, those skilled in the art will recognize that the invention contains numerous other novel features and advantages, as described below in connection with applicants' preferred LiveCluster embodiment.
Accordingly, generally speaking, and without intending to be limiting, one aspect of the invention relates to distributed computing systems comprising, for example: a plurality of engines; at least one broker; at least one client application, the client application having an associated driver; the driver being configured to enable communication between the client application and two or more of the engines via a peer-to-peer communication network; the system characterized in that (i) the driver is further configured to enable communication between the client application and the at least one broker over the peer-to-peer network and (ii) the broker is further configured to communicate with the engines over the peer-to-peer network, thereby enabling the broker to control and supervise the execution of tasks provided by the client application on the two or more engines. The system may further include at least one failover broker configured to communicate with the driver and the engines, and, in the event of a broker failure, control and supervise the execution of tasks provided by the client application on the two or more engines. The broker may further include an adaptive scheduler configured to selectively assign and control the execution of tasks provided by the client application on the engines. The adaptive scheduler may be further configured to redundantly assign one or more of the task(s) provided by the client application to multiple engines, so as to ensure the timely completion of such redundantly assigned task(s) by at least one of the engines. The tasks provided by the client application may have associated discriminators. The broker may utilize parameters associated with such discriminators and the engines to determine the assignment of tasks to engines. The system may control the timing of selected
communications between the driver and the engines (or other communications, such as engine-to-engine communications) so as to avoid bottlenecks associated with overloads of the peer-to-peer network, such as delays associated with excessive simultaneous network traffic. The broker and the two or more engines may each include an associated propagator object that permits control over engine-to-engine propagation of data over the peer-to-peer network. The propagator objects may enable an engine or broker node to perform at least three, four, five, six, seven or eight of the following operations: (i) broadcast a message to all nodes, except the current node; (ii) clear all message(s), and associated message state(s), on specified broker(s) and/or engine(s); (iii) get message(s) for the current node; (iv) get the message(s) from a specified node for the current node; (v) get the state of a specified node; (vi) get the total number of nodes; (vii) send a message to a specified node; and/or (viii) set the state of a specified node.
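The operations enumerated above suggest a propagator API along the following lines. This is only an illustrative sketch: the interface and method names below are hypothetical, invented for illustration, and simply mirror the eight listed operations; they are not taken from the LiveCluster SDK.

    // Hypothetical sketch of a propagator node interface, mirroring operations (i)-(viii) above.
    // Names and signatures are illustrative only; they are not the actual LiveCluster SDK API.
    import java.io.Serializable;
    import java.util.List;

    public interface PropagatorNode {
        void broadcast(Serializable message);                 // (i) send to all nodes except this one
        void clearMessages(int... nodeIds);                   // (ii) clear messages/state on given nodes
        List<Serializable> getMessages();                     // (iii) messages waiting for this node
        List<Serializable> getMessagesFrom(int nodeId);       // (iv) messages from one specific node
        Serializable getState(int nodeId);                    // (v) read another node's state
        int getNodeCount();                                   // (vi) total number of participating nodes
        void send(int nodeId, Serializable message);          // (vii) point-to-point message
        void setState(int nodeId, Serializable state);        // (viii) write a node's state
    }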
Still further aspects of the present invention relate to other system configurations, methods, software, encoded articles-of-manufacture and/or electronic data signals comprised of, or produced in accordance with, portions of the preferred LiveCluster embodiment, described in detail below.
BRIEF DESCRIPTION OF THE FIGURES
The present invention will be best appreciated by reference to the following set of figures (to be considered in combination with the associated detailed description) in which:
FIGs. 1-2 depict data flows in the preferred LiveCluster embodiment of the invention; FIGs. 3-12 are code samples from the preferred LiveCluster embodiment of the invention;
FIG. 13 depicts comparative data flows in connection with the preferred LiveCluster embodiment of the invention;
FIGs. 14-31 are code samples from the preferred LiveCluster embodiment of the invention;
FIGs. 32-53 are screen shots from the preferred LiveCluster embodiment of the invention; FIGs. 33-70 are code samples from the preferred LiveCluster embodiment of the invention;
FIG. 71 illustrates data propagation using propagators in accordance with the preferred LiveCluster embodiment of the invention;
FIGs. 72-81 are code samples from the preferred LiveCluster embodiment of the invention; and,
FIGs. 82-87 depict various illustrative configurations of the preferred LiveCluster embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
What follows is a rough glossary of terms used in describing the preferred LiveCluster implementation of the invention.
Broker A subcomponent of a Server that is responsible for maintaining a
"job space, " for managing Jobs and Tasks and the associated interactions with Drivers and Engines.
Daemon A process in Unix that runs in the background and performs specific actions or runs a server with little or no direct interaction.
In Windows NT or Windows 2000, these are also called Services.
Director A subcomponent of a Server that is responsible for routing Drivers and Engines to Brokers.
Driver The component used to maintain a connection between the
LiveCluster Server and the client application.
Engine The component that actually handles the work of computation, accepting work from and returning results to a Broker.
Failover Broker A Broker configured to take on work when another Broker fails.
The Failover Broker will continue to accept Jobs until another
Broker is functioning again, and then it will wait for any remaining Jobs to finish before returning to a wait state.
Job A unit of work submitted from a Driver to a Server. Servers break apart Jobs into Tasks for further computation.
LiveCluster LiveCluster provides a flexible platform for distributing large computations to idle, underutilized and/or dedicated processors on any network. The LiveCluster architecture includes a Driver, one or more Servers, and several Engines.
Server The component of the LiveCluster™ system that takes work from Drivers, coordinates it with Engines, and supports Web-based administrative tools. A Server typically contains a Director and a Broker.
Task An atomic unit of work. Jobs are broken into Tasks and then distributed to Engines for computation.
Standalone Broker A Server that has been configured with a Broker, but no Director; its configured primary and secondary Directors are both in other
Servers.
Service A program in Windows NT or Windows 2000 that performs specific functions to support other programs. In Unix, these are also called daemons.
How LiveCluster Works
LiveCluster supports a simple but powerful model for distributed parallel processing. The basic configuration incorporates three major components — Drivers, Servers, and Engines. Generally speaking, the LiveCluster model works as follows:
A. Client applications (via Drivers) submit messages with work requests to a central Server.
B. The Server distributes the work to a network of Engines, or individual CPUs with LiveCluster installed.
C. The Engines return the results to the Server.
D. The Server collects the results and returns them to the Drivers.
Tasks and Jobs
In LiveCluster, work is defined in two different ways: a larger, overall unit, and a smaller piece, or subdivision, of that unit. These are called Jobs and Tasks. A Job is a unit of work. Typically, this refers to one large problem that has a single solution. A Job is split into a number of smaller units, each called a Task. An application utilizing LiveCluster submits problems as Jobs, and LiveCluster breaks the Jobs into Tasks. Other computers solve the Tasks and return their results, where they are added, combined, or collated into a solution for the Job.
Component Architecture
The LiveCluster system is implemented almost entirely in Java. Except for background daemons and the installation program, each component is independent of the operating system under which it is installed. The components are designed to support interoperation across both wide and local area networks (WANs and LANs), so the design is very loosely coupled, based on asynchronous, message-driven interactions. Configurable settings govern message encryption and the underlying transport protocol.
In the next section, we describe each of the three major components in the LiveCluster system — Driver, Server, and Engine — in greater detail.
Server
The Server is the most complex component in the system. Among other things, the Server:
• Keeps track of the Engines and the ongoing computations (Jobs and Tasks)
  • Supports the web-based administration tools — in particular, it embeds a dedicated HTTP Server, which provides the primary administrative interface to the entire system. Despite its complexity, however, the Server imposes relatively little processing burden. Because Engines and Drivers exchange data directly, the Server doesn't have to consume a great deal of network bandwidth. By default, LiveCluster is configured so that Drivers and Engines communicate to the Server only for lightweight messages.
The Server functionality is partitioned into two subcomponent entities: the Broker and the Director. Roughly speaking, the Broker is responsible for maintaining a "job space" for managing Jobs and Tasks and the associated interactions with Drivers and Engines. The primary function of the Director is to manage Brokers. Typically, each Server instance embeds a Broker/Director pair. The simplest fault-tolerant configuration is obtained by deploying two Broker/Director pairs on separate processors, one as the primary, the other to support failover. For very large-scale deployments, Brokers and Directors are isolated within separate Server instances to form a two-tiered Server network. Ordinarily, in production, the Server is installed as a service (under Windows) or as a daemon (under Unix) — but it can also run "manually," under a log-in shell, which is primarily useful for testing and debugging.
Driver
The Driver component maintains the interface between the LiveCluster Server and the client application. The client application code embeds an instance of the Driver. In Java, the Driver (called JDriver) exists as a set of classes within the Java Virtual Machine (JVM). In C++, the Driver (called Driver++) is purely native, and exists as a set of classes within the application. The client code submits work and administrative commands and retrieves computational results and status information through a simple API, which is available in both Java and C++. Application code can also interact directly with the Server by exchanging XML messages over HTTP. Conceptually, the Driver submits Jobs to the Server, and the Server returns the results of the individual component Tasks asynchronously to the Driver. In the underlying implementation, the Driver may exchange messages directly with the Engines within a transaction space maintained by the Server.
Engine
Engines report to the Server for work when they are available, accept Tasks, and return the results. Engines are invoked on desktop PCs, workstations, or on dedicated servers by a native daemon. Typically, there will be one Engine invoked per participating CPU. For example, four Engines might be invoked on a four-processor SMP.
An important feature of the LiveCluster platform is that it provides reliable computations over networks of interruptible Engines, making it possible to utilize intermittently active resources when they would otherwise remain idle. The Engine launches when it is determined that the computer is idle (or that sufficient system capacity is available in a multi-CPU setting) and relinquishes the processor immediately if it is interrupted (for example, by keyboard input on a desktop PC). It is also possible to launch one or more Engines on a given processor deterministically, so they run in competition with other processes (and with one another) as scheduled by the operating system. This mode is useful both for testing and for installing Engines on dedicated processors.
Principles of Operation
Idle Detection
Engines are typically installed on network processors, where they utilize intermittently available processing capacity that would otherwise go unused. This is accomplished by running an extremely lightweight background process on the Engine. This invocation process monitors the operating system and launches an Engine when it detects an appropriate idle condition.
The definition and detection of appropriate idle conditions is inherently platform- and operating-system dependent. For desktop processors, the basic requirement is that the Engine does nothing to interfere with the normal activities of the desktop user. For multi-processor systems, the objective, roughly speaking, is to control the number of active Engines so that they consume only cycles that would otherwise remain idle. In any case, Engines must relinquish the host processor (or their share of it, on multi-processor systems) immediately when it's needed for a primary application. (For example, when the user hits a key on a workstation, or when a batch process starts up on a Server.)
Adaptive Scheduling
Fault-tolerant adaptive scheduling provides a simple, elegant mechanism for obtaining reliable computations from networks of varying numbers of Engines with different available CPU resources. Engines report to the Server when they are "idle" — that is, when they are available to take work. We say the Engine "logs in," initiating a login session. During the login session, the Engine polls the Server for work, accepts Task definitions and inputs, and returns results. If a computer is no longer idle, the Engine halts, and the task is rescheduled to another Engine. Meanwhile, the Server tracks the status of Tasks that have been submitted to the Engines, and reschedules tasks as needed to ensure that the Job (collection of Tasks) completes.
As a whole, this scheme is called "adaptive" because the scheduling of Tasks on the Engines is demand-driven. So long as the maximum execution time for any Task is small relative to the average "idle window" — that is, the length of the average login session, between logging in and dropping out — adaptive scheduling provides a robust, scalable solution for load balancing. More capable Engines, or Engines that receive lighter Tasks, simply report more frequently for work. In case the Engine drops out because of a "clean" interruption — because it detects that the host processor is no longer "idle" — it sends a message to the Server before
it exits, so that the Server can reschedule running Tasks immediately. However, the Server cannot rely on this mechanism alone. In order to maintain performance in the presence of network drop-outs, system crashes, etc., the Server monitors a heartbeat from each active Engine and reschedules promptly in case of time-outs.
Directory Replication
Directory replication is a method for providing large files that change relatively infrequently. Instead of sending the files each time a Job is submitted and incurring the transfer overhead, the files are sent to each Engine once, where they are cached. The Server monitors a master directory structure and maintains a synchronized replica of this directory on each Engine, keeping each Engine synchronized with the master's files. This method can be used for generic files, or for platform-specific items, such as Java .jar files, DLLs, or object libraries.
Basic API Features
Before examining the various features and options provided by LiveCluster, it is appropriate to introduce the basic features of the LiveCluster API by means of several sample programs.
This section discusses the following Java interfaces and classes:
  • TaskInput
  • TaskOutput
  • Tasklet
  • Job
  • PropertyDiscriminator
  • EngineSession
  • StreamJob
  • StreamTasklet
  • DataSetJob
  • TaskDataSet
The basic LiveCluster API consists of the TaskInput, TaskOutput and Tasklet interfaces, and the Job class. LiveCluster is typically used to run computations on different inputs in parallel. The computation to be run is implemented in a Tasklet. A Tasklet takes a TaskInput, operates on it, and produces a TaskOutput. Using a Job object, one's program submits TaskInputs, executes the job, and processes the TaskOutputs as they arrive. The Job collaborates with the Server to distribute the Tasklet and the various TaskInputs to Engines.
FIG. 1 illustrates the relationships among the basic API elements. Although it is helpful to think of a task as a combination of a Tasklet and one TaskInput, there is no Task class in the API. To understand the basic API better, we will write a simple LiveCluster job. The job generates a unique number for each task, which is given to the tasklet as its TaskInput. The tasklet uses the number to return a TaskOutput consisting of a string. The job prints these strings as it receives them. This is the LiveCluster equivalent of a "Hello, World" program. This program will consist of five classes: one each for the TaskInput, TaskOutput, Tasklet and Job, and one named Test that contains the main method for the program.
TaskInput and TaskOutput
Consider first the TaskInput class. The basic API is found in the com.livecluster.tasklet package, so one should import that package (see FIG. 3). The TaskInput interface contains no methods, so one need not implement any. Its only purpose is to mark one's class as a valid TaskInput. The TaskInput interface also extends the Serializable interface of the java.io package, which means that all of the class's instance variables must be serializable (or transient). Serialization is used to send the TaskInput object from the Driver to an Engine over the network. As its name suggests, the SimpleTaskInput class is quite simple: it holds a single int representing the unique identifier for a task. For convenience, one need not make the instance variable private.
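The actual listing appears in FIG. 3 (not reproduced here); a minimal sketch of such a class, assuming only the package and interface described above, would be:

    import com.livecluster.tasklet.TaskInput;

    // A TaskInput carrying only the unique identifier of a task.
    // Sketch of the class described in the text; FIG. 3 contains the actual listing.
    public class SimpleTaskInput implements TaskInput {
        int taskId;   // unique identifier for this task; left non-private for convenience
    }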
TaskOutput, like TaskInput, is an empty interface that extends Serializable, so the output class should not be surprising (see FIG. 4).
Writing a Tasklet
Now we turn to the Tasklet interface, which defines a single method: public TaskOutput service(TaskInput); The service method performs the computation to be parallelized. For our Hello program, this involves taking the task identifier out of the TaskInput and returning it as part of the TaskOutput string (see FIG. 5). The service method begins by extracting its task ID from the TaskInput. It then creates a SimpleTaskOutput, sets its instance variable, and returns it. One aspect of the Tasklet interface not seen here is that it, too, extends Serializable. Thus any instance variables of the tasklet must be serializable or transient. With the help of a simple main method (see FIG. 6), one can run this code. This program creates a Tasklet, and then repeatedly creates a TaskInput and calls the Tasklet's service method on it, displaying the results. Although not something one would want to do in practice, this code does illustrate the essential functionality of LiveCluster. In essence, LiveCluster provides a high-performance, fault-tolerant, highly parallel way to repeatedly execute the line:
TaskOutput output = tasklet.service(input);
The Job Class
To run this code within LiveCluster, one needs a class that extends Job. Recall that a Job is associated with a single tasklet. The needed Job class creates several TaskInputs, starts the job running, and collects the TaskOutputs that result. To write a Job class, one generally writes the following methods:
  • (likely) A constructor to accept parameters for the job. It is recommended that the constructor call the setTasklet method to set the job's tasklet.
  • (optionally) A createTaskInputs method to create all of the TaskInput objects. Call the addTaskInput method on each TaskInput one creates to add it to the job. Each TaskInput one adds results in one task.
  • (required) A processTaskOutput method. It will be called for each TaskOutput that is produced.
The HelloJob class is displayed in FIG. 7. The constructor creates a single HelloTasklet and installs it into the job with the setTasklet method. The createTaskInputs method creates ten instances of SimpleTaskInput, sets their taskIds to unique values, and adds each one to the job with the addTaskInput method. The processTaskOutput method displays the string that is inside its argument.
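The actual listings are in FIGs. 4, 5 and 7; pieced together from the description above, the output, tasklet and job classes might look roughly as follows. This is a sketch, not the patent's own code: the method names (service, setTasklet, addTaskInput, createTaskInputs, processTaskOutput) come from the text, SimpleTaskInput is the class sketched earlier, and the field names and exact signatures are assumptions.

    import com.livecluster.tasklet.*;

    // Sketch of the output class described for FIG. 4: a TaskOutput holding one string.
    class SimpleTaskOutput implements TaskOutput {
        String message;
    }

    // Sketch of the tasklet described for FIG. 5: turns a task ID into a greeting string.
    class HelloTasklet implements Tasklet {
        public TaskOutput service(TaskInput input) {
            SimpleTaskInput in = (SimpleTaskInput) input;
            SimpleTaskOutput out = new SimpleTaskOutput();
            out.message = "Hello from #" + in.taskId;
            return out;
        }
    }

    // Sketch of the job described for FIG. 7: ten tasks, each printed as its output arrives.
    class HelloJob extends Job {
        public HelloJob() {
            setTasklet(new HelloTasklet());
        }
        public void createTaskInputs() {
            for (int i = 0; i < 10; i++) {
                SimpleTaskInput input = new SimpleTaskInput();
                input.taskId = i;
                addTaskInput(input);
            }
        }
        public void processTaskOutput(TaskOutput output) {
            System.out.println(((SimpleTaskOutput) output).message);
        }
    }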
Putting It All Together
The Test class (see FIG. 8) consists of a main method that runs the job. The first line creates the job. The second line has to do with distributing the necessary class files to the Engines. The third line executes the job by submitting it to the LiveCluster Server, then waits until the job is finished. (The related executeInThread method runs the job in a separate thread, returning immediately.)
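FIG. 8 contains the actual Test class; a sketch consistent with the three lines described above (create the job, attach a jar file through the job options, execute) might read as follows. The jar file name and the exact execute method name are assumptions.

    // Sketch of the Test class described for FIG. 8; the jar file name is illustrative.
    public class Test {
        public static void main(String[] args) {
            HelloJob job = new HelloJob();
            job.getOptions().setJarFile("hello.jar"); // classes the Engines need for this job
            job.execute();                            // submit to the Server and wait for completion
            System.out.println("DONE");
        }
    }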
The second line of main deserves more comment. First, the getOptions method returns a JobOptions object. The JobOptions class allows one to configure many features of the job. For instance, one can use it to set a name for the job (useful when looking for a job in the Job List of the LiveCluster Administration Tool), and to set the job's priority. Here we use the JobOptions method setJarFile, which takes the name of a jar file. This jar file should contain all of the files that an Engine needs to run the tasklet. In this case, those are the class files for SimpleTaskInput, SimpleTaskOutput, and HelloTasklet. By calling the setJarFile method, one tells LiveCluster to distribute the jar file to all Engines that will work on this job. Although suitable for development, this approach sends the jar file to the Engines each time the job is run, and so should not be used for production. Instead, one should use the file replication service or a shared network file system when in production.
Running the Example
Running the above-discussed code will create the following output:
Hello from #0 Hello from #5 Hello from #2 Hello from #4 Hello from #9
Hello from #1 Hello from #6 Hello from #7 Hello from #8 Hello from #3
DONE
Summary
  • The basic API consists of the TaskInput, TaskOutput and Tasklet interfaces and the Job class. Typically, one will write one class that implements TaskInput, one that implements TaskOutput, one that implements Tasklet, and one that extends Job.
  • A Tasklet's service method implements the computation that is to be performed in parallel. The service method takes a TaskInput as argument and returns a TaskOutput.
  • A Job object manages a single Tasklet and a set of TaskInputs. It is responsible for providing the TaskInputs, starting the job and processing the TaskOutputs as they arrive.
  • Some additional code is necessary to create a job, arrange to distribute a jar file of classes, and execute the job.
Data Parallelism
In this section, we will look at a typical financial application: portfolio valuation. Given a portfolio of deals, our program will compute the value of each one. For those unfamiliar with the concepts, a deal here represents any financial instrument, security or contract, such as a stock, bond, option, and so on. The procedure used to calculate the value, or theoretical price, of a deal depends on the type of deal, but typically involves reference to market information like interest rates. Because each deal can be valued independently of the others, there is a natural way to parallelize this problem: compute the value of each deal concurrently. Since the activity is the same for all tasks (pricing a deal) and only the deal changes, we have an example of data parallelism. Data-parallel computations are a perfect fit for LiveCluster. A tasklet embodies the common activity, and each TaskInput contains a portion of the data.
The Domain Classes
Before looking at the LiveCluster classes, we will first discuss the classes related to the application domain. There are six of these: Deal, ZeroCouponBond, Valuation, DealProvider, PricingEnvironment and DateUtil.
Each deal is represented by a unique integer identifier. Deals are retrieved from a database or other data source via the DealProvider. Deal's value method takes a PricingEnvironment as an argument, computes the deal's value, and returns a Valuation object, which contains the value and the deal ID. ZeroCouponBond represents a type of deal that offers a single, fixed payment at a future time. DateUtil contains a utility function for computing the time between two dates.
The Deal class is abstract, as is its value method (see FIG. 9). The value method's argument is a PricingEnvironment, which has methods for retrieving the interest rates and the valuation date, the reference date from which the valuation is taking place. The value method returns a Valuation, which is simply a pair of deal ID and value. Both Valuation and PricingEnvironment are serializable so they can be transmitted over the network between the Driver and Engines. ZeroCouponBond is a subclass of Deal that computes the value of a bond with no interest, only a principal payment made at a maturity date (see FIG. 10). The value method uses information from the PricingEnvironment to compute the present value of the bond's payment by discounting it by the appropriate interest rate.
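FIGs. 9-10 contain the actual Deal and ZeroCouponBond listings; the discounting step might look roughly like the sketch below. The field names, the continuous-compounding convention, and the DateUtil and PricingEnvironment method names are all assumptions made for illustration, not details taken from the figures.

    import java.io.Serializable;
    import java.util.Date;

    // Sketch of the ZeroCouponBond described for FIG. 10 (names and discounting convention assumed).
    public class ZeroCouponBond extends Deal implements Serializable {
        private double principal;   // the single fixed payment
        private Date maturity;      // when the payment is made

        public ZeroCouponBond(int dealId, double principal, Date maturity) {
            super(dealId);                       // assumes Deal keeps the integer identifier
            this.principal = principal;
            this.maturity = maturity;
        }

        public Valuation value(PricingEnvironment env) {
            double t = DateUtil.yearsBetween(env.getValuationDate(), maturity); // time to maturity (assumed helper)
            double r = env.getInterestRate(t);                                  // rate for that horizon (assumed accessor)
            double presentValue = principal * Math.exp(-r * t);                 // continuous discounting (assumption)
            return new Valuation(getDealId(), presentValue);
        }
    }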
The DealProvider class simulates retrieving deals from persistent storage. The getDeal method accepts a deal ID and returns a Deal object. Our version (see FIG. 11) caches deals in a map. If the deal ID is not in the map, a new ZeroCouponBond is created.
With the classes discussed so far, one can write a simple stand-alone application to value some deals (see FIG. 12). This program loads and values 10 deals using a single pricing environment. The LiveCluster application will also take this approach, using the same pricing environment for all deals. The output of this program looks something like:
deal ID = 0, value = 3253.5620409955113
deal ID = 1, value = 750.9387692727968
deal ID = 2, value = 8525.835888008573
deal ID = 3, value = 5445.987705373893
deal ID = 4, value = 3615.2722123351246
deal ID = 5, value = 1427.1584028651682
deal ID = 6, value = 5824.137556101124
deal ID = 7, value = 2171.6068493160974
deal ID = 8, value = 5099.034037828654
deal ID = 9, value = 3652.567194863038
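FIG. 12 holds the actual stand-alone program; its core loop is presumably close to the following sketch, under the same assumptions about the domain classes as above (the PricingEnvironment constructor and the Valuation accessors are assumed).

    // Sketch of the stand-alone valuation program described for FIG. 12.
    public class StandaloneValuation {
        public static void main(String[] args) {
            PricingEnvironment env = new PricingEnvironment();   // default construction assumed
            DealProvider provider = new DealProvider();
            for (int dealId = 0; dealId < 10; dealId++) {
                Deal deal = provider.getDeal(dealId);            // load (or create) the deal
                Valuation v = deal.value(env);                   // price it
                System.out.println("deal ID = " + v.getDealId() + ", value = " + v.getValue());
            }
        }
    }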
With the domain classes finished, we proceed to the LiveCluster application. The basic structure is clear enough: we will have a ValuationTasklet class to value deals and return Valuations, which will be gathered by a ValuationJob class. But there are three important questions we must answer before writing the code:
1. How are Deal objects provided to the tasklet?
2. How is the PricingEnvironment object provided to the tasklet?
3. How many deals should a tasklet value at once?
We address the first two of these questions in the next section, "Understanding Data Movement," and the third in the section following, "Understanding Granularity."
Understanding Data Movement
The first question is how to provide deals to the tasklet. One choice is to load the deal on the Driver and send the Deal object in the TaskInput; the other is to send just the deal ID, and let the tasklet load the deal itself. The second way is likely to be much faster, for two reasons: reduced data movement and increased parallelism. To understand the first reason, consider FIG. 13, the left portion of which illustrates the connections among the Driver, the Engines, and one's data server, on which the deal data resides. The left-hand diagram illustrates the data flow that occurs when the Driver loads deals and transmits them to the Engines. The deal data travels across the network twice: once from the data server to the Driver, and again from the Driver to the Engine. The right-hand diagram shows what happens when only the deal IDs are sent to the Engines. The data travels over the network only once, from the data server to the Engine.
The second reason why sending only deal IDs will be faster is that tasklets will try to load deals in parallel. Provided one's data server can keep up with the demand, this can increase the overall throughput of the application. These arguments for sending deal IDs instead of deals themselves make sense for the kind of architecture sketched in FIG. 13, but not for other, less typical configurations. For example, if the Driver and the data server are running on the same machine, then it may make sense, at least from a data movement standpoint, to load the deals in the Driver. Let us now turn to the question of how to provide each tasklet with the PricingEnvironment. Recall that in this application, every deal will be valued with the same PricingEnvironment, so only a single object needs to be distributed across the LiveCluster. Although the obvious choice is to place the PricingEnvironment in each TaskInput, there is a better way: place the PricingEnvironment within the tasklet itself. The first time that an Engine is given a task from a particular job, it downloads the tasklet object from the Driver, as well as the TaskInput. When given subsequent tasks from the same job, it downloads only the TaskInput, reusing the cached tasklet. So placing an object in the tasklet will never be slower than putting it in a TaskInput, and will be faster if Engines get more than one task from the same job.
One can summarize this section by providing two rules of thumb:
  • Let each tasklet load its own data.
  • If an object does not vary across tasks, place it within the tasklet.
Understanding Granularity
The third design decision for our illustrative LiveCluster portfolio valuation application concerns how many deals to include in each task. Placing a single deal in each task yields maximum parallelism, but it is unlikely to yield maximum performance. The reason is that there is some communication overhead for each task.
For example, say that one has 100 processors in a LiveCluster, and 1000 deals to price. Assume that it takes 100 ms to compute the value of one deal, and that the total communication overhead of sending a TaskInput to an Engine and receiving its TaskOutput is 500 ms. Since there are 10 times more deals than processors, each processor will receive 10 TaskInputs and produce 10 TaskOutputs during the life of the computation. So the total time for a program that allocates one deal to each TaskInput is roughly (0.1s compute time per task + 0.5s overhead) x 10 = 6 seconds. Compare that with a program that places 10 deals in each TaskInput, which requires only a single round-trip communication to each processor: (0.1s x 10) compute time per task + 0.5s overhead = 1.5 seconds. The second program is much faster because the communication overhead is a smaller fraction of the total computation time. The following table summarizes these calculations, and adds another data point for comparison:
[Table: total computation time as a function of the number of deals placed in each TaskInput, summarizing the calculations above.]
In general, the granularity — amount of work — of a task should be large compared to the communication overhead. If it is too large, however, then two other factors come into play. First and most obviously, if one has too few tasks, one will not have much parallelism. The third row of the table illustrates this case. By placing 100 deals in each TaskInput, only ten of the 100 available Engines will be working. Second, a task may fail for a variety of reasons — the Engine may encounter hardware, software or network problems, or someone may begin using the machine on which the Engine is running, causing the Engine to stop immediately. When a task fails, it must be rescheduled, and will start from the beginning. Failed tasks waste time, and the longer the task, the more time is wasted. For these reasons, the granularity of a task should not be too large.
Task granularity is an important parameter to keep in mind when tuning an application's performance. We recommend that a task take between one and five minutes. To facilitate tuning, it is wise to make the task granularity a parameter of one's Job class.
The LiveCluster Classes
We are at last ready to write the LiveCluster code for our portfolio valuation application. We will need classes for TaskInput, TaskOutput, Tasklet and Job.
The TaskInput will be a list of deal IDs, and the TaskOutput a list of corresponding Valuations. Since both are lists of objects, we can get away with a single class for both TaskInput and TaskOutput. This general-purpose ArrayListTaskIO class contains a single ArrayList (see FIG. 14).
FIG. 15 shows the entire tasklet class. The constructor accepts a PricingEnvironment, which is stored in an instance variable for use by the service method. As discussed above, this is an optimization that can reduce data movement because tasklets are cached on participating Engines.
The service method expects an ArrayListTaskIO containing a list of deal IDs. It loops over the deal IDs, loading and valuing each deal, just as in our stand-alone application. The resulting Valuations are placed in another ArrayListTaskIO, which is returned as the tasklet's TaskOutput. ValuationJob is the largest of the three LiveCluster classes. Its constructor takes the total number of deals as well as the number of deals to allocate to each task. In a real application, the first parameter would be replaced by a list of deal IDs, but the second would remain to allow for tuning of task granularity.
The createTaskInputs method (see FIG. 16) uses the total number of deals and the number of deals per task to divide the deals among several TaskInputs. The code is subtle and is worth a careful look. In the event that the number of deals per task does not evenly divide the total number of deals, the last TaskInput will contain all the remaining deals.
The processTaskOutput method (see FIG. 17) simply adds the TaskOutput's ArrayList of Valuations to a master ArrayList. Thanks to the deal IDs stored within each Valuation, there is no risk of confusion due to TaskOutputs arriving out of order.
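FIGs. 14-17 hold the actual listings; a consolidated sketch of the pieces just described might look like the following. The class and method names (ArrayListTaskIO, ValuationTasklet, ValuationJob, createTaskInputs, processTaskOutput) come from the text, while the field names, raw collection types, and the exact slicing arithmetic are assumptions.

    import com.livecluster.tasklet.*;
    import java.util.ArrayList;

    // Single class used for both TaskInput and TaskOutput: just a list of objects (FIG. 14).
    class ArrayListTaskIO implements TaskInput, TaskOutput {
        ArrayList list = new ArrayList();
    }

    // Tasklet that values a list of deal IDs using one shared PricingEnvironment (FIG. 15).
    class ValuationTasklet implements Tasklet {
        private PricingEnvironment env;   // shipped once inside the tasklet, then cached on Engines

        public ValuationTasklet(PricingEnvironment env) { this.env = env; }

        public TaskOutput service(TaskInput input) {
            ArrayListTaskIO dealIds = (ArrayListTaskIO) input;
            ArrayListTaskIO valuations = new ArrayListTaskIO();
            DealProvider provider = new DealProvider();           // each tasklet loads its own deals
            for (Object id : dealIds.list) {
                Deal deal = provider.getDeal((Integer) id);
                valuations.list.add(deal.value(env));
            }
            return valuations;
        }
    }

    // Job that slices the deal IDs into TaskInputs and gathers the Valuations (FIGs. 16-17).
    class ValuationJob extends Job {
        private int totalDeals, dealsPerTask;
        private ArrayList allValuations = new ArrayList();

        public ValuationJob(int totalDeals, int dealsPerTask) {
            this.totalDeals = totalDeals;
            this.dealsPerTask = dealsPerTask;
            setTasklet(new ValuationTasklet(new PricingEnvironment()));
        }

        public void createTaskInputs() {
            int numTasks = Math.max(1, totalDeals / dealsPerTask); // leftover deals go to the last task
            for (int t = 0; t < numTasks; t++) {
                int start = t * dealsPerTask;
                int end = (t == numTasks - 1) ? totalDeals : start + dealsPerTask;
                ArrayListTaskIO input = new ArrayListTaskIO();
                for (int id = start; id < end; id++) {
                    input.list.add(id);
                }
                addTaskInput(input);
            }
        }

        public void processTaskOutput(TaskOutput output) {
            allValuations.addAll(((ArrayListTaskIO) output).list);
        }
    }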
The Test class has a main method that will run the application (see FIG. 18). The initial lines of main load the properties file for the valuation application and obtain the values for totalDeals and dealsPerTask.
In summary:
  • LiveCluster is ideal for data-parallel applications, such as portfolio valuation.
  • In typical configurations where the data server and the Driver are on different machines, let each tasklet load its own data from the data server, rather than loading the data into the Driver and distributing it in the TaskInputs.
• Since the Tasklet object is serialized and sent to each Engine, it can and should contain data that does not vary from task to task within a job.
• Task granularity — the amount of work that each task performs — is a crucial performance parameter for LiveCluster. The right granularity will amortize communication overhead while preventing the loss of too much time due to tasklet failure or interruption. Aim for tasks that run in a few minutes.
Engine Properties
In this brief section, we take a look at Engine properties in preparation for the next section, on Engine discrimination. Each Engine has its own set of properties. Some properties are set automatically by LiveCluster, such as the operating system that the Engine is running on and the estimated speed of the Engine's processor. Users can also create custom properties for Engines by choosing Engine Properties under the Configure section of the LiveCluster Administration Tool.
This chapter also introduces a simple but effective way of debugging tasklets by placing print statements within the service method. This output can be viewed from the Administration Tool or written to a log file.
Application Classes
Our exemplary LiveCluster application (see FIG. 19) will simply print out all Engine properties. Since we will not be using TaskInputs or generating TaskOutputs, we only need to write classes for the tasklet, the job and the main method. The EnginePropertiesTasklet class uses LiveCluster's EngineSession class to obtain the Engine's properties. It then prints them to the standard output. The method begins by calling EngineSession's getProperties method to obtain a Properties object containing the Engine's properties. Note that EngineSession resides in the com.livecluster.tasklet.util package. The tasklet then prints out the list of Engine properties to System.out, using the convenient list method of the Properties class.
Where does the output of the service method go? Since Engines are designed to run in the background, the output does not go to the screen of the Engine's machine. Instead, it is transmitted to the LiveCluster Server and, optionally, saved to a log file on the Engine's machine. We will see how to view the output in "Running the Program," below. The try...catch is necessary in this method, because EngineSession.getProperties may throw an exception and the service method cannot propagate a checked exception.
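FIG. 19 has the actual listing; given the description above, the tasklet's service method is presumably close to the sketch below. Whether getProperties is static, and what the tasklet returns as its TaskOutput, are assumptions.

    import com.livecluster.tasklet.*;
    import com.livecluster.tasklet.util.EngineSession;
    import java.util.Properties;

    // Sketch of the EnginePropertiesTasklet described for FIG. 19: dumps the Engine's properties.
    public class EnginePropertiesTasklet implements Tasklet {
        public TaskOutput service(TaskInput input) {
            try {
                Properties props = EngineSession.getProperties(); // properties of the Engine running this task
                props.list(System.out);                            // shows up in the Remote Engine Log window
            } catch (Exception e) {
                e.printStackTrace();   // service cannot propagate a checked exception
            }
            return null;               // no meaningful output in this example (assumption)
        }
    }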
The EngineSession class has two other methods, setProperty and removeProperty, with the obvious meanings. Changes made to the Engine's properties using these methods will last for the Engine's session. A session begins when an Engine first becomes available and logs on to the Server, and typically ends when the Engine's JVM terminates. (Thus, properties set by a tasklet are likely to remain even after the tasklet's job finishes.) Note that calling the setProperties method of the Properties object returned from EngineSession.getProperties will not change the Engine's properties.
To set an Engine's properties permanently, one should use the Engine Properties tool in the Configure section of the Administration Tool. Click on an Engine in the left column. Then enter property names and values on the resulting page. The EnginePropertiesJob class (see FIG. 20) simply adds a few TaskInputs in order to generate tasks. TaskInputs cannot be null, so an empty TaskInput object is provided as a placeholder.
The Test class is similar to the previously-described Test classes.
Running The Program
To see what is written to an Engine's System.out (or System.err) stream, one must open a Remote Engine Log window in the LiveCluster Administration Tool, as follows:
1. From the Manage section of the navigation bar, choose Engine Administration.
2. One should now see a list of Engines that are logged in to one's Server. Click an Engine name in the leftmost column.
3. One should now see an empty window titled Remote Engine Log.
It is important to do these steps before one runs the application. By default, Engine output is not saved to a file, so the data sent to this window is transient and cannot be retrieved once the application has completed.
The output from each Engine should be similar to that shown in FIG. 21. The meaning of some of these properties is obvious, but others deserve comment. The cpuNo property is the number of CPUs in the Engine's computer. The id property is unique for each Engine's computer, while multiple Engines running on the same machine are assigned different instance properties starting from 0.
It is possible to configure an Engine to save its output to a log file as well as sending it to the Remote Engine Log window. One can do this as follows:
1. Visit Engine Configuration in the Configure section of the Administration tool.
2. Choose the configuration one wishes to change from the File list at the top.
3. Find the DSLog argument in the list of properties and set it to true.
4. Click Submit.
5. When the page reloads, click Save.
The log files will be placed on the Engine's machine under the directory where the Engine was installed. On Windows machines, this is c:\Program Files\DataSynapse\Engine by default. In LiveCluster, the log file is stored under ./work/[name]-[instance]/log.
Summary
To summarize the above:
• Engine properties describe particular features of each Engine in the LiveCluster.
• Some Engine properties are set automatically; but one can create and set one's own properties in the Engine Properties page of the Administration Tool.
• The EngineSession class provides access to Engine properties from within a tasklet.
• Writing to System.out is a simple but effective technique for debugging tasklets. The output goes to the Remote Engine Log window, which can be brought up from Engine Administration in the Administration Tool. One can also configure Engines to save the output to a log file.
Discrimination
Discrimination is a powerful feature of LiveCluster that allows one to exert dynamic control over the relationships among Drivers, Brokers and Engines. LiveCluster supports two kinds of discrimination:
• Broker Discrimination: One can specify which Engines and Drivers can log in to a particular Broker. Access this feature by choosing Broker Discrimination in the Configure section of the LiveCluster Administration Tool.
• Engine Discrimination: One can specify which Engines can accept a task. This is done in one's code, or in an XML file used to submit the job.
Both kinds of discrimination work by specifying which properties an Engine or Driver must possess in order to be acceptable.
This section discusses only Engine Discrimination, which selects Engines for particular jobs or tasks. Engine Discrimination has many uses. The possibilities include:
• limiting a job to run on Engines whose usernames come from a specified set, to confine the job to machines under one's jurisdiction;
• limiting a resource-intensive task to run only on Engines whose processors are faster than a certain threshold, or that have more than a specified amount of memory or disk space;
• directing a task that requires operating-system-specific resources to Engines that run under that operating system;
• inventing one's own properties for Engines and discriminating based on them to achieve any match of Engines to tasks that one desires.
In this section, we will pursue the third of these ideas. We will elaborate our valuation example to include two different types of deals. We will assume that the analytics for one kind of deal have been compiled to a Windows DLL file, and thus can be executed only on Windows computers. The other kind of deal is written in pure Java and therefore can run on any machine. We will segregate tasks by deal type, and use a discriminator to ensure that tasks with Windows-specific deals will be sent only to Engines on Windows machines.
Using Discrimination
This discussion will focus on the class PropertyDiscriminator. This class uses a Java Properties object to determine how to perform the discrimination. The Properties object can be created directly in one's code, as we will exemplify below, or can be read from a properties file. When using PropertyDiscriminator, one encodes the conditions under which an
Engine can take a task by writing properties with a particular syntax. For example, setting the property cpuMFlops.gt to the value 80 specifies that the CPU speed of the candidate Engine, in megaflops, must be greater than 80 for the Engine to be eligible. In general, the discriminator property is of the form engine_property.operator. There are operators for string and numerical equality,
numerical comparison, and set membership. They are documented in the Java API documentation for PropertyDiscriminator.
Since a single Properties object can contain any number of properties, a PropertyDiscriminator can specify any number of conditions. All must be true for the Engine to be eligible to accept the task.
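As a rough sketch (assuming, as the above implies, that a PropertyDiscriminator can be constructed directly from a Properties object), a discriminator with two conditions might be built like this:

import java.util.Properties;

Properties props = new Properties();
// The candidate Engine's CPU speed must exceed 80 megaflops...
props.setProperty("cpuMFlops.gt", "80");
// ...and the Engine must run under Windows.
props.setProperty("os.equals", "Win32");
// Every condition must hold for an Engine to be eligible for the task.
IDiscriminator discriminator = new PropertyDiscriminator(props);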
In our example, we want to ensure that tasks that contain OptionDeals are given only to Engines that run under the Windows operating system. The Engine property denoting the operating system is os and its value for Windows is Win32. So, to construct the right discriminator, one would add the line:
props.setProperty("os.equals", "Win32");
to our code.
The Application
Most of the earlier-described classes require no change, including Deal, ZeroCouponDeal, ArrayListTaskIO, Valuation, PricingEnvironment and ValuationTasklet. We will add another subclass of Deal, called OptionDeal, whose value method calls the method nativeValue to do the work (see FIG. 22).
We assume that the nativeValue method is a native method invoking a Windows DLL. Recall that the DealProvider class is responsible for fetching a Deal given its integer identifier. Its getDeal method returns either an OptionDeal object or a ZeroCouponBond object, depending on the deal ID it is given. For this example, we decree that deal IDs less than a certain number indicate OptionDeals, and all others are ZeroCouponBonds.
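For concreteness, OptionDeal might be sketched as follows; the return type of value, the native-method signature, the Valuation constructor and the DLL name are all assumptions, while the delegation to nativeValue follows the description above.

public class OptionDeal extends Deal {

    static {
        // Load the Windows DLL containing the analytics (library name is hypothetical).
        System.loadLibrary("optionanalytics");
    }

    // Implemented in the DLL, so it can run only on Win32 Engines.
    private native double nativeValue(PricingEnvironment env);

    public Valuation value(PricingEnvironment env) {
        // Delegate the numeric work to the native analytics library.
        return new Valuation(nativeValue(env));   // Valuation constructor assumed
    }
}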
The ValuationTasklet class is unchanged, but it is important to note that Deal's value method is now polymorphic:
output.add(deal.value(_pricingEnvironment));
In this line, the heart of ValuationTasklet, the call to value will cause a Windows DLL to run if deal is an OptionDeal.
The ValuationJob class has changed significantly, because it must set up the discriminator and divide the TaskInputs into those with OptionDeals and those without (see FIG. 23). The first three lines set up a PropertyDiscriminator to identify Engines
that run under Windows, as described above. The last two lines call the helper method createDealInputs, which aggregates deal IDs into TaskInputs, attaching a discriminator. The second argument is the starting deal ID; since deal IDs below DealProvider.MIN_OPTION_ID are OptionDeals, the above two calls result in the first group of TaskInputs consisting solely of OptionDeals and the second consisting solely of ZeroCouponBonds.
FIG. 24 shows the code for createDealInputs. This method takes the number of deals for which to create inputs, the deal identifier of the first deal, and a discriminator. (IDiscriminator is the interface that all discriminators must implement.) It uses the same algorithm previously discussed to place Deals into TaskInputs. Then it calls the two-argument version of addTaskInput, passing in the discriminator along with the TaskInput.
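A condensed sketch of how ValuationJob might set this up is shown below; the placement in createTaskInputs, the field names and the deal counts are assumptions, while PropertyDiscriminator, IDiscriminator, createDealInputs, the two-argument addTaskInput and DealProvider.MIN_OPTION_ID come from the text (Properties is java.util.Properties, as in the earlier sketch).

public void createTaskInputs() {
    // Engines eligible for OptionDeal tasks must run Windows.
    Properties props = new Properties();
    props.setProperty("os.equals", "Win32");
    IDiscriminator windowsOnly = new PropertyDiscriminator(props);

    // OptionDeals (IDs below MIN_OPTION_ID) may go only to Windows Engines...
    createDealInputs(_numOptionDeals, 0, windowsOnly);
    // ...while ZeroCouponBonds may run on any Engine (null = no discriminator).
    createDealInputs(_numBondDeals, DealProvider.MIN_OPTION_ID, null);
}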
When createDealInputs is invoked to create OptionDeals, the PropertyDiscriminator we created is passed in. For ZeroCouponBonds, the discriminator is null, indicating no discrimination is to be done: any Engine can accept the task. Using null is the same as calling the one-argument version of addTaskInput.
Summary
• Discriminators allow one to control which Engines run which tasks.
• A discriminator compares the properties of an Engine against one or more conditions to determine if the Engine is eligible to accept a particular task.
• The PropertyDiscriminator class is the easiest way to set up a discriminator. It uses a Properties object or file to specify the conditions.
• Discriminators can segregate tasks among Engines based on operating system, CPU speed, memory, or any other property.
Streaming Data
The service method of a standard LiveCluster tasklet uses Java objects for both input and output. These TaskInput and TaskOutput objects are serialized and transmitted over the network from the Driver to the Engines.
For some applications, it may be more efficient to use streams instead of objects for input and output. For example, applications involving large amounts of data that can process
the data stream as it is being read may benefit from using streams instead of objects. Streams increase concurrency by allowing the receiving machine to process data while the sending machine is still transmitting. They also avoid the memory overhead of deserializing a large object. The StreamTasklet and StreamJob classes enable applications to use streams instead of objects for data transmission.
Application Classes
Our exemplary application will search a large text file for lines containing a particular string. It will be a parallel version of the Unix grep command, but for fixed strings only. Each task is given the string to search for, which we will call the target, as well as a portion of the file to search, and outputs all lines that contain the target.
We will look at the tasklet first. Our SearchTasklet class extends the StreamTasklet class (see FIG. 25). The service method for StreamTasklet takes two parameters: an InputStream from which it reads data, and an OutputStream to which it writes its results (see FIG. 26). The method begins by wrapping those streams in a BufferedReader and a PrintWriter, for performing line-oriented I/O.
It then reads its input line by line. If it finds the target string in a line of input, it copies that line to its output. The constructor is given the target, which it stores in an instance variable. Since all tasks will be searching for the same target, the target should be placed in the tasklet. The service method is careful to close both its input and output streams when it is finished.
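A sketch of such a tasklet appears below; whether service may declare IOException, and the exact base-class contract, are assumptions, but the stream wrapping, line-by-line matching and the closing of both streams follow the description above.

import java.io.*;

public class SearchTasklet extends StreamTasklet {
    private String _target;

    public SearchTasklet(String target) {
        // Every task searches for the same target, so it belongs in the tasklet.
        _target = target;
    }

    public void service(InputStream in, OutputStream out) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        PrintWriter writer = new PrintWriter(out);
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.indexOf(_target) >= 0) {
                writer.println(line);   // copy matching lines to the task output
            }
        }
        writer.close();   // the tasklet is responsible for closing both streams
        reader.close();
    }
}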
Users of StreamTasklet and StreamJob are responsible for closing all streams they are given. Writing a StreamJob is similar to writing an ordinary Job. One difference is in the creation of task inputs: instead of creating an object and adding it to the job, it obtains a stream, writes to it, and then closes it. The SearchJob class's createTaskInputs method illustrates this (see FIG. 27; _linesPerTask and _file are instance variables set in the constructor). The method begins by opening the file to be searched. It writes each group of lines to an OutputStream obtained with the createTaskInput method. (To generate the input for a task, one calls the createTaskInput method, writes to the stream it returns, then closes that stream.)
The loop within createTaskInputs is careful to allocate all of the file's lines to tasks while making sure that no task is given more than the number of lines specified in the constructor.
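One possible shape for this class is sketched below; the method and constructor signatures are assumptions, but the pattern of obtaining a stream from createTaskInput, writing a bounded group of lines, and closing the stream follows the description above.

import java.io.*;

public class SearchJob extends StreamJob {
    private String _file;
    private int _linesPerTask;

    public SearchJob(String file, int linesPerTask) {
        _file = file;
        _linesPerTask = linesPerTask;
    }

    public void createTaskInputs() throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(_file));
        String line = reader.readLine();
        while (line != null) {
            // One task input per group of at most _linesPerTask lines.
            PrintWriter writer = new PrintWriter(createTaskInput());
            int count = 0;
            while (line != null && count < _linesPerTask) {
                writer.println(line);
                count++;
                line = reader.readLine();
            }
            writer.close();   // close each task-input stream when its group is written
        }
        reader.close();
    }
}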
Like an ordinary Job, a StreamJob has a processTaskOutput method (see FIG. 28) that is called with the output of each task. In StreamJob, the method's parameter is an InputStream instead of a TaskOutput object. In this case, the InputStream contains lines that match the target. We print them to the standard output. Once again, it is our responsibility to close the stream we are given.
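In sketch form (exception handling is an assumption), this method, which could be added to the SearchJob class sketched above, might read:

public void processTaskOutput(InputStream output) throws IOException {
    BufferedReader reader = new BufferedReader(new InputStreamReader(output));
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line);   // every line in this stream already matched the target
    }
    reader.close();                 // closing the stream is the job's responsibility
}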
The Test class for this example is similar to previous ones.
Improvements
There are a number of ways this basic application can be improved. Let's first consider the final output from the job, the list of matching lines. Because tasks may complete in any order, these lines may not be in their original order within the file. If this is a concern, then line number information can be sent to and returned from the tasklet, and used to sort the matching lines.
If many lines match the target string, then there will be a lot of traffic from the Engines back to the Driver. This traffic can be reduced by returning line numbers, instead of whole lines, from the tasklet. The line numbers can be sorted at the end, and a final pass made over the file to output the corresponding lines. As a further improvement, byte offsets instead of line numbers can be transmitted, enabling the use of random-access file I/O to obtain the matching lines from the file. Whether these techniques will in fact result in increased performance will depend on a number of factors, including line length, number of matches, and so on. Experimentation will probably be necessary to find the best design.
Another source of improvement may come from multithreading. LiveCluster ensures that calls to processTaskOutput are synchronized, so that only one call is active at a time. Thus a naive processTaskOutput implementation like the one above will read an entire InputStream to completion, a process which may involve considerable network I/O, before moving on to the next. One may achieve better use of the Driver's processor by starting a thread to read the results on each call to processTaskOutput.
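One hedged sketch of that idea follows; handleMatch is a hypothetical, application-specific handler, and it would need its own synchronization if it touched shared state.

public void processTaskOutput(final InputStream output) {
    // Hand the stream to a worker thread so the Driver can accept the next
    // result immediately instead of blocking on network I/O here.
    new Thread(new Runnable() {
        public void run() {
            try {
                BufferedReader reader = new BufferedReader(new InputStreamReader(output));
                String line;
                while ((line = reader.readLine()) != null) {
                    handleMatch(line);   // hypothetical application-specific handler
                }
                reader.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
}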
Summary
• Use StreamTasklet and StreamJob when the amount of input or output data is large, and a tasklet can process the data stream as it arrives.
• The service method of StreamTasklet reads its input from an InputStream and writes its results to an OutputStream.
• When writing a StreamJob class, create an input for a task by calling the createTaskInput method to obtain an OutputStream, then writing to and closing that stream.
• The processTaskOutput method of StreamJob is given an InputStream to read a task's results.
• It is the user's responsibility to close all streams.
Data Sets
Although the parallel string search program of the previous section will speed up searching for large files, it misses an opportunity in the case where the same file is searched, over time, for many different targets. As an example of such a situation, consider a web search company that keeps a list of all the questions all users have ever asked so that it can display related questions when a user asks a new one. Although the previous search program will work correctly, it will redistribute the list of previously asked questions to Engines each time a search is done.
A more efficient solution would cache portions of the file to be searched on Engines to avoid repeatedly transmitting it. This is just what LiveCluster's data set feature does. A data set is a persistent collection of task inputs (either TaskInput objects or streams) that can be used across jobs. The first time it is used, the data set distributes its inputs to Engines in the usual way. But when the data set is used subsequently, it attempts to give a task to an Engine that already has the input for that task stored locally. If all such Engines are unavailable, the task is given to some other available Engine, and the input is retransmitted. Data sets thus provide an important data movement optimization without interfering with LiveCluster's ability to work with dynamically changing resources.
In this section, we will adapt the program of the previous section to use a data set. We will need to use two classes: DataSetJob and TaskDataSet. There is no new type of tasklet that we need to consider, as data sets work with existing tasklets.
Using a TaskDataSet
Since a TaskDataSet is a persistent object, it must have a name for future reference. One can choose any name:
TaskDataSet dataSet = new TaskDataSet("search");
or can call the no-argument constructor, which will assign a name that one can access with the getName method.
One can now use the methods addTaskInput (for TaskInput objects) or createTaskInput (for streams) to add inputs to the data set. When finished, call the doneSubmitting method:
dataSet.addTaskInput(t1);
dataSet.addTaskInput(t2);
dataSet.addTaskInput(t3);
dataSet.doneSubmitting();
The data set and its inputs are now stored on the Server and can be used to provide inputs to a DataSetJob, as will be illustrated in the next section.
The data set outlives the program that created it. A data set can be retrieved in later runs by using the static getDataSet method:
TaskDataSet dataSet = TaskDataSet.getDataSet("search");
It can be removed with the destroy method:
dataSet.destroy();
The Application
To convert the string search application to use a data set, one must provide a Job class that extends DataSetJob. To do this, one uses a DataSetJob much like an ordinary Job, except that instead of providing a createTaskInputs method, one provides a data set via the setTaskDataSet method (see FIG. 29). The constructor accepts a
TaskDataSet and sets it into the Job. The processTaskOutput method of this class is the same as that previously discussed. The SearchTasklet class is also the same.
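A skeletal version of such a job might look like the following; the class name and exception handling are assumptions, while extending DataSetJob, calling setTaskDataSet from the constructor, and the stream-based processTaskOutput follow the description above.

import java.io.*;

public class DataSetSearchJob extends DataSetJob {

    public DataSetSearchJob(TaskDataSet dataSet) {
        // No createTaskInputs here: the task inputs come from the data set.
        setTaskDataSet(dataSet);
    }

    public void processTaskOutput(InputStream output) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(output));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);   // lines that matched the target
        }
        reader.close();
    }
}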
The main method (see FIG. 30) of the Test program creates a TaskDataSet and uses it to run several jobs. The method begins by reading a properties file that contains a comma-separated list of target strings, as well as the data file name and number of lines per
task. It then creates a data set via the helper method createDataSetFromFile. Lastly, it runs several jobs using the data set. createDataSetFromFile (see FIG. 31) places the inputs into a TaskDataSet.
Let's review the data movement that occurs when this program is run. When the first job is executed, Engines will pull both the tasklet and a task input stream from the Driver machine. Each Engine will cache its stream data on its local disk. When the second and subsequent jobs are executed, the Server will attempt to assign an Engine the same task input that it used for the first job. Then the Engine will only need to download the tasklet, since the Engine has a local copy of the task input. Earlier, we suggested that if an object does not vary across tasks (but does vary from job to job), it should be placed within the tasklet, rather than inside a task input. Here, we see that idea's biggest payoff. By keeping the task inputs constant, we can amortize their network transmission time over many jobs. Only the relatively small amount of data that varies from job to job (the target string, or in the earlier case, the pricing environment) needs to be transmitted for each new job.
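Under the assumption, used in the earlier sketches, that createTaskInput on a TaskDataSet returns an OutputStream just as it does on StreamJob, the helper that builds the data set might be sketched as follows (java.io imports as before):

static TaskDataSet createDataSetFromFile(String file, int linesPerTask) throws IOException {
    TaskDataSet dataSet = new TaskDataSet("search");
    BufferedReader reader = new BufferedReader(new FileReader(file));
    String line = reader.readLine();
    while (line != null) {
        // One persistent task input per group of at most linesPerTask lines.
        PrintWriter writer = new PrintWriter(dataSet.createTaskInput());
        int count = 0;
        while (line != null && count < linesPerTask) {
            writer.println(line);
            count++;
            line = reader.readLine();
        }
        writer.close();
    }
    reader.close();
    dataSet.doneSubmitting();   // the inputs are now stored on the Server
    return dataSet;
}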
Summary
• Data sets can improve the performance of applications that reuse the same task inputs for many jobs, by reducing the amount of data transmitted over the network.
• A data set is a distributed cache: each Engine has a local copy of a task input. The Server attempts to re-assign a task input to an Engine that had it previously.
• The TaskDataSet class allows the programmer to create, retrieve and destroy data sets.
• The DataSetJob class extends Job to use a TaskDataSet.
• Data that varies from job to job should be placed in the tasklet.
LiveCluster Administration Tools
The LiveCluster Server provides the LiveCluster Administration Tool, a set of web-based tools that allow the administrator to monitor and manage the Server, its cluster of Engines, and the associated job space. The LiveCluster Administration Tool is accessed from a web-based interface, usable by authorized users from any compatible browser, anywhere on
the network. Administrative user accounts provide password-protected, role-based authorization.
With the screens in the Administration Tool, one can:
• View and modify Server and Engine configuration;
• Create administrative user accounts and edit user profiles;
• Subscribe to get e-mail notification of events;
• Monitor Engine activity and kill Engines;
• Monitor Job and Task execution and cancel Jobs;
• Install Engines;
• Edit Engine Tracking properties and change values;
• Configure Broker discrimination;
• View the LiveCluster API, release notes, and other developer documents;
• Download the files necessary to integrate application code and run Drivers;
• View and extract log information;
• View diagnostic reports; and,
• Run test Jobs.
User Accounts and Administrative Access
All of the administrative screens are password-protected. There is a single "super-user" account, the site administrator, whose hard-coded user name is admin. The site administrator creates new user accounts from the New User screen. Access control is organized according to the five functional areas that appear in the navigation bar. The site administrator is the only user with access to the configuration screens (under Configure), except that each user has access to a single Edit Profile screen to edit his or her own profile. For every other user, the site administrator grants or denies access separately to each of the four remaining areas (Manage, View, Install, and Develop) from the View Users screen. The Server installation script creates a single user account for the site administrator, with both user name and password admin. The site administrator should log in and change the password immediately after the Server is installed.
Navigating the Administration Tool
The administration tools are accessed through the navigation bar located on the left side of each screen (see FIG. 32). Click one of the links in the navigation bar to display options for that link. Click a link to navigate to the corresponding area of the site. (Note that the navigation bar displays only those areas that are accessible from the current account. If one is not using an administrative account with all privileges enabled, some options will not be visible.) At the bottom of the screen is the shortcut bar, containing the Logout tool, and shortcut links to other areas, such as Documentation and Product Information.
The Administration Tool is divided into five sections. Each section contains screens and tools that are explained in more detail in the next five chapters. The following tools are available in each of the sections.
The Configure Section
The Configure section contains tools to manage user accounts, profiles, Engines,
Brokers, and Directors.
The Manage Section
The Manage section enables one to administer Jobs or Tasks that have been submitted, administer data sets or batch jobs, submit a test Job, or retrieve log files.
The View Section
The View section contains tools to list and examine Brokers, Engines, Jobs, and data sets. It's different from the Manage section in that its tools focus on viewing information instead of modifying it, changing configuration, or killing Jobs. One can examine historical values to gauge performance, or troubleshoot one's configuration by watching the interaction between
Brokers and Engines interactively.
In general, Lists are similar to the listed displays found in the Manage section, which can be refreshed on demand and display more information. Views are graphs implemented in a Java applet that updates in real-time.
The Install Section
The Install section enables one to install Engines on one's Windows machine, or download the executable files and scripts needed to build installations distributable to Unix machines.
The Develop Section
The Develop section includes downloads and information such as Driver code, API Documentation, Documentation guides, Release Notes, and the Debug Engine.
The Configure Section
The Configure section contains tools to manage user accounts, profiles, Engines, Brokers, and Directors. To use any of the following tools, click Configure in the Navigation bar to display the list of tools. Then click a tool name to continue.
View/Edit Users
As an administrator, one can change information for existing user accounts. For example, one could change the name of an account, change an account's level of access, or delete an account entirely.
When one clicks View/Edit Users, one is presented with a list of defined users, as shown in FIG. 33. To change an existing user account, click the name listed in the Full Name column. The display shown in FIG. 34 will open. First, one must enter one's admin password in the top box to make any changes. Then, one can change any of the information for the user displayed. There is also a Subject and Message section; if one would like to notify the user that changes have been made to his/her account, enter an e-mail message in these fields. To make the change, click Submit. One can also delete the account completely by clicking Delete. If one would like to create a new user, one must use the New User Signup tool.
New User Signup
To add a new user, click New User Signup. One will be presented with a screen similar to FIG. 34. Enter one's admin password and the information about the user, and click Submit. (Note that the Subject and Message fields for e-mail notification are already populated with a default message. The placeholders for username and password will be replaced with the actual username and password for the user when the message is sent.)
Edit Profile
The Edit Profile tool enables you to make changes to the account with which you are currently logged in. It also enables the admin to configure the Server to email notifications of account changes to users. For accounts other than admin, one must click Edit Profile, enter one's password in the top box, and make any changes one wishes to make to one's profile. This includes one's first name, last name and email address. One can also change one's
password by entering a new password twice. When one has made the changes, one clicks the Submit button. If one is logged in as admin, one can also configure the Server to generate email notifications automatically whenever user accounts are added or modified. To activate this feature, one must provide an email address and the location of the SMTP server. The LiveCluster Server will generate mail from the administrator to the affected users. To disable the email feature, one simply clears the SMTP entry.
Engine Configuration
The Engine Configuration tool (see FIG. 35) enables one to specify properties for each of the Engine types that one deploys. To configure an Engine, one must first choose the Engine type from the File list. Then, enter new values for properties in the list, and click Submit next to each property to enter these values. Click Save to commit all of the values to the Engine configuration. One can also click Revert at any time before clicking Save to return to the configuration saved in the original file. For more information on any of the properties in the Engine Configuration tool, one can click Help.
Engine Properties
This tool (see FIG. 36) displays properties associated with each Engine that has logged in to this Server. A list of Engine IDs is displayed, along with the corresponding Machine Names and properties that are currently assigned to that Engine. These properties are used for discrimination, either in the Broker or the Driver. Properties can be set with this tool, or when an Engine is installed with the 1-Click Install with Tracking link and a tracking profile is created, which is described below, in the Engine Tracking Editor tool.
To change the properties assigned to an Engine, one must click the displayed Engine ID in the list. An edit screen (see FIG. 37) is displayed. If there are properties already assigned, one can change their value(s) in an editable box and click Submit, or click Remove to remove a property completely. To add a new property and value, one may enter them in the editable boxes at the bottom of the list and click Add. Once one has finished changing the properties, one may click Save. The properties will be sent to the Server, and the Engine will restart. (Note that if Broker discrimination is configured, it is possible to change or add a property that will prevent an Engine from logging back on again.)
Engine Tracking Editor
Engines can be installed with optional tracking parameters, which can be used for discrimination. When Engines are installed with the 1-Click Install with Tracking link, one is prompted for values for these parameters. This tool enables one to define what parameters are given to Engines installed in this manner. By default, the parameters include MachineName, Group, Location, and Description. One can add more parameters by entering the parameter name in the Property column, entering a description of the property type in the Description column, and clicking the Add button. One can also remove parameters by clicking the Remove button next to the parameter one wants to remove.
Broker Configuration
The Broker's attributes can be configured by clicking the Broker Configuration tool.
This displays a hierarchical expanding/collapsing list (see FIG. 38) of all of the attributes of the Broker. One may click on the + and - controls in the left pane to show or hide attributes, or click Expand All or Collapse All to expand or collapse the entire list.
When one clicks on an attribute, its values are shown in the right pane. One can change an attribute in an editable box by entering a new value and clicking Submit. To find more information about each additional attribute, one may click Help in the lower right corner of the display. A help window will open with complete details about the attribute.
Broker Discrimination
One can configure Brokers to do discrimination on Engines and Drivers with the Broker Discrimination tool (see FIG. 39). First, one must select the Broker one wants to configure from the list at the top of the page. If one is only running a single Broker, there will only be one entry in this list. One can configure discriminators for both Driver properties and Engine properties. For Drivers, a discriminator is set in the Driver properties, and it prevents Tasks from a defined group of Drivers from being taken by this Broker. For Engines and Drivers, discriminators prevent login sessions from being established with a Broker, which changes routing between Brokers and Engines or Drivers.
Each discriminator includes a property, a comparator, and a value. The property is a property defined in the Engine or Driver, such as a group, OS or CPU type. The value can be either a number (double) or a string. The comparator compares the property and value. If the comparison is true, the discriminator is matched, and the Engine or Driver can log in to a Broker. If the comparison
is false, the Driver can't log in to the Broker, and must use another Broker. In the case of an Engine, it won't be sent Tasks from that Broker. Note that both property names and values are case-sensitive.
One further option for each discriminator is the Negate other Brokers box. When this is selected, an Engine or Driver will be considered only for this Broker, and no others. For example, if one has a property named state and sets a discriminator for when state equals NY and selects Negate other Brokers, an Engine with state set to NY will go to this Broker, because other Brokers won't accept its login.
Once one has entered a property, comparator, and value, click Add. One can add multiple discriminators to a Broker by defining another discriminator and clicking Add again. Click Save to save all added discriminators to the Broker. When one saves discriminators, all Engines currently logged in will log out and attempt to log back in. This enables one to set a discriminator to limit a number of Engines and immediately force them to log off.
By default, if an Engine or Driver does not contain the property specified in the discriminator, the discriminator is not evaluated and is considered false. However, one can select Ignore Missing Properties for both the Driver and Engine. This makes an Engine or Driver missing the property specified in a discriminator ignore the discriminator and continue. For example, if one sets a discriminator for state = Arizona, and an Engine doesn't have a state property, normally the Broker won't give the Engine Jobs. But if one selects Ignore Missing Properties, the Engine without the property will still get Jobs from the Broker.
Director Configuration
To configure the Director, an interface similar to the Broker Configuration tool described above is used. When one clicks Director Configuration, a hierarchy of attributes is shown, and one can click an attribute to change it. As with the Broker, the Director attributes have a Help link available.
Client Diagnostics
If one is troubleshooting issues with one's LiveCluster installation, one can generate and display client statistics using the Client Diagnostics tool (see FIG. 40). This generates tables or charts of information based on client messaging times.
To use client diagnostics, one must first select Client Diagnostics and then click the edit diagnostic options link. Set Enabled to true, click Submit, then click Save. This will enable statistics to be logged as the system runs. (Note that this can generate large amounts of diagnostic data, and it is recommended that one enable this feature only when debugging.) Click diagnostic statistics to return to the previous screen. Next, one must specify a time range for the analysis. Select a beginning and ending time range, or click Use all available times to analyze all information.
After selecting a time range, one can select what data is to be shown, and how it will be shown, either in a table or a chart. For the tables, one must select one or more statistics and one or more clients. For client charts, select only one client and one or more statistics; statistic charts require one to select one statistic and one or more clients. The table or chart will be displayed in a new window.
Event Subscription
If one has enabled email notifications by entering an SMTP address in the admin profile, one can define a list of email addresses, and configure what event notifications are sent to each address with the Event Subscription tool (see FIG. 41). To enter a subscriber, click Add a Subscriber. To change events for a subscriber, click their name in the list. For each subscriber, enter a single email address in the Email box. This must be a full email address, in the form name@your.address.com. One can enter a string in the Filter box to limit notifications to events which contain the string in the event. For example, one could limit notifications to those about an Engine named Alpha by entering Alpha in the Filter box. When the box is left clear (the default), all events are considered for notification.
After specifying an email address and an optional filter, select which events one would like to monitor from the list below. Once one is done, click Submit. When each event occurs, the Server will send a short notification message to the specified email address. One can later edit a subscriber's events, filter, or email address by clicking the subscriber's name in the list presented when one selects the Event Subscription tool. One can also remove a name completely by clicking the Remove button next to it.
The Manage Section
The Manage section enables one to administer Jobs or Tasks that have been submitted, administer data sets or batch jobs, submit a test Job, or retrieve log files. To use any of the
following tools, click Manage in the Navigation bar to display a list of tools at the left. Then click a tool to continue.
Broker Administration
One can view Engines logged on to a Broker, or change the ratio of Engines to Drivers handled by a Broker, by using the Broker Administration tool (see FIG. 42). Each Broker logged on to the Director is listed, along with the number of busy and idle Engines logged onto it. Click on the Broker name in the Hostname column to display a list of the Engines currently logged in. To see the graphs depicting Broker statistics, click the Create button in the Monitor column. One can specify the number of jobs to be displayed in the Broker Monitor by changing the number in the box to the left of the Create button. The Engine Weight and
Driver Weight boxes are used to set the ratio of Engines to Drivers that are sent to the Broker from the Director. By default, Engine Weight and Driver Weight are both 1, so the Broker will handle Engines and Drivers equally. This can also be changed so a Broker favors either Engines or Drivers. For example, changing Engine Weight to 10 and leaving Driver Weight at 1 will make the Broker handle ten times more Engines than Drivers. To update the list and display the most current information, click the Refresh button. One can also automatically update the list by selecting a value from the list next to the Refresh button.
Engine Administration
This tool (see FIG. 43) enables one to view and control any Engines currently controlled by one's Server. To update the list and display the most current information, click the Refresh button. One can also automatically update the list by selecting a value from the list next to the Refresh button.
Engines are displayed by username, with 20 Engines per page by default. One can select a greater number of results per page, or display all of the Engines, by clicking a number or All next to Results Per Page on the top right of the screen. One can also find a specific Engine by entering the username in the box and clicking Search For Engines. The Status column displays whether an Engine is available for work. If "Available" is displayed, the Engine is logged on and is ready for work. Engines marked as "Logged off" are no longer available. "Busy" Engines are currently working on a Task. Engines shown as "Logging in" are in the login
process, and are possibly transferring files. One can also click the text in the Status column to open a window containing current server logs for that Engine.
To quickly find out more information about an Engine, one may move the mouse over the Engine username without clicking it. A popup window containing statistics will be shown (see FIG. 44). One can also click on an Engine username to display detailed logging on that Engine. If the Engine is currently processing a Job, it is displayed in the Job-Task column. Hover the mouse over the entry to display a popup with brief statistics on the Job currently being processed, or click on the entry for a more detailed log. Current Jobs also have their owner displayed in the Owner column.
Job Administration
One can view and administer Jobs posted to a Broker in the Job Administration section (see FIG. 45). Here, one is presented with a list of running, completed, and cancelled Jobs on the Broker. To get the most up-to-date information, click the Refresh button. One can also automatically refresh the page by selecting an interval from the list next to the Refresh button.
While a Job is running, one can change its priority by selecting a new value from the list in the Priority column. Possible values range from 10, the highest, to 0, the lowest. One can click Remove Finished Jobs to display only pending Jobs, vary the number of results per page by clicking on a number, or find a specific Job by searching on its name, similar to the Engine Administration tool.
Jobs are shown in rows with UserName, JobName, Submit Time, Tasks Completed, and Status. To display information on a Job, point to the Job Name and a popup window containing statistics on the Job appears. For more information, click the Job Name and a graph will be displayed in a new window. One can also click on a Job's status to view its Broker and Director log files. To kill Jobs, select one or more Jobs by clicking the check box in the Kill column, or click Select All to kill all Jobs, then click Submit.
Data Set Administration
Jobs can utilize a DataSet, which is a reusable set of TaskInputs. Repeated Jobs will result in caching TaskInputs on Engines, resulting in less transfer overhead. One can click Data Set Administration to view all of the active Data Sets. One can also select Data Sets and
click Submit to remove them; however, one will also need to kill the related Jobs. DataSets are usually created and destroyed with the Java API.
Batch Administration
Batch Jobs are items that have been registered with a Server, either by LiveDeveloper, by copying XML into a directory on the Server, or by a Driver. Unlike a Job, they don't immediately enter the queue for processing. Instead, they contain commands, and instructions that specify at what time the commands will execute. These events can remain on the Server and run more than once. Typically, a Batch Job is used to run a Job at a specific time or date, but it can be used to run any command. The Batch Administration tool (see FIG. 46) displays all Batch Jobs on the Server, and enables one to suspend, resume, or remove them. Each Batch Job is denoted with a name. A Type and Time specify when the Batch Job will start. If a Batch Job is Absolute, it will enter the queue at a given time. A Relative Batch Job is defined with a recurring time or a time relative to the current time, such as a Batch Job that runs every hour, or one defined in the cron format. Immediate jobs are already in the queue.
To suspend a Batch Job or resume a suspended Batch Job, select it in the Suspend Resume column, and click the Submit button below that column. Batch Jobs can be killed by selecting them in the Remove column and clicking the Submit button below that column, or clicking Select All and then Submit. Killing a Batch Job does not kill any currently running Jobs that were created by that Batch Job. To kill these, one must use the Job
Administration tool. Likewise, if one kills a Job from the Job Administration tool, one won't kill the Batch Job. For example, if there exists a Batch Job that runs a Job every hour, it is after 4:00, and one kills the Job that appears in the Job Administration tool, another Job will appear at 5:00. One must kill both the Job and the Batch Job to stop the Jobs completely. Batch Jobs that are submitted by a Driver will only stay resident until the Server is restarted. To create a Batch Job that will always remain resident, one can create a Batch Job file. To do this, click new batch file to open the editor. One can also click the name of a Batch Job that was already created on the Server. One can then enter the XML for the Batch Job, specify a filename, and click Save to save the file, Submit to enter the file, or Revert to abandon the changes.
Test Job
To test a configuration, one can submit a test Job. This tool submits a Job using the standard Linpack benchmark, using an internal Driver. One can set the following parameters for a Linpack test:
Job Name: Name of the Job in the Job Admin.
User Name: Name of the User in the Job Admin.
Tasks: Number of Tasks in the Job.
Priority: Job execution priority, with 10 being the highest, and 0 the lowest.
Duration: Average duration for Tasks in seconds.
Std Dev: Standard deviation of Task duration in percent.
Input Data: Size of Task input data in kilobytes.
Output Data: Size of Task output data in kilobytes.
Compression: Compress input and output data.
Parallel Collection: Start collecting results before all Tasks are submitted.
After one has set the parameters, one clicks Submit to submit the Job. Once the Job is submitted, the Job Administration screen from the Manage section will be displayed. One can then view, update, or kill the Job.
Log Retrieval
One can display current and historical log information for the Server with the Log Retrieval tool. The interface, displayed below, enables one to select a type of log file, a date range, and how one would like to display the log file. To view the current log file, click Current Server Log. The current log file is displayed (see FIG. 47), and any new log activity will be continuously added. One can use this feature to watch an ongoing Job's progress, or troubleshoot errors. At any time while one is viewing the current log, one can click Snapshot to freeze the current results and open them in a new window. Also, one can click Clear to clear the current results. Click Past Logs to return to the original display.
To view a past log file, first choose what should be included in the file. Select one or more choices: HT Access Log, HT Error Log, Broker Log, Director Log, Broker.xml, Director.xml, Config.xml, and Engine Updates List. One can also click Select All to select all of the information. Next, select a date and time that the logs will end, and select the number of hours back from the end time that will be displayed. After one has chosen one's data and a range, click one of the Submit buttons to display the data. One can choose to display data in the window below, in a new window, or in a zip file. One can also view any zip files one made in the past.
The View Section
The View section contains tools to list and examine Brokers, Engines, Jobs, and data sets. It's different from the Manage section in that its tools focus on viewing information instead of modifying it, changing configuration, or killing Jobs. One can examine historical values to gauge performance, or troubleshoot the configuration by watching the interaction between Brokers and Engines interactively. In general, Lists are similar to the listed displays found in the Manage section, which can be refreshed on demand and display more information. Views are graphs implemented in a Java applet that updates in real-time. The following tools are available:
Broker List
The Broker List tool (see FIG. 48) displays all Brokers currently logged in. It also gives a brief overview of the number of Engines handled by each Broker. To update the list, click the Refresh button. One can also automatically update the display by selecting an interval from the list next to the Refresh button. Click a Broker's hostname to display its list of Engines. One can also click Create to show the Broker Monitor graph, described below.
Broker Monitor
The Broker Monitor tool opens an interactive graph display (see FIG. 49) showing current statistics on a Broker. The top graph is the Engine Monitor, a view of the Engines reporting to the Broker, and their statistics over time. The total number of Engines is displayed in green. The employed Engines (Engines currently completing work for the Broker) are displayed in blue, and Engines waiting for work are displayed in red.
The middle graph is the Job View, which displays what Jobs have been submitted, and the number of Tasks completed in each Job. Running Jobs are displayed as blue bars, completed Jobs are grey, and cancelled Jobs are purple. The bottom graph, the Job Monitor, shows the current Job's statistics. Four lines are shown, each depicting Tasks in the Job. They are submitted (green), waiting (red), running (blue), and completed (grey) Tasks. If a newer Job has been submitted since one opened the Broker Monitor, click load latest job to display the newest Job.
Engine List
The Engine List provides the same information as the Engine Administration tool in the Manage section, such as Engines and what Jobs they are running. The only difference is that the list only allows one to view the Engine list, while the Engine Administration tool also has controls that enable one to kill Jobs.
Engine View
The Engine View tool opens an interactive graph displaying Engines on the current Broker, similar to the Engine Monitor section of the Broker Monitor graph, described above.
Job List
The Job List (see FIG. 50) provides the same information as the Job Administration tool in the Manage section. The only difference is that the list only allows one to view Jobs, while the Job Administration tool also has controls that enable one to kill Jobs and change their priority.
Data Set List
The Data Set List (see FIG. 51) provides the same information as the Data Set Administration tool in the Manage section. The only difference is that the list only allows one to view Data Sets, while the Data Set Administration tool also has controls that enable one to make Data Sets unavailable.
Cluster Capacity
The Cluster Capacity tool (see FIG. 52) displays the capabilities of Engines reporting to a Server. This includes the number of CPUs, last login, CPU speed, free disk space, free memory, and total memory. All Engines, including those not currently online, are displayed.
One may click Online Engines Only to view only those Engines currently reporting to the
Server, or click Offline Engines Only to view Engines that are not currently available.
The Install Section
The Install section contains tools used to install Engines on one or more machines.
Engine Installation
The install screen (see FIG. 53) enables one to install Engines on a Windows machine, or download the executable files and scripts needed to build installations distributable to Unix machines.
Remote Engine Script
The Remote Engine Script is a Perl script written for Unix that enables one to install or start several DataSynapse Engines on remote nodes from a central Server. To use this script, download the file at the Remote Engine Script link by holding Shift and clicking the link, or right-click the link and select Save File As....
The usage of the script is as follows:
dslremoteadmin.pl [ACTION] [-f filename | -m MACHINE_NAME -p PATH_TO_DS] -s server [-n num_engines] [-i ui_idle_wait] [-D dist_name] [-c min_cpu_busy] [-C max_cpu_busy]
ACTION can be either install, configure, start, or stop: install installs the DSEngine tree on the remote node and configures the Engine with parameters specified on the command line as listed above; configure configures the Engine with parameters specified on the command line as listed above; start starts the remote Engine; and stop stops the remote Engine.
One can specify resources either from a file or singly on the command line using the -m machine and -p path options. The format of the resource file is: machine_name /path/to/install/dir.
Driver Downloads
The Driver is available in Java and C++ and source code is available for developers to download from this page. One can also obtain the LiveDeveloper suite from this link.
LiveCluster API
One can view the LiveCluster API by selecting this tool. API documents are available in HTML as generated by JavaDoc for Java and by Doxygen for C++. Also, documentation is available for the LiveCluster XML API, in HTML format.
Documentation
This screen contains links to documentation about LiveCluster. Guides are included with the software distribution, in Adobe Acrobat format. To view a guide, click its link to open it. Note: one must have Adobe Acrobat installed to view the guides in PDF format.
Release Notes
This link opens a new browser containing notes pertaining to the current and previous releases.
Debug Engine Installation
A version of the Engine is available to provide debugging information for use with the Java Platform Debugger Architecture, or JPDA. This Engine does not contain the full functionality of the regular Engine, but does provide information for remote debugging via JPDA. One may select this tool to download an archive containing the Debug Engine.
Basic Scheduling
The Broker is responsible for managing the job space: scheduling Jobs and Tasks on Engines and supervising interactions with Engines and Drivers.
Overview
Most of the time, the scheduling of Jobs and Tasks on Engines is completely transparent and requires no administration; the "Darwinian" scheduling scheme provides dynamic load balancing and adapts automatically as Engines come and go. However, one needs a basic understanding of how the Broker manages the job space in order to understand the configuration parameters, to tune performance, or to diagnose and resolve problems. Recall that Drivers submit Jobs to the Broker. Each Job consists of one or more Tasks, which may be performed in any order. Conceptually, the Broker maintains a first-in/first-out (FIFO) queue for Tasks within each Job. When the Driver submits the first Task within a Job, the Broker creates a waiting Task list for that Job, then adds this waiting list to the appropriate Job list, according to the Job's priority (see "Job-Based Prioritization," below). Additional Tasks within the Job are appended to the end of the waiting list as they arrive.
Whenever an Engine reports to the Broker to request work, the Broker first determines which Job should receive service, then assigns the Task at the front of that Job's waiting list to the Engine. (The Engine may not be eligible to take the next Task, however; this is discussed in more detail below.) Once assigned, the Task moves from the waiting list to the pending list; the pending list contains all the Tasks that have been assigned to Engines. When an
Engine completes a Task, the Broker searches both the pending and waiting lists. If it finds the Task on either list, it removes it from both, and adds it to the completed list. (The Broker may also restart any Engines that are currently processing redundant instances of the same Task. If the Task is not on either list, it was a redundant Task that completed before the Engine restarted, and the Broker ignores it.)
Tasks migrate from the pending list back to the waiting list when the corresponding Engine is interrupted or drops out. In this case, however, the Broker appends the Task to the front, rather than the back, of the queue, so that Tasks that have been interrupted are rescheduled at a higher priority than other waiting Tasks within the same Job. Also, the Broker can be configured to append redundant instances of Tasks on the pending list to the waiting list; "Redundant Scheduling," below, provides a detailed discussion of this topic.
Discriminators: Task-Specific Engine Eligibility Restrictions
As indicated above, not every Task is eligible to run on every Engine. The Discriminator API supports task discrimination based on Engine-specific attributes. In effect, the application code attaches IDiscriminator objects to Tasks at runtime to restrict the class of Engines that are eligible to process them. This introduces a slight modification in the procedure described above: when an Engine is ineligible to take a Task, the Broker proceeds to the next Task, and so on, assigning the Engine the first Task it is eligible to take. Note that Discriminators establish hard limits; if the Engine doesn't meet the eligibility requirements for any of the Tasks, the Broker will send the Engine away empty-handed, even though Tasks may be waiting.
The Broker tracks a number of predefined properties, such as available memory or disk space, performance rating (megaflops), operating system, and so forth, that the Discriminator can use to define eligibility. The site administrator can also establish
additional attributes to be defined as part of the Engine installation, or attach arbitrary properties to Engines "on the fly" from the Broker.
Job-Based Prioritization
Every LiveCluster Job has an associated priority. Priorities can take any integer value between zero and ten, so that there are eleven priority levels in all. 0 is the lowest priority, 10 is the highest, and 5 is the default. The LiveCluster API provides methods that allow the application code to attach priorities to Jobs at runtime, and priorities can be changed while a Job is running from the LiveCluster Administration Tool.
When the Driver submits a Job at a priority level, the Job will wait in that priority queue until distributed by the Broker. Two boolean configuration parameters determine the basic operating mode: Serial Priority Execution and Serial Job Execution. When Serial Priority Execution is true, the Broker services the priority queues sequentially. That is, the Broker distributes higher-priority Jobs, then moves to lower-priority Jobs when the higher-priority Jobs are completed. When Serial Priority Execution is false, the Broker provides interleaved service, so that lower-priority queues with Jobs will receive some level of service even when higher-priority Jobs are competing for resources. Serial Job Execution has similar significance for Jobs of the same priority: when Serial Job Execution is true, Jobs of the same priority receive strict sequential service; the first Job to arrive is completed before the next begins. When Serial Job Execution is false, the Broker provides round-robin service to Jobs of the same priority, regardless of arrival time.
The Broker allocates resources among the competing priority queues based on the Priority Weights setting. Eleven integer weights determine the relative service rate for each of the eleven priority queues. For example, if the weight for priority 1 is 2, and the weight for priority 4 is 10, the Broker will distribute five priority-4 Tasks for every priority-1 Task whenever Jobs of these two priorities compete. (Priorities with weights less than or equal to zero receive no service when higher-priority Tasks are waiting.) The default setting for both Serial Execution flags is false, and the default Priority Weights scale linearly, ranging from 1 for priority 0 to 11 for priority 10.
It is generally best to leave the flags at their default settings, so that low-priority Tasks don't "starve," and Jobs can't monopolize resources based on time of arrival. Robust solutions to most resource-contention problems require no more than two or three priority levels, but they do require some planning. In particular, the client application code needs to assign the appropriate priorities to Jobs at runtime, and the priority weights must be tuned to meet minimum service requirements under peak load conditions.
Polling Rates for Engines and Drivers
In addition to the serial execution flags and the priority weights, there are four remaining parameters under Job Space that merit some discussion. These four parameters govern the polling frequencies for Engines and Drivers and the rate at which Drivers upload Tasks to the Server; occasionally, they may require some tuning. Engines constantly poll the Broker when they are available to take work. Likewise, Drivers poll the Broker for results after they submit Jobs. Within each such transaction, the Broker provides the polling entity with a target latency; that is, it tells the Engine or Driver approximately how long to wait before initiating the next transaction.
Total Engine Poll Frequency sets an approximate upper limit on the aggregate rate at which the available Engines poll the Broker for work. The Broker computes a target latency for the individual Engines, based on the number of currently available Engines, so that the total number of Engine polling requests per second is approximately equal to the Total Engine Poll Frequency. The integer parameter specifies the target rate in polls per second, with a default setting of 30.
The Result Found / Not Found Wait Time parameters limit the frequency with which Drivers poll the Server for Job results (TaskOutputs). Result Found Wait Time determines approximately how long a Driver waits, after it retrieves some results, before polling the Broker for more, and Result Not Found Wait Time determines approximately how long it waits after polling unsuccessfully. Each parameter specifies a target value in milliseconds, and the default settings are 0 and 1000, respectively. That is, the default settings introduce no delay after transactions with results, and a one-second delay after transactions without results.
The Task Submission Wait Time limits the rate at which Drivers submit TaskInputs to the Server. Drivers buffer the TaskInput data, and this parameter determines the approximate waiting time between buffers. The integer value specifies the target latency in milliseconds, and the default setting is 0.
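For illustration only, the Driver-side behavior these wait times control can be pictured as the loop below; the poll is simulated, and none of this corresponds to the actual Driver internals.

    // Hypothetical sketch of a result-polling loop governed by the Result Found /
    // Result Not Found Wait Time parameters (values shown are the defaults).
    final class PollingSketch {
        static final long RESULT_FOUND_WAIT_MS = 0;        // no delay after a poll with results
        static final long RESULT_NOT_FOUND_WAIT_MS = 1000; // one-second delay after an empty poll

        public static void main(String[] args) throws InterruptedException {
            int outstandingTasks = 5;                       // pretend the Job has five Tasks
            java.util.Random rng = new java.util.Random();
            while (outstandingTasks > 0) {
                boolean gotResult = rng.nextBoolean();      // stands in for polling the Broker
                if (gotResult) outstandingTasks--;
                Thread.sleep(gotResult ? RESULT_FOUND_WAIT_MS : RESULT_NOT_FOUND_WAIT_MS);
            }
        }
    }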
The default settings are an appropriate starting point for most intranet deployments, and they may ordinarily be left unchanged. However, these latencies provide the primary mechanism for throttling transaction loads on the Server.
The Task Rescheduler
The Task Rescheduler addresses the situation in which a handful of Tasks, running on less-capable processors, might significantly delay or prevent Job completion. The basic idea is to launch redundant instances of long-running Tasks. The Broker accepts the first TaskOutput to return and cancels the remaining instances (by terminating and restarting the associated Engines). However, it's important to prevent "runaway" Tasks from consuming unlimited resources and delaying Job completion indefinitely. Therefore, a configurable parameter, Max Attempts, limits the number of times any given Task will be rescheduled. If a Task fails to complete after the maximum number of retries, the Broker cancels all instances of that Task, removes it from the pending queue, and sends a FatalTaskOutput to the Driver.
Three separately configurable strategies govern rescheduling. The three strategies run in parallel, so that Tasks are rescheduled whenever one or more of the three corresponding criteria are satisfied. However, none of the rescheduling strategies comes into play for any Job until a certain percentage of Tasks within that Job have completed; the Strategy Effective Percent parameter determines this percentage. More precisely, the Driver notifies the Broker when the Job has submitted all its Tasks (from Java or C++, this notification is tied to the return from the createTaskInputs method within the Job class). At that point, the number of Tasks that have been submitted is equal to the total Task count for the Job, and the Broker begins monitoring the number of Tasks that have completed. When the ratio of completed Tasks to the total exceeds the Strategy Effective Percent, the rescheduling strategies begin operating.
The rescheduler scans the pending Task list for each Job at regular intervals, as determined by the Interval Millis parameter. Each Job has an associated taskMaxTime, after which Tasks within that Job will be rescheduled. When the strategies are active (based on the Strategy Effective Percent), the Broker tracks the mean and standard deviation of the (clock) times consumed by each completed Task within the Job. Each of the three strategies uses one or both of these statistics to define a strategy-specific time limit for rescheduling Tasks.
Each time the rescheduler scans the pending list, it checks the elapsed computation time for each pending Task. Initially, rescheduling is driven solely by the taskMaxTime for the Job; after enough Tasks complete, and the strategies are active, the rescheduler also compares the elapsed time for each pending Task against the three strategy-specific limits. If any of the limits is exceeded, it adds a redundant instance of the Task to the waiting list. (The Broker will reset the elapsed time for that Task when it gives the redundant instance to an Engine.)
The Reschedule First flag determines whether the redundant Task instance is placed at the front or the back of the waiting list; that is, if Reschedule First is true, rescheduled Tasks are placed at the front of the queue to be distributed before other Tasks that are waiting. The default setting is false, which results in less aggressive rescheduling. Thus, the algorithm that determines the threshold for elapsed time, after which Tasks are rescheduled, can be summarized as:

    if (job.completedPercent > strategyEffectivePercent) {
        threshold := min(job.taskMaxTime,
                         percentCompletedStrategy.limit,
                         averageStrategy.limit,
                         standardDevStrategy.limit)
    } else
        threshold := job.taskMaxTime
Each of the three strategies computes its corresponding limit as follows:
• The Percent Completed Strategy returns the maximum long value (effectively infinite, so there is no limit) until the number of waiting Tasks, as a fraction of the total number of Tasks, falls below the Remaining Task Percent parameter, after which it returns the mean completion time. In other words, this strategy only comes into play when the Job nears completion (as determined by the Remaining Task Percent setting), after which it begins rescheduling every pending Task at regular intervals, based on the average completion time for Tasks within the Job:

    if (percentRemaining > remainingTaskPercent) {
        percentCompletedStrategy.limit := Long.MAX_VALUE
    } else
        percentCompletedStrategy.limit := mean

The default setting for Remaining Task Percent is 1, which means that this strategy becomes active after the Job is 99% completed.
• The Average Strategy returns the product of the mean completion time and the Average Limit parameter (a double). That is, this strategy reschedules Tasks when their elapsed time exceeds some multiple (as determined by the Average Limit) of the mean completion time:

    averageStrategy.limit := averageLimit * mean

The default setting for Average Limit is 3.0, which means that it reschedules Tasks after they take at least three times as long as average.
• The Standard Dev Strategy returns the mean plus the product of the Standard Dev Limit parameter (a double) and the standard deviation of the completion times. That is, this strategy reschedules Tasks when their elapsed time exceeds the mean by some multiple (as determined by the Standard Dev Limit) of the standard deviation:

    standardDevStrategy.limit := mean + (standardDevLimit * standardDeviation)

The default setting for Standard Dev Limit is 2.0, which means that it reschedules Tasks after they exceed the average by two standard deviations, or in other words, after they've taken longer than about 98% of the completed Tasks. (Note that if Reschedule First is true, then Tasks are guaranteed to either complete or fail within Max Attempts * taskMaxTime.)
Tuning the Rescheduler
Task rescheduling addresses three basic issues:
• It prevents a small number of less capable processors from significantly degrading Job performance, and provides fault tolerance and graceful failure when Engine-specific problems prevent Tasks from completing on individual Engines.
• It prevents "runaway" Tasks from consuming unlimited resources and delaying Job completion indefinitely.
• It provides a fail-safe mechanism to ensure that all Tasks will complete, despite unexpected problems in other systems.
The default settings are reasonable for many environments, but any configuration represents a compromise, and there are some pitfalls to watch out for. In general, aggressive rescheduling is appropriate when there are abundant resources, but with widely differing capabilities. Conversely, to utilize smaller pools of more nearly identical Engines most efficiently, rescheduling should only be configured to occur in exceptional situations. In case this is not possible, it may be necessary to substantially curtail, or even disable, the rescheduling strategies, to prevent repeated rescheduling and, ultimately, cancellation of long-running Tasks. In many cases, it may be possible to reduce the impact of heterogeneous resources by applying discriminators to route long-running Tasks (at least, those that can be identified a priori) to more capable processors. (This is generally a good idea in any case, since it smoothes turnaround performance with no loss of efficiency.)
Another approach that can be effective in the presence of abundant resources is simply to increase the Max Attempts setting, to allow more rescheduling attempts before a Task is cancelled and returns a FatalTaskOutput. Jobs with very few Tasks also work best without rescheduling. For example, with a setting of 40% for Strategy Effective Percent, the strategies would become active for a Job with ten Tasks after only four of those Tasks had completed. Therefore, in cases where Jobs have very few Tasks, Strategy Effective Percent should be increased. (For example, a setting of 90% ensures that at least nine Tasks complete before launching the strategies, and a setting of 95% requires at least nineteen.)
Finally, note that it is seldom a good idea to disable rescheduling altogether, for example by setting Max Attempts to zero. Otherwise, a single incapacitated or compromised Engine can significantly degrade performance or prevent Tasks from completing. Nor should one completely disable the rescheduling strategies without ensuring that every Job is equipped with a reasonable taskMaxTime. Without this backstop, runaway application code can permanently remove Engines from service (that is, until an administrator cancels the offending Job manually from the management area on the Server).
The Task Data Set Manager
TaskDataSet addresses applications in which a sequence of operations is to be performed on a common input dataset, which is distributed across the Engines. A typical example would be a sequence of risk reports on a common portfolio, with each Engine responsible for processing a subset of the total portfolio.
In terms of the LiveCluster API, a TaskDataSet corresponds to a sequence of Jobs, each of which shares the same collection of TaskInputs, but where the Tasklet varies from Job to Job. The principal advantage of the TaskDataSet is that the scheduler makes a "best effort" to assign each TaskInput to the same Engine repeatedly, throughout the session. In other words, whenever possible, Engines are assigned TaskInputs that they have processed previously (as part of earlier Jobs within the session). If the TaskInputs contain data references, such as primary keys in a database table, the application developer can cache the reference data on an Engine and it will be retained.
The Broker minimizes data transfer by caching the TaskInputs on the Engines. The Task Data Set Manager plug-in manages the distributed data. When Cache Type is set to 0, the Engines cache the TaskInputs in memory; when Cache Type is set to 1, the Engines cache the TaskInputs on the local file system. Cache Max and Cache Percent set limits for the size of each Engine's cache. Cache Max determines an absolute limit, in megabytes. Cache Percent establishes a limit as a percentage of the Engine's free memory or disk space (respectively, depending on the setting of Cache Type).
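One plausible reading of these two settings is that the effective cache budget on an Engine is the smaller of the absolute and percentage limits; the arithmetic below only illustrates that reading, and is not the plug-in's actual code.

    // Illustrative arithmetic: effective Engine cache budget under the assumed
    // reading that Cache Max (MB) and Cache Percent (of free space) both apply.
    final class CacheLimitSketch {
        static long effectiveCacheLimitMB(long cacheMaxMB, double cachePercent, long freeMB) {
            long percentLimitMB = (long) (freeMB * cachePercent / 100.0);
            return Math.min(cacheMaxMB, percentLimitMB);
        }

        public static void main(String[] args) {
            // e.g., Cache Max = 256 MB, Cache Percent = 10%, 4096 MB of free disk space
            System.out.println(effectiveCacheLimitMB(256, 10.0, 4096));  // prints 256
        }
    }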
The Data Transfer Plug-In
The Data Transfer plug-in manages the transfer of TaskInput and Tasklet objects from the Broker to the Engines and the transfer of TaskOutput objects from the Broker to the Drivers. By default, direct data transfer is configured, and the data transfer configuration specified in this plug-in is not used. However, if direct data transfer is disabled, these settings are used. Under the plug-in's default configuration, the Broker saves the serialized data to disk. When the Broker assigns a Task to an Engine, the Engine picks up the input data at the location specified by the Base URL. Similarly, when the Broker notifies a polling Driver that output data is available, the Driver retrieves the data from the location specified by the Output URL. Both of these URLs must point to the same directory on the Server, as specified by the Data Directory. This directory is also used to transfer instructions (the Tasklet definitions) to the Engines. Alternatively, the Broker can be configured to hold the data in memory and accomplish the transfer directly, by enclosing the data within messages. Two flags, Store Input to Disk and Store Output to Disk, determine which method is used to transfer input data to Engines and output data to Drivers, respectively. (The default setting is true in each case; setting the corresponding flag to false selects direct transfer from memory.) This default configuration is appropriate for most situations. The incremental performance cost of the round trip to disk and slight additional messaging burden is rarely significant, and saving the serialized Task data to disk reduces memory consumption on the Server. In particular, the direct-transfer mode is feasible only when there is sufficient memory on the Server to accommodate all of the data. Note that in making this determination, it is important to account for peak loads. Running in direct-transfer mode with insufficient memory can result in java.lang.OutOfMemoryError exceptions from the Server process, unpredictable behavior, and severely degraded performance.
The Job Cleaner
The Job Cleaner plug-in is responsible for Job-space housekeeping, such as cleaning up files and state history for Jobs that have been completed or canceled. This plug-in deletes data files associated with Jobs on a regular basis, and cleans the Job Manage and View pages. It uses the Data Transfer plug-in to find the data files. If a Job is finished or cancelled, the files are deleted on the next sweep. The plug-in sweeps the Server at regular intervals, as specified by the integer Attempts Per Day (the default setting of 2 corresponds to a sweep interval of every 12 hours). The number of hours that a Job will remain on the Job Admin page after it is finished or cancelled is specified by the integer Expiration Hours.
The Driver and Engine Managers
The Driver and Engine Managers play analogous roles for Drivers and Engines, respectively. They maintain the server state for the corresponding client/server connections. The Broker maintains a server-side proxy corresponding to each active session; there is one session corresponding to each Driver and Engine that is logged in.
The Driver Service and Employment Office Plug-Ins
The Driver Service plug-in is responsible for the Driver proxies. Max Number of Proxies sets an upper limit on the number of Drivers that can log in concurrently. The default value is 100,000, and it is typically not modified.
The Employment Office plug-in maintains the Engine proxies. In this case, Max Number of Proxies is set by the license, and cannot be increased beyond the limit set by the license. (Although it can be set below the limit imposed by the license.)
The Login Managers
Both the Driver and Engine Managers incorporate Login Managers. The Login Managers maintain the HTTP connections with corresponding clients (Drivers and Engines), and monitor the heartbeats from active connections for timeouts. User-configurable settings under the HTTP Connection Managers include the URL (on the Broker) for the connections, timeout periods for read and write operations, respectively, and the number of times a client will retry a read or write operation that times out before giving up and logging a fatal error. The Server install script configures the URL settings, and ordinarily, they should never be modified thereafter. The read/write timeout parameters are in seconds; their default values are 10 and 60, respectively. (Read operations for large blocks of data are generally accomplished by direct downloads from file, whereas uploads may utilize the connection, so the write timeout may be substantially longer.) The default retry limit is 3. These default settings are generally appropriate for most operating scenarios; they may, however, require some tuning for optimal performance, particularly in the presence of unusually large datasets or suboptimal network conditions.
The Driver and Engine Monitors track heartbeats from each active Driver and Engine, respectively, and end connections to Drivers and Engines which no longer respond. The Checks Per Minute parameters within each plug-in determine the frequency with which the corresponding monitor sweeps its list of active clients for connection timeouts. Within each monitor, the heartbeat plug-in determines the approximate target rate at which the corresponding clients (Drivers or Engines) send heartbeats to the Broker, and sets the timeout period on the Broker as a multiple of that target. That is, the timeout period in milliseconds (which is displayed in the browser as well) is computed as the product of the Max Millis Per Heartbeat and the Timeout Factor. (It may be worth noting that the actual latencies for individual heartbeats vary randomly between the target maximum and 2/3 of this value; this randomization is essential to prevent ringing for large clusters.)
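The timeout arithmetic and the randomized heartbeat latency described above can be summarized in the short sketch below (default values shown); it is illustrative only, not the monitor's actual code.

    // Illustrative heartbeat arithmetic: the timeout is Max Millis Per Heartbeat
    // times the Timeout Factor, and each heartbeat latency is drawn between
    // two-thirds of the target maximum and the maximum itself.
    final class HeartbeatSketch {
        public static void main(String[] args) {
            long maxMillisPerHeartbeat = 30_000;   // default: 30 seconds
            int timeoutFactor = 3;                 // default
            long timeoutMillis = maxMillisPerHeartbeat * timeoutFactor;   // 90 seconds

            long lower = maxMillisPerHeartbeat * 2 / 3;
            long nextHeartbeatIn =
                    lower + (long) (Math.random() * (maxMillisPerHeartbeat - lower));

            System.out.println("timeout (ms): " + timeoutMillis);
            System.out.println("next heartbeat in (ms): " + nextHeartbeatIn);
        }
    }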
The default setting for each maximum heartbeat period is 30,000 (30 seconds) and for each timeout factor, 3, so that the default timeout period for both Drivers and Engines is 90 seconds. By default, the Broker Manager checks for timeouts 10 times per minute, while the Engine Manager sweeps 4 times per minute. (Typically, there are many more Engines than Drivers, and Engine outages have a more immediate impact on application performance.)
Other Manager Components
The Engine File Update Server manages file updates on the Engines, including both the DataSynapse Engine code and configuration itself, and user files that are distributed via the directory replication mechanism.
The Native Job Adapter
The Native Job Adapter provides services to support applications that utilize the C++ or XML APIs. The basic idea is that the Broker maintains a "pseudo Driver" corresponding to each C++ or XML Job, to track the connection state and perform some of the functions that would otherwise be performed by the Java Driver.
The Result Found and Result Not Found Wait Times have the same significance as the corresponding settings in the Job Space plug-in, except that they apply only to the pseudo Drivers. The Base URL for connections with native Jobs is set by the install script, and should ordinarily never change thereafter.
The other settings within the Native Job Adapter plug-in govern logging for the Native Bridge Library, which is responsible for loading the native Driver on each Engine: a switch to turn logging on and off, the log level (1 for the minimum, 5 for the maximum), the name of the log file (which is placed within the Engine directory on each Engine that processes a native Task), and the maximum log size (after which the log rolls over). By default, logging for the Native Bridge is disabled.
The Native Job Store plug-in comes into play for native Jobs that maintain persistence of TaskOutputs on the Broker. (Currently, these include Jobs that set a positive value for hoursToKeepData or are submitted via the JobSubmitter class.) The Data Directory is the directory in the Broker's local file system where the TaskOutputs are stored; this directory is set by the install script, and should ordinarily not be changed. The Attempts Per Day setting determines the number of times per day that the Broker sweeps the data directory for TaskOutputs that are no longer needed; the default setting is 24 (hourly).
Utilities
The Utilities plug-in maintains several administrative functions. The Revision Information plug-in provides read-only access to the revision level and build date for each component associated with the Broker. The License plug-in, together with its License Viewer component, provides similar access to the license settings.
The Log File plug-in maintains the primary log file for the Broker itself. Settings are available to determine whether log messages are written to file or only to the standard output and error streams, the location of the log file, whether to log debug information or errors only, the log level (when debug messages are enabled), the maximum length of the log file before it rolls over, and whether or not to include stack traces with error messages.
The Mail Server generates mail notifications for various events on the Broker. The SMTP host can be set here, or from the Edit Profile screen for the site administrator. (If this field is blank or "not set," mail generation is disabled.)
The Garbage Collector monitors memory consumption on the Broker and forces garbage collection whenever the free memory falls below a threshold percentage of the total available memory on the host. Configuration settings are available to determine the threshold percentage (the default value is 20%) and the frequency of the checks (the default is once per minute).
The remaining utility plug-ins are responsible for cleaning up log and other temporary files on the Broker. Each specifies a directory or directories to sweep, the sweep frequency (per day), and the number of hours that each file should be maintained before it is deleted. There are also settings to determine whether or not the sweep should recurse through subdirectories and whether to clean out all pre-existing files on startup. Ordinarily, the only user modification to these settings might be to vary the sweep rate and expiration period during testing.
Directory Replication and Synchronization Mechanism
Overview
The LiveCluster system provides a simple, easy-to-use mechanism for distributing dynamic libraries (.dll or .so), Java class archives (.jar), or large data files that change relatively infrequently. The basic idea is to place the files to be distributed within a reserved directory on the Server. The system maintains a synchronized replica of the reserved directory structure for each Engine. Updates can be made automatically or triggered manually. Also, an Engine file update watchdog can be configured to ensure updates only happen when the Broker is idle.
Server-side directory locations
A directory system resides on the Server in which you can put files that will be mirrored to the Engines. The location of these directories is outlined below.
Server-side directories for Windows
Server-side directories are located in the Server install location (usually c:\DataSynapse\Server) plus \livecluster\public_html\updates. Within that directory are two directories: datasynapse and resources. The datasynapse directory contains the actual code for the Engine and support binaries for each platform. The resources directory contains four directories: shared, Win32, Solaris, and linux. The shared directory is mirrored to all Engine types, and the other three are mirrored to Engines running the corresponding operating system.
Server-side directories for Unix
For Servers installed under Unix, the structure is identical, but the location is the installation directory (usually /opt/datasynapse) plus /Server/Broker/public_html/updates/resources. The directories are also shared, Win32, Solaris, and linux.
Engine-side directory locations
A similar directory structure resides in each Engine installation. This is where the files are mirrored. The locations are described below.
Engine-side directories for Windows
The corresponding Engine-side directory is located under the root directory for the Engine installation. The default location on Windows is C:\Program Files\DataSynapse\Engine\resources, and it contains the replicated directories shared and Win32.
Engine-side directories for Unix
The corresponding Engine-side directory on Unix is the Engine install directory (for example, /usr/local) plus /DSEngine/resources, and it contains the replicated directories shared and linux for Linux Engines or Solaris for Solaris Engines.
Configuring directory replication
The system can be configured to trigger updates of the replicas in one of two modes:
• Automatic update mode. The Server continuously polls the file signatures within the designated subdirectories and triggers Engine updates whenever it detects changes; to update the Engines, the system administrator need only add or overwrite files within the directories.
• Manual update mode. The administrator ensures that the correct files are located in the designated subdirectories and triggers the updates manually by issuing the appropriate commands through the Administration Tool.
Configuring automatic directory updates
1. In the Configure section of the Administration tool, select the Broker Configuration tool.
2. Click Engine Manager, then select Engine File Update Server.
3. Set the value of Enabled to true.
Once this is set, files added or overwritten within the Server resources directory hierarchy will automatically update on the Engines. The value of Minutes Per Check determines the interval at which the Server polls the directory for changes.
Manually Updating files
To update all files to the Engines manually, set Update Now to true, and click Submit. This triggers the actual transfer of files from the Server to the Engines, and returns the value of Update Now to false.
The Engine File Update Watchdog
By default, the Broker is configured so updates to the Engine files will only happen when the Broker is idle. The Engine file update watchdog provides this function when enabled, and ensures that all Engines have the same files. When enabled, the watchdog ensures that Engine files are not updated unless there are no Jobs in progress. If a file update is requested (either automatically or manually), the watchdog does not allow any new Jobs to start, and waits for currently running Jobs to complete. When no Jobs are running or waiting, the update will occur.
If the running Jobs don't complete within the specified update period (the default is 60 minutes), the update will not happen, and Jobs will once again be allowed to start. If this happens, one can either try to trigger an update again, specify a longer update period, or try to manually remove Jobs or stop sending new Jobs. When there is a pending update, a notice will be displayed at the top of the Administration Tool. Also, an email notification is sent on update requests, completions, and timeouts if one subscribes to the FileUpdateEvent with the Event Subscription tool.
Using Engines with shared network directories
Instead of using directory replication, one can also provide Engines with common files via a shared network directory, such as an NFS-mounted directory. To do this, simply provide a directory on a shared server that can be accessed from all of the Engines. Then, go to the Configure section of the Administration tool, select Engine Configuration, and change the Class directory to point to the shared directory. When one updates the files on the shared server, all of the Engines will be able to use the same files.
CPU Scheduling for Unix
Unix Engines provide the ability to tune scheduling for multi-CPU platforms. This section explains the basic theory of Engine distribution on multi-CPU machines, and how one can configure CPU scheduling to run an optimal number of Engines per machine.
A feature of LiveCluster is that Engines completing work on PCs can be configured to avoid conflicts with regular use of the machine. By configuring an Engine, one can specify at what point other tasks take greater importance, and when a machine is considered idle and ready to take on work. This is called adaptive scheduling, and can be configured to adapt to one's computing environment, be it an office of PCs or a cluster of dedicated servers.
With a single-CPU computer, it's easy to determine when this work state takes place. For example, using the Unix Engine, one can specify a minimum and maximum CPU threshold, using the -c and -C switches when running the configure.sh Engine installation script. When non-Engine CPU utilization crosses below the minimum threshold, an Engine is allowed to run; when the maximum CPU usage on the machine is reached, the Engine exits and any Jobs it was processing are rescheduled. With a multi-CPU machine, the processing power is best utilized if an Engine is run on each processor. However, determining a machine's collective available capacity isn't as straightforward as with a single-CPU system. Because of this, Unix Engines have two types of CPU scheduling available to determine how Engines behave on multiprocessor systems.
Nonincremental Scheduling
The simple form of CPU scheduling is called absolute, or nonincremental, scheduling. In this method, minimum and maximum CPU utilization refer to the total system CPU utilization, and not individual CPU utilization. This total CPU utilization percentage is calculated by adding the CPU utilization for each CPU and dividing by the number of CPUs. For example, if a four-CPU computer has one CPU running at 50% utilization and the other three CPUs are idle, the total utilization for the computer is 12.5%.
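The averaging and the all-or-nothing effect of nonincremental scheduling can be sketched as follows; the threshold values are only examples, and this is not the Engine's actual scheduler code.

    // Illustrative sketch of nonincremental (absolute) scheduling: the thresholds
    // apply to the averaged utilization, and all Engines on the host react together.
    final class NonincrementalSketch {
        public static void main(String[] args) {
            double[] perCpuUtilization = {50.0, 0.0, 0.0, 0.0};  // the four-CPU example above
            double sum = 0;
            for (double u : perCpuUtilization) sum += u;
            double totalUtilization = sum / perCpuUtilization.length;   // 12.5%

            double minCpu = 5.0, maxCpu = 25.0;     // example -c / -C style thresholds
            if (totalUtilization >= maxCpu) {
                System.out.println("all Engines on this host exit");
            } else if (totalUtilization < minCpu) {
                System.out.println("Engines may start (or restart)");
            } else {
                System.out.println("no change; total utilization = " + totalUtilization + "%");
            }
        }
    }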
With nonincremental scheduling, a minimum CPU and maximum CPU are configured, but they refer to the total utilization. Also, they simultaneously apply to all Engines. So if the maximum CPU threshold is set at 25% on a four-CPU machine and four Engines are running, and a non-Engine program pushes the utilization of one CPU to 100%, all four Engines will exit. Note that even if the other three CPUs are idle, their Engines will still exit. In this example, if the minimum CPU threshold was set at 5%, all four Engines would restart when total utilization fell below 5%. By default, the Unix Engine uses nonincremental scheduling. Also, Windows Engines always use this method.
Incremental Scheduling
Incremental scheduling is an alternate method implemented in Unix Engines to provide better scheduling of when Engines can run on multi-CPU computers. To configure incremental scheduling, use the -I switch when running the configure.sh Engine installation script. With incremental scheduling, minimum CPU and maximum CPU utilization refer to each CPU. For example, if there is an Engine running on each CPU of a multi-CPU system, and the maximum CPU threshold is set at 80%, and a non-Engine program raises CPU utilization above 80% on one CPU, the Engine on that CPU will exit, and other Engines will continue to run until their CPU reaches the maximum utilization threshold. Also, an Engine would restart on that CPU when that CPU's utilization dropped below the minimum CPU utilization threshold.
The CPU scheduler takes the minimum and maximum per-CPU settings specified at Engine installation and normalizes the values relative to total system utilization. When these boundaries are crossed, an Engine is started or shut down and the boundaries are recalculated to reflect the change in running processes. This algorithm is used because, for example, a 50% total CPU load on an eight-processor system is typically due to four processes each using 100% of an individual CPU, rather than sixteen processes each using 25% of a CPU. The normalized values are calculated with the following assumptions:
1. System processes will be scheduled such that a single CPU is at maximum load before other CPUs are utilized.
2. For computing maximum thresholds, CPUs which do not have Engines running on them are taken to run at maximum capacity before usage encroaches onto a CPU being used by an Engine.
3. For computing minimum thresholds, CPUs which do not have Engines running on them are taken to be running at least the per-CPU maximum threshold.
The normalized utilization of the computer is calculated by the following formulas. The maximum normalized utilization (Unmax) equals:

    Unmax = (Umax + Utot * (Ct - Cr)) / Ct

Where Umax = per-CPU maximum (user specified); Utot = maximum value for CPU utilization (always 100); Ct = total number of CPUs; and Cr = number of CPUs running Engines.
The minimum normalized utilization (Unmin) equals:

    Unmin = (Umin + Umax * (Ct - Cr)) / Ct

The variables are the same as above, with the addition of Umin = per-CPU minimum.
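Expressed in Java, the normalization above reads as follows; this is a sketch of the reconstructed formulas, not the Engine's actual scheduler code.

    // Normalized (total-system) thresholds for incremental scheduling, derived
    // from the per-CPU thresholds: Unmax = (Umax + Utot*(Ct - Cr)) / Ct and
    // Unmin = (Umin + Umax*(Ct - Cr)) / Ct, with Utot fixed at 100.
    final class IncrementalSketch {
        static double normalizedMax(double perCpuMax, int totalCpus, int cpusRunningEngines) {
            double uTot = 100.0;
            return (perCpuMax + uTot * (totalCpus - cpusRunningEngines)) / totalCpus;
        }

        static double normalizedMin(double perCpuMin, double perCpuMax,
                                    int totalCpus, int cpusRunningEngines) {
            return (perCpuMin + perCpuMax * (totalCpus - cpusRunningEngines)) / totalCpus;
        }

        public static void main(String[] args) {
            // Example: 4 CPUs, Engines running on 3 of them, per-CPU max 80%, per-CPU min 20%.
            System.out.println(normalizedMax(80.0, 4, 3));        // (80 + 100*1)/4 = 45.0
            System.out.println(normalizedMin(20.0, 80.0, 4, 3));  // (20 + 80*1)/4  = 25.0
        }
    }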
The LiveCluster API
The LiveCluster API is available in both C++, called Driver++, and Java, called JDriver. There is also an XML facility that can be used to configure or script Java-based Job implementations. The Tasklet is analogous to the Servlet interface, part of the Enterprise Java Platform. For example, a Servlet handles web requests, and returns dynamic content to the web user. Similarly, a Tasklet handles a task request given by a TaskInput, and returns the completed task with a TaskOutput.
The three Java interfaces (TaskInput, TaskOutput, and Tasklet) have corresponding pure abstract classes in C++. There is also one partially implemented class, with several abstract/virtual methods for the developer to define, called Job.
The C++ API also introduces one additional class, Serializable, to support serialization of the C++ Task objects.
How It Works
To write an application using LiveCluster, one's application should organize the computing problem into units of work, or Jobs. Each Job will be submitted from the Driver to the Server. To create a Job, the following steps take place:
1. Each Job is associated with an instance of Tasklet.
2. One TaskOutput is added to the Job to collect results.
3. The unit of work represented by the Job is divided into Tasks. For each Task, a TaskInput is added to the Job.
4. Each TaskInput is given as input to a Tasklet running on an Engine. The result is returned to a TaskOutput. Each TaskOutput is returned to the Job, where it is processed, stored, or otherwise used by the application.
All other handling of the Job space, Engines, and other parts of the system is handled by the Server. The only classes one's program must implement are the Job, Tasklet, TaskInput, and TaskOutput. This section discusses each of these interfaces, and the corresponding C++ classes.
TaskInput
TaskInput is a marker that represents all of the input data and context information specific to a Task. In Java, TaskInput extends the java.io.Serializable interface:

    public interface TaskInput extends java.io.Serializable { }

In C++, TaskInput extends the class Serializable, so it must define methods to read and write from a stream (this is discussed in more detail below):

    class TaskInput : public Serializable {
    public:
        virtual ~TaskInput() { }
    };

The examples show a Monte Carlo approach to calculating Pi (see FIGs. 54-55).
TaskOutput
TaskOutput is a marker that represents all of the output data and status information produced by the Task. (See FIGs. 56-57.)
Like TaskInput, TaskOutput extends the java.io.Serializable interface:

    public interface TaskOutput extends java.io.Serializable { }

Similarly, the C++ version extends the class Serializable, so it must define methods to read and write from a stream:

    class TaskOutput : public Serializable {
    public:
        virtual ~TaskOutput() { }
    };

Tasklet
The Tasklet defines the work to be done on the remote Engines. (See FIGs. 58 and 59A-B.) There is one command-style method, service, that must be implemented. Like TaskInput and TaskOutput, the Java Tasklet extends java.io.Serializable. This means that the Tasklet objects may contain one-time initialization data, which need only be transferred to each Engine once to support many Tasklets from the same Job. (The relationship between Tasklets and TaskInput/TaskOutput pairs is one-to-many.) In particular, for maximum efficiency, shared input data that is common to every task invocation should be placed in the Tasklet, and only data that varies across invocations should be placed in the TaskInputs.
As above, the Java implementation requires a default constructor, and any non-transient fields must themselves be serializable:

    public interface Tasklet extends java.io.Serializable {
        public TaskOutput service(TaskInput input);
    }

The C++ version is equivalent. It extends the class Serializable, so it must define methods to read and write from a stream:

    class Tasklet : public Serializable {
    public:
        virtual TaskOutput* service(TaskInput* input) = 0;
        virtual ~Tasklet() { }
    };
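Because the referenced figures are not reproduced here, the following is a sketch of what the Monte Carlo Pi example might look like against the Java interfaces above; the class and field names are illustrative assumptions, not the contents of FIGs. 54-59.

    // Illustrative Monte Carlo Pi example written against the interfaces above.
    // Class names such as PiCalcInput are assumptions, not the figures' actual code.
    class PiCalcInput implements TaskInput {       // TaskInput extends java.io.Serializable
        long samples;                              // number of random points for this Task
        long seed;                                 // per-Task random seed
    }

    class PiCalcOutput implements TaskOutput {     // TaskOutput extends java.io.Serializable
        long hits;                                 // points that fell inside the quarter circle
        long samples;
    }

    class PiCalcTasklet implements Tasklet {
        public TaskOutput service(TaskInput input) {
            PiCalcInput in = (PiCalcInput) input;
            java.util.Random rng = new java.util.Random(in.seed);
            long hits = 0;
            for (long i = 0; i < in.samples; i++) {
                double x = rng.nextDouble(), y = rng.nextDouble();
                if (x * x + y * y <= 1.0) hits++;  // point lies inside the quarter circle
            }
            PiCalcOutput out = new PiCalcOutput();
            out.hits = hits;
            out.samples = in.samples;
            return out;                            // pi is estimated as 4 * hits / samples
        }
    }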
Job
A Job is simply a collection of Tasks. One must implement three methods: createTaskInputs, processTaskOutput, and processFatalOutput. (C++ implementations require another method, getLibraryName, which specifies the library that contains the Tasklet implementation to be shipped to the remote Engines.) Implementations of createTaskInputs call addTaskInput to add Tasks to the queue. (See FIGs. 60-61.) In addition, Job defines static methods for instantiating Job objects based on XML configuration scripts and call-backs to notify the application code when the Job is completed or encounters a fatal error. A Job also implements processTaskOutput to read output from each Task and output, process, store, add, or otherwise utilize the results. Both the C++ and Java versions provide blocking (execute) and non-blocking (executeInThread) job execution methods, as well as executeLocally to run the Job in the current process. This last function is useful for debugging prior to deployment.
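Continuing the Pi example, a Job implementation might look like the sketch below; the method names follow the description above (createTaskInputs, addTaskInput, processTaskOutput, processFatalOutput), but the exact signatures and the Job base-class details are assumptions.

    // Illustrative Job sketch for the Pi example; method signatures are assumed.
    class PiCalcJob extends Job {
        private long totalHits = 0, totalSamples = 0;

        public void createTaskInputs() {
            for (int i = 0; i < 10; i++) {         // split the work into ten Tasks
                PiCalcInput in = new PiCalcInput();
                in.samples = 1000000;
                in.seed = i;
                addTaskInput(in);                  // queue one Task per input
            }
        }

        public void processTaskOutput(TaskOutput output) {
            PiCalcOutput out = (PiCalcOutput) output;
            totalHits += out.hits;                 // aggregate results as they arrive
            totalSamples += out.samples;
        }

        public void processFatalOutput(TaskOutput output) {
            System.err.println("a Task failed permanently");
        }

        public double estimateOfPi() {
            return 4.0 * totalHits / totalSamples;
        }
    }

In use, the application would associate the Job with a PiCalcTasklet, call execute (or executeInThread), and read the estimate once the Job completes.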
JobOptions
Each Job is equipped with a JobOptions object, which contains various parameter settings. The getOptions method of the Job class can be used to get or set options in the JobOptions object for that Job. A complete list of all methods available for the JobOptions object is available in the API reference documentation. Some commonly used methods include setJobname, setJarFile, and setDiscriminator.
setJobname
By default, the name associated with a Job and displayed in the Administration Tool is a long containing a unique number. One can set a name that will also be displayed in the Administration Tool with the Job ID. For example, if one's Job is named job, add this code:

    job.getOptions().setJobname("Job Number 9");

setJarFile
A difference between the C++ and Java versions of the Driver API has to do with the mechanism for distributing code to the Engines.
For both APIs, the favored mechanism of code distribution involves distributing the Jar file containing the concrete class definitions to the Engines using the directory replication mechanism. The C++ version supports this mechanism. The dynamic library containing the implementation of the concrete classes must be distributed to the Engines using the native code distribution mechanism, and the corresponding Job implementation must define getLibraryName to specify the name of this library, for example picalc (for picalc.dll on Win32 or libpicalc.so on Unix).
With Java, a second method is also available, which can be used during development. The other method of distributing concrete implementations for the Tasklet, TaskInput, and TaskOutput is to package them in a Jar file, which is typically placed in the working directory of the Driver application. In this case, the corresponding Job implementation calls setJarFile with the name of this Jar file prior to calling one of the execute methods, and the Engines pull down a serialized copy of the file when they begin work on the corresponding Task. This method requires the Engine to download the classes each time a Job is run.
setDiscriminator
A discriminator is a method of controlling what Engines accept a Task. FIG. 76 contains sample code that sets a simple property discriminator.
Additional C++ Classes
Serializable
The C++ API incorporates a class Serializable, since object serialization is not a built-in feature of the C++ language. This class (see FIG. 62) provides the mechanism by which the C++ application code and the LiveCluster middleware exchange object data. It contains two pure virtual methods that must be implemented in any class that derives from it (i.e., in TaskInput, TaskOutput, and Tasklet).
API Extensions
The LiveCluster API contains several extensions to classes, providing specialized methods of handling data. These extensions can be used in special cases to improve performance or enable access to information in a database.
DataSetJob and TaskDataSet
A TaskDataSet is a collection of TaskInputs that persist on the Server as the input for any subsequent DataSetJob. The TaskInputs get cached on the Engines for subsequent use within the TaskDataSet session. This API is therefore appropriate for doing repeated calculations or queries on large datasets. All Jobs using the same TaskDataSet will use the TaskInputs added to it, even though their Tasklets may differ.
Also, TaskInputs from a set are cached on Engines. An Engine that requests a task from a Job will first be asked to use input that already exists in its cache. If it has no input in its cache, or if other Engines have already taken the input in its cache, it will download a new input and cache it. An ideal use of TaskDataSet would be when running many Jobs on a very large dataset. Normally, one would create TaskInputs with a new copy of the large dataset for each Job, and then send these large TaskInputs to Engines and incur a large amount of transfer overhead each time another Job is run. Instead, the TaskDataSet can be created once, like a database of TaskInputs. Then, small Tasklets can be created that use the TaskDataSet for input, like a query on a database. As more Jobs are run on this session, the inputs become cached among more Engines, increasing performance.
Creating a TaskDataSet
To create a TaskDataSet, first construct a new TaskDataSet, then add inputs to it using the addTaskInput method. (See FIG. 63.) If one is using a stream, one can also use the createTaskInput method. After one has finished adding inputs, call the doneSubmitting method. If a name is assigned using setName, that will be used for subsequent references to the session; otherwise, a name will be assigned. The set will remain on the Server until destroy is called, even if the Java VM that created it exits.
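Since FIG. 63 is not reproduced here, the snippet below sketches the sequence of calls just described; the constructor form and the PortfolioSliceInput class are assumptions made for illustration.

    // Illustrative TaskDataSet creation following the calls named in the text.
    TaskDataSet dataSet = new TaskDataSet();                 // constructor form is assumed
    dataSet.setName("portfolioData");                        // optional: name the session
    for (int i = 0; i < 100; i++) {
        dataSet.addTaskInput(new PortfolioSliceInput(i));    // hypothetical TaskInput class
    }
    dataSet.doneSubmitting();                                // the inputs now persist on the Server
    // ... much later, when the session is no longer needed:
    // dataSet.destroy();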
Creating a DataSetJob
After creating a TaskDataSet, implement the Job using DataSetJob, and create a TaskOutput. (See FIG. 64.) The main difference is that to run the Job, one must use setTaskDataSet to specify the dataset one created earlier. Note that the executeLocally method cannot be used with the DataSetJob.
StreamJob and StreamTasklet
A StreamJob is a Job which allows one to create input and read output via streams rather than using defined objects. (See FIG. 65.) A StreamTasklet reads data from an InputStream and writes to an OutputStream, instead of using a TaskInput and TaskOutput. When the StreamJob writes input to a stream, the data is written directly to the local file system, and given to Engines via a lightweight webserver. The Engine also streams the data in via the StreamTasklet. In this way, the memory overhead on the Driver, Broker, and Engine is reduced, since an entire TaskInput does not need to be loaded into memory for transfer or processing. The StreamTasklet must be used with a StreamJob.
SQLDataSetJob and SQLTasklet
Engines can use information in an SQL database as input to complete a Task by the use of SQL. An SQLDataSetJob queries the database and receives a result set. Each SQLTasklet is given a subset of the result set as an input. This feature is only available from the Java Driver.
Starting the database
To use an SQL database, one must first have a running database with a JDBC interface.
(See FIG. 66.) The sample code loads a properties file called sqltest.properties. It contains properties used by the database, plus the properties tasks and query, which are used in our Job. (See FIG. 67.)
SQLDataSetJob
An SQLDataSetJob is created by implementing DataSetJob. (See FIG. 67.) Task inputs are not created, as they will come from the SQL database. (See FIG. 68.)
SQLTasklet
An SQLTasklet is implemented similarly to a normal Tasklet, except the input is an SQL table. (See FIG. 69.)
Running the Job
After defining a TaskOutput, the Job can be run. The SQLDataSet is created on the server and is prepared with setJDBCProperties, setMode, setQuery, and prepare. Then the Job is run. (See FIG. 70.) Note that in order to use the most recent information in the database, the SQLDataSet needs to be destroyed and created again. This may be important if one is using a frequently updated database.
The Propagator API
This section discusses how to use the Propagator API to run parallel code with inter-node communication.
Overview
The Propagator API is a group of classes that can be used to distribute a problem over a variable number of compute Engines instead of a fixed-node cluster. It is an appropriate alternative to MPI for running parallel codes which require inter-node communication. Unlike most MPI parallel codes, Propagator implementations can run over heterogeneous resources, including interruptible desktop PCs.
A Propagator application is divided into steps, with steps sent to nodes. Using adaptive scheduling, the number of nodes can vary, even changing during a problem's computation. After a step has completed, a node can communicate with other nodes, propagating results and collecting information from nodes that have completed earlier steps. This checkpointing allows for fault-tolerant computations.
FIG. 71 illustrates how nodes communicate at barrier synchronization points when each step of an algorithm is completed.
Using the Propagator API
The Propagator API consists of three types: the GroupPropagator and NodePropagator classes, and the GroupCommunicator interface.
• The GroupPropagator is used as the controller. A GroupPropagator is created, and it is used to create the nodes and the messaging system used between nodes.
• The NodePropagator contains the actual code that each node will execute at each step. It also contains whatever code each node will need to send and receive messages, and send and receive the node state.
• The GroupCommunicator is the interface used by the nodes to send and receive messages, and to get and set node state.
Group Propagator
The GroupPropagator is the controlling class of the NodePropagators and GroupCommunicator. Creating a GroupPropagator is the first step in running a Propagator Job.
After creating a GroupPropagator, one can access the GroupCommunicator, like this:

    GroupCommunicator gc = gp.getGroupCommunicator();

This will enable one to communicate with nodes, and get or set their state.
Next, one will need to set the NodePropagator used by the nodes. Given a simple NodePropagator implementation called TestPropagator that is passed the value of the integer x, one would do this:

    gp.setNodePropagator(new TestPropagator(x));

After one has defined a NodePropagator, one can tell the nodes to execute a step of code by calling the propagate method, and passing a single integer containing the step number one wishes to run.
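Putting those calls together, a minimal control sequence might look like the sketch below; the constructor arguments and the step count are assumptions, and the final endSession call is described in the next paragraph.

    // Illustrative GroupPropagator control flow; constructor arguments are assumed.
    GroupPropagator gp = new GroupPropagator("example", 4);   // name and node count assumed
    GroupCommunicator gc = gp.getGroupCommunicator();

    gp.setNodePropagator(new TestPropagator(42));              // node code, as above

    int steps = 10;
    for (int step = 0; step < steps; step++) {
        gp.propagate(step);                                    // run one step on every node
    }
    gp.endSession();                                           // finish the session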
When a program is complete, the endSession method should be called to complete the session.
Node Propagator
The NodePropagator contains the actual code run on each node. The NodePropagator code is run on each step, and it communicates with the GroupCommunicator to send and receive messages, and set its state.
To create one's own NodePropagator implementation, create a class that extends NodePropagator. The one method the created class must implement is propagate. It will be run when propagate is run in the GroupPropagator, and it contains the code which the node actually runs.
The code in the NodePropagator will vary depending on the problem. But several possibilities include getting the state of a node to populate variables with partial solutions, broadcasting a partial solution so that other nodes can use it, or sending messages to other nodes to relay work status or other information. All of this is done using the GroupCommunicator.
Group Communicator
The GroupCommunicator communicates messages and states between nodes and the GroupPropagator. It can also transfer the states of nodes. It's like the bus or conduit between all of the nodes.
The GroupCommunicator exists after one creates the GroupPropagator. It's passed to each NodePropagator through the propagate method. Several methods enable communication. They include the following (there are also variations available to delay methods until a specified step or to execute them immediately):
• broadcast Send a message to all recipients, except current node.
• clearMessages Clear all messages and states on server and Engines.
• getMessages Get the messages for current node.
• getMessagesFromSender Get the message from specified node for current node.
• getNodeState Get the state of the specified node.
• getNumNodes Get the total number of nodes.
• sendMessage Send the message to nodeId.
• setNodeState Set the state of the node.
The 2-D Heat Equation - A Propagator API Example
We will now explain how to use the Propagator API to solve an actual problem. In this example, it is used to calculate a two-dimensional heat equation. This example uses three files: Test.java, which contains the main class, HeatEqnSolver.java, which implements the GroupPropagator, and HeatPropagator.java, which implements the NodePropagator.
Test.java
This file (see FIG. 72A) starts like most other LiveCluster programs, except we import com.livecluster.tasklet.propagator.*. Also, a Test class is created as our main class.
Continuing (see FIG. 72B), properties are loaded from disk, and variables needed for the calculations are initialized, either from the properties file, or to a default value. If anything fails, an exception will be thrown.
Next (see FIG. 72C), the GroupPropagator is created. It's passed all of the variables it will need to do its calculations. Also, a message is printed to System.out, displaying the variables used to run the equation. The solve method for the HeatEqnSolver object, which will run the equation, is called (see FIG. 72D), and the program ends.
HeatEqnSolver.java
The class HeatEqnSolver is defined with a constructor that is passed the values used to calculate the heat equation. It has a single public method, solve, which is called by Test to run the program. (See FIG. 73A.) This creates the GroupPropagator, which controls the calculation on the nodes:

    solver.solve();
A GroupPropagator gp is created (see FIG. 73B) with the name "heat2d," and the number of nodes specified in the properties. Then, a GroupCommunicator gc is assigned with the GroupPropagator method getGroupCommunicator. A new HeatPropagator is created, which is the code for the NodePropagator, described in the next section. The HeatPropagator is set as the NodePropagator for gp. It will now be used as the NodePropagator, and will have access to the GroupCommunicator. A Jar file is set for the GroupPropagator.
The code (see FIG. 73C) then defines a matrix of random values and a mirror of the matrix for use by the nodes. After the math is done, the i loop uses setNodeState to push the value of the matrix to the nodes. Now, all of the nodes will be using the same starting condition for their calculations.
The main iteration loop (see below) uses the propagate method to send the steps to the nodes. This will cause _iters iterations by the nodes using their code.

    // main iteration loop
    for (int i = 0; i < _iters; i++) {
        gp.propagate(i);
    }

As nodes return their results, the code (see FIGs. 73D-E) uses getNodeState to capture back the results and copy them into the matrix.
HeatPropagator.java
The HeatPropagator class (see FIG. 74) implements the NodePropagator, and is the code that will actually run on each node. When created, it is given lastIter, facx, and facy. It obtains the boundary information as a message from the last step that was completed. It completes its equations, then broadcasts the results so the next node that runs can continue.
The first thing propagate does is use getNodeState to initialize its own copy of the matrix. (See FIG. 75A.)
Next, boundary calculations are obtained. (See FIG. 75B.) These are results that are on the boundary of what this node will calculate. If this is the first node, there aren't any boundaries, and nothing is done. But if this isn't step 0, there will be a message waiting from the last node, and it's obtained with getMessagesFromSender.
Next, the actual calculation takes place (see FIG. 75C), and the result is then copied back into the matrix. The matrix is then set into the node state for the next iteration using setNodeState. (See FIG. 75D.) The boundaries are also sent on for the next node using sendMessage.
This section explains how to use Engine Discriminators, a powerful method of controlling which Engines are eligible to receive specific Jobs.
About Discriminators
In a typical business environment, not every PC will be identical. Some departments may have slower machines that are utilized less. Other groups may have faster PCs, but it may be a priority to use them to capacity during the day. And server farms of dedicated machines may be available all the time, without being interrupted by foreground tasks.
Depending on the Jobs one has and the general demographics of one's computing environment, the scheduling of Tasks to Engines may not be linear. And sometimes, a specific Job may require special handling to ensure the optimal resources are available for it. Also, in some LiveCluster installations, one may want to limit what Engines report to a given Broker for work. Or, one may want to limit what Driver submits work to a given Broker.
A discriminator enables one to specify what Engines can be assigned to a Task, what Drivers can submit Tasks to a Broker, and what Engines can report to a Broker. These limitations are set based on properties given to Engines or Drivers. Task discrimination is set in the Driver properties, and controls what Engines can be assigned to a Task. Broker discrimination is set in the LiveCluster Administration Tool, and controls what Drivers and Engines use that Broker.
For example, say one is implementing LiveCluster at a site that has 1000 PCs. However, 300 of the PCs are slower machines used by the Marketing department, and they are rarely idle. The Job will require a large amount of CPU time from each Engine processing tasks. Without using discriminators, the Tasks are sent to the slower machines and are regularly interrupted. This means that roughly 30% of the time, a Task will be scheduled on a machine that might not complete any work. Discriminators provide a solution to this issue. First, one would deploy Engines to all of one's computers; Marketing computers would have a department property set to Marketing, and the rest of the machines in the company would have the department property set to something other than Marketing. Next, when the application sends a complex Job with the LiveCluster API, it attaches a Task discriminator specifying not to send any Tasks from the Job to any Engine with the department property set to Marketing. The large Job's Tasks will only go to Engines outside of Marketing, and smaller Jobs with no Task discriminator set will have Tasks processed by any Engine in the company, including those in Marketing.
Configuring Engines with Properties
Default Properties
An Engine has several properties set by default, with values corresponding to the configuration of the PC running the Engine. One can use these properties to set discriminators. The default properties, available in all Engines, are as follows:
• guid The GUID (network card address)
• id The numerical ID of the Engine
• instance The instance, for multi-processor machines
• username The Engine's username
• cpuNo The number of CPUs on the machine
• cpuMFlops The performance, in Megaflops
• totalMemInKB Total available memory, in Kilobytes
• freeMemInKB Free memory, in Kilobytes
• freeDiskInMB Free disk space, in Megabytes
• os Operating system (win32, Solaris or linux)
Custom Properties
To set other properties, one can add the properties to the Engine Tracker, and install the Engine using tracking. One may also add and change properties individually after installation using the Engine Properties command.
In Windows:
To add custom properties to an Engine, in the LiveCluster Administration Tool, one must make changes using the Engine Tracking Editor. After one changes the properties in the editor, one will be prompted for values for the properties each time one installs an Engine with the 1-Click Install with Tracking option. One can also change these at any time on any Engine with the Engine Properties command.
To access the editor, go to the Configure section, and click Engine Tracking Editor.
By default, the following properties are defined:
• MachineName hostname of the machine where the Engine is being installed;
• Group work group to attach Engine;
• Location machine location;
• Description brief description of machine.
When one installs an Engine with the 1-Click Install with Tracking option, one will be prompted to enter values for all four of the properties. If one doesn't want to use all four properties, one may click the Remove button next to the properties one does not want to use. (Note that one cannot remove the MachineName property.)
To add another property to the above list, enter the property name in the Property column, then enter a description of the property in the Description column, and click Add.
Configuring Driver Properties
Broker discrimination can be configured to work on either Engines or Drivers. For discrimination on Drivers, one can add or modify properties in the driver.properties file included in the top-level directory of the Driver distribution.
Configuring Broker Discriminators
One can configure a Broker to discriminate among the Engines and Drivers from which it will accept login sessions. This can be done from the LiveCluster Administration Tool by selecting Broker Discrimination in the Configure section.
First, select the Broker to be configured from the list at the top of the page. If one is only running a single Broker, there will only be one entry in this list.
One can configure discriminators for both Driver properties and Engine properties. For Drivers, a discriminator is set in the Driver properties, and it prevents Tasks from a defined group of Drivers from being taken by this Broker. For Engines, a discriminator prevents the Engine from being able to log in to a Broker and take Tasks from it. Each discriminator includes a property, a comparator, and a value. The property is the property defined in the Engine or Driver, such as a group, OS, or CPU type. The value can be either a number (double) or a string. The comparator compares the property and the value. If the comparison is true, the discriminator is matched, and the Engine can accept a Task, or the Driver can submit a Job. If it is false, the Task is returned to the Driver, or, in the case of an Engine, the Broker will try to send the Task to another Engine.
The following comparators are available:
• equals A string that must equal the client's value for the property.
• not equals A string that must not equal the client's value for the property.
• includes A comma-delimited string that must equal the client's value for that property. ("*" means accept all.)
• excludes A comma-delimited string that cannot equal the client's value for that property. ("*" means deny all.)
• = The value is a number (a double, for any number to be used) that must equal the client's value for that property.
• != The value is a number (a double, for any number to be used) that must not equal the client's value for that property.
• < The value is a number; the client's value must be less than this value.
• <= The value is a number; the client's value must be less than or equal to this value.
• > The value is a number, the client's value must be greater than this value.
• >= The value is a number; the client's value must be greater than or equal to this value.
One further option for each discriminator is the Negate other Brokers box. When this is selected, an Engine or Driver will be considered only for this Broker, and no others. For example, if one has a property named state and one sets a discriminator for when state equals NY and selects Negate other Brokers, any Engine with state set to NY will only go to this Broker and not others.
Once one has entered a property, comparator, and value, click Add. One can add multiple discriminators to a Broker by defining another discriminator and clicking Add again. Click Save to save all added discriminators to the Broker.
By default, if an Engine or Driver does not contain the property specified in the discriminator, the discriminator is not evaluated and is considered false. However, one can select Ignore Missing Properties for both the Driver and the Engine. This makes an Engine or Driver that is missing the property specified in a discriminator ignore the discriminator and continue. For example, if one sets a discriminator for OS = Linux, and an Engine doesn't have an OS property, normally the Broker won't give the Engine Jobs. But if one selects Ignore Missing Properties, the Engine without the property will still get Jobs from the Broker.
Task discriminators are set by the Driver, either in Java or in XML, as sketched below. (See FIG. 76.)
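As a rough illustration only, the snippet below sketches how a Task discriminator corresponding to the Marketing example above might be attached in Java. The discriminator consists of a property, a comparator, and a value, as described above; however, the PropertyDiscriminator class, its NOT_EQUALS constant, and the setTaskDiscriminator method are assumed names for illustration and are not taken from the LiveCluster API. FIG. 76 shows the actual Java and XML forms.

// Hypothetical sketch only: no Task from this Job should be sent to an Engine
// whose "department" property equals "Marketing". PropertyDiscriminator,
// NOT_EQUALS, and setTaskDiscriminator are illustrative names, not the
// documented LiveCluster API (see FIG. 76 for the actual forms).
PiCalcJob job = new PiCalcJob();
job.setIterations(30000000);
job.setNumTasks(500);

PropertyDiscriminator notMarketing = new PropertyDiscriminator(
        "department",                        // Engine property to test
        PropertyDiscriminator.NOT_EQUALS,    // comparator
        "Marketing");                        // value
job.getOptions().setTaskDiscriminator(notMarketing);

job.execute();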
The LiveCluster Tutorial
This section provides details on how to obtain examples of using the LiveCluster API.
Using JNI Example
Often, the application, or some portion of it, is written in another (native) programming language such as C, C++, or Fortran, but it is convenient to use Java as the glue that binds the compute server to the application layer. In these cases the Java Native Interface (JNI) provides a simple mechanism for passing data and function calls between Java and the native code. [Note: One must create a separate wrapper to access the dynamically linked library (.dll or .so) from the Engine-side and insert a call to this wrapper in the service() method of the Tasklet interface.]
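As an illustrative sketch only (FIGs. 77-79 contain the actual example), such a wrapper might look roughly like the following. The class names PiCalcNative and PiCalcTasklet, the native method signature, and the library name are assumptions; only the Tasklet interface and its service() method are taken from the note above.

import java.io.Serializable;

// Hypothetical Engine-side wrapper around a native Pi kernel. The library name
// ("picalc") and the native method signature are illustrative assumptions.
class PiCalcNative {
    static {
        // Loads picalc.dll (Windows) or libpicalc.so (Solaris/Linux) on the Engine.
        System.loadLibrary("picalc");
    }
    // Implemented in C, C++, or Fortran and exposed through a JNI stub.
    static native double computePi(long iterations);
}

// Hypothetical Tasklet that delegates its work to the native wrapper from within
// service(), as the note above describes. The exact signature of the LiveCluster
// Tasklet interface may differ from this sketch.
public class PiCalcTasklet implements Tasklet {
    private final long iterations;

    public PiCalcTasklet(long iterations) {
        this.iterations = iterations;
    }

    public Serializable service(Serializable input) {
        // Call into the native code and hand the partial result back to the Driver.
        return new Double(PiCalcNative.computePi(iterations));
    }
}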
FIGs. 77-79 provide an example of using JNI for the previously-discussed Pi calculation program.
Submitting a LiveCluster Job
Using Java, jobs can be submitted to a LiveCluster Server in any of three ways:
• From the command line, using XML scripting:
java -cp DSDriver.jar MyApp picalc.xml
This method uses properties from the driver.properties file located in the same directory as the Driver. One can also specify command-line properties.
• At runtime using one of the createJob methods (this supports partial scripting of the Job Bean).
PiCalcJob job = (PiCalcJob) Job.createJob(new File("picalc.xml"));
job.execute();
double pi = job.getPiValue();
• At runtime (entirely).
PiCalcJob job = new PiCalcJob();
job.getOptions().setJarFile(new File("picalc.jar"));
job.setIterations(30000000);
job.setNumTasks(500);
job.execute();
double pi = job.getPiValue();
XML scripting also supports the Batch object, which enables one to submit a Job once and have it run many times on a regular schedule.
Using C++, jobs must be submitted to a LiveCluster Server using the run-time interface:
job = new PiJob();
try {
    job->execute();  // or executeInThread() or executeLocally()
} catch (JobException je) {
    cerr << "testJob caught an exception " << je << endl;
}
delete job;
Driver Properties
Properties can be defined in the driver.properties file, located in the same directory as the Driver. One can edit this file and add properties as property = value pairs. One can also specify properties on the command line using the -D switch, if they are prefixed with ds. For example:
java -Dds.DSPrimaryDirector=server1:80 -Dds.DSSecondaryDirector=server2:80 -cp DSDriver.jar MyApp picalc.xml
Properties specified on the command line are overwritten by properties specified in the driver.properties file. If one wants to set from the command line a property already defined in driver.properties, one must first edit driver.properties and comment out the property.
Using the Direct Data Transfer Property
Direct data transfer is enabled by setting DSDirectDataTransfer=true, which is the default setting in the driver.properties file. If one writes a shell script to create Jobs, each with its own Driver running in its own Java VM, one's script must provide a different port number for the DSWebserverPort property normally set in the driver.properties file. If one's script instantiates multiple Drivers from the same driver.properties file with the same port number, the first Driver will open a web server listening to the defined socket. Subsequent Drivers will not open another web server as long as the first Job is running, but will be able to continue running by using the first Job's server for direct data. However, when the first Job completes, its server will be terminated, causing subsequent Jobs to fail.
To write a shell script for the above situation, one could remove the DSWebserverPort property from the driver.properties file and set a unique port number for each Job using a command-line property, as described in the previous section.
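For example, such a script might launch each Driver with its own port along the following lines; the port numbers and XML file names are illustrative assumptions, while the -Dds. prefix and the DSWebserverPort property are as described above.

java -Dds.DSWebserverPort=8081 -cp DSDriver.jar MyApp job1.xml &
java -Dds.DSWebserverPort=8082 -cp DSDriver.jar MyApp job2.xml &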
XML Job Scripting
LiveCluster is packaged with XML-based scripting facilities one can use to create and configure Jobs. (See FIG. 80.) Since Java Jobs are JavaBeans components, their properties can be manipulated via XML and other Bean-compatible scripting facilities.
Batch Jobs
Jobs can be scheduled to run on a regular basis. Using XML scripting, one can submit a Job with specific scheduling instructions. Instead of immediately entering the queue, the Job will wait until the time and date specified in the instructions given.
Batch Jobs can be submitted to run at a specific absolute time, or a relative time, such as every hour. Also, a Batch Job can remain active, resubmitting a Job on a regular basis.
See, for example, FIG. 81, which submits the Linpack test at 11:20 AM on September 28th, 2001. The batch element contains the entire script, while the schedule element contains properties for type and startTime, defining when the Job will run. The job element actually runs the Job when it is time, and contains properties needed to run the Job, while the command element also runs at the same time, writing a message to a log.
Distributing Libraries, Shared Data, and Native Code
The LiveCluster system provides a simple, easy-to-use mechanism for distributing linked libraries (.dll or .so), Java class archives (.jar), or large data files that change relatively infrequently. The basic idea is to place the files to be distributed within a reserved directory associated with the Server. The system maintains a synchronized replica of the reserved directory structure for each Engine. This is called directory replication. By default, four directories are replicated to Engines: the win32, solaris, and linux directories are mirrored to Engines run on the respective operating systems, and shared is mirrored to all Engines.
The default locations for these four directories are as follows:
public_html/updates/resources/shared/
public_html/updates/resources/win32/
public_html/updates/resources/solaris/
public_html/updates/resources/linux/
On the Server, these paths are relative to one's installation directory. For example, if one installs LiveCluster at C:\DataSynapse, one should append these paths to C:\DataSynapse\Server\livecluster on one's server. On the Engine, the default installation in Windows puts the shared and win32 directories in C:\Program Files\DataSynapse\Engine\resources.
To configure directory replication, in the Administration Tool, go to the Configure section, and select Broker Configuration. Select Engine Manager, then Engine File Update Server.
When Auto Update Enabled is set to true (the default), the shared directories will automatically be mirrored to any Engine upon login to the Broker. Also, the Server will check for file changes in these directories at the time interval specified in Minutes Per Check. If changes are found, all Engines are signaled to make an update. One can force all Engines to update immediately by setting Update All Now to true.
This will cause all Engines to update, and then its value will return to false. If one has installed new files and wants all Engines to use them immediately, set this option to true.
Verifying the Application
Before deploying any application in a distributed environment, one should verify that it operates correctly in a purely local setting, on a single processor. The executeLocally() method in the Job class is provided for this purpose. Calling this method results in synchronous execution on the local processor; that is, the constituent Tasks execute sequentially on the local processor, without any intermediation from a Broker or distribution to remote Engines.
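For instance, a minimal local verification run for the previously-discussed Pi example might look like the following; the specific property values are illustrative.

// Run the Job entirely on the local processor, with no Broker or remote Engines
// involved, to verify the application logic before distributed deployment.
PiCalcJob job = new PiCalcJob();
job.setIterations(30000000);
job.setNumTasks(500);
job.executeLocally();              // constituent Tasks run sequentially in-process
double pi = job.getPiValue();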
Optimizing LiveCluster Server architecture
The LiveCluster Server architecture can be deployed to give varying degrees of redundancy and load sharing, depending on the computing resources available. Before installation, it's important to ascertain how LiveCluster will be used, estimate the volume and frequency of jobs, and survey what hardware and networking will be used for the installation.
First, it's important to briefly review the architecture of a Server. The LiveCluster Server consists of two entities: the LiveCluster Director and the LiveCluster Broker:
• Director — Responsible for authenticating Engines and initiating sessions between Engines and Brokers, or Drivers and Brokers. Each LiveCluster installation must have a Primary Director. Optionally, a LiveCluster installation can have a Secondary Director, to which Engines will log in if the Primary Director fails.
• Broker — Responsible for managing jobs by assigning tasks to Engines. Every LiveCluster installation must have at least one Broker, often located on the same system as the Primary Director. If more than one Broker is installed, then a Broker may be designated as a Failover Broker; it accepts Engines and Drivers only if all other Brokers fail.
A minimal configuration of LiveCluster would consist of a single Server configured as a Primary Director, with a single Broker. Additional Servers containing more Brokers or Directors can be added to address three primary concerns: redundancy, volume, and other considerations.
Redundancy
Given a minimal configuration of a single Director and a single Broker, Engines and Drivers will log in to the Director, but failure of the Director (whether from excessive volume, Server failure, or network failure) would mean a Driver or Engine not yet logged in would no longer be able to contact a Director to establish a connection.
To prevent this, redundancy can be built into the LiveCluster architecture. One method is to run a second Server with a Secondary Director, and configure Engines and Drivers with the addresses of both Directors. When the Primary Director fails, the Engine or Driver will contact the Secondary Director, which contains identical Engine configuration information and will route Engines and Drivers to Brokers in the same manner as the Primary Director. FIG. 82 shows an exemplary implementation with two Servers.
In addition to redundant Directors, a Broker can also have a backup on a second Server. A Broker can be designated a Failover Broker on a second Server during installation. Directors will only route Drivers and Engines to Failover Brokers if no other regular Brokers are available. When regular Brokers then become available, nothing further is routed to the Failover Broker. When a Failover Broker has finished processing any remaining jobs, it logs off all Engines, and Engines are then no longer routed to that Failover Broker. FIG. 82 shows a Failover Broker on the second Server.
Volume
In larger clusters, the volume of Engines in the cluster may require more capability than can be offered by a single Broker. To distribute load, additional Brokers can be added to other Servers at installation. For example, FIG. 83 shows a two-Server system with two Brokers. Drivers and Engines will be routed to these Brokers in round-robin fashion.
Other Considerations
Several other factors may influence how one may integrate LiveCluster with an existing computing environment. These include:
• Instead of using one Cluster for all types of Jobs, one may wish to segregate different subsets of jobs (for example, by size or priority) to different Directors.
• One's network may dictate how the Server environment should be planned. For example, if one has offices in two parts of the country and a relatively slow extranet but a fast intranet in each location, one could install a Server in each location.
• Different Servers can support data used for different job types. For example, one Server can be used for Jobs accessing a SQL database, and a different Server can be used for jobs that don't access the database.
With this flexibility, it's possible to architect a Server model to provide a job space that will facilitate job traffic.
Configuring a network
Since LiveCluster is a distributed computing application, successful deployment will depend on one's network configuration. LiveCluster has many configuration options to help it work with existing networks. LiveCluster Servers should be treated the same way one treats other mission-critical file and application servers: assign LiveCluster Servers static IP addresses and resolvable DNS hostnames. LiveCluster Engines and Drivers can be configured in several different ways. To receive the full benefit of peer-to-peer communication, one will need to enable communication between Engines and Drivers (the default), but LiveCluster can also be configured to work with a hub-and-spoke architecture by disabling Direct Data Transfer.
Name Service
LiveCluster Servers should run on systems with static IP addresses and resolvable DNS hostnames. In a pure Windows environment, it is possible to run LiveCluster using just WINS name resolution, but this mode is not recommended for larger deployments or heterogeneous environments.
Protocols and Port Numbers
LiveCluster uses the Internet Protocol (IP). All Engine-Server, Driver-Server, and Engine-Driver communication is via the HTTP protocol. Server components, Engines, and Drivers can be configured to use port 80 or any other available TCP port that is convenient for one's network configuration.
All Director-Broker communication is via TCP. The default Broker login TCP port is 2000, but another port can be specified at installation time. By default, after the Broker logs in, another pair of ephemeral ports is assigned for further communication. The Broker and Director can also be configured to use static ports for post-login communication.
Server-Engine and Driver-Server Communication
All communication between Engines and Servers (Directors and Brokers) and between Drivers and Servers is via the HTTP protocol, with the Engine or Driver acting as HTTP client and the Server acting as HTTP server. (See FIG. 84.)
The Server can be configured to work with an NAT device between the Server and the Engines or Drivers. To do this, specify the external (translated) address of the NAT device when referring to the Server address in Driver and Engine installation.
Win32 LiveCluster Engines can also support an HTTP proxy for communication between the Engine and the Broker. If the default HTML browser is configured with an HTTP proxy, the Win32 Engine will detect the proxy configuration and use it. However, since all LiveCluster communication is dynamic, the HTTP proxy is effectively useless, and for this reason it is preferred not to use an HTTP proxy.
Broker-Director Communication
Communication between Brokers and Directors is via TCP. (See FIG. 85.) By default, the Broker will log in on port 2000, and ephemeral ports will then be assigned for further communication. This configuration does not permit a firewall or screening router between the Brokers and Directors. If a firewall or screening router must be supported between Brokers and Directors, then the firewall or screening router must have the Broker login port (default 2000) open. Additionally, the Brokers must be configured to use static ports for post-login communication, and those ports must be open on the firewall as well.
Direct Data Transfer
By default, LiveCluster uses Direct Data Transfer, or peer-to-peer communication, to optimize data throughput between Drivers and Engines. (See FIGs. 86-87.) Without Direct Data Transfer, all task inputs and outputs must be sent through the Server. Sending the inputs and outputs through the Server will result in higher memory and disk use on the Server, and lower throughput overall.
With Direct Data Transfer, only lightweight messages are sent through the Server, and the "heavy lifting" is done by the Driver and Engine nodes themselves. Direct Data Transfer requires that each peer knows the IP address that it presents to other peers. In most cases, therefore, Direct Data Transfer precludes the use of NAT between the peers. Likewise, Direct Data Transfer does not support proxies.
For LiveCluster deployments where NAT is already in effect, NAT between Drivers and Engines can be supported by disabling peer-to-peer communication as follows:
• If, from the perspective of the Drivers, the Engines appear to be behind an NAT device, then the Engines cannot provide peer-to-peer communication, because they won't know their NAT address. In this case Direct Data Transfer must be disabled in the Engine configuration.
• Likewise, if, from the perspective of the Engines, the Drivers appear to be behind an NAT device, then the Drivers cannot provide peer-to-peer communication, as they do not know their NAT address. In this case Direct Data Transfer must be disabled in the Driver properties.
While the foregoing has described the invention by recitation of its various aspects/features and illustrative embodiment(s) thereof, those skilled in the art will recognize that alternative elements and techniques, and/or combinations and sub-combinations of the described elements and techniques, can be substituted for, or added to, those described herein. The present invention, therefore, should not be limited to, or defined by, the specific apparatus, methods, and articles-of-manufacture described herein, but rather by the appended claims (and others that may be contained in continuing applications), which claims are intended to be construed in accordance with well-settled principles of claim construction, including, but not limited to, the following:
• Limitations should not be read from the specification or drawings into the claims (i.e., if the claim calls for a "chair," and the specification and drawings show a rocking chair, the claim term "chair" should not be limited to a rocking chair, but rather should be construed to cover any type of "chair").
• The words "comprising," "including," and "having" are always open-ended, irrespective of whether they appear as the primary transitional phrase of a claim, or as a transitional phrase within an element or sub-element of the claim (e.g., the claim "a widget comprising: A; B; and C" would be infringed by a device containing 2A's, B, and 3C's; also, the claim"a gizmo comprising: A; B, including X, Y, and Z; and C, having P and Q" would be infringed by a device containing 3A's, 2X*s, 3Y's, Z, 6P's, and Q).
• The indefinite articles "a" or "an" mean "one or more"; where, instead, a purely singular meaning is intended, a phrase such as "one," "only one," or "a single," will appear.
• Where the phrase "means for" precedes a data processing or manipulation "function," it is intended that the resulting means-plus-function element be construed to cover any, and all, computer implementation(s) of the recited "function" using any standard programming techniques known by, or available to, persons skilled in the computer programming arts.
A claim that contains more than one computer-implemented means-plus-function element should not be construed to require that each means-plus-function element must be a structurally distinct entity (such as a particular piece of hardware or block of code); rather, such claim should be construed merely to require that the overall combination of hardware/firmware/software which implements the invention must, as a whole, implement at least the function(s) called for by the claim.

Claims

In light of the above, and reserving all rights to seek additional claims covering the subject matter disclosed above, WHAT WE CLAIM IN THIS APPLICATION IS:
1. A distributed computing system, comprising: a plurality of engines; at least one broker; at least one client application, said client application having an associated driver; said driver configured to enable communication between said client application and two or more of said engines via a peer-to-peer communication network; characterized in that said driver is further configured to enable communication between said client application and said at least one broker over said peer-to-peer network, and said broker is further configured to communicate with said engines over said peer-to-peer network, thereby enabling said broker to control and supervise the execution of tasks provided by said client application on said two or more engines.
2. A distributed computing system, as defined in claim 1, further including at least one failover broker configured to communicate with said driver and said engines, and, in the event of a broker failure, control and supervise the execution of tasks provided by said client application on said two or more engines.
3. A distributed computing system, as defined in claim 1, wherein said broker further includes an adaptive scheduler configured to selectively assign and control the execution of tasks provided by said client application on said engines.
4. A distributed computing system, as defined in claim 3, wherein said adaptive scheduler is further configured to redundantly assign one or more of the task(s) provided by said client application to multiple engines, so as to ensure the timely completion of said redundantly assigned task(s) by at least one of said engines.
5. A distributed computing system, as defined in claim 1, wherein the tasks provided by said client application have associated discriminators.
6. A distributed computing system, as defined in claim 5, wherein said broker utilizes parameters associated with said discriminators and said engines to determine the assignment of tasks to engines.
7. A distributed computing system, as defined in claim 1, wherein said system controls the timing of selected communications between said driver and said engines so as to avoid bottlenecks associated with overloads of said peer-to-peer network.
8. A distributed computing system, as defined in claim 1, wherein said system is further configured to selectively delay certain communications over said peer-to-peer network, thereby avoiding excessive simultaneous network traffic and increasing overall performance of said system.
9. A distributed computing system, as defined in claim 1, wherein said broker and said two or more engines each include an associated propagator object that permits control over engine-to-engine propagation of data over said peer-to-peer network.
10. A distributed computing system, as defined in claim 9, wherein said propagator objects enable an engine or broker node to perform at least three of the following operations:
(i) broadcast a message to all nodes, except the current node;
(ii) clear all message(s), and associated message state(s), on specified broker(s) and/or engine(s);
(iii) get message(s) for the current node;
(iv) get the message(s) from a specified node for the current node;
(v) get the state of a specified node;
(vi) get the total number of nodes;
(vii) send a message to a specified node; and/or,
(viii) set the state of a specified node.
11. A distributed computing system, as defined in claim 9, wherein said propagator objects enable an engine or broker node to perform at least five of the following operations:
(i) broadcast a message to all nodes, except the current node;
(ii) clear all message(s), and associated message state(s), on specified broker(s) and/or engine(s);
(iii) get message(s) for the current node;
(iv) get the message(s) from a specified node for the current node;
(v) get the state of a specified node;
(vi) get the total number of nodes;
(vii) send a message to a specified node; and/or,
(viii) set the state of a specified node.
12. A distributed computing system, as defined in claim 9, wherein said propagator objects enable an engine or broker node to perform at least five of the following operations:
(i) broadcast a message to all nodes, except the current node;
(ii) clear all message(s), and associated message state(s), on specified broker(s) and/or engine(s);
(iii) get message(s) for the current node;
(iv) get the message(s) from a specified node for the current node;
(v) get the state of a specified node;
(vi) get the total number of nodes;
(vii) send a message to a specified node; and/or,
(viii) set the state of a specified node.