WO2001059561A1 - System and method for rapid completion of data processing tasks distributed on a network - Google Patents

System and method for rapid completion of data processing tasks distributed on a network

Info

Publication number
WO2001059561A1
WO2001059561A1 PCT/US2001/003801 US0103801W WO0159561A1 WO 2001059561 A1 WO2001059561 A1 WO 2001059561A1 US 0103801 W US0103801 W US 0103801W WO 0159561 A1 WO0159561 A1 WO 0159561A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
file
routine
task
central processing
Prior art date
Application number
PCT/US2001/003801
Other languages
French (fr)
Other versions
WO2001059561A9 (en)
Inventor
John Joseph Carrasco
Stephan Doliov
Frank B. Ehrenfried
Original Assignee
Overture Services, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Overture Services, Inc. filed Critical Overture Services, Inc.
Priority to JP2001558824A priority Critical patent/JP2003523010A/en
Priority to DE10195549T priority patent/DE10195549T1/en
Priority to AU4145301A priority patent/AU4145301A/en
Priority to CA2400216A priority patent/CA2400216C/en
Priority to GB0219491A priority patent/GB2392997B/en
Priority to EP01912700A priority patent/EP1277108A4/en
Priority to AU2001241453A priority patent/AU2001241453B2/en
Publication of WO2001059561A1 publication Critical patent/WO2001059561A1/en
Publication of WO2001059561A9 publication Critical patent/WO2001059561A9/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40Network security protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/34Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L69/322Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L69/329Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching

Definitions

  • C. Worker routine. For example, the output data file contains data rows of the form 'Key' followed by '# searches', i.e., each key paired with its count of searches.
  • the worker routine would sum up the data values in the column of interest for each key of interest.
  • the worker routines minimize any given task in terms of file input/output and complexity; for example, if a count is requested, the worker routine only receives the key column and the column for the data element to be counted. This allows many worker assignments to occur on any arbitrary number of machines. One worker routine might count the total searches for each advertiser; another might count the number of unique IP addresses that clicked on any given advertiser's listing within a specific time period. When the worker routine finishes its assignment, it writes out a file with the same header format as the input file; however, the values in the hash table describing the columns will be the key descriptor from the input file and the name of the worked-upon data.
  • the output file would have a key of "Advertiser ID" and a column such as "Count of searches".
  • D. Data Reconstruction routine. According to a preferred embodiment of the data reconstruction routine, referred to as monkeyJoin, all fields are reconstructed into one file, organized by key. The reconstruction occurs after the data to be worked on has been preprocessed, i.e., broken up into smaller work units, and the small work units are sent to machines on the network for processing.
  • the data reconstruction facilitates convenient database loading. To accomplish data reconstruction, the data reconstruction routine is given input as to which data files need to be merged into one data file. Each of the data files that will become part of the database load file is supplied as an argument to the data reconstruction routine, in list format. For example, the data reconstruction routine, e.g., monkeyJoin, is called as follows: reconstruct file1 file2 file3 ... fileN
  • For each of the files supplied as an input argument, the reconstruction routine reads the file header information and stores the header information and data in a hash table. Once all of the headers and data have been read, each of the key values is cycled through. For every input file that had a matching key, the corresponding output columns are written. If one of the input files did not have a key entry or a value, the handling of missing or undefined values is invoked and the reconstruction routine supplies an appropriate value, per the notation in the input file header hash table. This file is written out, as with the other files, with header information in the format of a Perl hash table.
  • the hash table contains the same four keys as the hash table headers supplied by the grabber and worker routines.
  • the values for the keys of this hash table include the same basic four keys required by this application: the columns hash table, the key hash table, the column delimiter specification and the hash table of column labels.
  • the CPU and computer memory intensive work occurs in the worker routines which perform the operations of interest, for example, counting unique instances within a class.
  • This CPU- and memory-intensive work ideally is distributed to a number of computers.
  • the present system and method dispatches work to available computers on a network based on the distributing software's known usage load of the computers on the network.
  • the dispatch routine allows one worker or grabber routine to run for each CPU of a computer attached to the network.
  • the dispatcher routine needs information regarding which tasks or task components can be done simultaneously and which ones first require the completion of some other task component.
  • the dispatching routine needs data about the machines which are capable of receiving the work orders, and how many work orders they may receive at once. For example, a four CPU machine could receive four orders at a time, a one CPU machine only one.
  • the routine also stores data about 1) which machine is currently performing how many tasks at any given point in time and 2) which task(s) any of these machines is performing at any given point in time.
  • the dispatch routine can initiate the launching of code for processing the data on a remote machine.
  • the preferred embodiment begins with a data file, written as a Perl hash table, which specifies the names of the available machines on the network and the total number of CPUs on the given machine. Also specified are the last known "busy/idle" state of each of the CPUs on a given machine and the last known start time, in integer format; an integer indicates that a task was started on a CPU, and a blank value indicates that no job is currently running on a given machine's CPU(s). Each machine on the network has one key in the hash table. Each key in this hash table points to a second hash table, the second hash table having key entries for the number of CPUs known for the given machine and the number of CPUs currently occupied doing work for that machine.
  • the outermost key in this set-of-tasks hash table points to one or more hash tables which specify the components of a task, and whether or not these sub-tasks can be performed simultaneously.
  • the keys of this outermost task hash table are simply integers, beginning with the number one, and incrementing by one for each task-set.
  • Each of these numbered tasks points to yet another hash table, which contains keys to represent aspects of the task (such as data preprocessing, data grabbing/counting, data joining etc.).
  • a preferred embodiment wraps these individual task keys inside a hash table whose single present key is named, for example, 'parms'.
  • the 'parms' key points to a hash table with four key entries: 'key', 'name', 'tasks' and 'masterTaskFile'.
  • These keys have the following corresponding values: a descriptor of the column which constitutes the class-level key, e.g., Advertiser ID; a tokenized, that is, predefined, representation of the data preprocessing task (for example, dejournal.lineads, to represent the task list pertaining to the reduction of advertiser listings in an internet search engine); the list of paired "grabbing" and "counting" tasks which can be performed simultaneously; and the name of the output file of the data preprocessing routine.
  • the dispatch routine reads in the control file to identify available machines on the network and the machine's availability. As a task is dispatched to a machine, the dispatching software updates its memory copy of the machine availability hash table. Thus, if a machine has two CPUs and the dispatching routine sent a task to the machine with two CPUs, the dispatching software would increment the number of busy CPUs from 0 to 1, to indicate that one job has been sent to the machine on a network. When the machine performing the worker routine task finishes the task, the dispatcher decrements the busy CPU value by one for the machine that performed the task.
  • the dispatching software sorts the available machines by current tasks assigned to a machine. If machine X on the network has 0 busy CPUs and machine Y has 1 busy CPU, and both machines X and Y have a total of two CPUs, then the dispatching software will preferably first assign a task to machine X. This occurs because machine X has no busy CPUs as far as the dispatching software can determine. Machine X could be running some CPU intensive software without the dispatching routine's knowledge. It is preferred that the computers having the CPUs only have the necessary operating system software running, to prevent the problem of work being sent to a computer whose processor is tied up with a non-germane task, such as a word processing task.
  • the computers only include an operating system, a program interpreter, such as a Perl interpreter, and a secure copying program, if necessary.
  • If all machines on the network are equally busy, the dispatching software sorts the stack of available machines by machine name and assigns tasks in that way. If a machine is fully occupied, the dispatching software removes this machine from the available machine stack until the busy machine reports that it has finished at least one of its assigned tasks.
  • the dispatching software waits for a first time period, for example, a few minutes, to retry task dispatching. If the dispatcher software has tasks queued but cannot find an available machine after a second time period, for example, fifteen minutes, the dispatcher software creates a warning message. This condition might indicate a larger system failure that would require resetting the software system and the tasks.
  • the preferred embodiment of this invention supplies enough hardware on a network so that all pieces of hardware on this network are not likely to be completely busy for any given fifteen minutes.
  • the command set the software assembles is specific to a task, guided by the information provided in the dispatching software's control file.
  • the construction of these commands is specified as follows.
  • the machine creates a name that will uniquely identify a task and the machine on which the task is to be run. This name then gets used as the directory entry mark which the software uses as an indication that a task is either running or completed.
  • the dispatching software uses the syntax of the freely available secure shell utility (also known as ssh) to create the command which will launch a program on a remote computer.
  • the preferred embodiments have the computers on the network access shared disk space, so that the remote computer references the shared disk space for program code and data.
  • both program code and data could be copied to a remote computer's private disk.
  • a remote execution utility could point the worker computer to the new location of the program code and data.
    $machineToUse        = "machine07";
    $programToLaunch     = "monkeyGrab";
    $dataFileToUse       = "AdvertiserReport";
    $programArguments    = "-g AdvertiserID -g $task";
    $programLocation     = "/shared/disk/space/code";
    $dataLocation        = "/shared/disk/space/data";
    $dirEntryFileMark    = "$task.$machineToUse";   # unique task/machine marker (per the description above)
    $remoteExecutionTool = "ssh";
    $remoteExToolArgs    = "-secret /secrets/myKeyFile";
    # the launch command is assembled from the values above (approximate reconstruction)
    $commandSet          = "touch $dirEntryFileMark; " .
                           "$remoteExecutionTool $remoteExToolArgs $machineToUse " .
                           "$programLocation/$programToLaunch $programArguments " .
                           "$dataLocation/$dataFileToUse";
  • If the dispatcher routine's control file indicates that a particular process or process pair can be executed simultaneously, it loops over the steps just described to launch as many processes as are required by the control file and that can be handled by the existing network. If the control file indicates that a process must finish before one or more other processes must begin, the dispatcher routine waits for such a serial task to finish before launching more serial or parallel tasks within a task queue.
  • a preferred embodiment includes a special case of a worker routine, referred to as monkeyLoad.
  • monkeyLoad has the capability of parsing file headers which are in the form of Perl evaluable code.
  • This monkeyLoad routine takes the file header information and creates a set of SQL (structured query language) statements to insert the data which follows the file header into a database. Through a set of standardized and freely available database interfaces for the Perl language, this routine can read the data lines in the output file and insert these as rows into a database. (A sketch of this database-loading idea appears after this list.)
  • the worker routine could also read and evaluate the file header, for example, to produce a control file for another routine, which might have more efficient interactions with a database routine, such as Oracle's SQL loader routine (sqlldr).
  • A requirement of this routine is that the database columns match the column labels provided in the file header. This detail is attended to at the beginning of the process, in the initial control file which a user creates and where the user can specify arbitrary data column labels.
  • Any given worker routine functions by reading in the file header information, evaluating it, and, upon output, creating another file header which has a minimum number of keys (e.g., four) which any of the worker routines needs to function.
  • the one special instance of a worker routine demonstrates that the present system and method can be generalized. Since the worker routines can parse the file header information, worker routines can accomplish many useful things. One can readily recognize that instead of being instructed to count unique instances, a worker routine could be written to add, subtract, divide, multiply, compute standard deviation and so forth. The unique functionality any such routine requires is the ability to evaluate the header information in a data file as executable code that translates into a hash table with a minimum number of keys.
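As an illustration of the database-loading idea described in the items above, the following minimal Perl sketch turns the column labels of an evaluated file header into parameterized SQL INSERT statements through the standard DBI interface. It is not the patent's monkeyLoad code: the header key names ('labels', 'delimiter'), the subroutine name load_rows, and the caller-supplied connection parameters are assumptions made for illustration.

    use strict;
    use warnings;
    use DBI;

    # $header is the hash reference obtained by evaluating a file header;
    # $data_lines is an array reference of delimited data lines; the caller
    # supplies the DBI connection parameters and the target table name.
    sub load_rows {
        my ($header, $data_lines, $dsn, $user, $password, $table) = @_;
        my $dbh = DBI->connect($dsn, $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });
        my @labels = @{ $header->{'labels'} };          # column labels double as database columns
        my $sql = sprintf 'INSERT INTO %s (%s) VALUES (%s)',
                          $table, join(', ', @labels), join(', ', ('?') x @labels);
        my $sth   = $dbh->prepare($sql);
        my $delim = $header->{'delimiter'};
        for my $line (@$data_lines) {
            $sth->execute(split /\Q$delim\E/, $line);   # one INSERT per data row
        }
        $dbh->commit;
        $dbh->disconnect;
    }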

Abstract

A method for running tasks on a network, comprising: creating (200) at least one sub-group of data from a universe of data; identifying (202) the sub-group of data with a header, the header containing executable code; sending (204) the sub-group of data to an available processor; and performing (206) tasks with the available processor to obtain result data using the sub-group of data and instructions contained in the executable code in the header.

Description

SYSTEM AND METHOD FOR CONTROLLING THE FLOW OF DATA
ON A NETWORK
REFERENCE TO APPENDIX
Appendix A, a computer routine listing totaling forty pages, is included.
BACKGROUND
Many commercial enterprises require that large volumes of data be processed in as short a time frame as possible. In recent years, businesses needing to process such large volumes of data have purchased very expensive, specialized multi-processor hardware, often referred to as mainframe computers, supercomputers or massively parallel computers. The cost of such hardware is often in the millions of dollars, with additional costs incurred by support contracts and the need to hire specialized personnel to maintain these systems. Not only is such supercomputing power expensive, but it does not afford the user much control over how any given task gets distributed among the multiple processors. How any computing task gets distributed becomes a function of the operating system of such a supercomputer.
In the field of data processing, often very similar operations are performed on different groups of data. For example, one may want to count the unique instances of a class, e.g., a group of data, for several different classes, know what the arithmetic mean of a given class is, or know what the intersection of two classes may be. In a supercomputing environment, one has to rely on the operating system to make sound decisions on how to distribute the various parts of a task among many central processing units
(CPUs). Today's operating systems, however, are not capable of this kind of decision making in a data processing context.
Thus, there is a need for a system and method that overcomes these deficiencies.
BRIEF SUMMARY OF THE PRESENTLY PREFERRED EMBODIMENTS
According to the preferred embodiments, described is a system and method for allowing multiple processors, for example, a computer's central processing unit(s) (CPU), on a network to perform a varying number and type of data processing tasks. Data processing tasks are provided to any of the available CPUs on a network equipped for the system and method. The system and method choose the first available CPU for performance of the data processing task. The system and method provide the task performing CPU with the minimum amount of data needed to complete the task and the necessary software instructions to complete the task.
More specifically, the preferred system and method allow for improved efficiency, in terms of both cost and time, of data processing. The user of the software considers a given data processing task, provides a set of definitions for the type of data required for each task, and specifies the task for a given group of data. The system and method then divide up the input file into the sub-task data files and ship the given data and task specifications to any available computer on the network. The CPU performs the task and returns the completed result to the computer that requested the task. Thus, large amounts of data can be processed quickly by ordinary, commodity personal computers running conventional operating systems such as Windows NT or Unix. A small cluster of, for example, twelve dual processor computers or twenty-four single processor computers, running the software of the preferred embodiments, can equal, if not exceed, the performance of a supercomputer with an equivalent number of CPUs.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an exemplary system for carrying out the present method in accordance with the preferred embodiments.
FIG. 2 shows an overview of the present method, in accordance with the preferred embodiments. FIG. 3 shows a process flow for how the method constructs a work queue, in accordance with the preferred embodiments.
FIG. 4 shows a process of sending task data and task instructions to a computer on the network, in accordance with the preferred embodiments. FIG. 5 shows a process for reassembling file results once the worker routines have completed their tasks, in accordance with the preferred embodiments.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A preferred embodiment of the present system and method consists of two major components - a plurality of processing units on a network to perform data processing, and software to control the computers to each work on one discrete task within a larger task, also known as a master task. A task is a routine, such as a grabber routine, a worker routine and a reassembling routine, all described in more detail below. Preferred embodiments for both the network hardware and software are described herein. The present system can be used with search engines. One suitable search engine is described in co-pending application Serial No. 09/322,677, filed May 28, 1999 entitled "System and Method for Influencing a Position on a Search Result List Generated by a Computer Network Search Engine," which is incorporated by reference herein.
Referring to the drawings, and particularly FIG. 1, as an overview, a network, generally 100, of processors 102 is shown; the processors 102 perform the routines used in the present system. Those skilled in the art will appreciate that, in addition to a network, the processors 102 can be connected on a LAN, over the Internet, or the like. The processors 102 can be CPUs of commodity computers, super computers, laptops or the like. At least one of the processors 102 acts as a main processor 104. The main processor 104 can be any of the processors 102.
Referring to FIG. 2, the above system can be used to perform the present method. The present method takes a large universe of data and breaks the data down to smaller groups of data that are task specific (block 200). In other words, the smaller groups of data are tailored to include only that data which is needed for a particular task. For example, a user can prepare a task queue and a task data queue. The task queue defines which tasks are to be executed and the order in which the tasks are to be executed. The task data queue defines the data elements required for the completion of data processing tasks and how the data for the data processing tasks are laid out in a file so that the individual sub-tasks may readily be performed. Thereafter, the sub-group is identified with a header (block 202). The header is described in more detail below. The sub-group of data is then sent to an available processor (block 204). The available processor can then perform the required task on the sub-group of data (block 206).
Referring to FIG. 3, a grabber routine, referred to as monkeyGrab, located at the CPU which performs the grabbing task, receives instructions on which data elements to grab from a task data file created previously (block 300). The task data file is defined by a combination of a file header and data used by the routine. The grabber routine creates the file header that describes output data so that the worker routine can act on elements in the data file (block 302). The grabber routine ensures that the only data elements in the data file are those which are required by a subsequent worker task (block 304). The grabber routine is described in more detail below.
Referring to FIG. 4, shown is the process for sending the tasks to the remote host. Software, for example, scans a memory database of available CPUs and the state of the available CPUs (block 400). Preferably, the first CPU in an available state is sent the task instruction file (block 402) and the task data file (block 404). The task instructions include a worker routine such as monkeyCount, described in more detail below. The task data file could be the results of a routine like monkeyDeJournal, described below. Preferably, the controller routine creates a data file and memory entries to keep track of which available host is performing which task (block 406). Upon completing the task, the task routine makes the result data available to the controller routine
(block 408). The controller routine waits for the task routine to indicate completion. Upon successful completion, the controller routine updates the memory data files of available hosts (block 410). Upon unsuccessful completion, however, the controller routine updates the data file and memory entries and reassigns the task to another available CPU.
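The host-selection step described above can be pictured with a short Perl sketch. It is a minimal illustration, not the patent's dispatcher code: the machine names, the hash keys 'totalCPUs' and 'busyCPUs', the program paths and the ssh invocation are assumptions made for illustration.

    use strict;
    use warnings;

    # In-memory availability table: one entry per machine on the network.
    my %machines = (
        'machine01' => { 'totalCPUs' => 2, 'busyCPUs' => 0 },
        'machine02' => { 'totalCPUs' => 2, 'busyCPUs' => 1 },
    );

    # Return the least-busy machine that still has a free CPU (ties broken by name),
    # or undef if every CPU is occupied.
    sub pick_available_host {
        my ($machines) = @_;
        for my $host (sort { $machines->{$a}{'busyCPUs'} <=> $machines->{$b}{'busyCPUs'}
                              || $a cmp $b } keys %$machines) {
            return $host if $machines->{$host}{'busyCPUs'} < $machines->{$host}{'totalCPUs'};
        }
        return undef;
    }

    my $host = pick_available_host(\%machines);
    if (defined $host) {
        $machines{$host}{'busyCPUs'}++;                  # mark one CPU as occupied
        # Launch the worker on the remote host; code and data live on shared disk.
        system("ssh $host /shared/disk/space/code/monkeyCount /shared/disk/space/data/task.dat") == 0
            or warn "dispatch to $host failed\n";
    }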
As shown in FIG. 5, the data file which results from completing the task is sent back to the requesting computer, i.e., the main processor, to be reassembled with resulting data files from other completed tasks (block 500). The main processor constructs a reassembling command to specify the sub-task data files that will be reassembled into one file (block 502). For each sub-task, reassembling code, e.g., monkeyJoin described below, reads and evaluates the file header to identify data columns defining keys and data
(block 504). After having read the result file header for each sub-task, the reassembling routine reads a data line from each of the files and outputs the data values for the key in the data line if the keys match (block 506). If the keys do not match, the reassembling routine reads the file header for how to represent and handle the missing data values. Preferably, all task result files come from the same input data file, so there is no need to handle duplicate keys. The task result data files will contain only one of any given key. Likewise, it is preferred that the task routines are written to output the keys in sorted order, so no further sorting is required. Thereafter, the data is reassembled (block 508).
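The key-matching reassembly of blocks 504-508 can be sketched in a few lines of Perl. This is an illustration only, not the monkeyJoin listing from the Appendix; it assumes each result file has already been parsed into a hash of key to value, and that a per-file default value stands in for missing keys.

    use strict;
    use warnings;

    # Merge several task result sets by key into tab-delimited output rows.
    # @results holds one hash reference per result file; $defaults->[$i] is the
    # value to use when result file $i has no entry for a key.
    sub merge_results {
        my ($defaults, @results) = @_;
        my %all_keys;
        $all_keys{$_} = 1 for map { keys %$_ } @results;
        my @rows;
        for my $key (sort keys %all_keys) {
            my @fields = ($key);
            for my $i (0 .. $#results) {
                push @fields,
                    exists $results[$i]{$key} ? $results[$i]{$key} : $defaults->[$i];
            }
            push @rows, join("\t", @fields);
        }
        return @rows;
    }

    # Example: counts per advertiser from two worker routines; the second set
    # has no row for 'XYZ', so the default of 0 is supplied.
    print "$_\n" for merge_results(
        [0, 0],
        { 'ABC' => 12, 'XYZ' => 3 },
        { 'ABC' => 7 },
    );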
I. Hardware and Network.
Turning to a more specific example of the hardware, the preferred embodiment of the hardware is a set of commodity personal computers. So that a user works within a known environment, it is preferred that each computer contains a current model CPU, the same amount of memory, and at least one high-speed network interface card. Of course, other combinations could be used. For explanatory purposes, a preferred embodiment consists of twelve dual-CPU computers, with each CPU running at 500MHz clock speed, 1 Gigabyte of Random Access Memory, and two 100BaseT ethernet network adapter cards. Each computer in the cluster runs the RedHat Linux operating system. Those skilled in the art will appreciate that other operating systems could be used, such as Microsoft Windows and Windows NT, Sun Solaris, and Macintosh. The twelve computers are connected to each other via a high-speed network switch. Each computer is also equipped with, for example, a SCSI II interface and a nine Gigabyte hard drive. Of course, many more computers could be added to the group without impairing performance.
II. Software
A preferred embodiment of the software consists of several separate computer programs, implemented in a language that is interpreted at run time, for example, the Perl programming language. Those skilled in the art will appreciate that other programming languages that are interpreted at run time could be used such as Java, PHP3, Python and Tcl/Tk.
As an overview, a first computer routine, referred to as the data preprocessing routine, reads the input data file, reads the task data queue control file, performs any number of arbitrary instructions which are specified in the task data queue file, and creates a file header and data file. A second computer routine extracts only the data fields required for a particular data processing task from the data file created by the first routine. The grabber routine creates a file that includes a file header segment and a data segment that a worker routine utilizes. The file header segment of the output data file includes executable or evaluable code. The third routine, the worker routine, performs computations on the data file created by the grabber routine.
Preferably, the worker routine parses the file header created by the grabber routine and then performs a requested operation on the data portion of the file that follows the file header. Parsing includes reading the file header and evaluating the programmed code to generate a hash table that defines the structure of the data to follow. A hash table is a data structure that uses keys that map to data values.
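The parsing step just described can be illustrated with a minimal Perl sketch. It is not the patent's code; it simply assumes the file layout described in this document: a header written as a Perl-evaluable hash table, an "EndOfHeader" token on its own line, and delimited data lines after the token.

    use strict;
    use warnings;

    # Read a task file and return (header hash reference, array reference of data lines).
    sub read_task_file {
        my ($path) = @_;
        open my $fh, '<', $path or die "cannot open $path: $!";
        my $header_text = '';
        while (my $line = <$fh>) {
            last if $line =~ /^EndOfHeader/;    # token separating header from data
            $header_text .= $line;
        }
        # The leading '+' makes Perl treat the outer braces as an anonymous hash
        # rather than a code block when the header text is evaluated.
        my $header = eval('+' . $header_text);
        die "bad header in $path: $@" if $@ or ref $header ne 'HASH';
        my @data = <$fh>;                       # the remainder is the data segment
        chomp @data;
        close $fh;
        return ($header, \@data);
    }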
Since many of the tasks performed by the software are tasks related to some kind of aggregation, e.g., counting the number of instances of a given class, the preferred embodiment of this invention includes a fourth software routine, which allows various task files to be merged. The task files are merged, so that, for a given class, all the aggregate measures are in one data row, which makes loading databases an easy task. Each of the four software routines is treated in more detail below.
A. Data Preprocessing routine.
The data preprocessing routine, referred to in the Appendix as monkeyDeJournal, formats data selected by the user so that the data may easily be rendered into discrete parts required for processing. A preferred embodiment of the data preprocessing routine reads an input data file and a control file, returns groupings of data specified in the control file and outputs a new file which will be read by the other routines of this invention. Reading the input data file is handled via an abstraction mechanism: the data preprocessing routine is pointed to an existing data file parsing routine. The only two assumptions made about the data file parsing routine are that it is capable of 1) identifying groupings of data and 2) returning such groupings of data in a hash table data structure. For example, if the input data file contains groups of data for user clicks on a web site (group: userClick) and groups of data for searches performed by a search engine (group: search), 'userClick' and 'search' would be keys in the hash table of all known data groupings for the input data file. These two grouping keys, in turn, each need to point to another hash table. The names of the keys in this hash table would be the names of the data elements, and the values would be the actual data value for that data element within a particular instance of a group. The group 'userClick' could, for example, contain data elements named 'timeStamp', 'IPAddress', 'rank', and 'AccountID'; the group named 'search' could, for example, contain data elements named 'query', 'timeStamp', and 'resultsFound'. As the data file is read, groups and their individual elements are returned based on the data in the control file which the data preprocessing routine must read.
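For illustration only, the nested hash structure that such a parsing routine might return for one record could look as follows; the field values are invented, and only the group and element names come from the example above.

    my %groupings = (
        'userClick' => {
            'timeStamp' => '2000-02-04 10:15:00',
            'IPAddress' => '192.168.1.17',
            'rank'      => 3,
            'AccountID' => 'ABC',
        },
        'search' => {
            'query'        => 'used cars',
            'timeStamp'    => '2000-02-04 10:14:57',
            'resultsFound' => 42,
        },
    );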
The reading of the control file, and acting upon its data, is handled by the code of the data preprocessing routine. The control file is specified as a hash table evaluable at run time. The keys in this hash table are arbitrary names, with each name representing a task queue. The values for these keys are hash tables also. This nested hash table has three required keys and any number of optional keys. The required keys are named 'event', 'columns', and 'delimiter'. The optional keys will be described later. The value for the key named 'event' specifies a group of data from the input file that contains the data elements of interest. The input data file parser identifies the group name. For example, the value for the key named 'event' could be
'userClick'. The key named 'columns' points to another hash table. The keys of this nested hash table are the arbitrarily assigned column names that any later data processing tasks may need. The value of each column name key is an innermost hash table, which allows the user to specify how to construct the columns of interest for a data processing task. This is done via two keys, one named 'source_args', the other named 'source_routine'. The 'source_args' key specifies the data elements from the data group which are to be used in constructing the desired output field; the 'source_routine' is specified as a valid Perl subroutine. Lastly, the key named 'delimiter' specifies how the data columns of interest are to be separated in the data segment of the output file. In the preferred embodiment, this control file would, at a minimum, be presented as follows:
    {
        'AdvertiserQueue' => {
            'event'   => 'userClick',
            'columns' => {
                'AdListingX' => {
                    'source_args'    => ['AccountID', 'rank',],
                    'source_routine' => 'sub { my($aId)=shift @$_; my($rank)=shift @$_; my($x)="$aId###$rank"; return $x; }',
                },
                'IP_address' => {
                    'source_args'    => ['clientIP',],
                    'source_routine' => 'sub { return shift @$_; }',
                },
            },
            'delimiter' => '\t',
        },
    }
In this example, 'AdvertiserQueue' is the arbitrary name assigned to the task data queue. The data group of interest comes from the group 'userClick'. Two output columns are desired: one referred to as 'AdListingX', the other as 'IP_Address'. The output column 'AdListingX' would consist of the data elements 'AccountID' and 'rank', which are part of the 'userClick' data group.
The final presentation of the 'AdListingX' data column would be in the form of ABC###3, supposing the advertiser ID was 'ABC' and the rank was '3'. The output column 'IP_Address' would merely contain the data value that the data element 'clientIP' has from an instance of the data group 'userClick'. Those skilled in the art will recognize that any valid Perl syntax can be used in the
'source_routine' value to create derivations and modifications to input data fields of interest. The key named 'delimiter' having the value '\t' indicates that the output fields should be separated by a tab character.
In the present instance of this invention, the control file may include three additional, optional keys. These optional keys have the following names: 'deltaT', 'restriction_args', and 'restriction'. The 'deltaT' key provides information to the data preprocessing routine about the earliest minute within an hour for which the output of the data preprocessing file should contain data. Legal values for this key are numbers between 0 and 59 inclusive; single digit numbers are preceded with a zero (i.e. if 1 is the value, spell out
'01'). The 'restriction_args' key works just like the 'source_args' key previously mentioned. The value for this key provides input arguments to a user defined function. The elements of the list must be names of data elements within the data group of interest. The 'restriction' key value is a valid Perl subroutine. For example,
    {
        'deltaT'           => '09',
        'restriction_args' => ['bid',],
        'restriction'      => 'sub { my($x)=shift @$_; return ($x > 0); }',
    }
specifies that the first data groups of interest to be included in the output file should occur no sooner than nine minutes after the first hour of data seen by the file parsing routine. The only data group instances to be returned are those whose 'bid' element has a value greater than zero. The 'bid' element is passed to the user defined function specified via the 'restriction' key. Once the data preprocessing routine has read the control file and evaluated the contents, it creates an output file that begins with a file header segment. The file header segment is written in the form of a Perl evaluable hash table. This file header segment has four required keys and three optional keys that are discussed in the next section (B. Grabber Routine). After having output the file header, the data preprocessing routine enters a token named "EndOfHeader" on the next output line. At this point, any instances of data groupings which meet any restriction criteria are assembled according to the rules specified in the control file and then written out to the data portion of the output file, with the columns being delimited by the delimiting character specified in the control file.
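Purely as an illustration, an output file produced for the 'AdvertiserQueue' example could look like the listing below. The patent does not name the required header keys; the key names shown here ('columns', 'key', 'labels', 'delimiter') and the per-column entries are assumptions, only the four required keys are shown, and the two invented data rows use a tab between fields.

    {
        'columns' => {
            'AdListingX' => { 'location' => 0, 'null' => 'unknown' },
            'IP_address' => { 'location' => 1, 'null' => 'unknown' },
        },
        'key'       => 'AdListingX',
        'labels'    => [ 'AdListingX', 'IP_address' ],
        'delimiter' => '\t',
    }
    EndOfHeader
    ABC###3	192.168.1.17
    XYZ###1	192.168.1.18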
B. Grabber routine. According to a preferred embodiment of the grabber routine, referred to as monkeyGrab, the preprocessor routine takes the data on which the user wishes computations to be performed and places these data in a file. Thereafter, other routines that can read the file header containing executable code, and execute that code, can read the data to be acted upon. Thus, a requirement of the grabber routine is the ability to read the file header created by the preprocessing routine so it can extract the appropriate data elements. Since the grabber routine also writes the minimum amount of data needed for a given task to a task file, the grabber routine writes out a file header that is bound by rules similar to those governing the preprocessing routine's file headers.
The grabber routine grabs data columns based on column names that are provided through input arguments delivered to the grabber routine. For example, a data file may contain a column containing a price value, such as a search result bid column, and a column containing a class descriptor, a key field that may consist of one or many fields. In such a case, the grabber routine could be invoked as follows: "grab -g price -g myKeyField". The data file from which the grabber routine works has, in its file header, entries for the columns named "price" and "myKeyField". The file header from the preprocessing routine should contain the appropriate header entries, i.e., the key names for the hash table which describes the data columns. The grabber routine reads the header information from the data file to obtain the locations of the columns within the data file, the character(s) which delimit the data columns, and any special handling rules, such as how to treat or value an empty column location. Once the grabber has ascertained the column locations and what processing rules are required for a given column, the grabber routine loads those columns of interest and places them in an output file.
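A minimal sketch of this header-driven extraction, assuming the input file's header describes columns named "price" and "myKeyField" in the hypothetical layout used in the preceding sketch, and with an invented input file name, might read as follows in Perl:

use strict;
use warnings;

my @wanted = ('price', 'myKeyField');   # e.g. from "grab -g price -g myKeyField"

open my $in, '<', 'preprocessed.out' or die "cannot read: $!";

# Read the header segment up to the EndOfHeader token and evaluate it.
my $header_text = '';
while (my $line = <$in>) {
    last if $line =~ /^EndOfHeader$/;
    $header_text .= $line;
}
my $header = eval $header_text;
die "bad header: $@" unless ref($header) eq 'HASH';

# Translate the written form of the delimiter ('\t') into the actual character.
my $delim     = $header->{'delimiter'} eq '\t' ? "\t" : $header->{'delimiter'};
my @positions = map { $header->{'columns'}{$_}{'position'} } @wanted;

# Pull only the requested columns out of each data row.
while (my $line = <$in>) {
    chomp $line;
    my @fields = split /\Q$delim\E/, $line, -1;
    print join($delim, @fields[@positions]), "\n";
}
close $in;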
The output file, written by the grabber routine, has header information pertinent to the extracted columns. For explanatory purposes, the header is in the form of a Perl hash table with four keys in it. The four keys correspond to four of the seven keys included with the output of the preprocessor routine. These four keys describe the data columns; which of the data columns, or group of columns, makes a given data row unique; the data column labels; and the output data field delimiter. The key for the data columns points to a second hash table. The second hash table has as its keys column names, and as its values a hash table with two keys. The first key of the innermost hash table describes the data column's location in the data portion of the file, where the innermost hash table is the most deeply embedded hash table in a series of hash tables. The second key of the innermost hash table describes how to represent null data values.
The key for the column which uniquely describes any given data row must have as its value the name of the column that makes the data row unique; this name is also a key in the hash table of columns. The key for the data column labels has as its value a Perl list of those labels. Lastly, the key describing the column delimiter has a value corresponding to the column delimiter. If this column delimiter includes any characters which are escape sequences in Perl, then these escape characters are preceded by a backslash ("\") character. The preferred embodiment places a token at the end of the file header, so that the software knows when it is done reading header information and when it can begin reading data.
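By way of illustration only, and continuing the "grab -g price -g myKeyField" example, such an output file header might read as follows, where 'columns', 'key', 'labels' and 'delimiter' are illustrative key names, 'position' and 'null' are illustrative names for the innermost location and null-handling keys, and the end-of-header token is shown as the same "EndOfHeader" token used by the preprocessing routine:

{ 'columns' => {
      # 'position' and 'null' are illustrative names for the location
      # and null-handling entries described above
      'price'      => { 'position' => 0, 'null' => '0', },
      'myKeyField' => { 'position' => 1, 'null' => 'unknown', },
  },
  'key'       => 'myKeyField',
  'labels'    => [ 'price', 'myKeyField', ],
  'delimiter' => '\t',
}
EndOfHeader

The '\t' value illustrates the rule that a delimiter character which is a Perl escape sequence is written with a leading backslash.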
C. Worker routine. A preferred embodiment of the worker routine(s), referred to as monkeyCount, much like the preferred embodiments of the preprocessor and the grabber routines, reads the data file headers and also outputs such file headers. Like the grabber routine, the worker routine reads the file header to determine which columns it is reading, in what order those columns appear in the data segment of the input file, and which of the input data columns constitute the class, or key, definition. Upon reading this information, the worker routine performs the desired operation on the input data. If the worker routine is a counting routine, it outputs, for each class, a count of the number of rows which belong to that class, along with a descriptor of the class. For example, if the input data consists of seven rows and two columns, a key and an item to be counted for that key, as follows:

Key      Search Term
Adv01    dog
Adv03    cat
Adv05    house
Adv03    mouse
Adv01    travel
Adv05    music
Adv01    sound
then the output data file contains the following data rows:

Key      # searches
Adv01    3
Adv03    2
Adv05    2
Likewise, if the worker routine were an adding program, it would sum up the data values in the column of interest for each key of interest. The worker routines minimize any given task in terms of file input/output and complexity; for example, if a count is requested, the worker routine receives only the key column and the column of data elements to be counted. This allows many worker assignments to occur on any arbitrary number of machines. One worker routine might count the total searches for each advertiser; another might count the number of unique IP addresses that clicked on any given advertiser's listing within a specific time period. When the worker routine finishes its assignment, it writes out a file with the same header format as the input file; however, the values in the hash table describing the columns will be the key descriptor from the input file and the name of the worked-upon data. For example, if the input data file had a key of "Advertiser ID" and a column of "Search Term", and the worker routine was set up to count the number of searches which returned a given advertiser, the output file would have a key of "Advertiser ID" and a column of "Count of Searches".
D. Data Reconstruction routine. According to a preferred embodiment of the data reconstruction routine, referred to as monkeyJoin, all fields are reconstructed into one file, organized by key. The reconstruction occurs after the data to be worked on has been preprocessed, i.e., broken up into smaller work units, and the small work units have been sent to machines on the network for processing. The data reconstruction facilitates convenient database loading. To accomplish data reconstruction, the data reconstruction routine is given input as to which data files need to be merged into one data file. Each of the data files that will become part of the database load file is supplied as an argument to the data reconstruction routine, in list format. For example, the data reconstruction routine, e.g., monkeyJoin, is called as follows: reconstruct file1 file2 file3 ... fileN
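The merge itself is described in the next paragraph; as a rough illustration only, and assuming the hypothetical header layout of the earlier sketches with one key column and one worked-upon column per input file, a join-by-key of this kind could be sketched as:

use strict;
use warnings;

# Illustrative join-by-key; file names come from the command line, e.g.
#   perl reconstruct.pl file1 file2 file3
my @files = @ARGV;
my %parsed;
$parsed{$_} = read_worked_file($_) for @files;

# Collect every key value seen in any of the input files.
my %all_keys;
for my $file (@files) {
    $all_keys{$_} = 1 for keys %{ $parsed{$file}{'data'} };
}

# For each key, write one merged row; a file with no entry for the key
# falls back to the null representation recorded in that file's header.
for my $key (sort keys %all_keys) {
    my @row = ($key);
    for my $file (@files) {
        my $p = $parsed{$file};
        push @row, exists $p->{'data'}{$key} ? $p->{'data'}{$key} : $p->{'null'};
    }
    print join("\t", @row), "\n";
}

# Reads one worker output file: returns its per-key data values and the
# null representation to substitute when a key is absent. Assumes the file
# holds the key column plus exactly one worked-upon column.
sub read_worked_file {
    my ($name) = @_;
    open my $in, '<', $name or die "cannot read $name: $!";
    my $header_text = '';
    while (my $line = <$in>) {
        last if $line =~ /^EndOfHeader$/;
        $header_text .= $line;
    }
    my $header = eval $header_text or die "bad header in $name: $@";
    my $delim      = $header->{'delimiter'} eq '\t' ? "\t" : $header->{'delimiter'};
    my $key_name   = $header->{'key'};
    my ($val_name) = grep { $_ ne $key_name } keys %{ $header->{'columns'} };
    my ($key_pos, $val_pos) =
        map { $header->{'columns'}{$_}{'position'} } ($key_name, $val_name);
    my %data;
    while (my $line = <$in>) {
        chomp $line;
        my @f = split /\Q$delim\E/, $line, -1;
        $data{ $f[$key_pos] } = $f[$val_pos];
    }
    close $in;
    return { 'data' => \%data,
             'null' => $header->{'columns'}{$val_name}{'null'} };
}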
For each of the files supplied as an input argument, the reconstruction routine reads the file header information and stores the header information and data in a hash table. Once all of the headers and data have been read, each of the key values is cycled through. For every input file that had a matching key, the corresponding output columns are written. If one of the input files did not have a key entry or a value, the handling of missing or undefined values is invoked and the reconstruction routine supplies an appropriate value, per the notation in the input file header hash table. This file is written out, like the other files, with header information in the format of a Perl hash table. The hash table contains the same four keys as the hash table headers supplied by the grabber and worker routines. These are the same basic four keys required by this application: the columns hash table, the key designation, the column delimiter specification and the column labels.
E. Dispatching routine. According to a preferred embodiment of the workload distributing routine, referred to as monkeyDispatcher, the CPU and computer memory intensive work occurs in the worker routines which perform the operations of interest, for example, counting unique instances within a class. This CPU and memory intensive work ideally is distributed to a number of computers. Preferably, the present system and method dispatches work to available computers on a network based on the distributing software's known usage load of the computers on the network. The dispatch routine allows one worker or grabber routine to run for each CPU of a computer attached to the network. For example, if there are twenty-four counting operations to be performed, and there are twelve computers each equipped with two CPUs, two worker operations can be farmed off to each of the twelve computers for simultaneous processing to ensure the most rapid possible completion of the counting tasks. In addition, the dispatcher routine needs information regarding which tasks or task components can be done simultaneously and which ones first require the completion of some other task component.
Thus, the dispatching routine needs data about the machines which are capable of receiving the work orders, and how many work orders they may receive at once. For example, a four CPU machine could receive four orders at a time, a one CPU machine only one. The routine also stores data about 1) which machine is currently performing how many tasks at any given point in time and 2) which task(s) any of these machines is performing at any given point in time. Lastly, the dispatch routine can initiate the launching of code for processing the data on a remote machine.
The preferred embodiment begins with a data file, written as a Perl hash table, which specifies the names of the available machines on the network and the total number of CPUs on each given machine. Also specified is the last known "busy/idle" state of each of the CPUs on a given machine, for example as a last known start time in integer format, where an integer indicates that a task was started on a CPU and a blank value indicates that no job is currently running on a given machine's CPU(s). Each machine on the network has one key in the hash table. Each key in this hash table points to a second hash table, the second hash table having key entries for the number of CPUs known for the given machine and the number of CPUs currently occupied doing work for that machine. On the first construction of the hash table, the value for CPUs currently occupied doing work is zero. The task queues are also specified in the data file. The outermost key in this task-set hash table points to one or more hash tables which specify the components of a task, and whether or not these sub-tasks can be performed simultaneously. The keys of this outermost task hash table are simply integers, beginning with the number one and incrementing by one for each task-set. Each of these numbered tasks points to yet another hash table, which contains keys to represent aspects of the task (such as data preprocessing, data grabbing/counting, data joining, etc.). A preferred embodiment wraps these individual task keys inside a hash table whose single present key is named, for example, 'parms'. The 'parms' key points to a hash table with four key entries: 'key', 'name', 'tasks' and 'masterTaskFile'. These keys have the following corresponding values: a descriptor of the column which constitutes the class level key, e.g., Advertiser ID; a tokenized, that is, predefined, representation of the data preprocessing task (for example, dejournal.lineads to represent the task list pertaining to the reduction of advertiser listings in an internet search engine); the list of paired "grabbing" and "counting" tasks which can be performed simultaneously; and the name of the output file of the data preprocessing routine.
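Purely as an illustration of the structure just described, such a data file might contain the following; the 'machines' and 'tasks' wrapper keys, the 'totalCPUs' and 'busyCPUs' key names and the file name values are invented for the example, whereas 'parms' and its four keys follow the description above:

{ 'machines' => {
      # illustrative key names; each machine records its known CPU count and
      # the number of CPUs currently occupied (zero when first constructed)
      'machine01' => { 'totalCPUs' => 2, 'busyCPUs' => 0, },
      'machine02' => { 'totalCPUs' => 4, 'busyCPUs' => 0, },
  },
  'tasks' => {
      '1' => { 'parms' => {
                   'key'            => 'Advertiser ID',
                   'name'           => 'dejournal.lineads',
                   'tasks'          => [ ['monkeyGrab', 'monkeyCount'],
                                         ['monkeyGrab', 'monkeyCount'], ],
                   'masterTaskFile' => 'AdvertiserReport',
               },
             },
  },
}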
According to a preferred embodiment, the dispatch routine reads in the control file to identify available machines on the network and the machine's availability. As a task is dispatched to a machine, the dispatching software updates its memory copy of the machine availability hash table. Thus, if a machine has two CPUs and the dispatching routine sent a task to the machine with two CPUs, the dispatching software would increment the number of busy CPUs from 0 to 1, to indicate that one job has been sent to the machine on a network. When the machine performing the worker routine task finishes the task, the dispatcher decrements the busy CPU value by one for the machine that performed the task.
With this mechanism in place, prior to assigning tasks to machines on the network, the dispatching software sorts the available machines by the number of tasks currently assigned to each machine. If machine X on the network has 0 busy CPUs and machine Y has 1 busy CPU, and both machines X and Y have a total of two CPUs, then the dispatching software will preferably first assign a task to machine X. This occurs because machine X has no busy CPUs as far as the dispatching software can determine. Machine X could be running some CPU intensive software without the dispatching routine's knowledge. It is preferred that the computers having the CPUs run only the necessary operating system software, to prevent the problem of work being sent to a computer whose processor is tied up with a non-germane task, such as a word processing task. In the preferred embodiment, the computers only include an operating system, a program interpreter, such as a Perl interpreter, and a secure copying program, if necessary. If all machines on the network are equally busy, the dispatching software sorts the stack of available machines by machine name and assigns tasks in that order. If a machine is fully occupied, the dispatching software removes this machine from the available machine stack until the busy machine reports that it has finished at least one of its assigned tasks.
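A minimal sketch of that sorting step, reusing the illustrative 'totalCPUs' and 'busyCPUs' entries assumed earlier, might read:

use strict;
use warnings;

# Hypothetical in-memory copy of the machine availability table.
my %machines = (
    'machineX' => { 'totalCPUs' => 2, 'busyCPUs' => 0 },
    'machineY' => { 'totalCPUs' => 2, 'busyCPUs' => 1 },
);

# Keep only machines with at least one free CPU, then prefer the least busy;
# ties are broken by machine name, as described above.
my @available = grep { $machines{$_}{'busyCPUs'} < $machines{$_}{'totalCPUs'} }
                keys %machines;
my ($next) = sort {
    $machines{$a}{'busyCPUs'} <=> $machines{$b}{'busyCPUs'} || $a cmp $b
} @available;

print "dispatch next task to $next\n";   # machineX in this example
$machines{$next}{'busyCPUs'}++;          # record that one more CPU is now busy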
If all machines are busy, the dispatching software waits for a first time period, for example, a few minutes, before retrying task dispatching. If the dispatcher software has tasks queued but cannot find an available machine after a second time period, for example, fifteen minutes, the dispatcher software creates a warning message. This condition might indicate a larger system failure that would require resetting the software system and the tasks. The preferred embodiment of this invention supplies enough hardware on a network so that all pieces of hardware on this network are not likely to be completely busy for any given fifteen-minute period. Once the dispatching software identifies the machine to be assigned a given task, the dispatching software begins assembling a set of commands to be executed on a remote computer. The command set the software assembles is specific to a task, guided by the information provided in the dispatching software's control file. The construction of these commands proceeds as follows. The dispatching software creates a name that will uniquely identify a task and the machine on which the task is to be run. This name then gets used as the directory entry mark which the software uses as an indication that a task is either running or completed. After constructing the unique name, the dispatching software uses the syntax of the freely available secure shell utility (also known as ssh) to create the command which will launch a program on a remote computer. Those skilled in the art will recognize that other existing utilities, such as remote shell execution (also known as rsh), could as readily be used.
In its present form, the preferred embodiment has the computers on the network access shared disk space, so that the remote computer references the shared disk space for program code and data. Again, those skilled in the art will recognize that using existing network and remote execution tools, both program code and data could be copied to a remote computer's private disk. Thereafter, a remote execution utility could point the worker computer to the new location of the program code and data. Lastly, the dispatching software adds the syntax of the commands to remove, upon completion of the remotely executed task, the file mark it created. Once this syntax is constructed, the dispatcher routine creates a copy of itself (known as forking) and overwrites this copy with a call to the constructed commands (known as a fork-exec combination). Pseudo-code is used to illustrate this process:

$task = "NumUniqueUsers";
$machineToUse = "machine07";
$programToLaunch = "monkeyGrab";
$dataFileToUse = "AdvertiserReport";
$programArguments = "-g AdvertiserID -g $task";
$programLocation = "/shared/disk/space/code";
$dataLocation = "/shared/disk/space/data";
$dirEntryMark = "$task.$machineToUse." . `date`;
$remoteExecutionTool = "ssh";
$remoteExToolArgs = "-secret /secrets/myKeyFile";
$commandSet = "touch $dirEntryMark; $remoteExecutionTool $remoteExToolArgs $machineToUse '$programToLaunch $programArguments $dataFileToUse'; x=\$?; if [ \$x -eq 0 ]; then rm $dirEntryMark; fi";
fork() or exec("$commandSet");
If the dispatcher routine's control file indicates that a particular process or process pair can be executed simultaneously, it loops over the steps just described to launch as many processes as are required by the control file and as can be handled by the existing network. If the control file indicates that a process must finish before one or more other processes begin, the dispatcher routine waits for such a serial task to finish before launching more serial or parallel tasks within a task queue.
F. Special Cases and general extensibility
A preferred embodiment includes a special case of a worker routine, referred to as monkeyLoad. Like all the other worker routines that can be created in the framework of the present system and method, monkeyLoad has the capability of parsing file headers which are in the form of Perl evaluable code. This monkeyLoad routine takes the file header information and creates a set of SQL (structured query language) statements to insert the data which follows the file header into a database. Through a set of standardized and freely available database interfaces for the Perl language, this routine can read the data lines in the output file and insert these as rows into a database. Those skilled in the art will recognize that the worker routine could also read and evaluate the file header, for example, to produce a control file for another routine, which might have more efficient interactions with a database routine, such as Oracle's SQL loader routine (sqlldr). The special requirement of this routine is that the database columns match the column labels provided in the file header. This detail is attended to at the beginning of the process, in the initial control file which a user creates and where the user can specify arbitrary data column labels. Any given worker routine functions by reading in the file header information, evaluating it, and, upon output, creating another file header which has the minimum number of keys, e.g., four, which any of the worker routines needs to function.
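As an illustration of such a loading worker, the following sketch uses the standard Perl DBI module, which is one such freely available database interface; the table name, connection string and credentials are hypothetical, and the header is assumed to use the 'labels' and 'delimiter' keys of the earlier sketches rather than the preferred embodiment's exact key names.

use strict;
use warnings;
use DBI;

open my $in, '<', 'joined.out' or die "cannot read: $!";

# Read and evaluate the file header, as every routine in this framework does.
my $header_text = '';
while (my $line = <$in>) {
    last if $line =~ /^EndOfHeader$/;
    $header_text .= $line;
}
my $header = eval $header_text or die "bad header: $@";
my $delim  = $header->{'delimiter'} eq '\t' ? "\t" : $header->{'delimiter'};
my @labels = @{ $header->{'labels'} };

# Build an INSERT statement whose column names are the header's column labels.
my $sql = sprintf 'INSERT INTO advertiser_report (%s) VALUES (%s)',
    join(', ', @labels), join(', ', ('?') x @labels);

my $dbh = DBI->connect('dbi:Oracle:reports', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });
my $sth = $dbh->prepare($sql);

# Insert each data line of the file as one database row.
while (my $line = <$in>) {
    chomp $line;
    $sth->execute(split /\Q$delim\E/, $line, -1);
}
$dbh->commit;
$dbh->disconnect;
close $in;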
The one special instance of a worker routine demonstrates that the present system and method can be generalized. Since the worker routines can parse the file header information, worker routines can accomplish many useful things. One can readily recognize that instead of being instructed to count unique instances, a worker routine could be written to add, subtract, divide, multiply, compute standard deviation and so forth. The unique functionality any such routine requires is the ability to evaluate the header information in a data file as executable code that translates into a hash table with a minimum number of keys.
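To make that generalization concrete, the following sketch shows how the same per-class skeleton could host more than one operation; the dispatch-table arrangement, the operation names and the in-memory input are illustrative conveniences rather than features of the preferred embodiment.

use strict;
use warnings;

# A table of per-class operations; adding a new worker amounts to adding one
# more entry that folds a column value into an accumulator for its class.
my %operations = (
    'count' => sub { my ($acc, $value) = @_; return ($acc || 0) + 1;      },
    'sum'   => sub { my ($acc, $value) = @_; return ($acc || 0) + $value; },
);

# Invented input: (key, value) pairs that would normally come from a data file.
my ($op_name, $key_value_pairs) =
    ('sum', [ [ 'Adv01', 10 ], [ 'Adv03', 5 ], [ 'Adv01', 2 ] ]);

my $op = $operations{$op_name} or die "unknown operation $op_name";
my %result;
for my $pair (@$key_value_pairs) {
    my ($key, $value) = @$pair;
    $result{$key} = $op->($result{$key}, $value);
}
print "$_\t$result{$_}\n" for sort keys %result;   # Adv01 12, Adv03 5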
Although the invention has been described and illustrated with reference to specific illustrative embodiments thereof, it is not intended that the invention be limited to those illustrative embodiments. Those skilled in the art will recognize that variations and modifications can be made without departing from the true scope and spirit of the invention as defined by the claims that follow. It is therefore intended to include within the invention all such variations and modifications as fall within the scope of the appended claims and equivalents thereof.

Claims

WHAT IS CLAIMED IS:
1. A method for running tasks on a network, comprising: creating at least one sub-group of data from a universe of data; identifying the sub-group of data with a header, the header containing executable code; sending the sub-group of data to an available processor; and performing tasks with the available processor to obtain result data using the sub-group of data and instructions contained in the executable code in the header.
2. The method according to claim 1, further including returning the result data files for each data sub-group to a main processor for possible further processing.
3. The method according to claim 2, wherein returned result data is reconstructed to form a single result data file which consists of many individual result data files.
4. A method for preprocessing data, comprising: reading an input data file; placing the input data into a data structure; selecting data elements from the data structure; outputting a file header that describes the selected data elements into a file; and outputting the selected data elements into the file.
5. The method according to claim 4, wherein the data structure is a hash table.
6. The method according to claim 4, wherein the data elements selected are chosen by evaluating a block of program code, such as hash data, at run time.
7. The method according to claim 4, wherein the file header is a hash table that can be evaluated at run time.
8. A method for extracting data, comprising: reading a file header containing executable code; executing the code to determine data to extract from a universe of data; and obtaining the data to be extracted from a data structure.
9. The method according to claim 8, further including loading the extracted data to an output file.
10. The method according to claim 8, wherein the data structure is a hash table.
11. A method for processing data, comprising: reading a file header that contains executable code; executing the executable code; determining a desired operation from the executed code; and performing the desired operation on data.
12. The method according to claim 11, further including outputting result data after performing the desired operation.
13. The method according to claim 11, wherein the desired operation is written in a language that is executed at run time.
14. The method according to claim 13, wherein the desired operation is any one selected from the group of addition, subtraction, multiplication, division, counting total number of instances in a class, enumerating the unique instances of a class, computing descriptive statistics such as the standard deviation, standard error of the mean, median, arithmetic mean, variance, covariance, correlation coefficient, and odds ratio.
15. A method for dispatching a task and data to a central processing unit located on a network, comprising: executing code placed in a file header; determining if at least one, if any, central processing unit is available on the network; and dispatching the task to the central processing unit based on availability.
16. The method according to claim 15, wherein the availability of the central processing unit is determined by reading a control file.
17. The method according to claim 16, wherein the control file is formatted for evaluation as a hash table at run time.
18. The method according to claim 16, further including updating the control file by re-writing a new control file which indicates the status of the central processing units.
19. The method according to claim 16, further including: assembling at least one command to be executed by the central processing unit; and sending the at least one command to the central processing unit for execution.
20. The method according to claim 19, wherein the at least one command includes creating a name to identify the task.
21. The method according to claim 15, further including marking a state of the task.
22. The method according to claim 21, wherein the state of the task is either running or completed.
23. The method according to claim 15, further including retrying, after a time period has elapsed, to determine the availability of the at least one central processing unit if all central processing units were busy.
24. The method according to claim 23, wherein the time period can be specified by a user.
25. The method according to claim 24, wherein the time period is fifteen minutes.
26. A method for loading data, comprising: reading a file header containing executable code; executing the code to obtain file header information; and creating a structured query language statement based on the file header information.
27. A method for running tasks on a network, comprising: creating at least one sub-group of data from a universe of data; identifying the sub-group of data with a header, the header containing executable code; sending the sub-group of data to an available processor; performing tasks with the available processor to obtain result data using the sub-group of data and instructions contained in the executable code in the header; and returning the result data to a main processor, wherein returned result data is reconstructed to form a result.
28. A method for running tasks on a network, comprising: reading an input data file; placing the input data into a data structure; selecting data elements from the data structure; outputting a file header that describes the selected data elements into a file; outputting the selected data elements into the file; reading the file header containing executable code; and executing the code to determine data to extract from a universe of data.
29. The method according to claim 28, further including loading the extracted data to an output file.
30. The method according to claim 28, wherein the data structure is a hash table.
31. A method for running tasks on a network, comprising: reading a file header that contains executable code; executing the executable code; determining a desired operation from the executed code; performing the desired operation on data; outputting result data after performing the operation.
32. The method according to claim 31, wherein the desired operation is written in a language that is executed at run time.
33. The method according to claim 32, wherein the desired operation is any one selected from the group of addition, subtraction, multiplication, division, counting total number of instances in a class, enumerating the unique instances of a class, computing descriptive statistics such as the standard deviation, standard error of the mean, median, arithmetic mean, variance, covariance, correlation coefficient, and odds ratio.
34. A method for dispatching a task and data to a central processing unit located on a network, comprising: executing code placed in a file header; determining if at least one, if any, central processing unit is available on the network; retrying, after a time period has elapsed, to determine the availability of the at least one central processing unit if all central processing units were busy; dispatching the task to the central processing unit based on availability; and assembling at least one command to be executed by the central processing unit, wherein the at least one command includes creating a name to identify the task.
35. The method according to claim 34, wherein the availability of the central processing unit is determined by reading a control file.
36. The method according to claim 35, further including the step of updating the control file.
37. The method according to claim 34, further including marking a state of the task.
38. The method according to claim 37, wherein the state of the task is either running or completed.
39. The method according to claim 34, wherein the time period can be determined by a user.
40. The method according to claim 39, wherein the time period is fifteen minutes.
41. A system for running tasks on a network, comprising: a data preprocessing routine to create at least one sub-group of data from a universe of data; a data header containing executable code that identifies the sub-group of data; and a dispatch routine to send the sub-group of data to an available processor, wherein the available processor performs to obtain result data using the sub-group of data and instructions contained in the executable code in the header.
42. The system according to claim 41, further including a main processor to collect the result data files for each data sub-group.
43. The system according to claim 42, further including a data reconstructing routine to reconstruct the result data to form a result.
44. A system for preprocessing data, comprising: a processor to read an input data file; and a preprocessing routine to place the input data into a data structure, select data elements from the data structure, output a file header that describes the selected data elements into a file, and output the selected data elements into the file.
45. The system according to claim 44, wherein the data structure is a hash table.
46. The system according to claim 45, wherein the data elements selected are chosen by evaluating a block of program code such as a hash table.
47. The system according to claim 41, wherein the file header is a hash table that can be evaluated at run time.
48. A system for extracting data, comprising: a processor to read a file header containing executable code, wherein the processor executes the code to determine data to extract from a universe of data; and a data grabber routine to obtain the data to be extracted from a data structure.
49. The system according to claim 48, further including a dispatch routine to load the extracted data to an output file.
50. The system according to claim 48, wherein the data structure is a hash table.
51. A system for processing data, comprising: a processor to read a file header that contains executable code; and an operating system supporting an interpreted language to execute the executable code, wherein the processor determines a desired operation from the executed code and performs a desired operation on data.
52. The system according to claim 51, wherein the processor outputs result data after performing the desired operation.
53. The system according to claim 51, wherein the desired operation is written in a language that is executed at run time.
54. The system according to claim 53, wherein the desired operation is any one selected from the group of addition, subtraction, multiplication, division, counting total number of instances in a class, enumerating the unique instances of a class, computing descriptive statistics such as the standard deviation, standard error of the mean, median, arithmetic mean, variance, covariance, correlation coefficient, and odds ratio.
55. A system for dispatching a task and data to a central processing unit located on a network, comprising: an operating system supporting an interpreted language to execute code placed in a file header; and a dispatching routine to determine at least one, if any, central processing unit available on the network, and to dispatch the task to the central processing unit based on availability.
56. The system according to claim 55, wherein the availability of the central processing unit is determined by reading a control file.
57. The system according to claim 56, wherein the control file is formatted for evaluation as a hash table at run time.
58. The system according to claim 56, wherein the dispatching routine updates the control file by re-writing a new control file which indicates the status of the central processing units.
59. The system according to claim 55, wherein the dispatching routine assembles at least one command to be executed by the central processing unit, and sends the at least one command to the central processing unit for execution.
60. The system according to claim 59, wherein the command includes creating a name to identify the task.
61. The system according to claim 55, wherein the dispatching routine marks a state of the task.
62. The system according to claim 61, wherein the state of the task is either running or completed.
63. The system according to claim 55, wherein the dispatching routine retries, after a time period has elapsed, to determine the availability of the at least one central processing unit if all central processing units were busy.
64. The system according to claim 63, wherein the time period can be determined by a user.
65. The system according to claim 64, wherein the time period is fifteen minutes.
66. A system for loading data, comprising: a processor to read a file header containing executable code and execute the code to obtain file header information; and a preprocessing routine to create a structured query language statement based on the file header information.
67. A system for running tasks on a network, comprising: a preprocessing routine to create at least one sub-group of data from a universe of data; a data header containing executable code that identifies the sub-group of data; and a dispatch routine to send the sub-group of data to an available processor, wherein the available processor performs to obtain result data using the sub-group of data and instructions contained in the executable code in the header; and a main processor to collect returned result data, wherein returned result data is reconstructed to form a result.
68. A system for running tasks on a network, comprising: a processor to read an input data file; a preprocessing routine to place the input data into a data structure, select data elements from the data structure, output a file header that describes the selected data elements into a file, and output the selected data elements into the file, and the processor reads the file header containing executable code, wherein the processor executes the code to determine data to extract from a universe of data; and a data grabber routine to obtain the data to be extracted from a table.
69. The system according to claim 68, further including a dispatch routine to load the extracted data to an output file.
70. The system according to claim 68, wherein the data structure is a hash table.
71. A system for running tasks on a network, comprising: a processor to read a file header that contains executable code; and an operating system supporting an interpreted language to execute the executable code, wherein the processor determines a desired operation from the executed code and performs the desired operation on data, wherein the processor outputs result data after performing the operation.
72. The system according to claim 71, wherein the desired operation is written in a language that is executed at run time.
73. The system according to claim 72, wherein the desired operation is any one selected from the group of addition, subtraction, multiplication, division, counting total number of instances in a class, enumerating the unique instances of a class, computing descriptive statistics such as the standard deviation, standard error of the mean, median, arithmetic mean, variance, covariance, correlation coefficient, and odds ratio.
74. A system for dispatching a task and data to a central processing unit located on a network, comprising: an operating system supporting an interpreted language to execute code placed in a file header; and a dispatching routine to determine at least one, if any, central processing unit available on the network, and to dispatch the task to the central processing unit based on availability, wherein the dispatching routine retries, after a time period has elapsed, to determine the availability of the at least one central processing unit if all central processing units were busy.
75. The system according to claim 74, wherein the availability of the central processing unit is determined by reading a control file.
76. The system according to claim 75, wherein the dispatching routine updates the control file.
77. The system according to claim 74, wherein the dispatching routine marks a state of the task.
78. The system according to claim 77, wherein the state of the task is either running or completed.
79. The system according to claim 74, wherein the time period can be determined by a user.
80. The system according to claim 79, wherein the time period is fifteen minutes.
PCT/US2001/003801 2000-02-11 2001-02-06 System and method for rapid completion of data processing tasks distributed on a network WO2001059561A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
JP2001558824A JP2003523010A (en) 2000-02-11 2001-02-06 System and method for rapidly completing data processing tasks distributed over a network
DE10195549T DE10195549T1 (en) 2000-02-11 2001-02-06 Method for the rapid completion of distributed data processing processes in a network
AU4145301A AU4145301A (en) 2000-02-11 2001-02-06 System and method for controlling the flow of data on a network
CA2400216A CA2400216C (en) 2000-02-11 2001-02-06 System and method for rapid completion of data processing tasks distributed on a network
GB0219491A GB2392997B (en) 2000-02-11 2001-02-06 System and method for rapid completion of data processing tasks distributed on a network
EP01912700A EP1277108A4 (en) 2000-02-11 2001-02-06 System and method for controlling the flow of data on a network
AU2001241453A AU2001241453B2 (en) 2000-02-11 2001-02-06 System and method for rapid completion of data processing tasks distributed on a network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/502,692 US6775831B1 (en) 2000-02-11 2000-02-11 System and method for rapid completion of data processing tasks distributed on a network
US09/502,692 2000-02-11

Publications (2)

Publication Number Publication Date
WO2001059561A1 true WO2001059561A1 (en) 2001-08-16
WO2001059561A9 WO2001059561A9 (en) 2002-02-07

Family

ID=23998951

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/003801 WO2001059561A1 (en) 2000-02-11 2001-02-06 System and method for rapid completion of data processing tasks distributed on a network

Country Status (10)

Country Link
US (1) US6775831B1 (en)
EP (1) EP1277108A4 (en)
JP (1) JP2003523010A (en)
KR (1) KR100502878B1 (en)
CN (1) CN1262915C (en)
AU (2) AU2001241453B2 (en)
CA (1) CA2400216C (en)
DE (1) DE10195549T1 (en)
GB (1) GB2392997B (en)
WO (1) WO2001059561A1 (en)

Families Citing this family (110)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7233942B2 (en) * 2000-10-10 2007-06-19 Truelocal Inc. Method and apparatus for providing geographically authenticated electronic documents
US7685224B2 (en) * 2001-01-11 2010-03-23 Truelocal Inc. Method for providing an attribute bounded network of computers
US20020174244A1 (en) * 2001-05-18 2002-11-21 Telgen Corporation System and method for coordinating, distributing and processing of data
US8024395B1 (en) 2001-09-04 2011-09-20 Gary Odom Distributed processing multiple tier task allocation
US8590013B2 (en) 2002-02-25 2013-11-19 C. S. Lee Crawford Method of managing and communicating data pertaining to software applications for processor-based devices comprising wireless communication circuitry
US7010596B2 (en) * 2002-06-28 2006-03-07 International Business Machines Corporation System and method for the allocation of grid computing to network workstations
US7127446B1 (en) * 2002-10-30 2006-10-24 Advanced Micro Devices, Inc. File system based task queue management
US7243098B2 (en) * 2002-12-19 2007-07-10 International Business Machines Corporation Method, system, and program for optimizing aggregate processing
US20050149507A1 (en) * 2003-02-05 2005-07-07 Nye Timothy G. Systems and methods for identifying an internet resource address
US7953667B1 (en) * 2003-02-07 2011-05-31 Britesmart Corp. Method and system to detect invalid and fraudulent impressions and clicks in web-based advertisement systems
US7325002B2 (en) * 2003-04-04 2008-01-29 Juniper Networks, Inc. Detection of network security breaches based on analysis of network record logs
US7613687B2 (en) * 2003-05-30 2009-11-03 Truelocal Inc. Systems and methods for enhancing web-based searching
US20050262513A1 (en) * 2004-04-23 2005-11-24 Waratek Pty Limited Modified computer architecture with initialization of objects
US7844665B2 (en) * 2004-04-23 2010-11-30 Waratek Pty Ltd. Modified computer architecture having coordinated deletion of corresponding replicated memory locations among plural computers
US7849452B2 (en) * 2004-04-23 2010-12-07 Waratek Pty Ltd. Modification of computer applications at load time for distributed execution
US20050257219A1 (en) * 2004-04-23 2005-11-17 Holt John M Multiple computer architecture with replicated memory fields
US7707179B2 (en) * 2004-04-23 2010-04-27 Waratek Pty Limited Multiple computer architecture with synchronization
US20060095483A1 (en) * 2004-04-23 2006-05-04 Waratek Pty Limited Modified computer architecture with finalization of objects
US20060265704A1 (en) * 2005-04-21 2006-11-23 Holt John M Computer architecture and method of operation for multi-computer distributed processing with synchronization
US7512626B2 (en) * 2005-07-05 2009-03-31 International Business Machines Corporation System and method for selecting a data mining modeling algorithm for data mining applications
US7516152B2 (en) * 2005-07-05 2009-04-07 International Business Machines Corporation System and method for generating and selecting data mining models for data mining applications
US9201979B2 (en) 2005-09-14 2015-12-01 Millennial Media, Inc. Syndication of a behavioral profile associated with an availability condition using a monetization platform
US7548915B2 (en) 2005-09-14 2009-06-16 Jorey Ramer Contextual mobile content placement on a mobile communication facility
US8660891B2 (en) 2005-11-01 2014-02-25 Millennial Media Interactive mobile advertisement banners
US9076175B2 (en) 2005-09-14 2015-07-07 Millennial Media, Inc. Mobile comparison shopping
US8027879B2 (en) 2005-11-05 2011-09-27 Jumptap, Inc. Exclusivity bidding for mobile sponsored content
US7860871B2 (en) 2005-09-14 2010-12-28 Jumptap, Inc. User history influenced search results
US7769764B2 (en) 2005-09-14 2010-08-03 Jumptap, Inc. Mobile advertisement syndication
US8989718B2 (en) 2005-09-14 2015-03-24 Millennial Media, Inc. Idle screen advertising
US8666376B2 (en) 2005-09-14 2014-03-04 Millennial Media Location based mobile shopping affinity program
US8364540B2 (en) 2005-09-14 2013-01-29 Jumptap, Inc. Contextual targeting of content using a monetization platform
US10911894B2 (en) 2005-09-14 2021-02-02 Verizon Media Inc. Use of dynamic content generation parameters based on previous performance of those parameters
US8532633B2 (en) 2005-09-14 2013-09-10 Jumptap, Inc. System for targeting advertising content to a plurality of mobile communication facilities
US20110313853A1 (en) 2005-09-14 2011-12-22 Jorey Ramer System for targeting advertising content to a plurality of mobile communication facilities
US7577665B2 (en) 2005-09-14 2009-08-18 Jumptap, Inc. User characteristic influenced search results
US8229914B2 (en) 2005-09-14 2012-07-24 Jumptap, Inc. Mobile content spidering and compatibility determination
US8156128B2 (en) 2005-09-14 2012-04-10 Jumptap, Inc. Contextual mobile content placement on a mobile communication facility
US8195133B2 (en) 2005-09-14 2012-06-05 Jumptap, Inc. Mobile dynamic advertisement creation and placement
US9703892B2 (en) 2005-09-14 2017-07-11 Millennial Media Llc Predictive text completion for a mobile communication facility
US10038756B2 (en) 2005-09-14 2018-07-31 Millenial Media LLC Managing sponsored content based on device characteristics
US8832100B2 (en) 2005-09-14 2014-09-09 Millennial Media, Inc. User transaction history influenced search results
US7702318B2 (en) 2005-09-14 2010-04-20 Jumptap, Inc. Presentation of sponsored content based on mobile transaction event
US9471925B2 (en) 2005-09-14 2016-10-18 Millennial Media Llc Increasing mobile interactivity
US8103545B2 (en) 2005-09-14 2012-01-24 Jumptap, Inc. Managing payment for sponsored content presented to mobile communication facilities
US8290810B2 (en) 2005-09-14 2012-10-16 Jumptap, Inc. Realtime surveying within mobile sponsored content
US8302030B2 (en) 2005-09-14 2012-10-30 Jumptap, Inc. Management of multiple advertising inventories using a monetization platform
US9058406B2 (en) 2005-09-14 2015-06-16 Millennial Media, Inc. Management of multiple advertising inventories using a monetization platform
US8311888B2 (en) 2005-09-14 2012-11-13 Jumptap, Inc. Revenue models associated with syndication of a behavioral profile using a monetization platform
US7660581B2 (en) 2005-09-14 2010-02-09 Jumptap, Inc. Managing sponsored content based on usage history
US7603360B2 (en) 2005-09-14 2009-10-13 Jumptap, Inc. Location influenced search results
US8209344B2 (en) 2005-09-14 2012-06-26 Jumptap, Inc. Embedding sponsored content in mobile applications
US8503995B2 (en) 2005-09-14 2013-08-06 Jumptap, Inc. Mobile dynamic advertisement creation and placement
US8131271B2 (en) 2005-11-05 2012-03-06 Jumptap, Inc. Categorization of a mobile user profile based on browse behavior
US8688671B2 (en) 2005-09-14 2014-04-01 Millennial Media Managing sponsored content based on geographic region
US7676394B2 (en) 2005-09-14 2010-03-09 Jumptap, Inc. Dynamic bidding and expected value
US10592930B2 (en) 2005-09-14 2020-03-17 Millenial Media, LLC Syndication of a behavioral profile using a monetization platform
US7752209B2 (en) 2005-09-14 2010-07-06 Jumptap, Inc. Presenting sponsored content on a mobile communication facility
US8819659B2 (en) 2005-09-14 2014-08-26 Millennial Media, Inc. Mobile search service instant activation
US8364521B2 (en) 2005-09-14 2013-01-29 Jumptap, Inc. Rendering targeted advertisement on mobile communication facilities
US8238888B2 (en) 2006-09-13 2012-08-07 Jumptap, Inc. Methods and systems for mobile coupon placement
US8615719B2 (en) 2005-09-14 2013-12-24 Jumptap, Inc. Managing sponsored content for delivery to mobile communication facilities
US8812526B2 (en) 2005-09-14 2014-08-19 Millennial Media, Inc. Mobile content cross-inventory yield optimization
US7912458B2 (en) 2005-09-14 2011-03-22 Jumptap, Inc. Interaction analysis and prioritization of mobile content
US8805339B2 (en) 2005-09-14 2014-08-12 Millennial Media, Inc. Categorization of a mobile user profile based on browse and viewing behavior
US7958322B2 (en) * 2005-10-25 2011-06-07 Waratek Pty Ltd Multiple machine architecture with overhead reduction
US7660960B2 (en) * 2005-10-25 2010-02-09 Waratek Pty, Ltd. Modified machine architecture with partial memory updating
US7761670B2 (en) * 2005-10-25 2010-07-20 Waratek Pty Limited Modified machine architecture with advanced synchronization
US7581069B2 (en) * 2005-10-25 2009-08-25 Waratek Pty Ltd. Multiple computer system with enhanced memory clean up
US8015236B2 (en) * 2005-10-25 2011-09-06 Waratek Pty. Ltd. Replication of objects having non-primitive fields, especially addresses
US20070100828A1 (en) * 2005-10-25 2007-05-03 Holt John M Modified machine architecture with machine redundancy
US7849369B2 (en) * 2005-10-25 2010-12-07 Waratek Pty Ltd. Failure resistant multiple computer system and method
US8175585B2 (en) 2005-11-05 2012-05-08 Jumptap, Inc. System for targeting advertising content to a plurality of mobile communication facilities
US8571999B2 (en) 2005-11-14 2013-10-29 C. S. Lee Crawford Method of conducting operations for a social network application including activity list generation
JP4402051B2 (en) * 2006-01-16 2010-01-20 株式会社ソニー・コンピュータエンタテインメント Data processing system and data processing method
US7682961B2 (en) * 2006-06-08 2010-03-23 International Business Machines Corporation Methods of forming solder connections and structure thereof
US20080120475A1 (en) * 2006-10-05 2008-05-22 Holt John M Adding one or more computers to a multiple computer system
US7958329B2 (en) * 2006-10-05 2011-06-07 Waratek Pty Ltd Hybrid replicated shared memory
US20080140970A1 (en) * 2006-10-05 2008-06-12 Holt John M Advanced synchronization and contention resolution
US7849151B2 (en) * 2006-10-05 2010-12-07 Waratek Pty Ltd. Contention detection
WO2008040082A1 (en) * 2006-10-05 2008-04-10 Waratek Pty Limited Multiple computer system with dual mode redundancy architecture
US20080140973A1 (en) * 2006-10-05 2008-06-12 Holt John M Contention detection with data consolidation
US20080133862A1 (en) * 2006-10-05 2008-06-05 Holt John M Contention detection with modified message format
US20080126503A1 (en) * 2006-10-05 2008-05-29 Holt John M Contention resolution with echo cancellation
US20080151902A1 (en) * 2006-10-05 2008-06-26 Holt John M Multiple network connections for multiple computers
US20080114899A1 (en) * 2006-10-05 2008-05-15 Holt John M Switch protocol for network communications
US20080140633A1 (en) * 2006-10-05 2008-06-12 Holt John M Synchronization with partial memory replication
AU2007304895A1 (en) * 2006-10-05 2008-04-10 Waratek Pty Limited Advanced contention detection
US20080133692A1 (en) * 2006-10-05 2008-06-05 Holt John M Multiple computer system with redundancy architecture
WO2008040081A1 (en) * 2006-10-05 2008-04-10 Waratek Pty Limited Job scheduling amongst multiple computers
US20080140856A1 (en) * 2006-10-05 2008-06-12 Holt John M Multiple communication networks for multiple computers
US20080114853A1 (en) * 2006-10-05 2008-05-15 Holt John M Network protocol for network communications
US20080120478A1 (en) * 2006-10-05 2008-05-22 Holt John M Advanced synchronization and contention resolution
US20080133869A1 (en) * 2006-10-05 2008-06-05 Holt John M Redundant multiple computer architecture
US20080126572A1 (en) * 2006-10-05 2008-05-29 Holt John M Multi-path switching networks
US20080133861A1 (en) * 2006-10-05 2008-06-05 Holt John M Silent memory reclamation
US7852845B2 (en) * 2006-10-05 2010-12-14 Waratek Pty Ltd. Asynchronous data transmission
US20080126372A1 (en) * 2006-10-05 2008-05-29 Holt John M Cyclic redundant multiple computer architecture
US7949837B2 (en) * 2006-10-05 2011-05-24 Waratek Pty Ltd. Contention detection and resolution
US20100121935A1 (en) * 2006-10-05 2010-05-13 Holt John M Hybrid replicated shared memory
US20080250221A1 (en) * 2006-10-09 2008-10-09 Holt John M Contention detection with data consolidation
US8316190B2 (en) * 2007-04-06 2012-11-20 Waratek Pty. Ltd. Computer architecture and method of operation for multi-computer distributed processing having redundant array of independent systems with replicated memory and code striping
US20080277314A1 (en) * 2007-05-08 2008-11-13 Halsey Richard B Olefin production utilizing whole crude oil/condensate feedstock and hydrotreating
GB0714394D0 (en) * 2007-07-24 2007-09-05 Keycorp Ltd Graphic user interface parsing
US8682875B2 (en) * 2007-10-24 2014-03-25 International Business Machines Corporation Database statistics for optimization of database queries containing user-defined functions
US9904788B2 (en) 2012-08-08 2018-02-27 Amazon Technologies, Inc. Redundant key management
US9225675B2 (en) 2012-08-08 2015-12-29 Amazon Technologies, Inc. Data storage application programming interface
US9253053B2 (en) * 2012-10-11 2016-02-02 International Business Machines Corporation Transparently enforcing policies in hadoop-style processing infrastructures
US10558581B1 (en) * 2013-02-19 2020-02-11 Amazon Technologies, Inc. Systems and techniques for data recovery in a keymapless data storage system
US9448742B2 (en) * 2014-03-27 2016-09-20 Western Digital Technologies, Inc. Communication between a host and a secondary storage device
CN107133086B (en) 2016-02-29 2020-09-04 阿里巴巴集团控股有限公司 Task processing method, device and system based on distributed system

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5146559A (en) * 1986-09-29 1992-09-08 Hitachi, Ltd. System for recognizing program constitution within a distributed processing system by collecting constitution messages generated by different processors
CA1322422C (en) * 1988-07-18 1993-09-21 James P. Emmond Single-keyed indexed file for tp queue repository
US5025369A (en) * 1988-08-25 1991-06-18 David Schwartz Enterprises, Inc. Computer system
JP2785455B2 (en) * 1990-07-03 1998-08-13 株式会社日立製作所 Computer instruction execution method
JP3200932B2 (en) * 1992-03-24 2001-08-20 株式会社日立製作所 Electronic dialogue system
WO1993020511A1 (en) * 1992-03-31 1993-10-14 Aggregate Computing, Inc. An integrated remote execution system for a heterogenous computer network environment
JPH0695986A (en) * 1992-06-19 1994-04-08 Westinghouse Electric Corp <We> Real-time data imaging network system and operating method thereof
US5495618A (en) * 1992-08-26 1996-02-27 Eastman Kodak Company System for augmenting two dimensional data sets in a two dimensional parallel computer system
JP3003440B2 (en) 1993-01-19 2000-01-31 株式会社日立製作所 Load distribution control method and distributed processing system
US5394394A (en) * 1993-06-24 1995-02-28 Bolt Beranek And Newman Inc. Message header classifier
EP0694838A1 (en) * 1994-07-25 1996-01-31 International Business Machines Corporation Step level recovery
US5687372A (en) * 1995-06-07 1997-11-11 Tandem Computers, Inc. Customer information control system and method in a loosely coupled parallel processing environment
US5794210A (en) 1995-12-11 1998-08-11 Cybergold, Inc. Attention brokerage
US5778367A (en) 1995-12-14 1998-07-07 Network Engineering Software, Inc. Automated on-line information service and directory, particularly for the world wide web
WO1997022066A1 (en) 1995-12-15 1997-06-19 The Softpages, Inc. Method for computer aided advertisement
US5812793A (en) * 1996-06-26 1998-09-22 Microsoft Corporation System and method for asynchronous store and forward data replication
US5944779A (en) * 1996-07-02 1999-08-31 Compbionics, Inc. Cluster of workstations for solving compute-intensive applications by exchanging interim computation results using a two phase communication protocol
US5946463A (en) * 1996-07-22 1999-08-31 International Business Machines Corporation Method and system for automatically performing an operation on multiple computer systems within a cluster
US5862223A (en) 1996-07-24 1999-01-19 Walker Asset Management Limited Partnership Method and apparatus for a cryptographically-assisted commercial network system designed to facilitate and support expert-based commerce
US6285987B1 (en) 1997-01-22 2001-09-04 Engage, Inc. Internet advertising system
CA2209549C (en) * 1997-07-02 2000-05-02 Ibm Canada Limited-Ibm Canada Limitee Method and apparatus for loading data into a database in a multiprocessor environment
US6185698B1 (en) * 1998-04-20 2001-02-06 Sun Microsystems, Incorporated Method and apparatus using ranking to select repair nodes in formation of a dynamic tree for multicast repair
US6151633A (en) * 1998-04-20 2000-11-21 Sun Microsystems, Inc. Method and apparatus for routing and congestion control in multicast networks
US6009455A (en) * 1998-04-20 1999-12-28 Doyle; John F. Distributed computation utilizing idle networked computers
US6078866A (en) 1998-09-14 2000-06-20 Searchup, Inc. Internet site searching and listing service based on monetary ranking of site listings
US6336118B1 (en) * 1998-12-03 2002-01-01 International Business Machines Corporation Framework within a data processing system for manipulating program objects
US6292888B1 (en) * 1999-01-27 2001-09-18 Clearwater Networks, Inc. Register transfer unit for electronic processor
US6269373B1 (en) * 1999-02-26 2001-07-31 International Business Machines Corporation Method and system for persisting beans as container-managed fields
US6269361B1 (en) 1999-05-28 2001-07-31 Goto.Com System and method for influencing a position on a search result list generated by a computer network search engine
US20020004735A1 (en) 2000-01-18 2002-01-10 William Gross System and method for ranking items

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3913070A (en) * 1973-02-20 1975-10-14 Memorex Corp Multi-processor data processing system
US4972314A (en) * 1985-05-20 1990-11-20 Hughes Aircraft Company Data flow signal processor method and apparatus
US5995996A (en) * 1993-06-15 1999-11-30 Xerox Corporation Pipelined image processing system for a single application environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1277108A4 *

Also Published As

Publication number Publication date
GB2392997B (en) 2005-03-30
AU4145301A (en) 2001-08-20
CN1422403A (en) 2003-06-04
CN1262915C (en) 2006-07-05
WO2001059561A9 (en) 2002-02-07
EP1277108A4 (en) 2007-01-03
KR100502878B1 (en) 2005-07-21
US6775831B1 (en) 2004-08-10
DE10195549T1 (en) 2003-03-13
CA2400216C (en) 2012-08-28
EP1277108A1 (en) 2003-01-22
JP2003523010A (en) 2003-07-29
KR20020079849A (en) 2002-10-19
GB2392997A (en) 2004-03-17
AU2001241453B2 (en) 2004-10-14
CA2400216A1 (en) 2001-08-16
GB0219491D0 (en) 2002-10-02

Similar Documents

Publication Publication Date Title
AU2001241453B2 (en) System and method for rapid completion of data processing tasks distributed on a network
AU2001241453A1 (en) System and method for rapid completion of data processing tasks distributed on a network
US7996838B2 (en) System and program storage device for facilitating workload management in a computing environment
Snodgrass A relational approach to monitoring complex systems
Raman et al. Matchmaking: An extensible framework for distributed resource management
Eckstein et al. PICO: An object-oriented framework for parallel branch and bound
US7376663B1 (en) XML-based representation of mobile process calculi
EP0588447B1 (en) Operating system and data base having a rule language for condition driven computer operation
Lunde Empirical evaluation of some features of instruction set processor architectures
JP5844333B2 (en) Sustainable data storage technology
US6889243B1 (en) Job scheduling analysis method and system using historical job execution data
EP2541408A1 (en) Method and system for processing data for database modification
EP0871118B1 (en) Parallel data processing system and method of controlling such a system
Hailpern et al. Dynamic reconfiguration in an object-based programming language with distributed shared data
Ousterhout Partitioning and cooperation in a distributed multiprocessor operating system: Medusa
Friedman et al. Windows 2000 performance guide
Giuliano et al. Prism: A testbed for parallel control
EP0831406A2 (en) Implementing a workflow engine in a database management system
Zhao Making Digital Libraries Flexible, Scalable and Reliable: Reengineering the MARIAN System in JAVA
Hailpern et al. An architecture for dynamic reconfiguration in a distributed object-based programming language
Gray Ph. D. Thesis Proposal: Transportable Agents
Schoen The CAOS system
Bal et al. Programming languages
Hoetzel et al. Benchmarking in Focus

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

ENP Entry into the national phase

Ref document number: 0219491

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20010206

Format of ref document f/p: F

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGE 1, DESCRIPTION, REPLACED BY A NEW PAGE 1 (WITH AN UPDATED VERSION OF THE PAMPHLET FRONT PAGE); PAGES 1/5-5/5, DRAWINGS, REPLACED BY NEW PAGES 1/2-2/2; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2001 558824

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1020027010416

Country of ref document: KR

Ref document number: 2400216

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2001241453

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2001912700

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 018077366

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 1020027010416

Country of ref document: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 2001912700

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 2001241453

Country of ref document: AU

WWG Wipo information: grant in national office

Ref document number: 1020027010416

Country of ref document: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8607