WO1997049039A1 - Apparatus and methods for highly available directory services in the distributed computing environment


Info

Publication number
WO1997049039A1
Authority
WO
WIPO (PCT)
Prior art keywords
agent
server
cds
master
agents
Application number
PCT/US1997/010739
Other languages
French (fr)
Inventor
Elmootazbellah N. Elnozahy
Vivek Ratan
Mark E. Segal
Original Assignee
Bell Communications Research, Inc.
Application filed by Bell Communications Research, Inc.
Publication of WO1997049039A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/465 Distributed object oriented systems
    • G06F 9/54 Interprogram communication
    • G06F 9/547 Remote procedure calls [RPC]; Web services
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1479 Generic software techniques for error detection or fault masking
    • G06F 11/1482 Generic software techniques for error detection or fault masking by means of middleware or OS functionality
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/202 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements, where processing functionality is redundant
    • G06F 11/2023 Failover techniques
    • G06F 11/2028 Failover techniques eliminating a faulty processor or activating a spare
    • G06F 11/2041 Error detection or correction of the data by redundancy in hardware using active fault-masking, where processing functionality is redundant with more than one idle spare processing component
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant, by mirroring
    • G06F 11/2071 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant, by mirroring using a plurality of controllers
    • G06F 11/2097 Error detection or correction of the data by redundancy in hardware using active fault-masking, maintaining the standby controller/processing unit updated

Abstract

In the Distributed Computing Environment (DCE) standard, availability of directory services is increased by apparatus and methods using agents inserted between requesting clients and servers. By using agents, additional functions are carried out which are not performed in a typical DCE environment. Each agent (68) inserts itself between the requesters (78) and servers (60) by writing over the pointer to the server (60) with information pointing to the agent (68), thus redirecting requests to itself. The agent (68) then receives incoming requests and forwards them on to its associated server (62) and other agents (76). The agent (68) handling requests for the master server (60) is called the 'master' agent and the agents handling requests for replica servers are 'replica' agents. The agents make sure requests are performed before replying to the original requester. Agents also monitor one another. If a master agent (68) crashes, the remaining agents (76) elect a new master agent. If a replica agent (76) crashes, the master agent excludes the agent (76) from further communications. The apparatus and methods provide a highly available and robust directory server.

Description

APPARATUS AND METHODS FOR HIGHLY AVAILABLE DIRECTORY SERVICES IN THE DISTRIBUTED COMPUTING ENVIRONMENT
BACKGROUND OF THE INVENTION
The present invention relates generally to a highly available directory service, and in particular to apparatus and methods for a highly available directory service in the Distributed Computing Environment.
Distributed Computing Environment (DCE) is a suite of software utilities and operating system extensions that can be used to create applications on networks of heterogeneous hardware - PCs, Unix workstations, minicomputers and mainframes. The DCE is a standard developed by the Open Software Foundation (OSF), and is designed to simplify building heterogeneous client/server applications. Several services are provided by DCE: Remote Procedure Call (RPC) facilitates client-server communication, so that an application can effectively access resources distributed across a network; Security Service authenticates the identities of users, authorizes access to resources on a distributed network, and provides user and server account management; Directory Service provides a single naming model throughout the distributed environment; Time Service synchronizes the system clocks throughout the network; Threads Service provides multiple threads of execution capability; and, Distributed File Service provides access to files across a network.
Directory Service performs typical naming services in a distributed computing environment and acts as a central repository for information about resources in the distributed system. Typical resources are users, machines, and RPC-based services. The information consists of the name of the resource and its associated attributes. Typical attributes include a user's home directory, or the location of an RPC-based server.
The Directory Service provides a distributed and replicated repository for information on various resources of a distributed system, such as location of servers, and the services they offer. Clients use the name space to locate service providers in a well-defined, network-independent manner. The Directory Service consists of a Global Directory Service (GDS) that spans different administrative domains, and a Cell Directory Service (CDS). A cell is the unit of management and administration in DCE and typically consists of a group of machines on a local area network. CDS maintains a local, hierarchical name space within the cell. The Cell Directory Service manages a database of information about the resources in a group of machines called a DCE cell. The GDS enables intercell communications by locating cells which have been registered in the global naming environment. The present invention focuses on enhancing the availability and accuracy of CDS.
The CDS name space is partitioned over one or more CDS servers to improve the scalability and availability of the service. A CDS server maintains an in-memory database, called a clearinghouse, to store the cell's name space, or portions thereof. Clients access a server using the DCE RPC mechanism. CDS servers typically announce their presence by broadcasting their network addresses periodically over the local area network.
The performance and availability of CDS is crucial to applications built using DCE, especially for those applications having high availability requirements. To address these requirements, CDS names may be replicated on different servers. The replication model follows the primary copy approach. The unit of replication is a directory, but subdirectories of a replicated directory are not automatically replicated. One physical copy of a directory is designated a master replica, while the others are designated read-only replicas. Updates can only be made to the master replica. Lookups can be handled by any server that contains a copy of the directory. This design helps offload lookup requests from the site containing the master copy of an entry. Should the server containing the master copy of a directory entry fail, CDS continues to service lookup requests for that directory through the available replicas, although no update requests can be handled until the server containing the master replica recovers from failure or a new master replica is created. DCE allows flexibility in specifying which directory entries are to be replicated, and the degree of consistency across replicas. The CDS propagates updates to the read-only replicas either immediately after performing the update on the master copy, or after a certain amount of time, depending on the desired degree of consistency. The propagation occurs on a best-effort basis. If a failure occurs in communication, the propagation is periodically retried in the background until it succeeds.
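The following Python sketch is a toy model of this primary-copy behavior; the class and method names are illustrative assumptions, not part of DCE. The master applies an update, returns a reply to the client, and only then propagates the change to read-only replicas on a best-effort basis with background retries.

```python
import threading
import time

class ReadOnlyReplica:
    """Toy read-only replica: it only ever receives propagated updates."""
    def __init__(self):
        self.directory = {}

    def apply(self, name, attributes):
        self.directory[name] = attributes


class MasterReplica:
    """Toy model of CDS primary-copy replication (illustrative only)."""
    def __init__(self, replicas, retry_interval=5.0):
        self.directory = {}              # master copy: name -> attributes
        self.replicas = replicas
        self.retry_interval = retry_interval

    def update(self, name, attributes):
        # 1. The update is applied to the master replica only.
        self.directory[name] = attributes
        # 2. Propagation to the read-only replicas is started in the
        #    background and retried on failure until it succeeds ...
        threading.Thread(target=self._propagate,
                         args=(name, attributes), daemon=True).start()
        # 3. ... while the reply is returned to the client immediately,
        #    i.e. before the replicas are guaranteed to be up to date.
        return "ok"

    def _propagate(self, name, attributes):
        pending = list(self.replicas)
        while pending:
            remaining = []
            for replica in pending:
                try:
                    replica.apply(name, attributes)
                except ConnectionError:
                    remaining.append(replica)   # best effort: retry later
            pending = remaining
            if pending:
                time.sleep(self.retry_interval)


master = MasterReplica([ReadOnlyReplica(), ReadOnlyReplica()])
print(master.update("/.:/servers/printer", {"host": "nodeA"}))   # replies "ok"
```

The gap between returning the reply and completing propagation is exactly the window exploited by the failure scenarios discussed below.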
There are several deficiencies in CDS specifications that affect naming service availability and correctness. The DCE CDS falls short in providing the necessary degree of availability. Furthermore, CDS often returns inconsistent or incoherent information as a result of a lookup, making correct operation impossible.
CDS propagates an update to read-only replicas after it sends the corresponding reply to the client. Consider the scenario where a DCE application sends a request to advertise its services in the name space. The server containing the master replica of the corresponding name space entry may fail after sending the reply to the application but before propagating the updates to the read-only replicas. In such a case the application's advertisement will not be available until the server maintaining the master replica recovers, which may take a long time. The application itself is not aware of the problem since it has received a reply indicating that the advertisement was properly handled, and therefore would not attempt any corrective action.
Another problem related to CDS operation is that lookups performed on CDS replicas do not necessarily return correct information. This occurs if the master replica does not immediately propagate updates to the other replicas, or if a communication failure prevents such updates from reaching the replicas. Applications, therefore, cannot trust information returned by read-only replicas.
In the case of failure of a CDS server that maintained the master replica of a directory, CDS allows a read-only replica of the directory, on another server, to be promoted to a master replica. There are three problems with using this mechanism. First, this mechanism fails if the security server was also running on the same machine as the failed CDS server (as is common in commercial installations). As a result, the system is effectively susceptible to a single point of failure with respect to update requests for some directory entries. Second, this mechanism needs to be executed for every directory for which the failed server maintained a master replica. Obtaining a list of each such directory is cumbersome and time consuming. Finally, the entire reconfiguration mechanism requires manual intervention and can take many minutes to execute, rendering it inadequate when a high degree of availability is required.
Another related problem is that CDS does not support any form of automatic failure detection or reconfiguration. The failure of a CDS server is detected by RPC timeouts in clients invoking operations on the server. Failed servers must then be manually restarted and configured into the system, as described earlier.
What is needed then is a system which overcomes the above problems in order to provide a robust and accurate directory service.
DESCRIPTION OF THE INVENTION
The present invention relates to apparatus and methods for a highly available directory service. Additional objects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
Systems consistent with the present invention provide enhanced CDS by supporting the partial replication of the name space, continuous and correct operation of the service, and automatic failover. The enhancements ensure the consistency of the name space data, and are transparent to application developers and end users, all without a significant performance penalty. The present invention can be added to any existing DCE implementation and will use existing DCE components without requiring changes or replacements to the standard components or knowledge of their internal implementation.
DCE provides a distributed and replicated CDS which implements a hierarchical name space that can be used for a variety of purposes. Systems consistent with the present invention have servers that use the CDS to advertise themselves so that clients may locate them. Clients locate the desired servers by searching this name space. They also rely on the name service to locate backup servers in the presence of failures. The continuous availability of the CDS and the coherence and consistency of its information are therefore crucial to the successful operation of the system.
To achieve the objects and in accordance with the purpose of the invention, as embodied and broadly described herein, the invention comprises apparatus for maintaining directories, comprising a first directory, a first server for managing the first directory, and a first agent for intercepting a request to the first server, and forwarding the request to the first server and a second agent. The invention also comprises methods comprised of the steps of storing a first directory, managing the first directory using a first server, and intercepting a request to the first server, and forwarding the request to the first server and a second agent.
The directory service as specified by OSF and implemented by independent vendors suffers from several problems that affect its correct operation and availability. Since the standard is distributed in the form of working code that is identical among all vendors, applications that use this service cannot rely on it for correct operation and cannot function properly when high availability is required. The present invention consists of apparatus and methods that can be added to any existing directory service implementation of DCE to correct its behavior and improve its availability. The apparatus and methods need no knowledge of the internal workings of DCE or the source code of the service, and do not require changes to DCE's directory service implementation, working code, or applications. The apparatus and methods can therefore be added seamlessly into any existing DCE implementation without any changes to existing components.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and together with the description, serve to explain the principles of the invention. In the drawings:
FIG. 1 is a block diagram showing a system name server architecture in accordance with the principles of the invention;
FIG. 2 is a flowchart illustrating how an agent interposes itself between a client and a server;
FIG. 3 is a flowchart illustrating propagation of name space update requests;
FIG. 4 is a flowchart illustrating master crash monitoring and election of a new master in response to a master crash; and
FIG. 5 is a flowchart illustrating detection of a replica machine crash, and exclusion of the replica from future operations.
BEST MODE FOR CARRYING OUT THE INVENTION
Reference will now be made in detail to the present preferred embodiment of the invention, which is illustrated in the accompanying drawings.
The present invention is comprised of extensions to a commercial CDS server system. The extensions increase the robustness and accuracy of the CDS server system. The highly-available CDS in accordance with the present invention appears to clerks running on clients of the CDS as a single CDS server that is accessible through standard DCE interfaces. The present invention thus ensures compatibility with existing clerks without modifying them. The hierarchical name space managed by the CDS servers contains a "highly-available subtree" whose root is a name space entry maintained by the CDS server. Entries in this subtree appear to clerks as non-replicated entries that are accessible as defined in the DCE standard. Unlike the standard CDS operation, the present invention guarantees that access to entries in the subtree will be highly available for lookups as well as updates. The present invention also guarantees that lookups for these entries return the most recent version. Although the preferred embodiment disclosed herein implements a highly available subtree, the present invention may also be used to make the entire tree highly available.
Because the present invention inserts agents between well-established CDS entities (i.e., the client clerk and CDS server), it can be used in any DCE implementation. There is no need to modify the source code of the DCE implementation to insert the agents. The agents appear to the clerks as typical CDS servers.
Fig. 1 is a block diagram showing a server architecture consistent with the principles of the present invention. Master CDS machine 60 is comprised of a CDS server 62, clearinghouse 64, and security service 66. Master agent 68 is used in accordance with the principles of the present invention to handle requests from client machine 78. Client machine 78 is comprised of a CDS clerk 80 and DCE application 82.
Client machine 78 represents one or more clients which make requests to the CDS server system. Client machine 78 executes DCE application 82, which periodically requires directory information. CDS clerk 80 handles these requests.
The architecture further includes one or more replica CDS machines, represented by CDS machine 70. Replica CDS machine 70 is primarily comprised of CDS server 72 and clearinghouse 74, as is commonly understood in the DCE standard. In accordance with the principles of the present invention, replica CDS machine 70 is also comprised of replica agent 76 which handles name server requests received from master CDS machine 60 or other replica CDS machines 70.
The highly available CDS in accordance with the principles of the present invention consists of two main components: standard CDS servers 62 and 72 and high-availability agents 68 and 76. Each CDS server maintains at least a partial copy of the highly-available subtree in its clearinghouse 64 and 74, respectively. The subtree contains CDS naming information. Support for this replication, however, does not rely on the existing CDS replication mechanism, thus avoiding the problems outlined above. Each of the standard CDS servers is configured to run as the only standard master CDS server of the cell, and is not aware of any of the other servers. This configuration is unusual since DCE allows only one standard master CDS server per cell. To overcome this problem, the advertisements from each server are turned off and master agent 68 performs the advertisement function for the cell. Turning the advertisement function on and off in a server is well understood in DCE.
Turning off the advertisements from the vanilla CDS servers is necessary to prevent clients from accessing them directly and bypassing the replica agents. Also, DCE is not configured to allow more than one master CDS server in a replicated server group. The present invention makes each CDS server a master component as is well understood, specified, and implemented by DCE. Such an arrangement thus would not be possible without turning off the advertisements.
The number of the standard CDS servers can be varied and depends on the degree of availability required. One of the servers is designated as a "master server", in this case master CDS machine 60, while the others are designated as "replica servers", such as replica CDS machine 70. Although Fig. 1 shows a single replica server 70, several may be present. This configuration allows use of the underlying support of CDS for maintaining clearinghouses, thus reducing development costs and allowing the use of existing CDS backups and manipulation tools which depend on the format of the clearinghouse. It also allows the invention to be used with clearinghouses that contain existing name spaces, making it possible to add the apparatus and methods disclosed herein to existing DCE installations. Agents 68 and 76 manage replication, ensure consistency and coherence, and maintain the system configuration.
Agents 68 and 76 run on each CDS machine. Agent 68 runs on the same machine as the master server and is called the master agent. Agents, such as agent 76, running on machines with replica servers, are called replica agents. A different type of agent, herein called a client agent, runs on each CDS client in the cell.
Fig. 2 is a flowchart showing how each agent inserts itself between the CDS clerk 80 and CDS servers 62 and 72. Master agent 68 and replica agents 76 transparently insert themselves between the CDS clerks and the standard CDS servers through an unconventional manipulation of the DCE address resolution mechanism. The agent first records the CDS server's endpoint information (step 110) before overwriting the information. This allows the agent to still communicate with the CDS server. Each server may have several endpoints. Each agent then exports its information to the RPC endpoint mapper on its machine (which is responsible for directing RPCs to the appropriate process), in effect overwriting the corresponding standard CDS server's endpoints (step 112). The master agent then acts as the standard master CDS server by advertising itself as such. Master agent 68 can now intercept RPCs from CDS client 78 that are sent to master CDS server 62.
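As an illustration of this interposition, the sketch below models the two steps of Fig. 2 with a hypothetical endpoint-mapper object; it is not the actual DCE RPC endpoint mapper interface, and the endpoint strings are made up.

```python
class EndpointMapper:
    """Hypothetical stand-in for the per-machine RPC endpoint mapper."""
    def __init__(self):
        self._endpoints = {}             # interface name -> endpoint list

    def lookup(self, interface):
        return self._endpoints[interface]

    def register(self, interface, endpoints):
        # A later registration overwrites an earlier one for the interface.
        self._endpoints[interface] = endpoints


def interpose_agent(mapper, cds_interface, agent_endpoints):
    # Step 110: record the CDS server's endpoints before overwriting them,
    # so that the agent can still communicate with the real server.
    server_endpoints = mapper.lookup(cds_interface)

    # Step 112: export the agent's endpoints under the CDS interface, in
    # effect overwriting the standard CDS server's registration; incoming
    # client RPCs are now directed to the agent instead of the server.
    mapper.register(cds_interface, agent_endpoints)
    return server_endpoints              # kept by the agent for forwarding


mapper = EndpointMapper()
mapper.register("cds_server", ["10.0.0.5:2001"])           # the real CDS server
real = interpose_agent(mapper, "cds_server", ["10.0.0.5:3050"])
print(mapper.lookup("cds_server"), "forwards to", real)
```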
The process of intercepting RPCs is detailed in Fig. 3. Master agent 68 forwards the client request to master CDS server 62 on its machine (step 120). If the request requires an update of the name space (step 122), master agent 68 also forwards the request to each available replica agent (e.g., agent 76) (step 124). Each replica agent in turn forwards this request to the replica server on its machine (step 124). The client operation is performed on the master clearinghouse 64 as well as on each of the replica clearinghouses 74, thus keeping the various clearinghouses consistent. Once master agent 68 determines that the update request has been handled by the master CDS server 62 and by each of the available replica servers 72 (step 126), master agent 68 forwards the reply from master CDS server 62 back to client 78 (step 130). If the replica servers have not executed the request, the request is forwarded again (step 124). If the request is not a name space update, it is handled at master CDS machine 60 (step 128).
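A minimal sketch of this forwarding control flow, using hypothetical stand-ins for the request, the replica agents, and the clearinghouses (the names below are assumptions, not DCE entities), might look as follows.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    operation: str
    is_update: bool

@dataclass
class ReplicaAgent:
    clearinghouse: dict = field(default_factory=dict)

    def forward_to_replica_server(self, request):
        # The replica agent hands the request to the replica CDS server on
        # its machine, which applies it to the replica clearinghouse.
        self.clearinghouse[request.operation] = True
        return True                      # acknowledged (step 126)


def handle_client_request(master_clearinghouse, replica_agents, request):
    # Step 120: every intercepted request is forwarded to the master CDS
    # server, which executes it against the master clearinghouse.
    master_clearinghouse[request.operation] = True
    reply = "ok"

    # Steps 122-126: a name-space update is also forwarded to each available
    # replica agent, and re-forwarded until all of them have executed it.
    if request.is_update:
        pending = list(replica_agents)
        while pending:
            pending = [a for a in pending
                       if not a.forward_to_replica_server(request)]

    # Steps 128/130: only now is the master server's reply returned, so a
    # reply to the client implies all available clearinghouses are consistent.
    return reply


replicas = [ReplicaAgent(), ReplicaAgent()]
print(handle_client_request({}, replicas, Request("create /.:/svc/foo", True)))
```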
The modified server in accordance with the present invention is also integrated with the DCE security service 66. When master agent 68 receives a CDS request, it authenticates client 78 and verifies whether the client has proper authorization to perform the corresponding operation. This step is necessary because the indirection introduced by master agent 68 makes master CDS server 62 perceive master agent 68 as the "client," so the server uses the agent's security credentials for authorization checking instead of the real client's credentials. This mechanism duplicates the authentication and authorization checks performed by a clearinghouse. Alternatively, the system could use security delegation, which enables an intermediary to act on behalf of a client by using the client's security credentials. This feature obviates the need for master agent 68 to perform security checks.
Fig. 4 is a flowchart illustrating how replica agents monitor liveness of master agent 68. Master agent 68 is considered alive as long as it has not crashed. Each replica agent, such as replica agent 76, monitors liveness of the master (step 130); a crash manifests itself as a communications failure in DCE RPC, as is understood in the art. If the machine 60 where master agent 68 resides crashes (step 132), the replica agents, such as replica agent 76, elect a new master agent (step 134). Electing a new master may be based on location, load, or any of a variety of other criteria for selecting a new master agent. The new master agent immediately starts broadcasting advertisements about itself and is ready to accept CDS requests from clients (step 136). Continuous availability of lookup and update requests is thus provided, because the new master is immediately ready to accept CDS requests. Further, since the new master has an up-to-date version of each directory entry, clients are guaranteed to obtain coherent data from the new master, in contrast to the operation of vanilla CDS where clients may obtain stale or incoherent information in similar situations. The new master agent also broadcasts this designation change to all client agents on the local network (step 136). In parallel, each client agent reinitializes the clerks' cache and reconfigures the client to run with the new cell master (step 138). This step is necessary to remove any persistent bindings that existed with the failed agent.
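The election behavior of Fig. 4 can be sketched as follows. The selection criterion (lowest surviving agent identifier) and the data structures are illustrative assumptions only, since the description leaves the election criterion open (location, load, or other criteria); crash detection over DCE RPC is modeled simply as an "alive" flag.

```python
def elect_new_master(agents, crashed_master_id):
    """Toy model of Fig. 4, steps 132-138 (criteria and fields are illustrative)."""
    survivors = [a for a in agents
                 if a["id"] != crashed_master_id and a["alive"]]
    # Step 134: the surviving replica agents elect a new master agent; the
    # lowest surviving identifier is used here purely for simplicity.
    new_master = min(survivors, key=lambda a: a["id"])
    # Step 136: the new master immediately starts advertising itself and
    # broadcasts the designation change to the client agents, which then
    # reinitialize their clerks' caches and reconfigure (step 138).
    new_master["advertising"] = True
    for agent in survivors:
        agent["master_id"] = new_master["id"]
    return new_master


agents = [{"id": 1, "alive": False},      # machine hosting the old master crashed
          {"id": 2, "alive": True},
          {"id": 3, "alive": True}]
print(elect_new_master(agents, crashed_master_id=1)["id"])    # -> 2
```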
Fig. 5 is a flowchart of steps performed by the master agent to handle a crash of the machine on which a replica agent resides. This failure is detected by the master agent when it forwards a client request to each of the replica agents (step 152). The failed replica agent is then excluded from further communication (step 154). An alternative embodiment dynamically integrates new replica agents. This enables failed replica agents, as well as new agents, to restart and dynamically join the highly available system.
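A toy model of this exclusion step is sketched below; the Replica class is a hypothetical stand-in for a replica agent, and a raised ConnectionError stands in for the DCE RPC communications failure by which the crash is noticed.

```python
class Replica:
    """Toy replica agent; ConnectionError models an RPC communications failure."""
    def __init__(self, name, crashed=False):
        self.name, self.crashed = name, crashed
        self.log = []

    def forward(self, request):
        if self.crashed:
            raise ConnectionError(self.name)
        self.log.append(request)


def forward_update(available_replicas, request):
    surviving = []
    for replica in available_replicas:
        try:
            replica.forward(request)      # step 152: forwarding a client request
            surviving.append(replica)
        except ConnectionError:
            pass                          # step 154: exclude the failed replica
    return surviving


replicas = [Replica("r1"), Replica("r2", crashed=True)]
replicas = forward_update(replicas, "create /.:/svc/foo")
print([r.name for r in replicas])         # -> ['r1']
```

In the alternative embodiment, a restarted or newly added replica agent would later be brought up to date and re-admitted to this set dynamically.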
Several experiments quantify the performance impact of adding high availability to CDS. Latency increase and throughput reduction resulting from performing name space updates were measured. The performance of lookups was also measured and is essentially the same. These two performance metrics are particularly important for failure-free operation and represent the direct performance cost that applications pay in return for high availability and correctness. The experiments also measured the failover time, which quantifies the duration of service unavailability due to a failure in the master CDS server. The experimental environment consists of a switched 10 Mb/s Ethernet network connecting a number of Digital Equipment Corporation (DEC) 3000/400 workstations. Each workstation runs Digital UNIX (OSF/1) v3.0 or higher, DEC's DCE version 1.3, and is equipped with a 133 MHz Alpha processor and 64 MB of memory.
Latency is measured as the time between an application issuing an update request and receiving the corresponding reply from a remote CDS server. The measurements used cdscp, an administration tool that is part of the DCE installation and allows end users to manipulate objects in CDS (performance was also measured at the Name Space Library level, with identical results). A request to create an object in CDS through cdscp takes an average of 404 milliseconds using the vendor-supplied CDS server. About 92% of this time (372 milliseconds) is spent on the client side in the cdscp interface and clerk processing, about 10 milliseconds on the CDS server creating the entry, and the rest on processing the RPC. The same operation takes an average of 461 milliseconds for a modified CDS server (with no backup), showing a latency increase of about 14%.
As mentioned earlier, the modified server in accordance with the present invention needs to perform authorization checks on a client request. In the absence of security delegation, these checks are necessary but duplicate the functionality provided at a clearinghouse. About 53 of the 57 milliseconds of additional time are spent doing these checks. The remaining 4 milliseconds are the actual cost incurred by the master agent in intercepting RPCs and forwarding them to its clearinghouse. Security delegation has also been used on another platform, and using this functionality could reduce latency costs significantly. Another way to reduce latency costs is to cache the authorization information of objects in the replicated name space with the master agent when they are created.
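For reference, the reported averages combine as follows; the snippet simply reproduces the arithmetic on the figures quoted above.

```python
baseline_ms = 404      # vendor-supplied CDS server, object creation via cdscp
modified_ms = 461      # modified server with no backup replicas
overhead_ms = modified_ms - baseline_ms              # 57 ms of added work
security_ms = 53       # redundant authentication/authorization checks
agent_ms = overhead_ms - security_ms                 # about 4 ms for interception

print(f"interception cost: {agent_ms} ms, "
      f"latency increase: {overhead_ms / baseline_ms:.1%}")   # about 14.1%
```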
Table 1: Measurements of latency (ms) and throughput (requests/sec), without security checks.
Table 1 compares the latency of a single update request, without any authorization checks, for a standard CDS server and for our modified server with different numbers of CDS replicas. The master agent concurrently forwards RPCs to the replica agents; the extra overhead of 4-6 msec for each additional replica is due to the cost of thread manipulation and context switching, parameter copying (to generate messages to each replica), and synchronization (waiting for the replicas to finish before sending a reply to the client). This relatively small overhead shows that it is feasible to use a relatively large number of replicas for added reliability without a significant performance penalty.
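The concurrent forwarding and the per-replica synchronization cost described above can be sketched as follows (illustrative Python with threads; the real agents issue DCE RPCs, and the function names here are hypothetical):

    import threading

    def forward_concurrently(request, replicas):
        """Send the same update to every replica in its own thread, then wait
        for all of them before replying to the client; the thread creation,
        parameter copying, and final join correspond to the sources of the
        extra 4-6 msec per additional replica noted in the text."""
        results = [None] * len(replicas)

        def send(i, replica):
            results[i] = replica(dict(request))   # copy parameters per replica

        threads = [threading.Thread(target=send, args=(i, r))
                   for i, r in enumerate(replicas)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()    # synchronization: wait for every replica to finish
        return results

    # Usage with two trivial stand-in replicas.
    ok = lambda req: "ok"
    print(forward_concurrently({"op": "create"}, [ok, ok]))   # ['ok', 'ok']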
Throughput is measured as the maximum rate of concurrent update requests that CDS can handle. This rate averages 3.09 requests/second for the vendor's CDS server and 3.01 requests/second for our server with no backup. The reduction in throughput is thus less than 3%, most of which is due to the redundant authorization check that the agent must perform. Excluding the security checks (see Table 1), the throughput of the server with no backup is also around 3.09 requests/sec, essentially the same as that of the vanilla server. Furthermore, throughput drops by less than 0.5% each time an additional replica is added. These figures are surprisingly low given that the agents add extra overhead and serialize all update requests. They can be ascribed to the fact that the cost of the call from an agent to the CDS server to create an entry is very small compared to the overall cost of the operation when initiated from a client. Furthermore, throughput benefits from multi-threading at the agents and the CDS servers.
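The stated sub-3% reduction follows directly from the two measured rates:

    \[ \frac{3.09 - 3.01}{3.09} \;=\; \frac{0.08}{3.09} \;\approx\; 0.026 \;\approx\; 2.6\% \;<\; 3\%. \]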
It takes a total of 3.2 seconds for a replica server to detect a failure and become the new master server (this does not include the timeout period). This figure primarily includes the time to restart the security server on the replica's machine, broadcast the change to client agents, and start advertising information about the new master. Reinitializing a clerk's cache requires 77.36 seconds, using a slightly modified version of the vendor-supplied DCE script (called dcesetup). This large figure reflects the time necessary to configure a CDS clerk: the process requires restarting all local DCE server daemons, deleting the persistent cache, and communicating with the CDS and security servers. Nevertheless, the new master server is available to handle requests while the clerks reconfigure.
The present invention interposes an agent between CDS clerks and the standard servers. Unlike CDS, the present invention allows continuous updates even if the master clearinghouse fails, ensures coherent information in clerks' caches, guarantees consistency among the replicas of the name space, and includes support for automatic configuration without operator intervention. The performance penalty for these benefits is about 14% in latency and 3% in throughput. Most of this penalty is due to extra authorization checks that will be removed in newer versions of DCE.
The present invention provides several advantages. First, it is compatible with existing CDS clerks, which are the processes on client machines that maintain the local cache and act as proxies for remote CDS servers. Second, the invention provides continuous availability of lookup and update operations despite failures. Third, the present invention ensures correct operation; for example, an update must be performed on all available backups before a reply is sent to the clerk. Fourth, the failover period is short: a backup must take over within a time span that is less than a reasonable timeout value that clients use in communicating with the server. Fifth, the present invention is transparent to clerks and application programs to the greatest extent possible. Finally, all of these features are provided with a small performance penalty.
Although the present invention focuses on CDS, the principles described herein could be applied to other directory services.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. The specification and examples are exemplary only, and the true scope and spirit of the invention is defined by the following claims and their equivalents.

Claims

WE CLAIM:
1. Apparatus for maintaining directories, comprising: a first directory; a first server for managing the first directory; and a first agent for intercepting a request to the first server, and forwarding the request to the first server and a second agent.
2. The apparatus according to claim 1, wherein the first agent comprises: means for copying an endpoint of the first server, and overwriting the endpoint with first agent endpoint information.
3. The apparatus according to claim 1, wherein the first agent comprises: means for determining whether the request to the first server has been completed; and means for forwarding a reply to a source of the request in response to a determination that the request to the first server has been completed.
4. The apparatus according to claim 1, wherein the first agent comprises: means for determining whether the request to the second agent has been completed; and means for forwarding a reply to a source of the request in response to a determination that the request has been completed.
5. The apparatus according to claim 1, wherein the first agent comprises: means for forwarding the request to a third agent.
6. The apparatus according to claim 1, wherein the second agent comprises: means for monitoring whether the first agent is alive.
7. The apparatus according to claim 6, wherein the second agent further comprises: means for electing a new agent to replace the first agent when the means for monitoring determines that the first agent is not alive.
8. The apparatus according to claim 7, wherein the new agent comprises: means for broadcasting to other agents that the new agent is replacing the first agent.
9. The apparatus according to claim 8, wherein the second agent comprises: means for reinitializing and reconfiguring the second agent in response to the broadcasting.
10. The apparatus according to claim 1, wherein the first agent comprises: means for determining that the second agent is not alive; and means for excluding the second agent from further communication.
11. A method for maintaining directories, comprising the steps of: storing a first directory; managing the first directory using a first server; intercepting a request to the first server; and forwarding the request to the first server and a second agent.
12. The method according to claim 11, further comprising a step of copying an endpoint of the first server; and overwriting the endpoint with first agent endpoint information.
13. The method according to claim 11, wherein the step of intercepting comprises the substeps of: determining whether the request to the first server has been completed; and forwarding a reply to a source of the request in response to a determination that the request to the first server has been completed.
14. The method according to claim 11, further comprising the steps of: storing a second directory; managing the second directory using a second server; and receiving requests to the second server using the second agent.
15. The method according to claim 11, wherein the step of intercepting using a first agent comprises: determining whether the request to the second agent has been completed; and forwarding a reply to a source of the request in response to a determination that the request has been completed.
16. The method according to claim 11, wherein the step of intercepting using a first agent comprises a substep of: forwarding the request to a third agent.
17. The method according to claim 14, wherein the step of receiving using a second agent comprises a substep of: monitoring whether the first agent is alive.
18. The method according to claim 17, wherein the step of receiving using a second agent further comprises a substep of: electing a new agent to replace the first agent when the means for monitoring determines that the first agent is not alive.
19. The method according to claim 18, further comprising a step of: broadcasting from the new agent to other agents an indication that the new agent is replacing the first agent.
20. The method according to claim 19, further comprising a step of: reinitializing and reconfiguring the second agent in response to the indication.
21. The method according to claim 14, further comprising steps of: determining that the second agent is not alive; and excluding the second agent from further communication.
PCT/US1997/010739 1996-06-21 1997-06-20 Apparatus and methods for highly available directory services in the distributed computing environment WO1997049039A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1999996P 1996-06-21 1996-06-21
US60/019,999 1996-06-21

Publications (1)

Publication Number Publication Date
WO1997049039A1 true WO1997049039A1 (en) 1997-12-24

Family

ID=21796201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/010739 WO1997049039A1 (en) 1996-06-21 1997-06-20 Apparatus and methods for highly available directory services in the distributed computing environment

Country Status (2)

Country Link
US (1) US6014686A (en)
WO (1) WO1997049039A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1096751A2 (en) * 1999-10-21 2001-05-02 Sun Microsystems, Inc. Method and apparatus for reaching agreement between nodes in a distributed system
FR2803062A1 (en) * 1999-12-23 2001-06-29 Bull Sa Electronic directory digital word administration distribution having bus sub assemblies replicated everywhere where local applications are and creating two objects carrying out object creation/recording/broadcasting steps.
EP1137993A1 (en) * 1998-11-05 2001-10-04 Bea Systems, Inc. A duplicated naming service in a distributed processing system
EP1223510A3 (en) * 2001-01-16 2003-03-12 Siemens Aktiengesellschaft Method for automatic restoring data in a database
FR2843209A1 (en) * 2002-08-02 2004-02-06 Cimai Technology Software application mirroring method for replication of a software application in different nodes of a computer cluster to provide seamless continuity to client computers in the case of failure of an application server
GB2407887A (en) * 2003-11-06 2005-05-11 Siemens Med Solutions Health Automatically modifying fail-over configuration of back-up devices
WO2007035747A2 (en) * 2005-09-19 2007-03-29 Millennium It (Usa) Inc. Scalable fault tolerant system
US7334232B2 (en) 1998-11-05 2008-02-19 Bea Systems, Inc. Clustered enterprise Java™ in a secure distributed processing system
US7454755B2 (en) 1998-11-05 2008-11-18 Bea Systems, Inc. Smart stub or enterprise Java™ bean in a distributed processing system

Families Citing this family (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6446070B1 (en) * 1998-02-26 2002-09-03 Sun Microsystems, Inc. Method and apparatus for dynamic distributed computing over a network
US6185611B1 (en) * 1998-03-20 2001-02-06 Sun Microsystem, Inc. Dynamic lookup service in a distributed system
US6272559B1 (en) * 1997-10-15 2001-08-07 Sun Microsystems, Inc. Deferred reconstruction of objects and remote loading for event notification in a distributed system
US6393497B1 (en) * 1998-03-20 2002-05-21 Sun Microsystems, Inc. Downloadable smart proxies for performing processing associated with a remote procedure call in a distributed system
GB9620196D0 (en) * 1996-09-27 1996-11-13 British Telecomm Distributed processing
US5832529A (en) * 1996-10-11 1998-11-03 Sun Microsystems, Inc. Methods, apparatus, and product for distributed garbage collection
SE9603753L (en) * 1996-10-14 1998-04-06 Mirror Image Internet Ab Procedure and apparatus for information transmission on the Internet
US6256675B1 (en) * 1997-05-06 2001-07-03 At&T Corp. System and method for allocating requests for objects and managing replicas of objects on a network
US6453334B1 (en) 1997-06-16 2002-09-17 Streamtheory, Inc. Method and apparatus to allow remotely located computer programs and/or data to be accessed on a local computer in a secure, time-limited manner, with persistent caching
US6192405B1 (en) * 1998-01-23 2001-02-20 Novell, Inc. Method and apparatus for acquiring authorized access to resources in a distributed system
US8060613B2 (en) * 1998-02-10 2011-11-15 Level 3 Communications, Llc Resource invalidation in a content delivery network
US6185598B1 (en) * 1998-02-10 2001-02-06 Digital Island, Inc. Optimized network resource location
GB2334353B (en) * 1998-02-12 2002-08-07 Ibm An apparatus,method and computer program product for client/server computing with the ability to select which servers are capable of creating transaction stat
EP1057107A1 (en) * 1998-02-26 2000-12-06 Sun Microsystems, Inc. Dynamic lookup service in a distributed system
US6415289B1 (en) * 1998-03-19 2002-07-02 Williams Communications, Inc. Network information control method utilizing a common command format and a centralized storage management system
US6286047B1 (en) * 1998-09-10 2001-09-04 Hewlett-Packard Company Method and system for automatic discovery of network services
US7076476B2 (en) * 1999-03-02 2006-07-11 Microsoft Corporation Method and system for integrated service administration via a directory service
US6275470B1 (en) 1999-06-18 2001-08-14 Digital Island, Inc. On-demand overlay routing for computer-based communication networks
US6553401B1 (en) * 1999-07-09 2003-04-22 Ncr Corporation System for implementing a high volume availability server cluster including both sharing volume of a mass storage on a local site and mirroring a shared volume on a remote site
US8543901B1 (en) 1999-11-01 2013-09-24 Level 3 Communications, Llc Verification of content stored in a network
JP3967509B2 (en) * 1999-12-22 2007-08-29 株式会社東芝 Recording medium storing a program for determining the server computer that was last processing, and a highly available computer system
IL140504A0 (en) * 2000-02-03 2002-02-10 Bandwiz Inc Broadcast system
WO2001080524A2 (en) * 2000-04-17 2001-10-25 Circadence Corporation Method and system for overcoming denial of service attacks
US7062567B2 (en) 2000-11-06 2006-06-13 Endeavors Technology, Inc. Intelligent network streaming and execution system for conventionally coded applications
US20020087883A1 (en) * 2000-11-06 2002-07-04 Curt Wohlgemuth Anti-piracy system for remotely served computer applications
US20020083183A1 (en) * 2000-11-06 2002-06-27 Sanjay Pujare Conventionally coded application conversion system for streamed delivery and execution
US8831995B2 (en) * 2000-11-06 2014-09-09 Numecent Holdings, Inc. Optimized server for streamed applications
US8402124B1 (en) * 2000-11-16 2013-03-19 International Business Machines Corporation Method and system for automatic load balancing of advertised services by service information propagation based on user on-demand requests
US7451196B1 (en) 2000-12-15 2008-11-11 Stream Theory, Inc. Method and system for executing a software application in a virtual environment
US7296275B2 (en) * 2001-01-04 2007-11-13 Sun Microsystems, Inc. Method and system for passing objects in a distributed system using serialization contexts
US7525956B2 (en) 2001-01-11 2009-04-28 Transnexus, Inc. Architectures for clearing and settlement services between internet telephony clearinghouses
WO2002057957A1 (en) * 2001-01-16 2002-07-25 Sangate Systems, Inc. System and method for cross-platform update propagation
US6820097B2 (en) 2001-01-16 2004-11-16 Sepaton, Inc. System and method for cross-platform update propagation
WO2002086748A1 (en) * 2001-04-18 2002-10-31 Distributed Computing, Inc. Method and apparatus for testing transaction capacity of site on a global communication network
US7370365B2 (en) * 2001-09-05 2008-05-06 International Business Machines Corporation Dynamic control of authorization to access internet services
US7756969B1 (en) 2001-09-07 2010-07-13 Oracle America, Inc. Dynamic provisioning of identification services in a distributed system
US7660887B2 (en) 2001-09-07 2010-02-09 Sun Microsystems, Inc. Systems and methods for providing dynamic quality of service for a distributed system
US20030051029A1 (en) * 2001-09-07 2003-03-13 Reedy Dennis G. Dynamic provisioning of sevice components in a distributed system
WO2003027906A2 (en) * 2001-09-28 2003-04-03 Savvis Communications Corporation System and method for policy dependent name to address resolutioin.
US7860964B2 (en) * 2001-09-28 2010-12-28 Level 3 Communications, Llc Policy-based content delivery network selection
US7373644B2 (en) * 2001-10-02 2008-05-13 Level 3 Communications, Llc Automated server replication
US20080279222A1 (en) * 2001-10-18 2008-11-13 Level 3 Communications Llc Distribution of traffic across a computer network
US20030079027A1 (en) 2001-10-18 2003-04-24 Michael Slocombe Content request routing and load balancing for content distribution networks
US9167036B2 (en) 2002-02-14 2015-10-20 Level 3 Communications, Llc Managed object replication and delivery
US7096228B2 (en) * 2002-03-27 2006-08-22 Microsoft Corporation Method and system for managing data records on a computer network
US7093013B1 (en) * 2002-06-19 2006-08-15 Alcatel High availability system for network elements
US7089323B2 (en) * 2002-06-21 2006-08-08 Microsoft Corporation Method for multicasting a message on a computer network
JP2004326478A (en) * 2003-04-25 2004-11-18 Hitachi Ltd Storage device system and management program
US7404189B2 (en) * 2003-12-30 2008-07-22 International Business Machines Corporation Scheduler supporting web service invocation
US7792874B1 (en) 2004-01-30 2010-09-07 Oracle America, Inc. Dynamic provisioning for filtering and consolidating events
US7577721B1 (en) * 2004-06-08 2009-08-18 Trend Micro Incorporated Structured peer-to-peer push distribution network
US20060010203A1 (en) * 2004-06-15 2006-01-12 Nokia Corporation Personal server and network
US20060048136A1 (en) * 2004-08-25 2006-03-02 Vries Jeff D Interception-based resource detection system
US7240162B2 (en) 2004-10-22 2007-07-03 Stream Theory, Inc. System and method for predictive streaming
WO2006055445A2 (en) 2004-11-13 2006-05-26 Stream Theory, Inc. Hybrid local/remote streaming
US7979862B2 (en) * 2004-12-21 2011-07-12 Hewlett-Packard Development Company, L.P. System and method for replacing an inoperable master workload management process
US9361311B2 (en) * 2005-01-12 2016-06-07 Wandisco, Inc. Distributed file system using consensus nodes
DE102005010690B4 (en) * 2005-03-09 2007-04-12 Knorr-Bremse Systeme für Schienenfahrzeuge GmbH Oil-injected compressor with temperature switch
US20060218165A1 (en) * 2005-03-23 2006-09-28 Vries Jeffrey De Explicit overlay integration rules
WO2006102621A2 (en) * 2005-03-23 2006-09-28 Stream Theory, Inc. System and method for tracking changes to files in streaming applications
US8024523B2 (en) 2007-11-07 2011-09-20 Endeavors Technologies, Inc. Opportunistic block transmission with time constraints
US7624432B2 (en) * 2005-06-28 2009-11-24 International Business Machines Corporation Security and authorization in management agents
US7904759B2 (en) * 2006-01-11 2011-03-08 Amazon Technologies, Inc. System and method for service availability management
US8438657B2 (en) * 2006-02-07 2013-05-07 Siemens Aktiengesellschaft Method for controlling the access to a data network
US8601112B1 (en) * 2006-03-14 2013-12-03 Amazon Technologies, Inc. Method and system for collecting and analyzing time-series data
US7979439B1 (en) 2006-03-14 2011-07-12 Amazon Technologies, Inc. Method and system for collecting and analyzing time-series data
US9037698B1 (en) 2006-03-14 2015-05-19 Amazon Technologies, Inc. Method and system for collecting and analyzing time-series data
US7461289B2 (en) * 2006-03-16 2008-12-02 Honeywell International Inc. System and method for computer service security
US8676959B2 (en) * 2006-03-27 2014-03-18 Sap Ag Integrated heartbeat monitoring and failover handling for high availability
US8261345B2 (en) * 2006-10-23 2012-09-04 Endeavors Technologies, Inc. Rule-based application access management
EP2031816B1 (en) * 2007-08-29 2012-02-22 NTT DoCoMo, Inc. Optimal operation of hierarchical peer-to-peer networks
US8892738B2 (en) 2007-11-07 2014-11-18 Numecent Holdings, Inc. Deriving component statistics for a stream enabled application
US20090157766A1 (en) * 2007-12-18 2009-06-18 Jinmei Shen Method, System, and Computer Program Product for Ensuring Data Consistency of Asynchronously Replicated Data Following a Master Transaction Server Failover Event
CA2616229A1 (en) * 2007-12-21 2009-06-21 Ibm Canada Limited - Ibm Canada Limitee Redundant systems management frameworks for network environments
US8930538B2 (en) 2008-04-04 2015-01-06 Level 3 Communications, Llc Handling long-tail content in a content delivery network (CDN)
US9762692B2 (en) * 2008-04-04 2017-09-12 Level 3 Communications, Llc Handling long-tail content in a content delivery network (CDN)
US10924573B2 (en) 2008-04-04 2021-02-16 Level 3 Communications, Llc Handling long-tail content in a content delivery network (CDN)
US9100246B1 (en) * 2008-06-19 2015-08-04 Symantec Corporation Distributed application virtualization
US8782748B2 (en) 2010-06-22 2014-07-15 Microsoft Corporation Online service access controls using scale out directory features
US8856580B2 (en) * 2011-04-07 2014-10-07 Hewlett-Packard Development Company, L.P. Controller election
US9569513B1 (en) * 2013-09-10 2017-02-14 Amazon Technologies, Inc. Conditional master election in distributed databases
EP3242410B1 (en) * 2014-03-25 2020-05-20 Lantiq Beteiligungs-GmbH & Co.KG Interference mitigation
US9395969B2 (en) 2014-09-23 2016-07-19 International Business Machines Corporation Complex computer environment installation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5408619A (en) * 1987-09-08 1995-04-18 Digital Equipment Corporation Naming service database updating technique

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5426427A (en) * 1991-04-04 1995-06-20 Compuserve Incorporated Data transmission routing system
JP3599364B2 (en) * 1993-12-15 2004-12-08 富士通株式会社 Network equipment
US5689701A (en) * 1994-12-14 1997-11-18 International Business Machines Corporation System and method for providing compatibility between distributed file system namespaces and operating system pathname syntax
US5692180A (en) * 1995-01-31 1997-11-25 International Business Machines Corporation Object-oriented cell directory database for a distributed computing environment
US5784612A (en) * 1995-05-03 1998-07-21 International Business Machines Corporation Configuration and unconfiguration of distributed computing environment components
US5713017A (en) * 1995-06-07 1998-01-27 International Business Machines Corporation Dual counter consistency control for fault tolerant network file servers
US5852724A (en) * 1996-06-18 1998-12-22 Veritas Software Corp. System and method for "N" primary servers to fail over to "1" secondary server
US5857191A (en) * 1996-07-08 1999-01-05 Gradient Technologies, Inc. Web application server with secure common gateway interface

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1137993A4 (en) * 1998-11-05 2006-09-27 Bea Systems Inc A duplicated naming service in a distributed processing system
EP1137993A1 (en) * 1998-11-05 2001-10-04 Bea Systems, Inc. A duplicated naming service in a distributed processing system
US8069447B2 (en) 1998-11-05 2011-11-29 Oracle International Corporation Smart stub or enterprise java bean in a distributed processing system
US7480679B2 (en) 1998-11-05 2009-01-20 Bea Systems, Inc. Duplicated naming service in a distributed processing system
US7454755B2 (en) 1998-11-05 2008-11-18 Bea Systems, Inc. Smart stub or enterprise Java™ bean in a distributed processing system
US7334232B2 (en) 1998-11-05 2008-02-19 Bea Systems, Inc. Clustered enterprise Java™ in a secure distributed processing system
EP1096751A3 (en) * 1999-10-21 2001-08-22 Sun Microsystems, Inc. Method and apparatus for reaching agreement between nodes in a distributed system
EP1096751A2 (en) * 1999-10-21 2001-05-02 Sun Microsystems, Inc. Method and apparatus for reaching agreement between nodes in a distributed system
US6957254B1 (en) 1999-10-21 2005-10-18 Sun Microsystems, Inc Method and apparatus for reaching agreement between nodes in a distributed system
FR2803062A1 (en) * 1999-12-23 2001-06-29 Bull Sa Electronic directory digital word administration distribution having bus sub assemblies replicated everywhere where local applications are and creating two objects carrying out object creation/recording/broadcasting steps.
EP1223510A3 (en) * 2001-01-16 2003-03-12 Siemens Aktiengesellschaft Method for automatic restoring data in a database
FR2843209A1 (en) * 2002-08-02 2004-02-06 Cimai Technology Software application mirroring method for replication of a software application in different nodes of a computer cluster to provide seamless continuity to client computers in the case of failure of an application server
WO2004015574A3 (en) * 2002-08-02 2004-09-02 Meiosys Functional continuity by replicating a software application in a multi-computer architecture
WO2004015574A2 (en) * 2002-08-02 2004-02-19 Meiosys Functional continuity by replicating a software application in a multi-computer architecture
US7225356B2 (en) 2003-11-06 2007-05-29 Siemens Medical Solutions Health Services Corporation System for managing operational failure occurrences in processing devices
GB2407887B (en) * 2003-11-06 2006-04-19 Siemens Med Solutions Health Managing failover information in a group of networked processing devices
GB2407887A (en) * 2003-11-06 2005-05-11 Siemens Med Solutions Health Automatically modifying fail-over configuration of back-up devices
WO2007035747A2 (en) * 2005-09-19 2007-03-29 Millennium It (Usa) Inc. Scalable fault tolerant system
WO2007035747A3 (en) * 2005-09-19 2007-05-18 Millennium It Usa Inc Scalable fault tolerant system
US7966514B2 (en) 2005-09-19 2011-06-21 Millennium It (Usa), Inc. Scalable fault tolerant system

Also Published As

Publication number Publication date
US6014686A (en) 2000-01-11

Similar Documents

Publication Publication Date Title
US6014686A (en) Apparatus and methods for highly available directory services in the distributed computing environment
JP4307673B2 (en) Method and apparatus for configuring and managing a multi-cluster computer system
US10248655B2 (en) File storage system, cache appliance, and method
US8862644B2 (en) Data distribution system
US6393485B1 (en) Method and apparatus for managing clustered computer systems
US9436694B2 (en) Cooperative resource management
US5796999A (en) Method and system for selectable consistency level maintenance in a resilent database system
US5713017A (en) Dual counter consistency control for fault tolerant network file servers
US6360331B2 (en) Method and system for transparently failing over application configuration information in a server cluster
US6938031B1 (en) System and method for accessing information in a replicated database
US7007047B2 (en) Internally consistent file system image in distributed object-based data storage
US6751674B1 (en) Method and system for replication in a hybrid network
Burrows The Chubby lock service for loosely-coupled distributed systems
US6748381B1 (en) Apparatus and method for maintaining consistency of shared data resources in a cluster environment
US8099388B2 (en) Efficient handling of mostly read data in a computer server
US5822531A (en) Method and system for dynamically reconfiguring a cluster of computer systems
US10922303B1 (en) Early detection of corrupt data partition exports
US20020174420A1 (en) Apparatus and method for automated creation of resource types
CN105814544B (en) System and method for supporting persistent partition recovery in a distributed data grid
US9699025B2 (en) System and method for managing multiple server node clusters using a hierarchical configuration data structure
US20050005200A1 (en) Method and apparatus for executing applications on a distributed computer system
EP0684558A1 (en) Distributed data processing system
US7000016B1 (en) System and method for multi-site clustering in a network
US8417679B1 (en) Fast storage writes
US20070011328A1 (en) System and method for application deployment service

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 98503381

Format of ref document f/p: F

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase