WO2008054388A1 - System and method for network disaster recovery - Google Patents

System and method for network disaster recovery

Info

Publication number
WO2008054388A1
WO2008054388A1 (PCT/US2006/042662)
Authority
WO
WIPO (PCT)
Prior art keywords
server
personal information
data
information management
destination server
Prior art date
Application number
PCT/US2006/042662
Other languages
French (fr)
Inventor
Tyrone F. Pike
Tim Egbert
Gordon Gundmundson
Dana Rees
Scott Smith
Alan Smoot
Stephen Taylor
Original Assignee
Cemaphore Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cemaphore Systems, Inc. filed Critical Cemaphore Systems, Inc.
Priority to PCT/US2006/042662 priority Critical patent/WO2008054388A1/en
Publication of WO2008054388A1 publication Critical patent/WO2008054388A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2071Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F11/2074Asynchronous techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2066Optimisation of the communication load


Abstract

Disclosed herein is a system and method for network disaster recovery. The system and method include a computer network system, comprising a source server at a primary location configured to provide application services and data to a client; a destination server at a remote location configured to store backup copies of the application services and data; a first shadow server at the primary location configured to monitor the transaction stream for personal information management databases configured on the source server and to send personal information management data to the remote location; a second shadow server at the remote location for receiving the personal information management data and storing the personal information management data in a corresponding database on the destination server; and means for providing the personal information management data on the destination server to a client when the source server is unavailable.

Description

TITLE OF THE INVENTION
SYSTEM AND METHOD FOR NETWORK DISASTER RECOVERY

BACKGROUND OF THE INVENTION
[0001] Cross Reference to Related Applications: This application claims priority to and incorporates by reference provisional application No. 60/732,247 filed October 31, 2005.
[0002] Field of the Invention: The present invention relates to computer network solutions for overcoming network outages. More specifically, the present invention relates to systems and methods for replicating network data to provide continued access in the event a portion of the network becomes unavailable.
[0003] State of the Art: As known in the art, a computer network generally comprises a system of separate devices linked for communication with one another to allow functions such as coordinated execution of software applications or remote access of data. Common devices making up a network may include application and data servers, caching devices, network management devices, and clients such as end user workstations. These devices may reside at a single location comprising a local area network (LAN), or they may be spread across various remote locations as part of a wide area network (WAN).
[0004] As customers come to expect the continuation of service under various failure scenarios, many network applications must "live" in a highly available environment. A common example of such an application is an email system like that provided using Microsoft Exchange®. Email is rapidly becoming the single most critical application for many organizations. Many companies are doing a significant part of their core business via email and, as such, its continual availability is of crucial importance.
[0005] One approach to ensuring the availability of an application during a failure is to maintain a backup location for application services and data. On the local level, this may involve providing a secondary server for storing backup copies of the application services and data. In the event the primary server for that application fails, the network switches or "fails over" to the secondary server, thereby enabling continued access to the application. On a broader network level, protection from failures of an entire primary network location, sometimes referred to as "Disaster Recovery" (DR), may involve migrating application services and data to another remote location so applications can continue operating. As such, DR can be thought of as a trans-locale variant of a high-availability (HA) solution, designed to survive the loss of not just a single server, but up to and including the loss of an entire data center or branch office site.
[0006] A key issue with the above-described HA solutions, especially with respect to DR, is that of data coherency between the primary network location and the remote, backup location. Because any data maintained by the primary location must be copied to the remote location, the remote location may not have the most recently stored data, depending on the frequency of replication. Upon failover to the remote location, any data received at the primary location since the last replication event may be lost.
Controlling data coherency such that the data at the remote location is as close as possible to that at the primary location at the time of failover is referred to as the "Recovery Point Objective" (RPO).
[0007] Many DR products on the market today attempt to maintain a zero or near-zero RPO. They often do this by forcing all data write operations at the primary location to occur synchronously at the remote location as well. Using this approach, lost data is minimized or eliminated altogether, but a large amount of network traffic is generated and overall performance becomes very susceptible to high link latency. Optimizations of this scheme often involve shipping recovery logs that can be replayed once a failure occurs. This operational mode results in a number of products that achieve a very short RPO but exhibit a high "Recovery Time Objective" (RTO), i.e. the time it takes to restore access to application services and data, because log replays can be slow.
[0008] In many applications, this characteristic of low RPO at the expense of a high RTO is appropriate, but in the case of email, the prime concern is often to bring email services back on-line for all users as soon as possible. Most users would prefer to have a partial mailbox available immediately, with usable email sends and receives for new messages, and have older emails be made available later. Log replay schemes do not often support such a model.
BRIEF SUMMARY OF PREFERRED EMBODIMENTS OF THE INVENTION
[0009] In accordance with preferred embodiments of the present invention, systems and methods are disclosed for maintaining access to application services and data during network failures. As used herein, network failures may include any instance wherein a portion of a network becomes unavailable, such as a single server or even an entire network location. Furthermore, a network failure may be the result of network hardware or software malfunctions, but is also intended to encompass other situations where portions of a network are unavailable such as scheduled maintenance or administrator activity.
[0010] According to one embodiment of the present invention, a computer network system is provided comprising a source server at a primary location configured to provide application services and data to a client; a destination server at a remote location configured to store backup copies of the application services and data; a first shadow server at the primary location configured to monitor the transaction stream for mailboxes configured on the source server and to send email data to the remote location; a second shadow server at the remote location for receiving the email data and storing the email data in a corresponding mailbox on the destination server; and means for providing the email data on the destination server to a client when the source server is unavailable. According to another embodiment, a method of replicating data between a primary location and a remote location is provided, comprising monitoring the transaction stream for mailboxes configured on a source email server; sending email data related to each transaction detected to a remote location; storing the email data in a corresponding mailbox on a destination server at the remote location; and providing the email data stored on the destination server to a client when the source server is unavailable.
[0011] Other and further features and advantages of the present invention will be apparent from the following description when read in conjunction with the accompanying drawing. It will be understood by one of ordinary skill in the art that the following preferred embodiments are provided for illustrative and exemplary purposes only, and that numerous combinations of the elements of the various embodiments of the present invention are possible.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1 is a schematic representation of an exemplary network according to the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
[0013] Referring in general to the accompanying drawing, various aspects of an exemplary computer network 2 are shown. Common elements of the disclosed embodiments are designated with like reference numerals for clarity. It should be understood that the figure presented is not meant to be illustrative of an actual configuration for a computer network, but is merely an idealized schematic representation employed to more clearly and fully depict the invention.
[0014] Turning to FIG. 1, network 2 is illustrated as comprising a primary network location 4 and a remote network location 6. Primary location 4 includes one or more source servers 8, such as Microsoft Exchange® servers, for storing and/or providing email services and data to network clients 10. FIG. 1 shows that remote location 6 also includes one or more destination servers 12, which are used to store backup copies of at least portions of the email services and data residing on source servers 8. While the present embodiment is described in terms of email services and data, it should be understood that PIM or similar services and data, such as for calendar applications, contacts, etc., are also within the scope of the present invention. Primary location 4 and remote location 6 further include one or more shadow servers 14A, 14B co-located with source and destination servers 8 and 12. Shadow servers 14 are linked by a communications channel, illustrated in FIG. 1 as WAN 16. Although a number of destination servers 12 equal to the number of source servers 8 is illustrated in FIG. 1, it should be understood that this is not required, and that there may be more or fewer destination servers 12 than there are source servers 8. A network configuration according to the present invention may, for example, support multiple source servers 8 providing input to a single destination server 12 which, under a failure condition, can support multiple client mailboxes.
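The topology just described, several source servers at the primary site feeding a single destination server at the remote site, with a shadow server co-located at each site, can be sketched in a few lines of Python. This is purely an illustrative model of the described arrangement; the server names are hypothetical and no actual Exchange machinery is involved.

```python
from dataclasses import dataclass, field

@dataclass
class MailServer:
    name: str
    mailboxes: dict = field(default_factory=dict)  # mailbox id -> messages

@dataclass
class ShadowServer:
    """Co-located monitor/receiver; one at each site, linked over the WAN."""
    location: str
    watched: list = field(default_factory=list)

# Several source servers can feed one destination server (an N-to-1 mapping),
# which under a failure condition then hosts multiple client mailboxes.
sources = [MailServer(f"exch-src-{i}") for i in range(3)]
destination = MailServer("exch-dst-0")
replication_map = {s.name: destination.name for s in sources}

shadow_primary = ShadowServer("primary", watched=[s.name for s in sources])
shadow_remote = ShadowServer("remote")

assert set(replication_map.values()) == {"exch-dst-0"}
```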
[0015] In order to maintain data coherency between the servers at primary location 4 and remote location 6, shadow server 14A monitors the transaction stream for mailboxes configured on source servers 8, and tracks the delivery of messages or other transactions associated with those mailboxes. In the case of Microsoft Exchange®, for example, transactions are externally harvested from the source Exchange transaction engine. This makes the system of the present invention simple to deploy and administer, requiring no installation of software on the source servers 8. According to one embodiment of the present invention, message extraction from and insertion to source and destination servers 8 and 12 is performed via external MAPI interfaces.
[0016] When deployed with Microsoft Exchange®, configuration and management of a system according to the present invention may be based upon the integration of a Microsoft Management Console (MMC) snap-in component into the administration of the Active Directory (AD) section of the MMC environment. As such, the plug-in is an intuitive extension of existing AD operations, adding shadow servers 14A, 14B as a form of computer object in a domain. Consequently, most management operations fit into the model familiar to an Exchange server administrator. As a further example, a web services administrator interface may be used to provide Web administration or anything that can consume web services, possibly via a WSDL.
[0017] Each new message or transaction that is detected is transported across WAN 16 to shadow server 14B at remote location 6 and is replicated in the corresponding destination server mailbox on destination server 12. Messages or transactions delivered to a source server 8 are not synchronously delivered to destination server 12 as part of the I/O stream, as that could lead to unacceptable latency delays on the source side. Instead, asynchronous, in-order delivery of transactions is made. (In-order delivery of a transaction stream is a necessary attribute to ensure that transactions cannot be misinterpreted due to ordering changes, e.g., a delete message occurring before a create message.) Accordingly, there are no heavy synchronous loads on the WAN bandwidth, and high-speed, dedicated links are not required as may be the case with prior art DR solutions.
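The asynchronous, in-order delivery described in paragraph [0017] can be sketched with a simple FIFO queue standing in for the WAN transport. This is a minimal illustration of why ordering matters, not the actual transport protocol; the transaction tuple format is an assumption made for the example.

```python
from collections import deque

def apply_transaction(mailbox: dict, txn: tuple) -> None:
    """Apply a single harvested mailbox transaction at the destination."""
    op, msg_id, *payload = txn
    if op == "create":
        mailbox[msg_id] = payload[0]
    elif op == "delete":
        mailbox.pop(msg_id, None)

def replicate(source_stream, destination_mailbox: dict) -> None:
    """Asynchronous, in-order shipping: transactions are queued as they are
    harvested and drained FIFO, so a 'delete' can never overtake the
    'create' it depends on."""
    queue = deque(source_stream)  # stands in for the WAN link
    while queue:
        apply_transaction(destination_mailbox, queue.popleft())

dst = {}
replicate([("create", 1, "hello"), ("create", 2, "world"), ("delete", 1)], dst)
assert dst == {2: "world"}  # message 1 was created and then deleted, in order
```

If the same three transactions were applied out of order (the delete first), message 1 would survive at the destination, which is exactly the misinterpretation that in-order delivery prevents.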
[0018] This further provides a multi-location, point-to-point failover solution that functions at the granularity of a single mailbox and provides transactional remote updates. It therefore enables selective configuration and management of specific mailboxes, allowing an administrator to deploy a minimal-configuration solution for critical users in the company's email community, resulting in significantly lower acquisition cost, lower storage demands, and lower performance requirements placed on the destination server 12.
[0019] Replication of data according to the present invention maintains complete consistency: a source and destination mailbox pair is guaranteed to be maintained in a completely self-consistent state at all times. Because replication between source and destination servers 8 and 12 is transaction based, data is moved more efficiently across WAN 16, and database corruption cannot be propagated as happens with many prior art DR solutions.
[0020] Awareness of the context of the transactions being collected from the source servers 8 allows the present invention to optimize the use of network bandwidth by removing many repetitive elements. A prime contributor to data repetition arises when reading multiple mailboxes that contain messages with the same attachments. The source server 8 may employ a Single Instance Store model for mapping multiple mailboxes to the same attachment. Generally, this only exists for attachments or messages hosted in mailboxes on the same server storage group.
[0021] According to one embodiment of the present invention, the same Single Instance image of the attachment is maintained across all target mailboxes that reside in the same storage group on the destination server 12. In addition to this destination storage optimization, the communications transport layer will only transmit the attachment once across the link. The use of advanced buffering compression methods also allows the communications layer to detect commonality of message contents and properties and reduces the amount of transmitted data. Thus, messages are analyzed for commonality across multiple mailboxes and only transported once across WAN 16.
[0022] According to another embodiment of the present invention, fail-over between source and destination mailboxes is a manually driven operation because fail-over may be due to scheduled maintenance on a server; load-balancing of a user workload at a location; or failure of a storage group, server, or entire site. Attempting to automate fail-over is fraught with risks because false detection of failure can easily occur and can lead to a so-called "split-brain" problem where the source and destination mailboxes both believe themselves to be the authoritative mailbox for the user.
[0023] The mailboxes on the destination server 12 according to one embodiment of the present invention are generally not available to the client application 10. Instead, they exist as administrable shadow mailboxes only. This is particularly necessary for Microsoft Exchange environments, because Active Directory (AD) is the arbiter of the unique location of a mailbox in the network. Once initiated, a fail-over can be performed at the mailbox, storage-group, or Exchange server level and causes one or more of the selected mailboxes to be made available to the client session at the target destination location.
[0024] By way of example, the failover may occur by:
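The "transmit the attachment only once across the link" optimization of paragraph [0021] can be sketched by deduplicating on a content hash before shipping. This is an illustrative model under the assumption that attachments are identified by their bytes; the wire format and function names are hypothetical.

```python
import hashlib

def ship_messages(messages, wire: list) -> None:
    """Send each distinct attachment body over the link only once;
    subsequent messages reference it by content hash."""
    sent = set()
    for mailbox, body, attachment in messages:
        digest = hashlib.sha256(attachment).hexdigest()
        if digest not in sent:
            wire.append(("attachment", digest, attachment))  # first copy only
            sent.add(digest)
        wire.append(("message", mailbox, body, digest))  # reference by hash

wire = []
report = b"quarterly-report-bytes"
ship_messages(
    [("alice", "FYI", report), ("bob", "see attached", report)],
    wire,
)
# Two mailboxes received the same attachment, but it crosses the link once.
assert sum(1 for item in wire if item[0] == "attachment") == 1
assert sum(1 for item in wire if item[0] == "message") == 2
```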
1. Atomically modifying the AD configuration to map the client mailbox to the destination Exchange server
2. Modifying the destination Exchange server to map the client mailbox to the current server
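The two failover steps above can be sketched as a pair of mapping updates: repoint the directory entry for the mailbox, then register the mailbox on the destination server. This is a conceptual sketch only; the real operation is an atomic Active Directory modification, and all names here are hypothetical.

```python
def fail_over(directory: dict, mailbox: str,
              destination_server: str, dest_config: dict) -> None:
    """Sketch of the two failover steps: (1) repoint the directory's
    mailbox-to-server mapping at the destination server, then (2) tell
    the destination server it now hosts the mailbox."""
    directory[mailbox] = destination_server                          # step 1
    dest_config.setdefault(destination_server, set()).add(mailbox)   # step 2

directory = {"alice": "exch-src-0"}
dest_config = {}
fail_over(directory, "alice", "exch-dst-0", dest_config)
assert directory["alice"] == "exch-dst-0"
assert "alice" in dest_config["exch-dst-0"]
```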
[0025] For some email systems, the experience of the client application 10 in the event of a failover may be that access to the client mailbox will be interrupted. With a Microsoft Exchange® system, for example, the client 10 may require an add-in to be installed for Microsoft Outlook® (or another client email application), or failover will require a restart of Outlook so that it can access the AD server for the current mapping to the correct Exchange server. If the add-in is installed, however, then the fail-over can occur seamlessly and transparently from the client perspective. The add-in detects the connection failure, validates the configuration from AD of the mapping of mailbox to Exchange server, detects the change, and then remaps the connection to the destination server.
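The add-in behavior just described (detect the connection failure, re-read the directory's mailbox-to-server mapping, remap the session) can be sketched as follows. This is not the Outlook add-in itself, only an illustration of its control flow; the session and directory structures are assumptions made for the example.

```python
def reconnect(session: dict, directory: dict) -> dict:
    """Client add-in sketch: on connection failure, consult the directory
    for the mailbox's current server and remap the session to it."""
    if not session["connected"]:
        current = directory[session["mailbox"]]  # re-validate mapping from AD
        if current != session["server"]:
            session["server"] = current          # remap to destination server
        session["connected"] = True
    return session

session = {"mailbox": "alice", "server": "exch-src-0", "connected": False}
directory = {"alice": "exch-dst-0"}  # failover already repointed the entry
reconnect(session, directory)
assert session["server"] == "exch-dst-0" and session["connected"]
```

From the client's perspective this is the "seamless and transparent" path: the user never sees the remap, only a brief reconnect.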
[0026] The present invention enables a near-instantaneous RTO based on an administrator triggering the failover. The asynchronous transaction method employed for the replication of the mailboxes, however, results in a predictable, but not pre-determined, RPO. This is subject to a number of variables including the workload on a source server 8 leading up to the point of failover; the available bandwidth of the communications channel over WAN 16; and the performance characteristics of the servers. In order to maintain a reasonable RPO during the continuing operation of the system, embodiments of the present invention may include the ability to notify the administrator should the system start to fall behind the flow of transactions on source servers 8.
[0027] The recovery or fail-back operation, i.e. returning to functionality at the primary location 4, is similar in concept to the failover. Once again, incremental message transactions are moved from the source to the destination until both have converged to a substantially coherent state. In essence, the recovery is a replication and fail-over in reverse. Because restores of mailboxes are performed incrementally based on changes having occurred to the source or destination mailbox during the outage, no full restore is ever required.
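The administrator notification described in paragraph [0026] can be sketched as a simple lag check against the transaction stream. The sequence-number model and threshold value are assumptions for illustration; the patent does not specify how lag is measured.

```python
def check_replication_lag(source_seq: int, replicated_seq: int,
                          threshold: int, notify) -> int:
    """Warn the administrator when the destination trails the source
    transaction stream by more than `threshold` transactions."""
    lag = source_seq - replicated_seq
    if lag > threshold:
        notify(f"replication lag {lag} exceeds threshold {threshold}")
    return lag

alerts = []
lag = check_replication_lag(1500, 1200, threshold=250, notify=alerts.append)
assert lag == 300 and len(alerts) == 1
```

Because the lag (and hence the achievable RPO) depends on source workload, WAN bandwidth, and server performance, such a monitor makes the "predictable, but not pre-determined" RPO observable in operation.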
[0028] Although the present invention has been described with respect to the above exemplary embodiments, various additions, deletions and modifications are contemplated as being within its scope.

Claims

What is claimed is:
1. A computer network system, comprising:
a source server at a primary location configured to provide application services and data to a client;
a destination server at a remote location configured to store backup copies of the application services and data;
a first shadow server at the primary location configured to monitor the transaction stream for personal information management databases configured on the source server and to send personal information management data to the remote location;
a second shadow server at the remote location for receiving the personal information management data and storing the personal information management data in a corresponding database on the destination server; and
means for providing the personal information management data on the destination server to a client when the source server is unavailable.
2. The system of claim 1 wherein the personal information management database is an e-mail mailbox and the personal information management data is e-mail data.
3. The system of claim 1 wherein the first shadow server and the second shadow server are linked by a wide area network (WAN) communications channel.
4. The system of claim 2 whereby the source server and the destination server extract and distribute e-mail data using the Messaging Application Programming Interface (MAPI) protocol.
5. The system of claim 1 wherein the shadow servers utilize industry standard management components to monitor and transact application services and data to the destination server.
6. The system of claim 1 whereby each new transaction is delivered to the destination server in an asynchronous in-order fashion.
7. The system of claim 1 whereby the source server employs a Single Instance Store model for mapping multiple personal information management databases to the same personal information management data.
8. The system of claim 2 whereby the source server employs a Single Instance Store model for mapping multiple e-mail mailboxes to the same attachment.
9. The system of claim 1 which further employs advanced buffering compression methods to allow the detection of commonality of personal information management data and properties and to reduce the amount of data transmitted.
10. The system of claim 1 whereby the transfer of data between the source server and the destination server is a manually driven operation.
11. The system of claim 1 which further includes an Active Directory component.
12. The system of claim 11 wherein personal information management data is mapped to the destination server by automatically modifying the Active Directory configuration.
13. The system of claim 11 whereby the destination server is modified to map the client to the source server.
14. The system of claim 11 which includes an additional add-in component that allows the detection of connection failure, validates the configuration from Active Directory of the mapping of the personal information management database to the source server, detects changes, and remaps the connection to the destination server.
15. A method of ensuring continued access to application services and data between a primary location and a remote location, comprising the steps of:
monitoring the transaction stream for personal information management databases configured on a source server;
sending personal information data related to each transaction detected to a remote location;
storing the personal information data in a corresponding personal information database on a destination server at the remote location; and
providing the personal information management data stored on the destination server to a client when the source server is unavailable.
16. The method of claim 15 wherein the personal information management databases comprise e-mail mailboxes; wherein the source server is an e-mail server; and wherein the personal information management data is e-mail data.
17. The method of claim 15 which further includes a first shadow server for monitoring the transaction stream of the source server and a second shadow server for receiving transactions from the first shadow server and replicating data to the destination server.
18. The method of claim 17 wherein the first shadow server and the second shadow server are linked by a wide area network (WAN) communications channel.
19. The method of claim 16 whereby the source server and the destination server extract and distribute e-mail data using the Messaging Application Programming Interface (MAPI) protocol.
20. The method of claim 17 wherein the shadow servers utilize industry standard management components to monitor and transact application services and data to the destination server.
21. The method of claim 15 whereby each new transaction is delivered to the destination server in an asynchronous in-order fashion.
22. The method of claim 15 whereby the source server employs a Single Instance Store model for mapping multiple personal information management databases to the same personal information management data.
23. The method of claim 16 whereby the source server employs a Single Instance Store model for mapping multiple e-mail mailboxes to the same attachment.
24. The method of claim 15 which further employs advanced buffering compression methods to allow the detection of commonality of personal information management data and properties and to reduce the amount of data transmitted.
25. The method of claim 15 whereby the transfer of data between the source server and the destination server is a manually driven operation.
26. The method of claim 15 which further includes an Active Directory component.
27. The method of claim 26 wherein personal information management data is mapped to the destination server by automatically modifying the Active Directory configuration.
28. The method of claim 26 whereby the destination server is modified to map the client to the source server.
29. The method of claim 26 which includes an additional add-in component that allows the detection of connection failure, validates the configuration from Active Directory of the mapping of the personal information management database to the source server, detects changes, and remaps the connection to the destination server.
30. A computer-readable storage medium storing program code for causing a computer to perform the steps of:
monitoring the transaction stream for personal information management databases configured on a source server;
sending personal information data related to each transaction detected to a remote location;
storing the personal information data in a corresponding personal information database on a destination server at the remote location; and
providing the personal information management data stored on the destination server to a client when the source server is unavailable.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2006/042662 WO2008054388A1 (en) 2006-11-01 2006-11-01 System and method for network disaster recovery


Publications (1)

Publication Number Publication Date
WO2008054388A1 true WO2008054388A1 (en) 2008-05-08

Family

ID=39344572

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/042662 WO2008054388A1 (en) 2006-11-01 2006-11-01 System and method for network disaster recovery

Country Status (1)

Country Link
WO (1) WO2008054388A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103546582A (en) * 2013-11-12 2014-01-29 北京京东尚科信息技术有限公司 Method, device and system for backup of application services of server

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6820098B1 (en) * 2002-03-15 2004-11-16 Hewlett-Packard Development Company, L.P. System and method for efficient and trackable asynchronous file replication
US7155633B2 (en) * 2003-12-08 2006-12-26 Solid Data Systems, Inc. Exchange server method and system

Similar Documents

Publication Publication Date Title
US9448898B2 (en) Network traffic routing
US9110837B2 (en) System and method for creating and maintaining secondary server sites
US8276016B2 (en) Enterprise service availability through identity preservation
US7363365B2 (en) Autonomous service backup and migration
US8161318B2 (en) Enterprise service availability through identity preservation
US8275907B2 (en) Adding individual database failover/switchover to an existing storage component with limited impact
US7496579B2 (en) Transitioning of database service responsibility responsive to server failure in a partially clustered computing environment
CN101501668B (en) Enterprise service availability through identity preservation
US20070168500A1 (en) Enterprise service availability through identity preservation
US20070150526A1 (en) Enterprise server version migration through identity preservation
US20060015764A1 (en) Transparent service provider
US20140108532A1 (en) System and method for supporting guaranteed multi-point delivery in a distributed data grid
JP5537181B2 (en) Message system
US20070174660A1 (en) System and method for enabling site failover in an application server environment
US20060015584A1 (en) Autonomous service appliance
JPWO2008105098A1 (en) Memory mirroring control program, memory mirroring control method, and memory mirroring control device
US8751583B2 (en) System and method for providing business continuity through secure e-mail
WO2008054388A1 (en) System and method for network disaster recovery
EP1766900A2 (en) Transparent service provider
JP2014170573A (en) Message system and data storage server
CN114116277A (en) InfluxDB high-availability cluster implementation system and method

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06827288

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06827288

Country of ref document: EP

Kind code of ref document: A1