US20130218841A1 - Systems and methods for providing business continuity services - Google Patents

Systems and methods for providing business continuity services

Info

Publication number
US20130218841A1
Authority
US
United States
Prior art keywords
data
data center
server
replicated
executing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/840,646
Inventor
George B. Hall
Jerry M. Overton
Geoffrey L. Sinn
Paul S. Penny
Steven R. Bulmer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Strategic Technologies Inc USA
Original Assignee
Strategic Technologies Inc USA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Strategic Technologies Inc USA filed Critical Strategic Technologies Inc USA
Priority to US13/840,646 priority Critical patent/US20130218841A1/en
Publication of US20130218841A1 publication Critical patent/US20130218841A1/en
Abandoned legal-status Critical Current

Classifications

    • G06F17/30088
    • G06F16/128 Details of file system snapshots on the file-level, e.g. snapshot creation, administration, deletion
    • G06F11/1464 Management of the backup or restore process for networked environments
    • G06F11/1469 Backup restoration techniques
    • G06F11/2048 Active fault-masking where processing functionality is redundant and the redundant components share neither address space nor persistent storage
    • G06F11/2094 Redundant storage or storage space
    • G06F11/2097 Active fault-masking maintaining the standby controller/processing unit updated
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G06F2201/84 Using snapshots, i.e. a logical point-in-time copy of the data

Abstract

Systems and methods for providing business continuity services after a disaster or other loss of data are provided. The system and methods include accessing, replicating, and storing customer data. In the event of a disaster or other loss of data, the stored data is used to create fully recovered systems. The systems and methods provide for a remote data center that offers protection against physical disasters. The systems and methods include providing a virtual recovered operating system environment identical to the source operating system environment in less than 8 hours after notification of a disaster.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of U.S. application Ser. No. 12/883,899, filed Sep. 16, 2010, which claims priority to U.S. Provisional Application No. 61/243,061, filed Sep. 16, 2009, both of which are herein incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • The invention relates to methods and systems for providing business continuity services. More particularly, the invention relates to methods and systems for accessing, replicating, and storing customer data, and using the stored data to create fully recovered systems, applications and networks in the event of a disaster.
  • BACKGROUND
  • As computers' ability to process and store data improves and their prices drop, many companies today use computers in their businesses. At one end of the spectrum, even manual labor companies often use computers for order taking and invoicing. At the other end of this spectrum are internet businesses that exist solely on servers. In either scenario, access to data is key to the business's continuation.
  • Many backup solutions exist for computer users. These solutions often back up the user's data to an external hard drive or tape drive located at the user's place of business; in the event of a fire or natural disaster, the user still loses that data. Other solutions back up the user's data to an online server. Online backup can mitigate the problem of complete data loss found in the first scenario, but it still leaves delays before a business can function again: if a fire or natural disaster destroys the computer equipment, the software that uses the data is lost along with it. The user must therefore repurchase or otherwise obtain its software and computer equipment before it can make use of the data recovered from the online backup.
  • Another problem with current backup and recovery solutions is that they require a significant amount of human input. Often someone must start the backup, or at least schedule it and confirm that it occurred. This is particularly troublesome because existing backup and recovery solutions are overly complex, costly, and require specialized hardware, software and skill sets.
  • Thus, a business continuity solution is needed that provides backup and recovery of computing services and eliminates the above-referenced challenges.
  • SUMMARY
  • Embodiments of the invention include a system for providing business continuity services after a disaster or other loss of data in which the system includes:
  • (a) at least one source server for executing a cloning process in which data volumes and server configuration files are cloned and for executing a replication process in which data volumes are replicated;
  • (b) at least one target server for mounting a read-only snapshot of the replicated data volumes;
  • (c) at least one conversion server for executing a conversion process to convert the replicated data volumes from a proprietary disk file type to a disk file type that can be recovered; and
  • (d) a storage system for storing the converted disk files.
  • In some embodiments, a replication software controller controls the flow of replication data to the target server.
  • In certain embodiments, any of the target server, storage system and alternate computing resources are located at a remote data center. In other embodiments, the remote data center is a Tier IV, SAS70 certified data center.
  • In other embodiments, the replicated data undergoes CDP replication securely across a WAN cloud at the data block level to the target server.
  • In certain embodiments, cloning software is installed on the source server and the target server.
  • In other embodiments, the storage system includes a storage array provisioned so that the replicated data is stored in customer assigned volumes.
  • In some embodiments, the at least one conversion server executes a conversion process for converting the replicated data volumes from a “thick” disk file to a “thin” disk file.
  • In other embodiments, the “thick” disk file is deleted once the “thin” disk file has been stored on the storage system.
  • In certain embodiments, the system also includes:
  • (e) alternate computing resources for executing a virtualization process of the read-only snapshot, the converted disk files and cloned server configuration files to create at least one recovered server.
  • In certain embodiments, the system also includes at least one operator at a remote data center for receiving status notifications and monitoring the status of the cloning, replication, conversion, storage and virtualization processes.
  • Embodiments of the invention include a system for providing business continuity services after a disaster or other loss of data in which the system includes:
  • (a) at least one source server for executing a cloning process in which data volumes and server configuration files are cloned and for executing a replication process in which data volumes are replicated;
  • (b) a remote data center; and
  • (c) at least one operator at the remote data center for receiving status notification and monitoring the status of the cloning, replication, conversion, storage and virtualization processes.
  • In certain embodiments of the invention, the remote data center is a Tier IV, SAS70 certified data center and the remote data center includes:
  • (a) at least one target server for mounting a read-only snapshot of the replicated data volumes;
  • (b) at least one conversion server for executing a conversion process to convert the replicated data volumes from a proprietary disk file type to a disk file type that can be recovered;
  • (c) a storage system for storing the converted disk files; and
  • (d) alternate computing resources for executing a virtualization process of the read-only snapshot, the converted disk files and cloned server configuration files to create at least one recovered server.
  • In some embodiments, the invention includes a method for providing business continuity services after a disaster or other loss of data including:
  • (a) accessing data in an operating system environment;
  • (b) executing a cloning process in which the data is cloned;
  • (c) replicating the data to at least one target server;
  • (d) mounting a read-only snapshot of the replicated data to the target server;
  • (e) converting the replicated data to a disk file type that can be recovered;
  • (f) storing the read-only snapshot and converted data on a storage system.
  • In other embodiments of the invention, the method further includes:
  • (g) receiving notification of disaster or other loss of information at the operating system environment;
  • (h) executing scripts to create a snapshot copy of data volumes on the storage system;
  • (i) importing and virtualizing cloned server configuration files;
  • (j) attaching the recoverable disk files and the snapshot copy of data volumes to create recovered servers;
  • (k) configuring network settings on the recovered servers to mirror a pre-disaster environment; and
  • (l) providing secure access to the recovered servers via a WAN cloud.
  • In certain embodiments, the recovered servers are located at a remote data center. In other embodiments, the remote data center is a Tier IV, SAS70 certified data center.
  • In certain embodiments, the recovered servers are provided within eight (8) hours of receiving notification of disaster or other loss of information at the operating system environment.
  • Further details and embodiments of the invention are set forth below. These and other features, aspects and advantages of the invention are better understood when the following Detailed Description of the Invention is read with reference to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a system according to embodiments of the invention.
  • FIG. 2 is a block diagram of a portion of a system according to embodiments of the invention in which customer data is accessed, cloned and replicated.
  • FIG. 3 is a block diagram of a portion of a system according to embodiments of the invention, in which the first step of a conversion process readies data for recovery.
  • FIG. 4 is a block diagram of a portion of a system according to embodiments of the invention, in which the second step of a conversion process readies data for recovery.
  • FIG. 5 is a block diagram of a portion of a system according to embodiments of the invention, in which an operating system environment is recreated using alternate computing resources.
  • FIG. 6 is a flow chart of a method of accessing, cloning and replicating customer data according to embodiments of the invention.
  • FIG. 7 is a flow chart of a method of converting and storing customer data according to embodiments of the invention.
  • FIG. 8 is a flow chart of a method of using the stored data to create virtual operating systems according to embodiments of the invention.
  • DETAILED DESCRIPTION
  • This invention will now be described more fully with reference to the drawings, showing preferred embodiments of the invention. However, this invention can be embodied in many different forms and should not be construed as limited to the embodiments set forth.
  • As shown in FIG. 1, in embodiments of the invention the system 100 includes at least one source server 200, at least one target server 300, a storage system 400, and alternate computing resources 600 including at least one conversion server 500.
  • FIG. 2 is a block diagram depicting a system in which customer data is cloned and replicated to a remote site according to embodiments of the invention. In general, in this phase of the system, the operating system environment 202 on a source server 200 at a customer site is cloned and the cloned images 206 and application data 208 are sent to a Tier IV data center target site.
  • The cloning of the operating system environment 202 may be effected using commercially available server cloning software, such as Acronis. In preferred embodiments of the invention, both the source server 200 and the target server 300 have a copy of the server cloning software. As shown in FIG. 2, the cloning software agent 218 executes pre- and post-scripts which send automated status notifications 214 to operators at a remote data center 302. The operators at the remote data center 302 monitor the status of the system. The pre- and post-scripts also create control information trigger files.
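  • The sketch below illustrates, in Python, the kind of post-script such a cloning agent might run: it drops a control-information trigger file for the downstream conversion process and sends a status notification to operators. The file paths, mailbox address and SMTP host are assumptions for illustration only and are not part of the patent disclosure.

```python
# Hypothetical post-clone script sketch; paths, host names and SMTP settings
# below are illustrative assumptions, not part of the patent.
import json
import smtplib
import socket
from datetime import datetime, timezone
from email.message import EmailMessage
from pathlib import Path

TRIGGER_DIR = Path("/replication/control")           # assumed control-file drop point
OPERATOR_ADDR = "operators@example-datacenter.net"   # assumed operator mailbox

def write_trigger_file(image_path: str, status: str) -> Path:
    """Create a control-information trigger file that downstream conversion
    jobs can watch for (analogous to the trigger files created by the
    pre- and post-scripts in the description)."""
    TRIGGER_DIR.mkdir(parents=True, exist_ok=True)
    payload = {
        "source_host": socket.gethostname(),
        "image_path": image_path,
        "status": status,
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }
    trigger = TRIGGER_DIR / f"{socket.gethostname()}.clone.trigger.json"
    trigger.write_text(json.dumps(payload, indent=2))
    return trigger

def notify_operators(status: str, smtp_host: str = "mail.example.net") -> None:
    """Send an automated status notification to operators at the remote data center."""
    msg = EmailMessage()
    msg["Subject"] = f"Clone status: {status} on {socket.gethostname()}"
    msg["From"] = f"clone-agent@{socket.gethostname()}"
    msg["To"] = OPERATOR_ADDR
    msg.set_content(f"Cloning job finished with status '{status}'.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    trigger = write_trigger_file("/backups/os_image.tib", "success")
    notify_operators("success")
    print(f"Wrote trigger file {trigger}")
```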
  • The replication of the operating system environment 202 may be effected using commercially available replication software, such as InMage Scout. In preferred embodiments of the invention, both the source server 200 and the target server 300 have a copy of the replication software. As shown in FIG. 2, the replication software client 216 and the replication software controller 212 control the flow of replication data 210 to the target server 300 at a remote data center 302. The replicated data 210 may include the cloning software image files 206, protected data volumes 208, control information and downstream conversion process trigger files. This replicated data 210 undergoes CDP replication securely across a WAN cloud 304 at the data block level to a target server 300 at a remote data center 302.
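  • Conceptually, block-level replication ships only changed data blocks rather than whole files. The minimal sketch below shows one way to detect changed blocks by hashing fixed-size blocks of a protected volume image; the block size and the rescanning approach are illustrative assumptions, since a CDP agent such as the one described would intercept writes continuously rather than rescan.

```python
# Minimal illustration of block-level change detection of the kind a CDP
# replication agent relies on; block size and file paths are assumptions.
import hashlib
from pathlib import Path

BLOCK_SIZE = 64 * 1024  # 64 KiB blocks, an arbitrary illustrative choice

def block_digests(path: Path) -> list[str]:
    """Hash a file in fixed-size blocks."""
    digests = []
    with path.open("rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

def changed_blocks(path: Path, previous: list[str]) -> list[int]:
    """Return indices of blocks that differ from the previous pass.
    A real CDP agent would intercept writes continuously instead."""
    current = block_digests(path)
    return [i for i, d in enumerate(current)
            if i >= len(previous) or d != previous[i]]

# Only the changed block indices (and their data) would then be shipped
# across the WAN to the target server at the remote data center.
```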
  • In preferred embodiments of the invention, the replication software includes a source server agent, a replication control server and a target server agent. The source server agent is installed on each source server 200 at the customer location. The replication control server is a Linux server meeting performance specifications and is installed at the customer location closest to the source servers. Preferably, at the remote data center 302, the target server 300 stores the replicated data 210 in customer assigned volumes 402 within the storage system 400.
  • In preferred embodiments of the invention, the storage system 400 includes a commercial storage array provisioned so that each customer is assigned its own volume 402. The volume is NFS-accessible to the replication target and conversion hosts, which are virtualized servers running on virtualization software. The storage system 400 supports features such as flexible volumes (volumes that grow and shrink), read-write snapshots and deduplication. Deduplication is run on the customer volume to reduce storage consumption on the storage system 400.
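  • To make the storage-consumption benefit concrete, the short sketch below estimates how much block-level deduplication could save on a customer volume by counting unique versus total fixed-size blocks. The mount point and block size are assumptions; production arrays perform deduplication internally rather than through a script like this.

```python
# Rough sketch of why block-level deduplication reduces storage consumption:
# identical blocks across files in a customer volume need to be stored only once.
import hashlib
from pathlib import Path

BLOCK_SIZE = 4096  # typical filesystem block size

def dedup_estimate(volume_root: Path) -> tuple[int, int]:
    """Return (total_blocks, unique_blocks) for all files under a directory."""
    seen: set[bytes] = set()
    total = 0
    for file in volume_root.rglob("*"):
        if not file.is_file():
            continue
        with file.open("rb") as f:
            while block := f.read(BLOCK_SIZE):
                total += 1
                seen.add(hashlib.sha256(block).digest())
    return total, len(seen)

if __name__ == "__main__":
    total, unique = dedup_estimate(Path("/mnt/customer_volume"))  # assumed mount point
    if total:
        print(f"Dedup could reduce {total} blocks to {unique} "
              f"({100 * (1 - unique / total):.1f}% savings)")
```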
  • FIG. 3 is a block diagram depicting the conversion process of a system according to embodiments of the invention. In this first step, the replicated data 210 is converted to an appropriate file type and readied for recovery. In a preferred embodiment of the invention, the conversion process is automated.
  • In the first step of the conversion process, control information from the imaging process triggers the mounting of a read-only snapshot of the replicated data 210. The read-only snapshot is mounted on the target server 300 in order to gain access to the replicated data 210. At this stage, the replicated data 210 is in a format proprietary to the cloning software used. For example, in a preferred embodiment of the invention, Acronis software is used to image the target operating system 202 and therefore the replicated data 210 is in a proprietary Acronis file format.
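  • As a rough illustration of this mounting step, the sketch below mounts an NFS export read-only so the replicated image files become accessible on the target server. The filer export path and mount point are assumptions (NetApp-style filers expose snapshots under a .snapshot directory); the actual snapshot mechanism is whatever the deployed storage system provides.

```python
# Hedged sketch of mounting a read-only snapshot over NFS on the target server.
import subprocess

def mount_readonly_snapshot(filer_export: str, mount_point: str) -> None:
    """Mount an NFS export read-only; raises CalledProcessError on failure."""
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "ro", filer_export, mount_point],
        check=True,
    )

# Example (illustrative paths only):
# mount_readonly_snapshot("filer01:/vol/customer42/.snapshot/replica.0",
#                         "/mnt/customer42_snapshot")
```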
  • The cloning software may be used to convert the proprietary image file into an appropriate file format for the next phase of the system of the invention. In a preferred embodiment, cloning software tools are utilized to convert the proprietary file into a proprietary “thick” disk file 410. This converted “thick” disk file 410 is then stored in a shared storage system 400 at the remote data center 302.
  • In a second step of the conversion process, the replicated data 210 undergoes further conversion to ready the data for recovery. In preferred embodiments of the invention, key steps take place on a conversion server 500. First, a scheduled and automated conversion script executes daily to query the storage system for “thick” disk files 410 that need to be converted to “thin” disk files 412. This process is performed to reduce storage consumption on the storage system 400. Once “thick” disk files have been identified, a conversion script executes virtualization software utilities 502 on one or more conversion servers 500 and instructs the conversion servers to convert each oversized “thick” disk file 410 needing conversion into a smaller “thin” disk file 412. In a preferred embodiment, the virtualization software utilities 502 reduce the size of the “thick” disk files 410 by removing zero-filled blocks (“white space”) from the file and compressing the file. This process may be performed to reduce space consumption on the storage system 400. The “thin” disk files 412 are stored until recovery of the files is needed due to a disaster or other data loss.
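  • A minimal sketch of such a daily sweep appears below, assuming an ESX-style environment where vmkfstools can clone a thick virtual disk to a thin-provisioned copy. The datastore path and the “-thick.vmdk” naming convention are assumptions for illustration; the exact utility and flags should be verified against the virtualization software actually deployed.

```python
# Sketch of the daily conversion sweep: find "thick" virtual disks on the
# storage system and convert each to a thin-provisioned copy.
import subprocess
from pathlib import Path

STORAGE_ROOT = Path("/vmfs/volumes/customer_volumes")  # assumed datastore path

def convert_thick_to_thin(thick: Path) -> Path:
    """Clone a thick .vmdk to a thin-provisioned copy next to it."""
    thin = thick.with_name(thick.name.replace("-thick.vmdk", "-thin.vmdk"))
    subprocess.run(
        ["vmkfstools", "-i", str(thick), "-d", "thin", str(thin)],
        check=True,
    )
    return thin

def daily_sweep() -> None:
    # A "-thick.vmdk" naming convention is assumed so the script can tell
    # which files still need conversion.
    for thick in STORAGE_ROOT.rglob("*-thick.vmdk"):
        thin = convert_thick_to_thin(thick)
        thick.unlink()  # optional final step: delete the thick file to free space
        print(f"Converted {thick.name} -> {thin.name}")

if __name__ == "__main__":
    daily_sweep()
```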
  • In a preferred embodiment, the conversion software used is VMware ESX. In other embodiments of the invention, as a final step in the conversion process, the “thick” disk files 410 are deleted from the target server 300 at the remote data center 302. Throughout this process, automated status notifications are triggered and sent to operators monitoring the systems according to embodiments of the invention.
  • In the event of a disaster recovery test exercise or a real disaster, as shown in FIG. 5, operators at the data center 302 execute scripts which create a snapshot copy of the protected data volumes 416 of the customer volume 402 on the storage system 400. The operators begin to import cloned server configuration files into the virtualization software 502, configure the network and attach “thin” disk files 412 and the snapshot copy of data volumes 416 stored on the storage system 400. In a preferred embodiment of the invention, the virtualization software 502 is installed on the alternate computing resources 600.
  • Operators power on and boot up the recovered servers 602, configure custom network settings on each server 602 and recover Windows Active Directory authentication servers. Networks are configured to mirror the customer's production environment, including firewall rules and DNS nameserver reconfigurations. Operators configure remote administrative access using “Remote Desktop,” SSL VPN or web access to the servers 602. Once everything is verified as operational, the operators turn over the servers 602 to the customer 700 and the servers 602 are considered “production recovered servers.” Customer 700 may then access the recovered servers 602 via the internet 606 through a firewall 604. During the recovery process, progress is communicated to the customer on an hourly basis by phone and tracked in a service ticket.
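  • The recovery workflow described above can be summarized as an ordered runbook. The sketch below lays it out as a simple checklist in Python; each step is only a logging placeholder, since the actual virtualization, directory and network tooling is site-specific and not specified here.

```python
# Outline of the recovery runbook as an ordered checklist; each action is a
# placeholder for the operator steps described above.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RunbookStep:
    name: str
    action: Callable[[], None]

def log(step_name: str) -> Callable[[], None]:
    return lambda: print(f"[runbook] {step_name}")

RECOVERY_RUNBOOK = [
    RunbookStep("Snapshot protected data volumes on the storage system", log("snapshot volumes")),
    RunbookStep("Import cloned server configuration files into the virtualization layer", log("import configs")),
    RunbookStep("Attach thin disk files and the data-volume snapshot", log("attach disks")),
    RunbookStep("Power on recovered servers and apply custom network settings", log("power on / network")),
    RunbookStep("Recover directory authentication servers, firewall rules and DNS", log("identity / network services")),
    RunbookStep("Enable remote access (Remote Desktop, SSL VPN or web)", log("remote access")),
    RunbookStep("Verify operation and hand servers over to the customer", log("handover")),
]

def execute_runbook() -> None:
    for step in RECOVERY_RUNBOOK:
        step.action()
        # In practice each step's completion would be tracked in the service
        # ticket and communicated to the customer hourly.

if __name__ == "__main__":
    execute_runbook()
```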
  • In preferred embodiments of the invention, the target server 300, the storage system 400 and the alternate computing resources 600 are located in a Tier IV, SAS70 certified data center(s).
  • Embodiments of the invention include a method, as shown in FIGS. 6-8, for cloning, replicating, converting, storing and recovering customer data. FIG. 6 is a flow chart illustrating a process 700 for cloning data. At step 702, data is accessed in an operating system environment. Preferably, the data is accessed using commercially available server cloning software. At step 704, the cloning process is executed and the data is cloned. Pre- and post-scripts are also executed by the cloning agent, and thus steps 708 and 710 are automated. At step 708, operators monitoring the process are notified of the execution status of the cloning process. At step 710, the pre- and post-scripts create control information trigger files. Commercially available replication software may be used to effect step 706, in which the data is replicated to a target server.
  • FIG. 7 is a flow chart illustrating a process 800 for converting the replicated data. At step 802, a read-only snapshot of the data is mounted to a target server. At step 804, the cloning software is used to convert the replicated data to a proprietary “thick” disk file. Pre- and post-scripts are also executed by the cloning agent, and thus steps 814 and 816 are automated. At step 814, operators monitoring the process are notified of the execution status of the conversion process. At step 816, the pre- and post-scripts create control information trigger files. At step 806, the “thick” disk files are stored on a storage system.
  • In the next part of the conversion process, the replicated data undergoes further conversion to ready the data for recovery. Using a scheduled and automated conversion script, at step 808, the storage system is queried at least daily looking for “thick” disk files. At step 810, the “thick” disk files are converted to “thin” disk files. As a final and optional step 812 in the conversion process, the “thick” disk files on the storage system may be deleted to reduce space consumption.
  • FIG. 8 is a flow chart illustrating a process 900 for recovering the data in the event of a disaster or other loss of information. At step 902, operators at a data center receive notification of a disaster or other loss of information. At step 904, the operators execute scripts to create a snapshot copy of data volumes on the storage system. At step 906, the operators import cloned server configuration files using virtualization software on alternate computing resources. The operators configure the network and attach the “thin” disk files and the snapshot copy of data volumes to create recovered servers at step 908. At step 910, the operators configure network settings on the recovered servers to mirror the customer's production environment. Finally, at step 912, the operators provide access to the recovered servers to the client via the internet.
  • Embodiments of the invention include providing a virtual recovered operating system environment identical to the customer's “destroyed” environment in less than 8 hours after notification of a disaster.
  • The foregoing description is provided for describing various embodiments and structures relating to the invention. Various modifications, additions and deletions may be made to these embodiments and/or structures without departing from the scope and spirit of the invention.

Claims (22)

We claim:
1. A system for providing business continuity services after a disaster or other loss of data comprising:
(a) at least one source server for executing a cloning process in which data volumes and server configuration files are cloned and for executing a replication process in which data volumes are replicated;
(b) at least one target server for mounting a read-only snapshot of the replicated data volumes;
(c) at least one conversion server for executing a conversion process to convert the replicated data volumes from a proprietary disk file type to a disk file type that can be recovered; and
(d) a storage system for storing the converted disk files.
2. The system of claim 1 further comprising a replication software controller to control the flow of replication data to the target server.
3. The system of claim 1 wherein the target server is located at a remote data center.
4. The system of claim 3 wherein the remote data center is a Tier IV, SAS70 certified data center.
5. The system of claim 1 wherein the storage system is located at a remote data center.
6. The system of claim 5 wherein the remote data center is a Tier IV, SAS70 certified data center.
7. The system of claim 1 wherein the alternate computing resources are located at a remote data center.
8. The system of claim 7 wherein the remote data center is a Tier IV, SAS70 certified data center.
9. The system of claim 1 wherein the replicated data undergoes CDP replication securely across a WAN cloud at the data block level to the target server.
10. The system of claim 1 further comprising at least one operator at a remote data center for receiving status notification and monitoring the status of the cloning, replication, conversion, storage and virtualization processes.
11. The system of claim 1 wherein cloning software is installed on the source server and the target server.
12. The system of claim 1 wherein the storage system comprises a storage array provisioned so that the replicated data is stored in customer assigned volumes.
13. The system of claim 1 wherein the at least one conversion server executes a conversion process for converting the replicated data volumes from a “thick” disk file to a “thin” disk file.
14. The system of claim 13 wherein the “thick” disk file is deleted once the “thin” disk file has been stored on the storage system.
15. The system of claim 1 further comprising:
(e) alternate computing resources for executing a virtualization process of the read-only snapshot, the converted disk files and cloned server configuration files to create at least one recovered server.
16. A system for providing business continuity services after a disaster or other loss of data comprising:
(a) at least one source server for executing a cloning process in which data volumes and server configuration files are cloned and for executing a replication process in which data volumes are replicated;
(b) a remote data center; and
(c) at least one operator at the remote data center for receiving status notification and monitoring the status of the cloning, replication, conversion, storage and virtualization processes;
wherein the remote data center is a Tier IV, SAS70 certified data center and wherein the remote data center comprises:
(a) at least one target server for mounting a read-only snapshot of the replicated data volumes;
(b) at least one conversion server for executing a conversion process to convert the replicated data volumes from a proprietary disk file type to a disk file type that can be recovered;
(c) a storage system for storing the converted disk files; and
(d) alternate computing resources for executing a virtualization process of the read-only snapshot, the converted disk files and cloned server configuration files to create at least one recovered server.
17. A method for providing business continuity services after a disaster or other loss of data comprising:
(g) accessing data in an operating system environment;
(h) executing a cloning process in which the data is cloned;
(i) replicating the data to at least one target server;
(j) mounting a read-only snapshot of the replicated data to the target server;
(k) converting the replicated data to a disk file type that can be recovered;
(l) storing the read-only snapshot and converted data on a storage system.
18. The method of claim 17 wherein the cloning process is executed by cloning software on a source server.
19. The method of claim 17 wherein the target server is located at a remote data center.
20. The method of claim 19 wherein the remote data center is a Tier IV, SAS70 certified data center.
21. The method of claim 17 wherein the storage system is located at a remote data center.
22. The method of claim 21 wherein the remote data center is a Tier IV, SAS70 certified data center.
US13/840,646 2009-09-16 2013-03-15 Systems and methods for providing business continuity services Abandoned US20130218841A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/840,646 US20130218841A1 (en) 2009-09-16 2013-03-15 Systems and methods for providing business continuity services

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US24306109P 2009-09-16 2009-09-16
US12/883,899 US8412678B2 (en) 2009-09-16 2010-09-16 Systems and methods for providing business continuity services
US13/840,646 US20130218841A1 (en) 2009-09-16 2013-03-15 Systems and methods for providing business continuity services

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/883,899 Continuation US8412678B2 (en) 2009-09-16 2010-09-16 Systems and methods for providing business continuity services

Publications (1)

Publication Number Publication Date
US20130218841A1 true US20130218841A1 (en) 2013-08-22

Family

ID=44342502

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/883,899 Active US8412678B2 (en) 2009-09-16 2010-09-16 Systems and methods for providing business continuity services
US13/840,646 Abandoned US20130218841A1 (en) 2009-09-16 2013-03-15 Systems and methods for providing business continuity services

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/883,899 Active US8412678B2 (en) 2009-09-16 2010-09-16 Systems and methods for providing business continuity services

Country Status (1)

Country Link
US (2) US8412678B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427163A (en) * 2015-12-09 2016-03-23 华夏银行股份有限公司 Data control system
US9716746B2 (en) 2013-07-29 2017-07-25 Sanovi Technologies Pvt. Ltd. System and method using software defined continuity (SDC) and application defined continuity (ADC) for achieving business continuity and application continuity on massively scalable entities like entire datacenters, entire clouds etc. in a computing system environment
CN108255641A (en) * 2017-12-25 2018-07-06 南京壹进制信息技术股份有限公司 A kind of CDP disaster recovery methods based on cloud platform

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8412678B2 (en) * 2009-09-16 2013-04-02 Strategic Technologies, Inc. Systems and methods for providing business continuity services
US8930363B2 (en) * 2011-12-23 2015-01-06 Sap Se Efficient handling of address data in business transaction documents
US9286578B2 (en) * 2011-12-23 2016-03-15 Sap Se Determination of a most suitable address for a master data object instance
US8805989B2 (en) 2012-06-25 2014-08-12 Sungard Availability Services, Lp Business continuity on cloud enterprise data centers
US10860237B2 (en) * 2014-06-24 2020-12-08 Oracle International Corporation Storage integrated snapshot cloning for database
US10387447B2 (en) 2014-09-25 2019-08-20 Oracle International Corporation Database snapshots
US10346362B2 (en) 2014-09-26 2019-07-09 Oracle International Corporation Sparse file access
US11068437B2 2015-10-23 2021-07-20 Oracle International Corporation Periodic snapshots of a pluggable database in a container database
US9477555B1 (en) * 2015-11-16 2016-10-25 International Business Machines Corporation Optimized disaster-recovery-as-a-service system
CN105354113B (en) * 2015-11-27 2019-01-25 上海爱数信息技术股份有限公司 A kind of system and method for server, management server
US10592469B1 (en) * 2016-06-29 2020-03-17 EMC IP Holding Company, LLC Converting files between thinly and thickly provisioned states
US10203897B1 (en) * 2016-12-02 2019-02-12 Nutanix, Inc. Dynamic data compression
US11068460B2 (en) 2018-08-06 2021-07-20 Oracle International Corporation Automated real-time index management
CN112740195A (en) 2018-08-06 2021-04-30 甲骨文国际公司 Techniques for maintaining statistics in a database system
CN112631831A (en) * 2020-12-22 2021-04-09 苏州柏科数据信息科技研究院有限公司 Bare computer recovery method and system of service system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030011087A1 (en) * 2001-07-16 2003-01-16 Imation Corp. Two-sided replication of data storage media
US20060179061A1 (en) * 2005-02-07 2006-08-10 D Souza Roy P Multi-dimensional surrogates for data management
US20060190497A1 (en) * 2005-02-18 2006-08-24 International Business Machines Corporation Support for schema evolution in a multi-node peer-to-peer replication environment
US20060190503A1 (en) * 2005-02-18 2006-08-24 International Business Machines Corporation Online repair of a replicated table
US7209571B2 (en) * 2000-01-13 2007-04-24 Digimarc Corporation Authenticating metadata and embedding metadata in watermarks of media signals
US20080256138A1 (en) * 2007-03-30 2008-10-16 Siew Yong Sim-Tang Recovering a file system to any point-in-time in the past with guaranteed structure, content consistency and integrity
US7538899B2 (en) * 2002-01-08 2009-05-26 Fujifilm Corporation Print terminal apparatus
US20090300302A1 (en) * 2008-05-29 2009-12-03 Vmware, Inc. Offloading storage operations to storage hardware using a switch
US20110191296A1 (en) * 2009-09-16 2011-08-04 Wall George B Systems And Methods For Providing Business Continuity Services
US20110218968A1 (en) * 2005-06-24 2011-09-08 Peter Chi-Hsiung Liu System And Method for High Performance Enterprise Data Protection
US20110225123A1 (en) * 2005-08-23 2011-09-15 D Souza Roy P Multi-dimensional surrogates for data management
US20110289046A1 (en) * 2009-10-01 2011-11-24 Leach R Wey Systems and Methods for Archiving Business Objects
US20120101991A1 (en) * 2010-06-19 2012-04-26 Srivas Mandayam C Map-Reduce Ready Distributed File System

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963962A (en) * 1995-05-31 1999-10-05 Network Appliance, Inc. Write anywhere file-system layout
US6697960B1 (en) * 1999-04-29 2004-02-24 Citibank, N.A. Method and system for recovering data to maintain business continuity
US7383463B2 (en) * 2004-02-04 2008-06-03 Emc Corporation Internet protocol based disaster recovery of a server
US20070094659A1 (en) * 2005-07-18 2007-04-26 Dell Products L.P. System and method for recovering from a failure of a virtual machine
US20070276951A1 (en) * 2006-05-25 2007-11-29 Nicholas Dale Riggs Apparatus and method for efficiently and securely transferring files over a communications network
US20080263079A1 (en) * 2006-10-24 2008-10-23 Flextronics Ap, Llc Data recovery in an enterprise data storage system
US10481962B2 (en) * 2008-05-30 2019-11-19 EMC IP Holding Company LLC Method for data disaster recovery assessment and planning

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7209571B2 (en) * 2000-01-13 2007-04-24 Digimarc Corporation Authenticating metadata and embedding metadata in watermarks of media signals
US20030011087A1 (en) * 2001-07-16 2003-01-16 Imation Corp. Two-sided replication of data storage media
US7538899B2 (en) * 2002-01-08 2009-05-26 Fujifilm Corporation Print terminal apparatus
US20060179061A1 (en) * 2005-02-07 2006-08-10 D Souza Roy P Multi-dimensional surrogates for data management
US20060190503A1 (en) * 2005-02-18 2006-08-24 International Business Machines Corporation Online repair of a replicated table
US20060190497A1 (en) * 2005-02-18 2006-08-24 International Business Machines Corporation Support for schema evolution in a multi-node peer-to-peer replication environment
US20120005160A1 (en) * 2005-02-18 2012-01-05 International Business Machines Corporation Online repair of a replicated table
US20110218968A1 (en) * 2005-06-24 2011-09-08 Peter Chi-Hsiung Liu System And Method for High Performance Enterprise Data Protection
US20110225123A1 (en) * 2005-08-23 2011-09-15 D Souza Roy P Multi-dimensional surrogates for data management
US20080256138A1 (en) * 2007-03-30 2008-10-16 Siew Yong Sim-Tang Recovering a file system to any point-in-time in the past with guaranteed structure, content consistency and integrity
US20090300302A1 (en) * 2008-05-29 2009-12-03 Vmware, Inc. Offloading storage operations to storage hardware using a switch
US20110191296A1 (en) * 2009-09-16 2011-08-04 Wall George B Systems And Methods For Providing Business Continuity Services
US8412678B2 (en) * 2009-09-16 2013-04-02 Strategic Technologies, Inc. Systems and methods for providing business continuity services
US20110289046A1 (en) * 2009-10-01 2011-11-24 Leach R Wey Systems and Methods for Archiving Business Objects
US20120101991A1 (en) * 2010-06-19 2012-04-26 Srivas Mandayam C Map-Reduce Ready Distributed File System

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9716746B2 (en) 2013-07-29 2017-07-25 Sanovi Technologies Pvt. Ltd. System and method using software defined continuity (SDC) and application defined continuity (ADC) for achieving business continuity and application continuity on massively scalable entities like entire datacenters, entire clouds etc. in a computing system environment
CN105427163A (en) * 2015-12-09 2016-03-23 华夏银行股份有限公司 Data control system
CN108255641A (en) * 2017-12-25 2018-07-06 南京壹进制信息技术股份有限公司 A kind of CDP disaster recovery methods based on cloud platform
CN108255641B (en) * 2017-12-25 2020-08-18 南京壹进制信息科技有限公司 CDP disaster recovery method based on cloud platform

Also Published As

Publication number Publication date
US20110191296A1 (en) 2011-08-04
US8412678B2 (en) 2013-04-02

Similar Documents

Publication Publication Date Title
US8412678B2 (en) Systems and methods for providing business continuity services
US11797395B2 (en) Application migration between environments
US20220261419A1 (en) Provisioning and managing replicated data instances
US20230050233A1 (en) Envoy for multi-tenant compute infrastructure
US11372729B2 (en) In-place cloud instance restore
EP1907935B1 (en) System and method for virtualizing backup images
US20190391880A1 (en) Application backup and management
US20220129355A1 (en) Creation of virtual machine packages using incremental state updates
US9558076B2 (en) Methods and systems of cloud-based disaster recovery
US10909000B2 (en) Tagging data for automatic transfer during backups
US11663093B2 (en) Automated development of recovery plans
US20210334123A1 (en) Methods and systems for booting virtual machines in the cloud
US9489271B1 (en) User interface for restoring databases
Tadesse Efficient Bare Metal Backup and Restore in OpenStack Based Cloud Infrastructure: Design, Implementation and Testing of a Prototype

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION