US20080034177A1 - Storage system, method of controlling storage system, and storage device
- Publication number
- US20080034177A1 (U.S. application Ser. No. 11/898,945)
- Authority
- US
- United States
- Prior art keywords
- logical volume
- control unit
- storage apparatus
- time
- pair
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2058—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using more than 2 mirrored copies
- G06F11/2064—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring while ensuring consistency
- G06F11/2066—Optimisation of the communication load
- G06F11/2069—Management of state, configuration or failover
- G06F11/2074—Asynchronous techniques
- G06F11/2082—Data synchronisation
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2038—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
Abstract
The present invention provides a storage system and a method of controlling the storage system, in which a second site rapidly resumes system processing when a first site is damaged. The storage system comprises a first site including a first storage device, a second site including a second storage device, and a third site including a third storage device, and the method of controlling the storage system comprises a step of making a logical volume of the second storage device consistent with a logical volume of the first storage device by remotely copying only the differential data between the logical volume of the first storage device and the logical volume of the second storage device from a logical volume of the third storage device to the logical volume of the second storage device when the first site is damaged.
Description
- This is a continuation application of U.S. Ser. No. 11/526,598, filed Sep. 26, 2006, which is a continuation application of U.S. Ser. No. 11/196,418, filed Aug. 4, 2005 (now U.S. Pat. No. 7,185,152), which is a continuation application of Ser. No. 10/823,618, filed Apr. 24, 2004 (now U.S. Pat. No. 7,114,044).
- This application relates to and claims priority from Japanese Patent Application No. 2003-309194, filed on Sep. 1, 2003, the entire disclosure of which is incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to a storage system and a method of controlling the same.
- 2. Description of the Related Art
- Disaster recovery is important in an information-processing system. As a technique for disaster recovery, a standard technique is known in which a copy of the data on a storage device at a primary site is stored and managed by a storage device located at a remote site (hereinafter, this technique is referred to as ‘remote copy’). According to this technique, when the primary site is damaged, processes performed at the primary site are continued at the remote site by using the data in the storage device located at the remote site.
- In the above-described method, in order for the remote site to continue the process that was being performed at the primary site when the primary site is damaged, it is necessary to perform the remote copy from the primary site to the remote site in real time. However, the primary site and the remote site are often located far from each other. Therefore, if the remote copy from the primary site to the remote site is performed in real time, data communication takes a considerable amount of time, and the processing performance of the primary site is decreased. In an information-processing system requiring high availability (HA), the processing performance of the primary site should not be decreased, and the process should be rapidly resumed at the remote site when the primary site is damaged.
- The present invention is designed to solve the aforementioned problem, and it is an object of the present invention to provide a storage system and a method of controlling the same.
- To achieve this object, there is provided a method of controlling a storage system comprising a first storage device having a first storage volume provided at a first site, a second storage device having a second storage volume provided at a second site, and a third storage device having a third storage volume provided at a third site, the storage devices being connected so as to communicate with each other, wherein the method includes the steps of: storing a copy of data stored in the first storage volume in the second storage volume at a first time; writing the copy of data written in the first storage volume into the third storage volume; storing, in the third storage device, a write history of the data written in the first storage volume as a first differential management table after the first time; and allowing the third storage device to make the contents of the data stored in the second storage volume consistent with the contents of the data stored in the first storage volume using the first differential management table and the third storage volume of the third storage device.
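As an illustrative aside, the steps above can be sketched in Python. The `DifferentialTable` class, the block-map representation of the storage volumes, and the function names are hypothetical simplifications, not structures from the disclosure:

```python
# Hypothetical sketch: volumes as block->data maps, and a differential
# management table recording blocks changed after the "first time".

class DifferentialTable:
    def __init__(self):
        self.changed = set()

    def record(self, block):
        self.changed.add(block)

def write_first_volume(vol1, vol3, diff, block, data):
    # A write to the first storage volume is synchronously remote-copied
    # to the third storage volume and logged in the differential table.
    vol1[block] = data
    vol3[block] = data
    diff.record(block)

def resync_second_volume(vol2, vol3, diff):
    # Only the differential data is copied from the third storage volume
    # to the second storage volume.
    for block in diff.changed:
        vol2[block] = vol3[block]
    diff.changed.clear()

vol1 = {0: b"a", 1: b"b"}
vol2 = dict(vol1)   # copy stored in the second storage volume at the first time
vol3 = dict(vol1)   # third storage volume kept consistent with the first
diff = DifferentialTable()

write_first_volume(vol1, vol3, diff, 1, b"B")
resync_second_volume(vol2, vol3, diff)
assert vol2 == vol1
```

The point of the sketch is that only the blocks recorded after the first time travel to the second volume, rather than the whole first storage volume.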
- Here, in an information-processing system in which a first site is a primary site, a second site is a remote site, and a third site is a local site provided in the vicinity of the first site, it becomes possible to make the second storage volume consistent with the first storage volume by remotely copying only the differential data between the first storage volume and the second storage volume from the third storage volume to the second storage volume when the first site is damaged.
- Therefore, it is possible to provide a storage system and a method of controlling the storage system having the above effects.
- FIG. 1 is a diagram illustrating a schematic configuration of an information-processing system according to the present embodiment;
- FIG. 2 is a diagram illustrating a configuration of a disk array device according to the present embodiment;
- FIG. 3 is a diagram illustrating a configuration of an information-processing apparatus according to the present embodiment;
- FIG. 4 is a diagram illustrating a configuration of a channel control unit in a storage device according to the present embodiment;
- FIG. 5 is a diagram illustrating a table stored in a shared memory in a storage device according to the present embodiment;
- FIG. 6 is a diagram illustrating a pair management table according to the present embodiment;
- FIG. 7 is a diagram illustrating a differential management table according to the present embodiment;
- FIG. 8 is a diagram illustrating a change in the state of each storage device when a normal operation is performed at the first site in the information-processing system according to the present embodiment;
- FIG. 9 is a flowchart explaining a processing flow in a first information-processing apparatus when a normal operation is performed at the first site in the information-processing system according to the present embodiment;
- FIGS. 10A and 10B are flowcharts explaining a processing flow in a third storage device when a normal operation is performed at the first site in the information-processing system according to the present embodiment; and
- FIGS. 11A and 11B are diagrams illustrating an operation in which a logical volume of a second storage device is made consistent with a logical volume of the first storage device in order to shift a second site into use as the normal system in the information-processing system according to the present embodiment.
- Configuration of Information-Processing System
FIG. 1 is a block diagram illustrating the entire configuration of an information-processing system including a storage system 100 according to the present embodiment.
- The information-processing system comprises a storage device, or storage system 10 (hereinafter referred to as ‘a first storage device’ or ‘first storage system’) provided at a first site, an information-processing apparatus 11 (hereinafter referred to as ‘a first information-processing apparatus’) accessing the first storage device, a storage device, or storage system 20 (hereinafter referred to as ‘a second storage device’ or ‘second storage system’) provided at a second site, an information-processing apparatus 21 (hereinafter referred to as ‘a second information-processing apparatus’) accessing the second storage device, and a storage device, or storage system 30 (hereinafter referred to as ‘a third storage device’ or ‘third storage system’) provided at a third site. Each of the sites is, for example, a computer facility operated by an organization such as a university or corporation, or a data center in which a web server on the Internet or an ASP (Application Service Provider) is operated. The information-processing system is built for disaster recovery in the event of an earthquake, fire, typhoon, flood, lightning, an act of terrorism, and the like. This is described in detail in U.S. patent application Ser. No. 10/096,375, which is incorporated herein by reference.
- In this embodiment, the first site is the above-described primary site, the second site is the above-described remote site, and the third site is the above-described local site.
- The storage devices 10, 20, and 30 are connected to each other so as to communicate via a first network 40. The first network 40 is, for example, Gigabit Ethernet (a registered trademark), an asynchronous transfer mode (ATM) network, a public line, or the like.
- Each of the information-processing apparatuses 11 and 21 is a computer comprising a CPU (Central Processing Unit), a memory, and the like, and various functions are realized when the CPU executes application software.
processing apparatuses storage devices -
- FIG. 2 shows the detailed configuration of a disk array device described as one example of the first to third storage devices 10, 20, and 30. Each of the first to third storage devices comprises a channel control unit 101, a remote communication interface 102, a disk control unit 103, a shared memory 104, a cache memory 105, a switching control unit 106, which is composed of, for example, a crossbar switch, for connecting the above components such that they can communicate with each other, a management terminal 107, and a plurality of storages, or disk devices 108.
cache memory 105 is mainly used to temporarily store data communicated between thechannel control unit 101 and thedisk control unit 103. For example, if data input/output commands received from the information-processing apparatuses channel control unit 101 are write commands, thechannel control unit 101 writes write data received from the information-processing apparatuses cache memory 105. Further, thedisk control unit 103 reads out the write data from thecache memory 105 and writes it into thestorage 108. - The
disk control unit 103 reads out data input/output (I/O) requests which are written into the sharedmemory 104 by thechannel control unit 101, and writes or reads data into or from thestorage 108 according to commands (e.g., commands of SCSI (Small Computer System Interface) standards) set up in the data I/O requests. Thedisk control unit 103 writes the data read out from thestorage 108 into thecache memory 105. Further, thedisk control unit 103 transmits notice of the completion of data write or data read to thechannel control unit 101. Thedisk control unit 103 may have a function for controlling thestorages 108 in a RAID (Redundant Array of Inexpensive Disks) level (e.g., 0, 1, 5) defined in a so-called RAID manner. - The
storage 108 is, for example, a hard disk device. Thestorage 108 may be integrated with the disk array device or may be independently provided therefrom. A storage area provided by thestorage device 108 at each site is managed using alogical volume 109, which is a volume logically set up in the storage area, as a unit. Writing or reading data into or from thestorage 108 can be performed by designating an identifier, which is given to thelogical volume 109. - The
management terminal 107 is a computer for maintaining and managing the disk array device or thestorage 108. The change of software or parameters carried out in thechannel control unit 101 or thedisk control unit 103 is performed by the instructions from themanagement terminal 107. Themanagement terminal 107 may be built in the disk array device or may be separately provided therefrom. - The
remote communication interface 102 is a communication interface (a channel extender) for data communication withother storage devices remote communication interface 102. Theremote communication interface 102 converts an interface (e.g., an interface, such as a Fibre Channel, an ESCON (a registered trademark), or a FICON (a registered trademark)) of thechannel control unit 101 into a communication method for thefirst network 40. It allows data transmission between thestorage devices - Further, the disk array device may be, for example, a device functioning as a NAS (Network Attached Storage) that is configured to accept data input/output requests according to file name designation from the information-
processing apparatuses - The shared
memory 104 is accessible by both thechannel control unit 101 and theremote communication interface 102, and thedisk control unit 103. The shared memory is used to deliver data input/output request commands and also stores management information for thestorage devices storage device 108. In this embodiment, a consistency group management table 200, a pair management table 201, and a differential management table 202 are stored in the sharedmemory 104, as shown inFIG. 5 . -
FIG. 3 is a block diagram illustrating the configuration of the information-processing apparatuses 11 and 21.
processing apparatuses CPU 110, amemory 120, aport 130, a recordingmedium reading device 140, aninput device 150, and anoutput device 160. - The
CPU 110 is in charge of the control of the entire information-processing apparatuses memory 120. The recordingmedium reading device 140 is a device for reading out programs or data recorded on arecording medium 170. The read programs or data are stored in thememory 120. Thus, for example, it is possible to read a storagedevice management program 121 or anapplication program 122, which is recorded on therecording medium 170, from therecording medium 170 using the recordingmedium reading device 140 and to store it in thememory 120. As therecording medium 170, a flexible disk, a CD-ROM, a semiconductor memory, and the like may be used. The recordingmedium reading device 140 may be built in the information-processing apparatuses input device 150 is used for inputting data into the information-processing apparatuses input device 150, for example, a keyboard, a mouse, and the like are used. Theoutput device 160 is a device for outputting information to the outside. As theoutput device 160, for example, a display, a printer, and the like are used. Theport 130 is a device for communicating with thestorage devices port 130 may be used for communication between the information-processing apparatuses device management program 121 or theapplication program 122 may be received from the information-processing apparatuses port 130 and may be stored in thememory 120. - The storage
device management program 121 is a program for performing the copy management of the data stored in thelogical volume 109 that is included in thestorage 108. Various commands for performing the copy management are transmitted to thestorage devices - The
application program 122 is a program for making the information-processingapparatus -
FIG. 4 is a block diagram showing the configuration of the channel control unit 101.
channel control unit 101 comprises aCPU 211, acache memory 212, acontrol memory 213, aport 215, and abus 216. - The
CPU 211 is in charge of the control of the entirechannel control unit 101 and executes acontrol program 214 stored in thecontrol memory 213. The copy management of data according to the present embodiment is realized as thecontrol program 214 stored in thecontrol memory 213 is executed. Thecache memory 212 is a memory for temporarily storing data, commands, and the like, which are communicated with the information-processing apparatuses port 215 is a communication interface for communicating with the information-processing apparatuses storage devices bus 216 connects these devices to each other. - [Pair Management Table]
- A pair management table 201 is a copy management table of the
logical volume 109 of each of thestorage devices - A pair refers to a combination of
logical volumes 109 formed by twological volumes 109. Further, a case in which twological volumes 109 forming a pair are in thesame storage devices different storage devices logical volumes 109 forming a pair, one is a main logical volume, and the other is a secondary logical volume. It is possible to combine a plurality of secondary logical volumes with one main logical volume. - If the information-
processing apparatuses source storage devices source storage devices memory 104 in each of the copysource storage devices source storage devices destination storage devices source storage devices memories 104 in the copydestination storage devices - The column ‘type of pair’ in the pair management table 201 represents whether the pair is a local pair or a remote pair. The column ‘copy manner’ represents whether a remote copy manner is a synchronous manner or an asynchronous manner when the pair is the remote pair. Further, the remote copy and the manner thereof will be described later. The columns ‘copy source device’ and ‘copy destination device’ represent the
storage devices storage devices - The column ‘state of pair’ represents the state of the pair. As the state of the pair, there are ‘under pair’, ‘under split’, and ‘under resynchronization’.
- In case of the ‘under pair’, data written into the main logical volume from the information-
processing apparatuses - In case of the ‘under split’, even though data is written from information-
processing apparatuses - The ‘under resynchronization’ is a state in the course of shifting from the ‘under split’ to the ‘under pair’. That is, in the ‘under split’, the data update performed on the main logical volume is being reflected to the secondary logical volume. If the reflection is completed, the state of the pair becomes the ‘under pair’. Further, shifting the pair from the state of the ‘under split’ to the state of the ‘under pair’ is called pair re-forming.
- The formation of a pair, the split of a pair, and the re-formation of a pair are performed by the operator inputting instructions from the
input device 150 to the information-processing apparatuses device management program 121 is executed. The instructions input by the operator are transmitted to thechannel control units 101 in thestorage devices channel control unit 101 executes thecontrol program 214 to perform the formation of the pair or a change of the state of the pair according to the instructions. Thechannel control unit 101 performs the control of thelogical volume 109 according to the pairing status of the formed pair. For example, thechannel control unit 101 reflects the copy of the updated data of the main logical volume to the secondary logical volume with respect to the pair in the ‘under pair’ state. - The change of the pairing status by the
channel control unit 101 is sequentially performed on each pair. The reason is that, for example, in a state in which a pair is formed by combining a plurality of secondary logical volumes with one main logical volume as described above, when a plurality of pairing conditions is simultaneously changed, management for the main logical volume is complicated. - Further, while the formation of a pair or the shift of a paring status is initiated by the instructions received by information-
processing apparatuses processing apparatuses port 215. - In this embodiment, a pair is set up as shown in the pair management table of
FIG. 6. That is, the logical volume 109 (hereinafter referred to as 'a first logical volume') at the first site forms a local pair. Further, the secondary logical volume (hereinafter referred to as 'a first secondary logical volume') of the local pair of the first logical volume and the logical volume 109 (hereinafter referred to as 'a second logical volume') at the second site form a remote pair. In addition, the main logical volume of the local pair of the first logical volume and the logical volume 109 (hereinafter referred to as 'a third logical volume') at the third site form a remote pair. This remote pair is always in the 'under pair' state, and the third logical volume is therefore always consistent with the main logical volume by the remote copy in a synchronous manner, which will be described later. - Further, data backup from the first logical volume to the second logical volume is performed as follows. First, the
first storage device 10 shifts the local pair of the first logical volume to the 'under split' according to the instructions from the first information-processing apparatus 11. When the split is completed, the first storage device 10 re-forms the remote pair of the first secondary logical volume and the second logical volume according to the instructions from the first information-processing apparatus 11. Further, the first information-processing apparatus 11 can continue to perform processing using the main logical volume (hereinafter referred to as 'a first main logical volume') of the local pair of the first logical volume during the re-formation of the remote pair. - [Consistency Group]
- The number of a consistency group (i.e., a pair group) to which the pair belongs is written into the column 'consistency group' of the pair management table 201. Here, a consistency group refers to a group formed of pairs of a plurality of logical volumes 109, which is controlled so that the shift to the split state thereof is made simultaneously. That is, a change of the pairing status with respect to a plurality of pairs is performed sequentially on each pair as described above. However, a plurality of pairs belonging to the same consistency group is controlled such that the shift to the split state thereof is performed simultaneously (hereinafter referred to as simultaneity of the shift to the split state).
- For example, a case is considered in which data is written from the information-
processing apparatuses into a plurality of logical volumes 109 of the storage devices. If the pairs of these logical volumes 109 are split at different times, the secondary logical volumes hold the contents of the main logical volumes at different points in time, and the consistency of the data as a whole is lost. The consistency group secures the simultaneity of the shift to the split state and prevents this. - As described above, forming a consistency group with respect to a plurality of pairs is particularly effective when one data item is stored in the plurality of
logical volumes 109, for example, when the write data is too large to be stored in one logical volume 109, or when one file data item is controlled so that it is stored in a plurality of logical volumes 109.
- In addition, securing the simultaneity of the shift of each pair to the split state using the consistency group is effective even when there is a data write request or a data read request from the information-processing apparatuses to the secondary logical volumes during the shift.
- That is, when the consistency group has not been formed, data writing or data reading can be performed with respect to the secondary logical volume of a pair whose shift to the split state is completed, while data writing or data reading is prohibited with respect to the secondary logical volume of a pair whose shift to the split state is not completed.
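- The access rule above can be sketched as follows; this is a minimal illustration under hypothetical class and method names of a group that permits access to its secondary logical volumes only once every member pair has completed the shift to the split state.

```python
# Illustrative sketch: secondary volumes of a consistency group become
# accessible only after ALL member pairs have completed the split.
# Class and method names are hypothetical.

class MemberPair:
    def __init__(self):
        self.split_completed = False

class ConsistencyGroup:
    def __init__(self, pairs):
        self.pairs = list(pairs)

    def complete_split(self, index):
        # the shift to the split state is performed pair by pair...
        self.pairs[index].split_completed = True

    def secondaries_accessible(self):
        # ...but access is granted only when the whole group is split
        return all(p.split_completed for p in self.pairs)
```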
- Further, a split for a pair of the consistency group may be performed by designating the start time thereof. The start time of the split is instructed by a command transmitted from the information-processing apparatuses to the storage devices. - [Remote Copy]
- When the above-stated remote pairs are formed in the logical volumes 109 of the first to third storage devices, a remote copy is performed from the copy-source storage devices to the copy-destination storage devices. That is, when the information-processing apparatuses write data into the main logical volumes of the copy-source storage devices, the copy-source storage devices transmit the written data to the copy-destination storage devices through the first network 40. Then, the copy-destination storage devices write the received data into the secondary logical volumes. In this way, the secondary logical volumes of the copy-destination storage devices hold a copy of the data in the main logical volumes of the copy-source storage devices.
- Further, the remote copy manner includes a synchronous manner and an asynchronous manner, which are determined in the 'copy manner' column of the pair management table 201. In the case of the synchronous manner, if the information-
processing apparatuses write data into the main logical volumes of the copy-source storage devices, the copy-source storage devices transmit the data to the copy-destination storage devices. The copy-destination storage devices write the data into the secondary logical volumes and notify the copy-source storage devices of the completion of the writing. Only after receiving this notification do the copy-source storage devices notify the information-processing apparatuses of the completion of the writing. - In such a synchronous manner, the information-
processing apparatuses are notified of the completion of the writing only after the data is written into both the main logical volume and the secondary logical volume, so that the contents of the main logical volume and the secondary logical volume are kept consistent. On the other hand, the response time to the information-processing apparatuses becomes longer, particularly when the distance between the copy-source storage devices and the copy-destination storage devices is long. - Meanwhile, in the asynchronous manner, when the information-
processing apparatuses write data into the main logical volumes of the copy-source storage devices, the copy-source storage devices notify the information-processing apparatuses of the completion of the writing as soon as the data is written into the main logical volumes, and transmit the data to the copy-destination storage devices afterward. The copy-destination storage devices then write the received data into the secondary logical volumes. Thus, the response time to the information-processing apparatuses is short, but the contents of the secondary logical volumes of the copy-destination storage devices may lag behind the contents of the main logical volumes of the copy-source storage devices. - In this embodiment, the remote pair between the first main logical volume and the third logical volume performs a remote copy in the synchronous manner. Thus, the contents of the first main logical volume are always consistent with the contents of the third logical volume.
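- The difference between the two remote-copy manners can be sketched as follows. This is an illustrative model only; the function names and the use of an in-memory queue for the deferred transfer are assumptions, not part of the embodiment.

```python
# Sketch contrasting the two remote-copy manners. Synchronous: the host
# write completes only after the copy-destination has written the data.
# Asynchronous: the host write completes as soon as the copy-source has
# written; the transfer to the destination happens later.

def synchronous_write(source, destination, block, data):
    source[block] = data
    destination[block] = data      # transferred and written before returning
    return "write complete"        # host notified only after both writes

def asynchronous_write(source, pending_transfers, block, data):
    source[block] = data
    pending_transfers.append((block, data))  # destination updated later
    return "write complete"        # host notified without waiting

def drain(pending_transfers, destination):
    # later, the queued updates reach the copy-destination volume
    while pending_transfers:
        block, data = pending_transfers.pop(0)
        destination[block] = data
```

In the asynchronous case the destination volume lags until the pending transfers are drained, which is the trade-off the text describes.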
- [Differential Management Table]
- The differential management table 202 shows whether each block of a certain logical volume 109 has been updated since a predetermined time. As shown in FIG. 5, the table is generally recorded in the shared memories 104 of the storage devices, and one bit is allocated to each block of the logical volume 109. The differential management table 202, in which the initial state of each bit is '0', is recorded in the shared memories 104 of the storage devices. When the storage devices write data into the logical volumes 109, each of them updates the bit on the differential management table that indicates the block in which the data is stored to '1'. - In this embodiment, whether the
third storage device 30 has updated any block of the third logical volume since the time when the local pair of the first logical volume was split is recorded in the shared memory 104 of the third storage device 30 as the differential management table 202. As described above, the remote pair of the first main logical volume and the third logical volume is always the 'under pair', and the remote copy is performed in the synchronous manner. Thus, the differential management table 202 of the third storage device 30 indicates the write history of the data recorded in the first main logical volume after the local pair of the first logical volume is split. That is, in the differential management table 202, the data in the third logical volume of a block for which '1' is set is data recorded in the first main logical volume after the local pair of the first logical volume was split. In addition, the first information-processing apparatus 11 backs up the first secondary logical volume to the second logical volume in a state in which the local pair of the first logical volume is split. - In this way, when the first site is damaged, the
third storage device 30 makes the second logical volume consistent with the first main logical volume by remotely copying, to the second logical volume, only the data in the third logical volume for which '1' is set in the differential management table 202 of the third storage device 30. - However, when the first site is damaged while the
first storage device 10 performs a remote copy from the first secondary logical volume to the second logical volume, the second logical volume has not yet been updated to be consistent with the first main logical volume at the time when the local pair of the first logical volume was split the last time. Thus, the third storage device 30 cannot make the second logical volume consistent with the first main logical volume using only the differential management table 202 recorded at the time when the first logical volume was split the last time. - Thus, in this embodiment, the
third storage device 30 records the differential management tables 202 at both the time when the local pair of the first logical volume was split the time before last and the time when it was split the last time. Accordingly, when the first site is damaged while the first storage device 10 performs a remote copy from the first secondary logical volume to the second logical volume, the third storage device 30 can make the second logical volume consistent with the first main logical volume using the differential management table 202 recorded at the time when the local pair of the first logical volume was split the time before last. -
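The differential management table can be modeled as a bitmap with one bit per block, and the recovery copy as transferring only the flagged blocks. The following sketch is illustrative only; the class and function names are hypothetical.

```python
# Sketch of a differential management table: one bit per block, '0'
# initially, set to '1' when the block is written after the base time.
# The recovery copy transfers only the flagged blocks.

class DifferentialTable:
    def __init__(self, n_blocks):
        self.bits = [0] * n_blocks       # initial state of every bit is 0

    def record_write(self, block):
        self.bits[block] = 1             # block updated since the base time

    def clear(self):
        self.bits = [0] * len(self.bits)

def recovery_copy(third_volume, second_volume, table):
    # remote-copy only the blocks whose bit is set to '1'
    for block, flagged in enumerate(table.bits):
        if flagged:
            second_volume[block] = third_volume[block]
```
-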
FIG. 7 shows an example in which the first site is damaged while the first storage device 10 is performing a remote copy from the first secondary logical volume to the second logical volume. In the differential management table 202, '1' is set for a block of the third logical volume on which the third storage device 30 performs an update, and FIG. 7 shows the contents of the write data in the logical volume for convenience of explanation. As shown in FIG. 7, at the time of the damage, the update is completed up to the data '5' in the remote copy from the first secondary logical volume to the second logical volume performed by the first storage device 10. However, since the update of the subsequent data '9' is not completed, the second logical volume is not consistent with the first main logical volume at the time when the local pair of the first logical volume was split the last time. Thus, the third storage device 30 remotely copies, to the second logical volume, the data '1', '5', '9', '3', '4', and '2' of the blocks 1 to 6 for which '1' is set in the differential management table at the time when the local pair of the first logical volume was split the time before last. Accordingly, the third storage device 30 can make the second logical volume consistent with the first main logical volume at the time when the first site is damaged. - Further, if the
storage devices receive a differential delete request from the information-processing apparatuses, they delete (initialize) the differential management table 202 designated by the information-processing apparatuses. In this embodiment, the logical volume 109 that performs the update management in the differential management table 202 forms the consistency group. - In this embodiment, the third logical volume has two differential management tables 202, as described above. If the
third storage device 30 receives a differential delete request from the first information-processing apparatus 11, it deletes only the differential management table 202 recorded at the time when the local pair of the first logical volume was split the time before last, and begins to record, on the deleted differential management table 202, the update information from the time when the local pair is newly split. - Transition in General Operation
-
FIG. 8 illustrates the transition of the data state at each site at the times T1 to T3 when the first information-processing apparatus 11 operates a normal system. Further, STn written in the logical volume 109 indicates that the logical volume 109 is consistent with the first main logical volume at the time Tn. Further, Δtxy indicates a differential management table 202 covering the period from the time Tx to the time Ty. - First, the state at the time T1 will be described. Assuming that the last split time of the local pair of the first logical volume is T0, a state is illustrated in which the first information-processing apparatus 11 splits the local pair of the first logical volume at the time T1. The state is as follows: the local pair of the first logical volume is the 'under split', the remote pair of the first main logical volume and the third logical volume is the 'under pair', and the remote pair of the first secondary logical volume and the second logical volume is the 'under split'. The first information-processing apparatus 11 designates the time T1, which is the same as the time of the split of the first logical volume, and transmits a delete request for the differential management table 202 to the third storage device 30. The third storage device 30 having received the request deletes, at the time T1, the differential management table 202 recorded at the time when the local pair of the first logical volume was split the time before last. Further, the second logical volume is in the state of the first main logical volume at the time T0. - Next, a state at the time T2 will be described. In this state, the
first storage device 10 re-forms the remote pair of the first secondary logical volume and the second logical volume according to an instruction from the first information-processing apparatus 11. The first information-processing apparatus 11 monitors the pairing status of the remote pair of the first secondary logical volume and the second logical volume. Then, when the pairing status becomes the 'under pair', the first information-processing apparatus 11 instructs the first storage device 10 to split the remote pair. - Finally, a state at the time T3 will be described. In this state, the first information-processing
apparatus 11 instructs the first storage device 10 to re-form the local pair of the first logical volume. Then, the first storage device 10 re-forms the local pair of the first logical volume. - As described above, the transition from the state at the time T1 to the state at the time T3 occurs repeatedly while the first site is not damaged. The operation of the first information-processing apparatus 11 and the third storage device 30 during this transition will be described. - First, the operation of the first information-processing
apparatus 11 will be described with reference to FIG. 9. The first information-processing apparatus 11 instructs the first storage device 10 to split the local pair of the first logical volume with a time designation (S911). Here, the split is executed on the local pairs in a consistency group unit. Further, the first information-processing apparatus 11 transmits to the third storage device 30 a differential delete request in which the same time as the split is set as the differential delete start time (S912). Further, the differential delete request is performed on the remote pairs in the same consistency group as the local pairs on which the split is performed. When the first information-processing apparatus 11 receives a split completion notice from the first storage device 10 and a differential delete completion notice from the third storage device 30 (S914) after the set time has elapsed (S913), it instructs the first storage device 10 to re-form the remote pair of the first secondary logical volume and the second logical volume (S915). The first information-processing apparatus 11 monitors the pairing status of the remote pair of the first secondary logical volume and the second logical volume. Then, when the state of the pair becomes the 'under pair' (S916), the first information-processing apparatus 11 instructs the first storage device 10 to split the pair (S917). Subsequently, the first information-processing apparatus 11 instructs the first storage device 10 to re-form the local pair of the first logical volume (S918). The first information-processing apparatus 11 repeatedly executes such a series of processes. - Next, the operation of the
third storage device 30 will be described with reference to FIGS. 10A and 10B. When the third storage device 30 receives a differential delete request from the first information-processing apparatus 11, it sets the differential delete start time in the consistency group management table 200 (S1011). The third storage device 30 monitors data write requests from the first storage device 10 until the differential delete start time (S1012). When the third storage device 30 receives a data write request from the first storage device 10, it confirms whether the data is to be recorded in a logical volume 109 whose differential management table 202 has not yet been deleted (S1013). If the data writing is directed to a logical volume 109 whose differential management table 202 has not yet been deleted, the third storage device 30 compares the time set in the write request with the differential delete start time (S1014). When the time set in the write request is later than the differential delete start time, the third storage device 30 first deletes the differential management table 202 of the logical volume 109 that is the target of the write operation (S1015). The third storage device 30 then records the information regarding the write data, for example, the write position, on the differential management table 202 (S1016). Further, if a data write request from the first storage device 10 is directed to a logical volume 109 whose differential management table 202 has already been deleted, the third storage device 30 does not perform the deletion of the differential management table 202 and records the write information on the differential management table 202 (S1017). Further, even though the write is directed to a logical volume 109 whose differential management table 202 has not been deleted, the third storage device 30 does not perform the deletion of the differential management table 202 and records the write information on the differential management table 202 when the time set in the write request is earlier than the differential delete start time (S1017). - As such, the
third storage device 30 compares the time set in the write request received from the first storage device 10 with the differential delete start time set in the differential delete request received from the first information-processing apparatus 11, and performs the recording on the differential management table 202 and the deletion of the differential management table 202 in time order. That is, the data writing to the first main logical volume and the split of the local pair of the first logical volume performed by the first storage device 10, and the recording on the differential management table 202 and the deletion of the differential management table 202 performed by the third storage device 30, are performed in the correct order. Accordingly, it is ensured that the differential management table 202 of the third storage device 30 is a write history of the data written to the first main logical volume after the time when the local pair of the first logical volume was split. - Further, if the
third storage device 30 does not receive a write request from the first storage device 10, it deletes the differential management table 202 of each logical volume 109 that has not yet been deleted (S1019) after the differential delete start time has elapsed (S1018). Since the third storage device 30 deletes the differential management tables 202 in a consistency group unit, it confirms whether the deletion of the differential management tables 202 for all pairs in the consistency group is completed (S1020). If the deletion of the differential management tables 202 for all pairs in the consistency group is completed, the third storage device 30 deletes the differential delete start time from the consistency group management table (S1021) and transmits to the first information-processing apparatus 11 a notice that the deletion of the differential management table 202 is completed (S1022). Further, each logical volume 109 has two differential management tables 202, as described above. Therefore, when the third storage device 30 receives a differential delete request from the first information-processing apparatus 11, it deletes only the differential management table 202 recorded at the time when the local pair of the first logical volume was split the time before last. - [Process when First Site is Damaged]
- The operation of making the second logical volume consistent with the first main logical volume in order to operate the second site as a main system when the first site is damaged will be discussed. First, the second information-processing
apparatus 21 instructs the second storage device 20 to acquire the pairing status of the remote pair of the second logical volume and the first secondary logical volume. The second storage device 20 having received this instruction refers to the pair management table 201 in the shared memory 104 of the second storage device 20 and transmits the contents of the table to the second information-processing
apparatus 21 instructs the second storage device 20 to form a remote pair in which the third logical volume is the main logical volume and the second logical volume is the secondary logical volume. Further, the second information-processing apparatus 21 transmits to the second storage device 20 whether the remote pair of the second logical volume and the first secondary logical volume was in the 'under resynchronization' state. - When the second information-processing
apparatus 21 instructs the second storage device 20 to form the pair of the third logical volume and the second logical volume, the second storage device 20 updates the pair management table 201 of the second storage device 20. Further, the second storage device 20 transmits to the third storage device 30 the pair formation instruction and the state of the pair of the second logical volume and the first secondary logical volume received from the second information-processing apparatus 21. When the third storage device 30 receives them, it updates the pair management table 201 of the third storage device 30. Then, the third storage device 30 performs a remote copy from the third logical volume to the second logical volume based on the state of the pair of the second logical volume and the first secondary logical volume, which is received from the second storage device 20.
third storage device 30 refers to the differential management table 202 at the time when the local pair of the first logical volume is split the last time. Further, if the state of the remote pair of the second logical volume and the first secondary logical volume is the ‘under resynchronization’, thethird storage device 30 refers to the differential management table 202 at the time when the local pair of the first logical volume is split the time before last. Thethird storage device 30 remotely copies only the third logical volume block on the referred differential management table 202, in which ‘1’ is set up, to the second logical volume. -
FIG. 11A illustrates an example in which the remote pair of the second logical volume and the first secondary logical volume is not in the 'under resynchronization' state. Assuming that the time when the local pair of the first logical volume was split the time before last is T0 and the time when it was split the last time is T1, FIG. 11A illustrates a situation in which the first site is damaged at the time T3. Since the pairing status of the remote pair of the second logical volume and the first secondary logical volume is the 'under split', the third storage device 30 remotely copies the blocks recorded on the differential management table 202 (ΔT13) at the time T1 from the third logical volume to the second logical volume. Therefore, it is possible to make the second logical volume, which becomes ST3, consistent with the first main logical volume. - Next,
FIG. 11B illustrates an example in which the remote pair of the second logical volume and the first secondary logical volume is in the 'under resynchronization' state. Assuming that the time when the local pair of the first logical volume was split the time before last is T0 and the time when it was split the last time is T1, FIG. 11B illustrates a situation in which the first site is damaged at the time T2. Since the state of the remote pair of the second logical volume and the first secondary logical volume is the 'under resynchronization', the third storage device 30 remotely copies the blocks recorded on the differential management table 202 (ΔT02) at the time T0 from the third logical volume to the second logical volume. Therefore, it is possible to make the second logical volume, which becomes ST2, consistent with the first main logical volume. - The aforementioned embodiments are intended to facilitate understanding of the present invention, and the present invention is not limited thereto. The present invention may be modified without departing from the spirit thereof, and includes equivalents thereof.
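- The write-handling rule of steps S1013 to S1017 described above, which also underlies the timestamp comparison in the claims, can be summarized in a short sketch; the function name and data layout are hypothetical illustrations, not the claimed implementation.

```python
# Sketch of S1013-S1017: on a write whose time stamp is later than the
# differential delete start time, clear the not-yet-deleted table first,
# then record the write position; otherwise only record the position.

def handle_write(bits, table_deleted, write_time, delete_start_time, block):
    bits = list(bits)  # work on a copy; the caller keeps the returned table
    if not table_deleted and write_time > delete_start_time:
        bits = [0] * len(bits)    # S1015: delete (clear) the table first
        table_deleted = True
    bits[block] = 1               # S1016/S1017: record the write position
    return bits, table_deleted
```

This ordering is what ensures the table is a write history of the data written after the split time, even though the delete request and the write requests arrive independently.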
Claims (13)
1. (canceled)
2. A storage apparatus comprising:
a control unit;
a plurality of disks forming a logical volume; and
a bitmap indicating a location in the logical volume,
wherein the control unit receives an instruction including bitmap clear timing information, based on which the control unit clears the bitmap,
wherein when the control unit receives a write request to the logical volume, the write request including a time stamp, the control unit compares a first time specified by the time stamp and a second time specified by the bitmap clear timing information, and
wherein if the first time is later than the second time, the control unit clears the bitmap.
3. A storage apparatus according to claim 2, wherein after the control unit clears the bitmap, the control unit starts to record information indicating a location, in which data is updated according to the write request, by using the bitmap.
4. A storage apparatus according to claim 2,
wherein when the control unit does not receive the write request to the logical volume, the control unit compares a current time and the second time, and
wherein if the current time is later than the second time, the control unit clears the bitmap.
5. A method for managing location information in a logical volume by a storage system having a control unit, a plurality of disks configuring the logical volume, and a bitmap used for recording the location information, comprising the steps of:
receiving an instruction including bitmap clear timing information, based on which the control unit clears the bitmap;
receiving a write request to the logical volume, the write request including a time stamp;
comparing a first time specified by the time stamp and a second time specified by the bitmap clear timing information; and
clearing the bitmap, if the first time is later than the second time.
6. A method for managing location information according to claim 5, further comprising a step of:
starting to record information indicating a location, in which data is updated according to the write request, by using the bitmap.
7. A method for managing location information according to claim 5, further comprising the steps of:
comparing a current time and the second time, if the control unit does not receive the write request to the logical volume; and
clearing the bitmap, if the current time is later than the second time.
8. A third storage apparatus comprising:
a third control unit;
a plurality of disks configuring a third logical volume; and
a plurality of tables used for recording location information in the third logical volume,
wherein the third control unit receives a write request from a first storage apparatus coupled to the third storage apparatus, and records the location information indicating a location, in which data is to be stored according to the write request, by using the plurality of tables, and
wherein the third control unit clears one of the plurality of tables in relation to a copy operation between the first storage apparatus and a second storage apparatus coupled to the first storage apparatus.
9. A third storage apparatus according to claim 8, wherein the third control unit receives the write request from the first storage apparatus according to a synchronous remote copy operation.
10. A third storage apparatus according to claim 8, wherein the third control unit clears the one of the plurality of tables in relation to an asynchronous remote copy operation between the first storage apparatus and the second storage apparatus.
11. A method for managing location information in a third logical volume by a third storage apparatus in relation to a copy operation between a first storage apparatus and a second storage apparatus,
wherein the third storage apparatus has a third control unit, a plurality of third disks configuring the third logical volume, and a plurality of tables used for recording the location information, the method comprising the steps of:
receiving a write request from the first storage apparatus;
recording the location information indicating a location, in which data is stored according to the write request, by using the plurality of tables; and
clearing one of the plurality of tables in relation to a copy operation between the first storage apparatus and a second storage apparatus coupled to the first storage apparatus.
12. A method for managing location information according to claim 11, wherein the third control unit receives the write request from the first storage apparatus according to a synchronous remote copy operation.
13. A method for managing location information according to claim 11, wherein the third control unit clears the one of the plurality of tables in relation to an asynchronous remote copy operation between the first storage apparatus and the second storage apparatus.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/898,945 US20080034177A1 (en) | 2003-09-01 | 2007-09-18 | Storage system, method of controlling storage system, and storage device |
US11/907,643 US20080052479A1 (en) | 2003-09-01 | 2007-10-16 | Storage system, method of controlling storage system, and storage device |
US12/567,229 US20100017574A1 (en) | 2003-09-01 | 2009-09-25 | Storage system, method of controlling storage system, and storage device |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-309194 | 2003-09-01 | ||
JP2003309194A JP4021823B2 (en) | 2003-09-01 | 2003-09-01 | Remote copy system and remote copy method |
US10/823,618 US7114044B2 (en) | 2003-09-01 | 2004-04-14 | Storage system, method of controlling storage system, and storage device |
US11/196,418 US7185152B2 (en) | 2003-09-01 | 2005-08-04 | Storage system, method of controlling storage system, and storage device |
US11/526,598 US7287132B2 (en) | 2003-09-01 | 2006-09-26 | Storage system, method of controlling storage system, and storage device |
US11/898,945 US20080034177A1 (en) | 2003-09-01 | 2007-09-18 | Storage system, method of controlling storage system, and storage device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/526,598 Continuation US7287132B2 (en) | 2003-09-01 | 2006-09-26 | Storage system, method of controlling storage system, and storage device |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/907,643 Continuation US20080052479A1 (en) | 2003-09-01 | 2007-10-16 | Storage system, method of controlling storage system, and storage device |
US12/567,229 Continuation US20100017574A1 (en) | 2003-09-01 | 2009-09-25 | Storage system, method of controlling storage system, and storage device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080034177A1 true US20080034177A1 (en) | 2008-02-07 |
Family ID=34191244
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/823,618 Expired - Fee Related US7114044B2 (en) | 2003-09-01 | 2004-04-14 | Storage system, method of controlling storage system, and storage device |
US11/196,418 Expired - Lifetime US7185152B2 (en) | 2003-09-01 | 2005-08-04 | Storage system, method of controlling storage system, and storage device |
US11/526,598 Expired - Fee Related US7287132B2 (en) | 2003-09-01 | 2006-09-26 | Storage system, method of controlling storage system, and storage device |
US11/898,945 Abandoned US20080034177A1 (en) | 2003-09-01 | 2007-09-18 | Storage system, method of controlling storage system, and storage device |
US11/907,643 Abandoned US20080052479A1 (en) | 2003-09-01 | 2007-10-16 | Storage system, method of controlling storage system, and storage device |
US12/567,229 Abandoned US20100017574A1 (en) | 2003-09-01 | 2009-09-25 | Storage system, method of controlling storage system, and storage device |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/823,618 Expired - Fee Related US7114044B2 (en) | 2003-09-01 | 2004-04-14 | Storage system, method of controlling storage system, and storage device |
US11/196,418 Expired - Lifetime US7185152B2 (en) | 2003-09-01 | 2005-08-04 | Storage system, method of controlling storage system, and storage device |
US11/526,598 Expired - Fee Related US7287132B2 (en) | 2003-09-01 | 2006-09-26 | Storage system, method of controlling storage system, and storage device |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/907,643 Abandoned US20080052479A1 (en) | 2003-09-01 | 2007-10-16 | Storage system, method of controlling storage system, and storage device |
US12/567,229 Abandoned US20100017574A1 (en) | 2003-09-01 | 2009-09-25 | Storage system, method of controlling storage system, and storage device |
Country Status (4)
Country | Link |
---|---|
US (6) | US7114044B2 (en) |
EP (3) | EP1517242B1 (en) |
JP (1) | JP4021823B2 (en) |
DE (3) | DE602004023003D1 (en) |
Families Citing this family (75)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7434219B2 (en) | 2000-01-31 | 2008-10-07 | Commvault Systems, Inc. | Storage of application specific profiles correlating to document versions |
US7107298B2 (en) | 2001-09-28 | 2006-09-12 | Commvault Systems, Inc. | System and method for archiving objects in an information store |
JP4021823B2 (en) * | 2003-09-01 | 2007-12-12 | 株式会社日立製作所 | Remote copy system and remote copy method |
JP2005202893A (en) * | 2004-01-19 | 2005-07-28 | Hitachi Ltd | Storage device controller, storage system, recording medium recording program, information processor, and method for controlling storage system |
JP4382602B2 (en) * | 2004-04-23 | 2009-12-16 | 株式会社日立製作所 | Remote copy system |
US7366857B2 (en) * | 2004-04-30 | 2008-04-29 | Hewlett-Packard Development Company, L.P. | Internal disk array mirror architecture |
JP2005346164A (en) * | 2004-05-31 | 2005-12-15 | Toshiba Corp | Data processor and data transfer control method |
JP2006119745A (en) * | 2004-10-19 | 2006-05-11 | Hitachi Ltd | Computer system and method for controlling it |
US7937547B2 (en) * | 2005-06-24 | 2011-05-03 | Syncsort Incorporated | System and method for high performance enterprise data protection |
JP2007047892A (en) * | 2005-08-08 | 2007-02-22 | Hitachi Ltd | Computer system and status management method for computer system |
JP2007066162A (en) | 2005-09-01 | 2007-03-15 | Hitachi Ltd | Storage system and method for managing storage system |
JP2007066154A (en) | 2005-09-01 | 2007-03-15 | Hitachi Ltd | Storage system for copying data to be stored in multiple storages |
JP4955996B2 (en) * | 2005-09-20 | 2012-06-20 | 株式会社日立製作所 | Volume migration method and storage network system |
US7702851B2 (en) | 2005-09-20 | 2010-04-20 | Hitachi, Ltd. | Logical volume transfer method and storage network system |
JP4773788B2 (en) | 2005-09-29 | 2011-09-14 | 株式会社日立製作所 | Remote copy control in storage system |
JP5111754B2 (en) * | 2005-11-14 | 2013-01-09 | 株式会社日立製作所 | Storage control system |
WO2007095587A2 (en) * | 2006-02-14 | 2007-08-23 | Yottayotta, Inc. | Systems and methods for obtaining ultra-high data availability and geographic disaster tolerance |
CN100580509C (en) * | 2006-03-10 | 2010-01-13 | 瀚宇彩晶股份有限公司 | LCD and defect mending method used for same |
US8843783B2 (en) | 2006-03-31 | 2014-09-23 | Emc Corporation | Failover to backup site in connection with triangular asynchronous replication |
US7647525B2 (en) | 2006-03-31 | 2010-01-12 | Emc Corporation | Resumption of operations following failover in connection with triangular asynchronous replication |
US7430646B2 (en) | 2006-03-31 | 2008-09-30 | Emc Corporation | Planned switchover in connection with triangular asynchronous replication |
NZ548528A (en) * | 2006-07-14 | 2009-02-28 | Arc Innovations Ltd | Text encoding system and method |
JP4818843B2 (en) * | 2006-07-31 | 2011-11-16 | 株式会社日立製作所 | Storage system for remote copy |
WO2008041754A1 (en) * | 2006-10-04 | 2008-04-10 | Nikon Corporation | Electronic device |
US7734669B2 (en) * | 2006-12-22 | 2010-06-08 | Commvault Systems, Inc. | Managing copies of data |
JP5180578B2 (en) * | 2007-12-21 | 2013-04-10 | 株式会社野村総合研究所 | Business continuity system |
US8769048B2 (en) | 2008-06-18 | 2014-07-01 | Commvault Systems, Inc. | Data protection scheduling, such as providing a flexible backup window in a data protection system |
US8352954B2 (en) | 2008-06-19 | 2013-01-08 | Commvault Systems, Inc. | Data storage resource allocation by employing dynamic methods and blacklisting resource request pools |
US9128883B2 (en) | 2008-06-19 | 2015-09-08 | Commvault Systems, Inc | Data storage resource allocation by performing abbreviated resource checks based on relative chances of failure of the data storage resources to determine whether data storage requests would fail |
US8134462B1 (en) * | 2008-08-08 | 2012-03-13 | The United States Of America As Represented By The Secretary Of The Navy | Self-contained sensor package for water security and safety |
JP2010049522A (en) * | 2008-08-22 | 2010-03-04 | Hitachi Ltd | Computer system and method for managing logical volumes |
US8725688B2 (en) | 2008-09-05 | 2014-05-13 | Commvault Systems, Inc. | Image level copy or restore, such as image level restore without knowledge of data object metadata |
US20100070474A1 (en) | 2008-09-12 | 2010-03-18 | Lad Kamleshkumar K | Transferring or migrating portions of data objects, such as block-level data migration or chunk-based data migration |
TWI526823B (en) * | 2009-01-23 | 2016-03-21 | 普安科技股份有限公司 | Method and apparatus for performing volume replication using unified architecture |
US8140720B2 (en) | 2009-02-09 | 2012-03-20 | Hitachi, Ltd. | Method of setting communication path in storage system, and management apparatus therefor |
US8498997B2 (en) * | 2009-09-23 | 2013-07-30 | Hitachi, Ltd. | Server image migration |
US8849966B2 (en) * | 2009-10-13 | 2014-09-30 | Hitachi, Ltd. | Server image capacity optimization |
US8202205B2 (en) * | 2010-02-09 | 2012-06-19 | GoBe Healthy, LLC | Omni-directional exercise device |
JP5270796B2 (en) * | 2010-04-07 | 2013-08-21 | 株式会社日立製作所 | Asynchronous remote copy system and storage control method |
US8407950B2 (en) | 2011-01-21 | 2013-04-02 | First Solar, Inc. | Photovoltaic module support system |
US8346990B2 (en) * | 2011-01-31 | 2013-01-01 | Lsi Corporation | Methods and systems for tracking data activity levels |
US8849762B2 (en) | 2011-03-31 | 2014-09-30 | Commvault Systems, Inc. | Restoring computing environments, such as autorecovery of file systems at certain points in time |
US10157184B2 (en) | 2012-03-30 | 2018-12-18 | Commvault Systems, Inc. | Data previewing before recalling large data files |
US9633216B2 (en) | 2012-12-27 | 2017-04-25 | Commvault Systems, Inc. | Application of information management policies based on operation with a geographic entity |
US9459968B2 (en) | 2013-03-11 | 2016-10-04 | Commvault Systems, Inc. | Single index to query multiple backup formats |
JP6217302B2 (en) * | 2013-10-15 | 2017-10-25 | 富士通株式会社 | Storage management device, information processing system, and storage management program |
US9798596B2 (en) | 2014-02-27 | 2017-10-24 | Commvault Systems, Inc. | Automatic alert escalation for an information management system |
US9648100B2 (en) | 2014-03-05 | 2017-05-09 | Commvault Systems, Inc. | Cross-system storage management for transferring data across autonomous information management systems |
US9823978B2 (en) | 2014-04-16 | 2017-11-21 | Commvault Systems, Inc. | User-level quota management of data objects stored in information management systems |
US9740574B2 (en) | 2014-05-09 | 2017-08-22 | Commvault Systems, Inc. | Load balancing across multiple data paths |
JP2015225603A (en) * | 2014-05-29 | 2015-12-14 | 富士通株式会社 | Storage control device, storage control method, and storage control program |
US9514013B2 (en) * | 2014-06-27 | 2016-12-06 | International Business Machines Corporation | Maintaining inactive copy relationships for secondary storages of active copy relationships having a common primary storage for use in case of a failure of the common primary storage |
US11249858B2 (en) | 2014-08-06 | 2022-02-15 | Commvault Systems, Inc. | Point-in-time backups of a production application made accessible over fibre channel and/or ISCSI as data sources to a remote application by representing the backups as pseudo-disks operating apart from the production application and its host |
US9852026B2 (en) | 2014-08-06 | 2017-12-26 | Commvault Systems, Inc. | Efficient application recovery in an information management system based on a pseudo-storage-device driver |
US9444811B2 (en) | 2014-10-21 | 2016-09-13 | Commvault Systems, Inc. | Using an enhanced data agent to restore backed up data across autonomous storage management systems |
US10185503B1 (en) | 2015-06-30 | 2019-01-22 | EMC IP Holding Company | Consistency group fault tolerance |
US10185758B1 (en) * | 2015-06-30 | 2019-01-22 | EMC IP Holding Company LLC | Direct to remote replication |
US9921764B2 (en) | 2015-06-30 | 2018-03-20 | International Business Machines Corporation | Using inactive copy relationships to resynchronize data between storages |
US9727243B2 (en) | 2015-06-30 | 2017-08-08 | International Business Machines Corporation | Using inactive copy relationships to resynchronize data between storages |
US9766825B2 (en) | 2015-07-22 | 2017-09-19 | Commvault Systems, Inc. | Browse and restore for block-level backups |
US10296368B2 (en) | 2016-03-09 | 2019-05-21 | Commvault Systems, Inc. | Hypervisor-independent block-level live browse for access to backed up virtual machine (VM) data and hypervisor-free file-level recovery (block-level pseudo-mount) |
US10382544B2 (en) | 2016-04-08 | 2019-08-13 | International Business Machines Corporation | Establishing reverse paths between servers in a copy environment |
US10838821B2 (en) | 2017-02-08 | 2020-11-17 | Commvault Systems, Inc. | Migrating content and metadata from a backup system |
US10740193B2 (en) | 2017-02-27 | 2020-08-11 | Commvault Systems, Inc. | Hypervisor-independent reference copies of virtual machine payload data based on block-level pseudo-mount |
US10891069B2 (en) | 2017-03-27 | 2021-01-12 | Commvault Systems, Inc. | Creating local copies of data stored in online data repositories |
US10776329B2 (en) | 2017-03-28 | 2020-09-15 | Commvault Systems, Inc. | Migration of a database management system to cloud storage |
US11074140B2 (en) | 2017-03-29 | 2021-07-27 | Commvault Systems, Inc. | Live browsing of granular mailbox data |
US10664352B2 (en) | 2017-06-14 | 2020-05-26 | Commvault Systems, Inc. | Live browsing of backed up data residing on cloned disks |
US10795927B2 (en) | 2018-02-05 | 2020-10-06 | Commvault Systems, Inc. | On-demand metadata extraction of clinical image data |
US10789387B2 (en) | 2018-03-13 | 2020-09-29 | Commvault Systems, Inc. | Graphical representation of an information management system |
US11029875B2 (en) * | 2018-09-28 | 2021-06-08 | Dell Products L.P. | System and method for data storage in distributed system across multiple fault domains |
US10860443B2 (en) | 2018-12-10 | 2020-12-08 | Commvault Systems, Inc. | Evaluation and reporting of recovery readiness in a data storage management system |
US11308034B2 (en) | 2019-06-27 | 2022-04-19 | Commvault Systems, Inc. | Continuously run log backup with minimal configuration and resource usage from the source machine |
JP7331027B2 (en) | 2021-02-19 | 2023-08-22 | 株式会社日立製作所 | Scale-out storage system and storage control method |
CN112950380B (en) * | 2021-03-29 | 2022-10-21 | 中国建设银行股份有限公司 | Block chain-based transaction consistency processing method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE60043873D1 (en) | 1999-06-01 | 2010-04-08 | Hitachi Ltd | Method for data backup |
JP3983516B2 (en) | 2001-10-25 | 2007-09-26 | 株式会社日立製作所 | Storage system |
US6745303B2 (en) * | 2002-01-03 | 2004-06-01 | Hitachi, Ltd. | Data synchronization of multiple remote storage |
JP4021823B2 (en) * | 2003-09-01 | 2007-12-12 | 株式会社日立製作所 | Remote copy system and remote copy method |
- 2003
  - 2003-09-01 JP JP2003309194A patent/JP4021823B2/en not_active Expired - Lifetime
- 2004
  - 2004-04-14 US US10/823,618 patent/US7114044B2/en not_active Expired - Fee Related
  - 2004-04-20 EP EP04252318A patent/EP1517242B1/en not_active Expired - Lifetime
  - 2004-04-20 DE DE602004023003T patent/DE602004023003D1/en not_active Expired - Lifetime
  - 2004-04-20 DE DE602004015216T patent/DE602004015216D1/en not_active Expired - Lifetime
  - 2004-04-20 EP EP09165749A patent/EP2120149A3/en not_active Withdrawn
  - 2004-04-20 DE DE04252318T patent/DE04252318T1/en active Pending
  - 2004-04-20 EP EP06025221A patent/EP1777625B1/en not_active Expired - Fee Related
- 2005
  - 2005-08-04 US US11/196,418 patent/US7185152B2/en not_active Expired - Lifetime
- 2006
  - 2006-09-26 US US11/526,598 patent/US7287132B2/en not_active Expired - Fee Related
- 2007
  - 2007-09-18 US US11/898,945 patent/US20080034177A1/en not_active Abandoned
  - 2007-10-16 US US11/907,643 patent/US20080052479A1/en not_active Abandoned
- 2009
  - 2009-09-25 US US12/567,229 patent/US20100017574A1/en not_active Abandoned
Patent Citations (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050050267A1 (en) * | 1991-01-31 | 2005-03-03 | Hitachi, Ltd | Storage unit subsystem |
US5592618A (en) * | 1994-10-03 | 1997-01-07 | International Business Machines Corporation | Remote copy secondary data copy validation-audit function |
US5835953A (en) * | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating |
US5799323A (en) * | 1995-01-24 | 1998-08-25 | Tandem Computers, Inc. | Remote duplicate databased facility with triple contingency protection |
US6101497A (en) * | 1996-05-31 | 2000-08-08 | Emc Corporation | Method and apparatus for independent and simultaneous access to a common data set |
US5937414A (en) * | 1997-02-28 | 1999-08-10 | Oracle Corporation | Method and apparatus for providing database system replication in a mixed propagation environment |
US20020059505A1 (en) * | 1998-06-30 | 2002-05-16 | St. Pierre Edgar J. | Method and apparatus for differential backup in a computer storage system |
US20040098547A1 (en) * | 1998-12-31 | 2004-05-20 | Yuval Ofek | Apparatus and methods for transferring, backing up, and restoring data in a computer system |
US20010007102A1 (en) * | 1999-02-17 | 2001-07-05 | Mathieu Gagne | Method and apparatus for cascading data through redundant data storage units |
US6671705B1 (en) * | 1999-08-17 | 2003-12-30 | Emc Corporation | Remote mirroring system, device, and method |
US20050188254A1 (en) * | 2000-05-25 | 2005-08-25 | Hitachi, Ltd. | Storage system making possible data synchronization confirmation at time of asynchronous remote copy |
US20020103816A1 (en) * | 2001-01-31 | 2002-08-01 | Shivaji Ganesh | Recreation of archives at a disaster recovery site |
US6643750B2 (en) * | 2001-02-28 | 2003-11-04 | Hitachi, Ltd. | Storage apparatus system and method of data backup |
US20050120093A1 (en) * | 2001-05-10 | 2005-06-02 | Hitachi, Ltd. | Remote copy for a storgae controller in a heterogeneous environment |
US20030005235A1 (en) * | 2001-07-02 | 2003-01-02 | Sun Microsystems, Inc. | Computer storage systems |
US20030014523A1 (en) * | 2001-07-13 | 2003-01-16 | John Teloh | Storage network data replicator |
US20030051111A1 (en) * | 2001-08-08 | 2003-03-13 | Hitachi, Ltd. | Remote copy control method, storage sub-system with the method, and large area data storage system using them |
US7013317B2 (en) * | 2001-11-07 | 2006-03-14 | Hitachi, Ltd. | Method for backup and storage system |
US6922763B2 (en) * | 2002-03-29 | 2005-07-26 | Hitachi, Ltd. | Method and apparatus for storage system |
US20030229764A1 (en) * | 2002-06-05 | 2003-12-11 | Hitachi, Ltd. | Data storage subsystem |
US20050071393A1 (en) * | 2002-06-05 | 2005-03-31 | Hitachi, Ltd. | Data storage subsystem |
US20040139128A1 (en) * | 2002-07-15 | 2004-07-15 | Becker Gregory A. | System and method for backing up a computer system |
US20040193802A1 (en) * | 2003-03-25 | 2004-09-30 | Emc Corporation | Reading virtual ordered writes at local storage device |
US20050066122A1 (en) * | 2003-03-25 | 2005-03-24 | Vadim Longinov | Virtual ordered writes |
US6898685B2 (en) * | 2003-03-25 | 2005-05-24 | Emc Corporation | Ordering data writes from a local storage device to a remote storage device |
US20040193816A1 (en) * | 2003-03-25 | 2004-09-30 | Emc Corporation | Reading virtual ordered writes at a remote storage device |
US20040230859A1 (en) * | 2003-05-15 | 2004-11-18 | Hewlett-Packard Development Company, L.P. | Disaster recovery system with cascaded resynchronization |
US20040260736A1 (en) * | 2003-06-18 | 2004-12-23 | Kern Robert Frederic | Method, system, and program for mirroring data at storage locations |
US7165155B1 (en) * | 2003-08-29 | 2007-01-16 | Emc Corporation | System and method for tracking changes associated with incremental copying |
US20050060507A1 (en) * | 2003-09-17 | 2005-03-17 | Hitachi, Ltd. | Remote storage disk control device with function to transfer commands to remote storage devices |
US20050120056A1 (en) * | 2003-12-01 | 2005-06-02 | Emc Corporation | Virtual ordered writes for multiple storage devices |
US20050132248A1 (en) * | 2003-12-01 | 2005-06-16 | Emc Corporation | Data recovery for virtual ordered writes for multiple storage devices |
US20050198454A1 (en) * | 2004-03-08 | 2005-09-08 | Emc Corporation | Switching between virtual ordered writes mode and synchronous or semi-synchronous RDF transfer mode |
Also Published As
Publication number | Publication date |
---|---|
US20070022266A1 (en) | 2007-01-25 |
EP1777625A3 (en) | 2007-09-05 |
EP1777625B1 (en) | 2009-09-02 |
DE04252318T1 (en) | 2007-11-29 |
EP1517242B1 (en) | 2008-07-23 |
DE602004015216D1 (en) | 2008-09-04 |
US20100017574A1 (en) | 2010-01-21 |
JP2005078453A (en) | 2005-03-24 |
JP4021823B2 (en) | 2007-12-12 |
DE602004023003D1 (en) | 2009-10-15 |
US7185152B2 (en) | 2007-02-27 |
EP1777625A2 (en) | 2007-04-25 |
US20050050288A1 (en) | 2005-03-03 |
US20060004894A1 (en) | 2006-01-05 |
EP1517242A3 (en) | 2006-02-08 |
EP2120149A3 (en) | 2010-01-13 |
EP1517242A2 (en) | 2005-03-23 |
EP2120149A2 (en) | 2009-11-18 |
US7114044B2 (en) | 2006-09-26 |
US7287132B2 (en) | 2007-10-23 |
US20080052479A1 (en) | 2008-02-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7185152B2 (en) | Storage system, method of controlling storage system, and storage device | |
US7055057B2 (en) | Coherency of non-committed replicate data after failover/failback | |
US7185228B2 (en) | Method for controlling information processing system and information processing system | |
US8453011B2 (en) | Storage system and data restoration method thereof | |
US7139885B2 (en) | Method and apparatus for managing storage based replication | |
US8332361B2 (en) | Data processing system and storage subsystem provided in data processing system | |
JP4387116B2 (en) | Storage system control method and storage system | |
JP4993913B2 (en) | Storage control device and data management method thereof | |
EP1758018A2 (en) | Storage system and storage system management method | |
US20050193248A1 (en) | Computer system for recovering data based on priority of the data | |
JP4902289B2 (en) | Backup system and backup method | |
US20070050574A1 (en) | Storage system and storage system management method | |
JP5286212B2 (en) | Remote copy control method and system in storage cluster environment | |
JP2005309793A (en) | Data processing system | |
JP2002259183A (en) | Storage device system and backup method of data | |
US8583884B2 (en) | Computing system and backup method | |
US20080294858A1 (en) | Storage system and data management method | |
JP4177419B2 (en) | Storage system control method, storage system, and storage apparatus | |
JP2006318505A (en) | Remote copy system and remote copy method | |
JP2008262600A (en) | Storage system control method, storage system, and storage device | |
US11379328B2 (en) | Transitioning from a donor four site data replication system to a target four site data replication system | |
JP2008065856A (en) | Control method of storage system, storage system and storage device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |