US20050268057A1 - Storage system - Google Patents
- Publication number
- US20050268057A1 (application US11/187,832)
- Authority
- US
- United States
- Prior art keywords
- storage
- group
- storage device
- storage system
- LUs
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0605—Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2069—Management of state, configuration or failover
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Definitions
- the present invention relates to a storage system, and more specifically, to a storage system where the volume into which data is to be replicated can be selected.
- a storage device may be equipped with a Fibre Channel interface, the standardization of which is being promoted by the ANSI T11 committee (such a device is hereinafter called an “FC storage device”), or with an AT Attachment (ATA) interface, the standardization of which is being promoted by the ANSI T13 committee (hereinafter called an “ATA storage device”).
- ATA storage devices are relatively inexpensive and are primarily used in desktop personal computers for home use.
- FC storage devices are primarily used in corporate server systems since they have higher data input/output (I/O) performance than ATA storage devices and are reliable and robust enough to be employed in around-the-clock operation.
- a storage system can also be constructed of different types of storage device so that it may use them for different purposes depending on the performance, costs, and other factors, as disclosed in the Laid-open Patent Specification No. Heisei 10 (1998)-301720, which provides a means of enhancing the reliability of data stored in a storage system.
- an arrangement can be made such that in a storage system having different areas of cache for different types of data, the cache hit ratio is improved by optimizing the allocation of such cache areas.
- the present invention proposes three solutions: (a) the destination volume is selected by considering the type of the storage device on which the source volume resides and how the source volume is allocated to different areas of the cache; (b) the destination volume is selected by considering how the source volume is allocated to different areas of the cache; and (c) the destination volume is selected by using the criteria table that lists the selection criteria of destination volumes and how to view destination volumes according to the selection criteria.
- FIG. 1 illustrates the configuration of a storage system according to a preferred embodiment of the present invention.
- FIG. 2 shows an example of the storage device management table.
- FIG. 3 shows an example of the LU management table.
- FIG. 4 shows an example of the pair management table.
- FIG. 5 shows an example of the cache group information table.
- FIG. 6 shows an example of the configuration information management table.
- FIG. 7 illustrates the configuration of a storage system according to another preferred embodiment of the present invention.
- FIG. 8 illustrates the process flow of automatically selecting a sub-LU.
- FIG. 9 illustrates the process flow of selecting a sub-LU from a different characteristics group.
- FIG. 10 shows an example of the priority information table specifying the priority among a set of different characteristics.
- FIG. 11 shows an example of the criteria table specifying how the destination volume should be selected using the sub-LU selection criteria given by the user.
- FIG. 12 illustrates the process flow of creating an LU and registering it as a sub-LU.
- FIG. 1 illustrates the configuration of a computer system containing a storage system that allows the user to select a destination volume by considering the characteristics of the source volume.
- a dotted-line ellipse denotes a program or a table of information.
- a first storage system 70 A is connected to a host 10 , a management server 100 , and a second storage system 70 B of a similar configuration.
- An FC, SCSI, or other similar interface is used for communication between the first storage system 70 A and the host 10 .
- the first storage system 70 A is connected to the management server 100 through a management network 200 and to the second storage system 70 B through a communication path 300 .
- the communication path 300 can be implemented using FC or Enterprise Systems Connection (ESCON), the present invention does not limit the choice to any of these.
- the first storage system 70 A comprises a storage control unit 20 , a group of FC storage devices 28 , and a group of ATA storage devices 31 .
- the storage control unit 20 comprises a CPU 21 , a memory 22 , a cache 23 which temporarily holds part of data received from or data to be sent to the host 10 , a host FC interface 24 which carries out data transfer between the host 10 and the storage control unit 20 , a storage system FC interface 14 which carries out data transfer between the communication path 300 and the storage control unit 20 , an FC device interface 25 which carries out data transfer between the group of FC storage devices 28 and the memory 22 or the cache 23 , an ATA device interface 26 which carries out data transfer between the group of ATA storage devices 31 and the memory 22 or the cache 23 , and a management interface 27 which sends and receives control information to and from the management server 100 , all being interconnected by an internal bus.
- the FC storage device group 28 comprises one or more FC storage devices 29 , and the ATA storage device group 31 comprises one or more ATA storage devices 32 .
- An example of an ATA storage device is a Serial ATA (SATA) storage device.
- Although FC and ATA are chosen in this description of the preferred embodiment, the use of other types of storage device is not precluded.
- the FC device interface 25 is connected to the FC storage device group 28 through a Fibre Channel connection. Any protocol, such as FC arbitrated loop, point-to-point, or fabric, can be employed for this interface.
- the ATA device interface 26 is connected to the ATA storage device group 31 through an ATA bus.
- the memory 22 contains a set of programs that are executed by the CPU 21 : a Redundant Array of Inexpensive Disks (RAID) control program 41 for controlling the operation of the storage system 70 A and a management agent 80 for controlling the configuration of the storage system 70 A.
- the memory 22 also stores various management information in the form of various tables such as a storage device management table 44 which holds information on the FC storage device group 28 and the ATA storage device group 31 , an LU management table 45 which holds information on logical storage areas (hereinafter abbreviated to “LUs”) 30 (FC LUs) constructed on the FC storage device group 28 and logical storage areas 33 (ATA LUs) constructed on the ATA storage device group 31 , a pair management table 46 which holds information on the source and destination of data replication, a cache group information table 47 used for controlling cache groups (explained later), and a configuration information table 48 used when the second storage system 70 B makes its own LUs available to the storage system 70 A as the latter's LUs.
- cache group refers to the LU group for which a part of the cache 23 is allocated.
- the RAID control program 41 comprises three components (not shown in FIG. 1 ): a component that issues commands to the FC storage device group 28 and the ATA storage device group 31 , a component that manages the FC storage device group 28 and the ATA storage device group 31 , and a component that manages the LUs allocated to these storage device groups.
- the RAID control program 41 contains, as subprograms, a replication creation program 42 and a sub-LU selection assistance program 43 .
- There are variations in report timing, such as synchronous (a report is sent to the upper equipment upon completion of the data replication) and asynchronous (a report is sent to the upper equipment without waiting for the completion of the data replication), but these variations are not distinguished here, since the present invention applies to them equally.
- the management agent 80 is a program that receives data sent from the management server 100 , registers and updates the information on the storage devices (storage device information) according to the input data, and sends storage device information to the management server 100 .
- the management server 100 comprises a CPU 110 , a main storage 120 , an input unit 130 (such as a keyboard), an output unit 140 (such as a display device), a management interface 150 for communication with the management network 200 , and a storage unit 160 , all of which are interconnected by an internal bus.
- the storage unit 160 stores a storage manager 131 and the sub-LU selection assistance program 43 , both of which run on the CPU 110 . By executing these programs, the CPU 110 collects, at regular intervals, information stored in tables 44 through 48 in the storage system 70 ( 70 A and 70 B) and produces a replication of them.
- the host 10 which may be a personal computer, a workstation, or a general-purpose computer such as a mainframe, is equipped with a Host Bus Adapter (HBA) (not shown in the diagram), which is an FC interface for connection with the outside world.
- The HBA is also given a World Wide Name (WWN).
- FIG. 2 shows an example of the storage device management table 44 which holds all the key information on each storage device. It is organized into several columns (fields) for each storage device (row): a storage device number column 241 , a storage device type column 242 , an array configuration column 243 , a use column 244 , and an operating status column 245 .
- the storage device number column 241 holds a unique identification number assigned to each storage device 29 or 32 .
- the storage device type column 242 indicates the type of the interface employed such as FC and ATA.
- the array configuration column 243 contains two pieces of information for each entry: the sequence number for the RAID group (the group of storage devices put together for redundancy purposes) which the storage device belongs to and the RAID level of the group. For example, “(1) RAID5” means that the storage device belongs to the first RAID group, which has a level 5 configuration.
- the storage system 70 can have more than one RAID group, e.g., a RAID1 RAID group and a RAID5 RAID group.
- a RAID group may be composed of all or some of the storage devices contained in the storage system 70 .
- the use column 244 indicates the use of the RAID group the storage device belongs to, e.g., DB (database) or FS (file system).
- the operating status column 245 indicates whether the storage device is in operating state (ON) or in stopped state (OFF).
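The five columns of FIG. 2 can be sketched as a small in-memory model. This is a hypothetical illustration only (the class and field names are assumptions, not from the patent); it shows how a row combines the device number, interface type, RAID group and level, use, and operating status described above.

```python
from dataclasses import dataclass

# Hypothetical model of one row of the storage device management table (FIG. 2).
@dataclass
class StorageDeviceRow:
    device_number: int   # column 241: unique ID of the storage device
    device_type: str     # column 242: interface type, e.g. "FC" or "ATA"
    raid_group: int      # column 243 (first part): RAID group sequence number
    raid_level: str      # column 243 (second part): e.g. "RAID5"
    use: str             # column 244: e.g. "DB" (database) or "FS" (file system)
    operating: bool      # column 245: True = ON, False = OFF

table = [
    StorageDeviceRow(0, "FC", 1, "RAID5", "DB", True),
    StorageDeviceRow(1, "FC", 1, "RAID5", "DB", True),
    StorageDeviceRow(2, "ATA", 2, "RAID1", "FS", True),
]

# Devices in the first RAID group, as the "(1) RAID5" notation would denote:
group1 = [row.device_number for row in table if row.raid_group == 1]
```

With the sample rows above, `group1` collects the two FC devices that form the level-5 RAID group.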
- FIG. 3 shows an example of the LU management table 45 which holds all the information needed to manage the LUs under the control of the storage control unit 20 . It is organized into several columns for each LU (row): an LU number column 251 , a host allocated column 252 , an LUN column 253 , a capacity column 254 , an LU type column 255 , and a paired LU number column 256 .
- the LU number column 251 holds the identification number given to the LU.
- the host allocated column 252 indicates whether the LU is allocated to the host 10 : “yes” if it is indeed allocated; otherwise “no”.
- the LUN column 253 indicates the SCSI logical unit number required by the host 10 to access the LU (provided that the LU is allocated to the host 10 ).
- the capacity column 254 indicates the capacity allocated to the LU.
- the LU type column 255 indicates the type of the LU, for example, FC or ATA.
- the paired LU number column 256 holds the identification number of the paired LU: if the LU is a main LU (an LU containing the original data) then its sub-LU, i.e., the LU containing the replicated data; if the LU is a sub-LU, then its main LU.
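The paired-LU relationship of column 256 is symmetric, which a short sketch can make concrete. The table contents and function name below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical rendering of the LU management table (FIG. 3); the sample
# values are illustrative only.
lu_table = [
    {"lu_number": 0, "host_allocated": "yes", "lun": 0,
     "capacity_gb": 100, "lu_type": "FC",  "paired_lu": 3},
    {"lu_number": 3, "host_allocated": "no",  "lun": None,
     "capacity_gb": 100, "lu_type": "ATA", "paired_lu": 0},
]

def paired_lu(lu_number):
    """Return the partner LU per column 256: the sub-LU for a main LU,
    or the main LU for a sub-LU; None if the LU is not in the table."""
    for row in lu_table:
        if row["lu_number"] == lu_number:
            return row["paired_lu"]
    return None
```

Looking up LU 0 yields its sub-LU 3, and looking up LU 3 yields its main LU 0.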
- FIG. 4 shows an example of the pair management table 46 , which holds information on the pairing between different LUs within the storage system 70 , i.e., which LU holds a copy of which LU (a couple of LUs having this relationship is called an “LU pair”). It is organized into four columns: an LU pair number column 261 , a main LU number column 262 , a sub-LU number column 263 , and a pairing status column 264 .
- the LU pair number column 261 holds the unique identification number of the LU pair.
- the main LU number column 262 indicates the LU number assigned to the main LU.
- the sub-LU number column 263 indicates the LU number assigned to the sub-LU.
- the pairing status column 264 indicates the status of the LU pair at any given point in time, such as “paired,” in which synchronism is maintained between the two LUs in the LU pair and their contents match, or “split,” in which synchronism is not maintained between the two LUs in the LU pair.
- the storage system 70 A may from time to time change the status of an LU pair from “paired” to “split.” Once the status is changed in this direction, the sub-LU holds the contents of the main LU at the time of the status change (this is referred to as “taking a snapshot”). The host 10 can later save the contents of the sub-LU into another storage device or medium (such as a magnetic tape), making it a backup of the data stored in the LU pair at the time of snapshot. Alternatively, the sub-LU itself can be used as the backup.
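The "paired" and "split" semantics above can be sketched as follows. This is a minimal illustration of the snapshot behavior, not the patent's implementation; the class and its methods are assumptions.

```python
# Minimal sketch of LU pair status: while "paired", writes to the main LU
# are mirrored to the sub-LU; a split freezes the sub-LU's contents as of
# that moment (taking a snapshot).
class LUPair:
    def __init__(self, main_data):
        self.main = dict(main_data)
        self.sub = dict(main_data)   # synchronized copy of the main LU
        self.status = "paired"

    def write_main(self, key, value):
        self.main[key] = value
        if self.status == "paired":  # synchronism maintained only while paired
            self.sub[key] = value

    def split(self):
        """Change status to 'split'; the sub-LU now holds the snapshot."""
        self.status = "split"

pair = LUPair({"block0": "a"})
pair.write_main("block0", "b")   # mirrored to the sub-LU while paired
pair.split()                     # snapshot taken here
pair.write_main("block0", "c")   # sub-LU is no longer updated
```

After the split, the sub-LU retains "b" (the snapshot) while the main LU continues to "c", so the sub-LU can be saved to tape or used directly as a backup.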
- When the storage system 70 is powered on, the CPU 21 , by running the RAID control program 41 , identifies all the storage devices that are connected to either the FC device interface 25 or the ATA device interface 26 , and registers them in the storage device management table 44 by filling in the storage device number column 241 and the storage device type column 242 .
- the user may provide the device type information via the input unit 130 in the management server 100 , in which case the CPU 21 enters the provided information into the corresponding entries in the storage device type column 242 .
- the CPU 21 then fills in the array configuration column 243 and the use column 244 according to the commands given by the user.
- When the user enters the storage device number 241 and issues a command for obtaining the device type information 242 via the input unit 130 , the CPU 21 , by running the RAID control program 41 , obtains the required information from the storage device management table 44 and sends it to the management server 100 through the management interface 27 .
- the management server 100 displays the received information on the output unit 140 . This process may be skipped if the user is to specify the storage device type.
- the user selects a storage device based on the information displayed on the output unit 140 and enters, through the input unit 130 , a command for constructing a RAID group using the selected storage device.
- the user also enters its intended use.
- the CPU 21 receives through the management interface 27 the information sent from the management server 100 about the RAID group and its use, and enters it into the corresponding entries in the array configuration column 243 and the use column 244 .
- FIG. 5 shows an example of the cache group information table 47 which holds information to be used by the storage system 70 A in managing cache groups. It is organized into three rows: a cache group ID row 461 , which lists cache groups, an allocated capacity row 462 , which indicates the capacity allocated to each cache group, and an LU ID row 463 , which lists the LUs belonging to each cache group.
- This table makes it possible to create or delete a cache group, add or delete LUs to or from a cache group, and change the capacity allocation dynamically, i.e., without halting other processing.
- FIG. 6 shows an example of the configuration information table 48 which holds information on the LUs managed by the storage system 70 A. It is organized into five columns: a port ID column 481 , which indicates the identification number of the external interface port the LU is connected to, a WWN column 482 , which corresponds to the port ID, an LUN column 483 , which holds the logical unit number (LUN) of the LU, a capacity column 484 , which indicates the capacity available on the LU, and a mapped LUN column 485 , which holds the identification of the LU in the storage system 70 B to which the LU in this entry is mapped.
- the LUs appearing in this column belong to the storage system 70 B; all the other LUs belong to the storage system 70 A.
- the storage system 70 A makes the LUs in the storage system 70 B accessible to the host 10 as if they belonged to itself. In other words, the host 10 can send to the storage system 70 A data input/output commands directed to LUs in the storage system 70 B.
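The mapped LUN column (485) is what lets the first storage system decide whether a command is served locally or forwarded. The sketch below is a hypothetical illustration of that routing decision; the table contents and function name are assumptions.

```python
# Hypothetical sketch of routing via the configuration information table
# (FIG. 6): an entry with a mapped LUN belongs to the second storage
# system 70B; all other LUs are local to 70A.
config_table = [
    {"port_id": 0, "lun": 0, "capacity_gb": 50, "mapped_lun": None},
    {"port_id": 0, "lun": 1, "capacity_gb": 80, "mapped_lun": 4},  # in 70B
]

def route(lun):
    """Return ("local", lun) for an LU in 70A, or ("external", mapped_lun)
    for an LU that 70A presents on behalf of 70B."""
    for entry in config_table:
        if entry["lun"] == lun:
            if entry["mapped_lun"] is None:
                return ("local", lun)
            return ("external", entry["mapped_lun"])
    raise KeyError(lun)
```

A host command addressed to LUN 1 would thus be forwarded to LU 4 in the second storage system, transparently to the host.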
- FIG. 8 shows the process flow of automatically selecting the sub-LU that best matches the main LU (with no conditions specified for selection).
- the CPU 21 determines whether, within the group of LUs allocated to the host 10 that creates replications, there are any free LUs that may be considered as a sub-LU for the given LU (step 801 ). If there is none, the CPU 21 sends a message to that effect to the output unit 140 (step 802 ) and terminates the processing. If any free LUs are found, the CPU 21 obtains the characteristics of the main LU from the storage device management table 44 and the LU management table 45 (step 803 ). It then checks to see if there are any free LUs in the group of LUs having the same characteristics (hereinafter called the “characteristics group”) as the main LU (step 804 ).
- If any are found, the CPU 21 selects one at random and allocates it as a sub-LU (step 805 ).
- An example of the characteristic is the storage device type (FC or ATA), so that the search is made among the group of storage devices of the same type as the main LU's.
- Another example is the cache group, so that the search is made among the group of LUs belonging to the same cache group as the main LU's.
- a third example is the storage control unit, so that the search is made among the group of LUs belonging to the storage control unit 20 that controls the main LU.
- Still another example is a combination of multiple characteristics, so that the search is made among the group of LUs having the same characteristics in all respects.
- Once a sub-LU is selected, the CPU 21 registers it in the LU management table 45 and the pair management table 46 (step 807 ).
- If no free LUs meeting the requirements are found in step 804 , the CPU 21 proceeds to step 806 to make a selection among other characteristics groups.
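Steps 801 through 805 of FIG. 8 can be sketched as below, under simplifying assumptions: each free LU is represented as a dictionary of characteristics, and the function name and data layout are illustrative, not from the patent.

```python
import random

# Sketch of the FIG. 8 flow: pick a sub-LU at random from the free LUs
# whose characteristics all match the main LU's (the same characteristics
# group). Returning None corresponds to falling through to step 806.
def select_sub_lu(main_lu, free_lus, characteristics=("device_type",)):
    if not free_lus:                      # steps 801-802: no candidates at all
        return None
    same_group = [lu for lu in free_lus   # steps 803-804: same characteristics
                  if all(lu[c] == main_lu[c] for c in characteristics)]
    if same_group:
        return random.choice(same_group)  # step 805: select one at random
    return None                           # would proceed to step 806

main = {"device_type": "FC"}
free = [{"lu": 5, "device_type": "ATA"}, {"lu": 7, "device_type": "FC"}]
chosen = select_sub_lu(main, free)
```

With one FC candidate in the free list, the random choice is deterministic and LU 7 is selected.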
- FIG. 9 illustrates the detailed flow of step 806 in FIG. 8 .
- the CPU 21 first checks whether there is only one characteristic specified for the main LU (step 901 ). If there is indeed only one, then it selects a sub-LU among the characteristics group(s) having characteristic values different from the main LU's. For example, if the main LU is an FC storage device, then the selection is made among the group of storage devices other than FC devices (such as ATA). If there is more than one characteristics group that has characteristics different from the main LU's, then the selection is made among the group that has a free LU and has a light load (step 902 ). A characteristics group is considered to have a light load if few replications have been made within it.
- the CPU 21 then makes a list of all the free LUs in the selected characteristics group (step 904 ) and selects one as the sub-LU at random among this list (step 905 ). It then registers it in the LU management table 45 and the pair management table 46 (step 906 ). If step 901 reveals that more than one characteristic is specified for the main LU, the storage system 70 selects at random one characteristic which is specified for the main LU and for which free LUs are found (step 903 ). For example, if the storage device type and the cache division unit are specified for the main LU and no free LUs have been found that meet both characteristics, then the search is made among the group of the same storage device type but with a different cache division unit. If there is more than one group having a different cache division unit, then the search is made among the group with the lightest load as in step 902 .
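The light-load choice of step 902 can be illustrated with a short sketch. The group names, field names, and the load measure (counting replications hosted in the group, as the text describes) are assumptions for illustration.

```python
# Sketch of step 902: among the characteristics groups that differ from
# the main LU's and still contain free LUs, choose the one with the
# lightest load, measured here by how few replications it already hosts.
def pick_light_group(groups):
    """groups: list of dicts with 'free_lus' (list) and 'replications' (int)."""
    candidates = [g for g in groups if g["free_lus"]]
    if not candidates:
        return None
    return min(candidates, key=lambda g: g["replications"])

groups = [
    {"name": "group-A", "free_lus": [9, 11], "replications": 4},
    {"name": "group-B", "free_lus": [2], "replications": 1},
]
best = pick_light_group(groups)
```

Here group-B wins despite having fewer free LUs, because it hosts fewer replications and is therefore the lighter-loaded group.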
- FIG. 10 shows an example of the priority information table 1300 listing the priority among a set of different characteristics as specified by the user.
- This table is stored in the memory 22 .
- the CPU 21 selects a characteristics group considering the priority specified in this table.
- the storage system 70 determines whether a sub-LU should be selected among a group which is of the same storage device type but has a different cache division unit or a group which is of a different storage device type but has the same cache division unit.
- the user may directly specify the selection criteria. For example, if the user requests that the speed of replication be given priority, then a free LU will be selected from the group having the same cache division unit as the main LU's. If the user wants to prevent the replication of the sub-LU from slowing down the I/O processing by the main LU, he/she can have a free LU selected from a group having a cache division unit different from the main LU's.
- the storage system 70 selects a sub-LU according to the priority specified in this table.
- the hard disk device type 1301 is given the highest priority.
- the search is made among a group of storage devices of the same type (FC or ATA) as the main LU's.
- the characteristic that is next in priority is picked as the selection criterion.
- a selection is to be made among a group of LUs having the same cache division unit as the main LU's. If there is none, then LUs in other groups are considered.
- the characteristic that is one level below in priority is picked as the selection criterion. In this example, that is the first storage system, which is specified in the storage system number entry 1303 .
- the second storage system will be searched.
- the characteristics and priority mentioned here are just arbitrary examples, and a different set of characteristics and priority may be employed.
- a variation of the present invention would be to allow the user to specify the characteristics and selection criteria.
- Such a scheme can also be used in step 903 in FIG. 9 as an alternative selection method in case there is more than one characteristic involved.
- FIG. 11 shows an example of the criteria table 1400 stored in the memory 22 that specifies how the destination volume should be selected using the sub-LU selection criteria given by the user. More specifically, it lists the selection criteria 1401 given by the user and the selection algorithm 1402 for each such criterion. For example, if the user specifies “reliability,” then the CPU 21 , by referencing the selection algorithm column 1402 , determines that the selection should be made among FC storage devices. If there are no free LUs among FC storage devices, then storage devices of other types should be considered. If the user specifies “Backup” and “Number of copies: 7,” then the selection should be made among ATA storage devices, according to the selection algorithm column 1402 .
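The criteria table's mapping from a user criterion to a selection algorithm can be sketched as below. The dictionary contents mirror the two examples just given ("reliability" prefers FC with fallback; "backup" uses ATA); the structure and names are illustrative assumptions, not the patent's format.

```python
# Hypothetical rendering of the criteria table (FIG. 11): each user
# criterion maps to an ordered list of storage device types to try.
criteria_table = {
    "reliability": ["FC", "ATA"],   # prefer FC; fall back to other types
    "backup":      ["ATA"],         # backups go to inexpensive ATA devices
}

def select_by_criterion(criterion, free_lus_by_type):
    """Try each device type listed for the criterion, in order, and
    return the first free LU found, or None."""
    for dev_type in criteria_table[criterion]:
        if free_lus_by_type.get(dev_type):
            return free_lus_by_type[dev_type][0]
    return None

free = {"FC": [], "ATA": [12, 14]}
lu = select_by_criterion("reliability", free)  # no free FC LUs, so ATA is used
```

With no free FC LUs available, the "reliability" criterion falls back to an ATA LU, as the selection algorithm column prescribes.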
- Upon selecting a sub-LU, the CPU 21 registers it in the pair management table 46 by making a new entry and entering the parameters specified by the user as well as the identification of the sub-LU.
- the user can, by simply specifying a main LU, have the storage system select an optimum sub-LU and create a pair. Whereas in the example described here only one sub-LU is selected, it is also possible to select multiple sub-LUs, show them to the user, and have the user make the final selection.
- sub-LUs are selected among the LUs that already exist in the system.
- a sub-LU can be created and then allocated, as shown in FIG. 12 .
- the CPU 21 determines whether, within the group of LUs allocated to the host 10 that creates replications, there are any free LUs that may be considered as a sub-LU for the specified LU (step 1201 ). If there are any, it carries out the same process as the one shown in FIG. 8 (more specifically, from step 803 on) (step 1203 ). If there is none, it obtains the characteristics of the main LU and creates an LU of the same device type as the main LU's. For example, if the main LU is constructed on a group of FC storage devices, a new LU is created on a group of FC storage devices with a parity group, and this new LU is allocated to the host (step 1202 ). This newly created LU is then selected as a sub-LU (step 1204 ), and registered in the LU management table 45 and the pair management table 46 (step 1205 ), in the same manner as in step 807 in FIG. 8 .
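The create-then-allocate path of FIG. 12 reduces to a small decision, sketched below. The function names and the shape of the created LU are illustrative assumptions; the real step 1203 would run the full FIG. 8 selection rather than take the first free LU.

```python
# Sketch of the FIG. 12 flow: if a free LU exists, select normally
# (step 1203); otherwise create an LU of the same device type as the
# main LU, allocate it to the host, and use it as the sub-LU
# (steps 1202 and 1204).
def select_or_create_sub_lu(main_lu, free_lus, create_lu):
    if free_lus:                    # step 1201: free LUs exist
        return free_lus[0]          # step 1203: stand-in for the FIG. 8 flow
    new_lu = create_lu(main_lu["device_type"])  # step 1202: create and allocate
    return new_lu                   # step 1204: the new LU becomes the sub-LU

def create_lu(device_type):
    """Illustrative stand-in for creating an LU on a parity group of the
    given device type and allocating it to the host."""
    return {"lu": 99, "device_type": device_type, "new": True}

sub = select_or_create_sub_lu({"device_type": "FC"}, [], create_lu)
```

With no free LUs, a new FC LU is created to match the FC main LU and selected as the sub-LU, after which it would be registered in the LU management and pair management tables (step 1205).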
- FIG. 7 illustrates an example of the storage system 70 A provided with one or more protocol converting adapters 600 .
- the first storage system 70 A is connected to the second storage system 70 B through a network 61 .
- the protocol converting adapter 600 is a unit for channel connection that is independent of the storage control unit 20 and handles protocols conforming to local area network (LAN), switched line, leased line, ESCON, and other standards.
- One or more protocol converting adapters 600 are connected to one or more storage control units 20 through a network 63 .
- Upon receiving an input/output command from the host 10 , the protocol converting adapter 600 deciphers it, performs a protocol conversion, determines whether the LU holding the requested data belongs to the storage control unit 20 or is located in the second storage system 70 B, and forwards the command to the appropriate destination. It determines where the desired LU is located by referencing the configuration information table 48 stored in the memory in the processor 900 , which is also connected to the network 63 . Upon receiving the command, the storage control unit 20 calls the sub-LU selection assistance program 43 .
- the management server 100 recognizes the first storage system 70 A through the network 62 .
- the management server 100 may be directly connected to the first storage system 70 A using a dedicated path.
- the storage media used in the storage devices in the foregoing descriptions may take a variety of forms, such as magnetic media and optical media.
- the programs mentioned in the foregoing descriptions may be loaded into the system from a storage medium such as a CD-ROM or may be downloaded from another piece of equipment through a network.
- the present invention thus realizes a storage system in which, in replicating the contents of a storage volume, a destination volume or a storage device or a group of storage devices on which it is to be created can be selected by considering the characteristics of the source volume and/or the storage devices on which the source volume resides. It also provides a storage system with a replication assistance capability that allows the user to select a destination volume without being concerned with the volume or device characteristics, thus reducing the user's burden.
Abstract
In a storage system comprising multiple storage devices with different interfaces, the characteristics, including the type of interface, of the storage devices on which the source volume is constructed are checked, a volume having the characteristics that match them is selected as the destination volume, and the contents of the source volume are automatically replicated into the destination volume. Candidates for replication (destination) volumes are automatically selected and presented to the user or the administrator, and replication volumes are automatically allocated, without requiring the user or the system administrator to be concerned with the characteristics of the storage devices.
Description
- The present invention relates to a storage system, and more specifically, to a storage system where the volume into which data is to be replicated can be selected.
- In recent years, demand has been increasing for shortening the time needed to replicate data stored in one storage device of a corporate storage system into another storage device for backup. This is primarily because less and less time is available for backup as the main corporate operations run for longer hours, whereas more and more time is required to back up an ever-increasing amount of data. To cope with such situations, an increasing number of companies have started using an arrangement in which the data to be backed up is first replicated into a separate storage area or storage device. In such an arrangement, the backup is taken from the second storage area or storage device while the main job stream continues using the first (original) storage area or storage device, so that the backup operation does not interfere with the main job stream.
- Various interfaces are employed to connect storage devices to different pieces of computer equipment. Thus a storage device may be equipped with a Fibre Channel interface, whose standardization is being promoted by the ANSI T11 committee (such devices are hereinafter called "FC storage devices"), or with an AT Attachment (ATA) interface, whose standardization is being promoted by the ANSI T13 committee (hereinafter "ATA storage devices").
- ATA storage devices are relatively inexpensive and are primarily used in desktop personal computers for home use. In contrast, FC storage devices are primarily used in corporate server systems since they have higher data input/output (I/O) performance than ATA storage devices and are reliable and robust enough to be employed in around-the-clock operation.
- A storage system can also be constructed of different types of storage device so that it may use them for different purposes depending on performance, cost, and other factors, as disclosed in Japanese Laid-open Patent Specification No. Heisei 10 (1998)-301720, which provides a means of enhancing the reliability of data stored in a storage system.
- Furthermore, as disclosed in U.S. Pat. No. 5,434,992, an arrangement can be made such that, in a storage system having different cache areas for different types of data, the cache hit ratio is improved by optimizing the allocation of those cache areas.
- In selecting a volume as the destination of replication in a storage system with storage devices having different characteristics or with a dividable cache, however, none of these inventions consider how well these characteristics of the destination candidates match the characteristics of the source volume. Also, these inventions are not specifically designed to relieve the user of the burden of selecting one out of a list of possible destination volumes, which tends to grow as the capacity of the storage system grows.
- It is an object of the present invention to provide a storage system that, in selecting a destination volume for data replication, takes the characteristics of the source volume into consideration.
- It is another object of the present invention to provide a storage system equipped with a replication support function that allows the user to select a destination volume without being concerned with the characteristics of the source volume.
- The present invention proposes three solutions: (a) the destination volume is selected by considering the type of the storage device on which the source volume resides and how the source volume is allocated to different areas of the cache; (b) the destination volume is selected by considering how the source volume is allocated to different areas of the cache; and (c) the destination volume is selected by using a criteria table that lists the selection criteria for destination volumes and how destination volumes are to be selected according to those criteria.
- FIG. 1 illustrates the configuration of a storage system according to a preferred embodiment of the present invention.
- FIG. 2 shows an example of the storage device management table.
- FIG. 3 shows an example of the LU management table.
- FIG. 4 shows an example of the pair management table.
- FIG. 5 shows an example of the cache group information table.
- FIG. 6 shows an example of the configuration information management table.
- FIG. 7 illustrates the configuration of a storage system according to another preferred embodiment of the present invention.
- FIG. 8 illustrates the process flow of automatically selecting a sub-LU.
- FIG. 9 illustrates the process flow of selecting a sub-LU from a different characteristics group.
- FIG. 10 shows an example of the priority information table specifying the priority among a set of different characteristics.
- FIG. 11 shows an example of the criteria table specifying how the destination volume should be selected using the sub-LU selection criteria given by the user.
- FIG. 12 illustrates the process flow of creating an LU and registering it as a sub-LU.
- FIG. 1 illustrates the configuration of a computer system containing a storage system that allows the user to select a destination volume by considering the characteristics of the source volume. In FIG. 1, a dotted-line ellipse denotes a program or a table of information.
- A first storage system 70A is connected to a host 10, a management server 100, and a second storage system 70B of a similar configuration. An FC, SCSI, or other similar interface is used for communication between the first storage system 70A and the host 10. The first storage system 70A is connected to the management server 100 through a management network 200 and to the second storage system 70B through a communication path 300. Whereas the communication path 300 can be implemented using FC or Enterprise Systems Connection (ESCON), the present invention does not limit the choice to any of these.
- The first storage system 70A comprises a storage control unit 20, a group of FC storage devices 28, and a group of ATA storage devices 31.
- The storage control unit 20 comprises a CPU 21, a memory 22, a cache 23 which temporarily holds part of the data received from or to be sent to the host 10, a host FC interface 24 which carries out data transfer between the host 10 and the storage control unit 20, a storage system FC interface 14 which carries out data transfer between the communication path 300 and the storage control unit 20, an FC device interface 25 which carries out data transfer between the group of FC storage devices 28 and the memory 22 or the cache 23, an ATA device interface 26 which carries out data transfer between the group of ATA storage devices 31 and the memory 22 or the cache 23, and a management interface 27 which sends and receives control information to and from the management server 100, all interconnected by an internal bus.
- The FC storage device group 28 comprises one or more FC storage devices 29, whereas the ATA storage device group 31 comprises one or more ATA storage devices 32. An example of an ATA storage device is a Serial ATA (SATA) storage device. Although FC and ATA are chosen in this description of the preferred embodiment, the use of other types of storage device is not precluded.
- The FC device interface 25 is connected to the FC storage device group 28 through an FC connection. Any FC topology, such as arbitrated loop, point-to-point, or fabric, can be employed for this interface.
- The ATA device interface 26 is connected to the ATA storage device group 31 through an ATA bus.
- The memory 22 contains a set of programs that are executed by the CPU 21: a Redundant Array of Inexpensive Disks (RAID) control program 41 for controlling the operation of the storage system 70A and a management agent 80 for controlling the configuration of the storage system 70A. The memory 22 also stores various management information in the form of tables: a storage device management table 44, which holds information on the FC storage device group 28 and the ATA storage device group 31; an LU management table 45, which holds information on the logical storage areas (hereinafter abbreviated to "LUs") 30 (FC LUs) constructed on the FC storage device group 28 and the logical storage areas 33 (ATA LUs) constructed on the ATA storage device group 31; a pair management table 46, which holds information on the sources and destinations of data replication; a cache group information table 47, used for controlling cache groups (explained later); and a configuration information table 48, used when the second storage system 70B makes its own LUs available to the storage system 70A as the latter's LUs.
- In both the FC LUs 30 and the ATA LUs 33, storage areas are divided into groups by LU. A part of the cache 23 may be allocated to each such group. The term "cache group" refers to an LU group to which a part of the cache 23 is allocated.
- The RAID control program 41 comprises three components (not shown in FIG. 1): a component that issues commands to the FC storage device group 28 and the ATA storage device group 31, a component that manages these storage device groups, and a component that manages the LUs allocated to these storage device groups.
- The RAID control program 41 contains, as subprograms, a replication creation program 42 and a sub-LU selection assistance program 43. In data replication, there are variations in report timing, such as synchronous (a report is sent to the upper-level equipment upon completion of the data replication) and asynchronous (a report is sent to the upper-level equipment without waiting for the completion of the data replication), but these variations are not distinguished here, since the present invention applies to them equally.
- The management agent 80 is a program that receives data sent from the management server 100, registers and updates the information on the storage devices (storage device information) according to the input data, and sends storage device information to the management server 100.
- The management server 100 comprises a CPU 110, a main storage 120, an input unit 130 (such as a keyboard), an output unit 140 (such as a display device), a management interface 150 for communication with the management network 200, and a storage unit 160, all interconnected by an internal bus. The storage unit 160 stores a storage manager 131 and the sub-LU selection assistance program 43, both of which run on the CPU 110. By executing these programs, the CPU 110 collects, at regular intervals, the information stored in tables 44 through 48 in the storage systems 70A and 70B and keeps a replica of it.
- The host 10, which may be a personal computer, a workstation, or a general-purpose computer such as a mainframe, is equipped with a Host Bus Adapter (HBA) (not shown in the diagram), which is an FC interface for connection with the outside world. The HBA is also given a World Wide Name (WWN).
- FIG. 2 shows an example of the storage device management table 44, which holds the key information on each storage device. It is organized into several columns (fields) for each storage device (row): a storage device number column 241, a storage device type column 242, an array configuration column 243, a use column 244, and an operating status column 245.
- The storage device number column 241 holds a unique identification number assigned to each storage device. The storage device type column 242 indicates the type of interface employed, such as FC or ATA. The array configuration column 243 contains two pieces of information for each entry: the sequence number of the RAID group (the group of storage devices put together for redundancy purposes) to which the storage device belongs and the RAID level of that group. For example, "(1) RAID5" means that the storage device belongs to the first RAID group, which has a level 5 configuration. The storage system 70 can have more than one RAID group, e.g., a RAID1 RAID group and a RAID5 RAID group. A RAID group may be composed of all or some of the storage devices contained in the storage system 70.
- The use column 244 indicates the use of the RAID group to which the storage device belongs, e.g., DB (database) or FS (file system). The operating status column 245 indicates whether the storage device is in the operating state (ON) or the stopped state (OFF).
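The storage device management table just described can be pictured as a simple in-memory structure. The sketch below is only illustrative: the patent does not prescribe any concrete data layout, and the field names, sample rows, and helper function are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class StorageDevice:
    device_number: int   # column 241: unique device identification number
    device_type: str     # column 242: interface type, "FC" or "ATA"
    raid_group: int      # column 243: RAID group sequence number
    raid_level: str      # column 243: e.g. "RAID5"
    use: str             # column 244: e.g. "DB" or "FS"
    operating: bool      # column 245: ON (True) / OFF (False)

# Hypothetical contents: RAID group 1 is two FC devices, group 2 one ATA device.
device_table = [
    StorageDevice(1, "FC", 1, "RAID5", "DB", True),
    StorageDevice(2, "FC", 1, "RAID5", "DB", True),
    StorageDevice(3, "ATA", 2, "RAID1", "FS", True),
]

def devices_in_raid_group(table, group):
    """Return the devices belonging to the given RAID group."""
    return [d for d in table if d.raid_group == group]

group1 = devices_in_raid_group(device_table, 1)
```

A lookup like `devices_in_raid_group` is the kind of query the RAID control program would run when reporting a group's membership to the management server.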
- FIG. 3 shows an example of the LU management table 45, which holds the information needed to manage the LUs under the control of the storage control unit 20. It is organized into several columns for each LU (row): an LU number column 251, a host allocated column 252, an LUN column 253, a capacity column 254, an LU type column 255, and a paired LU number column 256. The LU number column 251 holds the identification number given to the LU. The host allocated column 252 indicates whether the LU is allocated to the host 10: "yes" if it is allocated, "no" otherwise. The LUN column 253 indicates the SCSI logical unit number required by the host 10 to access the LU (provided that the LU is allocated to the host 10). The capacity column 254 indicates the capacity allocated to the LU. The LU type column 255 indicates the type of the LU, for example, FC or ATA. The paired LU number column 256 holds the identification number of the paired LU: if the LU is a main LU (an LU containing the original data), its sub-LU, i.e., the LU containing the replicated data; if the LU is a sub-LU, its main LU.
- FIG. 4 shows an example of the pair management table 46, which holds information on the pairing between LUs within the storage system 70, i.e., which LU holds a copy of which LU (a couple of LUs in this relationship is called an "LU pair"). It is organized into four columns: an LU pair number column 261, a main LU number column 262, a sub-LU number column 263, and a pairing status column 264. The LU pair number column 261 holds the unique identification number of the LU pair. The main LU number column 262 indicates the LU number assigned to the main LU, whereas the sub-LU number column 263 indicates the LU number assigned to the sub-LU. The pairing status column 264 indicates the status of the LU pair at any given point in time, such as "paired," in which synchronism is maintained between the two LUs of the pair and their contents match, or "split," in which synchronism is not maintained between the two LUs of the pair.
- The storage system 70A may from time to time change the status of an LU pair from "paired" to "split." Once the status is changed in this direction, the sub-LU holds the contents of the main LU as of the time of the status change (this is referred to as "taking a snapshot"). The host 10 can later save the contents of the sub-LU onto another storage device or medium (such as magnetic tape), making it a backup of the data stored in the LU pair at the time of the snapshot. Alternatively, the sub-LU itself can be used as the backup.
- What follows is a description of how the storage control unit 20 constructs a RAID group using the storage device management table 44 under the instructions given by the user or the system administrator.
- When the storage system 70 is powered on, the CPU 21, by running the RAID control program 41, identifies all the storage devices that are connected to either the FC device interface 25 or the ATA device interface 26 and registers them in the storage device management table 44 by filling in the storage device number column 241 and the storage device type column 242. Alternatively, the user may provide the device type information via the input unit 130 of the management server 100, in which case the CPU 21 enters the provided information into the corresponding entries of the storage device type column 242.
- The CPU 21 then fills in the array configuration column 243 and the use column 244 according to the commands given by the user. When the user enters the storage device number 241 and issues a command for obtaining the device type information 242 via the input unit 130, the CPU 21, by running the RAID control program 41, obtains the required information from the storage device management table 44 and sends it to the management server 100 through the management interface 27. The management server 100 then displays the received information on the output unit 140. This process may be skipped if the user is to specify the storage device type.
- The user then selects a storage device based on the information displayed on the output unit 140 and enters, through the input unit 130, a command for constructing a RAID group using the selected storage device. The user also enters its intended use. The CPU 21 receives through the management interface 27 the information sent from the management server 100 about the RAID group and its use, and enters it into the corresponding entries of the array configuration column 243 and the use column 244.
- FIG. 5 shows an example of the cache group information table 47, which holds the information used by the storage system 70A in managing cache groups. It is organized into three rows: a cache group ID row 461, which lists the cache groups; an allocated capacity row 462, which indicates the cache capacity allocated to each cache group; and an LU ID row 463, which lists the LUs belonging to each cache group. This table makes it possible to create or delete a cache group, add LUs to or delete LUs from a cache group, and change the capacity allocation dynamically, i.e., without halting other processing.
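The cache group bookkeeping described above can be sketched as follows. This is a minimal illustration only: the group identifiers, capacities, and function names are invented, and a real controller would update the table without halting I/O, which this sketch does not model.

```python
# Hypothetical in-memory form of the cache group information table 47:
# one entry per cache group, holding its allocated cache capacity (row 462)
# and the set of member LUs (row 463).
cache_groups = {
    "CG0": {"capacity_mb": 512, "lus": {0, 1}},
    "CG1": {"capacity_mb": 256, "lus": {2}},
}

def add_lu_to_cache_group(groups, group_id, lu):
    """Add an LU to an existing cache group."""
    groups[group_id]["lus"].add(lu)

def resize_cache_group(groups, group_id, new_capacity_mb):
    """Change a group's cache allocation; in the described system this
    takes effect dynamically, without halting other processing."""
    groups[group_id]["capacity_mb"] = new_capacity_mb

add_lu_to_cache_group(cache_groups, "CG1", 3)
resize_cache_group(cache_groups, "CG1", 384)
```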
- FIG. 6 shows an example of the configuration information table 48, which holds information on the LUs managed by the storage system 70A. It is organized into five columns: a port ID column 481, which indicates the identification number of the external interface port the LU is connected to; a WWN column 482, which corresponds to the port ID; an LUN column 483, which holds the logical unit number (LUN) of the LU; a capacity column 484, which indicates the capacity available on the LU; and a mapped LUN column 485, which holds the identification of the LU in the storage system 70B to which the LU in the entry is mapped. The LUs appearing in this last column belong to the storage system 70B; all the other LUs belong to the storage system 70A.
- By using the configuration information table 48, the storage system 70A makes the LUs in the storage system 70B accessible to the host 10 as if they belonged to the storage system 70A itself. In other words, the host 10 can send to the storage system 70A data input/output commands directed at LUs in the storage system 70B.
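The mapping just described can be illustrated with a small lookup routine. The table entries, WWN strings, and the `resolve` helper are invented for this sketch; the point is only that an entry with a non-empty mapped LUN refers to an external LU in system 70B.

```python
# Hypothetical rows of the configuration information table 48
# (columns 481-485). mapped_lun=None means the LU is local to 70A.
config_table = [
    {"port": 0, "wwn": "50:00:00:01", "lun": 0, "capacity_gb": 100, "mapped_lun": None},
    {"port": 0, "wwn": "50:00:00:01", "lun": 1, "capacity_gb": 200, "mapped_lun": 5},
]

def resolve(table, lun):
    """Return ("local", lun) for 70A's own LUs and
    ("remote", mapped_lun) for LUs that actually live in 70B."""
    for entry in table:
        if entry["lun"] == lun:
            if entry["mapped_lun"] is None:
                return ("local", lun)
            return ("remote", entry["mapped_lun"])
    raise KeyError(lun)
```

A command addressed to LUN 1 would thus be forwarded to LU 5 of the second storage system, while the host remains unaware of the indirection.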
- FIG. 8 shows the process flow of automatically selecting the sub-LU that best matches the main LU (with no conditions specified for the selection).
- The CPU 21 determines whether, within the group of LUs allocated to the host 10 that creates replications, there are any free LUs that may be considered as a sub-LU for the given LU (step 801). If there are none, the CPU 21 sends a message to that effect to the output unit 140 (step 802) and terminates the processing. If any free LUs are found, the CPU 21 obtains the characteristics of the main LU from the storage device management table 44 and the LU management table 45 (step 803). It then checks whether there are any free LUs in the group of LUs having the same characteristics as the main LU (hereinafter called the "characteristics group") (step 804). If there are, it selects one at random and allocates it as a sub-LU (step 805). One example of such a characteristic is the storage device type (FC or ATA), in which case the search is made among the group of storage devices of the same type as the main LU's. Another example is the cache group, in which case the search is made among the group of LUs belonging to the same cache group as the main LU's. A third example is the storage control unit, in which case the search is made among the group of LUs belonging to the storage control unit 20 that controls the main LU. Still another example is a combination of multiple characteristics, in which case the search is made among the group of LUs having the same characteristics in all respects.
- Once a sub-LU is selected in step 805, the CPU 21 registers it in the LU management table 45 and the pair management table 46 (step 807).
- If no free LUs meeting the requirements are found in step 804, the CPU 21 proceeds to step 806 to make a selection among other characteristics groups.
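The FIG. 8 flow can be sketched in a few lines. Everything below is illustrative, not the patent's implementation: only two characteristics (device type and cache group) are modeled, and the LU dictionaries and field names are invented.

```python
import random

def select_sub_lu(main_lu, lus):
    """Pick a free LU whose characteristics match the main LU's."""
    free = [lu for lu in lus if lu["free"]]
    if not free:
        return None                      # step 802: report that no free LU exists
    matching = [lu for lu in free
                if lu["type"] == main_lu["type"]            # steps 803-804
                and lu["cache_group"] == main_lu["cache_group"]]
    if matching:
        return random.choice(matching)   # step 805: random pick within the group
    return None                          # step 806 would relax the criteria

main = {"id": 0, "type": "FC", "cache_group": "CG0"}
pool = [
    {"id": 1, "free": True,  "type": "FC",  "cache_group": "CG0"},
    {"id": 2, "free": True,  "type": "ATA", "cache_group": "CG1"},
    {"id": 3, "free": False, "type": "FC",  "cache_group": "CG0"},
]
chosen = select_sub_lu(main, pool)
```

Here only LU 1 is both free and a full match, so it is the one chosen; registering the pair (step 807) is left out of the sketch.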
- FIG. 9 illustrates the detailed flow of step 806 in FIG. 8.
- The CPU 21 first checks whether only one characteristic is specified for the main LU (step 901). If there is indeed only one, it selects a sub-LU from the characteristics group(s) whose characteristic values differ from the main LU's. For example, if the main LU resides on FC storage devices, the selection is made among the groups of storage devices other than FC (such as ATA). If there is more than one characteristics group whose characteristics differ from the main LU's, the selection is made from the group that has a free LU and a light load (step 902). A characteristics group is considered to have a light load if few replications have been made within it. The CPU 21 then makes a list of all the free LUs in the selected characteristics group (step 904) and selects one from this list at random as the sub-LU (step 905). It then registers it in the LU management table 45 and the pair management table 46 (step 906). If step 901 reveals that more than one characteristic is specified for the main LU, the storage system 70 selects at random one characteristic that is specified for the main LU and for which free LUs are found (step 903). For example, if the storage device type and the cache division unit are specified for the main LU and no free LUs have been found that meet both characteristics, the search is made among the groups of the same storage device type but with a different cache division unit. If there is more than one group having a different cache division unit, the search is made among the group with the lightest load, as in step 902.
- What follows now is a description of how the user specifies the conditions for selecting a sub-LU.
- FIG. 10 shows an example of the priority information table 1300 listing the priority among a set of different characteristics as specified by the user. This table is stored in the memory 22. The CPU 21 selects a characteristics group by considering the priority specified in this table. In other words, the storage system 70 determines whether a sub-LU should be selected from a group that is of the same storage device type but has a different cache division unit, or from a group that is of a different storage device type but has the same cache division unit. Alternatively, the user may directly specify the selection criteria. For example, if the user requests that the speed of replication be given priority, a free LU will be selected from the group having the same cache division unit as the main LU's. If the user wants to prevent the replication of the sub-LU from slowing down the I/O processing of the main LU, he or she can have a free LU selected from a group having a cache division unit different from the main LU's.
- If there is no preference given by the user, the storage system 70 selects a sub-LU according to the priority specified in this table. In this example, the hard disk device type 1301 is given the highest priority. Hence, the search is first made among a group of storage devices of the same type (FC or ATA) as the main LU's. Then the characteristic next in priority is picked as the selection criterion; in this example, a selection is to be made among a group of LUs having the same cache division unit as the main LU's. If there is none, LUs in other groups are considered. Next, the characteristic one level below in priority is picked as the selection criterion; in this example, that is the first storage system, which is specified in the storage system number entry 1303. If there are no free LUs in the first storage system, the second storage system will be searched. The characteristics and priorities mentioned here are just arbitrary examples, and a different set of characteristics and priorities may be employed. A variation of the present invention would be to allow the user to specify the characteristics and selection criteria. Such a scheme can also be used in step 903 in FIG. 9 as an alternative selection method when more than one characteristic is involved.
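The priority-driven fallback described above amounts to relaxing the lowest-priority characteristic first. The sketch below assumes an invented priority order and invented LU records; it is a minimal illustration of the idea, not the patent's algorithm.

```python
# Hypothetical priority order, highest first, mirroring the example in the
# text: device type, then cache division unit, then storage system number.
PRIORITY = ["type", "cache_group", "system"]

def select_with_priority(main_lu, free_lus):
    """Try to match all characteristics, then drop them from the
    lowest-priority end until some free LU qualifies."""
    for k in range(len(PRIORITY), -1, -1):
        kept = PRIORITY[:k]
        candidates = [lu for lu in free_lus
                      if all(lu[c] == main_lu[c] for c in kept)]
        if candidates:
            # A real system would break ties by load, as in step 902;
            # here we simply take the first candidate.
            return candidates[0]
    return None

main = {"type": "FC", "cache_group": "CG0", "system": 1}
free = [{"id": 7, "type": "FC", "cache_group": "CG1", "system": 2}]
# No perfect match exists, so the cache division unit and storage system
# are relaxed while the top-priority characteristic (device type) is kept.
pick = select_with_priority(main, free)
```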
- FIG. 11 shows an example of the criteria table 1400, stored in the memory 22, which specifies how the destination volume should be selected according to the sub-LU selection criteria given by the user. More specifically, it lists the selection criteria 1401 given by the user and the selection algorithm 1402 for each such criterion. For example, if the user specifies "reliability," the CPU 21, by referencing the selection algorithm column 1402, determines that the selection should be made among FC storage devices. If there are no free LUs among the FC storage devices, storage devices of other types are considered. If the user specifies "Backup" and "Number of copies: 7," the selection should be made among ATA storage devices, according to the selection algorithm column 1402.
- Upon selecting a sub-LU, the CPU 21 registers it in the pair management table 46 by making a new entry and entering the parameters specified by the user as well as the identification of the sub-LU.
- In this manner, the user can, by simply specifying a main LU, have the storage system select an optimum sub-LU and create a pair. Whereas in the example described here only one sub-LU is selected, it is also possible to select multiple sub-LUs, show them to the user, and have the user make the final selection.
FIG. 12 . - As in
step 801 inFIG. 8 , theCPU 21 determines whether, within the group of LUs allocated to thehost 10 that creates replications, there are any free LUs that may be considered as a sub-LU for the specified LU (step 1201). If there are any, it carries out the same process as the one shown inFIG. 8 (more specifically, fromstep 803 on) (step 1203). If there is none, it obtains the characteristics of the main LU and creates an LU of the same device type as the main LU's. For example, if the main LU is constructed on a group of FC storage devices, a new LU is created on a group of FC storage devices with a parity group, and this new LU is allocated to the host (step 1202). This newly created LU is then selected as a sub-LU (step 1204), and registered in the LU management table 45 and the pair management table 46 (step 1205), in the same manner as instep 807 inFIG. 8 . - As another embodiment of the present invention,
FIG. 7 illustrates an example of thestorage system 70A provided with one or moreprotocol converting adapters 600. Thefirst storage system 70A is connected to thesecond storage system 70B through anetwork 61. Alternatively, a similar configuration without thenetwork 61 and thesecond storage system 70B can be thought of. Theprotocol converting adapter 600 is a unit for channel connection that is independent of thestorage control unit 20 and handles protocols conforming to local area network (LAN), switched line, leased line, ESCON, and other standards. One or moreprotocol converting adapters 600 are connected to one or morestorage control units 20 through anetwork 63. - Upon receiving an input/output command from the
host 10, theprotocol converting adapter 600 deciphers it, performs a protocol conversion, determines whether the LU holding the requested data belongs to thestorage control unit 20 or is located in thesecond storage system 70B, and forwards the command to the appropriate destination. It determines where the desired LU is located by referencing the configuration information table 48 stored in the memory in theprocessor 900, which is also connected to thenetwork 63. Upon receiving the command, thestorage control unit 20 calls the sub-LUselection assistance program 43. - The
management server 100 recognizes thefirst storage system 70A through thenetwork 62. Alternatively, themanagement server 100 may be directly connected to thefirst storage system 70A using a dedicated path. - The storage media used in the storage devices in the foregoing descriptions may take a variety of forms, such as magnetic media and optical media. The programs mentioned in the foregoing descriptions may be loaded into the system from a storage medium such as a CD-ROM or may be downloaded from another piece of equipment through a network.
- The present invention thus realizes a storage system in which, in replicating the contents of a storage volume, a destination volume or a storage device or a group of storage devices on which it is to be created can be selected by considering the characteristics of the source volume and/or the storage devices on which the source volume resides. It also provides a storage system with a replication assistance capability that allows the user to select a destination volume without being concerned with the volume or device characteristics, thus reducing the user's burden.
Claims (2)
1. A storage system comprising a plurality of storage devices and a storage control unit for controlling the plurality of storage devices, the storage control unit comprising:
a first interface for connection to a computer;
a memory for storing a program which, when replicating data in the plurality of storage devices, is capable of selecting a destination volume considering the characteristics of the source volume;
a CPU for executing the program; and
a second interface for connection to the plurality of storage devices.
2.-20. (canceled)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/187,832 US20050268057A1 (en) | 2003-06-24 | 2005-07-25 | Storage system |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-178879 | 2003-06-24 | ||
JP2003178879A JP2005018185A (en) | 2003-06-24 | 2003-06-24 | Storage device system |
US10/649,766 US7152146B2 (en) | 2003-06-24 | 2003-08-28 | Control of multiple groups of network-connected storage devices |
US11/187,832 US20050268057A1 (en) | 2003-06-24 | 2005-07-25 | Storage system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/649,766 Continuation US7152146B2 (en) | 2003-06-24 | 2003-08-28 | Control of multiple groups of network-connected storage devices |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050268057A1 true US20050268057A1 (en) | 2005-12-01 |
Family
ID=33535032
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/649,766 Expired - Fee Related US7152146B2 (en) | 2003-06-24 | 2003-08-28 | Control of multiple groups of network-connected storage devices |
US11/187,832 Abandoned US20050268057A1 (en) | 2003-06-24 | 2005-07-25 | Storage system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/649,766 Expired - Fee Related US7152146B2 (en) | 2003-06-24 | 2003-08-28 | Control of multiple groups of network-connected storage devices |
Country Status (2)
Country | Link |
---|---|
US (2) | US7152146B2 (en) |
JP (1) | JP2005018185A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090157991A1 (en) * | 2007-12-18 | 2009-06-18 | Govinda Nallappa Rajan | Reliable storage of data in a distributed storage system |
US20090210455A1 (en) * | 2008-02-19 | 2009-08-20 | Prasenjit Sarkar | Continuously available program replicas |
US7908503B2 (en) | 2005-10-03 | 2011-03-15 | Hitachi, Ltd. | Method of saving power consumed by a storage system |
US20110119435A1 (en) * | 2006-05-31 | 2011-05-19 | Takashige Iwamura | Flash memory storage system |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7035911B2 (en) | 2001-01-12 | 2006-04-25 | Epicrealm Licensing Llc | Method and system for community data caching |
US7188145B2 (en) * | 2001-01-12 | 2007-03-06 | Epicrealm Licensing Llc | Method and system for dynamic distributed data caching |
JP4087149B2 (en) * | 2002-05-20 | 2008-05-21 | 株式会社日立製作所 | Disk device sharing system and computer |
JP2004348464A (en) | 2003-05-22 | 2004-12-09 | Hitachi Ltd | Storage device and communication signal shaping circuit |
JP4060235B2 (en) | 2003-05-22 | 2008-03-12 | 株式会社日立製作所 | Disk array device and disk array device control method |
JP4156499B2 (en) | 2003-11-28 | 2008-09-24 | 株式会社日立製作所 | Disk array device |
JP4317436B2 (en) * | 2003-12-16 | 2009-08-19 | 株式会社日立製作所 | Disk array system and interface conversion device |
JP4497918B2 (en) | 2003-12-25 | 2010-07-07 | 株式会社日立製作所 | Storage system |
JP2005228278A (en) * | 2004-01-14 | 2005-08-25 | Hitachi Ltd | Management method, management device and management program of storage area |
GB0401246D0 (en) * | 2004-01-21 | 2004-02-25 | Ibm | Method and apparatus for controlling access to logical units |
JP4634049B2 (en) | 2004-02-04 | 2011-02-16 | 株式会社日立製作所 | Error notification control in disk array system |
US20060036904A1 (en) * | 2004-08-13 | 2006-02-16 | Gemini Storage | Data replication method over a limited bandwidth network by mirroring parities |
US7457980B2 (en) * | 2004-08-13 | 2008-11-25 | Ken Qing Yang | Data replication method over a limited bandwidth network by mirroring parities |
US7350102B2 (en) * | 2004-08-26 | 2008-03-25 | International Business Machines Corporation | Cost reduction schema for advanced raid algorithms |
JP4612373B2 (en) * | 2004-09-13 | 2011-01-12 | 株式会社日立製作所 | Storage device and information system using the storage device |
JP4540495B2 (en) * | 2005-02-07 | 2010-09-08 | 富士通株式会社 | Data processing apparatus, data processing method, data processing program, and recording medium |
JP5031195B2 (en) * | 2005-03-17 | 2012-09-19 | 株式会社日立製作所 | Storage management software and grouping method |
JP4806556B2 (en) * | 2005-10-04 | 2011-11-02 | 株式会社日立製作所 | Storage system and configuration change method |
JP2007156597A (en) * | 2005-12-01 | 2007-06-21 | Hitachi Ltd | Storage device |
JP2008102672A (en) * | 2006-10-18 | 2008-05-01 | Hitachi Ltd | Computer system, management computer, and method of setting operation control information |
JP4920390B2 (en) * | 2006-12-05 | 2012-04-18 | 株式会社東芝 | Storage device |
US7721063B2 (en) * | 2006-12-05 | 2010-05-18 | International Business Machines Corporation | System, method and program for configuring a data mirror |
EP2212926A2 (en) * | 2007-10-19 | 2010-08-04 | QUALCOMM MEMS Technologies, Inc. | Display with integrated photovoltaics |
JP2009265920A (en) * | 2008-04-24 | 2009-11-12 | Hitachi Ltd | Information processing apparatus, data writing method, and program |
JP5488952B2 (en) | 2008-09-04 | 2014-05-14 | 株式会社日立製作所 | Computer system and data management method |
JP5233733B2 (en) | 2009-02-20 | 2013-07-10 | 富士通株式会社 | Storage device, storage control device, and storage control program |
US8281091B2 (en) * | 2009-03-03 | 2012-10-02 | International Business Machines Corporation | Automatic selection of storage volumes in a data storage system |
JP5297250B2 (en) | 2009-03-30 | 2013-09-25 | 富士通株式会社 | Storage system and information storage method |
JP5947336B2 (en) * | 2014-06-03 | 2016-07-06 | 日本電信電話株式会社 | Snapshot control device, snapshot control method, and snapshot control program |
JP6939488B2 (en) * | 2017-12-08 | 2021-09-22 | 富士フイルムビジネスイノベーション株式会社 | Electronic devices and image formation systems |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5430855A (en) * | 1991-02-06 | 1995-07-04 | Storage Technology Corporation | Disk drive array memory system using nonuniform disk drives |
US5504861A (en) * | 1994-02-22 | 1996-04-02 | International Business Machines Corporation | Remote data duplexing |
US6021454A (en) * | 1998-03-27 | 2000-02-01 | Adaptec, Inc. | Data transfer between small computer system interface systems |
US6347358B1 (en) * | 1998-12-22 | 2002-02-12 | Nec Corporation | Disk control unit and disk control method |
US20030126388A1 (en) * | 2001-12-27 | 2003-07-03 | Hitachi, Ltd. | Method and apparatus for managing storage based replication |
US6601104B1 (en) * | 1999-03-11 | 2003-07-29 | Realtime Data Llc | System and methods for accelerated data storage and retrieval |
US6643667B1 (en) * | 1999-03-19 | 2003-11-04 | Hitachi, Ltd. | System and method for replicating data |
US6681310B1 (en) * | 1999-11-29 | 2004-01-20 | Microsoft Corporation | Storage management system having common volume manager |
US6763442B2 (en) * | 2000-07-06 | 2004-07-13 | Hitachi, Ltd. | Data reallocation among storage systems |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5434992A (en) | 1992-09-04 | 1995-07-18 | International Business Machines Corporation | Method and means for dynamically partitioning cache into a global and data type subcache hierarchy from a real time reference trace |
JPH10301720A (en) | 1997-04-24 | 1998-11-13 | Nec Ibaraki Ltd | Disk array device |
US7509420B2 (en) | 2000-02-18 | 2009-03-24 | Emc Corporation | System and method for intelligent, globally distributed network storage |
US6745310B2 (en) * | 2000-12-01 | 2004-06-01 | Yan Chiew Chow | Real time local and remote management of data files and directories and method of operating the same |
US6763436B2 (en) | 2002-01-29 | 2004-07-13 | Lucent Technologies Inc. | Redundant data storage and data recovery system |
JP2003316522A (en) | 2002-04-26 | 2003-11-07 | Hitachi Ltd | Computer system and method for controlling the same system |
JP2004070403A (en) | 2002-08-01 | 2004-03-04 | Hitachi Ltd | File storage destination volume control method |
US20040199618A1 (en) * | 2003-02-06 | 2004-10-07 | Knight Gregory John | Data replication solution |
US7146474B2 (en) | 2003-03-12 | 2006-12-05 | International Business Machines Corporation | System, method and computer program product to automatically select target volumes for a fast copy to optimize performance and availability |
- 2003-06-24: JP application JP2003178879A filed (publication JP2005018185A), active, Pending
- 2003-08-28: US application US10/649,766 filed (patent US7152146B2), not active, Expired - Fee Related
- 2005-07-25: US application US11/187,832 filed (publication US20050268057A1), not active, Abandoned
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8924637B2 (en) | 2001-01-21 | 2014-12-30 | Hitachi, Ltd. | Flash memory storage system |
US7908503B2 (en) | 2005-10-03 | 2011-03-15 | Hitachi, Ltd. | Method of saving power consumed by a storage system |
US20110119435A1 (en) * | 2006-05-31 | 2011-05-19 | Takashige Iwamura | Flash memory storage system |
US8166235B2 (en) | 2006-05-31 | 2012-04-24 | Hitachi, Ltd. | Flash memory storage system |
US8359426B2 (en) | 2006-05-31 | 2013-01-22 | Hitachi, Ltd. | Flash memory storage system |
US20090157991A1 (en) * | 2007-12-18 | 2009-06-18 | Govinda Nallappa Rajan | Reliable storage of data in a distributed storage system |
US8131961B2 (en) * | 2007-12-18 | 2012-03-06 | Alcatel Lucent | Reliable storage of data in a distributed storage system |
US20090210455A1 (en) * | 2008-02-19 | 2009-08-20 | Prasenjit Sarkar | Continuously available program replicas |
US8229886B2 (en) * | 2008-02-19 | 2012-07-24 | International Business Machines Corporation | Continuously available program replicas |
US20120221516A1 (en) * | 2008-02-19 | 2012-08-30 | International Business Machines Corporation | Continuously available program replicas |
US10108510B2 (en) * | 2008-02-19 | 2018-10-23 | International Business Machines Corporation | Continuously available program replicas |
Also Published As
Publication number | Publication date |
---|---|
US20040268069A1 (en) | 2004-12-30 |
US7152146B2 (en) | 2006-12-19 |
JP2005018185A (en) | 2005-01-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7152146B2 (en) | Control of multiple groups of network-connected storage devices | |
JP5341184B2 (en) | Storage system and storage system operation method | |
US7441096B2 (en) | Hierarchical storage management system | |
JP4124331B2 (en) | Virtual volume creation and management method for DBMS | |
US8984031B1 (en) | Managing data storage for databases based on application awareness | |
US7702851B2 (en) | Logical volume transfer method and storage network system | |
JP4147198B2 (en) | Storage system | |
US8024603B2 (en) | Data migration satisfying migration-destination requirements | |
US7376726B2 (en) | Storage path control method | |
JP5154200B2 (en) | Data reading method, data management system, and storage system | |
CN101276366B (en) | Computer system preventing storage of duplicate files | |
US7827193B2 (en) | File sharing system, file sharing device and file sharing volume migration method | |
JP4574408B2 (en) | Storage system control technology | |
US20060074957A1 (en) | Method of configuration management of a computer system | |
US20100082900A1 (en) | Management device for storage device | |
US20070101059A1 (en) | Storage control system and control method for storage control which suppress the amount of power consumed by the storage control system | |
US20100036896A1 (en) | Computer System and Method of Managing Backup of Data | |
JP2005275829A (en) | Storage system | |
JP2004070403A (en) | File storage destination volume control method | |
US20050125426A1 (en) | Storage system, storage control device, and control method for storage system | |
US20100082546A1 (en) | Storage Tiers for Database Server System | |
JP2005228278A (en) | Management method, management device and management program of storage area | |
JP4232357B2 (en) | Computer system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |