US20080065850A1 - Data storage system and control method thereof - Google Patents


Info

Publication number
US20080065850A1
US20080065850A1 (application US 11/976,484)
Authority
US
United States
Prior art keywords
management server
configuration information
configuration
logical unit
host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/976,484
Inventor
Yasuaki Nakamura
Toshio Nakano
Akinobu Shimada
Tatsuya Murakami
Hiroshi Morishima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/976,484 priority Critical patent/US20080065850A1/en
Publication of US20080065850A1 publication Critical patent/US20080065850A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0637Permissions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L69/322Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L69/329Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/084Configuration by using pre-existing information, e.g. using templates or copying from other elements
    • H04L41/0843Configuration by using pre-existing information, e.g. using templates or copying from other elements based on generic templates
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/085Retrieval of network configuration; Tracking network configuration history
    • H04L41/0853Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information

Definitions

  • the present invention relates to a centralized management art of a data storage system (hereinafter merely referred to as a storage system) when multiple computers that use information and multiple external storage systems (hereinafter referred to as disk subsystems) that store information are connected to a network and arranged separately, and more particularly to a management art of the whole storage system that extends over the multiple disk subsystems.
  • the configuration information of each system must be acquired, and the whole configuration integrating the whole system must be defined.
  • the configuration information includes, for example: setting concerning an internal access path of a disk subsystem, a logical unit, the capacity or access authority of the logical unit, and data movement; setting concerning data copying between the disk subsystems; setting or acquisition of a performance control mode or performance data; setting of a maintenance method; and fault or user operation events.
  • system administrators periodically collected the configuration or performance of a disk subsystem, faults, expansion and other events (hereinafter referred to as events) that occur in the system, using management software running on a host computer (hereinafter merely referred to as a host) that uses the disk subsystem. That is, a system administrator had to connect the host computer to each disk subsystem, acquire the configuration information of these systems, and then provide the definition and necessary setting of the whole system configuration using management software by manual operation.
  • a system administrator should be able to perform, collectively, setting that extends over the multiple disk subsystems. Defining the configuration of the whole system at once is easier than defining it for every disk subsystem, and it reduces the number of times the configuration must be checked and redefined, thereby reducing human misoperation. Performing such cross-subsystem setting collectively also improves system operation.
  • the state is considered in which a certain user A installs a database and another application A in a host computer, and multiple disk subsystems are used as an external storage system. Because the size of the file that the application A of the user A uses grew, a system administrator S of this external storage system is assumed to have added a logical unit (LU) to a disk subsystem.
  • the disk subsystem may also be used by another application B (for which higher performance than for the application A is requested) that the user B uses on another host.
  • if the added logical unit happens to share a physical resource (physical unit) with a logical unit already allocated to the performance-critical application B, the addition of this logical unit affects that application, and performance degradation will be caused in the execution of applications for which performance is important.
  • although the addition of the logical unit that the system administrator S made for the user A is a measure for maintaining and increasing the execution performance of the application A, the measure causes degradation of the execution performance of the application B and, viewed from the performance aspect of the whole system, eventually amounts to human misoperation.
  • a system administrator normally monitors the performance of an application using a performance monitoring tool. Because such a tool monitors only the process operating state of the application or the read and write performance of the files that the application uses, it cannot ascertain that the addition of the above logical unit caused the performance degradation of another application.
  • One object of the present invention is to provide a management art for allowing multiple system administrators (L persons) to manage multiple (M units of) disk subsystems transversely and collectively and realize predetermined setting quickly and simply in the configuration of the multiple (M units of) disk subsystems shared by multiple (N units of) hosts.
  • Another object of the present invention is to provide a management art of a disk subsystem by which an influence that the configuration modification of the disk subsystem has on the performance of the application executed by a host can be grasped.
  • a further object of the present invention is to provide a management art by which the time at which planned logical unit capacity should be added to a disk subsystem can be determined.
  • the present invention has been made in view of the above circumstances and provides a data storage system and a control method thereof having following features.
  • the configuration information of all the multiple disk subsystems, for example, performance or a setting change, is acquired in time series, stored in a database ("configuration information database") of a management server part, and then managed in a centralized manner.
  • a function of associating the file that an application uses with the “configuration information database” is provided using a function of detecting the position of the file on a logical unit.
  • FIG. 1 is a drawing showing an outline of a system in which multiple host computers are provided with multiple disk subsystems that send and receive and share data via a network;
  • FIG. 2 is a drawing showing a functional block of a management server to which the present invention is applied;
  • FIG. 3 is a drawing illustrating a flow of a procedure in which desired setting that extends over the multiple disk subsystems of FIG. 2 is performed collectively;
  • FIG. 4 is an example of analysis made using the present invention and a drawing showing a flow in which a history of configuration information is traced and a cause of the performance degradation of an application is investigated using the configuration information database that the management server possesses;
  • FIG. 5 is an example of analysis made using the present invention and a drawing showing a flow of analyzing the expansion schedule of a logical unit using a history of file size possessed by the management server.
  • FIG. 1 shows a schematic configuration of the whole system when multiple host computers 10 are connected to multiple disk subsystems 20 that send and receive and share data via a SAN (storage area network) 40 .
  • Each disk subsystem 20 is provided with an external connection interface 21 for sending event information in order to define and refer to its own configuration, show performance and data and post a fault.
  • a management server part 30 has an interface to a local area network (LAN) 50 that differs from the SAN 40 and can be connected to the multiple hosts 10 and the multiple disk subsystems 20 .
  • in FIG. 1 only the one management server 30 is shown, but multiple management servers can also be provided. Further, the management server 30 can be installed inside the disk subsystems 20 or positioned at a place physically separated from these disk subsystems 20 .
  • the technical term "management server" includes a part of an external storage system having a server function, and is appropriately described as a "management server part".
  • the configuration of all the multiple disk subsystems 20 is defined collectively from a certain management server part 30 , extending over these multiple subsystems.
  • the management server part is merely described as the management server 30 below.
  • An exclusive control command is issued from the management server 30 to the systems 20 so that the management server 30 will be the only setting means.
  • the exclusive control command is a command that occupies, for a time period, the multiple disk subsystems 20 selected arbitrarily.
  • the occupancy time may be as long as about one hour.
  • setting information is created separately, and a control method is prepared by which setting is performed within a short occupancy time.
  • the management server 30 also has a function of checking that the setting terminates normally.
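The exclusive-control behavior described above can be sketched as a simple lock manager: the management server occupies an arbitrary set of disk subsystems for a bounded time window so that it becomes the only setting means, and releases them afterwards. All names here (`ExclusiveController`, `occupy`, `release`) are illustrative assumptions, not taken from the patent.

```python
import time

class ExclusiveController:
    """Sketch: a management server occupies selected disk subsystems
    for a bounded time window so it is the only setter."""

    def __init__(self, max_occupancy_sec=3600):  # "about one hour" at most
        self.max_occupancy_sec = max_occupancy_sec
        self._occupied = {}  # subsystem id -> (owner, start time)

    def occupy(self, owner, subsystem_ids, now=None):
        now = time.time() if now is None else now
        # Reject if any requested subsystem is held by another owner
        # whose occupancy window has not yet expired.
        for sid in subsystem_ids:
            held = self._occupied.get(sid)
            if held and held[0] != owner and now - held[1] < self.max_occupancy_sec:
                return False
        for sid in subsystem_ids:
            self._occupied[sid] = (owner, now)
        return True

    def release(self, owner, subsystem_ids):
        # Only the occupying owner may release a subsystem.
        for sid in subsystem_ids:
            if self._occupied.get(sid, (None,))[0] == owner:
                del self._occupied[sid]
```

A second administrator requesting an occupied subsystem is simply refused until the first releases it, which mirrors the rejection behavior described later for step 318.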
  • the functional block of the management server 30 is described with reference to FIG. 2 .
  • a user management layer 31 manages multiple users A to C connected to the management server 30 .
  • system administrators are included among the users.
  • An object management layer 32 manages acquisition of the configuration information of each disk subsystem 20 and a setting request from the user.
  • the object management layer 32 has a configuration information database 321 .
  • An agent management layer 33 issues an exclusive control command to each disk subsystem 20 via a subsystem interface 341 in accordance with a request from the object management layer 32 .
  • An interface layer 34 has the subsystem interface 341 that performs data sending and receiving with each disk subsystem 20 and a host interface 342 that controls access with each host agent 11 .
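The four-layer structure of the management server just described (user management 31, object management 32 with database 321, agent management 33, interface 34) can be sketched in code; the class names and wiring below are illustrative assumptions for exposition only.

```python
class SubsystemInterface:
    """Interface layer 34: the subsystem interface 341 (sketch)."""
    def __init__(self, subsystems):
        self.subsystems = subsystems            # subsystem id -> config dict

    def fetch_config(self, sid):
        return self.subsystems[sid]

class AgentManagementLayer:
    """Layer 33: issues exclusive control commands via the interface."""
    def __init__(self, interface):
        self.interface = interface
        self.occupied = set()

    def occupy(self, sids):
        self.occupied |= set(sids)

    def release(self, sids):
        self.occupied -= set(sids)

class ObjectManagementLayer:
    """Layer 32: holds the configuration information database 321."""
    def __init__(self, agents):
        self.agents = agents
        self.config_db = {}                     # configuration information database

    def acquire(self, sid):
        # Pull a subsystem's configuration through the lower layers.
        self.config_db[sid] = self.agents.interface.fetch_config(sid)

class UserManagementLayer:
    """Layer 31: manages which users (administrators) are permitted."""
    def __init__(self):
        self.permitted = set()

    def permit(self, user):
        self.permitted.add(user)
```

A request flows downward (user → object management → agent management → subsystem interface), and acquired configuration flows back up into the database, matching the description of the figure.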
  • the object management layer 32 acquires the configuration, performance and fault and other event information of each disk subsystem 20 and stores them in the configuration information database 321 .
  • only a system administrator (user) whose access was permitted by the user management layer 31 performs the change, expansion, or deletion of parameters of the multiple disk subsystems 20 stored in the configuration information database 321 , extending over these systems 20 .
  • thereby, the configuration information database 321 and the configuration information of an actual disk subsystem 20 match, without differing from each other, at a predetermined point of time.
  • the management server 30 releases all the occupied disk subsystems 20 through the agent management layer 33 when the configuration modification, expansion and deletion of the systems 20 , including the registration into its own configuration information database, are completed by the object management layer 32 .
  • the information that the management server 30 handles in the configuration information database 321 of the object management layer 32 relates to: setting concerning an internal access path of each disk subsystem 20 , logical units, their capacity and access authority, and data movement; setting concerning data copying between disk subsystems; setting of the performance or control of each disk subsystem; acquisition of the performance data of each disk subsystem; setting of a maintenance method; and fault and user operation events.
  • one information acquisition timing for the disk subsystem 20 and the host 10 is just before a configuration is instructed to the system 20 , when the sole permitted system administrator (user) accesses the management server 30 and the management server 30 defines the configuration of the system 20 .
  • the acquisition timing is also established when fault, maintenance and other events of the disk subsystem 20 occur. Specifically, the acquisition timing is established in the following cases.
  • the agent management layer 33 , to which an event such as a fault was posted, posts the event to the object management layer 32 of the upper layer using an interrupt function, and the management server 30 recognizes, by the object management layer 32 that received this event, that the state of the disk subsystem 20 has changed. After recognizing this event, the management server 30 acquires the configuration information of the system 20 and updates the configuration information database.
  • the management server 30 specifies the modification and registers it in its own configuration information database 321 .
  • a flag is set for the disk subsystem 20 in the configuration information database that the management server 30 possesses, and the subsequent processing, especially the acquisition of information, is performed efficiently by making an inquiry only into the systems 20 whose flag is on in the database.
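The flag-based refresh just described can be sketched as follows: an event (fault, maintenance, setting change) turns on a per-subsystem "changed" flag, and on the next refresh the server inquires only into flagged subsystems. Class and method names are illustrative assumptions.

```python
class ConfigPoller:
    """Sketch: a 'changed' flag is kept per disk subsystem; on refresh,
    configuration is re-acquired only from flagged subsystems."""

    def __init__(self, fetch):
        self.fetch = fetch        # callable: subsystem id -> current config
        self.changed = {}         # subsystem id -> bool flag
        self.db = {}              # configuration information database (sketch)

    def on_event(self, sid):
        # Fault/maintenance event posted by the interrupt function.
        self.changed[sid] = True

    def refresh(self):
        refreshed = []
        for sid, flag in list(self.changed.items()):
            if flag:
                self.db[sid] = self.fetch(sid)   # inquire only flagged systems
                self.changed[sid] = False
                refreshed.append(sid)
        return refreshed
```

This keeps periodic acquisition cheap: unflagged subsystems generate no traffic at all between events.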
  • An example of a procedure in which desired setting is performed collectively, extending over multiple disk subsystems using the above method, is described with reference to FIG. 3 .
  • This procedure shows one of multiple system administrators defining the configurations of the multiple disk subsystems 20 (two units, X and Y, here).
  • here, a function unique to a disk subsystem is used that assigns an access authority from a host to a logical unit, prevents invalid access to the logical unit, and thus protects data.
  • Two disk subsystems 20 (X and Y) connected to the single specific host 10 ( FIG. 1 ) possess the predetermined number of logical units. Under the environment where the multiple hosts 10 share the multiple disk subsystems 20 , security needs to be set so that the logical unit that the specific host 10 accesses cannot be accessed from another host 10 .
  • the system administrator S logs in to the management server 30 and requests access permission (step 311 ).
  • the management server 30 issues an exclusive control command to the disk subsystems 20 (X and Y) so that the management server 30 can become the only control server that enables the configuration setting of the whole system (step 312 ).
  • the management server 30 acquires the configuration information of each of the disk subsystems X and Y when the exclusive control command is issued (step 313 ) and stores it in the configuration information database 321 in FIG. 2 .
  • the system administrator S (user A), the only user whose access to the management server 30 was permitted, refers to the configuration information of the disk subsystems X and Y stored in the configuration information database 321 (step 314 ) and modifies the system configuration collectively, extending over the disk subsystems X and Y (step 315 ).
  • the modification of the configuration in this example indicates that the system administrator S assigns an “access authority from the specific host 10 to a predetermined logical unit” to the specific host 10 .
  • a unique address, for example, a WWN (World Wide Name) or MAC address, is allocated in the network to the logical unit that the host 10 under a port can access and to the host bus adapter with which the host connected to the port is equipped.
  • the port indicates an input/output function used when the disk subsystem 20 sends and receives data to and from the host 10 .
  • the management server 30 completes this modification of the system configuration, including the registration into its own configuration information database (step 316 ), and releases the occupied disk subsystems X and Y (step 317 ).
  • because the disk subsystems X and Y are controlled exclusively from steps 312 to 317 of this example, even if another system administrator T (user B) issues a setting request to the management server 30 (step 318 ), the management server 30 posts to the system administrator T that setting by the system administrator S is in progress.
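The FIG. 3 procedure (occupy, collectively assign access authority, release, and reject a concurrent administrator) can be sketched in a few lines. The function and field names, and the WWN-keyed security map, are illustrative assumptions rather than the patent's actual interfaces.

```python
def set_lun_security(subsystems, host_wwn, assignments, busy, admin):
    """Sketch of FIG. 3: assign an access authority from a specific host
    (identified here by its WWN) to predetermined logical units across
    several disk subsystems in one collective operation. 'busy' models
    the exclusive control state."""
    if busy["owner"] is not None:
        # Step 318: another administrator is currently setting.
        raise RuntimeError("setting in progress by %s" % busy["owner"])
    busy["owner"] = admin                          # steps 311-312: occupy
    try:
        for sid, lun in assignments:               # step 315: collective change
            subsystems[sid]["lun_security"].setdefault(lun, set()).add(host_wwn)
    finally:
        busy["owner"] = None                       # step 317: release
    return subsystems
```

Because the change to subsystems X and Y happens inside one occupancy window, no other administrator can interleave a conflicting setting between the two updates.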
  • the management server 30 further associates the host logical configuration information with the configuration information of the disk subsystems X and Y according to the following procedure.
  • the host logical configuration information indicates the access path information to a logical unit viewed from a file on an operating system (hereinafter merely referred to as an OS), the position of the logical unit in which the file is stored, the file size, a database and each OS.
  • the access path to the logical unit viewed from each OS can be specified using three items, a host adapter card ID, a controller ID and a logical unit number, if, for example, the OS is a UNIX-type OS.
  • Such associating is performed in order to make a file accessed from the host 10 correspond to the logical unit inside the disk subsystem 20 that stores this file, by linking an ID that indicates a physical area inside the system 20 with the information of a device path used when a system administrator incorporates the system 20 , and to manage them collectively.
  • each of the multiple hosts 10 installs the host agent 11 , and the host agent 11 is activated in synchronization with an event in the following cases.
  • the host agent 11 issues a command for identifying the access path to the logical unit from its own host 10 , to each logical unit of the disk subsystems 20 that its own host 10 can access, in order to acquire the "host logical configuration information" on the OS of the host 10 on which it runs.
  • the host agent 11 acquires the name and size of the file stored inside the logical unit and the position on the file system to which the file belongs using an OS, a database or an application interface for high-level middleware.
  • the management server 30 collects the “host logical configuration information” that each host agent 11 acquired and associates it with an internal access path contained in the configuration information of the disk subsystem 20 , then stores it in the configuration information database 321 .
  • a system administrator can check the position of the logical unit in which the file is stored by making an inquiry into the management server.
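The association just described amounts to a join: the host agent maps each file to an access-path triple (host adapter card ID, controller ID, logical unit number), and the subsystem configuration maps the same triple to an internal logical unit. A minimal sketch, with all dictionary shapes assumed for illustration:

```python
def locate_file(host_config, subsystem_config, filename):
    """Sketch: resolve which disk subsystem and logical unit hold a file,
    by joining host logical configuration information (file -> access
    path triple) with subsystem configuration (triple -> internal LU)."""
    # Host agent view: file -> (host adapter card ID, controller ID, LUN)
    path = host_config[filename]
    # Subsystem view: the same triple keyed to an internal logical unit
    return subsystem_config[path]
```

An inquiry into the management server is then a single lookup rather than a manual trace through device paths on each host.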
  • the management server 30 collects the file size data that the host agent acquires and that the application of the host 10 uses, in synchronization with a periodic inquiry into each disk subsystem 20 , and accumulates it in the configuration information database 321 of the management server 30 in time series.
  • the management server 30 similarly accumulates the contents before and after the system configuration was modified in the configuration information database of the management server 30 in time series, also when a system administrator modified the configuration of the disk subsystem 20 and when the configuration was modified by fault and maintenance events.
  • a system administrator can retrieve at any moment, from the time series data stored in the configuration information database of the management server 30 , using the host logical configuration information and the modified contents of the system configuration as keys. Accordingly, an interrelationship over time between modifications of the configuration of the disk subsystem 20 and the performance, file size and other parameters of the disk subsystem can be found and analyzed.
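Keyed retrieval over the time-series database can be sketched as records of (timestamp, kind, contents), queried by a time bound and a record kind. The class and field names below are assumptions for illustration.

```python
class ConfigHistory:
    """Sketch: configuration records accumulated in time series can be
    retrieved using time and the modified contents as keys."""

    def __init__(self):
        self._records = []        # list of (timestamp, kind, payload)

    def record(self, ts, kind, payload):
        self._records.append((ts, kind, payload))
        self._records.sort(key=lambda r: r[0])   # keep time order

    def query(self, before=None, kind=None):
        out = []
        for ts, k, payload in self._records:
            if before is not None and ts >= before:
                continue
            if kind is not None and k != kind:
                continue
            out.append((ts, k, payload))
        return out
```

With setting changes, performance samples and file sizes all in one ordered store, "what changed before time t" becomes a single query, which is exactly what the two analysis examples below rely on.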
  • the first specific example is the case where a problem is analyzed using the management server to which the present invention is applied when the following event occurred.
  • the case is considered where a certain user A uses a database and another application in the host 10 , and a system administrator added a logical unit to the disk subsystem 20 that the application uses, in order to expand the file capacity.
  • the addition of a physical unit can also follow the addition of the logical unit.
  • the disk subsystem 20 may also be used by another application B (for which higher performance than for the application A is requested) that the user B uses on another host.
  • FIG. 4 shows a measure that the system administrator S can take using the management server 30 to which the present invention is applied (using the historical data of the configuration information database 321 ) when the performance of the application was degraded after a certain point of time.
  • the information that specifies a file that an application uses and the time when the performance of the application was degraded are input to the management server 30 (step 411 ).
  • the management server 30 that received the input specifies the physical unit storage position, that is, which physical area in the disk subsystem 20 holds the logical unit that corresponds to the file, based on the data of the configuration information table 301 in the configuration information database 321 . Subsequently, other logical units that share the physical unit are retrieved (step 412 ).
  • using this retrieval result, the contents of setting changes related to the physical unit storage position before the time when the performance of the application was degraded are retrieved from the data in which a history of setting changes is accumulated, for example, the setting change historical table 302 (step 413 ).
  • the management server 30 checks whether the performance of the logical unit is degraded after the performance degradation time of the application, referring, for example, to a performance historical table 303 that indicates the performance history of a logical unit (step 414 ). If the performance is degraded, the fact that the relevant setting change is assumed to be a cause (the estimated cause and its time) is posted to the system administrator (step 415 ).
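The FIG. 4 analysis (steps 411-415) can be sketched end to end. The table and field names below (`config`, `lu_to_pu`, `setting_history`, `performance_history`) are illustrative stand-ins for tables 301-303, not the patent's actual schema.

```python
def analyze_degradation(db, filename, degraded_at):
    """Sketch of FIG. 4: starting from a file and the time its
    application slowed down, walk the configuration information
    database back to a candidate setting change."""
    # Step 412: file -> logical unit -> physical unit, then other LUs
    # that share the same physical unit.
    lu = db["config"][filename]["lu"]
    pu = db["config"][filename]["physical_unit"]
    sharers = [l for l, p in db["lu_to_pu"].items() if p == pu and l != lu]
    # Step 413: setting changes touching that physical unit before the slowdown.
    changes = [c for c in db["setting_history"]
               if c["physical_unit"] == pu and c["time"] < degraded_at]
    # Step 414: confirm the LU itself shows degraded performance afterwards.
    degraded = any(p["lu"] == lu and p["time"] >= degraded_at and p["degraded"]
                   for p in db["performance_history"])
    # Step 415: report the estimated cause to the administrator.
    return {"estimated_causes": changes, "sharing_lus": sharers} if degraded else None
```

This is exactly the correlation an ordinary performance monitoring tool cannot make, because it sees only the application's own files, not the shared physical units beneath them.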
  • the second specific example is the case where a problem is analyzed using the management server 30 to which the present invention is applied when the following event occurred.
  • a system administrator periodically, for example, quarterly, investigates the increasing tendency of the file size that an application uses, and arranges a schedule of additional capacity for the logical units that the disk subsystem 20 retains, against this increasing tendency.
  • the management server supports planning according to the following procedure.
  • the management server 30 periodically inquires of the host agent 11 about the file size and accumulates the file size in a database in time series. Subsequently, the management server 30 retrieves the association with the logical unit in which the file is stored from the contents of the configuration information database 321 . As shown in FIG. 5 , the time (t4) when the file size equals the logical unit capacity limit (c4) is predicted and posted to a system administrator as the time when the logical unit needs to be added, for example, the addition time.
  • a system administrator previously sets a file size threshold, for example, a user threshold (c3).
  • When the file size exceeds the user threshold (at t3), the system administrator is warned that the addition of the logical unit will be required in the near future.
  • as a high-level application of the management server 30 , there is an application that manages the connection modes of the hosts 10 , switches (not shown) and the disk subsystems 20 that are the components of the SAN 40 , manages the information about each component in a centralized manner, and is provided with a function of performing fault monitoring and performance display, thus performing centralized management.
  • This high-level application can acquire the time series information and history of the configuration and performance of each disk subsystem 20 collectively by making an inquiry into the management server 30 , without making any inquiry into each component.
  • This high-level application is used by multiple system administrators (or users) and the management of multiple disk subsystems can be performed by the centralized management of an exclusive control command and a configuration information database.
  • the historical management of the system configuration can be performed using the configuration information database in which the configuration information of the whole system was accumulated in time series.
  • as described above, in the configuration of multiple (M units of) disk subsystems shared by multiple (N units of) hosts, multiple system administrators (L persons) can manage the M units of disk subsystems transversely and collectively and realize predetermined setting quickly and simply.
  • the administrators can grasp the influence that a change of the system configuration has on an application executed by a host.
  • the time at which logical unit capacity should be added to the system according to plan can be determined.
  • because the configuration information of the whole storage system can be managed in a centralized manner in time series, the degradation of application performance caused by modification of the system configuration, the planned addition of logical unit capacity to the storage system, and the analysis and prediction of other events can be performed easily.

Abstract

In a configuration of multiple (M units of) disk subsystems shared by multiple (N units of) hosts, an exclusive control command that temporarily limits access to all disk subsystems is provided. Using this exclusive control command, the configuration information of all the multiple disk subsystems, for example, performance and setting changes, is acquired in time series, stored in a management server database ("configuration information database"), and managed in a centralized manner. A function that associates the file that an application uses with the "configuration information database" is provided using a function that detects the position of the file on the logical unit, and a means is provided that can retrieve from the "configuration information database" using the modified contents of the system configuration and time as keys.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a continuation application of U.S. patent application Ser. No. 10/077,966, filed on Feb. 20, 2002, now allowed, the contents of which are incorporated herein by reference.
  • FIELD OF THE INVENTION
  • The present invention relates to a centralized management art of a data storage system (hereinafter merely referred to as a storage system) when multiple computers that use information and multiple external storage systems (hereinafter referred to as disk subsystems) that store information are connected to a network and arranged separately, and more particularly to a management art of the whole storage system that extends over the multiple disk subsystems.
  • BACKGROUND OF THE INVENTION
  • In order to perform centralized management of data extending over multiple external storage systems, for example, multiple disk subsystems, the configuration information of each system must be acquired and a whole configuration that integrates the entire system must be defined. Here, the configuration information includes, for example, settings concerning an internal access path of a disk subsystem, a logical unit, the capacity or access authority of the logical unit, and data movement; settings concerning data copying between the disk subsystems; setting or acquisition of a performance control mode or performance data; setting of a maintenance method; and fault or user operation events.
  • In the past, system administrators periodically collected the configuration and performance of each disk subsystem, as well as faults, expansions and other occurrences (hereinafter referred to as events) arising in the system, using management software on a host computer (hereinafter simply referred to as a host) that uses the disk subsystem. That is, a system administrator had to connect the host computer to each disk subsystem, acquire the configuration information of each system, and then manually provide the definition and necessary settings of the whole system configuration using the management software.
  • An art for displaying, in a mapping mode, which physical unit of a disk subsystem corresponds to a logical volume that a host can access on that disk subsystem is disclosed in U.S. Pat. No. 5,973,690. However, there is no suggestion concerning transverse management across multiple disk subsystems.
  • In order to define a whole configuration that integrates the entire system, a system administrator should desirably perform the settings that span the multiple disk subsystems collectively. This is because the configuration of the whole system is defined more easily than when it is defined for each disk subsystem separately, and the number of times the configuration must be checked and redefined is reduced, thereby reducing human error. System operation can also be improved if the settings that span the multiple disk subsystems are performed collectively.
  • Consider a state in which a certain user A runs a database and another application A on a host computer, with multiple disk subsystems used as an external storage system. Because the size of the file that the application A of the user A uses has grown, a system administrator S of this external storage system is assumed to have added a logical unit (LU) to a disk subsystem.
  • However, the same disk subsystem may also be used by another application B (which requires higher performance than the application A) that a user B uses on another host.
  • In such a case, if the added logical unit happens to share a physical resource (physical unit) with the logical unit allocated for use by the performance-critical application B, the addition of this logical unit causes performance degradation in the execution of the application for which performance is critical.
  • In other words, although the addition of the logical unit that the system administrator S made for the user A is a measure for maintaining and increasing the execution performance of the application A of the user A, the measure causes degradation of the execution performance of the application B and, viewed from the performance aspect of the whole system, eventually amounts to human error.
  • A system administrator normally monitors the performance of an application using a performance monitoring tool. Because the performance monitoring tool monitors only the process operating state of the application and the read and write performance of the files that the application uses, it cannot ascertain that the addition of the aforementioned logical unit caused the performance degradation of the other application.
  • With the sudden spread of the Internet, access requests from many client terminals are increasing. These access requests appear as access from multiple hosts. A storage system that consolidates these many kinds of access also requires measures that keep pace with the demand for data capacity, and occasions for logical unit expansion in individual disk subsystems arise ever more frequently. It is desirable to predict when the logical unit that corresponds to the file used by a business-related application will exceed its usable capacity, and to arrange a schedule of planned logical unit expansion. Accordingly, the growth tendency of the file size, the position of the logical unit in which the file is stored, and the usable capacity must be investigated, and the schedule must be arranged from these relationships.
  • In the prior art, although these pieces of information were collected individually and periodically, there was no means for relating them to one another. Any countermeasure was no more than one that depended on the empirical rules of a system administrator, and the above prediction and planning were very difficult.
  • SUMMARY OF THE INVENTION
  • One object of the present invention is to provide a management art that allows multiple system administrators (L persons) to manage multiple (M units of) disk subsystems transversely and collectively, and to realize predetermined settings quickly and simply, in a configuration in which the multiple (M units of) disk subsystems are shared by multiple (N units of) hosts.
  • Another object of the present invention is to provide a management art for a disk subsystem by which the influence that a configuration modification of the disk subsystem has on the performance of an application executed by a host can be grasped.
  • A further object of the present invention is to provide a management art by which the time for a planned addition of logical unit capacity in a disk subsystem can be determined.
  • The present invention has been made in view of the above circumstances and provides a data storage system and a control method thereof having the following features.
  • 1) An exclusive control command that temporarily limits access to all multiple disk subsystems is provided.
  • 2) Using this exclusive control command, the configuration information of all of the multiple disk subsystems, for example, performance data or setting changes, is acquired in time series, stored in a database (a "configuration information database") of a management server part, and then managed in a centralized manner.
  • 3) A function of associating the file that an application uses with the “configuration information database” is provided using a function of detecting the position of the file on a logical unit.
  • 4) A means that can search the "configuration information database" using the modified contents of the system configuration or time as keys is provided.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred embodiments of the present invention will be described in detail with reference to the following drawings, wherein:
  • FIG. 1 is a drawing showing an outline of a system in which multiple host computers and multiple disk subsystems send, receive and share data via a network;
  • FIG. 2 is a drawing showing a functional block of a management server to which the present invention is applied;
  • FIG. 3 is a drawing illustrating a flow of a procedure in which desired settings that extend over the multiple disk subsystems of FIG. 2 are performed collectively;
  • FIG. 4 is an example of analysis made using the present invention and a drawing showing a flow in which a history of configuration information is traced and a cause of the performance degradation of an application is investigated using the configuration information database that the management server possesses; and
  • FIG. 5 is an example of analysis made using the present invention and a drawing showing a flow of analyzing the expansion schedule of a logical unit using a history of file size possessed by the management server.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 shows a schematic configuration of the whole system in which multiple host computers 10 are connected to multiple disk subsystems 20 that send, receive and share data via a SAN (storage area network) 40. Each disk subsystem 20 is provided with an external connection interface 21 for sending event information in order to define and refer to its own configuration, report performance and data, and post faults.
  • A management server part 30 has an interface to a local area network (LAN) 50 that is separate from the SAN 40, and can be connected to the multiple hosts 10 and the multiple disk subsystems 20. In FIG. 1, only one management server 30 is shown, but multiple management servers may also be provided. Further, the management server 30 may be installed inside a disk subsystem 20, or may be positioned at a place physically separated from the disk subsystems 20. The term "management server" includes a part of an external storage system having a server function, and is accordingly also described as a "management server part".
  • Consider the case where the configuration of all of the multiple disk subsystems 20 is defined collectively from a certain management server part 30, spanning these multiple subsystems. The management server part is simply described as the management server 30 below. An exclusive control command is issued from the management server 30 to the subsystems 20 so that the management server 30 becomes the only setting means. Here, the exclusive control command is a command that occupies the multiple disk subsystems 20, selected as desired, during a time zone. The occupancy time may be as long as about one hour; however, by creating the setting information separately in advance, a control method is prepared by which the setting is performed within a short occupancy time. The management server 30 also has a function of checking that the setting terminates normally.
  • The functional block of the management server 30 is described with reference to FIG. 2.
  • A user management layer 31 manages multiple users A to C connected to the management server 30. Here, a system administrator is included in a user.
  • An object management layer 32 manages acquisition of the configuration information of each disk subsystem 20 and a setting request from the user. The object management layer 32 has a configuration information database 321.
  • An agent management layer 33 issues an exclusive control command to each disk subsystem 20 via a subsystem interface 341 in accordance with a request from the object management layer 32.
  • An interface layer 34 has the subsystem interface 341 that performs data sending and receiving with each disk subsystem 20 and a host interface 342 that controls access with each host agent 11.
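The four layers described above can be illustrated with a short sketch; this is not part of the specification, and all class and method names are hypothetical:

```python
class SubsystemInterface:
    """Stand-in for the subsystem interface (341): records commands sent to subsystems."""
    def __init__(self):
        self.sent = []  # list of (subsystem_id, command) pairs

    def send(self, subsystem_id, command):
        self.sent.append((subsystem_id, command))


class UserManagementLayer:
    """Layer 31: manages the users (including administrators) connected to the server."""
    def __init__(self):
        self.permitted = set()

    def grant(self, user):
        self.permitted.add(user)

    def is_permitted(self, user):
        return user in self.permitted


class ObjectManagementLayer:
    """Layer 32: holds the configuration information database (321)."""
    def __init__(self):
        self.configuration_db = {}  # subsystem id -> list of (timestamp, config)

    def store(self, subsystem_id, timestamp, config):
        self.configuration_db.setdefault(subsystem_id, []).append((timestamp, config))


class AgentManagementLayer:
    """Layer 33: issues exclusive control commands via the subsystem interface."""
    def __init__(self, iface):
        self.iface = iface

    def lock(self, subsystem_ids):
        for sid in subsystem_ids:
            self.iface.send(sid, "EXCLUSIVE_LOCK")

    def release(self, subsystem_ids):
        for sid in subsystem_ids:
            self.iface.send(sid, "RELEASE")
```

The object management layer requests exclusive control, the agent management layer issues the command through the interface layer, and the user management layer gates which administrator may drive the sequence.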
  • While exclusive control is being performed, the object management layer 32 acquires the configuration, performance and fault and other event information of each disk subsystem 20 and stores them in the configuration information database 321.
  • Only a system administrator (user) whose access has been permitted by the user management layer 31 performs changes, expansions or deletions of the parameters of the multiple disk subsystems 20 stored in the configuration information database 321, spanning those subsystems 20. As a result, the configuration information database 321 and the configuration information of the actual disk subsystems 20 match each other at any predetermined point of time.
  • When the configuration modification, expansion or deletion of the subsystems 20, including the registration into its own configuration information database, has been completed by the object management layer 32, the management server 30 releases all of the occupied disk subsystems 20 through the agent management layer 33.
  • Here, the information that the management server 30 handles in the configuration information database 321 of the object management layer 32 is configuration information relating to: settings concerning an internal access path of each disk subsystem 20, a logical unit, its capacity and access authority, and data movement; settings concerning data copying between disk subsystems; setting of the performance or control of each disk subsystem; acquisition of the performance data of each disk subsystem; setting of a maintenance method; and fault and user operation events.
  • <Information Acquisition Timing>
  • Configuration information of a disk subsystem 20 and a host 10 is acquired before a configuration instruction is issued to the subsystem 20, that is, when the sole permitted system administrator (user) accesses the management server 30 and the management server 30 defines the configuration of the subsystem 20. Acquisition is also triggered when a fault, maintenance or other event of the disk subsystem 20 occurs. Specifically, acquisition is triggered in the following cases.
  • 1) When the event is recognized and information is acquired by the management server 30 through a periodic inquiry into each disk subsystem 20.
  • 2) When fault and maintenance events detected by the disk subsystem 20 are posted from the subsystem interface 341 (FIG. 2) to the agent management layer 33.
  • In case 2), the agent management layer 33 to which an event such as a fault was posted forwards the event to the object management layer 32 of the upper layer using an interrupt function, and the management server 30 recognizes, through the object management layer 32 that received the event, that the state of the disk subsystem 20 has changed. After the event is recognized, the management server 30 acquires the configuration information of the subsystem 20 and updates the configuration information database.
  • Besides, when the configuration of a disk subsystem 20 is modified by automatic expansion, fault or maintenance events, the management server 30 identifies the modification and registers it in its own configuration information database 321. Here, if a flag is set for the disk subsystem 20 in the configuration information database that the management server 30 possesses and the database is managed accordingly, the subsequent processing, in particular the acquisition of information, is performed efficiently by inquiring only into those subsystems 20 whose flag is on in the database.
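The flag-based optimization just described — marking only subsystems whose state changed and inquiring only into the marked ones — can be sketched as follows (a minimal illustration; the names are hypothetical):

```python
class ConfigInfoDatabase:
    def __init__(self, subsystem_ids):
        # One "changed" flag per disk subsystem; set when an event
        # (fault, maintenance, automatic expansion) is reported.
        self.changed = {sid: False for sid in subsystem_ids}
        self.snapshots = {sid: [] for sid in subsystem_ids}

    def mark_changed(self, sid):
        self.changed[sid] = True

    def subsystems_to_poll(self):
        # Inquire only into subsystems whose flag is on,
        # instead of polling every subsystem every time.
        return [sid for sid, dirty in self.changed.items() if dirty]

    def record(self, sid, timestamp, config):
        # Re-acquired configuration is appended in time series;
        # the flag is cleared once the database is up to date.
        self.snapshots[sid].append((timestamp, config))
        self.changed[sid] = False
```

With many subsystems, this turns each acquisition cycle from a full sweep into a visit to only the subsystems that actually changed.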
  • <Information Acquisition Method>
  • An example of a procedure in which desired settings are performed collectively across multiple disk subsystems using the above method is described with reference to FIG. 3. This procedure shows one of multiple system administrators defining the configurations of multiple disk subsystems 20 (two units, X and Y, here). In this example, a function unique to a disk subsystem is used that assigns an access authority from a host to a logical unit, prevents invalid access to the logical unit, and thereby protects data.
  • The two disk subsystems 20 (X and Y) connected to the single specific host 10 (FIG. 1) possess a predetermined number of logical units. Under an environment where the multiple hosts 10 share the multiple disk subsystems 20, security must be set so that a logical unit that the specific host 10 accesses cannot be accessed from another host 10.
  • In FIG. 3, the system administrator S (user A) logs in to the management server 30 and requests access permission (step 311). Upon granting this access permission, the management server 30 issues an exclusive control command to the disk subsystems 20 (X and Y) so that the management server 30 becomes the only control server that enables configuration setting of the whole system (step 312).
  • The management server 30 acquires the configuration information of each of the disk subsystems X and Y when the exclusive control command is issued (step 313) and stores it in the configuration information database 321 in FIG. 2.
  • The sole system administrator S (user A) whose access to the management server 30 has been permitted refers to the configuration information of the disk subsystems X and Y stored in the configuration information database 321 (step 314) and modifies the system configuration collectively across the disk subsystems X and Y (step 315).
  • Here, the modification of the configuration in this example means that the system administrator S assigns to the specific host 10 an "access authority from the specific host 10 to a predetermined logical unit". Specifically, a unique network address, for example, a WWN (World Wide Name) or MAC address, is allocated to the logical unit that the host 10 under a port can access and to the host bus adapter with which the host connected to the port is equipped. Here, the port is an input/output function used when the disk subsystem 20 sends and receives data to and from the host 10.
  • The management server 30 completes this modification of the system configuration, including the registration into its own configuration information database (step 316), and releases the occupied disk subsystems X and Y (step 317).
  • While the disk subsystems X and Y are controlled exclusively from step 312 to step 317 of this example, even if another system administrator T (user B) issues a setting request to the management server 30 (step 318), the management server 30 notifies the system administrator T that setting by the system administrator S is in progress.
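Steps 312 to 317, together with the refusal at step 318, amount to a coarse-grained lock over the selected subsystems with a busy notification to any second administrator; a minimal sketch (names hypothetical):

```python
class ManagementServer:
    def __init__(self):
        self.holder = None   # administrator currently holding exclusive control
        self.locked = set()  # subsystems occupied by the exclusive control command

    def begin_setting(self, admin, subsystem_ids):
        if self.holder is not None:
            # Step 318: a second setting request is refused while
            # another administrator's setting is in progress.
            return f"busy: setting by {self.holder} is in progress"
        self.holder = admin
        self.locked = set(subsystem_ids)  # step 312: occupy the selected subsystems
        return "granted"

    def end_setting(self, admin):
        if admin == self.holder:
            self.locked.clear()  # step 317: release the occupied subsystems
            self.holder = None
```

The point of the design is that, for the duration of the lock, the management server is the only path through which configuration can change, so the configuration information database and the real subsystems cannot diverge.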
  • The management server 30 further associates the host logical configuration information with the configuration information of the disk subsystems X and Y according to the following procedure. Here, the host logical configuration information means the access path information to a logical unit as viewed from a file on an operating system (hereinafter simply referred to as an OS), the position of the logical unit in which the file is stored, the file size, the database, and each OS.
  • Besides, the access path to a logical unit as viewed from each OS can be specified using three items, namely a host adapter card ID, a controller ID and a logical unit number, if, for example, the OS is a UNIX-family OS.
  • This association is performed in order to make a file accessed from the host 10 correspond to the logical unit inside the disk subsystem 20 that stores the file, by linking an ID that indicates a physical area inside the subsystem 20 with the information of the device path used when a system administrator incorporates the subsystem 20, and to manage them collectively.
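The association just described can be viewed as a join over the access-path triple (host adapter card ID, controller ID, logical unit number); the sketch below illustrates that idea, with all field names and sample values invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessPath:
    host_adapter_id: int  # host adapter card ID
    controller_id: int
    lun: int              # logical unit number

# "Host logical configuration information" gathered by a host agent:
# file name -> access path of the logical unit that stores it.
host_view = {"/data/orders.db": AccessPath(0, 1, 3)}

# Subsystem configuration information: access path -> physical area ID.
subsystem_view = {AccessPath(0, 1, 3): "physical-unit-7"}

def physical_location(filename):
    """Answer 'which physical area stores this file?' by joining both views."""
    path = host_view[filename]
    return subsystem_view[path]
```

Once both views are stored in the configuration information database, an administrator's inquiry about a file resolves directly to a physical area inside a subsystem.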
  • <Activation of Host Agent>
  • The multiple hosts 10 (FIG. 1) have the host agent 11 installed, and the host agent 11 is activated in synchronization with the following events:
  • 1) When the management server 30 modifies the configuration of a disk subsystem 20 according to a request of a system administrator and inquires into each disk subsystem 20 to acquire system configuration information
  • 2) When the configuration of a disk subsystem 20 is modified by fault, maintenance or other events, and the management server 30 recognizes the status change of the disk subsystem 20 and inquires into each disk subsystem 20 to acquire system configuration information
  • The host agent 11 issues a command for identifying the access path to a logical unit from its own host 10, addressed to each logical unit of the disk subsystems 20 that its own host 10 can access, in order to acquire the "host logical configuration information" on the OS of the host 10 on which the host agent runs.
  • The host agent 11 acquires the name and size of the files stored inside the logical unit, and their positions on the file system to which they belong, using an interface of the OS, a database, or other high-level middleware.
  • The management server 30 collects the “host logical configuration information” that each host agent 11 acquired and associates it with an internal access path contained in the configuration information of the disk subsystem 20, then stores it in the configuration information database 321. A system administrator can check the position of the logical unit in which the file is stored by making an inquiry into the management server.
  • The management server 30 collects the file size data that each host agent acquires for the files that the applications of the hosts 10 use, in synchronization with the periodic inquiry into each disk subsystem 20, and accumulates it in the configuration information database 321 of the management server 30 in time series.
  • The management server 30 similarly accumulates, in time series in its configuration information database, the contents before and after a modification of the system configuration, both when a system administrator modifies the configuration of a disk subsystem 20 and when the configuration is modified by fault or maintenance events.
  • As a result, a system administrator can at any moment search the time series data stored in the configuration information database of the management server 30, using the host logical configuration information and the modified contents of the system configuration as keys. Accordingly, the interrelationship between modifications of the configuration of a disk subsystem 20 over time and the performance, file size and other parameters of the disk subsystem can be found and analyzed.
  • SPECIFIC EXAMPLE 1 OF ANALYSIS
  • Specific examples are described below.
  • The first specific example is a case where a problem is analyzed, using the management server to which the present invention is applied, when the following event occurred.
  • Consider the case where a certain user A uses a database and another application on a host 10, and a system administrator added a logical unit to the disk subsystem 20 that the application uses, in order to expand the file capacity. In this case, the addition of a physical unit may accompany the addition of the logical unit. However, the same disk subsystem 20 may also be used by another application B (which requires higher performance than the application A) that a user B uses on another host.
  • In such a case, if the added logical unit happens to share a physical resource (physical unit) with the logical unit allocated for use by the high-performance application B, the addition of this logical unit causes performance degradation in the execution of the performance-critical application B. In the past, when a system administrator monitored the performance of an application, an external performance monitoring tool was used because monitoring could not be performed from a management server. This type of tool monitors the process operating state of the application and the read and write performance to and from the files used. However, it could not ascertain that the addition of such a logical unit was the cause of the performance degradation of the application.
  • FIG. 4 shows a measure that the system administrator S can take, using the management server 30 to which the present invention is applied (that is, using the historical data of the configuration information database 321), when the performance of an application has been degraded after a certain point of time.
  • The information that specifies a file that the application uses and the time when the performance of the application was degraded are input to the management server 30 (step 411). The management server 30 that received the input identifies, based on the data of the configuration information table 301 in the configuration information database 321, the physical unit storage position, that is, the physical area in the disk subsystem 20 at which the logical unit corresponding to the file is positioned. Subsequently, other logical units that share that physical unit are retrieved (step 412).
  • Using this retrieval result, the contents of setting changes related to the physical unit storage position, made before the time when the performance of the application was degraded, are retrieved from the data in which the setting change history is accumulated, for example, the setting change history table 302 (step 413).
  • To determine whether a setting change is related to the performance degradation of the application, the management server 30 checks, referring, for example, to a performance history table 303 that indicates the performance history of a logical unit, that the performance of the logical unit was degraded after the performance degradation time of the application (step 414). If the performance was degraded, the fact that the relevant setting change is presumed to be the cause (the estimated cause and the time of the addition) is posted to the system administrator (step 415).
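The analysis of steps 411 to 415 can be sketched as a query over three history tables; the table layouts below are assumptions for illustration, not structures defined in the specification:

```python
def find_suspect_changes(config_table, change_history, perf_history,
                         filename, degraded_at):
    """Return setting changes presumed to have caused the degradation.

    config_table:   file name -> physical unit storing its logical unit (table 301)
    change_history: list of {"physical_unit", "logical_unit", "time", "change"} (table 302)
    perf_history:   logical unit -> list of {"time", "degraded"} samples (table 303)
    """
    # Step 412: locate the physical unit that stores the file's logical unit.
    physical_unit = config_table[filename]
    # Step 413: collect setting changes touching that physical unit
    # made before the degradation time of the application.
    suspects = [c for c in change_history
                if c["physical_unit"] == physical_unit and c["time"] < degraded_at]
    # Step 414: keep only changes whose logical unit shows degraded
    # performance at or after the application's degradation time.
    return [c for c in suspects
            if any(p["time"] >= degraded_at and p["degraded"]
                   for p in perf_history.get(c["logical_unit"], []))]
    # Step 415 would post the surviving changes as estimated causes.
```

The join across the three tables is exactly what a per-application monitoring tool cannot do, since it never sees the subsystem-side setting history.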
  • SPECIFIC EXAMPLE 2 OF ANALYSIS
  • The second specific example is a case where a problem is analyzed, using the management server 30 to which the present invention is applied, when the following event occurred.
  • A system administrator periodically, for example quarterly, investigates the growth tendency of the file size that an application uses, and arranges a schedule for adding capacity to the logical units that the disk subsystem 20 retains against this growth. On this occasion, the management server supports the planning according to the following procedure.
  • The management server 30 periodically inquires of the host agent 11 about the file size and accumulates the file size in a database in time series. Subsequently, the management server 30 retrieves, from the contents of the configuration information database 321, the association with the logical unit in which the file is stored. As shown in FIG. 5, based on the capacity (c4) of the logical unit and the relationship between the accumulated file size and time, for example, the data of the start capacity (c1, t1) and the latest capacity (c2, t2), the time (t4) when the file size will equal the logical unit capacity limit (c4) is predicted and posted to a system administrator as the time when a logical unit needs to be added, that is, the addition time.
  • A system administrator sets a file size threshold, for example a user threshold (c3), in advance. When the file size exceeds the user threshold (at t3), the system administrator is warned that the addition of a logical unit will be required in the near future.
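The prediction of FIG. 5 amounts to a linear extrapolation from the start point (c1, t1) and the latest point (c2, t2) of the accumulated file size to the capacity limit c4; a sketch, assuming roughly linear growth:

```python
def predict_addition_time(t1, c1, t2, c2, c4):
    """Extrapolate the file-size growth line to the capacity limit c4.

    Returns the predicted time t4 at which the file size equals c4,
    assuming roughly linear growth between (t1, c1) and (t2, c2);
    returns None if the file is not growing.
    """
    growth_rate = (c2 - c1) / (t2 - t1)  # capacity consumed per unit time
    if growth_rate <= 0:
        return None                      # no growth: no addition needed
    return t2 + (c4 - c2) / growth_rate  # t4 in FIG. 5

def exceeded_user_threshold(current_size, c3):
    # When the file size exceeds the user threshold c3 (at t3 in FIG. 5),
    # the administrator is warned that a logical unit must be added soon.
    return current_size > c3
```

For example, with c1 = 100 at t1 = 0 and c2 = 200 at t2 = 10, a limit of c4 = 400 yields a predicted addition time of t4 = 30.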
  • Further, as a high-level application of the management server 30, there is an application that manages the connection modes of the hosts 10, the switches (not shown) and the disk subsystems 20 that are the components of the SAN 40, manages the information about each component in a centralized manner, and is provided with functions of fault monitoring and performance display, thereby performing centralized management. This high-level application can collectively acquire the time series information and history of the configuration and performance of each disk subsystem 20 by making an inquiry into the management server 30, without making any inquiry into each component.
  • This high-level application is used by multiple system administrators (or users), and the management of multiple disk subsystems can be performed through the centralized management of an exclusive control command and a configuration information database.
  • According to the above embodiment, in a configuration of multiple disk subsystems shared by multiple hosts, there is the effect that multiple system administrators can collectively define the system configuration that extends over the multiple disk subsystems.
  • Further, the historical management of the system configuration can be performed using the configuration information database in which the configuration information of the whole system is accumulated in time series.
  • Furthermore, there is the effect that the influence that a modification of the system configuration has on an application can be estimated correctly, and the cause of performance degradation of the application can be investigated.
  • Furthermore, there is the effect that the time for modifying the system configuration and the time for adding logical unit capacity can be planned, estimated and posted.
  • According to the invention, in a configuration in which multiple (M units of) disk subsystems are shared by multiple (N units of) hosts, multiple system administrators (L persons) can manage the M units of disk subsystems transversely and collectively, and realize predetermined settings quickly and simply.
  • The administrators can grasp the influence that a change of the system configuration has on an application executed by a host. The time for a planned addition of logical unit capacity in the system can be determined.
  • Because the configuration information of the whole storage system can be managed in a centralized manner in time series, the degradation of application performance caused by a modification of the system configuration, the planned addition of logical unit capacity to the storage system, and the analysis and prediction of other events can be performed easily.
  • Having described a preferred embodiment of the invention with reference to the accompanying drawings, it is to be understood that the invention is not limited to the embodiments and that various changes and modifications could be effected therein by one skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.

Claims (1)

1. A control method of a data storage system in which multiple external storage systems that store information are connected to a first network and each of said multiple external storage systems is arranged separately, comprising:
generating an interrupt by an external storage system to a management server; and
issuing an exclusive control command by said management server to said external storage system; wherein said exclusive control command temporarily limits access to said external storage system such that said management server is the only control server that enables configuration setting of the data storage system;
wherein said management server acquires configuration information of said external storage systems in point of time series and stores said configuration information in the database managed by said management server using said exclusive control command, and
wherein the time series acquisition is triggered by simultaneous and periodic inquiries into the multiple external storage systems.
US11/976,484 2001-09-27 2007-10-25 Data storage system and control method thereof Abandoned US20080065850A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/976,484 US20080065850A1 (en) 2001-09-27 2007-10-25 Data storage system and control method thereof

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2001-295397 2001-09-27
JP2001295397A JP2003108420A (en) 2001-09-27 2001-09-27 Data storage system and method of controlling the system
US10/077,966 US7305462B2 (en) 2001-09-27 2002-02-20 Data storage system and control method thereof
US11/976,484 US20080065850A1 (en) 2001-09-27 2007-10-25 Data storage system and control method thereof

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/077,966 Continuation US7305462B2 (en) 2001-09-27 2002-02-20 Data storage system and control method thereof

Publications (1)

Publication Number Publication Date
US20080065850A1 true US20080065850A1 (en) 2008-03-13

Family

ID=19116840

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/077,966 Expired - Fee Related US7305462B2 (en) 2001-09-27 2002-02-20 Data storage system and control method thereof
US11/976,484 Abandoned US20080065850A1 (en) 2001-09-27 2007-10-25 Data storage system and control method thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/077,966 Expired - Fee Related US7305462B2 (en) 2001-09-27 2002-02-20 Data storage system and control method thereof

Country Status (2)

Country Link
US (2) US7305462B2 (en)
JP (1) JP2003108420A (en)

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003108420A (en) * 2001-09-27 2003-04-11 Hitachi Ltd Data storage system and method of controlling the system
US7043619B1 (en) * 2002-01-14 2006-05-09 Veritas Operating Corporation Storage configurator for determining an optimal storage configuration for an application
JP2003345531A (en) * 2002-05-24 2003-12-05 Hitachi Ltd Storage system, management server, and its application managing method
JP4185346B2 (en) * 2002-10-18 2008-11-26 株式会社日立製作所 Storage apparatus and configuration setting method thereof
US7546333B2 (en) * 2002-10-23 2009-06-09 Netapp, Inc. Methods and systems for predictive change management for access paths in networks
JP2004192305A (en) * 2002-12-11 2004-07-08 Hitachi Ltd METHOD AND SYSTEM FOR MANAGING iSCSI STORAGE
US7184933B2 (en) * 2003-02-28 2007-02-27 Hewlett-Packard Development Company, L.P. Performance estimation tool for data storage systems
JP2004302751A (en) * 2003-03-31 2004-10-28 Hitachi Ltd Method for managing performance of computer system and computer system managing performance of storage device
JP4285058B2 (en) * 2003-04-21 2009-06-24 株式会社日立製作所 Network management program, management computer and management method
US8261037B2 (en) * 2003-07-11 2012-09-04 Ca, Inc. Storage self-healing and capacity planning system and method
JP2005038071A (en) * 2003-07-17 2005-02-10 Hitachi Ltd Management method for optimizing storage capacity
US6912482B2 (en) * 2003-09-11 2005-06-28 Veritas Operating Corporation Data storage analysis mechanism
US7827362B2 (en) * 2004-08-24 2010-11-02 Symantec Corporation Systems, apparatus, and methods for processing I/O requests
US7730222B2 (en) * 2004-08-24 2010-06-01 Symantec Operating System Processing storage-related I/O requests using binary tree data structures
US7577807B2 (en) * 2003-09-23 2009-08-18 Symantec Operating Corporation Methods and devices for restoring a portion of a data store
US7904428B2 (en) * 2003-09-23 2011-03-08 Symantec Corporation Methods and apparatus for recording write requests directed to a data store
US7991748B2 (en) 2003-09-23 2011-08-02 Symantec Corporation Virtual data store creation and use
US7409587B2 (en) * 2004-08-24 2008-08-05 Symantec Operating Corporation Recovering from storage transaction failures using checkpoints
US7631120B2 (en) * 2004-08-24 2009-12-08 Symantec Operating Corporation Methods and apparatus for optimally selecting a storage buffer for the storage of data
US7287133B2 (en) * 2004-08-24 2007-10-23 Symantec Operating Corporation Systems and methods for providing a modification history for a location within a data store
US7725760B2 (en) * 2003-09-23 2010-05-25 Symantec Operating Corporation Data storage system
US7239581B2 (en) * 2004-08-24 2007-07-03 Symantec Operating Corporation Systems and methods for synchronizing the internal clocks of a plurality of processor modules
US7577806B2 (en) * 2003-09-23 2009-08-18 Symantec Operating Corporation Systems and methods for time dependent data storage and recovery
US8560671B1 (en) 2003-10-23 2013-10-15 Netapp, Inc. Systems and methods for path-based management of virtual servers in storage network environments
JP2005149336A (en) 2003-11-19 2005-06-09 Hitachi Ltd Storage management method and device therefor
JP4516306B2 (en) * 2003-11-28 2010-08-04 株式会社日立製作所 How to collect storage network performance information
JP4575689B2 (en) * 2004-03-18 2010-11-04 株式会社日立製作所 Storage system and computer system
JP4585217B2 (en) * 2004-03-29 2010-11-24 株式会社日立製作所 Storage system and control method thereof
JP4640770B2 (en) * 2004-10-15 2011-03-02 株式会社日立製作所 Control device connected to external device
JP4643590B2 (en) 2004-11-29 2011-03-02 富士通株式会社 Virtual volume transfer program
US20060271656A1 (en) * 2005-05-24 2006-11-30 Yuichi Yagawa System and method for auditing storage systems remotely
JP4733461B2 (en) * 2005-08-05 2011-07-27 株式会社日立製作所 Computer system, management computer, and logical storage area management method
EP2492813A3 (en) * 2005-09-27 2013-01-30 Onaro Method And Systems For Validating Accessibility And Currency Of Replicated Data
CN103927238B (en) * 2005-10-14 2017-04-12 塞门铁克操作公司 Technique For Timeline Compression In Data Store
US8484365B1 (en) * 2005-10-20 2013-07-09 Netapp, Inc. System and method for providing a unified iSCSI target with a plurality of loosely coupled iSCSI front ends
US7624178B2 (en) * 2006-02-27 2009-11-24 International Business Machines Corporation Apparatus, system, and method for dynamic adjustment of performance monitoring
US7676702B2 (en) * 2006-08-14 2010-03-09 International Business Machines Corporation Preemptive data protection for copy services in storage systems and applications
US7882393B2 (en) * 2007-03-28 2011-02-01 International Business Machines Corporation In-band problem log data collection between a host system and a storage system
US7779308B2 (en) * 2007-06-21 2010-08-17 International Business Machines Corporation Error processing across multiple initiator network
EP2251790A1 (en) * 2008-03-04 2010-11-17 Mitsubishi Electric Corporation Server device, method of detecting failure of server device, and program of detecting failure of server device
JP5081718B2 (en) * 2008-05-20 2012-11-28 株式会社日立製作所 Computer system, management server, and configuration information acquisition method
JP5412882B2 (en) 2009-03-04 2014-02-12 富士通株式会社 Logical volume configuration information providing program, logical volume configuration information providing method, and logical volume configuration information providing apparatus
JP5170055B2 (en) 2009-10-09 2013-03-27 富士通株式会社 Processing method, storage system, information processing apparatus, and program
US8214551B2 (en) * 2010-01-09 2012-07-03 International Business Machines Corporation Using a storage controller to determine the cause of degraded I/O performance
US8578108B2 (en) * 2010-08-03 2013-11-05 International Business Machines Corporation Dynamic look-ahead extent migration for tiered storage architectures
JP5425117B2 (en) 2011-01-26 2014-02-26 株式会社日立製作所 Computer system, management method thereof, and program
JP2013025742A (en) * 2011-07-26 2013-02-04 Nippon Telegr & Teleph Corp <Ntt> Distributed file management device, distributed file management method and program
US10489352B2 (en) * 2015-11-16 2019-11-26 International Business Machines Corporation Software discovery for software on shared file systems
CN106371327B (en) * 2016-09-28 2020-07-31 北京小米移动软件有限公司 Method and device for sharing control right
CN106681668A (en) * 2017-01-12 2017-05-17 郑州云海信息技术有限公司 Hybrid storage system and storage method based on solid state disk caching

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5625795A (en) * 1994-05-13 1997-04-29 Mitsubishi Denki Kabushiki Kaisha Exclusive control unit for a resource shared among computers
US5973690A (en) * 1997-11-07 1999-10-26 Emc Corporation Front end/back end device visualization and manipulation
US6212520B1 (en) * 1997-10-16 2001-04-03 Fujitsu Limited Database management system based on client/server architecture and storage medium storing a program therefor
US6308243B1 (en) * 1997-07-08 2001-10-23 Sanyo Electric Co., Ltd. Method and system for controlling exclusive access to shared resources in computers
US6363457B1 (en) * 1999-02-08 2002-03-26 International Business Machines Corporation Method and system for non-disruptive addition and deletion of logical devices
US6538669B1 (en) * 1999-07-15 2003-03-25 Dell Products L.P. Graphical user interface for configuration of a storage system
US6671776B1 (en) * 1999-10-28 2003-12-30 Lsi Logic Corporation Method and system for determining and displaying the topology of a storage array network having multiple hosts and computer readable medium for generating the topology
US6845395B1 (en) * 1999-06-30 2005-01-18 Emc Corporation Method and apparatus for identifying network devices on a storage network
US7305462B2 (en) * 2001-09-27 2007-12-04 Hitachi, Ltd. Data storage system and control method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3011035B2 (en) * 1994-12-08 2000-02-21 株式会社日立製作所 Computer system
US6253240B1 (en) * 1997-10-31 2001-06-26 International Business Machines Corporation Method for producing a coherent view of storage network by a storage network manager using data storage device configuration obtained from data storage devices
JP4232283B2 (en) * 1999-08-10 2009-03-04 ソニー株式会社 Access history presentation method, access history presentation device, resource provision method and resource provision device, and computer-readable recording medium recording a program
US6598179B1 (en) * 2000-03-31 2003-07-22 International Business Machines Corporation Table-based error log analysis
Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7849278B2 (en) 2004-11-01 2010-12-07 Hitachi, Ltd Logical partition conversion for migration between storage units
US20080052433A1 (en) * 2004-12-22 2008-02-28 Hitachi, Ltd. Storage system
US7822894B2 (en) * 2004-12-22 2010-10-26 Hitachi, Ltd Managing storage system configuration information
US20080140931A1 (en) * 2006-12-08 2008-06-12 Fujitsu Limited Disk array system, disk array method, and computer product
US20120221729A1 (en) * 2011-02-24 2012-08-30 Hitachi, Ltd. Computer system and management method for the computer system and program
US8782191B2 (en) * 2011-02-24 2014-07-15 Hitachi, Ltd. Computer system having representative management computer and management method for multiple target objects
US9088528B2 (en) 2011-02-24 2015-07-21 Hitachi, Ltd. Computer system and management method for the computer system and program
CN103516761A (en) * 2012-06-29 2014-01-15 上海斐讯数据通信技术有限公司 Time-sharing control method for server accessed by multiple terminals and cloud computing system

Also Published As

Publication number Publication date
JP2003108420A (en) 2003-04-11
US7305462B2 (en) 2007-12-04
US20030061331A1 (en) 2003-03-27

Similar Documents

Publication Publication Date Title
US7305462B2 (en) Data storage system and control method thereof
US7096336B2 (en) Information processing system and management device
US10191675B2 (en) Methods and system of pooling secondary storage devices
CN100430914C (en) Storing system having vitual source
US7124139B2 (en) Method and apparatus for managing faults in storage system having job management function
US7502902B2 (en) Storage system and data movement method
US7702962B2 (en) Storage system and a method for dissolving fault of a storage system
US20070283091A1 (en) Method, computer and computer system for monitoring performance
JP2008077325A (en) Storage device and method for setting storage device
US20080065829A1 (en) Storage system, storage system control method, and storage controller
US6823348B2 (en) File manager for storing several versions of a file
US20060015871A1 (en) Storage system management software cooperation method
JP2000089984A (en) Multifile management system

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION