US20090013213A1 - Systems and methods for intelligent disk rebuild and logical grouping of san storage zones - Google Patents

Systems and methods for intelligent disk rebuild and logical grouping of san storage zones Download PDF

Info

Publication number
US20090013213A1
US20090013213A1 (Application No. US 12/167,249)
Authority
US
United States
Prior art keywords
rebuilding
storage
drives
data
replacement drive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/167,249
Inventor
Dean Kalman
Jeffrey MacFarland
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Steel Excel Inc
Original Assignee
Adaptec Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Adaptec Inc filed Critical Adaptec Inc
Priority to US12/167,249 priority Critical patent/US20090013213A1/en
Assigned to ADAPTEC, INC. reassignment ADAPTEC, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KALMAN, DEAN, MACFARLAND, JEFFREY
Publication of US20090013213A1 publication Critical patent/US20090013213A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4405Initialisation of multiprocessor systems

Abstract

A method of rebuilding a replacement drive used in a RAID group of drives is disclosed. The rebuilding method includes tracking data modification operations continuously during use of the drives. The method also includes saving the tracked data modifications to a log in persistent storage, where the tracked data modifications are associated with stripe data present on the drives. A failed one of the drives is then rebuilt onto a replacement drive. The rebuilding is facilitated by referencing the log from the persistent storage, with the log enabling reads of only those portions of stripe data on the surviving drives that were written and omitting reads of portions where no data was written. Thus, the rebuilding only rebuilds the written stripe data to the replacement drive. Also provided is a zoning method, which enables logical zone creation from storage area networks.

Description

    CLAIM OF PRIORITY
  • This application claims the benefit of (1) U.S. Provisional Application No. 60/947,851, filed on Jul. 3, 2007, and entitled “Systems and Methods for Automatic Storage Initiators Grouping in a Multi-Path Storage Environment;” (2) U.S. Provisional Application No. 60/947,878, filed on Jul. 3, 2007, and entitled “Systems and Methods for Server-Wide Initiator Grouping in a Multi-Path Storage Environment;” (3) U.S. Provisional Patent Application No. 60/947,881, filed on Jul. 3, 2007, and entitled “Systems and Methods for Intelligent Disk Rebuild;” (4) U.S. Provisional Patent Application No. 60/947,884, filed on Jul. 3, 2007, and entitled “Systems and Methods for Logical Grouping of San Storage Zones;” and (5) U.S. Provisional Patent Application No. 60/947,886, filed on Jul. 3, 2007, and entitled “Systems and Methods for Automatic Provisioning of Storage and Operating System Installation,” the disclosures of which are incorporated herein by reference.
  • FIELD OF THE INVENTION
  • Embodiments of this invention generally relate to replacing a failed disk drive that is part of a RAID drive group, rebuilding the replaced disk drive, and creating logical groupings of SAN storage.
  • BACKGROUND OF THE INVENTION
  • When a drive fails that is part of a RAID 1, RAID 5, or RAID 6 drive group, the failing drive should be replaced. Once the failing drive is replaced, RAID controllers go through a process called rebuild. For RAID 1, this would involve a copy operation from the surviving drive to the replaced drive. For RAID 5 and RAID 6, this would involve a reconstruction of the data or parity from the surviving drives to the replaced drive.
  • Currently storage is allocated from individual storage enclosures. When provisioning the storage in a SAN environment, the user must understand the location, capabilities, reliability and access control associated with each storage enclosure. Therefore, the user needs to keep track of each storage enclosure for its location, reliability, capabilities, and access control characteristics.
  • In view of these issues, embodiments of the invention arise.
  • SUMMARY
  • Broadly speaking, embodiments of the invention provide methods and systems for intelligent rebuilding of the replaced disk drive after disk failure, and creating SAN storage zones to logically group a plurality of storage devices.
  • In one embodiment, with the increase in disk drive sizes, rebuild times are becoming exorbitantly long, taking many hours or days. Long rebuild times are a detriment since they impact overall RAID controller performance and, in addition, leave user data exposed without protection. If, for example, a second drive fails while a RAID 5 drive group is rebuilding, the drive group will go offline and the data on that drive group will be lost. Speeding up rebuild times is therefore an essential requirement going forward. In this embodiment, rebuild times are sped up by using a persistent host-write tracking log. The log is configured to keep track of which areas of the drive group have been written by the host since the drive group was constructed. As a result, there is no need to reconstruct an unwritten area since there is no data to reconstruct.
  • In another embodiment, a method of rebuilding a replacement drive used in a RAID group of drives is disclosed. The method includes tracking data modification operations continuously during use of the drives. The method also includes saving the tracked data modifications to a log in persistent storage, where the tracked data modifications are associated with stripe data present on the drives. A failed one of the drives is then rebuilt onto a replacement drive. The rebuilding is facilitated by referencing the log from the persistent storage, with the log enabling reads of only those portions of stripe data on the surviving drives that were written and omitting reads of portions where no data was written. Thus, the rebuilding only rebuilds the written stripe data to the replacement drive.
  • In another embodiment, storage zones are defined. The logical grouping of SAN storage is established based on location or other characteristics, instead of based upon individual storage enclosures within a SAN. For example, a storage zone can consist of all the storage located within one computer rack, the storage contained within a building, or storage with particular characteristics, such as performance, cost, and reliability. Along these lines, initiator permissions are defined for each created storage zone. One benefit of zoning is that it simplifies storage administration, storage allocation, and use. Initiator permissions and policy are then associated with storage zones. Thus, SAN storage can be allocated via “logical grouping” rather than by individual storage enclosures.
  • In yet another embodiment, a method of creating storage area network zones is disclosed. The method includes identifying a plurality of storage devices and assigning each of the plurality of storage devices to a logical group, the logical group being identified by characteristics. The plurality of storage devices is then presented as part of the logical group without regard to enclosure identifications. Access and control properties are then assigned to the logical group, which provide access to the plurality of storage devices. Administration is also carried out for the logical group, instead of for physical characteristics or individual SANs. Thus, easy SAN grouping can be carried out, and administration is simplified.
  • Other aspects of the invention will become more apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the present invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. 1 and 2 show stripe data tables, illustrating data associated with rebuilding of a replacement disk drive after disk failure, in accordance with one embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Embodiments of the invention provide methods and systems for intelligent rebuilding of the replaced disk drive after disk failure and creating SAN storage zones to logically group a plurality of storage devices.
  • In iSCSI (Internet Small Computer Systems Interface) compliant Storage Area Networks, the SCSI commands are sent in IP packets. Use of IP packets to send SCSI commands to the disk arrays enables implementation of a SAN over an existing Ethernet. Leveraging the IP network for implementing a SAN also permits use of IP and Ethernet features, such as sorting out packet routes and alternate paths for sending the packets.
  • iSCSI is a protocol that allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. This Storage Area Network (SAN) protocol allows organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally-attached disks. Unlike Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure.
  • In iSCSI, therefore, there are two main functional entities: initiators and targets. Initiators are machines that need to access data and targets are machines that provide the data. A target could be a RAID array or another computer system. Targets handle iSCSI requests from initiators. Target machines may include hot standby machines with “mirrored” storage. If the active machine fails, the standby machine will take over to provide the iSCSI service, and when the failed machine returns, it will re-synchronize with the standby machine and then take back the iSCSI service.
  • With the increase in disk drive sizes, rebuild times are becoming exorbitantly long, taking many hours or days. Long rebuild times are a detriment since they impact overall RAID controller performance and, in addition, leave the customers' data exposed and possibly unprotected. If, for example, a second drive fails while a RAID 5 drive group is rebuilding, the drive group will go offline and the data on that drive group will be lost. Speeding up rebuild times is therefore an essential requirement going forward. The embodiments of the present invention typically provide a faster rebuild of the replaced drive.
  • The main performance-limiting issues with disk storage relate to the slow mechanical components that are used for positioning and transferring data. Since a RAID drive group has many drives in it, an opportunity presents itself to improve performance by using the hardware in all these drives in parallel. For example, if we need to read a large file, instead of pulling it all from a single hard disk, it is much faster to chop it up into pieces, store some of the pieces on each of the drives in the group, and then use all the disks to read back the file when needed. This technique of chopping up pieces of files is called striping.
  • Striping can be done at the byte level, or in blocks. Byte-level striping means that the file is broken into “byte-sized pieces”. The first byte of the file is sent to the first drive, then the second to the second drive, and so on. Sometimes byte-level striping is done as a sector of 512 bytes. Block-level striping means that each file is split into blocks of a certain size and those are distributed to the various drives. The size of the blocks used is also called the stripe size (or block size, or several other names), and can be selected from a variety of choices when the drive group is set up.
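  • As an illustration of the block-level striping arithmetic described above, the short Python sketch below maps a byte offset to the drive and stripe that hold it. The stripe size, drive count, and function name are assumptions chosen for the example, not values taken from this disclosure.
```python
# Minimal sketch of block-level striping arithmetic (illustrative only).
# Assumes stripes are distributed round-robin across the drives of the group;
# the stripe size and drive count are example values.

STRIPE_SIZE = 64 * 1024      # stripe (block) size chosen when the group is set up
NUM_DRIVES = 4               # drives in the RAID group

def locate(logical_offset: int) -> tuple[int, int, int]:
    """Return (drive_index, stripe_number, offset_in_stripe) for a byte offset."""
    stripe_number = logical_offset // STRIPE_SIZE
    drive_index = stripe_number % NUM_DRIVES          # round-robin placement
    offset_in_stripe = logical_offset % STRIPE_SIZE
    return drive_index, stripe_number, offset_in_stripe

# Example: byte 200,000 of a large file falls in stripe 3, which lands on drive 3.
print(locate(200_000))       # (3, 3, 3392)
```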
  • The advantages of the present invention are numerous. Most notably, the systems and methods described herein provide a faster way of rebuilding the replaced disk in a RAID group by continuously tracking data modification operations (e.g., write, delete, and update operations) and the striping information they touch, and rebuilding the replaced drive by reading only the written portions of stripe data from one or more surviving disk drives in the RAID array.
  • In one embodiment, the disk rebuild time is improved by the use of a persistent write operations tracking module. The persistent write operations tracking module keeps track of which areas of the disk group have been written by the host since the drive group was constructed. The tracking information is stored in a persistent tracking log. With the information contained in the persistent tracking log, a replaced disk drive can be rebuilt quickly by selectively reading only parts (e.g., the written striping information) of one or more surviving disk drives. There is no need to reconstruct an unwritten area since there is no data to reconstruct. A simplified example using a RAID 1 drive group is shown in FIG. 1.
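  • A minimal sketch of such a persistent write-operations tracking module is shown below, assuming a simple file-backed log of per-stripe “written” flags; the class name and file format are illustrative stand-ins, not the implementation described in this disclosure.
```python
import json
from pathlib import Path

# Sketch of a persistent write-tracking log: one "written" flag per stripe,
# persisted to a file so it can be read back later to guide a rebuild.
# The JSON file format and class name are assumptions for illustration.

class WriteTrackingLog:
    def __init__(self, path: str, num_stripes: int):
        self.path = Path(path)
        if self.path.exists():
            self.written = json.loads(self.path.read_text())
        else:
            self.written = [False] * num_stripes       # nothing written yet

    def record_write(self, stripe: int) -> None:
        """Mark a stripe as written by the host and persist the flag."""
        if not self.written[stripe]:
            self.written[stripe] = True
            self.path.write_text(json.dumps(self.written))

    def stripes_to_rebuild(self) -> list[int]:
        """Only stripes whose flag is True need to be reconstructed."""
        return [i for i, flag in enumerate(self.written) if flag]
```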
  • The persistent tracking log is used to track the stripes that have been written. FIG. 2 illustrates an example of the persistent tracking log.
  • When the rebuild algorithm starts, it looks at the persistent log and determines which stripes need to be rebuilt. In the example illustrated by FIG. 2, stripes 0, 1, and 3 need to be rebuilt, while stripes 2, 4, 5, and 6 do not need to be rebuilt because their “written” flag is “false”, which means that no data was written to stripes 2, 4, 5, and 6 after the disk drive group was constructed or put to work in the RAID array. This simple example would result in a greater than 50% reduction in rebuild time. Thus, a percentage savings can be identified as a function of used and unused space on the disk drives being rebuilt.
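  • The following sketch walks through the FIG. 2 example for a RAID 1 group: only stripes whose “written” flag is true are copied from the surviving drive to the replacement drive. The data values and variable names are made up for illustration.
```python
# Rebuild pass for the FIG. 2 example (RAID 1 case): stripes 0, 1, and 3 were
# written, so only those are copied from the surviving drive; stripes 2, 4, 5,
# and 6 are skipped. All data values here are placeholders.

written_flags = [True, True, False, True, False, False, False]    # stripes 0..6
surviving_drive = {s: f"stripe-{s}-data" for s, w in enumerate(written_flags) if w}
replacement_drive: dict[int, str] = {}

skipped = 0
for stripe, was_written in enumerate(written_flags):
    if not was_written:
        skipped += 1                        # unwritten stripe: nothing to rebuild
        continue
    replacement_drive[stripe] = surviving_drive[stripe]    # copy from survivor

print(f"rebuilt stripes {sorted(replacement_drive)}, "
      f"skipped {skipped} of {len(written_flags)} (over 50% of the work avoided)")
```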
  • In one embodiment, the persistent tracking log is maintained by the RAID controller. In another embodiment, the persistent tracking log may be maintained by any component of the computing system with which the RAID array is in communication, so long as the persistent tracking log can be retrieved at a later time to rebuild the replacement drive. The persistent tracking log, in one embodiment, is stored in a relational database. In another embodiment, the persistent tracking log is stored in a non-volatile memory, including a disk drive, ROM, Flash Memory, or any similar storage media.
  • In accordance with another embodiment, methods and systems for creating SAN storage zones to logically group a plurality of storage devices are provided. The advantages provided by this embodiment are numerous. Most notably, the systems and methods described herein eliminate the need for the user to keep track of the storage characteristics and location of each individual storage enclosure.
  • Instead, a logical group consisting of a plurality of storage enclosures, which may be located at different locations and have different storage characteristics, is created. The logical group of storage enclosures is then made available to the user as a single storage enclosure. The administrator of the logical group may modify the characteristics of the logical group by adding or removing one or more storage enclosures, or by changing the locations of one or more storage enclosures in the logical group.
  • In one embodiment, the storage enclosures in a logical group are hidden from the user. Hence, any change (e.g., adding or removing enclosures, changing location, etc.) in the structure of logical groups does not affect overall system configuration and usage. Therefore, the logical grouping of the storage enclosures simplifies the management of the Storage Area Network (SAN) and permits efficient storage, configuration and privilege management.
  • With the creation of the storage zone, i.e., the logical grouping of the storage enclosures, SAN storage is no longer viewed at the enclosure level. The storage enclosures are logically grouped together to meet customers' unique requirements for administrating, provisioning, and usage of the storage enclosures.
  • The storage administrator defines the storage zone by creating a logical group and adding the selected storage enclosures to the logical group. Access control properties are then defined and permissions granted to individual storage initiators, e.g., iSCSI (Internet Small Computer Systems Interface), Fibre Channel (FC), SAS, etc. Initiator permissions can be unique for each initiator within a storage zone. In one embodiment, logical groups of initiators can also be defined and added to a particular storage zone.
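  • The data model below is one possible way to represent such a storage zone with per-initiator permissions; the class, field names, and IQNs are hypothetical and serve only to make the description concrete.
```python
from dataclasses import dataclass, field

# Illustrative data model of a storage zone: a named logical group of
# enclosures with its own per-initiator access permissions.

@dataclass
class StorageZone:
    name: str
    enclosures: list[str] = field(default_factory=list)
    permissions: dict[str, str] = field(default_factory=dict)   # initiator -> access

    def grant(self, initiator_iqn: str, access: str) -> None:
        """Permissions can be unique for each initiator within the zone."""
        self.permissions[initiator_iqn] = access

zone = StorageZone("rack-12-zone", enclosures=["enclosure-01", "enclosure-02"])
zone.grant("iqn.2007-07.com.example:host-a", "read-write")
zone.grant("iqn.2007-07.com.example:host-b", "read-only")
```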
  • In one embodiment, the SAN administrator(s) defines grouping properties for each of the physical and logical storage coupled to the SAN appliances. The SAN appliance, as described herein, is a box including slots for a plurality of server blades, RAID disk arrays, and SAN control and management software to control and manage the server blades, RAID, data buses, and other necessary components of the SAN. The properties may include the location of the storage, names of special characteristics, capabilities, and the type of the storage. In one embodiment, the properties are structured in a tree format. For example, under a node named “Location” in the property tree, a node named “Building 23” is created. Under the “Building 23” node, a child node named “Server Room A” may be created. More sibling and child nodes may be created to properly identify a location. The properties may be stored anywhere in the SAN so long as the appliance in which the zone grouping is being created may read the properties.
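  • A property tree of this kind could be represented, for example, as the nested structure sketched below; the node names beyond “Location”, “Building 23”, and “Server Room A” are invented for illustration.
```python
# Sketch of the property tree described above; a nested dictionary stands in
# for whatever structure the appliance actually stores.

properties = {
    "Location": {
        "Building 23": {
            "Server Room A": {}          # further sibling/child nodes as needed
        }
    },
    "Type": {"iSCSI RAID enclosure": {}},            # example nodes (assumed)
    "Capabilities": {"RAID 6": {}, "Hot spare": {}},
}

def path_exists(tree: dict, *path: str) -> bool:
    """Check whether a property path such as Location/Building 23/Server Room A exists."""
    node = tree
    for name in path:
        if name not in node:
            return False
        node = node[name]
    return True

print(path_exists(properties, "Location", "Building 23", "Server Room A"))   # True
```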
  • One or more zone grouping rules are then created and stored in the SAN. A zone grouping rule may define a set of properties that, if matched, would trigger creation of a zone group. A zone grouping rule may be set to be active or inactive. The appliance discovers all of the storage devices that are coupled to the appliance and retrieves the properties associated with each storage device. Further, based on the one or more active zone grouping rules, the appliance attempts to match the properties of the storage devices. If a matching rule is satisfied, the appliance creates a zone group of the storage devices that provide matching properties as defined by the one or more zone grouping rules. The zone groups are then permanently stored in the appliance. The SAN administrator may edit the zone groups if a change in a group is necessary.
  • A set of default group properties is provided. One or more default group properties are attached to a newly created zone group. The zone group rule indicates which default group properties are to be used for a newly created group. The group properties may include permissions and privilege grants to one or more storage initiators.
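  • The sketch below shows one way the rule-driven grouping described in the last two paragraphs could work: an active rule lists property values to match, every discovered enclosure satisfying the rule is collected into a zone group, and the rule's default group properties are attached to the new group. All names and values are illustrative assumptions.
```python
# Rule-driven zone group creation (illustrative only): match discovered storage
# properties against active zone grouping rules and attach default properties.

storages = [
    {"name": "enclosure-01", "location": "Building 23", "reliability": "high"},
    {"name": "enclosure-02", "location": "Building 23", "reliability": "standard"},
    {"name": "enclosure-03", "location": "Building 7",  "reliability": "high"},
]

rules = [
    {"active": True,
     "match": {"location": "Building 23"},
     "default_group_properties": {"initiator_access": "read-write"}},
]

zone_groups = []
for rule in rules:
    if not rule["active"]:                          # inactive rules are skipped
        continue
    members = [s["name"] for s in storages
               if all(s.get(k) == v for k, v in rule["match"].items())]
    if members:                                     # rule satisfied: create a group
        zone_groups.append({"members": members, **rule["default_group_properties"]})

print(zone_groups)
# [{'members': ['enclosure-01', 'enclosure-02'], 'initiator_access': 'read-write'}]
```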
  • In one embodiment, storage zones may be created by grouping the storage enclosures based on location. In another embodiment, storage zones may be created by grouping the storage enclosures based on reliability characteristics of the storage enclosures. In yet another embodiment, a zone group may be created based on any physical or logical characteristics, so long as the physical or logical characteristics are defined in the properties of the storage enclosures and one or more zone group rules are defined to use the physical or logical characteristics to create zone groups.
  • By providing a layer of abstraction over the storage initiators and storage enclosures, initiator storage allocation does not require involvement of the Storage Area Network (SAN) administrator. The storage initiators work with the storage zones and not with the physical storage enclosures. Furthermore, more storage enclosures can be seamlessly added to a storage zone without impacting the availability of the storage interface to initiators or users and without a need to create access control properties for the newly added storage enclosure. Similarly, new storage initiators may be added to a storage zone without impacting the usage of the physical storage enclosures in the storage zone.
  • Since, from a usage viewpoint, a storage zone is treated the same as a physical storage enclosure, a unique set of permissions may be associated with the storage zone, similar to associating access control properties with a physical storage enclosure. Therefore, the logical grouping of SAN storage greatly simplifies the administration and use of the storage enclosures.
  • With the above embodiments in mind, it should be understood that the invention may employ various hardware and software implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing.
  • Any of the operations described herein that form part of the invention are useful machine operations. The invention also relates to a device or an apparatus for performing these operations. The apparatus may be specially constructed for the required purposes, such as the carrier network discussed above, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
  • The programming modules, page modules, and subsystems described in this document can be implemented using a programming language such as Flash, JAVA, C++, C, C#, Visual Basic, JAVA Script, PHP, XML, HTML, etc., or a combination of programming languages. Commonly available application programming interfaces (APIs), such as HTTP APIs, XML APIs, and parsers, are used in the implementation of the programming modules. As would be known to those skilled in the art, the components and functionality described above and elsewhere in this document may be implemented on any desktop operating system that provides support for a display screen, such as different versions of Microsoft Windows, Apple Mac, Unix/X-Windows, Linux, etc., using any programming language suitable for desktop software development.
  • The programming modules and ancillary software components, including configuration file or files, along with setup files required for installation and related functionality as described in this document, are stored on a computer readable medium. Any computer medium, such as a flash drive, a CD-ROM disk, an optical disk, a floppy disk, a hard drive, a shared drive, or any storage suitable for providing downloads from connected computers, could be used for storing the programming modules and ancillary software components. It would be known to a person skilled in the art that any storage medium could be used for storing these software components so long as the storage medium can be read by a computer system.
  • The invention may be practiced with other computer system configurations, including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a network.
  • As used herein, a storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries and optical jukeboxes) to servers in such a way that, to the operating system, the devices appear as locally attached.
  • The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, DVDs, Flash, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • While this invention has been described in terms of several preferred embodiments, it will be appreciated that those skilled in the art, upon reading the specification and studying the drawings, will realize various alterations, additions, permutations, and equivalents thereof. It is therefore intended that the present invention include all such alterations, additions, permutations, and equivalents as fall within the true spirit and scope of the claims.

Claims (12)

1. A method of rebuilding a replacement drive used in a RAID group of drives, comprising:
tracking data modification operations continuously during use of the drives;
saving the tracked data modifications to a log in a persistent storage, the tracked data modifications being associated with stripe data present on the drives; and
rebuilding a failed one of the drives with a replacement drive, the rebuilding being facilitated by referencing the log from the persistent storage, and the log facilitating reading only portions of stripe data from surviving drives and omitting reading of portions from the drives where no data was written, so that the rebuilding only rebuilds the stripe data to the replacement drive.
2. The method of rebuilding a replacement drive as recited in claim 1, wherein RAID level-5 writes data in stripes across multiple drives.
3. The method of rebuilding a replacement drive as recited in claim 1, wherein the replacement drive is rebuilt using the stripe data present on surviving drives that did not experience a failure, and the replacement drive completes the RAID group of drives.
4. The method of rebuilding a replacement drive as recited in claim 1, wherein the data modification operations include one or more of write operations, delete operations, or update operations.
5. The method of rebuilding a replacement drive as recited in claim 1, wherein the log identifies particular stripes to rebuild.
6. The method of rebuilding a replacement drive as recited in claim 5, wherein the log provides flags identifying written data or no data.
7. The method of rebuilding a replacement drive as recited in claim 6, wherein rebuild time is reduced as a percentage of the number of stripes not requiring rebuild.
8. The method of rebuilding a replacement drive as recited in claim 1, wherein the log is stored in a relational database, a disk drive, a ROM, or a Flash Memory.
9. A method of creating storage area network zones, comprising:
identifying a plurality of storage devices;
assigning each of the plurality of storage devices to a logical group, the logical group being identified by characteristics;
presenting the plurality of storage devices as part of the logical group without regard to enclosure identifications;
assigning access control properties to the logical group, which provide access to the plurality of storage devices.
10. A method of creating storage area network zones as recited in claim 9, wherein one or more grouping rules are created and stored in a storage area network zone.
11. A method of creating storage area network zones as recited in claim 10, further comprising:
discovering each storage device in the zone; and
retrieving properties of each storage device.
12. A method of creating storage area network zones as recited in claim 10, wherein the characteristics include one or more of location, name, purpose, physical attribute, or logical attribute.
US12/167,249 2007-07-03 2008-07-03 Systems and methods for intelligent disk rebuild and logical grouping of san storage zones Abandoned US20090013213A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/167,249 US20090013213A1 (en) 2007-07-03 2008-07-03 Systems and methods for intelligent disk rebuild and logical grouping of san storage zones

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US94788607P 2007-07-03 2007-07-03
US94785107P 2007-07-03 2007-07-03
US94787807P 2007-07-03 2007-07-03
US94788107P 2007-07-03 2007-07-03
US94788407P 2007-07-03 2007-07-03
US12/167,249 US20090013213A1 (en) 2007-07-03 2008-07-03 Systems and methods for intelligent disk rebuild and logical grouping of san storage zones

Publications (1)

Publication Number Publication Date
US20090013213A1 true US20090013213A1 (en) 2009-01-08

Family

ID=40222352

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/167,250 Expired - Fee Related US7913075B2 (en) 2007-07-03 2008-07-03 Systems and methods for automatic provisioning of storage and operating system installation from pre-existing iSCSI target
US12/167,249 Abandoned US20090013213A1 (en) 2007-07-03 2008-07-03 Systems and methods for intelligent disk rebuild and logical grouping of san storage zones

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/167,250 Expired - Fee Related US7913075B2 (en) 2007-07-03 2008-07-03 Systems and methods for automatic provisioning of storage and operating system installation from pre-existing iSCSI target

Country Status (1)

Country Link
US (2) US7913075B2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090172273A1 (en) * 2007-12-31 2009-07-02 Datadirect Networks, Inc. Method and system for disk storage devices rebuild in a data storage system
US8095828B1 (en) * 2009-08-31 2012-01-10 Symantec Corporation Using a data storage system for cluster I/O failure determination
JP2012127496A (en) * 2010-12-16 2012-07-05 Shih-Chou Wen Easily switchable automatic transmission eccentric shaft
US20130198563A1 (en) * 2012-01-27 2013-08-01 Promise Technology, Inc. Disk storage system with rebuild sequence and method of operation thereof
US20140025990A1 (en) * 2012-07-23 2014-01-23 Hitachi, Ltd. Storage system and data management method
US9110591B2 (en) 2011-04-22 2015-08-18 Hewlett-Packard Development Company, L.P. Memory resource provisioning using SAS zoning
US9335928B2 (en) 2011-10-01 2016-05-10 International Business Machines Corporation Using unused portion of the storage space of physical storage devices configured as a RAID
US10223221B2 (en) 2016-10-06 2019-03-05 International Business Machines Corporation Enclosure-encapsulated RAID rebuild
US10609144B2 (en) 2017-01-30 2020-03-31 Hewlett Packard Enterprise Development Lp Creating a storage area network zone based on a service level agreement
US11269521B2 (en) * 2019-04-30 2022-03-08 EMC IP Holding Company LLC Method, device and computer program product for processing disk unavailability states

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8543799B2 (en) * 2008-05-02 2013-09-24 Microsoft Corporation Client authentication during network boot
JP5091833B2 (en) * 2008-10-28 2012-12-05 株式会社日立製作所 Monitored device management system, management server, and monitored device management method
CN102301328B (en) 2009-01-29 2015-07-15 惠普开发有限公司 Loading A Plurality Of Appliances Onto A Blade
US8930769B2 (en) 2010-08-13 2015-01-06 International Business Machines Corporation Managing operating system deployment failure
US9021199B2 (en) 2012-08-15 2015-04-28 Lsi Corporation Methods and structure for normalizing storage performance across a plurality of logical volumes
CN105573803A (en) * 2015-12-22 2016-05-11 国云科技股份有限公司 Physical machine deployment method

Citations (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5522031A (en) * 1993-06-29 1996-05-28 Digital Equipment Corporation Method and apparatus for the on-line restoration of a disk in a RAID-4 or RAID-5 array with concurrent access by applications
US5557770A (en) * 1993-03-24 1996-09-17 International Business Machines Corporation Disk storage apparatus and method for converting random writes to sequential writes while retaining physical clustering on disk
US5787242A (en) * 1995-12-29 1998-07-28 Symbios Logic Inc. Method and apparatus for treatment of deferred write data for a dead raid device
US5974425A (en) * 1996-12-17 1999-10-26 Oracle Corporation Method and apparatus for reapplying changes to a database
US6067635A (en) * 1995-10-27 2000-05-23 Lsi Logic Corporation Preservation of data integrity in a raid storage device
US20030005354A1 (en) * 2001-06-28 2003-01-02 International Business Machines Corporation System and method for servicing requests to a storage array
US6516425B1 (en) * 1999-10-29 2003-02-04 Hewlett-Packard Co. Raid rebuild using most vulnerable data redundancy scheme first
US6571351B1 (en) * 2000-04-07 2003-05-27 Omneon Video Networks Tightly coupled secondary storage system and file system
US20030120864A1 (en) * 2001-12-26 2003-06-26 Lee Edward K. High-performance log-structured RAID
US20030120863A1 (en) * 2001-12-26 2003-06-26 Lee Edward K. Self-healing log-structured RAID
US6606629B1 (en) * 2000-05-17 2003-08-12 Lsi Logic Corporation Data structures containing sequence and revision number metadata used in mass storage data integrity-assuring technique
US20030188101A1 (en) * 2002-03-29 2003-10-02 International Business Machines Corporation Partial mirroring during expansion thereby eliminating the need to track the progress of stripes updated during expansion
US20030236944A1 (en) * 2002-06-24 2003-12-25 Thompson Mark J. System and method for reorganizing data in a raid storage system
US20040068612A1 (en) * 2002-10-08 2004-04-08 Stolowitz Michael C. Raid controller disk write mask
US6732124B1 (en) * 1999-03-30 2004-05-04 Fujitsu Limited Data processing system with mechanism for restoring file systems based on transaction logs
US6738863B2 (en) * 2000-11-18 2004-05-18 International Business Machines Corporation Method for rebuilding meta-data in a data storage system and a data storage system
US6760807B2 (en) * 2001-11-14 2004-07-06 International Business Machines Corporation System, apparatus and method providing adaptive write policy for disk array controllers
US20040215998A1 (en) * 2003-04-10 2004-10-28 International Business Machines Corporation Recovery from failures within data processing systems
US20050278476A1 (en) * 2004-06-10 2005-12-15 Xiotech Corporation Method, apparatus and program storage device for keeping track of writes in progress on multiple controllers during resynchronization of RAID stripes on failover
US20060041793A1 (en) * 2004-08-17 2006-02-23 Dell Products L.P. System, method and software for enhanced raid rebuild
US20060069947A1 (en) * 2004-09-10 2006-03-30 Fujitsu Limited Apparatus, method and program for the control of storage
US7051156B2 (en) * 2002-11-06 2006-05-23 Synology Inc. Raid-5 disk having cache memory
US20060161805A1 (en) * 2005-01-14 2006-07-20 Charlie Tseng Apparatus, system, and method for differential rebuilding of a reactivated offline RAID member disk
US20070028044A1 (en) * 2005-07-30 2007-02-01 Lsi Logic Corporation Methods and structure for improved import/export of raid level 6 volumes
US20070067670A1 (en) * 2005-09-19 2007-03-22 Xiotech Corporation Method, apparatus and program storage device for providing drive load balancing and resynchronization of a mirrored storage system
US20070088737A1 (en) * 2005-10-18 2007-04-19 Norihiko Kawakami Storage system for managing a log of access
US20070294565A1 (en) * 2006-04-28 2007-12-20 Network Appliance, Inc. Simplified parity disk generation in a redundant array of inexpensive disks
US20080005382A1 (en) * 2006-06-14 2008-01-03 Hitachi, Ltd. System and method for resource allocation in fault tolerant storage system
US20080040553A1 (en) * 2006-08-11 2008-02-14 Ash Kevin J Method and system for grouping tracks for destaging on raid arrays
US7350101B1 (en) * 2002-12-23 2008-03-25 Storage Technology Corporation Simultaneous writing and reconstruction of a redundant array of independent limited performance storage devices
US20080133969A1 (en) * 2006-11-30 2008-06-05 Lsi Logic Corporation Raid5 error recovery logic
US20080195808A1 (en) * 2007-02-14 2008-08-14 Via Technologies, Inc. Data migration systems and methods for independent storage device expansion and adaptation
US20080250269A1 (en) * 2007-04-05 2008-10-09 Jacob Cherian System and Method for Improving Rebuild Speed Using Data in Disk Block
US20080256420A1 (en) * 2007-04-12 2008-10-16 International Business Machines Corporation Error checking addressable blocks in storage
US7577866B1 (en) * 2005-06-27 2009-08-18 Emc Corporation Techniques for fault tolerant data storage
US7653836B1 (en) * 2005-06-10 2010-01-26 American Megatrends, Inc Logging metadata modifications in a data storage system
US7730489B1 (en) * 2003-12-10 2010-06-01 Oracle America, Inc. Horizontally scalable and reliable distributed transaction management in a clustered application server environment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7360072B1 (en) * 2003-03-28 2008-04-15 Cisco Technology, Inc. iSCSI system OS boot configuration modification
US7814126B2 (en) * 2003-06-25 2010-10-12 Microsoft Corporation Using task sequences to manage devices
US20080120403A1 (en) * 2006-11-22 2008-05-22 Dell Products L.P. Systems and Methods for Provisioning Homogeneous Servers

Patent Citations (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5557770A (en) * 1993-03-24 1996-09-17 International Business Machines Corporation Disk storage apparatus and method for converting random writes to sequential writes while retaining physical clustering on disk
US5522031A (en) * 1993-06-29 1996-05-28 Digital Equipment Corporation Method and apparatus for the on-line restoration of a disk in a RAID-4 or RAID-5 array with concurrent access by applications
US6067635A (en) * 1995-10-27 2000-05-23 Lsi Logic Corporation Preservation of data integrity in a raid storage device
US5787242A (en) * 1995-12-29 1998-07-28 Symbios Logic Inc. Method and apparatus for treatment of deferred write data for a dead raid device
US5974425A (en) * 1996-12-17 1999-10-26 Oracle Corporation Method and apparatus for reapplying changes to a database
US6732124B1 (en) * 1999-03-30 2004-05-04 Fujitsu Limited Data processing system with mechanism for restoring file systems based on transaction logs
US6516425B1 (en) * 1999-10-29 2003-02-04 Hewlett-Packard Co. Raid rebuild using most vulnerable data redundancy scheme first
US6571351B1 (en) * 2000-04-07 2003-05-27 Omneon Video Networks Tightly coupled secondary storage system and file system
US6606629B1 (en) * 2000-05-17 2003-08-12 Lsi Logic Corporation Data structures containing sequence and revision number metadata used in mass storage data integrity-assuring technique
US6738863B2 (en) * 2000-11-18 2004-05-18 International Business Machines Corporation Method for rebuilding meta-data in a data storage system and a data storage system
US20030005354A1 (en) * 2001-06-28 2003-01-02 International Business Machines Corporation System and method for servicing requests to a storage array
US6820211B2 (en) * 2001-06-28 2004-11-16 International Business Machines Corporation System and method for servicing requests to a storage array
US6760807B2 (en) * 2001-11-14 2004-07-06 International Business Machines Corporation System, apparatus and method providing adaptive write policy for disk array controllers
US20030120863A1 (en) * 2001-12-26 2003-06-26 Lee Edward K. Self-healing log-structured RAID
US20030120864A1 (en) * 2001-12-26 2003-06-26 Lee Edward K. High-performance log-structured RAID
US20030188101A1 (en) * 2002-03-29 2003-10-02 International Business Machines Corporation Partial mirroring during expansion thereby eliminating the need to track the progress of stripes updated during expansion
US20030236944A1 (en) * 2002-06-24 2003-12-25 Thompson Mark J. System and method for reorganizing data in a raid storage system
US20040068612A1 (en) * 2002-10-08 2004-04-08 Stolowitz Michael C. Raid controller disk write mask
US7051156B2 (en) * 2002-11-06 2006-05-23 Synology Inc. Raid-5 disk having cache memory
US7350101B1 (en) * 2002-12-23 2008-03-25 Storage Technology Corporation Simultaneous writing and reconstruction of a redundant array of independent limited performance storage devices
US20040215998A1 (en) * 2003-04-10 2004-10-28 International Business Machines Corporation Recovery from failures within data processing systems
US7730489B1 (en) * 2003-12-10 2010-06-01 Oracle America, Inc. Horizontally scalable and reliable distributed transaction management in a clustered application server environment
US20050278476A1 (en) * 2004-06-10 2005-12-15 Xiotech Corporation Method, apparatus and program storage device for keeping track of writes in progress on multiple controllers during resynchronization of RAID stripes on failover
US20060041793A1 (en) * 2004-08-17 2006-02-23 Dell Products L.P. System, method and software for enhanced raid rebuild
US20060069947A1 (en) * 2004-09-10 2006-03-30 Fujitsu Limited Apparatus, method and program for the control of storage
US20060161805A1 (en) * 2005-01-14 2006-07-20 Charlie Tseng Apparatus, system, and method for differential rebuilding of a reactivated offline RAID member disk
US7653836B1 (en) * 2005-06-10 2010-01-26 American Megatrends, Inc. Logging metadata modifications in a data storage system
US7577866B1 (en) * 2005-06-27 2009-08-18 Emc Corporation Techniques for fault tolerant data storage
US20070028044A1 (en) * 2005-07-30 2007-02-01 Lsi Logic Corporation Methods and structure for improved import/export of raid level 6 volumes
US20070067670A1 (en) * 2005-09-19 2007-03-22 Xiotech Corporation Method, apparatus and program storage device for providing drive load balancing and resynchronization of a mirrored storage system
US20070088737A1 (en) * 2005-10-18 2007-04-19 Norihiko Kawakami Storage system for managing a log of access
US20070294565A1 (en) * 2006-04-28 2007-12-20 Network Appliance, Inc. Simplified parity disk generation in a redundant array of inexpensive disks
US20080005382A1 (en) * 2006-06-14 2008-01-03 Hitachi, Ltd. System and method for resource allocation in fault tolerant storage system
US20080040553A1 (en) * 2006-08-11 2008-02-14 Ash Kevin J Method and system for grouping tracks for destaging on raid arrays
US20080133969A1 (en) * 2006-11-30 2008-06-05 Lsi Logic Corporation Raid5 error recovery logic
US20080195808A1 (en) * 2007-02-14 2008-08-14 Via Technologies, Inc. Data migration systems and methods for independent storage device expansion and adaptation
US20080250269A1 (en) * 2007-04-05 2008-10-09 Jacob Cherian System and Method for Improving Rebuild Speed Using Data in Disk Block
US20080256420A1 (en) * 2007-04-12 2008-10-16 International Business Machines Corporation Error checking addressable blocks in storage

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7877626B2 (en) * 2007-12-31 2011-01-25 Datadirect Networks, Inc. Method and system for disk storage devices rebuild in a data storage system
US20090172273A1 (en) * 2007-12-31 2009-07-02 Datadirect Networks, Inc. Method and system for disk storage devices rebuild in a data storage system
US8095828B1 (en) * 2009-08-31 2012-01-10 Symantec Corporation Using a data storage system for cluster I/O failure determination
JP2012127496A (en) * 2010-12-16 2012-07-05 Shih-Chou Wen Easily switchable automatic transmission eccentric shaft
US9110591B2 (en) 2011-04-22 2015-08-18 Hewlett-Packard Development Company, L.P. Memory resource provisioning using SAS zoning
US9710345B2 (en) 2011-10-01 2017-07-18 International Business Machines Corporation Using unused portion of the storage space of physical storage devices configured as a RAID
US9335928B2 (en) 2011-10-01 2016-05-10 International Business Machines Corporation Using unused portion of the storage space of physical storage devices configured as a RAID
US9087019B2 (en) * 2012-01-27 2015-07-21 Promise Technology, Inc. Disk storage system with rebuild sequence and method of operation thereof
US20130198563A1 (en) * 2012-01-27 2013-08-01 Promise Technology, Inc. Disk storage system with rebuild sequence and method of operation thereof
US9047220B2 (en) * 2012-07-23 2015-06-02 Hitachi, Ltd. Storage system and data management method
US20140025990A1 (en) * 2012-07-23 2014-01-23 Hitachi, Ltd. Storage system and data management method
US20150254016A1 (en) * 2012-07-23 2015-09-10 Hitachi, Ltd. Storage system and data management method
US9411527B2 (en) * 2012-07-23 2016-08-09 Hitachi, Ltd. Storage system and data management method
US10223221B2 (en) 2016-10-06 2019-03-05 International Business Machines Corporation Enclosure-encapsulated RAID rebuild
US10609144B2 (en) 2017-01-30 2020-03-31 Hewlett Packard Enterprise Development Lp Creating a storage area network zone based on a service level agreement
US11269521B2 (en) * 2019-04-30 2022-03-08 EMC IP Holding Company LLC Method, device and computer program product for processing disk unavailability states

Also Published As

Publication number Publication date
US20090013168A1 (en) 2009-01-08
US7913075B2 (en) 2011-03-22

Similar Documents

Publication Publication Date Title
US20090013213A1 (en) Systems and methods for intelligent disk rebuild and logical grouping of san storage zones
US20230393735A1 (en) Real-time analysis for dynamic storage
US10353784B2 (en) Dynamically adjusting the number of replicas of a file according to the probability that the file will be accessed within a distributed file system
JP3895677B2 (en) System for managing movable media libraries using library partitioning
US9613039B2 (en) File system snapshot data management in a multi-tier storage environment
US9336076B2 (en) System and method for controlling a redundancy parity encoding amount based on deduplication indications of activity
US9229749B2 (en) Compute and storage provisioning in a cloud environment
US6640278B1 (en) Method for configuration and management of storage resources in a storage network
JP5537976B2 (en) Method and apparatus for using large capacity disk drive
US9047352B1 (en) Centralized searching in a data storage environment
US8069217B2 (en) System and method for providing access to a shared system image
US20050091353A1 (en) System and method for autonomically zoning storage area networks based on policy requirements
US10216450B2 (en) Mirror vote synchronization
US10162527B2 (en) Scalable and efficient access to and management of data and resources in a tiered data storage system
JP2008539531A (en) Data placement technology for striping data containers across multiple volumes in a storage system cluster
US9244822B2 (en) Automatic object model generation
JP2017524213A (en) Hash-based multi-tenancy for deduplication systems
US20100146039A1 (en) System and Method for Providing Access to a Shared System Image
US20170286144A1 (en) Software-defined storage cluster unified frontend
US7406578B2 (en) Method, apparatus and program storage device for providing virtual disk service (VDS) hints based storage
US20160342335A1 (en) Configuration update management
US20150331759A1 (en) Apparatus, system and method for temporary copy policy
US20160156506A1 (en) Migration of profiles between virtual connect domains
US20150381727A1 (en) Storage functionality rule implementation
US10831794B2 (en) Dynamic alternate keys for use in file systems utilizing a keyed index

Legal Events

Date Code Title Description
AS Assignment

Owner name: ADAPTEC, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KALMAN, DEAN;MACFARLAND, JEFFREY;REEL/FRAME:021279/0288

Effective date: 20080708

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION