WO2006078311A2 - Apparatus, system and method for differential rebuilding of a reactivated offline RAID member disk


Info

Publication number: WO2006078311A2
Authority: WO (WIPO (PCT))
Prior art keywords: disk, WIP, map, stripe group, audit
Application number: PCT/US2005/023472
Other languages: French (fr)
Other versions: WO2006078311A3 (en)
Inventors: Charlie Tseng, Kern S. Bhugra
Original assignee: Ario Data Networks, Inc.
Application filed by Ario Data Networks, Inc.
Publication of WO2006078311A2 (en)
Publication of WO2006078311A3 (en)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; error correction; monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08: Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10: Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1076: Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F 11/1084: Degraded mode, e.g. caused by single or multiple storage removals or disk failures
    • G06F 11/1092: Rebuilding, e.g. when physically replacing a failing disk
    • G06F 2211/00: Indexing scheme relating to details of data-processing equipment not covered by groups G06F 3/00 - G06F 13/00
    • G06F 2211/10: Indexing scheme relating to G06F 11/10
    • G06F 2211/1002: Indexing scheme relating to G06F 11/1076
    • G06F 2211/1035: Keeping track, i.e. keeping track of data and parity changes

Definitions

  • when the extended error recovery handler detects the state change to online in the offline member disk 110b subsequent to the reactivation, it notifies the RAID manager to start a rebuilding process thereon in the absence of a hot standby disk.
  • the extended error recovery handler would find member disk 2 110b online again, likewise leading to a rebuilding.
  • the system 200 provides means in the storage controller 220 for tracking each changed stripe resulting from a requirement to write user data or any check data on the offline member disk 110b, in the same example, prior to a reactivation.
  • a write requirement may originate from a write data command issued by the host 210 or an internal write request such as a stripe group initialization or an online capacity expansion. Consequently, the RAID manager posts a "mark on the wall" in the storage controller's 220 memory for each stripe group containing such changed stripe.
  • the system 200 further provides means in the storage controller 220 for reconstructing the list of the marks on the wall for stripe groups requiring a DR process subsequent to such event as a power failure.
  • the storage controller 220 may use a non-volatile memory for the most recent list of the marks on the wall and use the fault-tolerant disk storage 260 to store an older list of marks on the wall. Such lists represent an audit-trail log.
  • the storage controller 220 may utilize unused storage space on member disks 110 to form a fault-tolerant disk storage for the audit-trail log instead of the separate fault-tolerant disk storage 260.
  • the system 200 services needs of the host 210 for non-stop data retrieval and storage in a storage system 205 despite any single disk failure and restores any lost data efficiently.
  • the I/O module 360 determines if the data block address is mapped to the offline member disk 110. If not, the I/O module 360 accesses the data block. If the data block address is mapped to the offline member disk 110, the I/O module 360 determines if the I/O command is a read command. If not, the I/O module 360 skips an access to the data block and updates any check data on a surviving member disk 110. If the I/O command is a read command, the I/O module 360 regenerates data by reading corresponding data blocks of all the surviving member disks 110 in the associated stripe group and computing the Exclusive OR of the contents read. In one embodiment, as a result of executing the I/O command, the I/O module 360 determines if updating any check data of the stripe group is required on a surviving member disk 110. If so, the I/O module 360 updates the check data.
  • the DR registration module 310 registers 425 the DR process subsequent to the state change to online of the reactivated member disk 110 detected by the extended error recovery module 355.
  • the stripe group selection module 330 selects 430 a stripe group from the set of cleared entries of the WIP map 315. For example, if the WIP map 315 is a bit map and the stripe group selection module 330 queries the bit representing stripe group three (3), the stripe group selection module 330 may select stripe group three (3) if the queried bit has a binary value zero (0), indicating that a DR process is pending.
  • the I/O module 360 regenerates 550 data by reading corresponding data blocks of all the surviving member disks 110 in the stripe group and computing the Exclusive OR of the contents read. In one embodiment, as a result of executing the I/O command, the I/O module 360 updates 560 any check data of the stripe group on a surviving member disk 110 if the check data is required to be updated.
  • the method 500 completes the execution of an I/O command regardless of whether the logical drive 160 is in an online state or a degraded state.
  • the WIP map clear module 335 clears 410 the entry 610 to a binary zero (0) of the WIP map 315 for stripe group number 2. Such cleared entries of the WIP map 315 indicate that the associated stripe group has a pending DR process.
  • the WIP map operation 600 executes the second step: clearing the WIP map 315 entries for each stripe group having a pending DR process.
  • FIGS. 7a, 7b, 7c, and 7d are schematic block diagrams illustrating one embodiment of an exemplary manner of tracking of changed stripes 700 with a WIP map 315 and an audit-trail log 740 in accordance with the present invention.
  • the audit-trail log 740 may store portions of the audit-trail log 740a in the non-volatile memory 350 and remaining portions of the audit-trail log 740b in the fault-tolerant disk storage 260.

Abstract

An apparatus, system, and method are disclosed for rebuilding only the changed stripes of an offline member disk in a RAID array that is configured with redundancy and no hot standby (Figure 1). A work-in-process (WIP) map tracks the changed stripes of the offline member disk prior to reactivation and records completion of the differential rebuilding process.

Description

APPARATUS, SYSTEM, AND METHOD FOR DIFFERENTIAL REBUILDING OF A REACTIVATED OFFLINE
RAID MEMBER DISK
BACKGROUND OF THE INVENTION
FIELD OF THE INVENTION
[0001] This invention relates to restoring changed data onto a storage device and more particularly relates to restoring changed data onto a reactivated storage device in a redundant array of independent disks ("RAID") system.
DESCRIPTION OF THE RELATED ART
[0002] In a contemporary computing environment, a storage system frequently writes data to and reads data from one or more storage devices through a storage controller. The storage devices are typically hard disk drives, optical disks, solid state disks, magnetic tape drives, DVD disks, CD ROM disks, or the like. Such storage devices are referred to hereinafter as disks.
[0003] One common storage system is a RAID system. In the RAID system, the disks coupled to the storage controller are configured to form a non-redundant or redundant RAID array. One common type of RAID configuration is a striped array. Striping is a method of concatenating multiple disks into one logical drive. Striping involves partitioning each array member disk's storage space into stripes. Each stripe is a number of consecutively addressed data blocks. The stripes are then interleaved across all member disks in the array in a regular rotating pattern, so that the combined space of the logical drive is composed of ordered groups of stripes. Each stripe group includes one stripe from each member disk at the same relative address. The stripes in a stripe group are associated with each other in a way that allows membership in the group to be determined uniquely and unambiguously by the storage controller.
[0004] FIGS. 1a, 1b, and 1c are schematic block diagrams illustrating one embodiment of RAID arrays 100. As depicted, each member disk 110 in the RAID array 100 comprises five stripes. In FIGS. 1a and 1b, the RAID arrays 100a and 100b include four member disks: member disk 1 110a, member disk 2 110b, member disk 3 110c, and member disk 4 110d. Each RAID array 100a, 100b comprises twenty (20) stripes arranged in five stripe groups consecutively numbered 0 through 4. Each such stripe group includes one stripe from each of the four member disks 110a, 110b, 110c, and 110d in corresponding locations.
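The striping arithmetic described above can be sketched in a few lines. This is an illustrative sketch of a simple rotating layout, not the patent's exact mapping; the function name and parameters are assumptions.

```python
def map_block(lba, blocks_per_stripe, n_disks):
    """Map a logical block address to (stripe group, member disk, block offset)
    for a simple non-redundant striped array such as the one in FIG. 1a.

    Stripes are interleaved across the member disks in a regular rotating
    pattern, so consecutive stripes land on consecutive disks and each
    stripe group holds one stripe per disk at the same relative address.
    """
    stripe_index, offset = divmod(lba, blocks_per_stripe)
    stripe_group, disk = divmod(stripe_index, n_disks)
    return stripe_group, disk, offset
```

For example, with 4 blocks per stripe and 4 member disks, logical block 21 falls in stripe 5, which the rotation places on member disk 1 within stripe group 1.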
[0005] FIG. 1a shows a configuration of a non-redundant RAID array 100a resulting in a logical drive 160a containing twenty (20) consecutively addressed data stripes configured as user data and numbered 0x, 1x, ..., 12x, and 13x in a hexadecimal representation. In FIG. 1b the RAID array 100b is a redundant RAID array, known as a parity RAID array, which holds, in addition to user data, check data, commonly referred to as parity and numbered P0, P1, P2, P3, and P4, distributed throughout the array, occupying one parity stripe per stripe group. The remaining stripes in the array are data stripes. As shown, the configured logical drive 160b has fifteen (15) consecutively addressed data stripes numbered 0x, 1x, ..., Dx, and Ex in a hexadecimal representation. Check data in each stripe group is used to regenerate user data for a failed member disk when requested by a host.
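The check data of a parity RAID array is the bytewise Exclusive OR of the data stripes in a stripe group, and the same operation regenerates any single lost stripe from the survivors. A minimal sketch (the function name is an assumption, not from the patent):

```python
from functools import reduce

def xor_stripes(stripes):
    """Bytewise XOR of equal-length stripes.

    Computes the parity stripe of a stripe group when given its data
    stripes, and regenerates the stripe of a failed member disk when
    given all surviving stripes (data plus parity).
    """
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))
```

Because XOR is its own inverse, `xor_stripes([d1, d3, parity])` yields the lost stripe `d2` of a four-disk stripe group whose parity was `xor_stripes([d1, d2, d3])`.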
[0006] FIG. 1c shows another type of redundant RAID array, a mirrored RAID array 100c, comprising member disk 1 110a and member disk 2 110b. During a write operation, the storage controller writes the same user data simultaneously on both member disks 110a and 110b in the mirrored RAID array 100c. As illustrated, the configured logical drive 160c includes five consecutively addressed data stripes numbered 0, 1, 2, 3, and 4. For a read operation, data may be read from either member disk 110a, 110b although the storage controller generally designates one member disk 110 as the master and the other as the backup.
[0007] Normally, for a logical drive 160 read or write request, the storage controller maps the specified logical drive 160 data block address to a stripe of a particular RAID member disk 110, accesses the data block, and performs the required operation on the mapped disk. Some requests may involve multiple stripes on separate member disks 110 in the stripe group, and as such, the storage controller may operate the involved member disks 110 independently in parallel. During any such operation, a disk error condition may result in a failure of one member disk 110 in the RAID array 100 to respond to the storage controller's attempt to initiate a certain action, such as a disk selection, a command transfer, a control transfer, or a data transfer. The error condition may be persistent despite a pre-specified number of retries at various operation levels including a soft device reset by the storage controller.
[0008] A disk error condition may also manifest itself as a failure to continue or complete an operation that has been started. In any case, the storage controller will designate a persistently faulty member disk 110 as offline. Conventionally, such a "dead" disk is sent back to the manufacturer for repair. In some cases in which an operable member disk is removed for a certain service action, the storage controller may also mark the absent member disk 110 offline.
[0009] If the offline member disk 110 is a member of a non-redundant RAID array 100a, for example, the member disk 110b shown in FIG. 1a, the associated logical drive 160a will be designated as offline, making data inaccessible. In such a case, generally a user-initiated data restoration will have to occur before the associated logical drive 160a is brought back in operation. If the designated offline disk 110 is a member of a parity RAID array 100b such as the member disk 110b shown in FIG. 1b, data associated with the offline member disk 110b is still accessible. The storage controller can regenerate data for the offline member disk 110b based on the contents of all the surviving member disks 110a, 110c, and 110d when a request for such data occurs. With a mirrored RAID array 100c such as that shown in FIG. 1c, data is available from a surviving disk 110a if another member disk 110b is offline. On the other hand, with either type of the redundant RAID array 100b, 100c, any user data that is destined for the offline member disk 110 on a write request is not written there although associated check data, if any, is updated on a surviving member disk 110.
[0010] Although a redundant RAID array 100b, 100c can continue to operate with one member disk 110 marked offline, the array 100b, 100c actually enters into a degraded mode of operation and the formed logical drive 160b, 160c, such as that shown in FIG. 1b, FIG. 1c, respectively, is said to be in a "degraded state" until the underlying faulty member disk 110 is replaced and all lost data resulting from the departure of the faulty member disk 110 from the array 100b, 100c is reconstructed on the new disk. The latter process is known as rebuilding. Running in a degraded mode by a RAID array 100b, 100c results in performance degradation and zero tolerance of any subsequent disk failure.
[0011] If a redundant RAID array 100b, 100c is configured with a hot standby disk, when one member disk 110 is marked offline, typically a process known as full rebuilding for the offline member disk 110 is automatically started on the hot standby disk in the background. A full rebuilding for a mirrored RAID array 100c or a parity RAID array 100b involves regenerating and writing onto the replacement disk all of the data lost from the offline member disk 110, with the replacement data including any check data being derived from all the surviving member disk(s) 110. A full rebuilding is typically time consuming and can last up to several hours for a large RAID array.
[0012] Unfortunately, many users do not purchase a spare disk 110 for each such RAID array 100 as a hot standby replacement, knowing that the spare is seldom used, that is, only during the period of a disk failure. If a redundant RAID array 100b, 100c is pre-configured with no hot standby disk, a hot swap disk, if available, inserted manually in place of the offline member disk 110 can be caused to undergo a similar full rebuilding automatically or manually.
[0013] Hard disk drive manufacturers, for example, receiving aforementioned dead hard disk drives for repair often find them quite operable following a power cycle and/or a special hard reset cycle, clearing the "fatal" error condition. With available advanced disk technology and array packaging technology, the storage controller can attempt to reactivate the offline member disk 110 so as to make the disk 110 online by means of special hard device reset protocols and/or an automated selective power cycle on the offline member disk 110 if the array enclosure is equipped with the latter capability. The success rate of thus bringing dead disks back to life is presently high enough to justify such an extended error recovery procedure for implementation in the storage controller for dead disk reactivation.
[0014] In some cases, a faulty member disk 110 marked offline may be made online by manually removing the disk 110 and re-inserting the disk 110 into the array. In cases in which an operable member disk 110 is designated offline because of the removal of the disk 110, re-insertion of the disk 110 may make the disk 110 online again. FIG. 1d is a schematic block diagram illustrating one embodiment of a high-density RAID enclosure 150. As shown, the RAID enclosure 150 includes four canisters: canister 1 130a, canister 2 130b, canister 3 130c, and canister 4 130d. Each such canister 130 holds two member disks 111, 112, 113, or 114 having individual carriers and sharing common enclosure accessories such as cooling fan, temperature sensor, and lock mechanism (none shown). The top disk of each such canister 130, for example, member disk 2a 112a of canister 2 130b, is a member disk of RAID-1 120a array. The bottom disk of the same canister 130, for example, member disk 2b 112b of canister 2 130b, is a member disk of RAID-2 120b array. RAID-1 120a and RAID-2 120b are redundant RAID arrays 100b such as shown in FIG. 1b. The two RAID arrays 120a and 120b may be operated independently or combined by data striping. In either case, each RAID array 120 can tolerate one disk failure.
[0015] If, for example, member disk 2a 112a of RAID-1 120a becomes faulty, as depicted in FIG. 1d, canister 2 130b may be removed from the RAID enclosure 150, and an available hot swap disk (not shown) may replace the faulty member disk 112a. Afterwards, canister 2 130b is re-inserted. Subsequent to the service action, both RAID-1 120a and RAID-2 120b may start full rebuilding independently, with the originally operable member disk 2b 112b restoring the online state in the latter RAID array 120b. Unfortunately, currently a time-consuming full rebuilding is likewise required of such reactivated member disk 112b in RAID-2 120b array.
[0016] From the foregoing discussion, it should be clear that a need exists for an apparatus, system, and method that track the stripes of the offline member disk 110 in a redundant RAID array 100b, 100c that were to be written on prior to making the disk 110 online by a reactivation and that execute a rebuilding only on those tracked stripes subsequent to the reactivation. Beneficially, such an apparatus, system, and method would shorten the duration of the array's degraded mode of operation and reduce the time required to complete rebuilding the reactivated member disk 110.
SUMMARY OF THE INVENTION
[0017] The present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available RAID systems. Accordingly, the present invention has been developed to provide an apparatus, system, and method for rebuilding only changed stripes of an offline RAID member disk subsequent to a reactivation that overcome many or all of the above-discussed shortcomings in the art.
[0018] The apparatus to execute differential rebuilding ("DR") is provided with a logic unit containing a plurality of modules configured to functionally execute the necessary steps of a DR process on a reactivated offline member disk in a redundant RAID array configured without a hot standby disk. These modules in the described embodiments include a work-in-process ("WIP") map, a WIP map initialization module, a WIP map clear module, an extended error recovery module, a DR registration module, a stripe group selection module, a service module, and a WIP map update module.
[0019] The WIP map is configured for the offline member disk with an entry for each stripe group of the associated RAID array. Each map entry is configured to track the completion of a DR process on the corresponding stripe group. The DR process regenerates and writes data including any check data onto the offline member disk following a reactivation for the stripe group. The WIP map initialization module creates all the WIP map entries and initializes the entries to indicate that no DR process is outstanding on each corresponding stripe group. The WIP map clear module is configured to clear or remove a map entry for the corresponding stripe group having an offline member stripe which was destined but unable to store write data including any check data prior to the reactivation. Such a stripe group potentially requires a DR process to restore such data.
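The WIP map behavior described above can be sketched as a small class, assuming a one-bit-per-stripe-group representation in which a set bit means no DR is outstanding. The class and method names are illustrative, not the patent's interfaces.

```python
class WipMap:
    """Minimal sketch of a work-in-process (WIP) map.

    One entry per stripe group of the associated RAID array:
    1 = no DR process outstanding, 0 = DR process pending.
    """

    def __init__(self, n_stripe_groups):
        # WIP map initialization: every entry indicates no DR outstanding.
        self.entries = [1] * n_stripe_groups

    def clear(self, stripe_group):
        # A write destined for the offline member stripe could not be
        # fulfilled: mark the stripe group as requiring a DR process.
        self.entries[stripe_group] = 0

    def set(self, stripe_group):
        # The DR process has completed on this stripe group.
        self.entries[stripe_group] = 1

    def pending(self):
        # Stripe groups with cleared entries, i.e. DR still pending.
        return [g for g, bit in enumerate(self.entries) if bit == 0]
```

A freshly initialized map reports nothing pending; clearing entries for the changed stripe groups is what later limits rebuilding to those groups alone.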
[0020] The extended error recovery module initiates a reactivation to make the offline member disk online if the disk is capable of electrical communication. The extended error recovery module also detects a state change to online from offline of the offline member disk. The DR registration module registers the DR process on the offline member disk becoming online subsequent to the reactivation and de-registers the DR process upon completion of all stripe groups required to undergo the DR process.
[0021] The stripe group selection module selects a stripe group from the set of cleared WIP map entries corresponding to the stripe groups pending a DR process. The service module performs the DR process on the reactivated member disk at the stripe within the selected stripe group. The WIP map update module sets the WIP map entry for the stripe group completing the DR process, indicating no more pending DR process.
[0022] A system of the present invention is also presented to execute a DR process on a reactivated offline member disk in a redundant RAID array configured without a standby replacement. The system in the disclosed embodiments includes a RAID array of member disks, an enclosure for the RAID array, and a storage controller coupled to the RAID array. The storage controller comprises a processor, a memory coupled to the processor, a WIP map, a WIP map initialization module, a WIP map clear module, an extended error recovery module, a DR registration module, a stripe group selection module, a service module, and a WIP map update module. In one embodiment, the system further includes an input/output ("I/O") module, a non-volatile memory, an audit-trail log module, and a fault-tolerant disk storage.
[0023] The WIP initialization module creates all the WIP map entries, initializing each entry to indicate no DR process is outstanding on the corresponding stripe group. The WIP map clear module clears a WIP map entry for the stripe group wherein the member stripe of the offline member disk was destined to store write data including any check data, indicating a DR process pending. The extended error recovery module reactivates the offline member disk so as to make the disk online again if the disk is capable of electrical communication. The extended error recovery module also detects a state change to online from offline of the offline member disk. The DR registration module registers the DR process on the offline member disk becoming online following the reactivation and de-registers the completed DR process. The stripe group selection module selects a stripe group based on the WIP map cleared entries. The service module performs the DR process on the reactivated member disk at the stripe within the selected stripe group. The WIP map update module sets a WIP map entry for the stripe group completing the DR process.
[0024] In one embodiment, the I/O module receives an I/O command to read or write data. The I/O command comprises a data block address of an active logical drive formed from the RAID array for a data block of a stripe group. The I/O module determines if the logical drive is operating in a degraded mode. If not, the I/O module accesses the data block. If the logical drive is operating in a degraded mode, the I/O module determines if a rebuilding process is active. If a rebuilding process is active, the I/O module determines if the associated stripe group is rebuilding pending. If not, the I/O module accesses the data block. If the associated stripe group is rebuilding pending, the I/O module delays access of the data block until the stripe group completes the rebuilding.
[0025] If no rebuilding process is active, the I/O module determines if the data block address is mapped to the offline member disk. If not, the I/O module accesses the data block. If the data block address is mapped to the offline member disk, the I/O module determines if the I/O command is a read command. If not, the I/O module skips the access of the data block and updates any check data of the associated stripe group on a surviving member disk as appropriate. If the I/O command is a read command, the I/O module regenerates data from member data blocks of all surviving member disks in the associated stripe group. In one embodiment, the I/O module updates any check data in the associated stripe group on a surviving member disk if required as a result of executing the I/O command.
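The branching of the two paragraphs above can be summarized as a decision function. All names and return strings below are illustrative stand-ins for the I/O module's actions, not interfaces from the patent.

```python
def degraded_io_action(is_read, maps_to_offline_disk, degraded,
                       rebuild_active=False, group_rebuild_pending=False):
    """Return the action the I/O module takes for one data-block access."""
    if not degraded:
        return "access"                      # logical drive online: normal access
    if rebuild_active:
        if group_rebuild_pending:
            # Wait for the stripe group to finish rebuilding first.
            return "delay until stripe group rebuilt"
        return "access"                      # group already rebuilt
    if not maps_to_offline_disk:
        return "access"                      # block lives on a surviving disk
    if is_read:
        # Read: regenerate the block from all surviving disks' blocks.
        return "regenerate from surviving disks (XOR)"
    # Write: the offline stripe cannot be written; only check data is updated.
    return "skip write; update check data on surviving disk"
```

A write that reaches the final branch is exactly the event that clears the stripe group's WIP map entry, marking it for differential rebuilding later.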
[0026] In one embodiment, the audit-trail log module records an audit-trail log. The audit-trail log is configured as a log of the stripe group identifiers of the cleared WIP map entries. Each log entry indicates that the WIP map entry for the stripe group has been cleared. In a certain embodiment, the audit-trail log is stored in the non-volatile memory. In one embodiment, the audit-trail log module periodically stores a portion of the audit-trail log from the non-volatile memory to the fault-tolerant disk storage. In a further embodiment, the audit-trail log module reconstructs the WIP map from the audit-trail log stored in the non-volatile memory and the fault-tolerant disk storage. The audit-trail log module may reconstruct the WIP map after the WIP map is inadvertently lost, such as during a power failure.
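The reconstruction step can be sketched as replaying the logged stripe group identifiers over a freshly initialized map. This is a simplified sketch (names assumed) in which the log holds only cleared-entry records.

```python
def reconstruct_wip_map(n_stripe_groups, audit_trail):
    """Rebuild the WIP map from the audit-trail log after the in-memory
    map is lost, e.g. during a power failure.

    Each audit-trail entry is the identifier of a stripe group whose WIP
    map entry was cleared (DR pending).
    """
    entries = [1] * n_stripe_groups      # as initialized: no DR outstanding
    for stripe_group in audit_trail:     # replay every cleared entry
        entries[stripe_group] = 0
    return entries
```

In the embodiment described above the replay would draw the most recent records from non-volatile memory and older ones from the fault-tolerant disk storage.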
[0027] A method of the present invention is also presented for executing a DR process on a reactivated offline member disk in a redundant RAID array configured without a hot standby disk. The method in the disclosed embodiments substantially includes the steps necessary to carry out the functions presented above with respect to the operation of the described apparatus and system. In one embodiment, the method includes creating a WIP map and initializing each entry thereof, clearing a WIP map entry, reactivating the offline member disk, detecting a state change to online from offline, registering a DR process, selecting a stripe group, performing the DR process, setting the WIP map entry for the selected stripe group, and de-registering the DR process upon completion.
[0028] The WIP initialization module creates all the WIP map entries for the offline member disk, initializing each to indicate no DR process is outstanding on the corresponding stripe group. The WIP map clear module clears a map entry for the stripe group requiring a DR process due to a prior unfulfilled write requirement on the member stripe of the offline member disk. The extended error recovery module reactivates the offline member disk so as to make the disk online again. The DR registration module registers the DR process on the offline member disk becoming online following the reactivation. The stripe group selection module selects a stripe group requiring the DR process. The service module performs the DR process on the reactivated member disk at the stripe within the selected stripe group. The WIP map update module sets a WIP map entry for the stripe group completing the DR process. The service module determines if the DR process is complete. If the DR process is complete, the DR registration module de-registers the DR process and the method terminates. If the DR process is not complete, the stripe group selection module selects a next stripe group and the service module performs the DR process on the next selected stripe group.
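The method of the paragraph above reduces to a loop over the cleared WIP entries. This sketch assumes a callback standing in for the service module; the function name and list representation are illustrative.

```python
def run_dr_process(wip_entries, rebuild_stripe_group):
    """Execute a differential rebuilding (DR) pass on a reactivated disk.

    `wip_entries` is the WIP map as a list of bits (0 = DR pending);
    `rebuild_stripe_group` is an assumed callback performing the DR work
    of the service module on one stripe group.
    """
    registered = True                     # DR registration on reactivation
    rebuilt = []
    for group, bit in enumerate(wip_entries):
        if bit == 0:                      # cleared entry: DR pending
            rebuild_stripe_group(group)   # service module performs the DR
            wip_entries[group] = 1        # WIP update: mark group complete
            rebuilt.append(group)
    registered = False                    # de-register on completion
    return rebuilt
```

Only the stripe groups changed while the disk was offline are visited, which is what makes differential rebuilding faster than a full rebuilding.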
[0029] Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
[0030] Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
[0031] The present invention employs a WIP map to track changed stripes of the offline member disk prior to a reactivation of the disk and the completion of a DR process on each stripe group containing the stripes in the RAID array. In addition, the present invention shortens the duration of the array's degraded mode of operation due to a member disk failure and reduces the time required to complete rebuilding the faulty member disk subsequent to a removal of the fault by rebuilding only changed stripes. These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS
[0032] In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
[0033] FIGs. 1a, 1b and 1c are schematic block diagrams illustrating one embodiment of RAID arrays;
[0034] FIG. 1d is a schematic block diagram illustrating one embodiment of a high-density RAID enclosure;
[0035] FIG. 2 is a schematic block diagram illustrating one embodiment of a system for fault tolerant data storage and retrieval in accordance with the present invention;
[0036] FIG. 3 is a schematic block diagram illustrating one embodiment of a DR apparatus in accordance with the present invention;
[0037] FIG. 4 is a schematic flow chart diagram illustrating one embodiment of a DR method in accordance with the present invention;
[0038] FIG. 5 is a schematic flow chart diagram illustrating one embodiment of a data access method in accordance with the present invention;
[0039] FIGs. 6a, 6b, and 6c are schematic block diagrams illustrating one embodiment of an exemplary WIP map operation for a parity RAID array in accordance with the present invention;
[0040] FIGs. 7a, 7b, 7c, and 7d are schematic block diagrams illustrating one embodiment of an exemplary tracking of changed stripes with a WIP map and an audit-trail log in accordance with the present invention;

[0041] FIGs. 8a and 8b are schematic block diagrams illustrating one embodiment of an exemplary updating of the WIP map and the audit-trail log in accordance with the present invention; and
[0042] FIGs. 9a, 9b, and 9c are schematic block diagrams illustrating one embodiment of a WIP map recovery in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0043] Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom very large scale integration ("VLSI") circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
[0044] Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
[0045] Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different member disks, and may exist, at least partially, merely as electronic signals on a system or network.
[0046] Reference throughout this specification to "one embodiment," "an embodiment," or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment," "in an embodiment," and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0047] Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well- known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
[0048] Figure 2 depicts a schematic block diagram illustrating one embodiment of a system 200 for fault tolerant data storage and retrieval in accordance with the present invention. The system 200 services host 210 requests for writing data to and reading data from a storage system 205 with built-in fault tolerance. The storage system 205 includes a storage controller 220, member disk 1 110a, member disk 2 110b, member disk 3 110c, member disk 4 110d, a RAID enclosure 240 housing the member disks 110, a fault-tolerant disk storage 260, and an interface 230 coupling the storage controller 220 to the host 210. In one embodiment, the member disks 110 form a RAID array 100. Although for purposes of clarity, four member disks 110, and one RAID enclosure 240 are shown, any number of member disks 110 and RAID enclosures 240 may be used.
[0049] As is well known to those skilled in the art, the storage controller 220 includes a processor, memory, and various modules used to perform a number of storage control functions in execution of read and write requests from the host 210. In a certain embodiment, the storage controller 220 may also include a non-volatile memory. Supporting various RAID array configurations and operations, main functional elements of the storage controller 220 may include a RAID configuration tracker, and a RAID manager, in addition to typical storage control functions such as an Input/Output ("I/O") interfacing, a host interfacing, an I/O command handler, a bi-directional data mover with buffering, an enclosure interfacing, and an error recovery handler.
[0050] The RAID configuration tracker saves and references information in the storage controller's 220 memory and/or non-volatile memory on the RAID array 100 configuration a user has created by running a RAID array configuration utility software. In one embodiment, the same configuration information is also stored on each RAID member disk 110. The array configuration information may include the type of RAID array 100 such as illustrated in FIGs. 1a, 1b and 1c, namely a non-redundant RAID array 100a, a redundant RAID array 100b with distributed parity or fixed parity (not shown), and a mirrored RAID array 100c, the number and ordering of configured member disks 110 in the RAID array 100, the stripe size, zero or more hot standby disks, the number of logical drives 160, and each logical drive 160 size. In a certain embodiment, write cache enablement may also be specified.
[0051] The RAID configuration tracker also tracks the state of each member disk 110. Disk states may include online, offline, standby, and rebuild. A disk is in an online state if the disk is a member of the RAID array 100 and operating properly. A disk is in an offline state if the disk failed to operate, or if the disk is not present, or if the disk is present but not powered on. A disk is in a standby state if the disk is able to operate properly but not defined as a member of the RAID array 100. A disk is in a rebuild state during the process of rebuilding involving data regeneration and writing to the disk.
[0052] Furthermore, the RAID configuration tracker tracks the state of each logical drive 160 formed from the array 100 such as the logical drives 160a, 160b, and 160c shown in FIGs. 1a, 1b, and 1c, respectively. Logical drive 160 states may include online, degraded, or offline. A logical drive 160 is in an online state if all the participating member disks 110 of the RAID array 100 are online. A logical drive 160 is in a degraded state if one member disk 110 is offline or in a rebuild state. A logical drive 160 is in an offline state if no data can be read from or written to the logical drive 160. Such logical drive 160 state occurs if the underlying redundant RAID array 100b, 100c has two or more member disks 110 in an offline state, or if the underlying non-redundant RAID array 100a has one or more member disks 110 in an offline state.
[0053] The RAID manager performs a data protection function. For a mirrored array 100c such as depicted in FIG. 1c, the RAID manager causes user data to be simultaneously written on both member disks 110, so that if one member 110b is offline, data is still accessible from the surviving member 110a. For a parity RAID array 100b as shown in FIG. 1b, the RAID manager typically performs check data generation, to protect against data loss and loss of data access due to a single disk or media failure. The term check data refers to any kind of redundant information that allows regeneration of unreadable data from a combination of readable data and the redundant information itself. The parity RAID arrays 100b utilize the Boolean "Exclusive OR" function to compute check data. The function is applied bit-by-bit to corresponding stripes in each array's user data areas, and the result is written to a corresponding parity stripe.
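The bit-by-bit Exclusive OR described above can be sketched as follows. This is only an illustrative model in which stripe contents are represented as byte strings; the function name is hypothetical and not part of the disclosed apparatus.

```python
def compute_parity(data_stripes):
    """Compute the parity stripe of a stripe group as the bitwise
    Exclusive OR of the corresponding data stripes (applied here
    byte-by-byte for illustration)."""
    parity = bytearray(len(data_stripes[0]))
    for stripe in data_stripes:
        for i, byte in enumerate(stripe):
            parity[i] ^= byte
    return bytes(parity)
```

For example, `compute_parity([b"\x0f\x00", b"\xf0\x01"])` yields `b"\xff\x01"`; XOR-ing any stripe with the parity of the others recovers it, which is the property the regeneration and rebuilding steps below rely on.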
[0054] In conjunction with the I/O command handler, the RAID manager maps the logical drive 160 data block address specified by an I/O command from the host 210 to an array data stripe number and a physical block address on the associated member disk 110 for a read or write operation. On a normal write operation the RAID manager also updates any check data in the associated stripe group. In one embodiment, the RAID Manager reads both the data to be replaced and the old check data from associated member disks 110, computes the Exclusive OR of the two data items together with the replacement data, and rewrites the resultant new check data on the parity stripe.
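The read-modify-write parity update of paragraph [0054] — XOR the old check data with the data being replaced and the replacement data — might be sketched as below; the function name and byte-string model are illustrative assumptions.

```python
def update_parity(old_parity, old_data, new_data):
    """Incremental check-data update for a normal write: the new parity
    is the XOR of the old parity, the old data block, and the
    replacement data block, avoiding a read of every other stripe."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
```

This gives the same result as recomputing the parity across the full stripe group, since the old data cancels itself out of the old parity under XOR.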
[0055] In the event that one member disk 110, say member disk 2 110b, fails and is marked offline subsequent to unsuccessful recovery actions of the error recovery handler, and that a data block address from an I/O command is mapped to the offline member disk 110b in a parity RAID array 100b such as shown in FIG. 1b, the RAID manager handles the read command differently from a write command. For a read, the RAID manager regenerates the data by reading corresponding data blocks including check data from surviving member disks 110a, 110c, and 110d and computing the Exclusive OR of the contents thereof.
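The degraded-mode read regeneration just described can be sketched as follows, again with stripe contents modeled as byte strings and an illustrative function name.

```python
def regenerate_missing(surviving_stripes):
    """Regenerate the unreadable data block of the offline member disk
    by XOR-ing the corresponding blocks, including the check data,
    read from all surviving member disks in the stripe group."""
    missing = bytearray(len(surviving_stripes[0]))
    for stripe in surviving_stripes:
        for i, byte in enumerate(stripe):
            missing[i] ^= byte
    return bytes(missing)
```

With data stripes `0x01`, `0x02`, `0x04` and parity `0x07`, XOR-ing the survivors `0x01`, `0x04`, `0x07` regenerates the missing `0x02`.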
[0056] For a write, the RAID manager skips the writing of new data on the offline member disk 110b, reads corresponding data blocks excluding check data from the surviving member disks 110, and updates the check data on a surviving member disk 110 by computing the Exclusive OR of the contents of the data blocks together with the new data. If the data block address specified by a write command is mapped to a surviving member disk 110a, 110c, 110d, and the offline member disk 110b contains associated check data, then the write command is executed as usual, with the omission of updating the check data by the RAID manager. For a write command on a mirrored RAID array 100c, such as illustrated in FIG. 1c, with one member disk 110b marked offline, for example, the RAID manager performs the write only on the surviving member disk 110a.
[0057] With one member disk 110, for example, member disk 2 110b, being designated offline, the logical drive 160b as depicted in FIG. 1b enters into a degraded state, with no more fault tolerance against a subsequent disk failure. In order to restore full data protection, the offline member disk 110b needs to be replaced by a functional disk, and the RAID manager makes the contents of the replacement disk consistent with the contents of the remaining array members. To that end, the RAID manager reads, for each stripe group, the corresponding stripes from each of the surviving original member disks 110a, 110c, 110d, computes the Exclusive OR of these stripes' contents, and writes the result to the replacement disk's stripe within the stripe group. This process is called rebuilding, and since each stripe group participates in the rebuilding, this kind of rebuilding process is known as full rebuilding. For rebuilding a replacement disk in a mirrored RAID array 100c such as depicted in FIG. 1c, the RAID manager derives replacement data from the surviving member disk 110.

[0058] Many RAID array 100 configurations may not include a hot standby disk. In accordance with the present invention, the error recovery handler extends its error recovery function to include disk reactivation by issuing special device reset protocols or a selective power cycle, if implemented, on the offline member disk 110b in the previous example, if the disk is capable of electrical communication. If the extended error recovery handler detects the state change to online in the offline member disk 110b subsequent to the reactivation, the extended error recovery handler notifies the RAID manager to start a rebuilding process thereon, in the absence of a hot standby disk.
In one embodiment, if member disk 2 110b was marked offline because of a removal from the array instead of being faulty, upon reinsertion of member disk 2 110b, the extended error recovery handler would find member disk 2 110b online again, likewise leading to a rebuilding.
[0059] The system 200 provides means in the storage controller 220 for tracking each changed stripe resulting from a requirement to write user data or any check data on the offline member disk 110b, in the same example, prior to a reactivation. Such a write requirement may originate from a write data command issued by the host 210 or an internal write request such as a stripe group initialization or an online capacity expansion. Consequently, the RAID manager posts a "mark on the wall" in the storage controller's 220 memory for each stripe group containing such changed stripe.
[0060] Thus, once the RAID manager starts a rebuilding process on the reactivated member disk 110b in the example, the RAID manager needs to rebuild only each changed stripe thereon based on the marks on the wall. This rebuilding process is referred to as differential rebuilding ("DR"), as opposed to the conventional full rebuilding that reconstructs each stripe of the offline member disk 110b. Subsequent to the rebuilding on each changed stripe, the RAID manager removes the corresponding mark on the wall, indicating that the contents consistency of the associated stripe group has been restored.
[0061] The system 200 further provides means in the storage controller 220 for reconstructing the list of the marks on the wall for stripe groups requiring a DR process subsequent to such event as a power failure. In one embodiment, for access speed reasons, the storage controller 220 may use a non-volatile memory for the most recent list of the marks on the wall and use the fault-tolerant disk storage 260 to store an older list of marks on the wall. Such lists represent an audit-trail log. In an alternate embodiment, the storage controller 220 may utilize unused storage space on member disks 110 to form a fault-tolerant disk storage for the audit-trail log instead of the separate fault-tolerant disk storage 260. The system 200 services needs of the host 210 for non-stop data retrieval and storage in a storage system 205 despite any single disk failure and restores any lost data efficiently.
[0062] FIG. 3 is a schematic block diagram illustrating one embodiment of a DR apparatus 300 of the present invention. The DR apparatus 300 performs and tracks the completion of a DR process configured to rebuild each changed stripe of an offline member disk 110 in a redundant RAID array 100b, 100c with no hot standby disk such as that shown in FIGS. 1b and 1c, subsequent to a state change to online by the disk 110 following a reactivation. The DR apparatus 300 may be located in the Storage Controller 220 of Figure 2. The DR apparatus 300 includes a WIP map 315, a WIP initialization module 325, a WIP map clear module 335, a WIP map update module 345, an extended error recovery module 355, a DR registration module 310, a service module 320, a stripe group selection module 330, an audit-trail log module 340, a non-volatile memory 350, and an I/O module 360.
[0063] The WIP map 315 is configured with a WIP map entry for each stripe group of the RAID array 100. The WIP map entry tracks the completion of a DR process that involves regeneration and writing of data including any check data for a stripe group on the offline member disk 110 reactivated to become online. The WIP map initialization module 325 creates the WIP map 315 and initializes each entry to indicate no outstanding DR process on the corresponding stripe group. The WIP map clear module 335 clears a WIP map entry for a stripe group in which a member stripe belonging to the offline member disk 110 was destined to store user data or any check data but unable to do so prior to the reactivation, indicating a DR process pending subsequent to the reactivation.

[0064] The extended error recovery module 355 attempts to reactivate the offline member disk 110 so that the disk may come online provided that the disk is capable of electrical communication. In certain embodiments, a member disk 110 may be designated offline if the disk is not present, that is, it is temporarily removed from the array. Reinserting the removed member disk 110 may cause the disk to come online again. The extended error recovery module 355 is configured to detect the state change to online from offline of the offline member disk 110 subsequent to a reactivation.
[0065] In one embodiment, a reactivation by the extended error recovery module 355 includes a device reset cycle and an automated selective device power cycle. The extended error recovery module 355 may designate the offline member disk 110 as permanently offline if the disk 110 fails to come online within a pre-specified period of time. In a certain embodiment, if the extended error recovery module 355 detects that a hot swap disk bearing a new identity such as a unit serial number replaces the offline member disk 110, the extended error recovery module 355 makes the replacement disk a candidate for a full rebuilding process on each stripe.
[0066] The DR registration module 310 registers a DR process subsequent to the reactivation whereby the offline member disk 110 returns to the online state, and de-registers the DR process upon completion. A stripe group selection module 330 selects each stripe group which has a WIP map 315 entry cleared. In one embodiment, the stripe group selection is based on an ascending numerical order of the stripe group number. The service module 320 performs the DR process, which includes regenerating and writing onto the reactivated member disk 110 data including any check data, at the stripe within the selected stripe group. The WIP map update module 345 sets the WIP map 315 entry for the stripe group completing the DR process.
[0067] The WIP map 315 entry for a stripe group may consist of one bit. The WIP initialization module 325 sets the bit of each such entry to a binary one (1) initially, indicating that the corresponding stripe group has no pending DR process. Once write data including any check data is targeted for a stripe of the offline member disk 110 prior to the reactivation, the WIP map clear module 335 clears the bit in the WIP map 315 entry for the associated stripe group to a binary zero (0). Upon completion of a DR process on the stripe group subsequent to the reactivation, the WIP map update module 345 sets the bit back to a binary one (1), indicating that the corresponding stripe group has completed a DR process.
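The one-bit-per-stripe-group WIP map of paragraph [0067] might be modeled as below. The class and method names are illustrative assumptions; only the bit semantics (one = no pending DR process, zero = DR process pending) come from the description above.

```python
class WIPMap:
    """One bit per stripe group: 1 = no pending DR process,
    0 = entry cleared, DR process pending."""

    def __init__(self, num_groups):
        self.num_groups = num_groups
        # Initialization: every bit set to binary one (no outstanding DR)
        self.bits = bytearray(b"\xff" * ((num_groups + 7) // 8))

    def clear_entry(self, group):
        """Clear the bit: a write targeted the offline disk's stripe."""
        self.bits[group // 8] &= 0xFF ^ (1 << (group % 8))

    def set_entry(self, group):
        """Set the bit back: the DR process completed on this group."""
        self.bits[group // 8] |= 1 << (group % 8)

    def is_pending(self, group):
        """True if the stripe group still requires a DR process."""
        return not (self.bits[group // 8] >> (group % 8)) & 1
```

A stripe group selection module could then scan for zero bits in ascending stripe group order to pick the next group to rebuild.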
[0068] In certain embodiments, the I/O module 360 receives an I/O command to read or write data. The I/O command includes information such as a data block address of an active logical drive 160 formed from the RAID array 100 and one or more consecutive data blocks to be accessed. The I/O module 360 determines if the logical drive 160 is in a degraded state, that is, if one member disk 110 is offline or being rebuilt. If the logical drive 160 is not in a degraded state, the I/O module 360 accesses the data block. If the logical drive 160 is in a degraded state, the I/O module 360 determines if a rebuilding process is active.
[0069] If a rebuilding process is active, the I/O module 360 determines if the associated stripe group has a pending rebuilding process. If not, the I/O module 360 accesses the data block. If the associated stripe group has a pending rebuilding process, the I/O module 360 in one embodiment delays the access of the data block until the rebuilding process is complete on the stripe group.
[0070] If the logical drive 160 is in a degraded state and no rebuilding process is active, the I/O module 360 determines if the data block address is mapped to the offline member disk 110. If not, the I/O module 360 accesses the data block. If the data block address is mapped to the offline member disk 110, the I/O module 360 determines if the I/O command is a read command. If not, the I/O module 360 skips an access to the data block and updates any check data on a surviving member disk 110. If the I/O command is a read command, the I/O module 360 regenerates data by reading corresponding data blocks of all the surviving member disks 110 in the associated stripe group and computing the Exclusive OR of the contents read. In one embodiment, as a result of executing the I/O command, the I/O module 360 determines if updating any check data of the stripe group is required on a surviving member disk 110. If so, the I/O module 360 updates the check data.
[0071] The audit-trail log module 340 may record an audit-trail log. The audit-trail log is configured as a log of the stripe group identifiers for each stripe group with a WIP map 315 entry cleared by the WIP map clear module 335. The stripe group identifiers in one embodiment are stripe group numbers such as those shown in FIGS. 1a, 1b, and 1c. Each log entry forms an audit trail indicating that the WIP map 315 entry has been cleared for the indicated stripe group.
[0072] In one embodiment, the fault-tolerant disk storage 260 stores a portion of the audit-trail log. The audit-trail log module 340 may periodically copy portions of the audit-trail log from the non-volatile memory 350 to the fault-tolerant disk storage 260 to free data storage space used by the audit-trail log in the non-volatile memory 350. The audit-trail log may reside in the non-volatile memory 350 or the fault-tolerant disk storage 260. Furthermore, portions of the audit-trail log may reside in both the non-volatile memory 350 and the fault-tolerant disk storage 260.
[0073] In certain embodiments, the audit-trail log module 340 recovers the cleared entries of the WIP map 315 from the audit-trail log. The audit-trail log module 340 may reconstruct the cleared entries of the WIP map 315 after the WIP map 315 is inadvertently lost in an event such as a power failure. In a further embodiment, the audit-trail log module 340 directs the WIP map initialization module 325 to re-initialize the WIP map 315. The audit-trail log module 340 may further read each entry of the audit-trail log and direct the WIP map clear module 335 to clear each corresponding entry of the WIP map 315. The DR apparatus 300 performs and tracks the completion of a DR process on the reactivated member disk 110 at each stripe within a stripe group having a pending DR process as indicated in the corresponding entry in the WIP map 315.
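The recovery sequence of paragraph [0073] — re-initialize the map, then replay the audit trail — could be sketched as follows; the map is modeled as a simple list of booleans and the function name is an illustrative assumption.

```python
def recover_wip_map(num_groups, audit_trail):
    """Reconstruct the WIP map after a loss such as a power failure:
    re-initialize every entry to set (no DR pending), then replay the
    audit-trail log, clearing the entry for each logged stripe group."""
    wip = [True] * num_groups       # True: entry set, no DR pending
    for group in audit_trail:
        wip[group] = False          # False: entry cleared, DR pending
    return wip
```

Because the audit trail holds exactly the identifiers of stripe groups whose entries were cleared, the replay restores the cleared entries without losing any pending DR work.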
[0074] The schematic flow chart diagrams that follow are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbology employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
[0075] FIG. 4 is a schematic flow chart diagram illustrating one embodiment of a DR method 400 in accordance with the present invention. The WIP map initialization module 325 creates 405 a WIP map 315 and initializes 405 each entry thereof, indicating that the corresponding stripe group has no outstanding DR process. In one embodiment, the WIP map 315 is a bit map with single-bit entries for tracking the completion of a DR process for each stripe group. As such, a binary one (1) value is given to the bit for each entry. The DR process on a stripe group involves regenerating and writing data including any check data onto an offline member disk 110 becoming online subsequent to a reactivation for the stripe group.
[0076] The WIP map clear module 335 clears 410 a WIP map 315 entry for a stripe group if the member stripe of an offline member disk 110 was destined to store write data including any check data prior to the reactivation of the disk to become online. Such a cleared entry indicates that the corresponding stripe group has a pending DR process and remains cleared until the pending DR process is completed. If the WIP map 315 is a bit map, clearing an entry amounts to resetting the bit to a binary zero (0) from a binary one (1) as initialized. In one embodiment, the audit-trail log module 340 enters the number of the stripe group corresponding to the cleared WIP map 315 entry in an audit-trail log.
[0077] The extended error recovery module 355 reactivates 415 the offline member disk 110 if the disk 110 is capable of electrical communication. In one embodiment, the error recovery module 355 issues device reset protocols and/or an automated selective power cycle to the disk 110 for reactivation. In certain embodiments, an offline member disk 110 may be manually reactivated by removing and subsequently re-inserting the disk. In an alternate embodiment, if a member disk 110 became offline due to a removal of the disk 110 from the array 100, the member disk 110 may become online again by re-inserting the disk 110. Whether the reactivation is applied by the extended error recovery module 355 or by the manual maneuver, the extended error recovery module 355 detects 420 a state change to online from offline of the offline member disk 110 subsequent to the reactivation.
[0078] The DR registration module 310 registers 425 the DR process subsequent to the state change to online of the reactivated member disk 110 detected by the extended error recovery module 355. The stripe group selection module 330 selects 430 a stripe group from the set of cleared entries of the WIP map 315. For example, if the WIP map 315 is a bit map and the stripe group selection module 330 queries the bit representing stripe group three (3), the stripe group selection module 330 may select stripe group three (3) if the queried bit has a binary value zero (0), indicating that a DR process is pending.
[0079] The service module 320 performs 435 the DR process on the stripe group selected by the stripe group selection module 330. The DR process performed on the reactivated member disk 110 at the stripe within the selected stripe group comprises regenerating data including any check data by means of reading member stripes of all surviving original member disks 110 and computing the Exclusive OR of the contents thereof, and writing the result on the stripe of the reactivated member disk 110.
[0080] The WIP map update module 345 sets 440 the WIP map 315 entry for the stripe group completing the DR process by the service module 320. If the WIP map 315 is a bit map, the WIP map update module 345 sets the corresponding bit in the WIP map 315 to a binary one (1). If the audit-trail log module 340 had entered the stripe group number in the audit-trail log for the stripe group pending a DR process, the audit-trail log module 340 removes the audit-trail log entry containing the stripe group number since the stripe group has completed the DR process.
[0081] The service module 320 determines 445 if the DR process is complete for each stripe group. In one embodiment, the service module 320 determines 445 that the DR process is complete by verifying that no cleared WIP map 315 entry remains. If the DR process is complete, the DR registration module 310 may deregister 450 the DR process and the method 400 terminates. If the DR process is not complete, the stripe group selection module 330 selects 430 a next stripe group such as the next higher numbered stripe group based on cleared entries of the WIP map 315. The service module 320 performs 435 the DR process on the next selected stripe group. The DR method 400 tracks the completion of the DR process using the WIP map 315.
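The select/perform/set/repeat loop of steps 430 through 450 might be sketched as follows. The function name, the set-based bookkeeping, and the `rebuild_fn` callback stand in for the stripe group selection, service, WIP map update, and audit-trail log modules; they are illustrative assumptions, not the disclosed modules themselves.

```python
def differential_rebuild(pending_groups, audit_trail, rebuild_fn):
    """Run a registered DR process: select each stripe group with a
    cleared WIP entry in ascending numerical order, rebuild its stripe
    on the reactivated disk, then set the WIP entry and remove the
    corresponding audit-trail log entry."""
    for group in sorted(pending_groups):
        rebuild_fn(group)              # regenerate and write the stripe
        pending_groups.discard(group)  # set the WIP map entry
        audit_trail.discard(group)     # drop the audit-trail entry
    # No cleared entry remains: the DR process may be de-registered
    return not pending_groups
```

Only stripe groups changed while the disk was offline are visited, which is what distinguishes differential rebuilding from a full rebuilding over every stripe group.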
[0082] FIG. 5 is a schematic flow chart diagram illustrating one embodiment of a data access method 500 in accordance with the present invention. The I/O module 360 receives 505 a read/write I/O command specifying a data block address of an active logical drive 160 that can be mapped to a member disk 110 and the associated stripe group. In one embodiment, the I/O module 360 receives 505 the I/O command from a host 210 such as the host 210 of FIG. 2. The I/O module 360 determines 510 if the logical drive 160 is in a degraded state. If not, that is, the logical drive 160 is in an online state, the I/O module 360 accesses 525 the addressed data block in the stripe group. If the logical drive 160 is in a degraded state, the I/O module 360 determines 515 if a rebuilding process is active, in the absence of a hot standby disk.
[0083] If a rebuilding process is active, the I/O module 360 determines 520 if the stripe group has a pending rebuilding process. In one embodiment, the I/O module 360 queries the WIP map 315 entry for the stripe group to determine 520 if the WIP map 315 entry is cleared. If not, the I/O module 360 accesses 525 the addressed data block. If the stripe group has a pending rebuilding process, the I/O module 360 may delay 530 the access to the data block until the rebuilding process is no longer pending on the stripe group.
[0084] If the logical drive 160 is in a degraded state and no rebuilding process is active, the I/O module 360 determines 540 if the data block address is mapped to an offline member disk 110. If not, the I/O module 360 accesses 525 the addressed data block. The I/O module 360 may notify the WIP map clear module 335 to clear 410 a WIP map 315 entry for the associated stripe group if the member stripe of the offline member disk 110 contains any check data, and the check data needs to be updated. If the WIP map 315 entry has not been cleared, the WIP map clear module 335 will do so. In conjunction with such an action by the WIP map clear module 335, the audit-trail log module 340 may record the stripe group identifier in an audit-trail log.
[0085] If the data block address is mapped to the offline member disk 110, the I/O module 360 determines 545 if the I/O command is a read command. If not, the I/O command is typically a write command, and the I/O module 360 skips an access to the data block and updates 560 any check data on a surviving member disk 110 as appropriate. The I/O module 360 may notify the WIP map clear module 335 to clear 410 the WIP map 315 entry for the associated stripe group. If the WIP map 315 entry has not been cleared, the WIP map clear module 335 will do so. In conjunction with such action by the WIP map clear module 335, the audit-trail log module 340 may record the stripe group identifier in an audit-trail log.
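One way to carry out the check-data update of step 560 is a reconstruct-write: the data block itself cannot be written to the offline disk, but the parity stripe on a surviving disk is recomputed so the unwritten block remains regenerable. The sketch below assumes this reconstruct-write approach and illustrative names; the patent does not prescribe a particular parity-update scheme:

```python
def degraded_write_parity(new_data, other_data_stripes):
    """Compute the new parity for a stripe group whose target data disk
    is offline: parity = XOR of the new (unwritable) data block with
    the data blocks of the remaining surviving data disks."""
    parity = bytearray(new_data)
    for stripe in other_data_stripes:
        for i, byte in enumerate(stripe):
            parity[i] ^= byte
    return bytes(parity)
```

A later read of the lost block then XORs this parity with the other surviving data stripes and recovers exactly the data the host wrote.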
[0086] If the I/O command is a read command, the I/O module 360 regenerates 550 data by reading corresponding data blocks of all the surviving member disks 110 in the stripe group and computing the Exclusive OR of the contents read. In one embodiment, as a result of executing the I/O command, the I/O module 360 updates 560 any check data of the stripe group on a surviving member disk 110 if the check data is required to be updated. The method 500 completes the execution of an I/O command regardless of whether the logical drive 160 is in an online state or a degraded state. [0087] FIGS. 6a, 6b, and 6c are schematic block diagrams illustrating one embodiment of an exemplary manner of WIP map operation 600 for a parity RAID array 100b in accordance with the present invention. In the depicted embodiments, the WIP map 315 is a bit map, each entry of which is a single bit, tracking the completion of a DR process for a stripe group. As shown, the parity RAID array 100b with a formed logical drive 160b includes four (4) member disks 110, namely member disk 1 110a, member disk 2 110b, member disk 3 110c, and member disk 4 110d. The array includes five (5) stripe groups numbered 0 through 4. The WIP map initialization module 325 initializes 405 the bit of each WIP map 315 entry to a binary one (1), indicating that each associated stripe group has no outstanding DR process, as illustrated in FIG. 6a. The WIP map operation 600 executes the first step of initializing the WIP map 315.
[0088] FIG. 6b shows that member disk 2 110b becomes offline. Subsequently, an exemplary write command operation (not shown) requires a write on data stripe 1 670 of the logical drive 160b, which is mapped to member disk 2 110b. Consequently, the WIP map clear module 335 clears 410 the entry 605 to a binary zero (0) of the WIP map 315 for stripe group number 0, which includes data stripe 1 670. Similarly, a second exemplary write command (not shown) operates on stripe group number 2 at data stripe 8 675, which is mapped to member disk 4 110d. Subsequent to the write operation, parity stripe P2 on member disk 2 110b, a member stripe of stripe group number 2, is required to be updated. The WIP map clear module 335 clears 410 the entry 610 to a binary zero (0) of the WIP map 315 for stripe group number 2. The cleared entries of the WIP map 315 indicate that each associated stripe group has a pending DR process. The WIP map operation 600 executes the second step: clearing the WIP map 315 entries for each stripe group having a pending DR process.
[0089] FIG. 6c indicates that member disk 2 110b has been reactivated, thereby becoming online. The service module 320 performs 435 a DR process on member disk 2 110b within stripe group number 0 and stripe group number 2 at data stripe 1 and parity stripe P2, respectively. Following the completion of a DR process, the WIP map update module 345 sets 440 entries 605 and 610 to binary ones (1s) of the WIP map 315 accordingly, indicating that stripe groups number 0 and number 2 have no more pending DR processes. The WIP map operation 600 completes the third and last step: setting each WIP map 315 entry subsequent to the completion of a DR process on the associated stripe groups.
[0090] FIGS. 7a, 7b, 7c, and 7d are schematic block diagrams illustrating one embodiment of an exemplary manner of tracking of changed stripes 700 with a WIP map 315 and an audit-trail log 740 in accordance with the present invention. As shown, the audit-trail log 740 may store portions of the audit-trail log 740a in the non-volatile memory 350 and remaining portions of the audit-trail log 740b in the fault-tolerant disk storage 260. In the depicted embodiments in FIGs. 7a, 7b, 7c, and 7d, the WIP map 315 includes ten (10) single-bit entries representing ten (10) stripe groups 710, for example, stripe group 0 710a, stripe group 1 710b, and so forth, and each entry of the audit-trail logs 740a and 740b contains the number of the stripe group with a cleared entry containing a binary zero (0) of the WIP map 315. The cleared entries of the WIP map 315 represent each stripe group that has a pending DR process. As illustrated, the audit-trail log 740a has four (4) entries 730a, 730b, 730c, and 730d. In one embodiment, the audit-trail log 740b, stored in the fault-tolerant disk storage 260, may contain as many entries as the WIP map 315 or more.
[0091] FIG. 7a indicates the initial conditions of the WIP map 315 and the audit-trail logs 740a and 740b. The WIP map initialization module 325 initializes 405 the bit of each entry of the WIP map 315 to a binary one (1), indicating no pending DR process for the corresponding stripe group. The audit-trail log module 340 removes all contents of audit-trail logs 740a and 740b, showing no valid entries. In an alternate embodiment, the audit-trail log module 340 may enter an invalid stripe group number in each entry of the audit-trail log 740 to represent the absence of a valid entry.
[0092] FIG. 7b depicts four occurrences of required stripe writing on the offline member disk 110 (not shown) at stripe group numbers 3, 5, 8, and 6 in sequence. The WIP map clear module 335 clears 410 four WIP map 315 entries to binary zeros (0s) for stripe group 3 710d, stripe group 5 710f, stripe group 6 710g, and stripe group 8 710i. The audit-trail log module 340 enters the appropriate stripe group numbers in the audit-trail log 740a; that is, number 3 in the first entry 730a, number 5 in the second entry 730b, number 8 in the third entry 730c, and number 6 in the fourth entry 730d.
[0093] FIG. 7c illustrates three more occurrences of required stripe writing on the offline member disk 110 (not shown) at stripe group numbers 9, 1, and 4 in that order. The WIP map clear module 335 clears 410 entries of the WIP map 315 accordingly. As depicted, the audit-trail log module 340 has pushed contents of the audit-trail log 740a as shown in FIG. 7b onto the audit-trail log 740b and entered the new numbers, that is, 9, 1, and 4, into the audit-trail log 740a. Numbers 3, 5, 8, and 6 show up in entries 750a, 750b, 750c, and 750d, respectively, of the audit-trail log 740b. Numbers 9, 1, and 4 are shown in entries 730a, 730b, and 730c, respectively, of the audit-trail log 740a.
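The two-tier behavior of FIGS. 7b and 7c can be modeled with a small class; the class, its method names, and the four-entry NVRAM capacity (taken from the depicted log 740a) are illustrative assumptions:

```python
class AuditTrail:
    """Two-tier audit-trail log sketch: a small log in non-volatile
    memory (740a) spills its contents to a larger disk-resident
    log (740b) when full."""

    def __init__(self, nvram_capacity=4):
        self.capacity = nvram_capacity
        self.nvram = []   # audit-trail log 740a
        self.disk = []    # audit-trail log 740b

    def record(self, group_number):
        if len(self.nvram) == self.capacity:
            self.disk.extend(self.nvram)   # push 740a contents onto 740b
            self.nvram = []
        self.nvram.append(group_number)
```

Replaying the write sequence of the figures, `record(3)`, `record(5)`, `record(8)`, `record(6)` fills log 740a; `record(9)` then pushes 3, 5, 8, 6 onto log 740b, and `record(1)`, `record(4)` leave 9, 1, 4 in log 740a, matching FIG. 7c.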
[0094] FIG. 7d depicts one embodiment of the audit-trail log entry reordering by the audit-trail log module 340, so that prior to the start of a DR process, the audit-trail log 740 entries in the non-volatile memory 350 and the fault-tolerant disk storage 260 contain numbers of stripe groups having a pending DR process in an ascending numerical order, which matches that of the WIP map 315 cleared entries when scanned top down. The reordering of audit-trail log 740 entries facilitates an updating of the audit-trail log 740 subsequent to a DR process. In an alternate embodiment, the stripe group selection module 330 selects 430 each stripe group according to the top-down order of the audit-trail log 740a entries as shown in FIG. 7c. Upon the completion of a DR process on all stripe groups listed in the audit-trail log 740a, the audit-trail log module 340 may bubble up entries in the audit-trail log 740b such as depicted in FIG. 7c into the audit-trail log 740a. The tracking of changed stripes 700 by use of the audit-trail log 740 aids in recovery of the WIP map 315 if invalidated inadvertently. [0095] FIGs. 8a and 8b are schematic block diagrams illustrating one embodiment of an exemplary updating operation 800 of the WIP map 315 and the audit-trail log 740 in accordance with the present invention. Continuing from the exemplary tracking 700 of stripe groups each having a pending DR process as shown in FIG. 7d, FIGs. 8a and 8b depict the updating of the WIP map 315 and the audit-trail log 740 subsequent to a DR process on a stripe group. FIG. 8a shows that following a DR process on stripe group 1, the WIP map update module 345 sets the bit 805 of the WIP map 315 to a binary one (1) from the binary zero (0) as shown in FIG. 7d, and that the audit-trail log module 340 removes number 1 from entry 730a of the audit-trail log 740a. Likewise, FIG.
8b illustrates the updating of the WIP map 315 entry bit 810 and the entry 730b of the audit-trail log 740a subsequent to a DR process on stripe group 3. Note that until the last entry 730d of the audit-trail log 740a is removed as a result of a DR process, no contents of entries of the audit-trail log 740b are popped into the audit-trail log 740a. The updating operation 800 of the WIP map 315 and the audit-trail log 740 upon completing a DR process on a stripe group not only tracks the DR process completion status, but also enables a reconstruction of the WIP map 315 without duplicating the DR process on the stripe group if the reconstruction becomes necessary.
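The updating operation 800 can be sketched as a single step applied after each completed DR process. The names are illustrative, and the four-entry bubble-up batch mirrors the four-entry NVRAM log of the example rather than anything the patent mandates:

```python
def complete_dr(wip_map, nvram_log, disk_log, group):
    """After a DR process completes on one stripe group: set its WIP map
    entry, remove it from the NVRAM audit-trail log (740a), and bubble up
    entries from the disk log (740b) only once 740a is empty."""
    wip_map[group] = 1          # DR process complete for this group
    nvram_log.remove(group)     # drop the group number from log 740a
    if not nvram_log:           # 740a exhausted: pop entries from 740b
        nvram_log.extend(disk_log[:4])
        del disk_log[:4]
```

Starting from the FIG. 7d state (740a holding 1, 3, 4, 5 and 740b holding 6, 8, 9), completing DR on groups 1, 3, 4, and 5 in order empties log 740a, at which point 6, 8, and 9 bubble up from log 740b.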
[0096] FIGs. 9a, 9b, and 9c are schematic block diagrams illustrating one embodiment of a WIP map recovery operation 900 in accordance with the present invention. Suppose, for example, that subsequent to the DR process on stripe group 3 in the example given in FIG. 8b, a power failure occurs. FIG. 9a depicts states of the WIP map 315 and audit-trail log 740 following the resumption of the power. As shown, the WIP map 315 has no record of a DR process pending on any stripe group. The audit-trail log 740a residing in the non-volatile memory 350 and the audit-trail log 740b residing in the fault-tolerant disk storage 260 have captured and retained DR process pending stripe group identifiers. The audit-trail log 740a contains stripe group numbers 4 and 5, and the audit-trail log 740b, stripe group numbers 6, 8, and 9. [0097] As depicted in FIG. 9b, the WIP map initialization module 325 re-initializes 405 the WIP map 315. FIG. 9c illustrates that the WIP map clear module 335 re-clears 410 entries 710 of the WIP map 315, representing stripe groups numbered 4, 5, 6, 8, and 9 based on the contents of the audit-trail logs 740a and 740b. The WIP map recovery 900 reinstates the stripe groups that had a DR process pending by reconstructing the WIP map 315 as if no power failure had occurred.
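The recovery operation 900 amounts to re-initializing the map and re-clearing one entry per logged stripe group. A minimal sketch, with illustrative names:

```python
def recover_wip_map(num_groups, nvram_log, disk_log):
    """Re-initialize the WIP map to all ones, then re-clear the entry of
    every stripe group recorded in either audit-trail log (740a or 740b)."""
    wip_map = [1] * num_groups
    for group in list(nvram_log) + list(disk_log):
        wip_map[group] = 0     # DR process still pending for this group
    return wip_map
```

With the FIG. 9 contents, `recover_wip_map(10, [4, 5], [6, 8, 9])` re-clears entries 4, 5, 6, 8, and 9, reproducing the pre-failure WIP map.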
[0098] The present invention utilizes a reconstructable WIP map 315 to track changed stripes of the offline member disk 110 prior to a reactivation and the completion of a DR process on stripe groups containing the stripes subsequent to the reactivation. In addition, the present invention shortens the duration of a degraded mode of operation of a logical drive 160 formed from a redundant RAID array 100b, 100c in the absence of a hot standby disk by reactivating the offline member disk 110 and rebuilding thereon only the changed stripes instead of each stripe regardless of whether changed or not. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
[0099] What is claimed is:

Claims

1. An apparatus for executing differential rebuilding ("DR"), the apparatus comprising: a work-in-process ("WIP") map for an offline member disk in a redundant RAID array with no hot standby disk configured with a plurality of WIP entries to track DR completion of each corresponding stripe group, each said entry being initially set; a WIP map clear module configured to clear a WIP map entry for a stripe group wherein the disk has an unfulfilled write requirement; an extended error recovery module configured to reactivate said disk and to detect a state change to online of said disk; and a DR performance module configured to perform DR on said disk following a successful reactivation at each stripe group with a cleared WIP map entry and subsequently set said entry.
2. The apparatus of claim 1, wherein the extended error recovery module reactivates the offline member disk using a device reset cycle and an automated selective device power cycle.
3. The apparatus of claim 1, wherein the offline member disk is configured to be manually reactivated to become online by removing and re-inserting said disk in said RAID array.
4. The apparatus of claim 1, wherein the extended error recovery module is further configured to designate the offline member disk as permanently offline if the offline member disk fails to become online within a pre-specified period of time.
5. The apparatus of claim 1, wherein if the extended error recovery module detects a replacement of the offline member disk by a hot swap disk bearing a new identity, the extended error recovery module makes the replacement disk undertake a full rebuilding on each stripe.
6. The apparatus of claim 1, further comprising an input/output ("I/O") module configured to execute an I/O command comprising a data block address of an active logical drive formed from said RAID array, access the data block if the logical drive is not in a degraded state, access the data block if the logical drive is in a degraded state, a rebuilding process is active, and the associated stripe group has no pending rebuilding process, delay access to the data block if the logical drive is in a degraded state, a rebuilding process is active, and the associated stripe group has a pending rebuilding process until the stripe group completes the rebuilding process, access the data block if the logical drive is in a degraded state, no rebuilding process is active, and the data block address is not mapped to the offline member disk, and regenerate data if the logical drive is in a degraded state, no rebuilding process is active, the data block address is mapped to the offline member disk, and the I/O command is a read command.
7. The apparatus of claim 6, wherein the I/O module is further configured to update any check data in the associated stripe group on a surviving member disk if required as a result of executing the I/O command.
8. The apparatus of claim 1, further comprising an audit-trail log module configured to record an audit-trail log as a log of the stripe group identifiers of WIP map cleared entries.
9. The apparatus of claim 8, wherein the audit-trail log module is further configured to reconstruct the WIP map from the audit-trail log.
10. The apparatus of claim 9, further comprising a non-volatile memory configured to store the audit-trail log.
11. The apparatus of claim 10, further comprising a fault-tolerant disk storage configured to receive and store a portion of the audit-trail log from the non-volatile memory.
12. The apparatus of claim 11, wherein the audit-trail log module is configured to reconstruct the WIP map from the audit-trail log stored in the non-volatile memory and the fault-tolerant disk storage.
13. A system for executing DR, the system comprising: a RAID array of member disks; a RAID enclosure housing the RAID array; a storage controller, coupled to the RAID array of member disks, the storage controller comprising: a processor, a memory coupled to the processor; a work-in-process ("WIP") map for an offline member disk in a redundant RAID array with no hot standby disk configured with a plurality of WIP entries to track DR completion of each corresponding stripe group, each said entry being initially set; a WIP map clear module configured to clear a WIP map entry for a stripe group wherein the disk has an unfulfilled write requirement; an extended error recovery module configured to reactivate the disk and to detect a state change to online of the disk; and a DR performance module configured to perform DR on the disk following a successful reactivation at each stripe group with a cleared WIP map entry and subsequently set said entry.
14. The system of claim 13, wherein the extended error recovery module reactivates the offline member disk using a device reset cycle and an automated selective device power cycle.
15. The system of claim 13, wherein the offline member disk is configured to be manually reactivated to become online by removing and re-inserting the disk in the enclosure of the RAID array.
16. The system of claim 13, wherein the extended error recovery module is further configured to designate the offline member disk as permanently offline if the offline member disk fails to become online within a pre-specified period of time.
17. The system of claim 13, wherein if the extended error recovery module detects a replacement of the offline member disk by a hot swap disk bearing a new identity, the extended error recovery module makes the replacement disk undertake a full rebuilding on each stripe.
18. The system of claim 13, wherein the storage controller further comprises an I/O module configured to execute an I/O command comprising a data block address of an active logical drive formed from the RAID array, access the data block if the logical drive is not in a degraded state, access the data block if the logical drive is in a degraded state, a rebuilding process is active, and the associated stripe group has no pending rebuilding process, delay access to the data block if the logical drive is in a degraded state, a rebuilding process is active, and the associated stripe group has a pending rebuilding process until the stripe group completes the rebuilding process, access the data block if the logical drive is in a degraded state, no rebuilding process is active, and the data block address is not mapped to the offline member disk, and regenerate data if the logical drive is in a degraded state, no rebuilding process is active, the data block address is mapped to the offline member disk, and the I/O command is a read command.
19. The system of claim 18, wherein the I/O module is further configured to update any check data in the associated stripe group on a surviving member disk if required as a result of executing the I/O command.
20. The system of claim 13, wherein the storage controller further comprises an audit-trail log module configured to record an audit-trail log as a log of the stripe group identifiers of WIP map cleared entries.
21. The system of claim 20, wherein the audit-trail log module is further configured to reconstruct the WIP map from the audit-trail log.
22. The system of claim 21, wherein the storage controller further comprises a non-volatile memory configured to store the audit-trail log.
23. The system of claim 22, further comprising a fault-tolerant disk storage configured to receive and store a portion of the audit-trail log from the non-volatile memory.
24. The system of claim 23, wherein the audit-trail log module is configured to reconstruct the WIP map from the audit-trail log stored in the non-volatile memory and the fault-tolerant disk storage.
25. A signal bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform operations to execute DR, the operations comprising: creating a WIP map for an offline member disk in a redundant
RAID array with no hot standby disk and setting each WIP map entry for an associated stripe group as initial condition; clearing the WIP map entry for a stripe group wherein the disk has an unfulfilled write requirement; reactivating the disk; detecting a state change to online of the disk; and performing DR on the successfully reactivated member disk at each stripe group with a cleared WIP map entry and subsequently setting said entry.
26. The signal bearing medium of claim 25, wherein the instructions further comprise operations to reactivate the offline member disk so as to make the disk online by initiating a device reset cycle and an automated selective device power cycle.
27. The signal bearing medium of claim 25, wherein the instructions further comprise operations to detect the state change to online from offline of the offline member disk subsequent to a manual reactivation.
28. The signal bearing medium of claim 25, wherein the instructions further comprise operations to designate the offline member disk as permanently offline if the disk fails to become online within a pre-specified period of time.
29. The signal bearing medium of claim 25, wherein the instructions further comprise operations to detect a replacement of the offline member disk by a hot swap disk bearing a new identity and to engage the replacement disk in a full rebuilding process on each stripe.
30. The signal bearing medium of claim 25, wherein the instructions further comprise operations to receive an I/O command comprising a data block address of an active logical drive formed from the RAID array, access the data block if the logical drive is not in a degraded state, access the data block if the logical drive is in a degraded state, a rebuilding process is active, and the associated stripe group has no pending rebuilding process, delay access to the data block if the logical drive is in a degraded state, a rebuilding process is active, and the associated stripe group has a pending rebuilding process until the stripe group completes the rebuilding process, access the data block if the logical drive is in a degraded state, no rebuilding process is active, and the data block address is not mapped to the offline member disk, and regenerate data if the logical drive is in a degraded state, no rebuilding process is active, the data block address is mapped to the offline member disk, and the I/O command is a read command.
31. The signal bearing medium of claim 30, wherein the instructions further comprise operations to update any check data in the associated stripe group on a surviving member disk if required as a result of executing the I/O command.
32. The signal bearing medium of claim 25, wherein the instructions further comprise operations to record an audit-trail log as a log of the stripe group identifiers of the WIP map cleared entries, and reconstruct the WIP map from the audit-trail log.
33. The signal bearing medium of claim 32, wherein the instructions further comprise operations to store the audit-trail log in a non-volatile memory.
34. The signal bearing medium of claim 33, wherein the instructions further comprise operations to receive and store a portion of the audit-trail log in a fault-tolerant disk storage from the non-volatile memory.
35. The signal bearing medium of claim 34, wherein the instructions further comprise operations to reconstruct the WIP map from the audit-trail log stored in the nonvolatile memory and in the fault-tolerant disk storage.
36. A method for executing DR, the method comprising: creating a WIP map for an offline member disk in a redundant RAID array with no hot standby disk and setting each WIP map entry for an associated stripe group as initial condition; clearing the WIP map entry for a stripe group wherein the disk has an unfulfilled write requirement; reactivating the disk; detecting a state change to online of the disk; and performing DR on the successfully reactivated member disk at each stripe group with a cleared WIP map entry and subsequently setting said entry.
37. An apparatus for executing DR, the apparatus comprising: means for creating a WIP map for an offline member disk in a redundant RAID array with no hot standby disk and setting each WIP map entry for an associated stripe group as initial condition; means for clearing the WIP map entry for a stripe group wherein the disk has an unfulfilled write requirement; means for reactivating the disk; means for detecting a state change to online of the disk; and means for performing DR on the successfully reactivated member disk at each stripe group with a cleared WIP map entry and subsequently setting the entry.
PCT/US2005/023472 2005-01-14 2005-06-30 Apparatus, system and method for differential rebuilding of a reactivated offline raid member disk WO2006078311A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/035,860 US7143308B2 (en) 2005-01-14 2005-01-14 Apparatus, system, and method for differential rebuilding of a reactivated offline RAID member disk
US11/035,860 2005-01-14

Publications (2)

Publication Number Publication Date
WO2006078311A2 true WO2006078311A2 (en) 2006-07-27
WO2006078311A3 WO2006078311A3 (en) 2007-05-10

Family

ID=36685354

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/023472 WO2006078311A2 (en) 2005-01-14 2005-06-30 Apparatus, system and method for differential rebuilding of a reactivated offline raid member disk

Country Status (3)

Country Link
US (1) US7143308B2 (en)
TW (1) TW200625088A (en)
WO (1) WO2006078311A2 (en)

CN110413218B (en) * 2018-04-28 2023-06-23 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for fault recovery in a storage system
CN109508275A (en) * 2018-10-18 2019-03-22 高新兴国迈科技有限公司 Data collection workstation device with disk state detection function
CN111104244B (en) * 2018-10-29 2023-08-29 伊姆西Ip控股有限责任公司 Method and apparatus for reconstructing data in a storage array set
CN111124746B (en) * 2018-10-30 2023-08-11 伊姆西Ip控股有限责任公司 Method, apparatus and computer readable medium for managing redundant array of independent disks
US11269738B2 (en) * 2019-10-31 2022-03-08 EMC IP Holding Company, LLC System and method for fast rebuild of metadata tier
CN113391937A (en) * 2020-03-12 2021-09-14 伊姆西Ip控股有限责任公司 Method, electronic device and computer program product for storage management
CN111880963B (en) * 2020-07-29 2022-06-10 北京浪潮数据技术有限公司 Data reconstruction method, device, equipment and storage medium
US11755226B2 (en) 2020-09-18 2023-09-12 Hewlett Packard Enterprise Development Lp Tracking changes of storage volumes during data transfers
US11720274B2 (en) 2021-02-03 2023-08-08 Hewlett Packard Enterprise Development Lp Data migration using cache state change
US11693565B2 (en) 2021-08-10 2023-07-04 Hewlett Packard Enterprise Development Lp Storage volume synchronizations responsive to communication link recoveries

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5390187A (en) * 1990-10-23 1995-02-14 Emc Corporation On-line reconstruction of a failed redundant array system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5274799A (en) * 1991-01-04 1993-12-28 Array Technology Corporation Storage device array architecture with copyback cache
US5708668A (en) * 1992-05-06 1998-01-13 International Business Machines Corporation Method and apparatus for operating an array of storage devices
JP3183719B2 (en) * 1992-08-26 2001-07-09 三菱電機株式会社 Array type recording device
WO1994029795A1 (en) * 1993-06-04 1994-12-22 Network Appliance Corporation A method for providing parity in a raid sub-system using a non-volatile memory
US6732290B1 (en) * 2000-11-22 2004-05-04 Mti Technology Corporation Recovery system for raid write
US6820211B2 (en) * 2001-06-28 2004-11-16 International Business Machines Corporation System and method for servicing requests to a storage array
US7055058B2 (en) * 2001-12-26 2006-05-30 Boon Storage Technologies, Inc. Self-healing log-structured RAID
US7103884B2 (en) * 2002-03-27 2006-09-05 Lucent Technologies Inc. Method for maintaining consistency and performing recovery in a replicated data storage system
US6715048B1 (en) * 2002-03-28 2004-03-30 Emc Corporation System and method for efficiently performing a restore operation in a data storage environment
US7219201B2 (en) * 2003-09-17 2007-05-15 Hitachi, Ltd. Remote storage disk control device and method for controlling the same
US20050210318A1 (en) * 2004-03-22 2005-09-22 Dell Products L.P. System and method for drive recovery following a drive failure

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117172A (en) * 2015-08-31 2015-12-02 北京神州云科数据技术有限公司 RAID (Redundant Arrays of Inexpensive Disks) historical non-identification record storage method
CN105117172B (en) * 2015-08-31 2019-04-02 深圳神州数码云科数据技术有限公司 Method for storing disk array historical offline-disk records

Also Published As

Publication number Publication date
TW200625088A (en) 2006-07-16
WO2006078311A3 (en) 2007-05-10
US7143308B2 (en) 2006-11-28
US20060161805A1 (en) 2006-07-20

Similar Documents

Publication Publication Date Title
US7143308B2 (en) Apparatus, system, and method for differential rebuilding of a reactivated offline RAID member disk
JP3226370B2 (en) Improvements on high availability disk arrays
US7721143B2 (en) Method for reducing rebuild time on a RAID device
JP3184171B2 (en) Disk array device, error control method thereof, and recording medium recording the control program
CN101276302B (en) Magnetic disc fault processing and data restructuring method in magnetic disc array system
US7587631B2 (en) RAID controller, RAID system and control method for RAID controller
US6883112B2 (en) Storage device, backup and fault tolerant redundant method and computer program code of plurality storage devices
JP2002108573A (en) Disk array device and method for controlling its error and recording medium with its control program recorded thereon
US6751136B2 (en) Drive failure recovery via capacity reconfiguration
US20080126840A1 (en) Method for reconstructing data in case of two disk drives of raid failure and system therefor
JPH10254648A (en) Storage device storing portable media
US7694171B2 (en) Raid5 error recovery logic
JP2006252126A (en) Disk array device and its reconstruction method
US7529776B2 (en) Multiple copy track stage recovery in a data storage system
JPH11184643A (en) Managing method for disk array device and mechanically readable recording medium recording program
JP2000200157A (en) Disk array device and data restoration method in disk array device
US7024585B2 (en) Method, apparatus, and program for data mirroring with striped hotspare
CN106933707B (en) Data recovery method and system of data storage device based on RAID technology
JP2010026812A (en) Magnetic disk device
JP4248164B2 (en) Disk array error recovery method, disk array control device, and disk array device
JP3399398B2 (en) Mirror Disk Recovery Method in Fault Tolerant System
JP2004102815A (en) Method for copying data between logical disks, program for copying data between logical disks, and disk controller
JPH06266508A (en) Disk array control method
JPH10254634A (en) Storage device and restoration means for storage device
US7337270B2 (en) Apparatus, system, and method for servicing a data storage device using work-in-process (WIP) maps

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 05769418

Country of ref document: EP

Kind code of ref document: A2