US20060206665A1 - Accelerated RAID with rewind capability - Google Patents

Accelerated RAID with rewind capability

Info

Publication number
US20060206665A1
US20060206665A1 (Application US 11/433,152)
Authority
US
United States
Prior art keywords
data
cache
log
area
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/433,152
Inventor
Tim Orsley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
QUANATUM Corp
Quantum Corp
Original Assignee
Quantum Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quantum Corp filed Critical Quantum Corp
Priority to US11/433,152
Assigned to QUANATUM CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ORSELY, TIM
Publication of US20060206665A1
Assigned to CREDIT SUISSE: SECURITY AGREEMENT. Assignors: ADVANCED DIGITAL INFORMATION CORPORATION, CERTANCE (US) HOLDINGS, INC., CERTANCE HOLDINGS CORPORATION, CERTANCE LLC, QUANTUM CORPORATION, QUANTUM INTERNATIONAL, INC.
Assigned to CERTANCE, LLC, CERTANCE HOLDINGS CORPORATION, QUANTUM INTERNATIONAL, INC., ADVANCED DIGITAL INFORMATION CORPORATION, QUANTUM CORPORATION, CERTANCE (US) HOLDINGS, INC.: RELEASE BY SECURED PARTY. Assignors: CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT
Legal status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1415Saving, restoring, recovering or retrying at system level
    • G06F11/1441Resetting or repowering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2066Optimisation of the communication load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2069Management of state, configuration or failover
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10Indexing scheme relating to G06F11/10
    • G06F2211/1002Indexing scheme relating to G06F11/1076
    • G06F2211/1004Adaptive RAID, i.e. RAID system adapts to changing circumstances, e.g. RAID1 becomes RAID5 as disks fill up
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10Indexing scheme relating to G06F11/10
    • G06F2211/1002Indexing scheme relating to G06F11/1076
    • G06F2211/103Hybrid, i.e. RAID systems with parity comprising a mix of RAID types
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99953Recoverability
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99955Archiving or backup

Definitions

  • the present invention relates to data protection in data storage devices, and in particular to data protection in disk arrays.
  • Storage devices of various types are utilized for storing information such as in computer systems.
  • Conventional computer systems include storage devices such as disk drives for storing information managed by an operating system file system.
  • With decreasing costs of storage space, an increasing amount of data is stored on individual disk drives.
  • However, in case of disk drive failure, important data can be lost.
  • To alleviate this problem, some fault-tolerant storage devices utilize an array of redundant disk drives (RAID).
  • the data stored on the primary storage devices is backed-up to secondary storage devices such as tape, from time to time.
  • True data protection can be achieved by keeping a log of all writes to a storage device, on a data block level.
  • a user data set and a write log are maintained, wherein the data set has been completely backed up and thereafter a log of all writes is maintained.
  • the backed-up data set and the write log allow returning to any state of the data set prior to its current state, by restoring the backed-up (baseline) data set and then executing all writes from the log up until the desired time.
  • to protect the log file itself, RAID configured disk arrays provide protection against data loss by protecting against a single disk drive failure.
  • Protecting the log file stream using RAID has been achieved by either a RAID mirror (known as RAID-1), shown by example in FIG. 1, or a RAID stripe (known as RAID-5), shown by example in FIG. 2.
  • in the RAID mirror 10, which includes several disk drives 12, two disk drives store the data of one independent disk drive.
  • in the RAID stripe 14, n+1 disk drives 12 are required to store the data of n independent disk drives (e.g., in FIG. 2, a stripe of five disk drives stores the data of four independent disk drives).
  • the example RAID mirror 10 in FIG. 1 includes an array of eight disk drives 12 (e.g., drive0-drive7), wherein each disk drive 12 has e.g. 100 GB capacity. In each disk drive 12, half the capacity is used for user data, and the other half for mirror data. As such, user data capacity of the disk array 10 is 400 GB and the other 400 GB is used for mirror data.
  • in this example mirror configuration, drive1 protects drive0 data (M0), drive2 protects drive1 data (M1), and so on.
  • if drive0 fails, then the data M0 in drive1 can be used to recreate data M0 in drive0, and the data M7 in drive7 can be used to create data M7 of drive0. As such, no data is lost in case of a single disk drive failure.
  • a RAID stripe configuration effectively groups capacity from all but one of the disk drives in the disk array 14 and writes the parity (XOR) of that capacity on the remaining disk drive (or across multiple drives as shown).
  • the disk array 14 includes five disk drives 12 (e.g., drive 0 -drive 4 ) each disk drive 12 having e.g. 100 GB capacity, divided into 5 sections.
  • the blocks S 0 -S 3 in the top portions of drive 0 -drive 3 are for user data, and a block of drive 4 is for parity data (i.e., XOR of S 0 -S 3 ).
  • the RAID stripe capacity is 400 GB for user data and 100 GB for parity data.
  • the parity area is distributed among the disk drives 12 as shown. Spreading the parity data across the disk drives 12 allows spreading the task of reading the parity data over several disk drives as opposed to just one disk drive.
  • Writing on a disk drive in a stripe configuration requires that the disk drive holding parity be read, a new parity calculated and the new parity written over the old parity. This requires a disk revolution and increases the write latency. The increased write latency decreases the throughput of the storage device 14 .
  • the RAID mirror configuration allows writing the log file stream to disk faster than the RAID stripe configuration (“stripe”).
  • a mirror is faster than a stripe since in the mirror, each write activity is independent of other write activities, in that the same block can be written to the mirroring disk drives at the same time.
  • a mirror configuration requires that the capacity to be protected be matched on another disk drive. This is costly as the capacity to be protected must be duplicated, requiring double the number of disk drives.
  • a stripe reduces such capacity to 1/n where n is the number of disk drives in the disk drive array. As such, protecting data with parity across multiple disk drives makes a stripe slower than a mirror, but more cost effective.
  • the present invention satisfies these needs.
  • the present invention provides a method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, by dividing the storage area on the storage units into a hybrid of a logical mirror area (i.e., RAID mirror) and a logical stripe area (i.e., RAID stripe).
  • when storing data in the mirror area, the data is duplicated by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, the data is stored as stripes of blocks, including data blocks and associated error-correction blocks.
  • a log file stream is maintained as a log cache in the RAID mirror area for writing data from a host to the storage subsystem, and then data is transferred from the log file in the RAID mirror area to the final address in the RAID stripe area, preferably as a background task. In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host.
  • a memory cache (RAM cache) is added in front of the log cache, wherein incoming host blocks are first written to RAM cache quickly and the host is acknowledged.
  • the host perceives a faster write cycle than is possible if the data were written to a data storage unit while the host waited for an acknowledgement. This further enhances the performance of the above hybrid RAID subsystem.
  • a flashback module (backup module) is added to the subsystem to protect the RAM cache data.
  • the flashback module includes a non-volatile memory, such as flash memory, and a battery. During normal operations, the battery is trickle charged. Should any power failure then occur, the battery provides power to transfer the contents of the RAM cache to the flash memory. Upon restoration of power, the flash memory contents are transferred back to the RAM cache, and normal operations resume.
  • Read performance is further enhanced by pressing a data storage unit (e.g., disk drive) normally used as a spare data storage unit (“hot spare”) in the array, into temporary service in the hybrid RAID system.
  • the hot spare can be used to replicate the data in the mirrored area of the hybrid RAID subsystem. Should any data storage unit in the array fail, this hot spare could immediately be delivered to take the place of that failed data storage unit without increasing exposure to data loss from a single data storage unit failure.
  • the replication of the mirror area would make the array more responsive to read requests by allowing the hot spare to supplement the mirror area.
  • the mirror area acts as a temporary store for the log, prior to storing the write data in its final location in the stripe area.
  • prior to purging the data from the mirror area, the log can be written sequentially to an archival storage medium such as tape. If a baseline backup of the entire RAID subsystem stripe is created just before the log files are archived, each successive state of the RAID subsystem can be recreated by re-executing the write requests within the archived log files. This would allow any earlier state of the stripe of the RAID subsystem to be recreated (i.e., infinite roll-back or rewind). This is beneficial in allowing recovery from e.g. user error such as accidentally erasing a file, from a virus infection, etc.
  • the present invention provides a method and system of providing cost-effective data protection with better data read/write performance than a conventional RAID system, and also provides the capability of returning to a desired previous data state.
  • FIG. 1 shows a block diagram of an example disk array configured as a RAID mirror
  • FIG. 2 shows a block diagram of an example disk array configured as a RAID stripe
  • FIG. 3A shows a block diagram of an example hybrid RAID data organization in a disk array according to an embodiment of the present invention
  • FIG. 3B shows an example flowchart of an embodiment of the steps of data storage according to the present invention
  • FIG. 3C shows a block diagram of an example RAID subsystem logically configured as hybrid RAID stripe and mirror, according to the hybrid RAID data organization FIG. 3A ;
  • FIG. 4A shows an example data set and a log of updates to the data set after a back-up
  • FIG. 4B shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • FIG. 4C shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • FIG. 5A shows another block diagram of the disk array of FIGS. 3A and 3B , further including a flashback module according to the present invention
  • FIG. 5B shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • FIG. 5C shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • FIG. 6A shows a block diagram of another example hybrid RAID data organization in a disk array including a hot spare used as a temporary RAID mirror according to the present invention
  • FIG. 6B shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • FIG. 6C shows a block diagram of an example RAID subsystem logically configured as the hybrid RAID data organization of FIG. 6A that further includes a hot spare used as a temporary RAID mirror;
  • FIG. 7A shows a block diagram of another disk array including a hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention
  • FIG. 7B shows a block diagram of another disk array including hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention
  • FIG. 8A shows an example of utilizing a hybrid RAID subsystem in a storage area network (SAN), according to the present invention
  • FIG. 8B shows an example of utilizing a hybrid RAID as a network attached storage (NAS), according to the present invention.
  • FIG. 8C shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • an example fault-tolerant storage subsystem 16 having an array of failure independent data storage units 18 , such as disk drives, using a hybrid RAID data organization according to an embodiment of the present invention is shown.
  • the data storage units 18 can be other storage devices, such as e.g. optical storage devices, DVD-RAM, etc.
  • protecting data with parity across multiple disk drives makes a RAID stripe slow but cost effective.
  • a RAID mirror provides better data transfer performance because the target sector is simultaneously written on two disk drives, but requires that the capacity to be protected be matched on another disk drive.
  • a RAID stripe reduces such capacity to 1/n where n is the number of drives in the disk array, but in a RAID stripe, both the target and the parity sector must be read then written, causing write latency.
  • an array 17 of six disk drives 18 (e.g., drive 0 -drive 5 ) is utilized for storing data from, and reading data back to, a host system, and is configured to include both a RAID mirror data organization and a RAID stripe data organization according to the present invention.
  • the RAID mirror (“mirror”) configuration provides performance advantage when transferring data to disk drives 18 using e.g. a log file stream approach
  • the RAID stripe (“stripe”) configuration provides cost effectiveness by using the stripe organization for general purpose storage of user data sets.
  • this is achieved by dividing the capacity of the disk array 17 of FIG. 3A into at least two areas (segments), including a mirror area 20 and a stripe area 22 (step 100 ).
  • a data set 24 is maintained in the stripe area 22 (step 102 ), and an associated log file/stream 26 is maintained in the mirror area 20 (step 104 ).
  • the log file 26 is maintained as a write log cache in the mirror area 20 , such that upon receiving a write request from a host, the host data is written to the log file 26 (step 106 ), and then data is transferred from the log file 26 in the mirror area 20 to a final address in the data set 24 in the stripe area 22 (preferably, performed as a background task) (step 108 ). In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host.
  • the log is backed-up to tape continually or on a regular basis (step 110 ). The above steps are repeated as write requests arrive from the host.
  • the disk array 17 can include additional hybrid RAID mirror and RAID stripe configured areas according to the present invention.
  • the example hybrid RAID subsystem 16 further includes a data organization manager 28 having a RAID controller 30 that implements the hybrid data organization of FIG. 3A on the disk array 17 (e.g., an array of N disk drives 18 ).
  • remaining portions of the capacity of the disk drives 18 are used as a RAID stripe for user data (e.g., S0-S29) and parity data (e.g., XOR0-XOR29).
  • 400 GB of user data is stored in the hybrid RAID subsystem 16 , compared to the same capacity in the RAID mirror 10 of FIG. 1 and the RAID stripe 14 of FIG. 2 .
  • the subsystem 16 communicates with a host 29 via a host interface 31 .
  • Other numbers of disk drives, with different storage capacities, can also be used in the RAID subsystem 16 of FIG. 3C, according to the present invention.
  • FIG. 4A shows an example user data set 24 and a write log 26 , wherein the data set 24 has been completely backed up at e.g. midnight and thereafter a log 26 of all writes has been maintained (e.g., at times t 1 -t 6 ).
  • each write log entry 26 a includes updated data (udata) and the address (addr) in the data set where the updated data is to be stored, and a corresponding time stamp (ts).
  • the data set at each time t 1 -t 6 is also shown in FIG. 4A .
  • the log file 26 is first written in the RAID mirror area 20 and then data is transferred from the log file 26 in the RAID mirror area 20 to the final address in the RAID stripe area 22 (preferably as a background task), according to the present invention.
  • the disk array 17 (FIG. 3C) is used as a write log cache in a three step process: (1) when the host needs to write data to a disk, rather than writing to the final destination in a disk drive, that data is first written to the log 26, satisfying the host; (2) then, when the disk drive is not busy, that data from the log 26 is transferred to the final destination data set on the disk drive, transparent to the host; and (3) the log data is backed up to e.g. tape to free up storage space to log new data from the host (see the sketch at the end of this section).
  • the log and the final destination data are maintained in a hybrid RAID configuration as described.
  • upon receiving a host read request, a determination is made whether the requested data is in the write log 26 maintained as a cache in the mirror area 20 (i.e., a cache hit), and if so, the requested data is transferred to the host 29 from the log 26 (step 124).
  • if there is no log cache hit, the stripe area 22 is accessed to retrieve the requested data to provide to the host (step 126).
  • the stripe area 22 is used for flushing the write log data, thereby permanently storing the data set in the stripe area 22 , and also used to read data blocks that are not in the write log cache 26 in the mirror area 20 .
  • the hybrid RAID system 16 is an improvement over a conventional RAID stripe without a RAID mirror, since according to the present invention most recently written data is likely in the log 26 stored in the mirror area 20 , which provides a faster read than a stripe.
  • the hybrid RAID system provides the equivalent of RAID mirror performance for all writes and for most reads, since the most recently written data is the most likely to be read back.
  • the RAID stripe 22 is only accessed to retrieve data not found in the log cache 26 stored in the RAID mirror 20, whereby the hybrid RAID system 16 essentially provides the performance of a RAID mirror at the cost effectiveness of a RAID stripe.
  • if the stripe 22 is written to as a foreground process (e.g., in real time), then there is a write performance penalty (i.e., the host is waiting for an acknowledgement that the write is complete).
  • the log cache 26 permits avoidance of such real-time writes to the stripe 22.
  • because the disk array 17 is divided into two logical data areas (i.e., a mirrored log write area 20 and a striped read area 22), using a mirror configuration for log writes avoids the write performance penalty of a stripe.
  • if the mirror area 20 is sufficiently large to hold all log writes that occur during periods of peak activity, updates to the stripe area 22 can be performed in the background.
  • the mirror area 20 is essentially a write cache, and writing the log 26 to the mirror area 20 with background writes to the stripe area 22 allows the hybrid subsystem 16 to match mirror performance at stripe-like cost.
  • to further enhance performance, a cache memory (e.g., RAM write cache 32, FIG. 5A) is added in front of the log cache 26, wherein incoming host blocks are first written to the RAM write cache 32 quickly and the host is acknowledged (step 138). The host perceives a faster write cycle than is possible if the data were written to disk while the host waited for an acknowledgement.
  • the host data in the RAM write cache 32 is copied sequentially to the log 26 in the mirror area 20 (i.e., disk mirror write cache) (step 140 ), and the log data is later copied to the data set 24 in the stripe area 22 (i.e., disk stripe data set) e.g. as a background process (step 142 ). Sequential writes to the disk mirror write cache 26 and random writes to the disk stripe data set 24 , provide fast sequential writes.
  • a flashback module 34 (backup module) can be added to the disk array 17 to protect RAM cache data according to the present invention. Without the module 34 , write data would not be secure until stored at its destination address on disk.
  • the module 34 includes a non-volatile memory 36 such as Flash memory, and a battery 38 .
  • the battery 38 is trickle charged from an external power source 40 (step 150 ). Should any power failure then occur, the battery 38 provides the RAID controller 30 with power sufficient (step 152 ) to transfer the contents of the RAM write cache 32 to the flash memory 36 (step 154 ). Upon restoration of power, the contents of the flash memory 36 are transferred back to the RAM write cache 32 , and normal operations resume (step 156 ). This allows acknowledging the host write request (command) once the data is written in the RAM cache 32 (which is faster than writing it to the mirror disks).
  • the flashback module 34 can be moved to another hybrid subsystem 16 to restore data from the flash memory 36.
  • writes can be accumulated in the RAM cache 32 and written to the mirrored disk log file 26 sequentially (e.g., in the background).
  • write data should be transferred to disk as quickly as possible. Since sequential throughput of a hard disk drive is substantially better than random performance, the fastest way to transfer data from the RAM write cache 32 to disk is via the log file 26 (i.e., a sequence of address/data pairs above) in the mirror area 20 . This is because when writing a data block to the mirror area 20 , the data block is written to two different disk drives. Depending on the physical disk address of the incoming blocks from the host to be written, the disk drives of the mirror 20 may be accessed randomly. However, as a log file is written sequentially based on entries in time, the blocks are written to the log file in a sequential manner, regardless of their actual physical location in the data set 24 on the disk drives.
  • data requested by the host 29 from the RAID subsystem 16 can be in the RAM write cache 32 , in the log cache area 26 in the mirror 20 area or in the general purpose stripe area 22 .
  • a determination is made if the requested data is in the RAM cache 32 (step 162 ), and if so, the requested data is transferred to the host 29 from the RAM cache 32 (step 164 ).
  • otherwise, a determination is made whether the requested data is in the write log file 26 in the mirror area 20 (step 166), and if so, the requested data is transferred to the host from the log 26 (step 168). If the requested data is not in the log 26, then the data set 24 in the stripe area 22 is accessed to retrieve the requested data to provide to the host (step 169).
  • since data in the mirror area 20 is replicated, twice the number of actuators are available to pursue read requests, effectively doubling responsiveness. While this mirror benefit is generally recognized, the benefit may be enhanced because the mirror does not contain random data but rather data that has recently been written. As discussed, because the likelihood that data will be read is probably inversely proportional to the time since the data was written, the mirror area 20 may be more likely to contain the desired data. A further acceleration can be realized if the data is read back in the same order it was written, regardless of the potential randomness of the final data addresses, since the mirror area 20 stores data in the written order and a read in that order creates a sequential stream.
  • read performance of the subsystem 16 can further be enhanced.
  • one of the disk drives in the array can be reserved as a spare disk drive (“hot spare”), wherein if one of the other disk drives in the array should fail, the hot spare is used to take the place of that failed drive.
  • read performance can be further enhanced by pressing a disk drive normally used as a hot spare in the disk array 17 , into temporary service in the hybrid RAID subsystem 16 .
  • FIG. 6A shows the hybrid RAID subsystem 16 of FIG. 3A , further including a hot spare disk drive 18 a (i.e., drive 6 ) according to the present invention.
  • the status of the hot spare 18 a is determined (step 170 ) and upon detecting the hot spare 18 a is lying dormant (i.e., not being used as a failed device replacement) (step 172 ), the hot spare 18 a is used to replicate the data in the mirrored area 20 of the hybrid RAID subsystem 16 (step 174 ). Then upon receiving a read request from the host (step 176 ), it is determined if the requested data is in the hot spare 18 a and the mirror area 20 (step 178 ).
  • if so, a copy of the requested data is provided to the host from the hot spare 18a with minimum latency, or from the mirror area 20 if faster (step 180). Otherwise, a copy of the requested data is provided to the host from the mirror area 20 or the stripe area 22 (step 182). Thereafter, it is determined if the hot spare 18a is required to replace a failed disk drive (step 184). If not, the process goes back to step 176; otherwise the hot spare 18a is used to replace the failed disk drive (step 186).
  • the hot spare 18 a can immediately be delivered to take the place of that failed disk drive without increasing exposure to data loss from a single disk drive failure. For example, if drive 1 fails, drive 0 and drive 2 -drive 5 can start using the spare drive 6 and rebuild drive 6 to contain data of drive 1 prior to failure. However, while all the disk drives 18 of the array 17 are working properly, the replication of the mirror area 20 would make the subsystem 16 more responsive to read requests by allowing the hot spare 18 a to supplement the mirror area 20 .
  • the hot spare 18 a may be able to provide multiple redundant data copies for further performance boost. For example, if the hot spare 18 a matches the capacity of the mirrored area 20 of the array 17 , the mirrored area data can then be replicated twice on the hot spare 18 a . For example, in the hot spare 18 a data can be arranged wherein the data is replicated on each concentric disk track (i.e., one half of a track contains a copy of that which is on the other half of that track). In that case, rotational latency of the hot spare 18 a in response to random requests is effectively halved (i.e., smaller read latency).
  • FIG. 6C shows an example block diagram of a hybrid RAID subsystem 16 including a RAID controller 30 that implements the hybrid RAID data organization of FIG. 6A , for seven disk drives (drive 0 -drive 6 ), wherein drive 6 is the hot spare 18 a .
  • in FIG. 6C, for example, M0 data is in drive0 and is duplicated in drive1, whereby drive1 protects drive0.
  • M0 data is also written to the spare drive6 using replication, such that if requested M0 data is in the write log 26 in the mirror area 20, it can be read back from drive0, drive1, or the spare drive6. Since M0 data is replicated twice in drive6, drive6 appears to have a higher r.p.m. because, as described, replication lowers read latency. Spare drive6 can be configured to store all the mirrored blocks in a replicated fashion, similar to that for M0 data, to improve the read performance of the hybrid subsystem 16.
  • the hot spare 18 a can replicate the mirror area 20 twice. If the hot spare 18 a includes a replication of the mirror area, the hot spare 18 a can be removed from the subsystem 16 and backed-up. The backup can be performed off-line, not using network bandwidth. A new baseline could be created from the hot spare 18 a.
  • the backup can be restored from tape to a secondary disk array and then all writes from the log file 26 written to the stripe 22 of the secondary disk array.
  • the order of writes need not take place in a temporal order but can be optimized to minimize time between reads of the hot spare and/or writes to the secondary array.
  • the stripe of the secondary array is then in the same state as that of the primary array, as of the time the hot spare was removed from the primary array.
  • FIG. 7A shows a block diagram of an embodiment of a hybrid RAID subsystem 16 implementing said hybrid RAID data organization, and further including a hot spare 18 a as a redundant mirror and a flashback module 34 , according to the present invention.
  • Writing to the log 26 in the mirror area 20 and the flashback module 34 removes the write performance penalty normally associated with replication on a mirror.
  • Replication on a mirror involves adding a quarter rotation to all writes. When the target track is acquired, average latency to one of the replicated sectors is one quarter rotation, but half a rotation is needed to write the other sector. Since average latency on a standard mirror is half a rotation, an additional quarter rotation is required for writes.
  • FIG. 7B shows a block diagram of another embodiment of hybrid RAID subsystem 16 of FIG. 7A , wherein the flashback module 34 is part of the data organization manager 28 that includes the RAID controller 30 .
  • FIG. 8A shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention in an example block device such as a storage area network (SAN) 42.
  • FIG. 8B shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention as a network attached storage (NAS) in a network 44 .
  • in the network 44, connected devices exchange files; as such, a file server 46 is positioned in front of the hybrid RAID subsystem 16.
  • the file server portion of a NAS device can be simplified with a focus solely on file service, and data integrity is provided by the hybrid RAID subsystem 16 .
  • the mirror area 20 acts as a temporary store for the log cache 26 , prior to storing the write data in its final location in the stripe 22 .
  • the log 26 can be written sequentially to an archival storage medium such as tape. Then, to return to a prior state of the data set, if a baseline backup of the entire RAID subsystem stripe 22 is created just before the log files are archived, each successive state of the RAID subsystem 16 can be recreated by re-executing the write requests within the archived log file system.
  • in this way, any earlier state of the stripe 22 of the RAID subsystem 16 can be recreated (i.e., infinite roll-back or rewind). This is beneficial e.g. in allowing recovery from user error such as accidentally erasing a file, in allowing recovery from a virus infection, etc.
  • to return the data set 24 to its state at a selected prior time, a copy of the data set 24 created at a back-up time prior to the selected time is obtained (step 190) and a copy of the cache log 26 associated with said data set copy is obtained (step 192).
  • said associated cache log 26 includes entries 26a (FIG. 4A), each including updated data, the block address in the data set where the data is to be stored, and a corresponding time stamp.
  • each data block in each entry of said associated cache log 26 is time-sequentially transferred to the corresponding block address in the data set copy, until a time stamp indicating said selected time is reached in an entry 26a of the associated cache log (step 194); a replay sketch along these lines appears at the end of this section.
  • the present invention further provides compressing the data in the log 26 stored in the mirror area 20 of the hybrid RAID system 16 for cost effectiveness. Compression is not employed in a conventional RAID subsystem because of variability in data redundancy. For example, suppose a given data block is to be read, modified and rewritten. If the read data consumes the entire data block and the modified data does not contain as much redundancy as did the original data, then the compressed modified data cannot fit in the data block on disk.
  • a read/modify/write operation is not a valid operation in the mirror area 20 in the present invention because the mirror area 20 contains a sequential log file of writes. While a given data block may be read from the mirror area 20 , after any modification, the writing of the data block would be appended to the existing log file stream 26 , not overwritten in place. Because of this, variability in compression is not an issue in the mirror area 20 . Modern compression techniques can e.g. halve the size of typical data, whereby use of compression in the mirror area 20 effectively e.g. doubles its size. This allows doubling the mirror area size or cutting the actual mirror area size in half, without reducing capacity relative to a mirror area without compression. The compression technique can similarly be performed for the RAM write cache 32 .
  • the data in the RAID subsystem 16 may be replicated to a system 16 a ( FIG. 7B ) at a remote location.
  • the remote system 16 a may not be called upon except in the event of an emergency in which the primary RAID subsystem 16 is shut down.
  • the remote system 16 a can provide further added value in the case of the present invention.
  • the primary RAID subsystem 16 sends data in the log file 26 in mirror area 20 to the remote subsystem 16 a wherein in this example the remote subsystem 16 a comprises a hybrid RAID subsystem according to the present invention. If the log file data is compressed the transmission time to the remote system 16 a can be reduced.
  • the remote subsystem 16 a can be the source of parity information for the primary subsystem 16 .
  • in the remote subsystem 16a, in the process of writing data from the mirror area to its final address on the stripe in the subsystem 16a, the associated parity data is generated.
  • the remote subsystem 16 a can then send the parity data (preferably compressed) to the primary subsystem 16 which can then avoid generating parity data itself, accelerating the transfer process for a given data block between the mirror and the stripe areas in the primary subsystem 16 .
  • the present invention goes beyond standard RAID by protecting data integrity, not just providing device reliability. Infinite roll-back provides protection during the window of vulnerability between backups. A hybrid mirror/stripe data organization results in improved performance. With the addition of the flashback module 34 , a conventional RAID mirror is outperformed at a cost which approaches that of a stripe. Further performance enhancement is attained with replication on an otherwise dormant hot spare and that hot spare can be used by a host-less appliance to generate a new baseline backup.
  • the present invention can be implemented in various data processing systems such as Enterprise systems, networks, SAN, NAS, medium and small systems (e.g., in a personal computer a write log is used, and data transferred to the user data set in background).
  • the “host” and “host system” refer to any source of information that is in communication with the hybrid RAID system for transferring data to, and from, the hybrid RAID subsystem.
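
The following sketch is an editor's illustration, not part of the patent text, of the data paths summarized above: a RAM write cache in front of a sequential log held in the mirror area, background destaging to the stripe area, the three-way read lookup, and rewind by replaying archived log entries onto a baseline copy up to a chosen time stamp. The class and function names (HybridRaid, host_write, rewind, etc.) and the dictionary-based block model are hypothetical, chosen only to make the steps concrete.

```python
# Hypothetical, simplified model of the hybrid RAID write/read path and the
# log-replay "rewind" described above. Blocks are plain Python values keyed by
# logical block address; dictionaries stand in for the RAM cache, mirror area,
# and stripe area.

import time
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class LogEntry:
    ts: float       # time stamp (ts in FIG. 4A)
    addr: int       # final block address in the data set
    udata: Any      # updated data block (udata in FIG. 4A)


@dataclass
class HybridRaid:
    ram_cache: Dict[int, Any] = field(default_factory=dict)   # RAM write cache 32
    mirror_log: List[LogEntry] = field(default_factory=list)  # log cache 26 in mirror area 20
    stripe: Dict[int, Any] = field(default_factory=dict)      # data set 24 in stripe area 22
    archive: List[LogEntry] = field(default_factory=list)     # log entries offloaded to tape

    # --- write path (steps 106/108 and 138-142) ---
    def host_write(self, addr: int, data: Any) -> None:
        """Step 138: land the block in the RAM cache and acknowledge the host."""
        self.ram_cache[addr] = data

    def flush_ram_to_log(self) -> None:
        """Step 140: append RAM-cache blocks sequentially to the mirrored log."""
        for addr, data in self.ram_cache.items():
            self.mirror_log.append(LogEntry(ts=time.time(), addr=addr, udata=data))
        self.ram_cache.clear()

    def destage_log_to_stripe(self) -> None:
        """Steps 108/142: background copy of log entries to their final stripe
        addresses, then offload the entries (e.g., to tape) to free log space."""
        for entry in self.mirror_log:
            self.stripe[entry.addr] = entry.udata
        self.archive.extend(self.mirror_log)
        self.mirror_log.clear()

    # --- read path (steps 160-169) ---
    def host_read(self, addr: int) -> Optional[Any]:
        if addr in self.ram_cache:                   # steps 162/164
            return self.ram_cache[addr]
        for entry in reversed(self.mirror_log):      # steps 166/168 (newest entry wins)
            if entry.addr == addr:
                return entry.udata
        return self.stripe.get(addr)                 # step 169


def rewind(baseline: Dict[int, Any], log: List[LogEntry],
           selected_time: float) -> Dict[int, Any]:
    """Steps 190-194: rebuild the data set as it stood at selected_time by
    replaying archived log entries, in time order, onto a baseline backup."""
    data_set = dict(baseline)
    for entry in sorted(log, key=lambda e: e.ts):
        if entry.ts > selected_time:
            break
        data_set[entry.addr] = entry.udata
    return data_set
```

In this toy model the background steps are invoked explicitly; in the subsystem described above they would run opportunistically when the drives are otherwise idle, and the log would live on the mirrored disk area rather than in memory.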

Abstract

A method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, by dividing the storage area on the storage units into a logical mirror area and a logical stripe area, such that when storing data in the mirror area, duplicating the data by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, storing data as stripes of blocks, including data blocks and associated error-correction blocks.
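
As a rough, hypothetical illustration of the capacity split implied by the abstract, the sketch below uses the 6-drive, 100 GB-per-drive example given later in the description (FIG. 3C). The 80/20 per-drive split between stripe and mirror areas is an assumption chosen so the totals match the 400 GB user capacity quoted in that example; it is not a figure stated in the patent.

```python
# Hypothetical back-of-the-envelope capacity math for the hybrid layout.
N_DRIVES = 6
DRIVE_GB = 100
STRIPE_FRACTION = 0.8          # assumed share of each drive given to the stripe area

stripe_total = N_DRIVES * DRIVE_GB * STRIPE_FRACTION        # 480 GB of striped capacity
parity_gb = stripe_total / N_DRIVES                         # one drive's worth: 80 GB of parity
user_gb = stripe_total - parity_gb                          # 400 GB of user data
mirror_total = N_DRIVES * DRIVE_GB * (1 - STRIPE_FRACTION)  # 120 GB of mirrored capacity
log_gb = mirror_total / 2                                   # 60 GB of write log, duplicated

print(f"user data: {user_gb:.0f} GB, parity: {parity_gb:.0f} GB, "
      f"write-log cache: {log_gb:.0f} GB (mirrored)")
# -> user data: 400 GB, parity: 80 GB, write-log cache: 60 GB (mirrored)
```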

Description

    FIELD OF THE INVENTION
  • The present invention relates to data protection in data storage devices, and in particular to data protection in disk arrays.
  • BACKGROUND OF THE INVENTION
  • Storage devices of various types are utilized for storing information such as in computer systems. Conventional computer systems include storage devices such as disk drives for storing information managed by an operating system file system. With decreasing costs of storage space, an increasing amount of data is stored on individual disk drives. However, in case of disk drive failure, important data can be lost. To alleviate this problem, some fault-tolerant storage devices utilize an array of redundant disk drives (RAID).
  • In typical data storage systems including storage devices such as primary disk drives, the data stored on the primary storage devices is backed-up to secondary storage devices such as tape, from time to time. However, any change to the data on the primary storage devices before the next back-up, can be lost if one or more of the primary storage devices fail.
  • True data protection can be achieved by keeping a log of all writes to a storage device, on a data block level. In one example, a user data set and a write log are maintained, wherein the data set has been completely backed up and thereafter a log of all writes is maintained. The backed-up data set and the write log allows returning to the state of the data set before the current state of the data set, by restoring the backed-up (baseline) data set and then executing all writes from that log up until that time.
  • To protect the log file itself, RAID configured disk arrays provide protection against data loss by protecting against a single disk drive failure. Protecting the log file stream using RAID has been achieved by either a RAID mirror (known as RAID-1) shown by example in FIG. 1, or a RAID stripe (known as RAID-5) shown by example in FIG. 2. In the RAID mirror 10 including several disk drives 12, two disk drives store the data of one independent disk drive. In the RAID stripe 14, n+1 disk drives 12 are required to store the data of n independent disk drives (e.g., in FIG. 2, a stripe of five disk drives stores the data of four independent disk drives). The example RAID mirror 10 in FIG. 1 includes an array of eight disk drives 12 (e.g., drive0-drive7), wherein each disk drive 12 has e.g. 100 GB capacity. In each disk drive 12, half the capacity is used for user data, and the other half for mirror data. As such, user data capacity of the disk array 10 is 400 GB and the other 400 GB is used for mirror data. In this example mirror configuration, drive1 protects drive0 data (M0), drive2 protects drive1 data (M1), etc. If drive0 fails, then the data M0 in drive1 can be used to recreate data M0 in drive0, and the data M7 in drive7 can be used to create data M7 of drive0. As such, no data is lost in case of a single disk drive failure.
  • Referring back to FIG. 2, a RAID stripe configuration effectively groups capacity from all but one of the disk drives in the disk array 14 and writes the parity (XOR) of that capacity on the remaining disk drive (or across multiple drives as shown). In the example FIG. 2, the disk array 14 includes five disk drives 12 (e.g., drive0-drive4) each disk drive 12 having e.g. 100 GB capacity, divided into 5 sections. The blocks S0-S3 in the top portions of drive0-drive3 are for user data, and a block of drive4 is for parity data (i.e., XOR of S0-S3). In this example, the RAID stripe capacity is 400 GB for user data and 100 GB for parity data. The parity area is distributed among the disk drives 12 as shown. Spreading the parity data across the disk drives 12 allows spreading the task of reading the parity data over several disk drives as opposed to just one disk drive. Writing on a disk drive in a stripe configuration requires that the disk drive holding parity be read, a new parity calculated and the new parity written over the old parity. This requires a disk revolution and increases the write latency. The increased write latency decreases the throughput of the storage device 14.
  • On the other hand, the RAID mirror configuration (“mirror”) allows writing the log file stream to disk faster than the RAID stripe configuration (“stripe”). A mirror is faster than a stripe since in the mirror, each write activity is independent of other write activities, in that the same block can be written to the mirroring disk drives at the same time. However, a mirror configuration requires that the capacity to be protected be matched on another disk drive. This is costly as the capacity to be protected must be duplicated, requiring double the number of disk drives. A stripe reduces such capacity to 1/n where n is the number of disk drives in the disk drive array. As such, protecting data with parity across multiple disk drives makes a stripe slower than a mirror, but more cost effective.
  • There is, therefore, a need for a method and system of providing cost effective data protection with improved data read/write performance than a conventional RAID system. There is also a need for such a system to provide the capability of returning to a desired previous data state.
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention satisfies these needs. In one embodiment, the present invention provides a method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, by dividing the storage area on the storage units into a hybrid of a logical mirror area (i.e., RAID mirror) and a logical stripe area (i.e., RAID stripe). When storing data in the mirror area, the data is duplicated by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, the data is stored as stripes of blocks, including data blocks and associated error-correction blocks.
  • In one version of the present invention, a log file stream is maintained as a log cache in the RAID mirror area for writing data from a host to the storage subsystem, and then data is transferred from the log file in the RAID mirror area to the final address in the RAID stripe area, preferably as a background task. In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host.
  • To further enhance performance, according to the present invention, a memory cache (RAM cache) is added in front of the log cache, wherein incoming host blocks are first written to RAM cache quickly and the host is acknowledged. The host perceives a faster write cycle than is possible if the data were written to a data storage unit while the host waited for an acknowledgement. This further enhances the performance of the above hybrid RAID subsystem.
  • While the data is en-route to a data storage unit through the RAM cache, power failure can result in data loss. As such, according to another aspect of the present invention, a flashback module (backup module) is added to the subsystem to protect the RAM cache data. The flashback module includes a non-volatile memory, such as flash memory, and a battery. During normal operations, the battery is trickle charged. Should any power failure then occur, the battery provides power to transfer the contents of the RAM cache to the flash memory. Upon restoration of power, the flash memory contents are transferred back to the RAM cache, and normal operations resume.
  • Read performance is further enhanced by pressing a data storage unit (e.g., disk drive) normally used as a spare data storage unit (“hot spare”) in the array, into temporary service in the hybrid RAID system. In a conventional RAID subsystem, any hot spare lies dormant but ready to take over if one of the data storage units in the array should fail. According to the present invention, rather than lying dormant, the hot spare can be used to replicate the data in the mirrored area of the hybrid RAID subsystem. Should any data storage unit in the array fail, this hot spare could immediately be delivered to take the place of that failed data storage unit without increasing exposure to data loss from a single data storage unit failure. However, while all the data storage units of the array are working properly, the replication of the mirror area would make the array more responsive to read requests by allowing the hot spare to supplement the mirror area.
  • The mirror area acts as a temporary store for the log, prior to storing the write data in its final location in the stripe area. In another version of the present invention, prior to purging the data from the mirror area, the log can be written sequentially to an archival storage medium such as tape. If a baseline backup of the entire RAID subsystem stripe is created just before the log files are archived, each successive state of the RAID subsystem can be recreated by re-executing the write requests within the archived log files. This would allow any earlier state of the stripe of the RAID subsystem to be recreated (i.e., infinite roll-back or rewind). This is beneficial in allowing recovery from e.g. user error such as accidentally erasing a file, from a virus infection, etc.
  • As such, the present invention provides a method and system of providing cost effective data protection with improved data read/write performance than a conventional RAID system, and also provides the capability of returning to a desired previous data state.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other features, aspects and advantages of the present invention will become understood with reference to the following description, appended claims and accompanying figures where:
  • FIG. 1 shows a block diagram of an example disk array configured as a RAID mirror;
  • FIG. 2 shows a block diagram of an example disk array configured as a RAID stripe;
  • FIG. 3A shows a block diagram of an example hybrid RAID data organization in a disk array according to an embodiment of the present invention;
  • FIG. 3B shows an example flowchart of an embodiment of the steps of data storage according to the present invention;
  • FIG. 3C shows a block diagram of an example RAID subsystem logically configured as hybrid RAID stripe and mirror, according to the hybrid RAID data organization FIG. 3A;
  • FIG. 4A shows an example data set and a log of updates to the data set after a back-up;
  • FIG. 4B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 4C shows an example flowchart of another embodiment of the steps of data storage according to the present invention
  • FIG. 5A shows another block diagram of the disk array of FIGS. 3A and 3B, further including a flashback module according to the present invention;
  • FIG. 5B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 5C shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 6A shows a block diagram of another example hybrid RAID data organization in a disk array including a hot spare used as a temporary RAID mirror according to the present invention;
  • FIG. 6B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 6C shows a block diagram of an example RAID subsystem logically configured as the hybrid RAID data organization of FIG. 6A that further includes a hot spare used as a temporary RAID mirror;
  • FIG. 7A shows a block diagram of another disk array including a hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention;
  • FIG. 7B shows a block diagram of another disk array including hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention;
  • FIG. 8A shows an example of utilizing a hybrid RAID subsystem in a storage area network (SAN), according to the present invention;
  • FIG. 8B shows an example of utilizing a hybrid RAID as a network attached storage (NAS), according to the present invention; and
  • FIG. 8C shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Referring to FIG. 3A, an example fault-tolerant storage subsystem 16 having an array of failure independent data storage units 18, such as disk drives, using a hybrid RAID data organization according to an embodiment of the present invention is shown. The data storage units 18 can be other storage devices, such as e.g. optical storage devices, DVD-RAM, etc. As discussed, protecting data with parity across multiple disk drives makes a RAID stripe slow but cost effective. A RAID mirror provides better data transfer performance because the target sector is simultaneously written on two disk drives, but requires that the capacity to be protected be matched on another disk drive. A RAID stripe, by contrast, reduces such capacity to 1/n, where n is the number of drives in the disk array; but in a RAID stripe, both the target and the parity sector must be read and then written, causing write latency (a parity sketch illustrating this appears at the end of this section).
  • In the example of FIG. 3A, an array 17 of six disk drives 18 (e.g., drive0-drive5) is utilized for storing data from, and reading data back to, a host system, and is configured to include both a RAID mirror data organization and a RAID stripe data organization according to the present invention. In the disk array 17, the RAID mirror (“mirror”) configuration provides performance advantage when transferring data to disk drives 18 using e.g. a log file stream approach, and the RAID stripe (“stripe”) configuration provides cost effectiveness by using the stripe organization for general purpose storage of user data sets.
  • Referring to the example steps in the flowchart of FIG. 3B, according to an embodiment of the present invention, this is achieved by dividing the capacity of the disk array 17 of FIG. 3A into at least two areas (segments), including a mirror area 20 and a stripe area 22 (step 100). A data set 24 is maintained in the stripe area 22 (step 102), and an associated log file/stream 26 is maintained in the mirror area 20 (step 104). The log file 26 is maintained as a write log cache in the mirror area 20, such that upon receiving a write request from a host, the host data is written to the log file 26 (step 106), and then data is transferred from the log file 26 in the mirror area 20 to a final address in the data set 24 in the stripe area 22 (preferably, performed as a background task) (step 108). In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host. Preferably, the log is backed-up to tape continually or on a regular basis (step 110). The above steps are repeated as write requests arrive from the host. The disk array 17 can include additional hybrid RAID mirror and RAID stripe configured areas according to the present invention.
• Referring to FIG. 3C, the example hybrid RAID subsystem 16 according to the present invention further includes a data organization manager 28 having a RAID controller 30 that implements the hybrid data organization of FIG. 3A on the disk array 17 (e.g., an array of N disk drives 18). In the example of FIG. 3C, an array 17 of N=6 disk drives (drive0-drive5, e.g. 100 GB each) is configured such that portions of the capacity of the disk drives 18 are used as a RAID mirror for the write log cache 26 and the write log cache mirror data 27 (i.e., M0-M5), and the remaining portions of the capacity of the disk drives 18 are used as a RAID stripe for user data (e.g., S0-S29) and parity data (e.g., XOR0-XOR29). In this example, 400 GB of user data is stored in the hybrid RAID subsystem 16, compared to the same capacity in the RAID mirror 10 of FIG. 1 and the RAID stripe 14 of FIG. 2. The subsystem 16 communicates with a host 29 via a host interface 31. Other numbers of disk drives, and disk drives with different storage capacities, can also be used in the RAID subsystem 16 of FIG. 3C, according to the present invention.
• FIG. 4A shows an example user data set 24 and a write log 26, wherein the data set 24 has been completely backed up at e.g. midnight and thereafter a log 26 of all writes has been maintained (e.g., at times t1-t6). In this example, each write log entry 26a includes the updated data (udata), the address (addr) in the data set where the updated data is to be stored, and a corresponding time stamp (ts). The data set at each of times t1-t6 is also shown in FIG. 4A. The backed-up data set 24 and the write log 26 allow returning to the state of the data set 24 at any time before the current state of the data set (e.g., at time t6), by restoring the backed-up (baseline) data set 24 and then executing all writes from the log 26 up until that time. For example, if data for address addr=0 (e.g., logical block address 0) were updated at time t2, but then corrupted at time t5, the data for addr=0 as of time t2 can be retrieved by restoring the baseline backup and running the write log through time t2. The log file 26 is first written in the RAID mirror area 20 and then the data is transferred from the log file 26 in the RAID mirror area 20 to its final address in the RAID stripe area 22 (preferably as a background task), according to the present invention.
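The log entries of FIG. 4A can be pictured as (ts, addr, udata) records, and recovering a block's earlier contents amounts to replaying the log over the baseline up to the chosen time stamp. The sketch below is a hypothetical Python illustration, not the patent's implementation; integer time stamps stand in for t1-t6.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    ts: int        # time stamp of the write (e.g., t1..t6)
    addr: int      # block address in the data set
    udata: bytes   # updated data

def block_at_time(baseline, log, addr, t):
    """Return the contents of `addr` as of time `t`, given the baseline
    backup and the time-ordered write log."""
    value = baseline.get(addr)
    for entry in log:
        if entry.ts > t:
            break
        if entry.addr == addr:
            value = entry.udata
    return value

baseline = {0: b"baseline"}
log = [LogEntry(2, 0, b"t2 data"), LogEntry(5, 0, b"corrupted")]
assert block_at_time(baseline, log, addr=0, t=2) == b"t2 data"
```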
• As the write log 26 may grow large, it is preferably offloaded to secondary storage devices such as tape drives, to free up disk space to log more changes to the data set 24. As such, the disk array 17 (FIG. 3C) is used as a write log cache in a three step process: (1) when the host needs to write data to a disk, rather than writing to the final destination on a disk drive, that data is first written to the log 26, satisfying the host; (2) then, when the disk drive is not busy, the data from the log 26 is transferred to its final destination in the data set on the disk drive, transparently to the host; and (3) the log data is backed up to e.g. tape to free up storage space to log new data from the host. The log and the final destination data are maintained in a hybrid RAID configuration as described.
• Referring to the example steps in the flowchart of FIG. 4B, upon receiving a host read request (step 120), a determination is made whether the requested data is in the write log 26, maintained as a cache in the mirror area 20 (i.e., a cache hit) (step 122), and if so, the requested data is transferred to the host 29 from the log 26 (step 124). Statistically, since recently written data is more likely to be read back than previously written data, the larger the log area, the higher the probability that the requested data is in the log 26 (in the mirror area 20). When reading multiple blocks from the mirror area 20, different blocks can be read from different disk drives simultaneously, increasing read performance. In step 122, if there is no log cache hit, then the stripe area 22 is accessed to retrieve the requested data and provide it to the host (step 126). Stripe read performance is inferior to that of a mirror, but not as dramatically as stripe write performance.
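The read decision of FIG. 4B reduces to a lookup: serve the block from the log cache in the mirror area on a hit, otherwise read the stripe. A minimal sketch, assuming the log is indexed by block address (the names are illustrative):

```python
def read_block(addr, log_index, stripe):
    """Steps 122-126: prefer the write log cache in the mirror area;
    on a miss, read the data set in the stripe area."""
    if addr in log_index:          # log cache hit (mirror area)
        return log_index[addr]
    return stripe[addr]            # miss: access the stripe area

log_index = {0: b"recently written block 0"}
stripe = {0: b"older copy", 1: b"block one"}
assert read_block(0, log_index, stripe) == b"recently written block 0"
assert read_block(1, log_index, stripe) == b"block one"
```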
• As such, the stripe area 22 is used for flushing the write log data, thereby permanently storing the data set in the stripe area 22, and is also used to read data blocks that are not in the write log cache 26 in the mirror area 20. The hybrid RAID system 16 is an improvement over a conventional RAID stripe without a RAID mirror, since according to the present invention the most recently written data is likely in the log 26 stored in the mirror area 20, which provides a faster read than a stripe. The hybrid RAID system provides the equivalent of RAID mirror performance for all writes and for most reads, since the most recently written data is the most likely to be read back. As such, the RAID stripe 22 is only accessed to retrieve data not found in the log cache 26 stored in the RAID mirror 20, whereby the hybrid RAID system 16 essentially provides the performance of a RAID mirror at the cost effectiveness of a RAID stripe.
• Therefore, if the stripe 22 is written to as a foreground process (e.g., in real-time), there is a write performance penalty (i.e., the host is waiting for an acknowledgement that the write is complete). The log cache 26 permits avoidance of such real-time writes to the stripe 22. Because the disk array 17 is divided into two logical data areas (i.e., a mirrored log write area 20 and a striped read area 22), using a mirror configuration for log writes avoids the write performance penalty of a stripe. Provided the mirror area 20 is sufficiently large to hold all log writes that occur during periods of peak activity, updates to the stripe area 22 can be performed in the background. The mirror area 20 is essentially a write cache, and writing the log 26 to the mirror area 20 with background writes to the stripe area 22 allows the hybrid subsystem 16 to match mirror performance at stripe-like cost.
• Referring to the example steps in the flowchart of FIG. 4C, to further enhance performance, according to the present invention, a cache memory (e.g., RAM write cache 32, FIG. 5A) is added in front of the log cache 26 in the disk array 17 (step 130), and as above the data set 24 and the log file 26 are maintained in the stripe area 22 and the mirror area 20, respectively (steps 132, 134). Upon receiving a host write request (step 136), the incoming host blocks are first written quickly to the RAM write cache 32 and the host is acknowledged (step 138). The host perceives a faster write cycle than is possible if the data were written to disk while the host waited for an acknowledgement. This enhances the performance of a conventional RAID system and further enhances the performance of the above hybrid RAID subsystem 16. The host data in the RAM write cache 32 is copied sequentially to the log 26 in the mirror area 20 (i.e., the disk mirror write cache) (step 140), and the log data is later copied to the data set 24 in the stripe area 22 (i.e., the disk stripe data set), e.g. as a background process (step 142). Writing sequentially to the disk mirror write cache 26, rather than randomly to the disk stripe data set 24, provides fast sequential writes.
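A short sketch of the two-stage write path of FIG. 4C, with the RAM write cache in front of the disk log; the names are hypothetical, and the stages are shown as explicit method calls rather than real background tasks.

```python
class RamFrontedArray:
    """Illustrative model: RAM write cache 32 -> log 26 (mirror area) -> data set 24 (stripe area)."""

    def __init__(self):
        self.ram_cache = []      # RAM write cache 32 (volatile)
        self.disk_log = []       # log 26 in the mirror area (written sequentially)
        self.stripe = {}         # data set 24 in the stripe area

    def host_write(self, addr, data):
        # Step 138: land the write in RAM and acknowledge the host immediately.
        self.ram_cache.append((addr, data))
        return "ack"

    def drain_ram_to_log(self):
        # Step 140: copy RAM cache contents sequentially to the disk mirror write cache.
        self.disk_log.extend(self.ram_cache)
        self.ram_cache.clear()

    def drain_log_to_stripe(self):
        # Step 142: background copy from the log to final addresses in the stripe.
        for addr, data in self.disk_log:
            self.stripe[addr] = data
        self.disk_log.clear()
```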
• However, a power failure while the data is en route to disk (e.g., to the write log cache on disk) through the RAM write cache 32 can result in data loss because RAM is volatile. Therefore, as shown in the example block diagram of another embodiment of a hybrid RAID subsystem 16 in FIG. 5A, a flashback module 34 (backup module) can be added to the disk array 17 to protect the RAM cache data according to the present invention. Without the module 34, write data would not be secure until stored at its destination address on disk.
• The module 34 includes a non-volatile memory 36 such as Flash memory, and a battery 38. Referring to the example steps in the flowchart of FIG. 5B, during normal operations the battery 38 is trickle charged from an external power source 40 (step 150). Should a power failure then occur, the battery 38 provides the RAID controller 30 with sufficient power (step 152) to transfer the contents of the RAM write cache 32 to the flash memory 36 (step 154). Upon restoration of power, the contents of the flash memory 36 are transferred back to the RAM write cache 32, and normal operations resume (step 156). This allows acknowledging the host write request (command) once the data is written in the RAM cache 32 (which is faster than writing it to the mirror disks). Should a failure of an element of the RAID subsystem 16 preclude resumption of normal operations, the flashback module 34 can be moved to another hybrid subsystem 16 to restore the data from the flash memory 36. With the flashback module 34 protecting the RAM write cache 32 against power loss, writes can be accumulated in the RAM cache 32 and written to the mirrored disk log file 26 sequentially (e.g., in the background).
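The flashback sequence of FIG. 5B (steps 152-156) can be sketched as a save/restore pair; this is a hypothetical Python illustration in which plain lists stand in for the RAM write cache 32 and the flash memory 36.

```python
def on_power_failure(ram_cache, flash):
    # Steps 152-154: the battery keeps the controller alive just long enough
    # to copy the volatile RAM write cache into non-volatile flash.
    flash.clear()
    flash.extend(ram_cache)

def on_power_restored(ram_cache, flash):
    # Step 156: copy the preserved contents back and resume normal operation.
    ram_cache.clear()
    ram_cache.extend(flash)
    flash.clear()

ram_cache = [(0, b"unflushed write")]
flash = []
on_power_failure(ram_cache, flash)
ram_cache.clear()                       # the power loss wipes RAM
on_power_restored(ram_cache, flash)
assert ram_cache == [(0, b"unflushed write")]
```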
• To minimize the size (and the cost) of the RAM write cache 32 (and thus the corresponding size and cost of the flash memory 36 in the flashback module 34), write data should be transferred to disk as quickly as possible. Since the sequential throughput of a hard disk drive is substantially better than its random performance, the fastest way to transfer data from the RAM write cache 32 to disk is via the log file 26 (i.e., a sequence of address/data pairs as described above) in the mirror area 20. When writing a data block to the mirror area 20, the data block is written to two different disk drives, and depending on the physical disk addresses of the incoming blocks from the host, the disk drives of the mirror 20 might otherwise be accessed randomly. However, since a log file is written sequentially, based on entries ordered in time, the blocks are written to the log file in a sequential manner, regardless of their actual physical location in the data set 24 on the disk drives.
• In the above hybrid RAID system architecture according to the present invention, data requested by the host 29 from the RAID subsystem 16 can be in the RAM write cache 32, in the log cache 26 in the mirror area 20, or in the general purpose stripe area 22. Referring to the example steps in the flowchart of FIG. 5C, upon receiving a host read request (step 160), a determination is made whether the requested data is in the RAM cache 32 (step 162), and if so, the requested data is transferred to the host 29 from the RAM cache 32 (step 164). If the requested data is not in the RAM cache 32, then a determination is made whether the requested data is in the write log file 26 in the mirror area 20 (step 166), and if so, the requested data is transferred to the host from the log 26 (step 168). If the requested data is not in the log 26, then the data set 24 in the stripe area 22 is accessed to retrieve the requested data and provide it to the host (step 169).
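The three-tier lookup of FIG. 5C, as a minimal sketch; dictionaries stand in for the RAM cache, the log in the mirror area, and the stripe, and the function name is illustrative.

```python
def serve_read(addr, ram_cache, log_index, stripe):
    """Steps 162-169: check the RAM write cache 32, then the log 26 in the
    mirror area 20, then the data set 24 in the stripe area 22."""
    if addr in ram_cache:       # steps 162/164
        return ram_cache[addr]
    if addr in log_index:       # steps 166/168
        return log_index[addr]
    return stripe[addr]         # step 169
```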
• Since data in the mirror area 20 is replicated, twice the number of actuators are available to service read requests, effectively doubling responsiveness. While this mirror benefit is generally recognized, the benefit may be enhanced here because the mirror does not contain random data but rather data that has recently been written. As discussed, because the likelihood that data will be read is probably inversely proportional to the time since the data was written, the mirror area 20 may be more likely to contain the desired data. A further acceleration can be realized if the data is read back in the same order it was written, regardless of the potential randomness of the final data addresses, since the mirror area 20 stores data in the written order and a read in that order creates a sequential stream.
• According to another aspect of the present invention, the read performance of the subsystem 16 can be further enhanced. In a conventional RAID system, one of the disk drives in the array can be reserved as a spare disk drive (“hot spare”), wherein if one of the other disk drives in the array should fail, the hot spare is used to take the place of that failed drive. According to the present invention, read performance can be further enhanced by pressing a disk drive normally used as a hot spare in the disk array 17 into temporary service in the hybrid RAID subsystem 16. FIG. 6A shows the hybrid RAID subsystem 16 of FIG. 3A, further including a hot spare disk drive 18a (i.e., drive6) according to the present invention.
• Referring to the example steps in the flowchart of FIG. 6B, according to the present invention, the status of the hot spare 18a is determined (step 170) and, upon detecting that the hot spare 18a is lying dormant (i.e., not being used as a failed device replacement) (step 172), the hot spare 18a is used to replicate the data in the mirrored area 20 of the hybrid RAID subsystem 16 (step 174). Then, upon receiving a read request from the host (step 176), it is determined whether the requested data is in the hot spare 18a and the mirror area 20 (step 178). If so, a copy of the requested data is provided to the host from the hot spare 18a with minimum latency, or from the mirror area 20 if faster (step 180). Otherwise, a copy of the requested data is provided to the host from the mirror area 20 or the stripe area 22 (step 182). Thereafter, it is determined whether the hot spare 18a is required to replace a failed disk drive (step 184). If not, the process goes back to step 176; otherwise, the hot spare 18a is used to replace the failed disk drive (step 186).
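One way to model the copy selection of FIG. 6B is to pick, among the replicas available in the mirror area and on the dormant hot spare, whichever has the lowest estimated latency. This is a hypothetical sketch; latency_of is an assumed estimator supplied by the caller.

```python
def serve_read_with_spare(addr, hot_spare, mirror, stripe, latency_of):
    """Steps 176-182: serve the lowest-latency copy when the block is
    replicated on the hot spare and in the mirror area; otherwise fall
    back to the mirror area or the stripe area."""
    if addr in hot_spare and addr in mirror:
        source = min(("hot_spare", "mirror"), key=latency_of)   # steps 178/180
        return hot_spare[addr] if source == "hot_spare" else mirror[addr]
    if addr in mirror:
        return mirror[addr]                                     # step 182
    return stripe[addr]                                         # step 182
```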
• As such, in FIG. 6A, should any disk drive 18 in the array 17 fail, the hot spare 18a can immediately be delivered to take the place of that failed disk drive without increasing exposure to data loss from a single disk drive failure. For example, if drive1 fails, drive0 and drive2-drive5 can start using the spare drive6 and rebuild drive6 to contain the data of drive1 prior to the failure. However, while all the disk drives 18 of the array 17 are working properly, the replication of the mirror area 20 would make the subsystem 16 more responsive to read requests by allowing the hot spare 18a to supplement the mirror area 20.
• Depending upon the size of the mirrored area 20, the hot spare 18a may be able to provide multiple redundant data copies for a further performance boost. For example, if the hot spare 18a matches the capacity of the mirrored area 20 of the array 17, the mirrored area data can be replicated twice on the hot spare 18a. For example, the data in the hot spare 18a can be arranged such that it is replicated on each concentric disk track (i.e., one half of a track contains a copy of that which is on the other half of that track). In that case, the rotational latency of the hot spare 18a in response to random requests is effectively halved (i.e., smaller read latency).
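To make the latency claim concrete, assume a 7,200 r.p.m. spindle purely for illustration (the patent does not specify a speed): a revolution takes about 8.33 ms, average rotational latency with one copy per track is half a revolution, and with two copies per track it drops to roughly a quarter revolution.

```python
RPM = 7200                         # assumed spindle speed, for illustration only
rotation_ms = 60_000 / RPM         # ~8.33 ms per revolution

avg_latency_one_copy = rotation_ms / 2    # half a rotation on average
avg_latency_two_copies = rotation_ms / 4  # nearest of two copies on the same track

print(f"{avg_latency_one_copy:.2f} ms vs {avg_latency_two_copies:.2f} ms")
# 4.17 ms vs 2.08 ms -- the replicated hot spare behaves like a faster-spinning drive
```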
• As such, the hot spare 18a is used to make the mirror area 20 of the hybrid RAID subsystem 16 faster. FIG. 6C shows an example block diagram of a hybrid RAID subsystem 16 including a RAID controller 30 that implements the hybrid RAID data organization of FIG. 6A for seven disk drives (drive0-drive6), wherein drive6 is the hot spare 18a. Considering drive0-drive1 in FIG. 6C, for example, M0 data is in drive0 and is duplicated in drive1, whereby drive1 protects drive0. In addition, M0 data is written to the spare drive6 using replication, such that if requested M0 data is in the write log 26 in the mirror area 20, it can be read back from drive0, drive1, or the spare drive6. Since M0 data is replicated twice in drive6, drive6 appears to have a higher r.p.m. because, as described, replication lowers read latency. The spare drive6 can be configured to store all the mirrored blocks in a replicated fashion, similar to that for the M0 data, to improve the read performance of the hybrid subsystem 16.
• Because a hot spare disk drive should match the capacity of the other disk drives in the disk array (the primary array), and since in this example the mirror area data (M0-M5) occupies half the capacity of a disk drive 18, the hot spare 18a can replicate the mirror area 20 twice. If the hot spare 18a includes a replication of the mirror area, the hot spare 18a can be removed from the subsystem 16 and backed up. The backup can be performed off-line, without using network bandwidth. A new baseline could be created from the hot spare 18a.
• If, for example, a full backup of the disk array has previously been made to tape, and the hot spare 18a contains all writes since that backup, then the backup can be restored from tape to a secondary disk array and all writes from the log file 26 then written to the stripe 22 of the secondary disk array. To speed this process, only the most recent update to a given block need be written. The writes need not take place in temporal order but can be scheduled to minimize the time spent reading the hot spare and/or writing to the secondary array. The stripe of the secondary array is then in the same state as that of the primary array as of the time the hot spare was removed from the primary array. Backing up the secondary array to tape at this point creates a new baseline, which can then be updated with newer hot spares over time to create newer baselines, facilitating fast emergency restores. Such new baseline creation can be done without a host, but rather with an appliance including a disk array and a tape drive. If the new baseline tape backup fails, the process can revert to the previous baseline and a tape backup of the hot spare.
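Creating the new baseline can be viewed as a last-writer-wins reduction over the hot spare's log before applying it to the restored copy. The following is a hypothetical sketch; tape_baseline and hot_spare_log are assumed inputs keyed and ordered as in FIG. 4A.

```python
def build_new_baseline(tape_baseline, hot_spare_log):
    """Restore the old baseline to a secondary array, then apply only the
    newest logged update per block; application order is free, so writes
    can be scheduled to minimize hot-spare reads and secondary-array seeks."""
    newest = {}
    for entry in hot_spare_log:                      # entries carry ts/addr/udata
        prior = newest.get(entry["addr"])
        if prior is None or entry["ts"] > prior["ts"]:
            newest[entry["addr"]] = entry

    secondary = dict(tape_baseline)                  # stripe of the secondary array
    for addr, entry in newest.items():
        secondary[addr] = entry["udata"]
    return secondary                                 # back this up to tape as the new baseline
```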
• FIG. 7A shows a block diagram of an embodiment of a hybrid RAID subsystem 16 implementing said hybrid RAID data organization, and further including a hot spare 18a as a redundant mirror and a flashback module 34, according to the present invention. Writing to the log 26 in the mirror area 20 and the flashback module 34 removes the write performance penalty normally associated with replication on a mirror. Replication on a mirror adds a quarter rotation to all writes: when the target track is acquired, the average latency to one of the replicated sectors is one quarter rotation, but half a rotation is needed to write the other sector. Since the average latency on a standard mirror is half a rotation, an additional quarter rotation is required for writes. With the flashback module 34, acknowledgment of write non-volatility to the host can occur upon receipt of the write in the RAM write cache 32 in the RAID controller 30. Writes from the RAM write cache 32 to the disk log file write cache 26 occur in the background during periods of non-peak activity. By writing sequentially to the log file 26, the likelihood of such non-peak activity is greatly increased. FIG. 7B shows a block diagram of another embodiment of the hybrid RAID subsystem 16 of FIG. 7A, wherein the flashback module 34 is part of the data organization manager 28 that includes the RAID controller 30.
• Another embodiment of a hybrid RAID subsystem 16 according to the present invention provides data block service and can be used as any block device (e.g., single disk drive, RAID, etc.). Such a hybrid RAID subsystem can be used in any system wherein a device operating at the data block level can be used. FIG. 8A shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention in an example block-level environment such as a storage area network (SAN) 42. In a SAN, connected devices exchange data blocks.
• FIG. 8B shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention as network attached storage (NAS) in a network 44. In NAS, connected devices exchange files; as such, a file server 46 is positioned in front of the hybrid RAID subsystem 16. The file server portion of a NAS device can be simplified with a focus solely on file service, while data integrity is provided by the hybrid RAID subsystem 16.
• The present invention provides further example enhancements to the hybrid RAID subsystem, described herein below. As mentioned, the mirror area 20 (FIG. 3A) acts as a temporary store for the log cache 26, prior to storing the write data in its final location in the stripe 22. Before purging the data from the temporary mirror 20, the log 26 can be written sequentially to an archival storage medium such as tape. Then, to return to a prior state of the data set, if a baseline backup of the entire RAID subsystem stripe 22 is created just before the log files are archived, each successive state of the RAID subsystem 16 can be recreated by re-executing the write requests within the archived log files. This allows any earlier state of the stripe 22 of the RAID subsystem 16 to be recreated (i.e., infinite roll-back or rewind). This is beneficial, e.g., in allowing recovery from a user error such as accidentally erasing a file, in allowing recovery from a virus infection, etc. Referring to the example steps in the flowchart of FIG. 8C, to recreate a state of the data set 24 in the stripe 22 at a selected time, a copy of the data set 24 created at a back-up time prior to the selected time is obtained (step 190), and a copy of the cache log 26 associated with said data set copy is obtained (step 192). Said associated cache log 26 includes entries 26a (FIG. 4A) created time-sequentially immediately subsequent to said back-up time. Each data block in each entry of said associated cache log 26 is time-sequentially transferred to the corresponding block address in the data set copy, until a time stamp indicating said selected time is reached in an entry 26a of the associated cache log (step 194).
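Steps 190-194 amount to replaying the archived log over the baseline copy until the selected time stamp is reached; a minimal sketch, assuming the log entries are already in time order (names are illustrative):

```python
def rewind(data_set_copy, cache_log, selected_time):
    """Steps 190-194: transfer each logged block to its address in the
    baseline copy, stopping once an entry's time stamp passes the
    selected time."""
    for entry in cache_log:                 # time-sequential entries 26a
        if entry["ts"] > selected_time:
            break
        data_set_copy[entry["addr"]] = entry["udata"]
    return data_set_copy
```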
• The present invention further provides for compressing the data in the log 26 stored in the mirror area 20 of the hybrid RAID system 16 for cost effectiveness. Compression is not employed in a conventional RAID subsystem because of variability in data redundancy. For example, suppose a given data block is to be read, modified and rewritten. If the compressed original data consumes the entire data block and the modified data does not contain as much redundancy as the original data did, then the compressed modified data cannot fit in the data block on disk.
• However, a read/modify/write operation is not a valid operation in the mirror area 20 in the present invention, because the mirror area 20 contains a sequential log file of writes. While a given data block may be read from the mirror area 20, after any modification the writing of the data block would be appended to the existing log file stream 26, not overwritten in place. Because of this, variability in compression is not an issue in the mirror area 20. Modern compression techniques can, e.g., halve the size of typical data, whereby the use of compression in the mirror area 20 effectively doubles its capacity. This allows doubling the effective mirror area size, or cutting the physical mirror area size in half, without reducing capacity relative to a mirror area without compression. The compression technique can similarly be applied to the RAM write cache 32.
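Because the log is append-only, each entry's payload can be compressed independently, and no in-place rewrite ever needs to fit a larger result back into a fixed block. A sketch using Python's standard zlib module (the helper names are illustrative):

```python
import zlib

def append_compressed(log, addr, udata, ts):
    """Append a compressed entry to the log stream; since entries are never
    overwritten in place, variable compression ratios are harmless."""
    log.append({"ts": ts, "addr": addr, "udata": zlib.compress(udata)})

def entry_data(entry):
    return zlib.decompress(entry["udata"])

log = []
append_compressed(log, addr=0, udata=b"A" * 4096, ts=1)   # a highly redundant block
print(len(log[0]["udata"]))        # far smaller than 4096 for redundant data
assert entry_data(log[0]) == b"A" * 4096
```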
• For additional data protection, in another version of the present invention, the data in the RAID subsystem 16 may be replicated to a system 16a (FIG. 7B) at a remote location. The remote system 16a may not be called upon except in the event of an emergency in which the primary RAID subsystem 16 is shut down. However, the remote system 16a can provide further added value in the case of the present invention. In particular, the primary RAID subsystem 16 sends the data in the log file 26 in the mirror area 20 to the remote subsystem 16a, wherein in this example the remote subsystem 16a comprises a hybrid RAID subsystem according to the present invention. If the log file data is compressed, the transmission time to the remote system 16a can be reduced. Since the load on the remote subsystem 16a is less than that on the primary subsystem 16 (i.e., the primary subsystem 16 responds to both read and write requests whereas the remote subsystem 16a need only respond to writes), the remote subsystem 16a can be the source of parity information for the primary subsystem 16. As such, within the remote subsystem 16a, in the process of writing data from the mirror area to its final address on the stripe in the subsystem 16a, the associated parity data is generated. The remote subsystem 16a can then send the parity data (preferably compressed) to the primary subsystem 16, which can then avoid generating the parity data itself, accelerating the transfer process for a given data block between the mirror and the stripe areas in the primary subsystem 16.
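The parity offload can be sketched as the remote subsystem applying the replicated log blocks to its own stripe rows and returning the XOR parity per touched row. This is an assumption-laden illustration (equal-sized blocks, dictionaries for stripe rows), not the patent's protocol.

```python
from functools import reduce

def remote_flush_and_parity(stripe_rows, logged_blocks):
    """On the remote subsystem: write the replicated log data to its final
    stripe addresses and compute the XOR parity of each touched row; the
    parity is then returned (optionally compressed) to the primary."""
    parity = {}
    for row_id, new_blocks in logged_blocks.items():
        row = stripe_rows[row_id]
        row.update(new_blocks)                        # data to final addresses
        parity[row_id] = reduce(
            lambda a, b: bytes(x ^ y for x, y in zip(a, b)), row.values()
        )
    return parity                                     # primary stores this without recomputing it

rows = {0: {"S0": b"\x01\x02", "S1": b"\x04\x08"}}
print(remote_flush_and_parity(rows, {0: {"S1": b"\x0f\x0f"}}))  # {0: b'\x0e\r'}
```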
• The present invention goes beyond standard RAID by protecting data integrity, not just providing device reliability. Infinite roll-back provides protection during the window of vulnerability between backups. A hybrid mirror/stripe data organization results in improved performance. With the addition of the flashback module 34, a conventional RAID mirror is outperformed at a cost which approaches that of a stripe. Further performance enhancement is attained with replication on an otherwise dormant hot spare, and that hot spare can be used by a host-less appliance to generate a new baseline backup.
• The present invention can be implemented in various data processing systems, such as Enterprise systems, networks, SAN, NAS, and medium and small systems (e.g., in a personal computer, a write log is used and data is transferred to the user data set in the background). As such, in the description herein, the terms “host” and “host system” refer to any source of information that is in communication with the hybrid RAID subsystem for transferring data to, and from, the hybrid RAID subsystem.
  • The present invention has been described in considerable detail with reference to certain preferred versions thereof; however, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.

Claims (34)

1-55. (canceled)
56. A method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, comprising the steps of:
dividing the data storage area on the data storage units into a logical mirror area and a logical stripe area, such that when storing data in the mirror area, duplicating the data by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, storing data as stripes of blocks, including data blocks and associated error-correction blocks;
storing a data set in the stripe area, and storing an associated log cache in the mirror area;
in response to a request from a host to write data to the storage subsystem: storing the host data in the log cache in the mirror area, and acknowledging completion of the write to the host;
copying said host data from the log cache in the mirror area to the data set in the stripe area.
57. The method of claim 56, wherein:
the log cache comprises a write log having multiple time-sequential entries, each entry including a data block, the data block address in the data set, and a data block time stamp.
58. The method of claim 57, wherein:
said request from the host includes said host data and a block address in the data set for storing the host data;
the step of storing the host data in the log cache in response to said host request further includes the steps of entering the host data, said block address and a time stamp in an entry in the log cache.
59. The method of claim 57, wherein:
the step of copying said host data from the log cache in the mirror area to the data set in the stripe area, further comprises the steps of: copying the host data in said log cache entry in the mirror area to said block address in the data set in the stripe area.
60. The method of claim 57, further comprising the steps of:
archiving said log cache entry in an archive; and
purging said entry from the cache log.
61. The method of claim 58 further comprising the steps of:
in response to a request to recreate a state of the data set at a selected time:
obtaining a copy of the data set created at a back-up time prior to the selected time;
obtaining a cache log associated with said data set copy, the associated cache log including entries created time-sequentially immediately subsequent to said back-up time; and
time-sequentially transferring each data block in each entry of said associated cache log, to the corresponding block address in the data set copy, until said selected time stamp is reached in an entry of the associated cache log.
62. The method of claim 57, wherein the storage subsystem further includes a cache memory, the method further comprising the steps of:
in response to a request to write data to the storage subsystem: storing the data in the cache memory, acknowledging completion of the write, and copying the data from the cache memory to the log cache in the mirror area.
63. The method of claim 62, further comprising the steps of:
copying said data from the log cache in the mirror area to the data set in the stripe area.
64. The method of claim 63, further comprising the steps of:
in response to a request to read data from the storage subsystem:
determining if the requested data is in the cache memory, and if so, providing the requested data from the cache memory,
otherwise, determining if the requested data is in the log cache in the mirror area, and if so, providing the requested data from the log cache,
otherwise, determining if the requested data is in the data set in the stripe area, and if so, providing the requested data from the data set.
65. The method of claim 57, further comprising the steps of compressing the data stored in the mirror area.
66. The method of claim 57, wherein the data storage units comprise data disk drives.
67. A fault-tolerant storage subsystem comprising:
an array of failure independent data storage units;
a controller that logically divides the data storage area on the data storage units into a logical mirror area and a logical stripe area, wherein the controller stores data in the mirror area by duplicating the data and keeping a duplicate copy of the data on a pair of storage units, and the controller stores data in the stripe area as stripes of blocks, including data blocks and associated error-correction blocks;
the controller further maintains a data set in the stripe area, and an associated log cache in the mirror area; and
in response to a request to write incoming data to the storage subsystem, the controller stores the incoming data in the log cache in the mirror area, and acknowledges completion of the write, and the controller copies said incoming data from the log cache in the mirror area to the data set in the stripe area.
68. The storage subsystem of claim 67, wherein:
the log cache comprises a write log having multiple time-sequential entries, each entry including a data block, the data block address in the data set, and a time stamp.
69. The storage subsystem of claim 68, wherein:
said request includes said incoming data and a block address in the data set for storing the incoming data; and
the controller enters the incoming data, said block address and a time stamp in an entry in the log cache.
70. The storage subsystem of claim 69, wherein in response to a request to read data from the data set, the controller further:
determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache,
otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
71. The storage subsystem of claim 69, wherein:
the controller copies said incoming data from the log cache in the mirror area to the data set in the stripe area, by copying the incoming data in said log cache entry in the mirror area to said block address in the data set in the stripe area.
72. The storage subsystem of claim 69, further comprising a cache memory, wherein:
in response to a request to write data to the data set, the controller stores the data in the cache memory, and acknowledges completion of the write; and
the controller further copies the data from the cache memory to the log cache in the mirror area.
73. The storage subsystem of claim 72, wherein the controller further copies said data from the log cache in the mirror area to the data set in the stripe area.
74. The storage subsystem of claim 73, wherein in response to a request to read data from the data set, the controller further:
determines if the requested data is in the cache memory, and if so, provides the requested data from the cache memory,
otherwise, the controller determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache,
otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
75. The storage subsystem of claim 68, wherein the controller further compresses the data stored in the mirror area.
76. A data organization manager for a fault-tolerant storage subsystem having an array of failure independent data storage units, the data organization manager comprising:
a controller that logically divides the data storage area on the data storage units into a hybrid of a logical mirror area and a logical stripe area, wherein the controller stores data in the mirror area by duplicating the data and keeping a duplicate copy of the data on a pair of storage units, and the controller stores data in the stripe area as stripes of blocks, including data blocks and associated error-correction blocks;
the controller maintains a data set in the stripe area, and an associated log cache in the mirror area, and in response to a request to write data to the storage subsystem, the controller further: stores the data in the log cache in the mirror area, acknowledges completion of the write, and copies said data from the log cache in the mirror area to the data set in the stripe area.
77. The data organization manager of claim 76, wherein:
the log cache comprises a write log having multiple time-sequential entries, each entry including a data block, the data block address in the data set, and a time stamp;
said request includes said data and a block address in the data set for storing the data; and
the controller enters the data, said block address and a time stamp in an entry in the log cache.
78. The data organization manager of claim 76, wherein in response to a request to read data from the storage subsystem, the controller further:
determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache;
otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
79. The data organization manager of claim 77, wherein:
the controller copies said data from the log cache in the mirror area to the data set in the stripe area, by copying the data in said log cache entry in the mirror area to said block address in the data set in the stripe area.
80. The data organization manager of claim 79, wherein in response to a request to recreate a state of the data set at a selected time, the controller further:
obtains a copy of the data set created at a back-up time prior to the selected time;
obtains a cache log associated with said data set copy, the associated cache log including entries created time sequentially immediately subsequent to said back-up time; and
time sequentially transfers each data block in each entry of said associated cache log, to the corresponding block address in the data set copy, until said selected time stamp is reached in an entry of the associated cache log.
81. The data organization manager of claim 77, further comprising a cache memory, wherein:
in response to a request to write data to the data set, the controller stores the data in the cache memory, and acknowledges completion of the write; and
the controller further copies the data from the cache memory to the log cache in the mirror area.
82. The data organization manager of claim 81, wherein the controller further copies said data from the log cache in the mirror area to the data set in the stripe area.
83. The data organization manager of claim 76, wherein in response to a request to read data from the data set, the controller further:
determines if the requested data is in the cache memory, and if so, provides the requested data from the cache memory,
otherwise, the controller determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache,
otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
84. The data organization manager of claim 81, further comprising a memory backup module including non-volatile memory and a battery, wherein the storage subsystem is normally powered from a power supply;
wherein, upon detecting power failure from the power supply, the controller powers the cache memory and the non-volatile memory from the battery instead, and copies the data content of the cache memory to the non-volatile memory, and upon detecting restoration of power from the power supply, the controller copies back said data content from the non-volatile memory to the cache memory.
85. The data organization manager of claim 84, wherein said cache memory comprises random access memory (RAM), and said non-volatile memory comprises flash memory (FLASH).
86. The data organization manager of claim 84, wherein said battery comprises a rechargeable battery that is normally trickle charged by the power supply.
87. The data organization manager of claim 76, wherein the controller further reserves one of the storage units as a spare for use in case one of the other storage units fails, such that while the spare storage unit is not in use, the controller further:
replicates the log cache data stored in the mirror area into the spare storage unit, such that multiple copies of that data are stored in the spare storage unit; and
upon receiving a request to read data from the data set, the controller determines if the requested data is in the spare storage unit, and if so, the controller selects a copy of the requested data in the spare storage unit that can be provided with minimum read latency relative to other copies of the selected data, and provides the selected copy of the requested data.
88. The data organization manager of claim 77, wherein the controller further compresses the data stored in the mirror area and the cache.
US11/433,152 2002-09-20 2006-05-13 Accelerated RAID with rewind capability Abandoned US20060206665A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/433,152 US20060206665A1 (en) 2002-09-20 2006-05-13 Accelerated RAID with rewind capability

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/247,859 US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability
US11/433,152 US20060206665A1 (en) 2002-09-20 2006-05-13 Accelerated RAID with rewind capability

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/247,859 Division US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability

Publications (1)

Publication Number Publication Date
US20060206665A1 true US20060206665A1 (en) 2006-09-14

Family

ID=31946445

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/247,859 Expired - Fee Related US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability
US11/433,152 Abandoned US20060206665A1 (en) 2002-09-20 2006-05-13 Accelerated RAID with rewind capability

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/247,859 Expired - Fee Related US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability

Country Status (3)

Country Link
US (2) US7076606B2 (en)
EP (1) EP1400899A3 (en)
JP (1) JP2004118837A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080276124A1 (en) * 2007-05-04 2008-11-06 Hetzler Steven R Incomplete write protection for disk array
US20090204758A1 (en) * 2008-02-13 2009-08-13 Dell Products, Lp Systems and methods for asymmetric raid devices
US20090303630A1 (en) * 2008-06-10 2009-12-10 H3C Technologies Co., Ltd. Method and apparatus for hard disk power failure protection
US20100037017A1 (en) * 2008-08-08 2010-02-11 Samsung Electronics Co., Ltd Hybrid storage apparatus and logical block address assigning method
US20100161883A1 (en) * 2008-12-24 2010-06-24 Kabushiki Kaisha Toshiba Nonvolatile Semiconductor Memory Drive and Data Management Method of Nonvolatile Semiconductor Memory Drive
US7886111B2 (en) 2006-05-24 2011-02-08 Compellent Technologies System and method for raid management, reallocation, and restriping
US20110225353A1 (en) * 2008-10-30 2011-09-15 Robert C Elliott Redundant array of independent disks (raid) write cache sub-assembly
US20120151133A1 (en) * 2010-12-13 2012-06-14 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8555108B2 (en) 2003-08-14 2013-10-08 Compellent Technologies Virtual disk drive system and method
US8819478B1 (en) * 2008-06-30 2014-08-26 Emc Corporation Auto-adapting multi-tier cache
US8856427B2 (en) 2011-06-08 2014-10-07 Panasonic Corporation Memory controller and non-volatile storage device
CN105068760A (en) * 2013-10-18 2015-11-18 华为技术有限公司 Data storage method, data storage apparatus and storage device
US9489150B2 (en) 2003-08-14 2016-11-08 Dell International L.L.C. System and method for transferring data between different raid data storage types for current data and replay data
US9996421B2 (en) 2013-10-18 2018-06-12 Huawei Technologies Co., Ltd. Data storage method, data storage apparatus, and storage device

Families Citing this family (225)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418620B1 (en) * 2001-02-16 2008-08-26 Swsoft Holdings, Ltd. Fault tolerant distributed storage method and controller using (N,K) algorithms
JP4186602B2 (en) 2002-12-04 2008-11-26 株式会社日立製作所 Update data writing method using journal log
JP2004213435A (en) * 2003-01-07 2004-07-29 Hitachi Ltd Storage device system
US6965979B2 (en) * 2003-01-29 2005-11-15 Pillar Data Systems, Inc. Methods and systems of host caching
JP4165747B2 (en) 2003-03-20 2008-10-15 株式会社日立製作所 Storage system, control device, and control device program
US7668876B1 (en) * 2003-04-25 2010-02-23 Symantec Operating Corporation Snapshot-based replication infrastructure for efficient logging with minimal performance effect
US20040254962A1 (en) * 2003-06-12 2004-12-16 Shoji Kodama Data replication for enterprise applications
US7149858B1 (en) * 2003-10-31 2006-12-12 Veritas Operating Corporation Synchronous replication for system and data security
JP2005166016A (en) * 2003-11-11 2005-06-23 Nec Corp Disk array device
US7234074B2 (en) * 2003-12-17 2007-06-19 International Business Machines Corporation Multiple disk data storage system for reducing power consumption
JP4634049B2 (en) * 2004-02-04 2011-02-16 株式会社日立製作所 Error notification control in disk array system
JP4112520B2 (en) * 2004-03-25 2008-07-02 株式会社東芝 Correction code generation apparatus, correction code generation method, error correction apparatus, and error correction method
US20050235336A1 (en) * 2004-04-15 2005-10-20 Kenneth Ma Data storage system and method that supports personal video recorder functionality
JP4519563B2 (en) * 2004-08-04 2010-08-04 株式会社日立製作所 Storage system and data processing system
US7519629B2 (en) * 2004-09-30 2009-04-14 International Business Machines Corporation System and method for tolerating multiple storage device failures in a storage system with constrained parity in-degree
JP4428202B2 (en) * 2004-11-02 2010-03-10 日本電気株式会社 Disk array subsystem, distributed arrangement method, control method, and program in disk array subsystem
US7702864B2 (en) * 2004-11-18 2010-04-20 International Business Machines Corporation Apparatus, system, and method for writing stripes in parallel to unique persistent storage devices
JP2006268420A (en) * 2005-03-24 2006-10-05 Nec Corp Disk array device, storage system and control method
US7644046B1 (en) * 2005-06-23 2010-01-05 Hewlett-Packard Development Company, L.P. Method of estimating storage system cost
US7529968B2 (en) * 2005-11-07 2009-05-05 Lsi Logic Corporation Storing RAID configuration data within a BIOS image
US7761426B2 (en) * 2005-12-07 2010-07-20 International Business Machines Corporation Apparatus, system, and method for continuously protecting data
JP2007264894A (en) * 2006-03-28 2007-10-11 Kyocera Mita Corp Data storage system
US7617361B2 (en) * 2006-03-29 2009-11-10 International Business Machines Corporation Configureable redundant array of independent disks
KR100771521B1 (en) * 2006-10-30 2007-10-30 삼성전자주식회사 Flash memory device having a multi-leveled cell and programming method thereof
US7904647B2 (en) * 2006-11-27 2011-03-08 Lsi Corporation System for optimizing the performance and reliability of a storage controller cache offload circuit
US20080168224A1 (en) * 2007-01-09 2008-07-10 Ibm Corporation Data protection via software configuration of multiple disk drives
US8370715B2 (en) * 2007-04-12 2013-02-05 International Business Machines Corporation Error checking addressable blocks in storage
US8032702B2 (en) 2007-05-24 2011-10-04 International Business Machines Corporation Disk storage management of a tape library with data backup and recovery
US7853751B2 (en) * 2008-03-12 2010-12-14 Lsi Corporation Stripe caching and data read ahead
JP2008217811A (en) * 2008-04-03 2008-09-18 Hitachi Ltd Disk controller using nonvolatile memory
JP2009252114A (en) * 2008-04-09 2009-10-29 Hitachi Ltd Storage system and data saving method
US20090282194A1 (en) * 2008-05-07 2009-11-12 Masashi Nagashima Removable storage accelerator device
KR20110050404A (en) 2008-05-16 2011-05-13 퓨전-아이오, 인크. Apparatus, system, and method for detecting and replacing failed data storage
CN102037628A (en) * 2008-05-22 2011-04-27 Lsi公司 Battery backup system with sleep mode
CN101325610B (en) * 2008-07-30 2011-12-28 杭州华三通信技术有限公司 Virtual tape library backup system and magnetic disk power supply control method
WO2010016115A1 (en) * 2008-08-06 2010-02-11 富士通株式会社 Disk array device control unit, data transfer device, and power recovery processing method
US20110258362A1 (en) * 2008-12-19 2011-10-20 Mclaren Moray Redundant data storage for uniform read latency
US20100287407A1 (en) * 2009-05-05 2010-11-11 Siemens Medical Solutions Usa, Inc. Computer Storage Synchronization and Backup System
US8281227B2 (en) 2009-05-18 2012-10-02 Fusion-10, Inc. Apparatus, system, and method to increase data integrity in a redundant storage system
US8307258B2 (en) 2009-05-18 2012-11-06 Fusion-10, Inc Apparatus, system, and method for reconfiguring an array to operate with less storage elements
US8732396B2 (en) * 2009-06-08 2014-05-20 Lsi Corporation Method and apparatus for protecting the integrity of cached data in a direct-attached storage (DAS) system
US7941696B2 (en) * 2009-08-11 2011-05-10 Texas Memory Systems, Inc. Flash-based memory system with static or variable length page stripes including data protection information and auxiliary protection stripes
US7856528B1 (en) * 2009-08-11 2010-12-21 Texas Memory Systems, Inc. Method and apparatus for protecting data using variable size page stripes in a FLASH-based storage system
US8930622B2 (en) 2009-08-11 2015-01-06 International Business Machines Corporation Multi-level data protection for flash memory system
GB2488462B (en) * 2009-12-17 2018-01-17 Ibm Data management in solid state storage systems
US9785561B2 (en) * 2010-02-17 2017-10-10 International Business Machines Corporation Integrating a flash cache into large storage systems
US9311184B2 (en) * 2010-02-27 2016-04-12 Cleversafe, Inc. Storing raid data as encoded data slices in a dispersed storage network
US8112663B2 (en) * 2010-03-26 2012-02-07 Lsi Corporation Method to establish redundancy and fault tolerance better than RAID level 6 without using parity
US8181062B2 (en) * 2010-03-26 2012-05-15 Lsi Corporation Method to establish high level of redundancy, fault tolerance and performance in a raid system without using parity and mirroring
US20110296105A1 (en) * 2010-06-01 2011-12-01 Hsieh-Huan Yen System and method for realizing raid-1 on a portable storage medium
US8554741B1 (en) * 2010-06-16 2013-10-08 Western Digital Technologies, Inc. Timeline application for log structured storage devices
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
US8738962B2 (en) * 2010-11-17 2014-05-27 International Business Machines Corporation Memory mirroring with memory compression
TWI417727B (en) * 2010-11-22 2013-12-01 Phison Electronics Corp Memory storage device, memory controller thereof, and method for responding instruction sent from host thereof
US8966184B2 (en) 2011-01-31 2015-02-24 Intelligent Intellectual Property Holdings 2, LLC. Apparatus, system, and method for managing eviction of data
JP5505329B2 (en) * 2011-02-22 2014-05-28 日本電気株式会社 Disk array device and control method thereof
CN102682012A (en) * 2011-03-14 2012-09-19 成都市华为赛门铁克科技有限公司 Method and device for reading and writing data in file system
US9396067B1 (en) * 2011-04-18 2016-07-19 American Megatrends, Inc. I/O accelerator for striped disk arrays using parity
US9300590B2 (en) 2011-06-24 2016-03-29 Dell Products, Lp System and method for dynamic rate control in Ethernet fabrics
US9798615B2 (en) 2011-07-05 2017-10-24 Dell Products, Lp System and method for providing a RAID plus copy model for a storage network
US8589640B2 (en) 2011-10-14 2013-11-19 Pure Storage, Inc. Method for maintaining multiple fingerprint tables in a deduplicating storage system
US8799557B1 (en) * 2011-10-13 2014-08-05 Netapp, Inc. System and method for non-volatile random access memory emulation
US9053033B1 (en) 2011-12-30 2015-06-09 Emc Corporation System and method for cache content sharing
US9009416B1 (en) 2011-12-30 2015-04-14 Emc Corporation System and method for managing cache system content directories
US9235524B1 (en) 2011-12-30 2016-01-12 Emc Corporation System and method for improving cache performance
US9104529B1 (en) 2011-12-30 2015-08-11 Emc Corporation System and method for copying a cache system
US8930947B1 (en) 2011-12-30 2015-01-06 Emc Corporation System and method for live migration of a virtual machine with dedicated cache
US9158578B1 (en) 2011-12-30 2015-10-13 Emc Corporation System and method for migrating virtual machines
US8627012B1 (en) * 2011-12-30 2014-01-07 Emc Corporation System and method for improving cache performance
US9767032B2 (en) 2012-01-12 2017-09-19 Sandisk Technologies Llc Systems and methods for cache endurance
US10359972B2 (en) * 2012-08-31 2019-07-23 Sandisk Technologies Llc Systems, methods, and interfaces for adaptive persistence
US10073656B2 (en) 2012-01-27 2018-09-11 Sandisk Technologies Llc Systems and methods for storage virtualization
US8856619B1 (en) * 2012-03-09 2014-10-07 Google Inc. Storing data across groups of storage nodes
GB2503274A (en) * 2012-06-22 2013-12-25 Ibm Restoring redundancy in a RAID
US9059868B2 (en) 2012-06-28 2015-06-16 Dell Products, Lp System and method for associating VLANs with virtual switch ports
US20150378858A1 (en) * 2013-02-28 2015-12-31 Hitachi, Ltd. Storage system and memory device fault recovery method
JP6248435B2 (en) * 2013-07-04 2017-12-20 富士通株式会社 Storage device and storage device control method
US10019352B2 (en) 2013-10-18 2018-07-10 Sandisk Technologies Llc Systems and methods for adaptive reserve storage
JP6244974B2 (en) * 2014-02-24 2017-12-13 富士通株式会社 Storage device and storage device control method
US10574754B1 (en) 2014-06-04 2020-02-25 Pure Storage, Inc. Multi-chassis array with multi-level load balancing
US9218244B1 (en) 2014-06-04 2015-12-22 Pure Storage, Inc. Rebuilding data across storage nodes
US11068363B1 (en) 2014-06-04 2021-07-20 Pure Storage, Inc. Proactively rebuilding data in a storage cluster
US8850108B1 (en) 2014-06-04 2014-09-30 Pure Storage, Inc. Storage cluster
US9213485B1 (en) 2014-06-04 2015-12-15 Pure Storage, Inc. Storage system architecture
US9003144B1 (en) 2014-06-04 2015-04-07 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US9612952B2 (en) * 2014-06-04 2017-04-04 Pure Storage, Inc. Automatically reconfiguring a storage memory topology
US9836234B2 (en) 2014-06-04 2017-12-05 Pure Storage, Inc. Storage cluster
US9367243B1 (en) 2014-06-04 2016-06-14 Pure Storage, Inc. Scalable non-uniform storage sizes
US11652884B2 (en) 2014-06-04 2023-05-16 Pure Storage, Inc. Customized hash algorithms
US9946894B2 (en) * 2014-06-27 2018-04-17 Panasonic Intellectual Property Management Co., Ltd. Data processing method and data processing device
US10114757B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US9836245B2 (en) 2014-07-02 2017-12-05 Pure Storage, Inc. Non-volatile RAM and flash memory in a non-volatile solid-state storage
US11604598B2 (en) 2014-07-02 2023-03-14 Pure Storage, Inc. Storage cluster with zoned drives
US11886308B2 (en) 2014-07-02 2024-01-30 Pure Storage, Inc. Dual class of service for unified file and object messaging
US8868825B1 (en) 2014-07-02 2014-10-21 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US9021297B1 (en) 2014-07-02 2015-04-28 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
US9811677B2 (en) 2014-07-03 2017-11-07 Pure Storage, Inc. Secure data replication in a storage grid
US8874836B1 (en) 2014-07-03 2014-10-28 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US10853311B1 (en) 2014-07-03 2020-12-01 Pure Storage, Inc. Administration through files in a storage system
US9558069B2 (en) 2014-08-07 2017-01-31 Pure Storage, Inc. Failure mapping in a storage array
US9483346B2 (en) 2014-08-07 2016-11-01 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US10983859B2 (en) 2014-08-07 2021-04-20 Pure Storage, Inc. Adjustable error correction based on memory health in a storage unit
US9082512B1 (en) 2014-08-07 2015-07-14 Pure Storage, Inc. Die-level monitoring in a storage cluster
US9766972B2 (en) 2014-08-07 2017-09-19 Pure Storage, Inc. Masking defective bits in a storage array
US9495255B2 (en) 2014-08-07 2016-11-15 Pure Storage, Inc. Error recovery in a storage cluster
US10079711B1 (en) 2014-08-20 2018-09-18 Pure Storage, Inc. Virtual file server with preserved MAC address
US9563524B2 (en) 2014-12-11 2017-02-07 International Business Machines Corporation Multi level data recovery in storage disk arrays
US9747177B2 (en) * 2014-12-30 2017-08-29 International Business Machines Corporation Data storage system employing a hot spare to store and service accesses to data having lower associated wear
US20160202924A1 (en) * 2015-01-13 2016-07-14 Telefonaktiebolaget L M Ericsson (Publ) Diagonal organization of memory blocks in a circular organization of memories
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US11294893B2 (en) 2015-03-20 2022-04-05 Pure Storage, Inc. Aggregation of queries
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10178169B2 (en) 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
US10846275B2 (en) 2015-06-26 2020-11-24 Pure Storage, Inc. Key management in a storage device
US10983732B2 (en) 2015-07-13 2021-04-20 Pure Storage, Inc. Method and system for accessing a file
US11232079B2 (en) 2015-07-16 2022-01-25 Pure Storage, Inc. Efficient distribution of large directories
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US11341136B2 (en) 2015-09-04 2022-05-24 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
US10853266B2 (en) 2015-09-30 2020-12-01 Pure Storage, Inc. Hardware assisted data lookup methods
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US10762069B2 (en) 2015-09-30 2020-09-01 Pure Storage, Inc. Mechanism for a system where data and metadata are located closely together
US9727244B2 (en) 2015-10-05 2017-08-08 International Business Machines Corporation Expanding effective storage capacity of a data storage system while providing support for address mapping recovery
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution
US10261690B1 (en) 2016-05-03 2019-04-16 Pure Storage, Inc. Systems and methods for operating a storage system
US11231858B2 (en) 2016-05-19 2022-01-25 Pure Storage, Inc. Dynamically configuring a storage system to facilitate independent scaling of resources
US10691567B2 (en) 2016-06-03 2020-06-23 Pure Storage, Inc. Dynamically forming a failure domain in a storage system that includes a plurality of blades
US11706895B2 (en) 2016-07-19 2023-07-18 Pure Storage, Inc. Independent scaling of compute resources and storage resources in a storage system
US11861188B2 (en) 2016-07-19 2024-01-02 Pure Storage, Inc. System having modular accelerators
US9672905B1 (en) 2016-07-22 2017-06-06 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US11449232B1 (en) 2016-07-22 2022-09-20 Pure Storage, Inc. Optimal scheduling of flash operations
US10768819B2 (en) 2016-07-22 2020-09-08 Pure Storage, Inc. Hardware support for non-disruptive upgrades
US10216420B1 (en) 2016-07-24 2019-02-26 Pure Storage, Inc. Calibration of flash channels in SSD
US11604690B2 (en) 2016-07-24 2023-03-14 Pure Storage, Inc. Online failure span determination
US11080155B2 (en) 2016-07-24 2021-08-03 Pure Storage, Inc. Identifying error types among flash memory
US10366004B2 (en) 2016-07-26 2019-07-30 Pure Storage, Inc. Storage system with elective garbage collection to reduce flash contention
US11797212B2 (en) 2016-07-26 2023-10-24 Pure Storage, Inc. Data migration for zoned drives
US11734169B2 (en) 2016-07-26 2023-08-22 Pure Storage, Inc. Optimizing spool and memory space management
US11886334B2 (en) 2016-07-26 2024-01-30 Pure Storage, Inc. Optimizing spool and memory space management
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US11422719B2 (en) 2016-09-15 2022-08-23 Pure Storage, Inc. Distributed file deletion and truncation
US9747039B1 (en) 2016-10-04 2017-08-29 Pure Storage, Inc. Reservations over multiple paths on NVMe over fabrics
US10756816B1 (en) 2016-10-04 2020-08-25 Pure Storage, Inc. Optimized fibre channel and non-volatile memory express access
US11550481B2 (en) 2016-12-19 2023-01-10 Pure Storage, Inc. Efficiently writing data in a zoned drive storage system
US11307998B2 (en) 2017-01-09 2022-04-19 Pure Storage, Inc. Storage efficiency of encrypted host system data
US11955187B2 (en) 2017-01-13 2024-04-09 Pure Storage, Inc. Refresh of differing capacity NAND
US9747158B1 (en) 2017-01-13 2017-08-29 Pure Storage, Inc. Intelligent refresh of 3D NAND
US10979223B2 (en) 2017-01-31 2021-04-13 Pure Storage, Inc. Separate encryption for a solid-state drive
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US11016667B1 (en) 2017-04-05 2021-05-25 Pure Storage, Inc. Efficient mapping for LUNs in storage memory with holes in address space
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US10516645B1 (en) 2017-04-27 2019-12-24 Pure Storage, Inc. Address resolution broadcasting in a networked device
US10141050B1 (en) 2017-04-27 2018-11-27 Pure Storage, Inc. Page writes for triple level cell flash memory
US11003542B1 (en) * 2017-04-28 2021-05-11 EMC IP Holding Company LLC Online consistent system checkpoint
US11467913B1 (en) 2017-06-07 2022-10-11 Pure Storage, Inc. Snapshots with crash consistency in a storage system
US11782625B2 (en) 2017-06-11 2023-10-10 Pure Storage, Inc. Heterogeneity supportive resiliency groups
US11947814B2 (en) 2017-06-11 2024-04-02 Pure Storage, Inc. Optimizing resiliency group formation stability
US11138103B1 (en) 2017-06-11 2021-10-05 Pure Storage, Inc. Resiliency groups
US10425473B1 (en) 2017-07-03 2019-09-24 Pure Storage, Inc. Stateful connection reset in a storage cluster with a stateless load balancer
US10402266B1 (en) 2017-07-31 2019-09-03 Pure Storage, Inc. Redundant array of independent disks in a direct-mapped flash storage system
US10210926B1 (en) 2017-09-15 2019-02-19 Pure Storage, Inc. Tracking of optimum read voltage thresholds in nand flash devices
US10877827B2 (en) 2017-09-15 2020-12-29 Pure Storage, Inc. Read voltage optimization
US10496330B1 (en) 2017-10-31 2019-12-03 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US11024390B1 (en) 2017-10-31 2021-06-01 Pure Storage, Inc. Overlapping RAID groups
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US10515701B1 (en) 2017-10-31 2019-12-24 Pure Storage, Inc. Overlapping raid groups
US10545687B1 (en) 2017-10-31 2020-01-28 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10990566B1 (en) 2017-11-20 2021-04-27 Pure Storage, Inc. Persistent file locks in a storage system
US10719265B1 (en) 2017-12-08 2020-07-21 Pure Storage, Inc. Centralized, quorum-aware handling of device reservation requests in a storage system
US10929053B2 (en) 2017-12-08 2021-02-23 Pure Storage, Inc. Safe destructive actions on drives
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
JP6734305B2 (en) * 2018-01-10 2020-08-05 NEC Platforms, Ltd. Disk array controller, storage device, storage device recovery method, and disk array controller recovery program
US10976948B1 (en) 2018-01-31 2021-04-13 Pure Storage, Inc. Cluster expansion mechanism
US10733053B1 (en) 2018-01-31 2020-08-04 Pure Storage, Inc. Disaster recovery for high-bandwidth distributed archives
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US11036596B1 (en) 2018-02-18 2021-06-15 Pure Storage, Inc. System for delaying acknowledgements on open NAND locations until durability has been confirmed
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US10853146B1 (en) 2018-04-27 2020-12-01 Pure Storage, Inc. Efficient data forwarding in a networked device
US11385792B2 (en) 2018-04-27 2022-07-12 Pure Storage, Inc. High availability controller pair transitioning
US10931450B1 (en) 2018-04-27 2021-02-23 Pure Storage, Inc. Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers
US10817197B2 (en) 2018-05-04 2020-10-27 Microsoft Technology Licensing, Llc Data partitioning in a distributed storage system
US11120046B2 (en) 2018-05-04 2021-09-14 Microsoft Technology Licensing Llc Data replication in a distributed storage system
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US10929229B2 (en) * 2018-06-21 2021-02-23 International Business Machines Corporation Decentralized RAID scheme having distributed parity computation and recovery
US11438279B2 (en) 2018-07-23 2022-09-06 Pure Storage, Inc. Non-disruptive conversion of a clustered service from single-chassis to multi-chassis
US11868309B2 (en) 2018-09-06 2024-01-09 Pure Storage, Inc. Queue management for data relocation
US11500570B2 (en) 2018-09-06 2022-11-15 Pure Storage, Inc. Efficient relocation of data utilizing different programming modes
US11520514B2 (en) 2018-09-06 2022-12-06 Pure Storage, Inc. Optimized relocation of data based on data characteristics
US11354058B2 (en) 2018-09-06 2022-06-07 Pure Storage, Inc. Local relocation of data stored at a storage device of a storage system
US10454498B1 (en) 2018-10-18 2019-10-22 Pure Storage, Inc. Fully pipelined hardware engine design for fast and efficient inline lossless data compression
US10976947B2 (en) 2018-10-26 2021-04-13 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US11237960B2 (en) * 2019-05-21 2022-02-01 Arm Limited Method and apparatus for asynchronous memory write-back in a data processing system
US11714572B2 (en) 2019-06-19 2023-08-01 Pure Storage, Inc. Optimized data resiliency in a modular storage system
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US11893126B2 (en) 2019-10-14 2024-02-06 Pure Storage, Inc. Data deletion for a multi-tenant environment
US11704192B2 (en) 2019-12-12 2023-07-18 Pure Storage, Inc. Budgeting open blocks based on power loss protection
US11847331B2 (en) 2019-12-12 2023-12-19 Pure Storage, Inc. Budgeting open blocks of a storage unit based on power loss prevention
US11416144B2 (en) 2019-12-12 2022-08-16 Pure Storage, Inc. Dynamic use of segment or zone power loss protection in a flash device
US11188432B2 (en) 2020-02-28 2021-11-30 Pure Storage, Inc. Data resiliency by partially deallocating data blocks of a storage device
US11507297B2 (en) 2020-04-15 2022-11-22 Pure Storage, Inc. Efficient management of optimal read levels for flash storage systems
US11256587B2 (en) 2020-04-17 2022-02-22 Pure Storage, Inc. Intelligent access to a storage device
US11416338B2 (en) 2020-04-24 2022-08-16 Pure Storage, Inc. Resiliency scheme to enhance storage performance
US11474986B2 (en) 2020-04-24 2022-10-18 Pure Storage, Inc. Utilizing machine learning to streamline telemetry processing of storage media
US11941287B2 (en) * 2020-06-17 2024-03-26 EMC IP Holding Company, LLC System and method for near-instant unmapping and write-same in a log-structured storage cluster
US11768763B2 (en) 2020-07-08 2023-09-26 Pure Storage, Inc. Flash secure erase
US11693596B2 (en) 2020-08-13 2023-07-04 Seagate Technology Llc Pre-emptive storage strategies to reduce host command collisions
CN112015340A (en) * 2020-08-25 2020-12-01 实时侠智能控制技术有限公司 Nonvolatile data storage structure and storage method
US11513974B2 (en) 2020-09-08 2022-11-29 Pure Storage, Inc. Using nonce to control erasure of data blocks of a multi-controller storage system
US11681448B2 (en) 2020-09-08 2023-06-20 Pure Storage, Inc. Multiple device IDs in a multi-fabric module storage system
US11487455B2 (en) 2020-12-17 2022-11-01 Pure Storage, Inc. Dynamic block allocation to optimize storage system performance
US11847324B2 (en) 2020-12-31 2023-12-19 Pure Storage, Inc. Optimizing resiliency groups for data regions of a storage system
US11614880B2 (en) 2020-12-31 2023-03-28 Pure Storage, Inc. Storage system with selectable write paths
US11630593B2 (en) 2021-03-12 2023-04-18 Pure Storage, Inc. Inline flash memory qualification in a storage system
US11507597B2 (en) 2021-03-31 2022-11-22 Pure Storage, Inc. Data replication to meet a recovery point objective
US11832410B2 (en) 2021-09-14 2023-11-28 Pure Storage, Inc. Mechanical energy absorbing bracket apparatus

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5297258A (en) * 1991-11-21 1994-03-22 Ast Research, Inc. Data logging for hard disk data storage systems
US5392244A (en) * 1993-08-19 1995-02-21 Hewlett-Packard Company Memory systems with data storage redundancy management
US5504883A (en) * 1993-02-01 1996-04-02 Lsc, Inc. Method and apparatus for insuring recovery of file control information for secondary storage systems
US5649152A (en) * 1994-10-13 1997-07-15 Vinca Corporation Method and system for providing a static snapshot of data stored on a mass storage system
US5835953A (en) * 1994-10-13 1998-11-10 Vinca Corporation Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US5960451A (en) * 1997-09-16 1999-09-28 Hewlett-Packard Company System and method for reporting available capacity in a data storage system with variable consumption characteristics
US6098128A (en) * 1995-09-18 2000-08-01 Cyberstorage Systems Corporation Universal storage management system
US6148368A (en) * 1997-07-31 2000-11-14 Lsi Logic Corporation Method for accelerating disk array write operations using segmented cache memory and data logging
US6170063B1 (en) * 1998-03-07 2001-01-02 Hewlett-Packard Company Method for performing atomic, concurrent read and write operations on multiple storage devices
US6223252B1 (en) * 1998-05-04 2001-04-24 International Business Machines Corporation Hot spare light weight mirror for raid system
US6247149B1 (en) * 1997-10-28 2001-06-12 Novell, Inc. Distributed diagnostic logging system
US20020156971A1 (en) * 2001-04-19 2002-10-24 International Business Machines Corporation Method, apparatus, and program for providing hybrid disk mirroring and striping
US6567889B1 (en) * 1997-12-19 2003-05-20 Lsi Logic Corporation Apparatus and method to provide virtual solid state disk in cache memory in a storage controller
US20030200473A1 (en) * 1990-06-01 2003-10-23 Amphus, Inc. System and method for activity or event based dynamic energy conserving server reconfiguration
US6674447B1 (en) * 1999-12-06 2004-01-06 Oridus, Inc. Method and apparatus for automatically recording snapshots of a computer screen during a computer session for later playback
US6704838B2 (en) * 1997-10-08 2004-03-09 Seagate Technology Llc Hybrid data storage and reconstruction system and method for a data storage device
US6718434B2 (en) * 2001-05-31 2004-04-06 Hewlett-Packard Development Company, L.P. Method and apparatus for assigning raid levels
US20040139128A1 (en) * 2002-07-15 2004-07-15 Becker Gregory A. System and method for backing up a computer system

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030200473A1 (en) * 1990-06-01 2003-10-23 Amphus, Inc. System and method for activity or event based dynamic energy conserving server reconfiguration
US5297258A (en) * 1991-11-21 1994-03-22 Ast Research, Inc. Data logging for hard disk data storage systems
US5504883A (en) * 1993-02-01 1996-04-02 Lsc, Inc. Method and apparatus for insuring recovery of file control information for secondary storage systems
US5392244A (en) * 1993-08-19 1995-02-21 Hewlett-Packard Company Memory systems with data storage redundancy management
US5649152A (en) * 1994-10-13 1997-07-15 Vinca Corporation Method and system for providing a static snapshot of data stored on a mass storage system
US5835953A (en) * 1994-10-13 1998-11-10 Vinca Corporation Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US6073222A (en) * 1994-10-13 2000-06-06 Vinca Corporation Using a virtual device to access data as it previously existed in a mass data storage system
US6085298A (en) * 1994-10-13 2000-07-04 Vinca Corporation Comparing mass storage devices through digests that are representative of stored data in order to minimize data transfer
US6098128A (en) * 1995-09-18 2000-08-01 Cyberstorage Systems Corporation Universal storage management system
US6148368A (en) * 1997-07-31 2000-11-14 Lsi Logic Corporation Method for accelerating disk array write operations using segmented cache memory and data logging
US5960451A (en) * 1997-09-16 1999-09-28 Hewlett-Packard Company System and method for reporting available capacity in a data storage system with variable consumption characteristics
US6704838B2 (en) * 1997-10-08 2004-03-09 Seagate Technology Llc Hybrid data storage and reconstruction system and method for a data storage device
US6247149B1 (en) * 1997-10-28 2001-06-12 Novell, Inc. Distributed diagnostic logging system
US6567889B1 (en) * 1997-12-19 2003-05-20 Lsi Logic Corporation Apparatus and method to provide virtual solid state disk in cache memory in a storage controller
US6170063B1 (en) * 1998-03-07 2001-01-02 Hewlett-Packard Company Method for performing atomic, concurrent read and write operations on multiple storage devices
US6223252B1 (en) * 1998-05-04 2001-04-24 International Business Machines Corporation Hot spare light weight mirror for raid system
US6674447B1 (en) * 1999-12-06 2004-01-06 Oridus, Inc. Method and apparatus for automatically recording snapshots of a computer screen during a computer session for later playback
US20020156971A1 (en) * 2001-04-19 2002-10-24 International Business Machines Corporation Method, apparatus, and program for providing hybrid disk mirroring and striping
US6718434B2 (en) * 2001-05-31 2004-04-06 Hewlett-Packard Development Company, L.P. Method and apparatus for assigning raid levels
US20040139128A1 (en) * 2002-07-15 2004-07-15 Becker Gregory A. System and method for backing up a computer system

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8555108B2 (en) 2003-08-14 2013-10-08 Compellent Technologies Virtual disk drive system and method
US10067712B2 (en) 2003-08-14 2018-09-04 Dell International L.L.C. Virtual disk drive system and method
US9489150B2 (en) 2003-08-14 2016-11-08 Dell International L.L.C. System and method for transferring data between different raid data storage types for current data and replay data
US9436390B2 (en) 2003-08-14 2016-09-06 Dell International L.L.C. Virtual disk drive system and method
US9047216B2 (en) 2003-08-14 2015-06-02 Compellent Technologies Virtual disk drive system and method
US9021295B2 (en) 2003-08-14 2015-04-28 Compellent Technologies Virtual disk drive system and method
US8560880B2 (en) 2003-08-14 2013-10-15 Compellent Technologies Virtual disk drive system and method
US9244625B2 (en) 2006-05-24 2016-01-26 Compellent Technologies System and method for raid management, reallocation, and restriping
US7886111B2 (en) 2006-05-24 2011-02-08 Compellent Technologies System and method for raid management, reallocation, and restriping
US8230193B2 (en) 2006-05-24 2012-07-24 Compellent Technologies System and method for raid management, reallocation, and restriping
US10296237B2 (en) 2006-05-24 2019-05-21 Dell International L.L.C. System and method for raid management, reallocation, and restripping
US8214684B2 (en) 2007-05-04 2012-07-03 International Business Machines Corporation Incomplete write protection for disk array
US20080276124A1 (en) * 2007-05-04 2008-11-06 Hetzler Steven R Incomplete write protection for disk array
US20090204758A1 (en) * 2008-02-13 2009-08-13 Dell Products, Lp Systems and methods for asymmetric raid devices
US20090303630A1 (en) * 2008-06-10 2009-12-10 H3C Technologies Co., Ltd. Method and apparatus for hard disk power failure protection
US8819478B1 (en) * 2008-06-30 2014-08-26 Emc Corporation Auto-adapting multi-tier cache
US20100037017A1 (en) * 2008-08-08 2010-02-11 Samsung Electronics Co., Ltd Hybrid storage apparatus and logical block address assigning method
US9619178B2 (en) * 2008-08-08 2017-04-11 Seagate Technology International Hybrid storage apparatus and logical block address assigning method
US20110225353A1 (en) * 2008-10-30 2011-09-15 Robert C Elliott Redundant array of independent disks (raid) write cache sub-assembly
US20100161883A1 (en) * 2008-12-24 2010-06-24 Kabushiki Kaisha Toshiba Nonvolatile Semiconductor Memory Drive and Data Management Method of Nonvolatile Semiconductor Memory Drive
US20120151133A1 (en) * 2010-12-13 2012-06-14 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US9286000B2 (en) 2010-12-13 2016-03-15 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8949524B2 (en) 2010-12-13 2015-02-03 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8543760B2 (en) * 2010-12-13 2013-09-24 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US9547452B2 (en) 2010-12-13 2017-01-17 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8458397B2 (en) * 2010-12-13 2013-06-04 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US20120272005A1 (en) * 2010-12-13 2012-10-25 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8856427B2 (en) 2011-06-08 2014-10-07 Panasonic Corporation Memory controller and non-volatile storage device
CN105068760A (en) * 2013-10-18 2015-11-18 Huawei Technologies Co., Ltd. Data storage method, data storage apparatus and storage device
US9996421B2 (en) 2013-10-18 2018-06-12 Huawei Technologies Co., Ltd. Data storage method, data storage apparatus, and storage device

Also Published As

Publication number Publication date
JP2004118837A (en) 2004-04-15
EP1400899A3 (en) 2011-04-06
US20040059869A1 (en) 2004-03-25
EP1400899A2 (en) 2004-03-24
US7076606B2 (en) 2006-07-11

Similar Documents

Publication Publication Date Title
US7076606B2 (en) Accelerated RAID with rewind capability
US7055058B2 (en) Self-healing log-structured RAID
US8904129B2 (en) Method and apparatus for backup and restore in a dynamic chunk allocation storage system
US6523087B2 (en) Utilizing parity caching and parity logging while closing the RAID5 write hole
AU710907B2 (en) Expansion of the number of drives in a raid set while maintaining integrity of migrated data
US9405627B2 (en) Flexible data storage system
US7054960B1 (en) System and method for identifying block-level write operations to be transferred to a secondary site during replication
EP0718766B1 (en) Method of operating a disk drive array
US7904679B2 (en) Method and apparatus for managing backup data
US7975168B2 (en) Storage system executing parallel correction write
US6067635A (en) Preservation of data integrity in a raid storage device
US6766491B2 (en) Parity mirroring between controllers in an active-active controller pair
US5488701A (en) In log sparing for log structured arrays
US7533298B2 (en) Write journaling using battery backed cache
US20030120864A1 (en) High-performance log-structured RAID
US8356292B2 (en) Method for updating control program of physical storage device in storage virtualization system and storage virtualization controller and system thereof
US6922752B2 (en) Storage system using fast storage devices for storing redundant data
US7069382B2 (en) Method of RAID 5 write hole prevention
US20030120869A1 (en) Write-back disk cache management
US20070033356A1 (en) System for Enabling Secure and Automatic Data Backup and Instant Recovery
US7293048B2 (en) System for preserving logical object integrity within a remote mirror cache
US20100146328A1 (en) Grid storage system and method of operating thereof
US20100037023A1 (en) System and method for transferring data between different raid data storage types for current data and replay data
CN112596673B (en) Multiple-active multiple-control storage system with dual RAID data protection
US20100146206A1 (en) Grid storage system and method of operating thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUANATUM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ORSELY, TIM;REEL/FRAME:017895/0995

Effective date: 20020831

AS Assignment

Owner name: CREDIT SUISSE, NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNORS:QUANTUM CORPORATION;ADVANCED DIGITAL INFORMATION CORPORATION;CERTANCE HOLDINGS CORPORATION;AND OTHERS;REEL/FRAME:019605/0159

Effective date: 20070712

Owner name: CREDIT SUISSE, NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNORS:QUANTUM CORPORATION;ADVANCED DIGITAL INFORMATION CORPORATION;CERTANCE HOLDINGS CORPORATION;AND OTHERS;REEL/FRAME:019605/0159

Effective date: 20070712

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: QUANTUM INTERNATIONAL, INC., WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: CERTANCE (US) HOLDINGS, INC., WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: ADVANCED DIGITAL INFORMATION CORPORATION, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: QUANTUM CORPORATION, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: CERTANCE, LLC, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: CERTANCE HOLDINGS CORPORATION, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329