CA2125201A1 - Digital storage system and method having alternating deferred updating of mirrored storage disks - Google Patents

Digital storage system and method having alternating deferred updating of mirrored storage disks

Info

Publication number
CA2125201A1
Authority
CA
Canada
Prior art keywords
disk
mode
disks
mirrored
mirrored pair
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002125201A
Other languages
French (fr)
Inventor
Christos A. Polyzois
Daniel M. Dias
Anupam K. Bhide
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CA2125201A1 publication Critical patent/CA2125201A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/1666Error detection or correction of the data by redundancy in hardware where the redundant component is memory or memory area
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2082Data synchronisation
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/002Programmed access in sequence to a plurality of record carriers or indexed parts, e.g. tracks, thereof, e.g. for editing
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/70Masking faults in memories by using spares or by reconfiguring
    • G11C29/74Masking faults in memories by using spares or by reconfiguring using duplex memories, i.e. using dual copies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00Record carriers by type
    • G11B2220/40Combinations of multiple record carriers
    • G11B2220/41Flat as opposed to hierarchical combination, e.g. library of tapes or discs, CD changer, or groups of record carriers that together store one title
    • G11B2220/415Redundant array of inexpensive disks [RAID] systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99953Recoverability
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99955Archiving or backup

Abstract

DIGITAL STORAGE SYSTEM AND METHOD HAVING
ALTERNATING DEFERRED UPDATING OF MIRRORED
STORAGE DISKS

ABSTRACT
A fault-tolerant high performance mirrored disk subsystem is described which has an improved disk writing scheme that provides high throughput for random disk writes and at the same time guarantees high performance for disk reads. The subsystem also has an improved recovery mechanism which provides fast recovery in the event that one of the mirrored disks fails and during such recovery provides the same performance as during non-recovery periods. Data blocks or pages which are to be written to disk are temporarily accumulated and sorted (or scheduled) into an order (or schedule) which can be written to disk efficiently, which in a preferred embodiment is in accordance with the physical location on disk at which each such block will be written. This also generally corresponds to an order which is encountered by a write head during a physical scan of a disk. The disks of a mirrored pair are operated out of phase with each other, so that one will be in read mode while the other is in write mode. Updated blocks are written out to the disk that is in write mode in sorted order, while guaranteed read performance is provided by the other disk that is in read mode. When a batch of updates has been applied to one disk of a mirrored pair, the mirrored pair switch their modes, and the other disk is updated. Preferably the updates are kept in a non-volatile memory, which furthermore advantageously may be made fault-tolerant as well. During recovery a pair of spare alternating mirrored disks is introduced to which new updates are directed, while a background scan process copies data from the surviving disk to the new mirrored pair.

Description

DIGITAL STORAGE SYSTEM AND METHOD HAVING
ALTERNATING DEFERRED UPDATING OF MIRRORED
STORAGE DISKS
DESCRIPTION
Field of the Invention

This invention relates generally to fault tolerant digital storage disk systems and more particularly to digital storage disk systems of the mirrored disk type in which reliability is provided by storing digital information in duplicate on two separate storage disks.

Background of the Invention

As the requirements for On-Line Database Transaction Processing (OLTP) grow, high transaction rates on the order of thousands of transactions per second must be supported by OLTP systems. Furthermore, these applications call for high availability and fault tolerance.

In applications such as OLTP, a large fraction of the requests are random accesses to data. Since a large fraction of the data resides on disks, the disk sub-systems must therefore support a high rate of random accesses, on the order of several thousands of random accesses per second. Furthermore, the disks need to be fault tolerant to meet the availability needs of OLTP.

Whenever a random access is made to a disk, in general the disk must rotate to a new orientation such that the desired data is under a disk arm, and the read/write head on that disk arm must also move along the arm to a new radial position at which the desired data is under the read/write head. Unfortunately, performance of this physical operation, and therefore random disk Input/Output (I/O) performance, is not improving as fast as other system parameters such as CPU MIPS. Therefore, applications such as OLTP, where random access to data predominates, have become limited by this factor, which is referred to in this art as being disk arm bound. In systems which are disk arm bound, the disk cost is becoming an ever larger fraction of the system cost. Thus, there is a need for a disk sub-system which can support a larger rate of random accesses per second with a better price-performance characteristic than is provided by traditional disk systems.

Both mirrored disk systems and RAID disk systems (for Redundant Array of Independent Disks) have been used to provide fault tolerant disk systems for OLTP. In a mirrored disk system, the information on each disk is duplicated on a second (and therefore redundant) disk. In a RAID array, the information at corresponding block locations on several disks is used to create a parity block on another disk. In the event of failure, any one of the disks in a RAID array can be reconstructed from the others in the array. RAID architectures require fewer disks for a specified storage capacity, but mirrored disks generally perform better. In an article entitled "An evaluation of redundant arrays of disks using an Amdahl 5890," SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pp. 74-85, Boulder, Colorado, May 1990, P. Chen et al. showed that mirrored disks are better than RAID-5 disk arrays for workloads with predominantly random writes (i.e., average read/write times for mirrored disk architectures are lower than for RAID-5 architectures when random read/writes predominate). RAID-5 architecture is described, for example, by D. Patterson et al. in "A case for redundant arrays of inexpensive disks," ACM SIGMOD Int'l Conf. on Management of Data, pp. 109-116, Chicago, IL (June 1988). However, mirrored disks do require that each data write be written on both disks in a mirrored pair. Thus, it is generally accepted that mirrored disk storage systems impose a performance penalty in order to provide the fault tolerance.

The traditional method for recovery in a mirrored disk system is to copy the data from the surviving disk of the mirrored pair onto a spare backup disk. This is typically done by scanning the data on the surviving disk, and applying any writes that come in during this process to both disks. One problem with this approach is that it produces a significant degradation of the disk system performance during recovery.

Summary Of The Invention

Accordingly, it is an object of this invention to improve the performance of mirrored disk systems by largely eliminating the penalty normally resulting from the need to duplicate each disk write onto both disks of a mirrored pair of disks.

It is also an object to provide a mirrored disk subsystem that has improved performance for random disk I/O by converting random disk write I/O to close to sequential I/O.

It is still another object to improve the mirrored disk throughput without a penalty in read response time.

It is also an object to improve performance during the recovery process from a failed disk, by providing guaranteed performance to disk reads and writes during recovery, while retaining fast recovery.

These and further objects and advantages are achieved by this invention by providing a fault-tolerant disk storage subsystem of the mirrored disk type in which updates (i.e., data blocks to be written) to disk are accumulated and scheduled into successive batch runs of updates, the scheduling being done to produce an ordering which can be written efficiently to the mirrored disks. The updates preferably, but not necessarily, are accumulated in a memory in the disk controller, and the scheduling is preferably, but not necessarily, done by the disk controller for the mirrored disks. Preferably the memory is either non-volatile or fault-tolerant.

In a preferred embodiment, the scheduling is done by sorting the updates in accordance with the home locations of the updates on the mirrored disks (i.e., in accordance with the positions on disk at which the updates will be written). This is an ordering which also corresponds to a scan of a disk.

The disks in each mirrored pair are then operated out of phase with each other, one being in read mode while the other is in write mode. A batch of writes is efficiently applied each time to the disk in write mode in accordance with the scheduled order. Because the updates are copied onto each disk of the mirrored pair in accordance with the physical order on the disk, good performance is achieved for applying the writes.

Random writes are thus converted to largely sequential writes to disk and clustering of data on the disk is preserved. The average time to apply a write of a block using this method is typically less than half the time to apply a random write of a block on the disk, thus largely eliminating the problem of having to write a block twice to a pair of mirrored disks.

During this time, read requests are either handled by reading the data from the memory or from the disk that is in read mode. Thus, guaranteed performance to read requests is also achieved. When a batch of updates has been applied to one of the disks of a mirrored pair (i.e., the one in write mode), the disks switch modes of operation. There may also be a period of time when both disks are in read mode between these two modes of operation. Also there may be times when both disks of a mirrored pair are in write mode, as for example during loads and other large copying operations.

Recovery from failure of one disk of a mirrored pair of disks is handled by introducing a pair of spare mirrored disks that are operated using the alternating mirrors scheme. During recovery, new writes are directed to the spare disk pair. Reads are either handled from the surviving disk or the alternating mirror spare disk pair.
In the background, spare cycles are used to scan and copy data from the surviving disk to the spare alternating mirror pair. This method provides fast recovery with guaranteed performance to both read and write requests during recovery.

Brief Description Of The Drawings

These and further objects, advantages, and features of the invention will be more apparent from the following detailed description of a preferred embodiment and the appended drawings in which:

FIG. 1 is an overall block diagram of a preferred embodiment of this invention;

FIG. 2 illustrates a preferred organization of data in the non-volatile memory of the I/O processor;

Fig. 3 is a flow chart which shows the steps involved in processing a write request during normal operation;

Fig. 4 is a flow chart which shows the steps involved in the process of applying a batch of writes to a disk during normal operation;

Fig. 5 is a timing diagram which shows the timing relation between the two processes applying write batches to two mirrored disks;

Fig. 6 is a flow chart which shows the steps involved in processing a read request during normal operation;

Fig. 7 schematically illustrates the configuration during recovery of a failed disk;

Fig. 8 is a flow chart which shows the steps involved in servicing a read request during recovery;

Fig. 9 is a flow chart which shows the steps involved in the background process that scans the survivor disk.

Description Of A Preferred Embodiment

FIG. 1 is a block diagram of a preferred embodiment of a computer system which incorporates a disk storage subsystem having mirrored storage disks which are alternately updated in a batch fashion with accumulated updates that have been sorted for efficient writing (henceforth sometimes called AMDU, for Alternating Mirrors with Deferred Updates) in accordance with this invention. It includes a controller or I/O processor (IOP) 200, a plurality of mirrored disk pairs 300-1 through 300-N, at least one spare pair of disks 400 and a central processing unit (CPU) 100. The controller 200 is connected to the CPU 100 and has a processor 210 and non-volatile memory 220. For simplicity we assume that the non-volatile memory is partitioned into regions 220-1 through 220-N, each corresponding to a mirrored pair.
Those skilled in the art will readily appreciate that the memory and controller need not constitute a separate physical subsystem as illustrated, but could instead be implemented with software running in the main computer system. Also the memory need not be non-volatile in order to achieve useful benefit from this invention and in many environments could be fault-tolerant as well (say through use of triple redundancy and a voting mechanism). The memory also need not be partitioned and there can be more than one spare pair of disks. The spare pair of disks is not used in normal operation, but is used in the event one of the mirrored disks fails.

Each mirrored disk pair consists of two disks labeled 300-a1 and 300-b1 for disk pair 300-1 (correspondingly 300-aN and 300-bN for disk pair 300-N). The two disks in each mirrored pair contain basically identical data. However, as will be better appreciated from the following description, updates to each mirrored pair of disks are NOT made simultaneously as would be the case with conventional mirrored disks. In accordance with the invention, updates are accumulated instead in the non-volatile memory 220 and sorted into batches of updates, which are applied to the two disks of a pair not simultaneously, but rather first to one and then to the other. Furthermore, while the same updates are eventually applied to each of the two disks of a pair (except for updates that have become obsolete because they have been further updated in non-volatile memory 220 before being applied to both disks), the batches of updates made to each disk individually generally are not identical since they are applied at different times and more recent blocks of data are included in the individual batch for each disk of a pair.

Fig. 2 shows one region of non-volatile memory 220-i in more detail. The region has a number of data blocks labeled 221-1 through 221-k. Corresponding to each data block are two tags labeled 222-1 and 223-1 (for block 221-1) through 222-k and 223-k (for block 221-k). The two tags for each data block correspond to the two disks in the mirrored pair and indicate whether the corresponding disk must still write the data block.

In each region of the non-volatile memory there is also a list of pointers labeled 225-1 through 225-L. Each pointer points to a data block in some region of non-volatile memory. The order of the pointers in the list indicates the order in which the blocks should be written on the disk to achieve efficiency.

The non-volatile memory acts as a cache for data blocks to be written to disk. Those skilled in the art will readily appreciate that this cache may be managed like any other cache, such as by using a hash table (to determine which blocks are present in the cache) and a free list (to chain free blocks). The hash table and the free list are not shown in Fig. 2.
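For purposes of illustration only, the cache organization just described can be modeled with a short Python sketch. The class and member names below (Region, blocks, tags, pointer_list, free) are hypothetical stand-ins for the data block frames 221, the per-disk tags 222/223 and the pointer list 225 of Fig. 2; the patent does not prescribe any particular implementation.

    # Minimal, hypothetical model of one non-volatile memory region (Fig. 2).
    class Region:
        def __init__(self, capacity):
            self.capacity = capacity      # number of block frames (221-1 .. 221-k)
            self.blocks = {}              # hash table: block address -> cached data
            self.tags = {}                # block address -> [needs_disk_a, needs_disk_b]
            self.pointer_list = []        # write schedule (225-1 .. 225-L), kept in disk order
            self.free = capacity          # free-frame count (stands in for the free list)

        def lookup(self, addr):
            """Cache lookup used by both the read path and the write path."""
            return self.blocks.get(addr)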

In the preferred subsystem in accordance with this invention, there are four processes occurring during normal operation. The first process services write requests and is shown in Fig. 3. When a write request arrives (block 510), the non-volatile memory in the controller is checked for the presence of an old version of that block (block 520 in Fig. 3). If the previous version of that block is already in the non-volatile memory region for the corresponding pair of disks, the previous version in memory is overwritten (block 530) and both tags corresponding to that block are turned on (block 560) to indicate that both disks must install the new version of that block.

If an old version of the block to be written is not found in the non-volatile memory, the controller looks for free space in its non-volatile memory in which to temporarily store the block to be written (block 540). If there is free space for the block, the data block defined by the write request is written into the free space and a pointer to the new block is inserted into the pointer list (block 550) in a position (relative to other pointers in the list) such that the list represents a schedule for efficiently writing the pointed-to data blocks to disk. The corresponding tags are turned on to indicate that both disks must perform a write of the new data block (block 560).

If the data block to be written is not in the non-volatile memory and there is no free block space available, the new block is written synchronously to both disks (block 570). This situation will not occur normally (other than maybe for loads and other very large copying operations) if the non-volatile memory is large enough to absorb heavy bursts of write activity, but the action to take in the event a write request is encountered and there is no free space in the non-volatile memory must be specified anyway.
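The write path of Fig. 3 (blocks 510 through 570) may then be sketched, purely as an illustration, against the hypothetical Region model above. The names disk_order_key and write_synchronously are assumed helper functions, and bookkeeping details such as the free list are simplified.

    import bisect

    def handle_write(region, addr, data, disk_order_key, write_synchronously):
        if addr in region.blocks:                         # block 520: old version cached?
            region.blocks[addr] = data                    # block 530: overwrite in place
            region.tags[addr] = [True, True]              # block 560: both disks must write it
        elif region.free > 0:                             # block 540: is a free frame available?
            region.blocks[addr] = data
            region.free -= 1
            # block 550: insert a pointer so the list remains an efficient write schedule
            bisect.insort(region.pointer_list, (disk_order_key(addr), addr))
            region.tags[addr] = [True, True]              # block 560
        else:
            write_synchronously(addr, data)               # block 570: no space, write through to both disks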

The list of pointers 225 defines an order or schedule for the data blocks (in the non-volatile memory) covered by that list. This order or schedule is created preferably such that if a disk accesses the blocks in that order, the total time to access all the blocks (and write them to disk) will be minimized. As a first approximation, the ordering may be by cylinder, so that a scan (sweep) through the entire disk can apply all updates in one pass. All blocks in a particular cylinder are written before moving on to the next cylinder. More elaborate schemes could order the blocks within a cylinder to minimize the rotational latency for the cylinder. More sophisticated schemes may take into account the combination of seek time and rotational delay. Such schemes are described, for example, by M. Seltzer et al. in "Disk Scheduling Revisited", Winter 1990 USENIX, pp. 313-323, Washington, D.C. (Jan. 1990).
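As one possible illustration of such a schedule, the following hypothetical Python key function orders block addresses by cylinder and then by rotational position within the cylinder; the geometry constants are assumptions introduced only for this sketch and are not taken from the patent.

    # Assumed, illustrative geometry.
    BLOCKS_PER_TRACK = 64
    TRACKS_PER_CYLINDER = 8

    def cylinder_sweep_key(block_addr):
        """First approximation from the text: group by cylinder, then by
        rotational slot within the cylinder."""
        cylinder, offset = divmod(block_addr, BLOCKS_PER_TRACK * TRACKS_PER_CYLINDER)
        rotational_slot = offset % BLOCKS_PER_TRACK
        return (cylinder, rotational_slot)

    # Pointers sorted by this key are applied in one sweep over the disk.
    schedule = sorted([8191, 12, 700, 65], key=cylinder_sweep_key)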

There are two processes (one for each disk), which periodically wake up and apply the updates pending in non-volatile memory to the corresponding disk. The logic for applying the updates is identical for the two processes and is shown in Fig. 4. When the process wakes up (block 610), it goes to the beginning of the pointer list and traverses the pointer list examining each block in order.
In each step it checks to see if it has reached the end of the list (block 620). If so, it goes to sleep (block 680) and wakes up for the next period. If there are more blocks, it checks the tag of the current block in the pointer list (block 630). If the tag corresponding to the process's disk is off, the process moves to the next pointer in the list (block 670). If the tag is on, the process writes the block on disk and turns the tag off (block 640). After turning the tag off, it checks the other tag (block 650). If the other tag is on, the process moves to the next pointer in the list (block 670). If the other tag is off, both disks have applied the update, so the pointer is removed from the list (block 660). The block is still valid (a read to that block will still get a cache hit), but it is free and can be overwritten by a subsequent write of any block. Then the process moves to the next pointer in the list (block 670).
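For illustration, the per-disk update process of Fig. 4 (blocks 610 through 680) can be sketched as follows against the hypothetical Region model; disk_index selects one disk of the pair (0 or 1) and write_block stands in for the physical write.

    def apply_batch(region, disk_index, write_block):
        # Traverse the pointer list in scheduled order (blocks 610/620).
        for key, addr in list(region.pointer_list):
            tags = region.tags[addr]
            if not tags[disk_index]:                      # block 630: this disk already wrote it
                continue                                  # block 670: next pointer
            write_block(addr, region.blocks[addr])        # block 640: write and clear own tag
            tags[disk_index] = False
            if not tags[1 - disk_index]:                  # block 650: has the other disk written it too?
                region.pointer_list.remove((key, addr))   # block 660: frame becomes reusable
                region.free += 1                          # (the block itself stays cached and readable)
        # block 680: end of list reached; the process sleeps until its next period.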

The two processes applying the updates to the two disks have the same logic and preferably the same period, i.e., the time that elapses between two consecutive activations of the write phase of the process preferably is the same for the two processes. This period is called T in the described embodiment. The two processes need not be synchronized, but they are illustrated at a phase difference of 180 degrees in Fig. 5. The process for disk a wakes up at times 0, T, 2T, 3T etc., and begins to write a batch of updates to disk a until completed and then switches to read mode, while the process for disk b wakes up at times T/2, T + T/2, 2T + T/2, 3T + T/2, etc., and begins to write a batch of updates to disk b until completed and then switches to read mode. Figure 5 illustrates this with a timing diagram, where the high value indicates time periods during which the writing process is active for the corresponding disk. While the writing process is asleep (inactive), the corresponding disk may service random read traffic.

As illustrated in Figure 5, this results in three different controller modes, namely controller mode 1 where disk a is in write mode and disk b is in read mode, controller mode 2 where disk b is in write mode and disk a is in read mode, and controller mode 3 where both disk a and disk b are in read mode. As mentioned earlier in connection with the description of block 570 of Figure 3, there is also a controller mode 4 where both disk a and disk b are in write mode. Controller mode 4 cannot occur so long as the batch write completes each time in less than time T/2. The system is preferably designed so that the situation where both disks are in write mode simultaneously is largely avoided, which is done by making the design such that batch writes will complete in less than time T/2.

Keeping the two processes at a phase difference of 180 degrees ensures that if writes can be applied in less than half a period, there is always one disk arm dedicated to servicing random reads, which allows batches to become large (so that writes can gain efficiency), without hurting response time for reads. The period T is a system-dependent parameter, primarily determined by the amount of memory available, since the writes accumulating in a period T should fit in memory.
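The 180-degree phase relationship of Fig. 5 can be illustrated with the following sketch, which reports the controller mode at a given time under the assumption of a fixed batch-write duration shorter than T/2; the numeric values are arbitrary examples, not values taken from the patent.

    T = 10.0        # period between write-phase activations (system dependent)
    BATCH = 4.0     # assumed time to apply one batch, designed to be < T/2

    def controller_mode(t):
        """Disk a wakes at 0, T, 2T, ...; disk b wakes at T/2, 3T/2, ..."""
        a_writing = (t % T) < BATCH
        b_writing = ((t - T / 2) % T) < BATCH
        if a_writing and not b_writing:
            return 1    # mode 1: disk a writes, disk b serves reads
        if b_writing and not a_writing:
            return 2    # mode 2: disk b writes, disk a serves reads
        if not a_writing and not b_writing:
            return 3    # mode 3: both disks serve reads
        return 4        # mode 4: both write (avoided when BATCH < T/2)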

The logic for the process servicing read requests is shown in Figure 6. When a read request arrives (block 810), the non-volatile memory is checked (block 820) for the presence of the block to be read. If the block is in memory, it is returned immediately (block 830). If it is not in memory, a check is made (block 840) as to whether both disks are currently servicing read requests (i.e., whether the controller is in controller mode 3). If not, the request is served by the disk that is currently in read-only mode (block 850), i.e., the disk whose write process is inactive. If both disks are in read-only mode (i.e., controller mode 3), the request may be serviced by either disk, but preferably will be serviced by the disk whose heads are closest to the target block (block 860).
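For illustration, the read path of Fig. 6 (blocks 810 through 860) may be sketched as follows; read_from_disk and closest_disk are assumed helpers standing in for the physical read and the head-position test, and mode is the controller mode defined above.

    def handle_read(region, addr, mode, read_from_disk, closest_disk):
        data = region.blocks.get(addr)
        if data is not None:                                 # blocks 820/830: cache hit
            return data
        if mode == 1:                                        # disk a is writing: serve from disk b
            return read_from_disk('b', addr)                 # block 850
        if mode == 2:                                        # disk b is writing: serve from disk a
            return read_from_disk('a', addr)
        # mode 3: both disks idle for writes; prefer the disk with the closer head
        return read_from_disk(closest_disk(addr), addr)      # block 860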

Those skilled in the art will readily appreciate that some routine synchronization (e.g., latching) is required to preserve the integrity of the shared data structures (e.g., tags, pointer list) accessed simultaneously by more than one of the above processes. Also, the pointer order that minimizes the time to write a batch may be different for the two disks, since each generally writes a different subset of the blocks stored in the non-volatile memory.

Operation under a failure scenario will now be described and is illustrated in Fig. 7. Assume that disk 300-bg in mirrored pair 300-g fails. Traditional recovery schemes use one replacement disk, onto which the contents of the surviving disk 300-ag are copied to replace the lost mirrored disk and therefore restore the mirrored pair. The preferred recovery scheme in accordance with this invention utilizes a pair of replacement disks 400a and 400b, rather than just a single replacement disk. When recovery completes, the disks in pair 400 are up-to-date, and the survivor 300-ag is returned to the system for other use. For the duration of recovery, the survivor stays in read-only mode. The survivor does not get involved in servicing writes. Those skilled in the art will readily appreciate that a bit map (labeled 230 in Figure 7) stored in the non-volatile memory can be used to keep track of which blocks remain to be retrieved from the survivor before recovery completes. The bit map has one bit per disk block and all bits are clear when recovery starts. Alternatively, the bit map can be stored in other memory components of the system.

In total, there are five processes involved in recovery.
Two of the processes (one for each disk) periodically wake up and apply writes pending in the cache to the corresponding disk. The processes have the same period and are maintained at a phase difference of 180 degrees.
The logic of these processes is identical to that for normal operation shown in Fig. 4. The third process services reads and is shown in Figure 8. When a read request arrives (block 910), the memory is checked (block 920) for the presence of the block to be read. If the block is present in memory, it is returned immediately (block 930). If the block is not in the non-volatile memory, it must be read from disk. The bit map is first checked to see if the block is available on the replacement disks (block 940). If not, the read is serviced by the survivor (block 960). After the block is read from the survivor, the process checks if there is free space in the non-volatile memory (block 970). If not, the process ends.
If there is free space in the non-volatile memory, the block is also placed in the non-volatile memory, a pointer is inserted in the list and both tags are turned on (block 980), so that the disks will write it in their next write phase. Furthermore, the bit in the bit map corresponding to that block is turned on to indicate that there is no longer a need to extract that block from the survivor.

If on a read request the block is not in non-volatile memory and the bit map shows that the block is available on the replacement disks, the block is read from one of the replacements. The process preferably checks how many replacements are in read mode at that moment (block 950).
If only one replacement is in read mode, the request is served by that replacement (block 954). If both replacements are in read mode, the request preferably is serviced by the disk whose arm is closest to the requested block (block 952). If the block is read from a replacement disk, there is no need to update the bit map or store the block in the cache.
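The recovery-time read path of Fig. 8 (blocks 910 through 980) can likewise be sketched against the hypothetical Region model; bitmap, read_survivor, read_replacement and pick_replacement are names introduced only for this illustration and are not taken from the patent.

    import bisect

    def handle_recovery_read(region, bitmap, addr, read_survivor, read_replacement,
                             pick_replacement, disk_order_key):
        data = region.blocks.get(addr)
        if data is not None:                                  # blocks 920/930: cache hit
            return data
        if bitmap.get(addr):                                  # block 940: already on the replacements
            return read_replacement(pick_replacement(addr), addr)   # blocks 950/952/954
        data = read_survivor(addr)                            # block 960: read from the survivor
        if region.free > 0:                                   # block 970: stage it for the replacements
            region.blocks[addr] = data
            region.free -= 1
            bisect.insort(region.pointer_list, (disk_order_key(addr), addr))
            region.tags[addr] = [True, True]                  # block 980: both replacements must write it
            bitmap[addr] = True                               # survivor need not be scanned for this block
        return data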

The fourth process services writes during recovery and involves exactly the same steps shown in Fig. 3. In addition, in all cases, the bit corresponding to the block written is set in the bit map (if not already set) to indicate that it is no longer necessary to copy that block from the survivor.

The fifth process (shown in Fig. 9) is a background process that scans the blocks in the survivor that have not yet been written on the replacements. The process is started (block 1000) when the system enters the recovery mode and the replacement disks are activated. The process waits until the survivor becomes idle (block 1010), i.e., until there are no random read requests pending for it. Then, it checks if there are unscanned blocks (block 1020), i.e., blocks for which the bit in the bit map has not been set.
If all blocks have been scanned, recovery is complete and the process terminates (block 1060); the survivor can be returned to the system for other use. If there are unscanned blocks, the process checks if there is free space in the non-volatile memory (block 1030). If not, it goes to sleep (block 1040) for a certain interval. If there is free space in the non-volatile memory, the process reads the unscanned block which is closest to the current position of the survivor's head (block 1050). The bit map is used to determine which blocks are unscanned. The block read is placed in the non-volatile memory, and both of its tags are turned on to indicate that the replacements must write the block. A pointer is also inserted in the pointer list. Furthermore, the corresponding bit in the bit map is set to indicate that the survivor does not need to scan that block again. The process then repeats the above steps (goes to block 1010). If a random read arrives, the process is suspended in block 1010 until the read completes.
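Purely as an illustration, the background scan of Fig. 9 (blocks 1000 through 1060) might be sketched as follows; survivor_idle, nearest_unscanned and read_survivor are assumed helpers, total_blocks is the number of blocks on the survivor, and the polling loop is a simplification of the wake/sleep behavior described above.

    import bisect
    import time

    def background_scan(region, bitmap, total_blocks, survivor_idle, nearest_unscanned,
                        read_survivor, disk_order_key, nap=0.05):
        while True:
            while not survivor_idle():                    # block 1010: yield to random reads
                time.sleep(nap)
            if all(bitmap.get(b) for b in range(total_blocks)):
                return                                    # block 1060: recovery is complete
            if region.free == 0:                          # block 1030: no room to stage a block
                time.sleep(nap)                           # block 1040: sleep and retry
                continue
            addr = nearest_unscanned(bitmap)              # block 1050: block closest to the head
            data = read_survivor(addr)
            region.blocks[addr] = data                    # stage it for the replacement pair
            region.free -= 1
            region.tags[addr] = [True, True]
            bisect.insort(region.pointer_list, (disk_order_key(addr), addr))
            bitmap[addr] = True                           # survivor need not scan this block again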

Those skilled in the art will readily appreciate that there are opportunistic strategies which the survivor disk can use to further expedite the recovery process. For example, whenever the survivor disk services a random read request, it could also read any unscanned (i.e., uncopied) blocks that happen to pass under its arm while it is waiting for the disk to rotate to the targeted block.
Furthermore, the process shown in Fig. 9 obviously could read more than one block at a time.

Claims (18)

1. A fault-tolerant disk storage subsystem for storing data blocks of digital information for a computer system, comprising:
a mirrored pair of disks for storing data blocks of digital information in duplicate on both disks of said mirrored pair;
a controller for said mirrored pair of disks, said controller having a memory;
said controller temporarily accumulating data blocks provided by said computer system in said memory until storage thereof in duplicate on both disks of said mirrored pair;
said controller scheduling said data blocks accumulated in said memory in an order for efficient writing to said mirrored pair of disks;
said controller having a first mode of operation in which one disk of said mirrored pair is in write mode while the other disk of said mirrored pair is in read mode and a second mode of operation in which said one disk is in read mode while said other disk is in write mode;
said controller copying from said memory onto said one disk during said first mode of operation data blocks in scheduled order that have not been already written to said one disk and copying from said memory onto said second disk during said second mode of operation data blocks in scheduled order that have not been already written to said other disk;
said controller operating said mirrored pair of disks in said first mode of operation during spaced time intervals and in said second mode of operation during at least a portion of the time between said spaced time intervals; and said controller providing a requested data block to said computer system from said memory if said requested data block is in said memory, and otherwise from said other disk if said mirrored pair of disks is operating in said first mode of operation and from said one disk if said mirrored pair of disks is operating in said second mode of operation, whereby data blocks are written onto said mirrored pair of disks in scheduled order without interference from or with the reading of data blocks requested by said computer system.
2. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said controller is implemented by software in said computer system and said memory of said controller is a portion of the general storage resources of said computer system.
3. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said controller and memory are implemented with dedicated hardware.
4. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said memory is non-volatile.
5. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said memory is fault-tolerant.
6. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said first mode of operation is scheduled to start periodically.
7. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said data blocks are scheduled in an order which corresponds to a physical scan of an entire disk.
8. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said controller makes a single scan through accumulated data blocks in scheduled order during said first mode of operation before changing modes.
9. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said controller also has a third mode of operation during which both of said disks of said mirrored pair are in read mode and a requested data block may be retrieved from either of said disks of said mirrored pair in the event said requested data block is not in said memory.
10. A fault-tolerant disk storage subsystem as defined in Claim 9 wherein said controller operates said mirrored pair of disks in said third mode of operation whenever said first mode of operation is terminated prior to a next scheduled start of said second mode of operation and whenever said second mode of operation is terminated prior to a next scheduled start of said first mode of operation.
11. A fault-tolerant disk storage subsystem as defined in Claim 9 wherein during said third mode of operation, when a data block is requested by said computer system and is not present in said memory, said controller determines which one of said disks in said mirrored pair can deliver said requested data block in the shortest time and retrieves said requested data block from said determined disk.
12. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said controller also has a fourth mode of operation during which both of said disks of said mirrored pair are operated in write mode.
13. A fault-tolerant disk storage subsystem as defined in Claim 9 wherein said controller operates said mirrored pair of disks in said fourth mode of operation whenever all data blocks in scheduled order at the start of said first mode of operation have not been written to said one disk by the time said second mode of operation is scheduled to start again and whenever all data blocks in scheduled order at the start of said second mode of operation have not been written to said other disk by the time said first mode of operation is scheduled to start again.
14. A fault-tolerant disk storage subsystem as defined in Claim 1 wherein said subsystem includes a spare pair of storage disks and in the event that only one of said disks of said mirrored pair remains operational, said controller augments said mirrored pair with said spare pair during a recovery mode of operation.
15. A fault-tolerant disk storage subsystem as defined in Claim 14 wherein said controller during said recovery mode of operation places said disk that remains operational in read mode continuously until all blocks on said remaining operational disk have been transferred either to said memory or to one or both of the disks of said spare pair, at which time said spare pair replaces said mirrored pair, said remaining operational disk becomes a spare disk and said recovery mode of operation terminates.
16. A method of storing data blocks of digital information received from a computer system in a storage subsystem having a mirrored pair of storage disks and for retrieving data blocks from said storage subsystem upon request from said computer system, comprising the steps of:
temporarily accumulating a group of data blocks to be stored;
scheduling said accumulated group of data blocks in an order for efficient writing to said mirrored pair of disks;
operating said mirrored pair of disks in a first mode of operation in which one disk of said mirrored pair is in write mode while the other disk of said mirrored pair is in read mode and in a second mode of operation in which said one disk is in read mode while said other disk is in write mode;
copying onto said one disk during said first mode of operation ordered data blocks in said accumulated group that have not been already written to said one disk;
copying onto said other disk during said second mode of operation ordered data blocks in said accumulated group that have not been already written to said other disk;
operating said mirrored pair of disks in said first mode of operation during spaced time intervals and in said second mode of operation during at least a portion of the time between said spaced time intervals; and retrieving a data block requested by said computer system from said accumulated group if said requested data block is in said accumulated group, and otherwise from said other disk if said mirrored pair of disks is operating in said first mode of operation and from said one disk if said mirrored pair of disks is operating in said second mode of operation, whereby data blocks are written onto said mirrored pair of disks in scheduled order without interference from or with the reading of data blocks requested by said computer system.
17. A method of storing data blocks of digital information as defined in Claim 16 and further comprising the step of deleting from said accumulated group any data blocks that have been written to both of said disks of said mirrored pair.
18. A method as defined in Claim 17 and further comprising the step of associating first and second flags with each accumulated data block in said group, said first flag associated with any particular data group indicating whether or not said particular data group has been copied to said one disk and said second flag associated with said particular data group indicating whether or not said particular data group has been copied to said other disk.
CA002125201A 1993-08-23 1994-06-06 Digital storage system and method having alternating deferred updating of mirrored storage disks Abandoned CA2125201A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/110,467 1993-08-23
US08/110,467 US5432922A (en) 1993-08-23 1993-08-23 Digital storage system and method having alternating deferred updating of mirrored storage disks

Publications (1)

Publication Number Publication Date
CA2125201A1 true CA2125201A1 (en) 1995-02-24

Family

ID=22333166

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002125201A Abandoned CA2125201A1 (en) 1993-08-23 1994-06-06 Digital storage system and method having alternating deferred updating of mirrored storage disks

Country Status (8)

Country Link
US (1) US5432922A (en)
EP (1) EP0642081A3 (en)
JP (1) JP3053511B2 (en)
KR (1) KR0131554B1 (en)
CN (1) CN1096639C (en)
BR (1) BR9403306A (en)
CA (1) CA2125201A1 (en)
SG (1) SG42854A1 (en)

Families Citing this family (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US5511177A (en) * 1991-11-21 1996-04-23 Hitachi, Ltd. File data multiplexing method and data processing system
US5729718A (en) * 1993-11-10 1998-03-17 Quantum Corporation System for determining lead time latency as function of head switch, seek, and rotational latencies and utilizing embedded disk drive controller for command queue reordering
JP3243919B2 (en) * 1994-02-23 2002-01-07 三菱電機株式会社 Disk device and FA controller using this disk device
JP2708386B2 (en) * 1994-03-18 1998-02-04 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and apparatus for recovering duplicate database through simultaneous update and copy procedure
JP3085085B2 (en) * 1994-05-09 2000-09-04 三菱電機株式会社 Data access device and distributed database system
DE69533764T2 (en) * 1994-06-22 2005-12-01 Hewlett-Packard Development Co., L.P., Houston A method of using different content storage disks in a single volume of a hierarchical disk array
JPH086843A (en) * 1994-06-23 1996-01-12 Fujitsu Ltd Semiconductor storage device
EP0702302B1 (en) * 1994-09-16 2001-12-05 International Computers Limited Disk array for data storage
US5764903A (en) * 1994-09-26 1998-06-09 Acer America Corporation High availability network disk mirroring system
EP0707267A3 (en) * 1994-10-12 1996-07-03 Ibm Redundant array of disk drives with asymmetric mirroring and asymmetric mirroring data processing method
JPH08152976A (en) * 1994-11-28 1996-06-11 Fujitsu Ltd Access method for storage device
US6067635A (en) * 1995-10-27 2000-05-23 Lsi Logic Corporation Preservation of data integrity in a raid storage device
US5708771A (en) * 1995-11-21 1998-01-13 Emc Corporation Fault tolerant controller system and method
US5805787A (en) * 1995-12-29 1998-09-08 Emc Corporation Disk based disk cache interfacing system and method
US5765172A (en) * 1996-01-23 1998-06-09 Dsc Communications Corporation System and method for verifying integrity of replicated databases
DE19607101A1 (en) * 1996-02-24 1997-08-28 Hella Kg Hueck & Co Electronic device and device for data transmission between two identical electronic devices
US5819310A (en) * 1996-05-24 1998-10-06 Emc Corporation Method and apparatus for reading data from mirrored logical volumes on physical disk drives
US5933653A (en) * 1996-05-31 1999-08-03 Emc Corporation Method and apparatus for mirroring data in a remote data storage system
US6101497A (en) * 1996-05-31 2000-08-08 Emc Corporation Method and apparatus for independent and simultaneous access to a common data set
US6092066A (en) * 1996-05-31 2000-07-18 Emc Corporation Method and apparatus for independent operation of a remote data facility
US5917998A (en) * 1996-07-26 1999-06-29 International Business Machines Corporation Method and apparatus for establishing and maintaining the status of membership sets used in mirrored read and write input/output without logging
US6567903B1 (en) * 1996-08-23 2003-05-20 Emc Corporation Data storage system having master/slave addressable memories
US6347365B1 (en) 1996-08-23 2002-02-12 Emc Corporation Data storage system having a[n] memory responsive to clock pulses produced on a bus and clock pulses produced by an internal clock
US6009481A (en) * 1996-09-30 1999-12-28 Emc Corporation Mass storage system using internal system-level mirroring
US5991852A (en) * 1996-10-28 1999-11-23 Mti Technology Corporation Cache ram using a secondary controller and switching circuit and improved chassis arrangement
US5950229A (en) * 1997-03-12 1999-09-07 Micron Electronics, Inc. System for accelerating memory bandwidth
US7389312B2 (en) * 1997-04-28 2008-06-17 Emc Corporation Mirroring network data to establish virtual storage area network
US6148368A (en) * 1997-07-31 2000-11-14 Lsi Logic Corporation Method for accelerating disk array write operations using segmented cache memory and data logging
US6112257A (en) * 1997-09-24 2000-08-29 Emc Corporation Dynamic adjustment of mirror service policy for logical volumes in a disk drive system based on collected statistics
US6105103A (en) * 1997-12-19 2000-08-15 Lsi Logic Corporation Method for mapping in dynamically addressed storage subsystems
JP3866862B2 (en) * 1998-06-05 2007-01-10 パイオニア株式会社 Information recording apparatus and method
US6167531A (en) * 1998-06-18 2000-12-26 Unisys Corporation Methods and apparatus for transferring mirrored disk sets during system fail-over
US6230240B1 (en) * 1998-06-23 2001-05-08 Hewlett-Packard Company Storage management system and auto-RAID transaction manager for coherent memory map across hot plug interface
JP2000132915A (en) * 1998-10-22 2000-05-12 Matsushita Electric Ind Co Ltd Disc device and data processing method therefor
US6209002B1 (en) 1999-02-17 2001-03-27 Emc Corporation Method and apparatus for cascading data through redundant data storage units
EP1039387B1 (en) 1999-03-19 2010-05-19 Hitachi, Ltd. System for replicating data
DE60043873D1 (en) * 1999-06-01 2010-04-08 Hitachi Ltd Method for data backup
US6463573B1 (en) * 1999-06-03 2002-10-08 International Business Machines Corporation Data processor storage systems with dynamic resynchronization of mirrored logical data volumes subsequent to a storage system failure
US6754682B1 (en) * 2000-07-10 2004-06-22 Emc Corporation Method and apparatus for enabling consistent ancillary disk array storage device operations with respect to a main application
US6977927B1 (en) 2000-09-18 2005-12-20 Hewlett-Packard Development Company, L.P. Method and system of allocating storage resources in a storage area network
US7386610B1 (en) 2000-09-18 2008-06-10 Hewlett-Packard Development Company, L.P. Internet protocol data mirroring
US6804819B1 (en) 2000-09-18 2004-10-12 Hewlett-Packard Development Company, L.P. Method, system, and computer program product for a data propagation platform and applications of same
US7032119B2 (en) * 2000-09-27 2006-04-18 Amphus, Inc. Dynamic power and workload management for multi-server system
US6820182B1 (en) 2000-10-18 2004-11-16 International Business Machines Corporation Support for exhaustion recovery in a data processing system with memory mirroring
US7133233B1 (en) * 2000-10-24 2006-11-07 Maxtor Corporation Disk drive with read while write capability
US6606690B2 (en) 2001-02-20 2003-08-12 Hewlett-Packard Development Company, L.P. System and method for accessing a storage area network as network attached storage
JP2003044230A (en) * 2001-05-23 2003-02-14 Hitachi Ltd Storage system
US6993677B1 (en) * 2001-06-14 2006-01-31 Lsi Logic Corporation System and method for data verification in a RAID system
US6647460B2 (en) 2001-07-13 2003-11-11 Hitachi, Ltd. Storage device with I/O counter for partial data reallocation
US7139932B2 (en) * 2002-01-03 2006-11-21 Hitachi, Ltd. Data synchronization of multiple remote storage after remote copy suspension
US6745303B2 (en) * 2002-01-03 2004-06-01 Hitachi, Ltd. Data synchronization of multiple remote storage
US7069465B2 (en) * 2002-07-26 2006-06-27 International Business Machines Corporation Method and apparatus for reliable failover involving incomplete raid disk writes in a clustering system
US7210010B2 (en) * 2002-09-30 2007-04-24 Insignia Solutions Plc Efficient system and method for updating a memory device
US7107483B2 (en) * 2002-11-15 2006-09-12 Lsi Logic Corporation Apparatus and method for enhancing data availability by leveraging primary/backup data storage volumes
US6934805B2 (en) 2003-05-19 2005-08-23 International Business Machines Corporation Method and apparatus for managing computer storage devices for improved operational availability
JP4512386B2 (en) * 2004-03-05 2010-07-28 株式会社日立製作所 Backup system and method
US7492953B2 (en) * 2004-06-17 2009-02-17 Smith Micro Software, Inc. Efficient method and system for reducing update requirements for a compressed binary image
EP1934727B1 (en) * 2005-08-23 2019-01-16 Red Bend Ltd. Method and system for in-place updating content stored in a storage device
JP4472617B2 (en) * 2005-10-28 2010-06-02 富士通株式会社 RAID system, RAID controller and rebuild / copy back processing method thereof
US20070180206A1 (en) * 2006-01-18 2007-08-02 Craft Julie L Method of updating a duplicate copy of an operating system on the same disk
US7707356B2 (en) * 2006-09-28 2010-04-27 Agere Systems Inc. Method and apparatus for scheduling disk read requests
US7694171B2 (en) * 2006-11-30 2010-04-06 Lsi Corporation Raid5 error recovery logic
US7882304B2 (en) * 2007-04-27 2011-02-01 Netapp, Inc. System and method for efficient updates of sequential block storage
US8375187B1 (en) * 2007-09-27 2013-02-12 Emc Corporation I/O scheduling for flash drives
US7827441B1 (en) * 2007-10-30 2010-11-02 Network Appliance, Inc. Disk-less quorum device for a clustered storage system
JP5505329B2 (en) * 2011-02-22 2014-05-28 日本電気株式会社 Disk array device and control method thereof
US20120226935A1 (en) * 2011-03-03 2012-09-06 Nitin Kishore Virtual raid-1 drive as hot spare
US20130166865A1 (en) * 2011-12-22 2013-06-27 Alex Lemberg Systems and Methods for Managing Parallel Access to Multiple Storage Systems
US8625386B2 (en) * 2012-03-22 2014-01-07 Seiko Epson Corporation Non-volatile memory device, circuit board, printing material container and printer
JP5843010B2 (en) * 2012-06-25 2016-01-13 富士通株式会社 Storage control device, storage control method, and storage control program
JP6605786B2 (en) * 2014-03-18 2019-11-13 日本電気株式会社 Redundant storage control device, redundant storage control method, and redundant storage control program
JP6366013B2 (en) * 2014-04-30 2018-08-01 パナソニックIpマネジメント株式会社 Image recording apparatus and image recording method
CN106897311B (en) * 2015-12-21 2020-08-11 财团法人工业技术研究院 Database batch updating method, data reduction log generating method and storage device
CN106933700B (en) * 2015-12-31 2021-12-03 伊姆西Ip控股有限责任公司 Method and apparatus for restoring data backup between memories
US10452301B1 (en) 2016-09-29 2019-10-22 Amazon Technologies, Inc. Cluster-based storage device buffering
CN110096216B (en) * 2018-01-30 2022-06-14 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing data storage in a data storage system
US11226990B2 (en) * 2018-07-20 2022-01-18 Ionir Systems Ltd. System and method for operating a digital storage system
CN111338559B (en) * 2018-12-19 2023-11-10 深圳市茁壮网络股份有限公司 Data processing method, user module, information management module and system
US11749335B2 (en) * 2020-11-03 2023-09-05 Jianzhong Bi Host and its memory module and memory controller

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS57168360A (en) * 1981-03-31 1982-10-16 Fujitsu Ltd Magnetic recording controlling system
JPS59135563A (en) * 1983-01-24 1984-08-03 Hitachi Ltd Computer system having disk cache device
JPH0724009B2 (en) * 1985-04-27 1995-03-15 日本電気株式会社 Double data writing method by disk controller
US4722085A (en) * 1986-02-03 1988-01-26 Unisys Corp. High capacity disk storage system having unusually high fault tolerance level and bandpass
US5230073A (en) * 1986-07-21 1993-07-20 Bell Communications Research, Inc. System and method for accessing and updating a continuously broadcasted stored database
JPH0258140A (en) * 1988-08-24 1990-02-27 Nec Corp Volume trouble correcting system
JPH02280221A (en) * 1989-04-21 1990-11-16 Mitsubishi Electric Corp System for controlling disk
JPH03266046A (en) * 1990-03-16 1991-11-27 Nippon Telegr & Teleph Corp <Ntt> Data base high speed access method
JPH0421021A (en) * 1990-05-14 1992-01-24 Fujitsu Ltd Data backup system for semiconductor disk device
JPH04102148A (en) * 1990-08-22 1992-04-03 Hitachi Ltd Data processor
JPH04125725A (en) * 1990-09-17 1992-04-27 Hitachi Ltd Magnetic tape controller and its control method
US5212784A (en) * 1990-10-22 1993-05-18 Delphi Data, A Division Of Sparks Industries, Inc. Automated concurrent data backup system
JPH04195646A (en) * 1990-11-28 1992-07-15 Hitachi Ltd Multiplexed system
JPH04216152A (en) * 1990-12-17 1992-08-06 Nec Kyushu Ltd Backup controller
US5297258A (en) * 1991-11-21 1994-03-22 Ast Research, Inc. Data logging for hard disk data storage systems
JPH05165579A (en) * 1991-12-17 1993-07-02 Yokogawa Electric Corp Mirrored disk controller

Also Published As

Publication number Publication date
KR0131554B1 (en) 1998-04-24
SG42854A1 (en) 1997-10-17
EP0642081A3 (en) 1995-03-22
KR950006577A (en) 1995-03-21
EP0642081A2 (en) 1995-03-08
CN1096639C (en) 2002-12-18
JP3053511B2 (en) 2000-06-19
US5432922A (en) 1995-07-11
CN1102896A (en) 1995-05-24
BR9403306A (en) 1995-06-20
JPH0764870A (en) 1995-03-10

Similar Documents

Publication Publication Date Title
US5432922A (en) Digital storage system and method having alternating deferred updating of mirrored storage disks
US6604172B2 (en) Disk array device with selectable method for generating redundant data
US5210866A (en) Incremental disk backup system for a dynamically mapped data storage subsystem
CA2080579C (en) Logical track write scheduling system for a parallel disk drive array data storage subsystem
US5546535A (en) Multiple controller sharing in a redundant storage array
US7159150B2 (en) Distributed storage system capable of restoring data in case of a storage failure
US6145091A (en) Storage unit subsystem
US6704839B2 (en) Data storage system and method of storing data
JP2501752B2 (en) Storage device of computer system and method of storing data
US6922752B2 (en) Storage system using fast storage devices for storing redundant data
EP0606714A1 (en) DASD array
US20030097533A1 (en) Storage device, backup method and computer program code of this storage device
US6397348B1 (en) Redundant array of disk drives with asymmetric mirroring and asymmetric mirroring data processing method
US6301711B1 (en) System and method for the network support of full motion video using a redundant array of inexpensive disks
US6874101B2 (en) Storage unit subsystem
JPH09265435A (en) Storage device system
JPH06119121A (en) Disk array device
GB2298306A (en) A disk array and tasking means
Quinn RAID-S Technical Overview: RAID 4 and 5-Compliant Hardware and Software Functionality Improves Data Availability Through Use of XOR-Capable Disks in an Integrated Cached Disk Army
MXPA99003253A (en) Expansion of the number of drives in a raid set while maintaining integrity of migrated data

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued