WO1990000280A1 - Disk drive memory - Google Patents

Disk drive memory Download PDF

Info

Publication number
WO1990000280A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
disk drives
data file
disk drive
parity
Prior art date
Application number
PCT/US1989/001677
Other languages
French (fr)
Inventor
Robert Henry Dunphy, Jr.
Robert Walsh
John Henry Bowers
Original Assignee
Storage Technology Corporation
Priority date
Filing date
Publication date
Application filed by Storage Technology Corporation filed Critical Storage Technology Corporation
Priority to DE68919219T priority Critical patent/DE68919219T2/en
Priority to EP89906506A priority patent/EP0422030B1/en
Publication of WO1990000280A1 publication Critical patent/WO1990000280A1/en

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08: Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10: Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1076: Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08: Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10: Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1008: Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's, in individual solid state devices
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00: Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10: Digital recording or reproducing
    • G11B 20/18: Error detection or correction; Testing, e.g. of drop-outs
    • G11B 20/1833: Error detection or correction; Testing, e.g. of drop-outs, by adding special lists or symbols to the coded information
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 5/00: Recording by magnetisation or demagnetisation of a record carrier; Reproducing by magnetic means; Record carriers therefor
    • G11B 5/012: Recording on, or reproducing or erasing from, magnetic disks

Definitions

  • This invention relates to computer systems and, in particular, to an inexpensive, high performance, high reliability disk drive memory for use with a computer system.
  • the typical commercially available disk drive is a 14-inch form factor unit, such as the IBM 3380J disk drive, that can store on the order of 1.2 gigabytes of data.
  • the associated central processing unit stores data files on the disk drive memory by writing the entire data file onto a single disk drive. It is obvious that the failure of a single disk drive can result in the loss of a significant amount of data. In order to minimize the possibility of this occurring, the disk drives are built to be high reliability units. The cost of reliability is high in that the resultant disk drive is a very expensive unit.
  • An alternative to the large form factor disk drives for storing data is the use of a multiplicity of small form factor disk drives interconnected in a parallel array.
  • Such an arrangement is the Micropolis Parallel Drive Array, Model 1804 SCSI that uses four, parallel, synchronized disk drives and one redundant parity drive.
  • This arrangement uses parity protection, provided by the parity drive, to increase data reliability. The failure of one of the four data disk drives can be recovered from by the use of the parity bits stored on the parity disk drive.
  • a similar system is disclosed in U.S. Patent No. 4,722,085 wherein a high capacity disk drive memory is disclosed.
  • This disk drive memory uses a plurality of relatively small, independently operating disk subsystems to function as a large, high capacity disk drive having an unusually high fault tolerance and a very high data transfer bandwidth.
  • a data organizer adds seven error check bits to each 32 bit data word to provide error checking and error correction capability.
  • the resultant 39 bit word is written, one bit per disk drive, on to 39 disk drives.
  • the remaining 38 bits of the stored 39 bit word can be used to reconstruct the 32 bit data word on a word-by-word basis as each data word is read from memory, thereby obtaining fault tolerance.
  • the disk drive memory of the present invention uses a large plurality of small form factor disk drives to implement an inexpensive, high performance, high reliability disk drive memory that emulates the format and capability of large form factor disk drives.
  • the plurality of disk drives are switchably interconnectable to form parity groups of N+1 parallel connected disk drives to store data thereon.
  • the N+1 disk drives are used to store the N segments of each data word plus a parity segment.
  • a pool of backup disk drives is maintained to automatically substitute a replacement disk drive for a disk drive in a parity group that fails during operation.
  • the pool of backup disk drives provides high reliability at low cost.
  • Each disk drive is designed so that it can detect a failure in its operation, which allows the parity segment to be used not only for error detection but also for error correction.
  • Identification of the failed disk drive provides information on the bit position of the error in the data word and the parity data provides information to correct the error itself.
  • a backup disk drive from the shared pool of backup disk drives is automatically switched in place of the failed disk drive.
  • Control circuitry reconstructs the data stored on the failed disk drive, using the remaining N-1 segments of each data word plus the associated parity segment.
  • a failure in the parity segment does not require data reconstruction, but necessitates regeneration of the parity information.
  • the reconstructed data is then written onto the substitute disk drive.
  • the use of backup disk drives increases the reliability of the N+1 parallel disk drive architecture while the use of a shared pool of backup disk drives minimizes the cost of providing the improved reliability.
  • This architecture of a large pool of switchably interconnectable, small form factor disk drives also provides great flexibility to control the operational characteristics of the disk drive memory.
  • the reliability of the disk drive memory system can be modified by altering the assignment of disk drives from the backup pool of disk drives to the data storage disk drive parity groups.
  • the size of the parity group is controllable, thereby enabling a mixture of parity group sizes to be concurrently maintained in the disk drive memory.
  • Various parity groups can be optimized for different performance characteristics.
  • the data transfer rate is proportional to the number of disk drives in the parity group; as the size of the parity group increases, the number of parity drives and spare drives available in the spare pool decreases; and as the size of the parity group increases the number of physical actuators per virtual actuator decreases.
  • the data transmitted by the associated central processing unit is used to generate parity information.
  • the data and parity information is written across N+1 disk drives in the disk drive memory.
  • a number of disk drives are maintained in the disk drive memory as spare or backup units, which backup units are automatically switched on line in place of disk drives that fail.
  • Control software is provided to reconstruct the data that was stored on a failed disk drive and to write this reconstructed data onto the backup disk drive that is selected to replace the failed disk drive unit.
  • a control module in the disk drive memory divides the received data into a plurality (N) of segments.
  • the control module also generates a parity segment that represents parity data that can be used to reconstruct one of the N segments of the data if one segment is inadvertently lost due to a disk drive failure.
  • a disk drive manager in the disk drive memory selects N+1 disk drives from the plurality of disk drives in the disk drive memory to function as a parity group on which the data file and its associated parity segment is stored.
  • the control module writes each of the N data segments on a separate one of N of the N+1 disk drives selected to be part of the parity group.
  • the parity segment is written onto the remaining one of the selected disk drives.
  • the data and its associated parity information is written on N+1 disk drives instead of on a single disk drive. Therefore, the failure of a single disk drive will only impact one of the N segments of the data.
  • the remaining N-1 segments of the data plus the parity segment that is stored on a disk drive can be used to reconstruct the missing or lost data segment from this data due to the failure of the single disk drive.
  • the parity information is used to provide backup for the data as is a plurality of backup disk drives.
  • the data is spread across a plurality of disk drives so that the failure of a single disk drive will only cause a temporary loss of 1/N of the data.
  • the parity segment written on a separate disk drive enables the software in the disk drive memory to reconstruct the lost segment of the data on a new drive over a period of time.
  • data can be reconstructed in real time as needed by the CPU so that the original disk failure is transparent to the CPU.
  • the provision of one parity disk drive for every N data disk drives plus the provision of a pool of standby or backup disk drives provide full backup for all of the data stored on the disk drives in this disk drive memory.
  • Such an arrangement provides high reliability at a reasonable cost, a cost far less than the cost of providing a duplicate backup disk drive as in disk shadowing or the high maintenance cost of prior disk drive memory array systems.
  • the size of the pool of standby drives and the rate of drive failure determine the interval between required service calls. A sufficiently large pool could allow service as infrequently as once per year or less, saving considerable costs.
  • Figure 1 illustrates in block diagram form the architecture of the disk drive memory
  • Figure 2 illustrates the disk subsystem in block diagram form
  • Figure 3 illustrates the control module in block diagram form
  • Figure 4 illustrates the disk manager in block diagram form.
  • the disk drive memory of the present invention uses a plurality of small form factor disk drives in place of the single disk drive to implement an inexpensive, high performance, high reliability disk drive memory that emulates the format and capability of large form factor disk drives.
  • the plurality of disk drives are switchably interconnectable to form parity groups of N+1 parallel connected disk drives to store data thereon.
  • the N+1 disk drives are used to store the N segments of each data word plus a parity segment.
  • a pool of backup disk drives is maintained to automatically substitute a replacement disk drive for a disk drive that fails during operation.
  • the pool of backup disk drives provides high reliability at low cost.
  • Each disk drive is designed so that it can detect a failure in its operation, which allows the parity segment to be used not only for error detection but also for error correction.
  • Identification of the failed disk drive provides information on the bit position of the error in the data word and the parity data provides information to correct the error itself.
  • a backup disk drive from the shared pool of backup disk drives is automatically switched in place of the failed disk drive.
  • Control circuitry reconstructs the data stored on the failed disk drive, using the remaining N-1 segments of each data word plus the associated parity segment. A failure in the parity segment does not require data reconstruction, but necessitates regeneration of the parity information. The reconstructed data is then written onto the substitute disk drive.
  • the use of backup disk drives increases the reliability of the N+1 parallel disk drive architecture while the use of a shared pool of backup disk drives minimizes the cost of providing the improved reliability.
  • This architecture of a large pool of switchably interconnectable, small form factor disk drives also provides great flexibility to control the operational characteristics of the disk drive memory.
  • the reliability of the disk drive memory system can be modified by altering the assignment of disk drives from the backup pool of disk drives to the data storage disk drive parity groups.
  • the size of the parity group is controllable, thereby enabling a mixture of parity group sizes to be concurrently maintained in the disk drive memory.
  • parity groups can be optimized for different performance characteristics. For example: the data transfer rate is proportional to the number of disk drives in the parity group; as the size of the parity group increases, the number of parity drives and spare drives available in the spare pool decreases; and as the size of the parity group increases the number of physical actuators per virtual actuator decreases.
  • the data transmitted by the associated central processing unit is used to generate parity information.
  • the data and parity information is written across N+1 disk drives in the disk drive memory.
  • a number of disk drives are maintained in the disk drive memory as spare or backup units, which backup units are automatically switched on line in place of a disk drive that fails.
  • Control software is provided to reconstruct the data that was stored on a failed disk drive and to write this reconstructed data onto the backup disk drive that is selected to replace the failed disk drive unit.
  • a control module in the disk drive memory divides the received data into a plurality (N) of segments.
  • the control module also generates a parity segment that represents parity data that can be used to reconstruct one of the N segments of the data if one segment is inadvertently lost due to a disk drive failure.
  • a disk drive manager in the disk drive memory selects N+1 disk drives from the plurality of disk drives in the disk drive memory to function as a parity group on which the data file and its associated parity segment is stored.
  • the control module writes each of the N data segments on a separate one of N of the N+1 disk drives selected to be part of the parity group.
  • the parity segment is written onto the remaining one of the selected disk drives.
  • the data and its associated parity information is written on N+1 disk drives instead of on a single disk drive. Therefore, the failure of a single disk drive will only impact one of the N segments of the data.
  • the remaining N-1 segments of the data plus the parity segment that is stored on a disk drive can be used to reconstruct the missing or lost data segment from this data due to the failure of the single disk drive.
  • the parity information is used to provide backup for the data as is a plurality of backup disk drives.
  • the data is spread across a plurality of disk drives so that the failure of a single disk drive will only cause a temporary loss of 1/N of the data.
  • the parity segment written on a separate disk drive enables the software in the disk drive memory to reconstruct the lost segment of the data on a new drive over a period of time.
  • data can be reconstructed in real time as needed by the CPU so that the original disk failure is transparent to the CPU.
  • the provision of one parity disk drive for every N data disk drives plus the provision of a pool of standby or backup disk drives provide full backup for all of the data stored on the disk drives in this disk drive memory.
  • Such an arrangement provides high reliability at a reasonable cost, a cost far less than the cost of providing a duplicate backup disk drive as in disk shadowing or the high maintenance cost of prior disk drive memory array systems.
  • One measure of reliability is the function Mean Time Between Failures which provides a metric by which systems can be compared. For a single element having a constant failure rate f in failures per unit time, the mean time between failures is 1/f. The overall reliability of a system of n series connected elements, where all of the units must be operational for the system to be operational, is simply the product of the individual reliability functions. When all of the elements have a constant failure rate, the mean time between failures is 1/nf.
  • the reliability of an element is always less than or equal to 1 and the reliability of a series of interconnected elements is therefore always less than or equal to the reliability of a single element.
  • extremely high reliability elements are required or redundancy may be used. Redundancy provides spare units which are used to keep a system operating when an on-line unit fails.
  • the mean time between failures becomes (k+1)/(f(n-k)) where (n-k)/n refers to a system with n total elements, of which k are spares and only n-k must be functional for the system to be operational.
  • the reliability of a system may be increased significantly by the use of repair, which involves fixing failed units and restoring them to full operational capability.
  • There are two types of repair: on demand and periodic.
  • On demand repair causes a repair operation with repair rate u to be initiated on every failure that occurs.
  • Periodic repair provides for scheduled repairs at regular intervals that restore all units that have failed since the last repair visit. More spare units are required for periodic repairs to achieve the same level of reliability as an on demand repair procedure but the maintenance process is simplified.
  • high reliability can be obtained by the proper selection of a redundancy methodology and a repair strategy.
  • Another factor in the selection of a disk drive memory architecture is the data reconstruction methodology.
  • the architecture of the disk drive memory of the present invention takes advantage of this factor to enable the use of a single parity bit for both error detection and error recovery in addition to providing flexibility in the selection of a redundancy and repair strategy to implement a high reliability disk drive memory that is inexpensive.
  • Disk Drive Memory Architecture Figure 1 illustrates in block diagram form the architecture of the preferred embodiment of disk drive memory 100. There are numerous alternative implementations possible, and this embodiment both illustrates the concepts of the invention and provides a high reliability, high performance, inexpensive disk drive memory.
  • the disk drive memory 100 appears to the associated central processing unit to be a large disk drive or a collection of large disk drives since the architecture of disk drive memory 100 is transparent to the associated central processing unit.
  • This disk drive memory 100 includes a plurality of disk drives 130-0 to 130-M, each of which is an inexpensive yet fairly reliable disk drive.
  • providing the plurality of disk drives 130-0 to 130-M, even with the additional disk drives used to store parity information and the disk drives provided for backup purposes, is significantly less expensive than providing the typical 14 inch form factor backup disk drive for each disk drive in the disk drive memory.
  • the plurality of disk drives 130-0 to 130-M are typically the commodity hard disk drives in the 5-1/4 inch form factor.
  • Each of disk drives 130-0 to 130-M is connected to disk drive interconnection apparatus, which in this example is the plurality of crosspoint switches 121- 124 illustrated in Figure 1.
  • four crosspoint switches 121-124 are shown in Figure 1 and these four crosspoint switches 121- 124 are each connected to all of the disk drives 130- 0 to 130-M.
  • Each crosspoint switch (example 121) is connected by an associated set of M conductors 141-0 to 141-M to a corresponding associated disk drive 130- 0 to 130-M.
  • each crosspoint switch 121-124 can access each disk drive 130-0 to 130-M in the disk drive memory via an associated dedicated conductor.
  • each of the crosspoint switches 121-124 is an N+1 by M switch that interconnects N+1 signal leads on one side of the crosspoint switch with M signal leads on the other side of the crosspoint switch.
  • Transmission through the crosspoint switch 121 is bidirectional in nature in that data can be written through the crosspoint switch 121 to a disk drive or read from a disk drive through the crosspoint switch 121.
  • each crosspoint switch 121-124 serves to connect N+1 of the disk drives 130-0 to 130-M in parallel to form a parity group.
  • the data transfer rate of this arrangement is therefore N+1 times the data transfer rate of a single one of disk drives 130-0 to 130-M.
  • FIG. 1 illustrates a plurality of control modules 101-104, each of which is connected to an associated crosspoint switch 121-124.
  • Each control module (example 101) is connected via N+1 data leads and a single control lead 111 to the associated crosspoint switch 121.
  • Control module 101 can activate crosspoint switch 121 via control signals transmitted over the control lead to interconnect the N+1 signal leads from control module 101 to N+1 designated ones of the M disk drives 130-0 to 130-M. Once this interconnection is accomplished, control module 101 is directly connected via the N+1 data leads 111 and the interconnections through crosspoint switch 121 to a designated subset of N+1 of the M disk drives 130-0 to 130-M.
  • There are N+1 disk drives in this subset and crosspoint switch 121 interconnects control module 101 with these disk drives that are in the subset via connecting each of the N+1 signal leads from control unit 101 to a corresponding signal lead associated with one of the disk drives in the subset. Therefore a direct connection is established between control unit 101 and N+1 disk drives in the collection of disk drives 130-0 to 130-M. Control unit 101 can thereby read and write data on the disk drives in this subset directly over this connection.
  • the data that is written onto the disk drives consists of data that is transmitted from an associated central processing unit over bus 150 to one of directors 151-154.
  • the data file is written into for example director 151 which stores the data and transfers this received data over conductors 161 to control module 101.
  • Control module 101 segments the received data into N segments and also generates a parity segment for error correction purposes. Each of the segments of the data is written onto one of the N disk drives in the selected subset. An additional disk drive is used in the subset to store the parity segment.
  • the parity segment includes error correction characters and data that can be used to verify the integrity of the data that is stored on the N disk drives as well as to reconstruct one of the N segments of the data if that segment were lost due to a failure of the disk drive on which that data segment is stored.
  • the disk drive memory illustrated on Figure 1 includes a disk drive manager 140 which is connected to all of the disk drives 130-0 to 130-M via conductor 143 as well as to each of control modules 101-104 via an associated one of conductors 145-1 to 145-4.
  • Disk drive manager 140 maintains data in memory indicative of the correspondence between the data read into the disk drive memory 100 and the location on the various disks 130-0 to 130-M on which this data is stored.
  • Disk drive manager 140 assigns various ones of the disk drives 130-0 to 130-M to the parity groups as described above as well as assigning various disk drives to a backup pool. The identity of these N+1 disk drives is transmitted by disk drive manager 140 to control module 101 via conductor 145-1.
  • Control module 101 uses the identity of the disk drives assigned to this parity group to activate crosspoint switch 121 to establish the necessary interconnections between the N+1 signal leads of control module 101 and the corresponding signal leads of the N+1 disk drives designated by disk drive manager 140 as part of this parity group.
  • disk drive memory 100 can emulate one or more large form factor disk drives (for example, a 3380 type of disk drive) using a plurality of smaller form factor disk drives while providing a high reliability capability by writing the data across a plurality of the smaller form factor disk drives.
  • a reliability improvement is also obtained by providing a pool of backup disk drives that are switchably interconnectable in place of a failed disk drive. Data reconstruction is accomplished by the use of the parity segment, so that the data stored on the remaining functioning disk drives combined with the parity information stored in the parity segment can be used by control software to reconstruct the data lost when one of the plurality of disk drives in the parity group fails.
  • This arrangement provides a reliability capability similar to that obtained by disk shadowing arrangements at a significantly reduced cost over such an arrangement.
  • Figure 2 is a block diagram of the disk drive 130-0.
  • the disk drive 130-0 can be considered a disk subsystem that consists of a disk drive mechanism and its surrounding control and interface circuitry.
  • the disk drive shown in Figure 2 consists of a commodity disk drive 201 which is a commercially available hard disk drive of the type that typically is used in personal computers.
  • Control processor 202 has control responsibility for the entire disk drive shown in Figure 2.
  • the control processor 202 monitors all information routed over the various data channels 141- 0 to 144-0.
  • the data channels 141-0 to 144-0 that interconnect the associated crosspoint switches 121- 124 with disk drive 130-0 are serial communication channels. Any data transmitted over these channels is stored in a corresponding interface buffer 231-234.
  • the interface buffers 231-234 are connected via an associated serial data channel 241-244 to a corresponding serial/parallel converter circuit 211- 214.
  • Control processor 202 has a plurality of parallel interfaces which are connected via parallel data paths 221-224 to the serial/parallel converter circuits 211-214.
  • processor 202 requires that the data be converted between serial and parallel format to correspond to the difference in interface format between crosspoint switches 121-124 and control processor 202.
  • a disk controller 204 is also provided in disk drive 130-0 to implement the low level electrical interface required by the commodity disk drive 201.
  • the commodity disk drive 201 has an ESDI interface which must be interfaced with control processor 202.
  • Disk controller 204 provides this function.
  • data communication between control processor 202 and commodity disk drive 201 is accomplished over bus 206, cache memory 203, bus 207, disk controller 204, bus 208.
  • Cache memory 203 is provided as a buffer to improve performance of the disk drive 130-0.
  • the cache is capable of holding an entire track of data for each physical data head in the commodity disk drive 201.
  • Disk controller 204 provides serialization and deserialization of data, CRC/ECC generation, checking and correction and NRZ data encoding.
  • the addressing information such as the head select and other type of control signals are provided by control processor 202 and communicated over bus 205 to commodity disk drive 201.
  • control processor 202 is connected by signal lead 262 to an interface buffer 261 which interconnects control processor 202 with signal lead 143 to disk drive manager 140.
  • This communication path is provided for diagnostic and control purposes.
  • disk drive manager 140 can signal control processor 202 to power commodity disk drive 201 down when disk drive 130-0 is in the standby mode. In this fashion, commodity disk drive 201 remains in an idle state until it is selected by disk drive manager 140 at which time disk drive manager 140 can activate the disk drive by providing the appropriate control signals over lead 143.
  • Control module 101 includes a control processor 301 that is responsible for monitoring the various interfaces to director 151 and the associated crosspoint switch 121.
  • Control processor 301 monitors CTL-I interfaces 309 and 311 for commands from director 151 and, when a command is received by one of these two interfaces 309, 311, control processor 301 reads the command over the corresponding signal lead 310 or 312, respectively.
  • Control processor 301 is connected by bus 304 to a cache memory 305 which is used to improve performance.
  • Control processor 301 routes the command and/or data information received from director 151 to the appropriate disk groups through the N serial command/data interfaces illustrated as serial/parallel interface 302.
  • Serial/parallel interface 302 provides N+1 interfaces for the N+1 data and control channels 111 that are connected to the associated crosspoint switch 121.
  • Control processor 301 takes the data that is transmitted by director 151 and divides the data into N segments. Control processor 301 also generates a parity segment for error recovery purposes. Control processor 301 is responsible for all gap processing in support of the count/key/data format as received from the associated central processing unit. Control processor 301 receives information from disk drive manager 140 over lead 145. This control data is written into disk drive manager interface 313 where it can be retrieved over lead 314 by control processor 301.
  • the control information from disk drive manager 140 is data indicative of the interconnections required in crosspoint switch 121 to connect the N+1 data channels 111 of control module 101 with the selected N+1 disk drives out of the pool of disk drives 130-0 to 130-M.
  • control processor 301 generates the N+1 data and parity segments and stores these in cache memory 305 to be transmitted to the N+1 selected disk drives.
  • control processor 301 transmits control signals over lead 307 via crosspoint control logic 308 to crosspoint switch 121 to indicate the interconnections required in crosspoint switch 121 to interconnect the N+1 signal channels 111 of control module 101 with the corresponding signal leads 141-0 to 141-M associated with the selected disk drives.
  • the N+1 data plus parity segments are transmitted by control processor 301 outputting these segments from cache memory 305 over bus 306 through serial/parallel interface 302 onto the N+1 serial data channels 111.
  • the count/key/data format of the 3380 type of disk drive must be supported.
  • the count/key/data information is stored on a physical track as data.
  • the physical drives are formatted so that an integral number of virtual tracks are stored there, one per sector.
  • separate caches are provided for each control module track to allow parallel accesses by different control modules.
  • the single density 3380 track has a capacity of approximately 50 KB. If a parity group of 8 data disk drives +1 parity disk drive is used, 50/8 or 6.25K is stored on each physical disk drive.
  • One of the primary responsibilities of the control modules is to translate virtual 3380 addresses to physical addresses.
  • a virtual address consists of an actuator number, a cylinder number, a head number, and a target record. This is translated to the parity group number, the physical cylinder within the parity group, the head number and the sector index within the physical track to pick one of the four virtual tracks stored there. This is accomplished by first generating a "sequential cylinder index" from the virtual actuator number and virtual cylinder number:
  • SEQ CYL INDEX = VIRTUAL ACTUATOR x (#CYLINDERS PER ACTUATOR) + VIRTUAL CYLINDER
  • the physical group number that contains the data is found by taking the integer value that results from dividing the sequential cylinder index by the number of virtual cylinders per physical group:
  • GROUP = INT( SEQ CYL INDEX / #VIRTUAL CYLINDERS PER GROUP )
  • (in this example, 4 x 1632 = 6528 virtual tracks per group)
  • the physical cylinder within the appropriate group that contains the desired data is found by taking the integer value that results from dividing the difference between the sequential cylinder index and the base cylinder index for the particular group by the number of virtual tracks per physical track:
  • PHYSICAL CYL = INT( (SEQ CYL INDEX - GROUP x #VIRTUAL CYL PER GROUP) / #VIRTUAL TRACKS PER PHYSICAL TRACK )
  • the physical head value is the numerical equivalent of the virtual head value.
  • the index into the physical track to identify the specific virtual track is given by the remainder of the physical cylinder calculation given above.
  • the above calculations uniquely identify a single virtual track in the physical implementation (an illustrative code sketch of this translation appears after this list).
  • the virtual target record is then used to process the virtual track for the specific information requested.
  • the disk drive memory maintains a mapping between the desired 3380 image and the physical configuration of the disk drive memory. This mapping enables the disk drive memory to emulate whatever large form factor disk drive that is desired.
  • Disk Drive Manager Figure 4 illustrates the disk drive manager in block diagram form.
  • the disk drive manager 140 is the essential controller for the entire disk drive memory illustrated in Figure 1.
  • Disk drive manager 140 has separate communication paths to each of control modules 101-104 via associated control module interfaces 411-414.
  • disk drive manager 140 has a communication path to each of the disk drives 130-0 to 130-M in the disk drive memory independent of the crosspoint switches 121-124.
  • the disk drive manager 140 also has primary responsibility for diagnostic activities within this architecture of the disk drive memory and maintains all history and error logs in history log memory 404.
  • the central part of disk drive manager 140 is processor 401 which provides the intelligence and operational programs to implement these functions.
  • Processor 401 is connected via busses 421-424 with the associated control module interfaces 411-414 to communicate with control modules 101-104 respectively.
  • bus 403 connects processor 401 with disk control interface 402 that provides a communication path over lead 143 to all of the disk drives 130-0 to 130-M in the disk drive memory.
  • the history log 404 is connected to processor 401 via bus 405.
  • Processor 401 determines the mapping from virtual to physical addressing in the disk drive memory and provides that information to control modules 101-104 over the corresponding signal leads 145.
  • Processor 401 also maintains the pool of spare disk drives and allocates new spares when disk failures occur when requested to do so by the affected control module 101-104.
  • disk drive manager 140 determines the number of spare disk drives that are available in the disk drive memory. Based on system capacity requirements, disk drive manager 140 forms parity groups out of this pool of spare disk drives. The specific information of which physical disks are contained in a parity group is stored in local memory in disk drive manager 140 and a copy of that information is transmitted to each of control modules 101-104 so that these control modules 101-104 can translate the virtual addresses received with the data from the associated central processing unit to physical parity groups that consist of the corresponding selected disk drives. Because of the importance of the system mapping information, redundant copies protected by error correction codes are stored in non-volatile memory in disk drive manager 140.
  • control module 101-104 uses the system mapping information supplied by disk drive manager 140 to determine which physical disk group contains the data. Based on this translation information, the corresponding control module 101 sets the associated crosspoint switch 121 to interconnect the N+1 data channels 111 of control module 101 with selected disk drives identified by this translation information.
  • the control module divides the data supplied by the central processing unit into N segments and distributes it along with a parity segment to the individual members of the parity group. In a situation where data is read from the disk drive memory to the central processing unit, the control module must perform the inverse operation by reassembling the data streams read from the selected disk drives in the parity group.
  • the control module determines whether an individual disk drive in the parity group it is addressing has malfunctioned.
  • the control module that has detected a bad disk drive transmits a control message to disk drive manager 140 over the corresponding control signal lead 145 to indicate that a disk drive has failed, is suspect or that a new disk drive is needed.
  • the faulty disk drive is taken out of service and a spare disk drive is activated from the spare pool by the disk drive manager 140. This is accomplished by rewriting the identification of that parity group that contains the bad disk drive.
  • the new selected disk drive in the parity group is identified by control signals which are transmitted to all of control modules 101-104. This insures that the system mapping information stored in each of control modules 101-104 is kept up to date.
  • When the new disk drive is added to the parity group, it is tested and, if found to be operating properly, it replaces the failed disk drive in the system mapping tables.
  • the control module that requested the spare disk drive reconstructs the data for the new disk drive using the remaining N-1 operational data disk drives and the available parity information from the parity disk drive. Before reconstruction is complete on the disk, data is still available to the CPU, but it must be reconstructed on line rather than simply read from the disk. When this data reconstruction operation is complete, the reconstructed segment is written on the replacement disk drive and control signals are transmitted to the disk drive manager 140 to indicate that the reconstruction operation is complete and that parity group is now again operational.
  • Disk drive manager 140 transmits control signals to all of the control modules in the disk drive memory to inform the control modules that data reconstruction is complete so that that parity group can be accessed without further data reconstruction.
  • This dynamically reconfigurable attribute of the disk drive memory enables this system to be very flexible.
  • the dynamically configurable aspect of the communication path between the control modules and the disk drives permits the architecture to be very flexible.
  • the user can implement a disk drive memory that has a high data storage capacity and which requires shorter periodic repair intervals, or a disk drive memory that has a lower data storage capacity with longer required repair intervals simply by changing the number of active disk drive parity groups.
  • the disk drive memory has the ability to detect new spare disk drives when they are plugged in to the system thereby enabling the disk drive memory to grow as the storage or reliability needs change without having to reprogram the disk drive memory control software.
  • the parameters that may be varied include system reliability, system repair interval, system data storage capacity and parity group size.
  • increasing system reliability typically causes another characteristic of the system to worsen.
  • a user can reduce the periodic repair interval. This reduces the number of spare disk drives required in the disk drive memory but causes increased maintenance costs.
  • data storage capacity requirements of the disk drive memory are reduced, fewer spare disk drives are required because of the reduced number of active disk drives.
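The virtual-to-physical address translation described in the list above can be summarized in a short sketch. It is a minimal illustration under stated assumptions, not the patent's implementation: the constants below are assumed values for a hypothetical configuration, except that four virtual tracks per physical track and 4 x 1632 = 6528 virtual tracks per group follow the example quoted above.

    # Sketch of the virtual 3380 address to physical parity-group address translation.
    # Constants are illustrative assumptions; only the factor of four virtual tracks
    # per physical track and 4 x 1632 = 6528 virtual tracks per group come from the text.
    CYLINDERS_PER_ACTUATOR = 885            # assumed cylinders per emulated actuator
    VIRTUAL_TRACKS_PER_PHYSICAL_TRACK = 4   # four virtual tracks share one physical track
    VIRTUAL_CYLINDERS_PER_GROUP = 4 * 1632  # 6528 virtual tracks per group

    def translate(virtual_actuator, virtual_cylinder, virtual_head, target_record):
        """Translate a virtual address into (group, physical cylinder, head, sector index)."""
        # Sequential cylinder index across all emulated actuators.
        seq_cyl_index = virtual_actuator * CYLINDERS_PER_ACTUATOR + virtual_cylinder
        # Parity group that holds the data.
        group = seq_cyl_index // VIRTUAL_CYLINDERS_PER_GROUP
        # Offset of this virtual cylinder within its group.
        offset = seq_cyl_index - group * VIRTUAL_CYLINDERS_PER_GROUP
        # Physical cylinder, plus the sector index that selects one of the
        # virtual tracks sharing that physical track.
        physical_cylinder = offset // VIRTUAL_TRACKS_PER_PHYSICAL_TRACK
        sector_index = offset % VIRTUAL_TRACKS_PER_PHYSICAL_TRACK
        # The physical head value is the numerical equivalent of the virtual head value.
        physical_head = virtual_head
        return group, physical_cylinder, physical_head, sector_index, target_record

    print(translate(virtual_actuator=2, virtual_cylinder=100, virtual_head=5, target_record=1))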

Abstract

The disk drive memory (100) of the present invention uses a large plurality of small form factor disk drives (130-0 to 130-M) to implement an inexpensive, high performance, high reliability disk drive memory that emulates the format and capability of large form factor disk drives. The plurality of disk drives (130-0 to 130-M) are switchably interconnectable to form parity groups of N+1 parallel connected disk drives (130-0 to 130-M) to store data thereon. The N+1 disk drives (130-0 to 130-M) are used to store the N segments of each data word plus a parity segment. In addition, a pool of backup disk drives (130-0 to 130-M) is maintained to automatically substitute a replacement disk drive for a disk drive in a parity group that fails during operation.

Description

DISK DRIVE MEMORY
FIELD OF THE INVENTION
This invention relates to computer systems and, in particular, to an inexpensive, high performance, high reliability disk drive memory for use with a computer system.
PROBLEM
It is a problem in the field of computer systems to provide an inexpensive, high performance, high reliability memory that has backup capability. In computer systems, it is expensive to provide high reliability capability for the various memory devices that are used with a computer. This problem is especially severe in the case of disk drive memory systems. The typical commercially available disk drive is a 14-inch form factor unit, such as the IBM 3380J disk drive, that can store on the order of 1.2 gigabytes of data. The associated central processing unit stores data files on the disk drive memory by writing the entire data file onto a single disk drive. It is obvious that the failure of a single disk drive can result in the loss of a significant amount of data. In order to minimize the possibility of this occurring, the disk drives are built to be high reliability units. The cost of reliability is high in that the resultant disk drive is a very expensive unit.
In critical situations where the loss of the data stored on the disk drive could cause a significant disruption in the operation of the associated central processing unit, additional reliability may be obtained by disk shadowing, that is, backing up each disk drive with an additional redundant disk drive. However, the provision of a second disk drive to back up the primary disk drive more than doubles the cost of memory for the computer system. Various arrangements are available to reduce the cost of providing disk shadowing backup protection. These arrangements include storing only the changes that are made to the data stored on the disk drive, backing up only the most critical data stored on the disk drive and only periodically backing up the data that is stored on the disk drive by storing it on a much less expensive data storage unit that also has a much slower data retrieval access time. However, none of these arrangements provide high reliability data storage with backup capability at a reasonable price.
An alternative to the large form factor disk drives for storing data is the use of a multiplicity of small form factor disk drives interconnected in a parallel array. Such an arrangement is the Micropolis Parallel Drive Array, Model 1804 SCSI, that uses four parallel, synchronized disk drives and one redundant parity drive. This arrangement uses parity protection, provided by the parity drive, to increase data reliability. The failure of one of the four data disk drives can be recovered from by the use of the parity bits stored on the parity disk drive. A similar system is disclosed in U.S. Patent No. 4,722,085 wherein a high capacity disk drive memory is disclosed. This disk drive memory uses a plurality of relatively small, independently operating disk subsystems to function as a large, high capacity disk drive having an unusually high fault tolerance and a very high data transfer bandwidth. A data organizer adds seven error check bits to each 32 bit data word to provide error checking and error correction capability. The resultant 39 bit word is written, one bit per disk drive, onto 39 disk drives. In the event that one of the 39 disk drives fails, the remaining 38 bits of the stored 39 bit word can be used to reconstruct the 32 bit data word on a word-by-word basis as each data word is read from memory, thereby obtaining fault tolerance.
The difficulty with these parallel disk drive array arrangements is that there are no spare disk drives provided and the system reliability of such an architecture of n parallel connected disk drives with no spares is fairly low. While these disk drive memory systems provide some data reconstruction capability, the lack of backup or spare disk drive capability renders the maintenance cost of these systems high, since disk drive failures in such an architecture occur fairly frequently and each disk drive failure necessitates a service call to replace the failed disk drive. If a service call is not made before a second drive fails, there will be data loss. In addition, the use of a Hamming Code type of error detection and correction arrangement as suggested by U.S. Patent No. 4,722,085 requires a high overhead: 7 bits of error detection code for a 32 bit data word. These limitations render this architecture uneconomical for disk storage systems. A further limitation of the disk drive memory system of U.S. Patent 4,722,085 is that this tightly coupled parallel disk drive array architecture uses tightly coupled disk actuators. This arrangement has a high data transfer bandwidth but effectively only a single actuator for 2.75 gigabytes of memory. This adversely affects the random access to memory performance of this disk drive memory system since all memory can only be accessed through the single actuator. Therefore, there presently is no inexpensive, high performance, high reliability disk drive memory that has backup capability for computer systems.
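For a concrete sense of the overhead difference mentioned above, the short calculation below compares the check-bit fraction of the 7-bits-per-32-bit-word Hamming scheme of U.S. Patent 4,722,085 with a single parity segment per N data segments; the group sizes chosen are illustrative, not values taken from the patent.

    # Fraction of stored bits devoted to redundancy under the two schemes discussed above.
    hamming_overhead = 7 / (32 + 7)          # 7 check bits per 32-bit word, about 18%
    print(f"Hamming-code overhead: {hamming_overhead:.1%}")

    for n in (4, 8, 16):                     # illustrative parity group sizes (N data drives)
        parity_overhead = 1 / (n + 1)        # one parity segment per N data segments
        print(f"N = {n:2d}: single-parity overhead = {parity_overhead:.1%}")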
SOLUTION
The above described problems are solved and a technical advance achieved in the field by the disk drive memory of the present invention. The disk drive memory of the present invention uses a large plurality of small form factor disk drives to implement an inexpensive, high performance, high reliability disk drive memory that emulates the format and capability of large form factor disk drives. The plurality of disk drives are switchably interconnectable to form parity groups of N+1 parallel connected disk drives to store data thereon. The N+1 disk drives are used to store the N segments of each data word plus a parity segment. In addition, a pool of backup disk drives is maintained to automatically substitute a replacement disk drive for a disk drive in a parity group that fails during operation.
The pool of backup disk drives provides high reliability at low cost. Each disk drive is designed so that it can detect a failure in its operation, which allows the parity segment to be used not only for error detection but also for error correction. Identification of the failed disk drive provides information on the bit position of the error in the data word and the parity data provides information to correct the error itself. Once a failed disk drive is identified, a backup disk drive from the shared pool of backup disk drives is automatically switched in place of the failed disk drive. Control circuitry reconstructs the data stored on the failed disk drive, using the remaining N-1 segments of each data word plus the associated parity segment. A failure in the parity segment does not require data reconstruction, but necessitates regeneration of the parity information. The reconstructed data is then written onto the substitute disk drive. The use of backup disk drives increases the reliability of the N+1 parallel disk drive architecture while the use of a shared pool of backup disk drives minimizes the cost of providing the improved reliability.
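The reconstruction step can be illustrated with a minimal sketch. It assumes the parity segment is a bytewise XOR of the N data segments; the patent text does not specify the parity encoding, so the helper names and the choice of XOR are illustrative.

    from functools import reduce

    def xor_segments(segments):
        """Bytewise XOR of equal-length byte strings."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), segments)

    def reconstruct_lost_segment(surviving_segments, parity_segment):
        """Rebuild the failed drive's segment from the N-1 surviving data
        segments plus the parity segment (assumes XOR parity)."""
        return xor_segments(list(surviving_segments) + [parity_segment])

    # Example with N = 4 data segments; the drive holding segment 2 fails.
    data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
    parity = xor_segments(data)
    lost = data.pop(2)
    assert reconstruct_lost_segment(data, parity) == lost

Only the identity of the failed drive is needed to know which segment to rebuild, which is why per-drive failure detection lets a single parity segment serve for correction as well as detection.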
This architecture of a large pool of switchably interconnectable, small form factor disk drives also provides great flexibility to control the operational characteristics of the disk drive memory. The reliability of the disk drive memory system can be modified by altering the assignment of disk drives from the backup pool of disk drives to the data storage disk drive parity groups. In addition, the size of the parity group is controllable, thereby enabling a mixture of parity group sizes to be concurrently maintained in the disk drive memory. Various parity groups can be optimized for different performance characteristics. For example: the data transfer rate is proportional to the number of disk drives in the parity group; as the size of the parity group increases, the number of parity drives and spare drives available in the spare pool decreases; and as the size of the parity group increases the number of physical actuators per virtual actuator decreases.
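These trade-offs can be tabulated with a small sketch. The total pool size and the number of drives held in reserve are assumptions chosen only for illustration; the proportionality of transfer rate to group size follows the statement above.

    # Illustrative trade-offs for different parity group sizes drawn from one fixed pool.
    TOTAL_DRIVES = 64        # assumed size of the drive pool
    RESERVED_SPARES = 4      # assumed drives held back as backup/spare units

    for n in (4, 8, 16):                          # N data drives per parity group
        group_size = n + 1                        # N data drives plus one parity drive
        groups = (TOTAL_DRIVES - RESERVED_SPARES) // group_size
        parity_drives = groups                    # one parity drive per group
        capacity_overhead = 1 / group_size        # fraction of group capacity spent on parity
        relative_transfer_rate = group_size       # rate scales with drives in the group
        print(f"N={n:2d}: {groups:2d} groups, {parity_drives:2d} parity drives, "
              f"overhead {capacity_overhead:.1%}, relative rate x{relative_transfer_rate}")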
Thus, the use of an amorphous pool containing a large number of switchably interconnectable disk drives overcomes the limitations of existing disk drive memory systems and also provides capabilities previously unavailable in disk drive memory systems.
In operation, the data transmitted by the associated central processing unit is used to generate parity information. The data and parity information is written across N+1 disk drives in the disk drive memory. In addition, a number of disk drives are maintained in the disk drive memory as spare or backup units, which backup units are automatically switched on line in place of disk drives that fail. Control software is provided to reconstruct the data that was stored on a failed disk drive and to write this reconstructed data onto the backup disk drive that is selected to replace the failed disk drive unit.
In response to the associated central processing unit writing data to the disk drive memory, a control module in the disk drive memory divides the received data into a plurality (N) of segments. The control module also generates a parity segment that represents parity data that can be used to reconstruct one of the N segments of the data if one segment is inadvertently lost due to a disk drive failure. A disk drive manager in the disk drive memory selects N+1 disk drives from the plurality of disk drives in the disk drive memory to function as a parity group on which the data file and its associated parity segment is stored. The control module writes each of the N data segments on a separate one of N of the N+1 disk drives selected to be part of the parity group. In addition, the parity segment is written onto the remaining one of the selected disk drives. Thus, the data and its associated parity information is written on N+1 disk drives instead of on a single disk drive. Therefore, the failure of a single disk drive will only impact one of the N segments of the data. The remaining N-1 segments of the data plus the parity segment that is stored on a disk drive can be used to reconstruct the missing or lost data segment from this data due to the failure of the single disk drive.
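A minimal sketch of this write path follows. The equal-length segmentation, the zero padding, and the XOR parity are illustrative assumptions; the text only requires N data segments plus one parity segment spread over the N+1 drives of the parity group.

    def segment_and_protect(data: bytes, n: int):
        """Split a data file into N equal segments plus one XOR parity segment,
        ready to be written across the N+1 drives of a parity group."""
        seg_len = -(-len(data) // n)              # ceiling division
        padded = data.ljust(seg_len * n, b"\0")   # pad so the data divides evenly (assumed)
        segments = [padded[i * seg_len:(i + 1) * seg_len] for i in range(n)]

        parity = bytearray(seg_len)
        for seg in segments:
            for i, byte in enumerate(seg):
                parity[i] ^= byte                 # parity segment protects the N segments

        return segments, bytes(parity)

    # The control module would then write each segment to a different drive in the group.
    segments, parity = segment_and_protect(b"example data file contents", n=8)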
In this fashion, the parity information is used to provide backup for the data as is a plurality of backup disk drives. Instead of requiring the replication of each disk drive as in disk shadowing backup, the data is spread across a plurality of disk drives so that the failure of a single disk drive will only cause a temporary loss of 1/N of the data. The parity segment written on a separate disk drive enables the software in the disk drive memory to reconstruct the lost segment of the data on a new drive over a period of time. However, data can be reconstructed in real time as needed by the CPU so that the original disk failure is transparent to the CPU. Therefore, the provision of one parity disk drive for every N data disk drives plus the provision of a pool of standby or backup disk drives provide full backup for all of the data stored on the disk drives in this disk drive memory. Such an arrangement provides high reliability at a reasonable cost, a cost far less than the cost of providing a duplicate backup disk drive as in disk shadowing or the high maintenance cost of prior disk drive memory array systems. The size of the pool of standby drives and the rate of drive failure determine the interval between required service calls. A sufficiently large pool could allow service as infrequently as once per year or less, saving considerable costs. These and other advantages of this invention will be ascertained by a reading of the detailed description.
BRIEF DESCRIPTION OF THE DRAWING
Figure 1 illustrates in block diagram form the architecture of the disk drive memory;
Figure 2 illustrates the disk subsystem in block diagram form;
Figure 3 illustrates the control module in block diagram form;
Figure 4 illustrates the disk manager in block diagram form.
DETAILED DESCRIPTION OF THE DRAWING
The disk drive memory of the present invention uses a plurality of small form factor disk drives in place of the single disk drive to implement an inexpensive, high performance, high reliability disk drive memory that emulates the format and capability of large form factor disk drives. The plurality of disk drives are switchably interconnectable to form parity groups of N+1 parallel connected disk drives to store data thereon. The N+1 disk drives are used to store the N segments of each data word plus a parity segment. In addition, a pool of backup disk drives is maintained to automatically substitute a replacement disk drive for a disk drive that fails during operation.
The pool of backup disk drives provides high reliability at low cost. Each disk drive is designed so that it can detect a failure in its operation, which allows the parity segment to be used not only for error detection but also for error correction. Identification of the failed disk drive provides information on the bit position of the error in the data word and the parity data provides information to correct the error itself. Once a failed disk drive is identified, a backup disk drive from the shared pool of backup disk drives is automatically switched in place of the failed disk drive. Control circuitry reconstructs the data stored on the failed disk drive, using the remaining N-1 segments of each data word plus the associated parity segment. A failure in the parity segment does not require data reconstruction, but necessitates regeneration of the parity information. The reconstructed data is then written onto the substitute disk drive. The use of backup disk drives increases the reliability of the N+1 parallel disk drive architecture while the use of a shared pool of backup disk drives minimizes the cost of providing the improved reliability. This architecture of a large pool of switchably interconnectable, small form factor disk drives also provides great flexibility to control the operational characteristics of the disk drive memory. The reliability of the disk drive memory system can be modified by altering the assignment of disk drives from the backup pool of disk drives to the data storage disk drive parity groups. In addition, the size of the parity group is controllable, thereby enabling a mixture of parity group sizes to be concurrently maintained in the disk drive memory.
Various parity groups can be optimized for different performance characteristics. For example: the data transfer rate is proportional to the number of disk drives in the parity group; as the size of the parity group increases, the number of parity drives and spare drives available in the spare pool decreases; and as the size of the parity group increases the number of physical actuators per virtual actuator decreases.
Thus, the use of an amorphous pool containing a large number of switchably interconnectable disk drives overcomes the limitations of existing disk drive memory systems and also provides capabilities previously unavailable in disk drive memory systems.
In operation, the data transmitted by the associated central processing unit is used to generate parity information. The data and parity information are written across N+1 disk drives in the disk drive memory. In addition, a number of disk drives are maintained in the disk drive memory as spare or backup units, which backup units are automatically switched on line in place of a disk drive that fails. Control software is provided to reconstruct the data that was stored on a failed disk drive and to write this reconstructed data onto the backup disk drive that is selected to replace the failed disk drive unit.
In response to the associated central processing unit writing data to the disk drive memory, a control module in the disk drive memory divides the received data into a plurality (N) of segments. The control module also generates a parity segment that represents parity data that can be used to reconstruct one of the N segments of the data if one segment is inadvertently lost due to a disk drive failure. A disk drive manager in the disk drive memory selects N+1 disk drives from the plurality of disk drives in the disk drive memory to function as a parity group on which the data file and its associated parity segment are stored. The control module writes each of the N data segments on a separate one of N of the N+1 disk drives selected to be part of the parity group. In addition, the parity segment is written onto the remaining one of the selected disk drives. Thus, the data and its associated parity information are written on N+1 disk drives instead of on a single disk drive. Therefore, the failure of a single disk drive will only impact one of the N segments of the data. The remaining N-1 segments of the data, plus the parity segment stored on a separate disk drive, can be used to reconstruct the data segment lost due to the failure of the single disk drive.
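As a concrete picture of the segmentation and parity-generation step performed by the control module, the sketch below divides a received data file into N equal segments and computes one parity segment over them, yielding the N+1 segments written to the parity group. The choice of XOR parity, the zero-padding rule, and the function name are assumptions made purely for illustration; the patent leaves these encoding details to the implementation.

```python
def stripe_with_parity(data: bytes, n: int) -> list[bytes]:
    """Divide `data` into n equal segments (zero-padded) and append one
    XOR parity segment, producing n+1 segments for n+1 disk drives."""
    seg_len = -(-len(data) // n)                      # ceiling division
    data = data.ljust(n * seg_len, b"\x00")           # pad to a multiple of n
    segments = [data[i * seg_len:(i + 1) * seg_len] for i in range(n)]
    parity = bytearray(seg_len)
    for seg in segments:
        for i, byte in enumerate(seg):
            parity[i] ^= byte
    return segments + [bytes(parity)]

# Example: stripe a small record across a 4+1 parity group.
group = stripe_with_parity(b"count/key/data record", 4)
assert len(group) == 5
```

Writing element i of the result to the i-th data drive of the parity group, and the final element to the parity drive, reproduces the N+1 layout described above.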
In this fashion, the parity information, together with the plurality of backup disk drives, provides backup for the data. Instead of requiring the replication of each disk drive as in disk shadowing backup, the data is spread across a plurality of disk drives so that the failure of a single disk drive will only cause a temporary loss of 1/N of the data. The parity segment written on a separate disk drive enables the software in the disk drive memory to reconstruct the lost segment of the data on a new drive over a period of time. However, data can also be reconstructed in real time as needed by the CPU so that the original disk failure is transparent to the CPU. Therefore, the provision of one parity disk drive for every N data disk drives, plus the provision of a pool of standby or backup disk drives, provides full backup for all of the data stored on the disk drives in this disk drive memory. Such an arrangement provides high reliability at a reasonable cost, a cost far less than that of providing a duplicate backup disk drive as in disk shadowing or the high maintenance cost of prior disk drive memory array systems.
Reliability
One measure of reliability is the function Mean Time Between Failures, which provides a metric by which systems can be compared. For a single element having a constant failure rate f in failures per unit time, the mean time between failures is 1/f. The overall reliability of a system of n series connected elements, where all of the units must be operational for the system to be operational, is simply the product of the individual reliability functions. When all of the elements have a constant failure rate, the mean time between failures is 1/nf.
The reliability of an element is always less than or equal to 1 and the reliability of a series of interconnected elements is therefore always less than or equal to the reliability of a single element. To achieve high system reliability, extremely high reliability elements are required or redundancy may be used. Redundancy provides spare units which are used to maintain a system operating when an on-line unit fails. For an (n-k)/n standby redundant system, the mean time between failures becomes (k+1)/(f(n-k)), where (n-k)/n refers to a system with n total elements, of which k are spares and only n-k must be functional for the system to be operational.
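The three expressions above translate directly into code. The helper below is a minimal sketch of those formulas, assuming a constant per-element failure rate f in failures per unit time; the function names, the example drive count, and the example failure rate are illustrative values, not figures taken from the patent.

```python
def mtbf_single(f: float) -> float:
    """MTBF of one element with constant failure rate f (1/f)."""
    return 1.0 / f

def mtbf_series(n: int, f: float) -> float:
    """MTBF of n series-connected elements, all of which must work (1/nf)."""
    return 1.0 / (n * f)

def mtbf_standby(n: int, k: int, f: float) -> float:
    """MTBF of an (n-k)/n standby redundant system: n elements total,
    k spares, only n-k required to be operational ((k+1)/(f(n-k)))."""
    return (k + 1) / (f * (n - k))

# Example: 100 drives, 2 of them spares, each drive failing once per 40,000 hours.
print(mtbf_standby(100, 2, 1 / 40_000))
```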
The reliability of a system may be increased significantly by the use of repair, which involves fixing failed units and restoring them to full operational capability. There are two types of repair: on demand and periodic. On demand repair causes a repair operation with repair rate u to be initiated on every failure that occurs. Periodic repair provides for scheduled repairs at regular intervals that restore all units that have failed since the last repair visit. More spare units are required for periodic repairs to achieve the same level of reliability as an on demand repair procedure, but the maintenance process is simplified. Thus, high reliability can be obtained by the proper selection of a redundancy methodology and a repair strategy.

Another factor in the selection of a disk drive memory architecture is the data reconstruction methodology. To detect two bit errors in an eight bit byte and to correct one requires five error check bits per eight bit data byte using a Hamming code. If the location of the bad bit is known, the data reconstruction can be accomplished with a single error check (parity) bit. The architecture of the disk drive memory of the present invention takes advantage of this factor to enable the use of a single parity bit for both error detection and error recovery, in addition to providing flexibility in the selection of a redundancy and repair strategy, to implement a high reliability disk drive memory that is inexpensive.
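The point about check-bit counts can be illustrated at the bit level: when the failing position is already known (because the failed drive identifies itself), a single parity bit is enough to regenerate the lost bit, whereas locating and correcting an unknown bad bit in an eight bit byte requires several Hamming check bits. The snippet below demonstrates only the single-parity-bit case, under the assumption of even parity; the variable names are illustrative.

```python
def recover_known_bit(bits, erased_index, parity_bit):
    """Regenerate the bit at `erased_index` of an 8-bit data byte, given the
    other seven bits and one even-parity bit computed over all eight bits."""
    known = [b for i, b in enumerate(bits) if i != erased_index]
    return parity_bit ^ (sum(known) % 2)

data = [1, 0, 1, 1, 0, 0, 1, 0]
parity = sum(data) % 2            # even parity over the byte
assert recover_known_bit(data, erased_index=3, parity_bit=parity) == data[3]
```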
Disk Drive Memory Architecture

Figure 1 illustrates in block diagram form the architecture of the preferred embodiment of disk drive memory 100. There are numerous alternative implementations possible, and this embodiment both illustrates the concepts of the invention and provides a high reliability, high performance, inexpensive disk drive memory. The disk drive memory 100 appears to the associated central processing unit to be a large disk drive or a collection of large disk drives since the architecture of disk drive memory 100 is transparent to the associated central processing unit. This disk drive memory 100 includes a plurality of disk drives 130-0 to 130-M, each of which is an inexpensive yet fairly reliable disk drive. The plurality of disk drives 130-0 to 130-M is significantly less expensive, even with providing disk drives to store parity information and providing disk drives for backup purposes, than providing the typical 14 inch form factor backup disk drive for each disk drive in the disk drive memory. The plurality of disk drives 130-0 to 130-M are typically commodity hard disk drives in the 5-1/4 inch form factor.
Each of disk drives 130-0 to 130-M is connected to disk drive interconnection apparatus, which in this example is the plurality of crosspoint switches 121-124 illustrated in Figure 1. For illustration purposes, four crosspoint switches 121-124 are shown in Figure 1 and these four crosspoint switches 121-124 are each connected to all of the disk drives 130-0 to 130-M. Each crosspoint switch (for example, 121) is connected by an associated set of M conductors 141-0 to 141-M to a corresponding associated disk drive 130-0 to 130-M. Thus, each crosspoint switch 121-124 can access each disk drive 130-0 to 130-M in the disk drive memory via an associated dedicated conductor. The crosspoint switches 121-124 themselves are each an N+1 by M switch that interconnects N+1 signal leads on one side of the crosspoint switch with M signal leads on the other side of the crosspoint switch 121. Transmission through the crosspoint switch 121 is bidirectional in nature in that data can be written through the crosspoint switch 121 to a disk drive or read from a disk drive through the crosspoint switch 121. Thus, each crosspoint switch 121-124 serves to connect N+1 of the disk drives 130-0 to 130-M in parallel to form a parity group. The data transfer rate of this arrangement is therefore N+1 times the data transfer rate of a single one of disk drives 130-0 to 130-M.
Figure 1 illustrates a plurality of control modules 101-104, each of which is connected to an associated crosspoint switch 121-124. Each control module (for example, 101) is connected via N+1 data leads and a single control lead 111 to the associated crosspoint switch 121. Control module 101 can activate crosspoint switch 121 via control signals transmitted over the control lead to interconnect the N+1 signal leads from control module 101 to N+1 designated ones of the M disk drives 130-0 to 130-M. Once this interconnection is accomplished, control module 101 is directly connected via the N+1 data leads 111 and the interconnections through crosspoint switch 121 to a designated subset of N+1 of the M disk drives 130-0 to 130-M. There are N+1 disk drives in this subset and crosspoint switch 121 interconnects control module 101 with these disk drives in the subset by connecting each of the N+1 signal leads from control unit 101 to a corresponding signal lead associated with one of the disk drives in the subset. Therefore, a direct connection is established between control unit 101 and N+1 disk drives in the collection of disk drives 130-0 to 130-M. Control unit 101 can thereby read and write data on the disk drives in this subset directly over this connection.
The data that is written onto the disk drives consists of data that is transmitted from an associated central processing unit over bus 150 to one of directors 151-154. The data file is written into, for example, director 151, which stores the data and transfers this received data over conductors 161 to control module 101. Control module 101 segments the received data into N segments and also generates a parity segment for error correction purposes. Each of the segments of the data is written onto one of the N disk drives in the selected subset. An additional disk drive is used in the subset to store the parity segment. The parity segment includes error correction characters and data that can be used to verify the integrity of the data that is stored on the N disk drives as well as to reconstruct one of the N segments of the data if that segment were lost due to a failure of the disk drive on which that data segment is stored.
The disk drive memory illustrated in Figure 1 includes a disk drive manager 140 which is connected to all of the disk drives 130-0 to 130-M via conductor 143 as well as to each of control modules 101-104 via an associated one of conductors 145-1 to 145-4. Disk drive manager 140 maintains data in memory indicative of the correspondence between the data read into the disk drive memory 100 and the location on the various disks 130-0 to 130-M on which this data is stored. Disk drive manager 140 assigns various ones of the disk drives 130-0 to 130-M to the parity groups as described above as well as assigning various disk drives to a backup pool. The identity of these N+1 disk drives is transmitted by disk drive manager 140 to control module 101 via conductor 145-1. Control module 101 uses the identity of the disk drives assigned to this parity group to activate crosspoint switch 121 to establish the necessary interconnections between the N+1 signal leads of control module 101 and the corresponding signal leads of the N+1 disk drives designated by disk drive manager 140 as part of this parity group.
Thus, disk drive memory 100 can emulate one or more large form factor disk drives (for example, a 3380 type of disk drive) using a plurality of smaller form factor disk drives while providing a high reliability capability by writing the data across a plurality of the smaller form factor disk drives. A reliability improvement is also obtained by providing a pool of backup disk drives that are switchably interconnectable in place of a failed disk drive. Data reconstruction is accomplished by the use of the parity segment, so that the data stored on the remaining functioning disk drives combined with the parity information stored in the parity segment can be used by control software to reconstruct the data lost when one of the plurality of disk drives in the parity group fails. This arrangement provides a reliability capability similar to that obtained by disk shadowing arrangements at a significantly reduced cost compared to such an arrangement.
Disk Drive
Figure 2 is a block diagram of the disk drive 130-0. The disk drive 130-0 can be considered a disk subsystem that consists of a disk drive mechanism and its surrounding control and interface circuitry. The disk drive shown in Figure 2 consists of a commodity disk drive 201 which is a commercially available hard disk drive of the type that typically is used in personal computers. Control processor 202 has control responsibility for the entire disk drive shown in Figure 2. The control processor 202 monitors all information routed over the various data channels 141-0 to 144-0. The data channels 141-0 to 144-0 that interconnect the associated crosspoint switches 121-124 with disk drive 130-0 are serial communication channels. Any data transmitted over these channels is stored in a corresponding interface buffer 231-234. The interface buffers 231-234 are connected via an associated serial data channel 241-244 to a corresponding serial/parallel converter circuit 211-214. Control processor 202 has a plurality of parallel interfaces which are connected via parallel data paths 221-224 to the serial/parallel converter circuits 211-214. Thus, any data transfer between a corresponding crosspoint switch 121-124 and control
processor 202 requires that the data be converted between serial and parallel format to correspond to the difference in interface format between crosspoint switches 121-124 and control processor 202. A disk controller 204 is also provided in disk drive 130-0 to implement the low level electrical interface required by the commodity disk drive 201. The commodity disk drive 201 has an ESDI interface which must be interfaced with control processor 202. Disk controller 204 provides this function. Thus, data communication between control processor 202 and commodity disk drive 201 is accomplished over bus 206, cache memory 203, bus 207, disk controller 204, and bus 208. Cache memory 203 is provided as a buffer to improve performance of the disk drive 130-0. The cache is capable of holding an entire track of data for each physical data head in the commodity disk drive 201. Disk controller 204 provides serialization and deserialization of data, CRC/ECC generation, checking and correction, and NRZ data encoding. The addressing information, such as head select and other types of control signals, is provided by control processor 202 and communicated over bus 205 to commodity disk drive 201. In addition, control processor 202 is connected by signal lead 262 to an interface buffer 261 which interconnects control processor 202 with signal lead 143 to disk drive manager 140. This communication path is provided for diagnostic and control purposes. For example, disk drive manager 140 can signal control processor 202 to power commodity disk drive 201 down when disk drive 130-0 is in the standby mode. In this fashion, commodity disk drive 201 remains in an idle state until it is selected by disk drive manager 140, at which time disk drive manager 140 can activate the disk drive by providing the appropriate control signals over lead 143.
Control Module
Figure 3 illustrates control module 101 in block diagram form. Control module 101 includes a control processor 301 that is responsible for monitoring the various interfaces to director 151 and the associated crosspoint switch 121. Control processor 301 monitors CTL-I interfaces 309 and 311 for commands from director 151 and, when a command is received by one of these two interfaces 309, 311, control processor 301 reads the command over the corresponding signal lead 310 or 312, respectively. Control processor 301 is connected by bus 304 to a cache memory 305 which is used to improve performance. Control processor 301 routes the command and/or data information received from director 151 to the appropriate disk groups through the N serial command/data interfaces illustrated as serial/parallel interface 302. Serial/parallel interface 302 provides N+1 interfaces for the N+1 data and control channels 111 that are connected to the associated crosspoint switch 121. Control processor 301 takes the data that is transmitted by director 151 and divides the data into N segments. Control processor 301 also generates a parity segment for error recovery purposes. Control processor 301 is responsible for all gap processing in support of the count/key/data format as received from the associated central processing unit. Control processor 301 receives information from disk drive manager 140 over lead 145. This control data is written into disk drive manager interface 313 where it can be retrieved over lead 314 by control processor 301. The control information from disk drive manager 140 is data indicative of the interconnections required in crosspoint switch 121 to connect the N+1 data channels 111 of control module 101 with the selected N+1 disk drives out of the pool of disk drives 130-0 to 130-M. Thus, control processor 301 generates the N+1 data and parity segments and stores these in cache memory 305 to be transmitted to the N+1 selected disk drives. In order to accomplish this transfer, control processor 301 transmits control signals over lead 307 via crosspoint control logic 308 to crosspoint switch 121 to indicate the interconnections required in crosspoint switch 121 to interconnect the N+1 signal channels 111 of control module 101 with the corresponding signal leads 141-0 to 141-M associated with the selected disk drives. Once the crosspoint control signals are transmitted to the associated crosspoint switch 121, the N+1 data plus parity segments are transmitted by control processor 301 outputting these segments from cache memory 305 over bus 306 through serial/parallel interface 302 onto the N+1 serial data channels 111.
Count/Key/Data and Address Translation
To support a 3380 image, the count/key/data format of the 3380 type of disk drive must be supported. The count/key/data information is stored on a physical track as data. The physical drives are formatted so that an integral number of virtual tracks are stored there, one per sector. To simulate the single density volume granularity of 630 MB, separate caches are provided for each control module track to allow parallel accesses by different control modules. For example, the single density 3380 track has a capacity of approximately 50 KB. If a parity group of 8 data disk drives +1 parity disk drive is used, 50/8 or 6.25K is stored on each physical disk drive. One of the primary responsibilities of the control modules is to translate virtual 3380 addresses to physical addresses. A virtual address consists of an actuator number, a cylinder number, a head number, and a target record. This is translated to the parity group number, the physical cylinder within the parity group, the head number and the sector index within the physical track to pick one of the four virtual tracks stored there. This is accomplished by first generating a "sequential cylinder index" from the virtual actuator number and virtual cylinder number:
SEQ CYL INDEX = VIRTUAL ACTUATOR x (#CYLINDER/ACTUATOR) + VIRTUAL CYLINDER
The physical group number that contains the data is found by taking the integer value that results from dividing the sequential cylinder index by the number of virtual cylinders per physical group:
GROUP = INT( SEQ CYL INDEX / #VIRTUAL CYL PER GROUP )
For example, if we assume there are 4 virtual tracks per physical track, then given the 1632 tracks that are contained in a typical disk drive, there are
4x1632 = 6528 virtual tracks per group. The physical cylinder within the appropriate group that contains the desired data is found by taking the integer value that results from dividing the difference between the sequential cylinder index and the base cylinder index for the particular group by the number of virtual tracks per physical track:

PHYSICAL CYL = INT( (SEQ CYL INDEX - GROUP x #VIRTUAL CYL PER GROUP) / #VIRTUAL TRACKS PER PHYSICAL TRACK )
Because both the 3380 and the typical disk drive units contain 15 data heads per actuator, the physical head value is the numerical equivalent of the virtual head value. The index into the physical track to identify the specific virtual track is given by the remainder of the physical cylinder calculation given above:
SECTOR INDEX = REM( (SEQ CYL INDEX - GROUP x #VIRTUAL CYL PER GROUP) / #VIRTUAL TRACKS PER PHYSICAL TRACK )
The above calculations uniquely identify a single virtual track in the physical implementation. The virtual target record is then used to process the virtual track for the specific information requested.
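Because the translation above is purely arithmetic, it can be expressed compactly in code. The sketch below is one possible reading of those formulas; the function and parameter names are illustrative, and the example values (4 virtual tracks per physical track, 4 x 1632 = 6528 virtual cylinders per group, and an assumed 885 virtual cylinders per 3380 actuator) are taken from or added to the example in the text only for demonstration.

```python
def translate_3380_address(virtual_actuator, virtual_cylinder, virtual_head,
                           cylinders_per_actuator,
                           virtual_cyl_per_group,
                           virtual_tracks_per_physical_track=4):
    """Map a virtual 3380 address to (group, physical cylinder, head, sector index)."""
    seq_cyl_index = virtual_actuator * cylinders_per_actuator + virtual_cylinder
    group = seq_cyl_index // virtual_cyl_per_group
    offset = seq_cyl_index - group * virtual_cyl_per_group
    physical_cyl = offset // virtual_tracks_per_physical_track
    sector_index = offset % virtual_tracks_per_physical_track
    # The 3380 and the small form factor drives both have 15 data heads per
    # actuator, so the head number carries over unchanged.
    return group, physical_cyl, virtual_head, sector_index

# Example: virtual actuator 2, cylinder 100, head 7; 885 cylinders per 3380
# actuator is an assumed figure, 6528 virtual cylinders per group follows the text.
print(translate_3380_address(virtual_actuator=2, virtual_cylinder=100,
                             virtual_head=7, cylinders_per_actuator=885,
                             virtual_cyl_per_group=6528))
```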
Therefore, the disk drive memory maintains a mapping between the desired 3380 image and the physical configuration of the disk drive memory. This mapping enables the disk drive memory to emulate whatever large form factor disk drive that is desired.
Disk Drive Manager

Figure 4 illustrates the disk drive manager in block diagram form. The disk drive manager 140 is the essential controller for the entire disk drive memory illustrated in Figure 1. Disk drive manager 140 has separate communication paths to each of control modules 101-104 via associated control module interfaces 411-414. In addition, disk drive manager 140 has a communication path to each of the disk drives 130-0 to 130-M in the disk drive memory independent of the crosspoint switches 121-124. The disk drive manager 140 also has primary responsibility for diagnostic activities within this architecture of the disk drive memory and maintains all history and error logs in history log memory 404. The central part of disk drive manager 140 is processor 401 which provides the intelligence and operational programs to implement these functions. Processor 401 is connected via busses 421-424 with the associated control module interfaces 411-414 to communicate with control modules 101-104 respectively. In addition, bus 403 connects processor 401 with disk control interface 402 that provides a communication path over lead 143 to all of the disk drives 130-0 to 130-M in the disk drive memory. The history log 404 is connected to processor 401 via bus 405. Processor 401 determines the mapping from virtual to physical addressing in the disk drive memory and provides that information to control modules 101-104 over the corresponding signal leads 145. Processor 401 also maintains the pool of spare disk drives and allocates new spares, when disk failures occur, as requested by the affected control module 101-104.
At system powerup, disk drive manager 140 determines the number of spare disk drives that are available in the disk drive memory. Based on system capacity requirements, disk drive manager 140 forms parity groups out of this pool of spare disk drives. The specific information of which physical disks are contained in a parity group is stored in local memory in disk drive manager 140 and a copy of that information is transmitted to each of control modules 101-104 so that these control modules 101-104 can translate the virtual addresses received with the data from the associated central processing unit to physical parity groups that consist of the corresponding selected disk drives. Because of the importance of the system mapping information, redundant copies protected by error correction codes are stored in non-volatile memory in disk drive manager 140. When a request for a specific piece of information is received by a control module 101-104 from a storage director 151-154, the control module 101-104 uses the system mapping information supplied by disk drive manager 140 to determine which physical disk group contains the data. Based on this translation information, the corresponding control module 101 sets the associated crosspoint switch 121 to interconnect the N+1 data channels 111 of control module 101 with the selected disk drives identified by this translation information. In the case where the associated central processing unit is writing data into the disk drive memory, the control module divides the data supplied by the central processing unit into N segments and distributes it along with a parity segment to the individual members of the parity group. In the situation where data is read from the disk drive memory to the central processing unit, the control module must perform the inverse operation by reassembling the data streams read from the selected disk drives in the parity group.
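One way to picture the bookkeeping that disk drive manager 140 performs at powerup is as a small table that partitions the available drives into parity groups and a spare pool. The patent does not prescribe any particular data structure for this mapping; the sketch below, with invented drive identifiers and group sizes, is only a schematic illustration of that bookkeeping.

```python
def form_parity_groups(available_drives, group_size, spares_to_reserve):
    """Partition the pool of drives into parity groups of `group_size`
    members (N data drives + 1 parity drive) and a reserved spare pool."""
    drives = list(available_drives)
    spares = [drives.pop() for _ in range(spares_to_reserve)]
    groups = {}
    group_no = 0
    while len(drives) >= group_size:
        groups[group_no] = [drives.pop() for _ in range(group_size)]
        group_no += 1
    return groups, spares, drives   # any leftover drives stay unassigned

# Example: 12 drives, parity groups of 4+1, 2 spares reserved.
groups, spares, unused = form_parity_groups(range(12), group_size=5, spares_to_reserve=2)
```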
Disk Drive Malfunction
The control module determines whether an individual disk drive in the parity group it is addressing has malfunctioned. The control module that has detected a bad disk drive transmits a control message to disk drive manager 140 over the corresponding control signal lead 145 to indicate that a disk drive has failed, is suspect or that a new disk drive is needed. When a request for a spare disk drive is received by the disk drive manager 140, the faulty disk drive is taken out of service and a spare disk drive is activated from the spare pool by the disk drive manager 140. This is accomplished by rewriting the identification of that parity group that contains the bad disk drive. The new selected disk drive in the parity group is identified by control signals which are transmitted to all of control modules 101-104. This insures that the system mapping information stored in each of control modules 101-104 is kept up to date.
Once the new disk drive is added to the parity group, it is tested and, if found to be operating properly, it replaces the failed disk drive in the system mapping tables. The control module that requested the spare disk drive reconstructs the data for the new disk drive using the remaining N-1 operational data disk drives and the available parity information from the parity disk drive. Before reconstruction is complete on the disk, the data is still available to the CPU, but it must be reconstructed on line rather than simply read from the disk. When this data reconstruction operation is complete, the reconstructed segment is written on the replacement disk drive and control signals are transmitted to the disk drive manager 140 to indicate that the reconstruction operation is complete and that the parity group is now again operational. Disk drive manager 140 transmits control signals to all of the control modules in the disk drive memory to inform the control modules that data reconstruction is complete so that the parity group can be accessed without further data reconstruction. This dynamically reconfigurable attribute of the disk drive memory enables this system to be very flexible. In addition, the dynamically configurable aspect of the communication path between the control modules and the disk drives permits the architecture to be very flexible. With the same physical disk drive memory, the user can implement a disk drive memory that has a high data storage capacity and which requires shorter periodic repair intervals, or a disk drive memory that has a lower data storage capacity with longer required repair intervals, simply by changing the number of active disk drive parity groups. In addition, the disk drive memory has the ability to detect new spare disk drives when they are plugged into the system, thereby enabling the disk drive memory to grow as the storage or reliability needs change without having to reprogram the disk drive memory control software.
Architectural Trade-offs
There are a variety of trade-offs that exist within this disk drive memory architecture. The parameters that may be varied include system reliability, system repair interval, system data storage capacity and parity group size. Each parameter, when varied to cause one aspect of the system performance to improve, typically causes another characteristic of the system to worsen. Thus, if one lowers the system reliability, then fewer spare disk drives are required and there will be a higher system failure rate, i.e. more frequent data loss. A user can reduce the periodic repair interval. This reduces the number of spare disk drives required in the disk drive memory but causes increased maintenance costs. Similarly, if the data storage capacity requirements of the disk drive memory are reduced, fewer spare disk drives are required because of the reduced number of active disk drives. There is an approximately linear relationship between the data storage capacity of the disk drive memory and the number of spare disk drives required for a fixed reliability. Another variable characteristic is the size of the parity group. As the size of the parity group becomes larger, there is less disk drive overhead because fewer groups are required for a given amount of data storage capacity and one parity disk is required per group regardless of its size. The instantaneous data rate is larger from a large parity group because of the increased number of disk drives operating in parallel. However, the larger group size reduces the reliability of the spare swap process due to the fact that there is an increased probability of more than one disk drive failing at the same time. This also reduces the number of distinct physical actuators that may do simultaneous seeks of data on the disk drives.
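The group-size trade-off described above can be made concrete with a little arithmetic: for a parity group of N data drives plus one parity drive, the parity overhead is 1/(N+1) of the drives in the group, while the aggregate transfer rate scales with the number of drives operating in parallel. The sketch below simply tabulates those two quantities for a few group sizes; the figures are illustrative ratios derived from the statements in the text, not measured performance.

```python
def group_tradeoffs(n_data_drives: int):
    """Return (parity overhead fraction, relative transfer rate) for a
    parity group of n data drives + 1 parity drive."""
    group_size = n_data_drives + 1
    parity_overhead = 1 / group_size          # fraction of drives holding parity
    relative_rate = group_size                # parallel transfer vs. one drive
    return parity_overhead, relative_rate

for n in (2, 4, 8, 16):
    overhead, rate = group_tradeoffs(n)
    print(f"{n}+1 group: {overhead:.1%} parity overhead, {rate}x single-drive rate")
```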
While a specific embodiment of this invention has been disclosed herein, it is expected that those skilled in the art can design other embodiments that differ from this particular embodiment but fall within the scope of the appended claims.

Claims

I CLAIM:
1. A disk memory system for storing data files for associated data processing devices comprising: a plurality of disk drives; means for writing each of said data files received from said associated data processing devices and parity data associated with said data file in segments across two or more of said disk drives; means for reserving one or more of said plurality of disk drives as backup disk drives; and means responsive to the failure of one of said two or more disk drives for switchably connecting one of said backup disk drives in place of said failed disk drive.
2. The system of claim 1 further including: means for reconstructing the segment of said data file written on said failed disk drive, using said associated parity data.
3. The system of claim 2 further including: means for writing said reconstructed segment of said data file on to said one backup disk drive.
4. The system of claim 2 wherein said reconstructing means includes: means for identifying said failed disk drive; and means for generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
5. The system of claim 1 wherein said writing means includes: means for dividing said data file into two or more segments; and means for generating parity data for said segmented data file.
6. The system of claim 5 wherein said writing means further includes: means for writing each of said segments and said parity data on to a different one of said two or more disk drives.
7. The system of claim 1 further including: means for maintaining data indicative of the correspondence between said data file and the identity of said two or more disk drives.
8. The system of claim 1 further including: means responsive to a request for said data file from one of said associated data processing devices for concatenating said segments of said data file.
9. The system of claim 8 further including: means responsive to said concatenating means for transmitting said concatenated segments of said data file to said requesting data processing device.
10. A method of storing data files for data processing devices on an associated disk memory system that includes a plurality of disk drives comprising the steps of: writing each of said data files received from said associated data processing devices and parity data associated with said data file across two or more of said disk drives; reserving one or more of said plurality of disk drives as backup disk drives; and switchably connecting one of said backup disk drives in place of said failed disk drive in response to the failure of one of said two or more disk drives.
11. The method of claim 10 further including the step of: reconstructing the segment of said data file written on said failed disk drive, using said associated parity data.
12. The method of claim 11 further including the step of: writing said reconstructed segment of said data file on to said one backup disk drive.
13. The method of claim 11 wherein said step of reconstructing includes the steps of: identifying said failed disk drive; and generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
14. The method of claim 11 wherein said step of writing includes the steps of: dividing said data file into one or more segments; and generating parity data for said segmented data file.
15. The method of claim 14 wherein said step of writing further includes the step of: writing each of said segments and said parity data on to a different one of said two or more disk drives.
16. The method of claim 10 further including the step of: maintaining data indicative of the correspondence between said data file and the identity of said two or more disk drives.
17. The method of claim 10 further including the step of: concatenating, in response to a request for said data file from one of said associated data processing devices, said segments of said data file.
18. The method of claim 17 further including the step of: transmitting said concatenated segments of said data file to said requesting data processing device.
19. A disk memory system for storing data files for associated data processing devices comprising: a plurality of disk drives; means for reserving one or more of said plurality of disk drives as backup disk drives; means responsive to the receipt of one of said data files from said associated data processing devices for selecting two or more of said plurality of disk drives; means responsive to said selecting means for writing said data file and parity data associated with said data file across said two or more disk drives; and means responsive to the failure of one of said two or more disk drives for switchably connecting one of said backup disk drives in place of said failed disk drive.
20. The system of claim 19 further including: means for reconstructing the segment of said data file written on said failed disk drive, using said associated parity data.
21. The system of claim 20 further including: means for writing said reconstructed segment of said data file on to said one backup disk drive.
22. The system of claim 20 wherein said reconstructing means includes: means for identifying said failed disk drive; and means for generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
23. The system of claim 21 wherein said writing means includes: means for dividing said data file into one or more segments; and means for generating parity data for said segmented data file.
24. The system of claim 23 wherein said writing means further includes: means for writing each of said segments and said parity data on to a different one of said two or more disk drives.
25. The system of claim 19 further including: means for maintaining data indicative of the correspondence between said data file and the identity of said two or more disk drives.
26. The system of claim 19 further including: means responsive to a request for said data file from one of said associated data processing devices for concatenating said segments of said data file.
27. The system of claim 26 further including: means responsive to said concatenating means for transmitting said concatenated segments of said data file to said requesting data processing device.
28. A method of storing data files on a disk memory system that includes a plurality of disk drives, for associated data processing devices comprising the steps of: reserving one or more of said plurality of disk drives as backup disk drives; selecting two or more of said plurality of disk drives; writing said data file and parity data associated with said data file across said two or more disk drives; and switchably connecting, in response to the failure of one of said two or more disk drives, one of said backup disk drives in place of said failed disk drive.
29. The method of claim 28 further including the step of: reconstructing the segment of said data file written on said failed disk drive, using said associated parity data.
30. The method of claim 29 further including the step of: writing said reconstructed segment of said data file on to said one backup disk drive.
31. The method of claim 28 wherein said step of reconstructing includes the steps of: identifying said failed disk drive; and generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
32. The method of claim 28 wherein said step of writing includes the steps of: dividing said data file into two or more segments; and generating parity data for said segmented data file.
33. The method of claim 32 wherein said step of writing further includes the step of: writing each of said segments and said parity data on to a different one of said two or more disk drives.
34. The method of claim 28 further including the step of: maintaining data indicative of the correspondence between said data file and the identity of said two or more disk drives.
35. The method of claim 28 further including the step of: concatenating, in response to a request for said data file from one of said associated data processing devices, said segments of said data file.
36. The method of claim 35 further including the step of: transmitting said concatenated segments of said data file to said requesting data processing device.
37. A disk memory system for storing data files that are accessible by associated data processing devices comprising: a plurality of disk drives for storing data thereon; means for transferring data between said disk memory system and said associated data processing devices; means for segmenting each data file received from said associated data processing devices via said transferring means into n segments; means responsive to said segmenting means for generating data parity information for said segmented data file; means for switchably interconnecting n+1 of said disk drives with said segmenting means to write said n segments plus said parity data on to said n+1 disk drives; means for reserving one or more of said plurality of disk drives as backup disk drives; means responsive to the failure of one of said n+1 disk drives for switchably connecting one of said backup disk drives in place of said failed disk drive; means for reconstructing the segment of said data file written on said failed disk drive, using said associated parity data; and means for writing said reconstructed segment of said data file on to said one backup disk drive.
38. In a disk memory system including a plurality of disk drives a method of storing data files that are accessible by associated data processing devices comprising the steps of: transferring data between said disk memory system and said associated data processing devices; segmenting each data file received from said associated data processing devices via said transferring means into n segments; generating data parity information for said segmented data file; switchably interconnecting n+1 of said disk drives with said segmenting means to write said n segments plus said parity data on to said n+1 disk drives; reserving one or more of said plurality of disk drives as backup disk drives; switchably connecting, in response to the failure of one of said n+1 disk drives, one of said backup disk drives in place of said failed disk drive; reconstructing the segment of said data file written on said failed disk drive, using said associated parity data; and writing said reconstructed segment of said data file on to said one backup disk drive.
39. A disk memory system for emulating a large form factor disk drive to store data files that are accessible by associated data processing devices comprising: a plurality of small form factor disk drives for storing data thereon; means for selecting n+1 of said plurality of disk drives; means for segmenting each data file received from said associated data processing devices into n segments; means responsive to said segmenting means for generating data parity information for said segmented data file; and means for switchably interconnecting n+1 of said disk drives with said segmenting means to write said n segments plus said parity data on to said n+1 selected disk drives.
40. The system of claim 39 further including: means for reserving one or more of said plurality of disk drives as backup disk drives; and means responsive to the failure of one of said n+l disk drives for switchably connecting one of said backup disk drives in place of said failed disk drive.
41. The system of claim 40 further including: means for reconstructing the segment of said data file written on said failed disk drive, using said associated parity data; and means for writing said reconstructed segment of said data file on to said one backup disk drive.
42. The system of claim 41 wherein said reconstructing means includes: means for identifying said failed disk drive; and means for generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
43. The system of claim 39 further including: means for maintaining data indicative of the correspondence between said data file and the identity of said two or more disk drives.
44. The system of claim 39 further including: means responsive to a request for said data file from one of said associated data processing devices for concatenating said segments of said data file.
45. The system of claim 44 further including: means responsive to said concatenating means for transmitting said concatenated segments of said data file to said requesting data processing device.
46. In a disk memory system a method of emulating a large form factor disk drive using a plurality of small form factor disk drives to store data files that are accessible by associated data processing devices comprising the steps of: selecting n+1 of said plurality of disk drives; segmenting each data file received from said associated data processing devices into n segments; generating data parity information for said segmented data file; and switchably interconnecting n+1 of said disk drives with said segmenting means to write said n segments plus said parity data on to said n+1 selected disk drives.
47. The method of claim 46 further including the steps of: reserving one or more of said plurality of disk drives as backup disk drives; and switchably connecting, in response to the failure of one of said n+1 disk drives, one of said backup disk drives in place of said failed disk drive.
48. The method of claim 47 further including the steps of: reconstructing the segment of said data file written on said failed disk drive, using said associated parity data; and writing said reconstructed segment of said data file on to said one backup disk drive.
49. The method of claim 48 wherein said step of reconstructing includes the steps of: identifying said failed disk drive; and generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
50. The method of claim 46 further including the step of: maintaining data indicative of the correspondence between said data file and the identity of said two or more disk drives.
51. The method of claim 46 further including the step of: concatenating, in response to a request for said data file from one of said associated data processing devices, said segments of said data file.
52. The method of claim 51 further including the step of: transmitting said concatenated segments of said data file to said requesting data processing device.
53. A disk memory system for storing data files for one or more associated data processing devices comprising: a plurality of disk drives; means for assigning a subset of said plurality of said disk drives to one or more parity groups, each parity group consisting of n+1 disk drives; means responsive to the receipt of a data file from one of said associated data processing devices for segmenting said received data file into n equal segments; means for generating a parity segment using said n segments of said received data file; and means for writing said n segments and said parity segment on one of said parity groups.
54. The apparatus of claim 53 further including: means for selecting one or more of said plurality of disk drives as backup disk drives.
55. The system of claim 54 further including: means responsive to the failure of one of said disk drives in a parity group for reconstructing the segment of said data file written on said failed disk drive, using said associated parity data.
56. The system of claim 55 further including: means for writing said reconstructed segment of said data file on to one of said backup disk drives.
57. The system of claim 55 wherein said reconstructing means includes: means for identifying said failed disk drive; and means for generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
58. The system of claim 53 further including: means responsive to a request for said data file from one of said associated data processing devices for concatenating said segments of said data file.
59. In a disk memory system including a plurality of disk drives a method of storing data files for one or more associated data processing devices comprising the steps of: assigning a subset of said plurality of said disk drives to one or more parity groups, each parity group consisting of n+1 disk drives; segmenting, in response to the receipt of a data file from one of said associated data processing devices, said received data file into n equal segments; generating a parity segment using said n segments of said received data file; and writing said n segments and said parity segment on one of said parity groups.
60. The method of claim 59 further including the step of: selecting one or more of said plurality of disk drives as backup disk drives.
61. The method of claim 60 further including the step of: reconstructing, in response to the failure of one of said disk drives in a parity group, the segment of said data file written on said failed disk drive, using said associated parity data.
62. The method of claim 61 further including the step of: writing said reconstructed segment of said data file on to one of said backup disk drives.
63. The method of claim 62 wherein said step of reconstructing includes the steps of: identifying said failed disk drive; and generating said segment written on said failed disk drive using said associated parity data and the remainder of said data file.
64. The method of claim 59 further including the step of: concatenating, in response to a request for said data file from one of said associated data processing devices, said segments of said data file.
PCT/US1989/001677 1988-06-28 1989-04-20 Disk drive memory WO1990000280A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE68919219T DE68919219T2 (en) 1988-06-28 1989-04-20 MEMORY FOR DISK UNIT.
EP89906506A EP0422030B1 (en) 1988-06-28 1989-04-20 Disk drive memory

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US212,434 1988-06-28
US07/212,434 US4914656A (en) 1988-06-28 1988-06-28 Disk drive memory

Publications (1)

Publication Number Publication Date
WO1990000280A1 true WO1990000280A1 (en) 1990-01-11

Family

ID=22790991

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1989/001677 WO1990000280A1 (en) 1988-06-28 1989-04-20 Disk drive memory

Country Status (8)

Country Link
US (1) US4914656A (en)
EP (1) EP0422030B1 (en)
JP (1) JP2831072B2 (en)
AT (1) ATE113739T1 (en)
AU (3) AU621126B2 (en)
CA (1) CA1322409C (en)
DE (1) DE68919219T2 (en)
WO (1) WO1990000280A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991013404A1 (en) * 1990-03-02 1991-09-05 Sf2 Corporation Data storage apparatus and method
WO1991014982A1 (en) * 1990-03-29 1991-10-03 Sf2 Corporation Methods and apparatus for assigning signatures to members of a set of mass storage devices
EP0450801A2 (en) * 1990-03-30 1991-10-09 International Business Machines Corporation Disk drive array storage system
WO1991016711A1 (en) * 1990-04-16 1991-10-31 Storage Technology Corporation Logical track write scheduling system for a parallel disk drive array data storage subsystem
EP0467079A2 (en) * 1990-06-19 1992-01-22 Fujitsu Limited Disc array storage system
EP0482819A2 (en) * 1990-10-23 1992-04-29 Emc Corporation On-line reconstruction of a failed redundant array system
EP0485110A2 (en) * 1990-11-09 1992-05-13 Emc Corporation Logical partitioning of a redundant array storage system
EP0508441A2 (en) * 1991-04-11 1992-10-14 Mitsubishi Denki Kabushiki Kaisha Recording device having short data writing time
EP0515499A1 (en) * 1990-02-13 1992-12-02 Storage Technology Corporation Disk drive memory
GB2270791A (en) * 1992-09-21 1994-03-23 Grass Valley Group Video disk storage array
GB2278228A (en) * 1993-05-21 1994-11-23 Mitsubishi Electric Corp An arrayed recording apparatus
US5469453A (en) * 1990-03-02 1995-11-21 Mti Technology Corporation Data corrections applicable to redundant arrays of independent disks
US5475697A (en) * 1990-03-02 1995-12-12 Mti Technology Corporation Non-volatile memory storage of write operation indentifier in data storage device
EP0701198A1 (en) * 1994-05-19 1996-03-13 Starlight Networks, Inc. Method for operating an array of storage units
US5721950A (en) * 1992-11-17 1998-02-24 Starlight Networks Method for scheduling I/O transactions for video data storage unit to maintain continuity of number of video streams which is limited by number of I/O transactions
US5802394A (en) * 1994-06-06 1998-09-01 Starlight Networks, Inc. Method for accessing one or more streams in a video storage system using multiple queues and maintaining continuity thereof
US6327673B1 (en) 1991-01-31 2001-12-04 Hitachi, Ltd. Storage unit subsystem
US6874101B2 (en) 1991-01-31 2005-03-29 Hitachi, Ltd. Storage unit subsystem

Families Citing this family (135)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT1217801B (en) * 1988-06-08 1990-03-30 Honeywell Rull Italia S P A APPARATUS FOR REMOVAL / HOT INSERTION ON A UNIT CONNECTION BUS, WITH NON-REMOVABLE MAGNETIC RECORDING SUPPORT
US5283791A (en) * 1988-08-02 1994-02-01 Cray Research Systems, Inc. Error recovery method and apparatus for high performance disk drives
US5218689A (en) * 1988-08-16 1993-06-08 Cray Research, Inc. Single disk emulation interface for an array of asynchronously operating disk drives
US5148432A (en) * 1988-11-14 1992-09-15 Array Technology Corporation Arrayed disk drive system and method
US5341479A (en) * 1989-01-31 1994-08-23 Storage Technology Corporation Address mark triggered read/write head buffer
US5185746A (en) * 1989-04-14 1993-02-09 Mitsubishi Denki Kabushiki Kaisha Optical recording system with error correction and data recording distributed across multiple disk drives
US5033049A (en) * 1989-06-12 1991-07-16 International Business Machines Corporation On-board diagnostic sub-system for SCSI interface
US5146574A (en) * 1989-06-27 1992-09-08 Sf2 Corporation Method and circuit for programmable selecting a variable sequence of element using write-back
US5101492A (en) * 1989-11-03 1992-03-31 Compaq Computer Corporation Data redundancy and recovery protection
US5072378A (en) * 1989-12-18 1991-12-10 Storage Technology Corporation Direct access storage device with independently stored parity
US5402428A (en) 1989-12-25 1995-03-28 Hitachi, Ltd. Array disk subsystem
JPH0786810B2 (en) * 1990-02-16 1995-09-20 富士通株式会社 Array disk device
US6728832B2 (en) * 1990-02-26 2004-04-27 Hitachi, Ltd. Distribution of I/O requests across multiple disk units
US5680574A (en) * 1990-02-26 1997-10-21 Hitachi, Ltd. Data distribution utilizing a master disk unit for fetching and for writing to remaining disk units
US5315708A (en) * 1990-02-28 1994-05-24 Micro Technology, Inc. Method and apparatus for transferring data through a staging memory
US5212785A (en) * 1990-04-06 1993-05-18 Micro Technology, Inc. Apparatus and method for controlling data flow between a computer and memory devices
US5134619A (en) * 1990-04-06 1992-07-28 Sf2 Corporation Failure-tolerant mass storage system
US5140592A (en) * 1990-03-02 1992-08-18 Sf2 Corporation Disk array system
US5388243A (en) * 1990-03-09 1995-02-07 Mti Technology Corporation Multi-sort mass storage device announcing its active paths without deactivating its ports in a network architecture
US5088081A (en) * 1990-03-28 1992-02-11 Prime Computer, Inc. Method and apparatus for improved disk access
US5202856A (en) * 1990-04-05 1993-04-13 Micro Technology, Inc. Method and apparatus for simultaneous, interleaved access of multiple memories by multiple ports
US5956524A (en) * 1990-04-06 1999-09-21 Micro Technology Inc. System and method for dynamic alignment of associated portions of a code word from a plurality of asynchronous sources
US5214778A (en) * 1990-04-06 1993-05-25 Micro Technology, Inc. Resource management in a multiple resource system
US5414818A (en) * 1990-04-06 1995-05-09 Mti Technology Corporation Method and apparatus for controlling reselection of a bus by overriding a prioritization protocol
US5233692A (en) * 1990-04-06 1993-08-03 Micro Technology, Inc. Enhanced interface permitting multiple-byte parallel transfers of control information and data on a small computer system interface (SCSI) communication bus and a mass storage system incorporating the enhanced interface
US5130992A (en) * 1990-04-16 1992-07-14 International Business Machines Corporaiton File-based redundant parity protection in a parallel computing system
US5247638A (en) * 1990-06-18 1993-09-21 Storage Technology Corporation Apparatus for compressing data in a dynamically mapped virtual data storage subsystem
JPH0731582B2 (en) * 1990-06-21 1995-04-10 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and apparatus for recovering parity protected data
EP0464551A3 (en) * 1990-06-25 1992-11-19 Kabushiki Kaisha Toshiba Method and apparatus for controlling drives coupled to a computer system
US5673412A (en) 1990-07-13 1997-09-30 Hitachi, Ltd. Disk system and power-on sequence for the same
US5265098A (en) * 1990-08-03 1993-11-23 International Business Machines Corporation Method and means for managing DASD array accesses when operating in degraded mode
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
CA2043493C (en) * 1990-10-05 1997-04-01 Ricky C. Hetherington Hierarchical integrated circuit cache memory
EP0481735A3 (en) * 1990-10-19 1993-01-13 Array Technology Corporation Address protection circuit
US5163162A (en) * 1990-11-14 1992-11-10 Ibm Corporation System and method for data recovery in multiple head assembly storage devices
JP2752247B2 (en) * 1990-11-29 1998-05-18 富士通株式会社 Information storage device
US5235601A (en) * 1990-12-21 1993-08-10 Array Technology Corporation On-line restoration of redundancy information in a redundant array system
US5274799A (en) * 1991-01-04 1993-12-28 Array Technology Corporation Storage device array architecture with copyback cache
US5271012A (en) * 1991-02-11 1993-12-14 International Business Machines Corporation Method and means for encoding and rebuilding data contents of up to two unavailable DASDs in an array of DASDs
US5579475A (en) * 1991-02-11 1996-11-26 International Business Machines Corporation Method and means for encoding and rebuilding the data contents of up to two unavailable DASDS in a DASD array using simple non-recursive diagonal and row parity
US5257362A (en) * 1991-03-08 1993-10-26 International Business Machines Corporation Method and means for ensuring single pass small read/write access to variable length records stored on selected DASDs in a DASD array
US5345565A (en) * 1991-03-13 1994-09-06 Ncr Corporation Multiple configuration data path architecture for a disk array controller
JP2923702B2 (en) * 1991-04-01 1999-07-26 株式会社日立製作所 Storage device and data restoration method thereof
US5506979A (en) * 1991-04-02 1996-04-09 International Business Machines Corporation Method and means for execution of commands accessing variable length records stored on fixed block formatted DASDS of an N+2 DASD synchronous array
JP3187525B2 (en) * 1991-05-17 2001-07-11 ヒュンダイ エレクトロニクス アメリカ Bus connection device
US5239659A (en) * 1991-06-19 1993-08-24 Storage Technology Corporation Phantom duplex copy group apparatus for a disk drive array data storage subsystem
EP0519669A3 (en) * 1991-06-21 1994-07-06 Ibm Encoding and rebuilding data for a dasd array
US5257391A (en) * 1991-08-16 1993-10-26 Ncr Corporation Disk controller having host interface and bus switches for selecting buffer and drive busses respectively based on configuration control signals
US5333143A (en) * 1991-08-29 1994-07-26 International Business Machines Corporation Method and means for b-adjacent coding and rebuilding data from up to two unavailable DASDS in a DASD array
US5274507A (en) * 1991-09-09 1993-12-28 Paul Lee Parallel data encoding for moving media
US5636358A (en) * 1991-09-27 1997-06-03 Emc Corporation Method and apparatus for transferring data in a storage device including a dual-port buffer
US5499337A (en) 1991-09-27 1996-03-12 Emc Corporation Storage device array architecture with solid-state redundancy unit
US5237658A (en) * 1991-10-01 1993-08-17 Tandem Computers Incorporated Linear and orthogonal expansion of array storage in multiprocessor computing systems
US5379417A (en) * 1991-11-25 1995-01-03 Tandem Computers Incorporated System and method for ensuring write data integrity in a redundant array data storage system
US5287462A (en) * 1991-12-20 1994-02-15 Ncr Corporation Bufferless SCSI to SCSI data transfer scheme for disk array applications
EP0551009B1 (en) * 1992-01-08 2001-06-13 Emc Corporation Method for synchronizing reserved areas in a redundant storage array
US5341381A (en) * 1992-01-21 1994-08-23 Tandem Computers, Incorporated Redundant array parity caching system
US5469566A (en) * 1992-03-12 1995-11-21 Emc Corporation Flexible parity generation circuit for intermittently generating a parity for a plurality of data channels in a redundant array of storage units
WO1993018456A1 (en) * 1992-03-13 1993-09-16 Emc Corporation Multiple controller sharing in a redundant storage array
JP2868141B2 (en) * 1992-03-16 1999-03-10 株式会社日立製作所 Disk array device
US5708668A (en) * 1992-05-06 1998-01-13 International Business Machines Corporation Method and apparatus for operating an array of storage devices
JP3183719B2 (en) * 1992-08-26 2001-07-09 三菱電機株式会社 Array type recording device
US5418925A (en) * 1992-10-23 1995-05-23 At&T Global Information Solutions Company Fast write I/O handling in a disk array using spare drive for buffering
US5388108A (en) * 1992-10-23 1995-02-07 Ncr Corporation Delayed initiation of read-modify-write parity operations in a raid level 5 disk array
US5487160A (en) * 1992-12-04 1996-01-23 At&T Global Information Solutions Company Concurrent image backup for disk storage system
US5819109A (en) * 1992-12-07 1998-10-06 Digital Equipment Corporation System for storing pending parity update log entries, calculating new parity, updating the parity block, and removing each entry from the log when update is complete
US5519849A (en) * 1992-12-07 1996-05-21 Digital Equipment Corporation Method of reducing the complexity of an I/O request to a RAID-4 or RAID-5 array
US5416915A (en) * 1992-12-11 1995-05-16 International Business Machines Corporation Method and system for minimizing seek affinity and enhancing write sensitivity in a DASD array
US5423046A (en) * 1992-12-17 1995-06-06 International Business Machines Corporation High capacity data storage system using disk array
US5689678A (en) 1993-03-11 1997-11-18 Emc Corporation Distributed storage array system having a plurality of modular control units
US5867640A (en) * 1993-06-01 1999-02-02 Mti Technology Corp. Apparatus and method for improving write-throughput in a redundant array of mass storage devices
US5504858A (en) * 1993-06-29 1996-04-02 Digital Equipment Corporation Method and apparatus for preserving data integrity in a multiple disk raid organized storage system
US6269453B1 (en) 1993-06-29 2001-07-31 Compaq Computer Corporation Method for reorganizing the data on a RAID-4 or RAID-5 array in the absence of one disk
US5390327A (en) * 1993-06-29 1995-02-14 Digital Equipment Corporation Method for on-line reorganization of the data on a RAID-4 or RAID-5 array in the absence of one disk and the on-line restoration of a replacement disk
US5522031A (en) * 1993-06-29 1996-05-28 Digital Equipment Corporation Method and apparatus for the on-line restoration of a disk in a RAID-4 or RAID-5 array with concurrent access by applications
US5581690A (en) * 1993-06-29 1996-12-03 Digital Equipment Corporation Method and apparatus for preventing the use of corrupt data in a multiple disk raid organized storage system
US20030088611A1 (en) * 1994-01-19 2003-05-08 Mti Technology Corporation Systems and methods for dynamic alignment of associated portions of a code word from a plurality of asynchronous sources
DE19540915A1 (en) * 1994-11-10 1996-05-15 Raymond Engineering Redundant arrangement of solid state memory modules
US5488701A (en) * 1994-11-17 1996-01-30 International Business Machines Corporation In log sparing for log structured arrays
US5666114A (en) * 1994-11-22 1997-09-09 International Business Machines Corporation Method and means for managing linear mapped address spaces storing compressed data at the storage subsystem control unit or device level
US5671349A (en) * 1994-12-06 1997-09-23 Hitachi Computer Products America, Inc. Apparatus and method for providing data redundancy and reconstruction for redundant arrays of disk drives
US5537534A (en) * 1995-02-10 1996-07-16 Hewlett-Packard Company Disk array having redundant storage and methods for incrementally generating redundancy as data is written to the disk array
US5848230A (en) 1995-05-25 1998-12-08 Tandem Computers Incorporated Continuously available computer memory systems
US5729763A (en) * 1995-08-15 1998-03-17 Emc Corporation Data storage system
US5875456A (en) * 1995-08-17 1999-02-23 Nstor Corporation Storage device array and methods for striping and unstriping data and for adding and removing disks online to/from a raid storage array
US5657468A (en) * 1995-08-17 1997-08-12 Ambex Technologies, Inc. Method and apparatus for improving performance in a redundant array of independent disks
US5841997A (en) * 1995-09-29 1998-11-24 Emc Corporation Apparatus for effecting port switching of fibre channel loops
US6334195B1 (en) * 1995-12-29 2001-12-25 Lsi Logic Corporation Use of hot spare drives to boost performance during nominal raid operation
US5734861A (en) * 1995-12-12 1998-03-31 International Business Machines Corporation Log-structured disk array with garbage collection regrouping of tracks to preserve seek affinity
US5941994A (en) * 1995-12-22 1999-08-24 Lsi Logic Corporation Technique for sharing hot spare drives among multiple subsystems
JP3140957B2 (en) * 1996-02-16 2001-03-05 インターナショナル・ビジネス・マシーンズ・コーポレーション Disk apparatus and error processing method in disk apparatus
US5799324A (en) * 1996-05-10 1998-08-25 International Business Machines Corporation System and method for management of persistent data in a log-structured disk array
JPH10254631A (en) * 1997-03-14 1998-09-25 Hitachi Ltd Computer system
JPH10254642A (en) * 1997-03-14 1998-09-25 Hitachi Ltd Storage device system
JP3595099B2 (en) * 1997-03-17 2004-12-02 富士通株式会社 Device array system
JP3228182B2 (en) * 1997-05-29 2001-11-12 株式会社日立製作所 Storage system and method for accessing storage system
US6092215A (en) * 1997-09-29 2000-07-18 International Business Machines Corporation System and method for reconstructing data in a storage array system
US6029179A (en) * 1997-12-18 2000-02-22 International Business Machines Corporation Automated read-only volume processing in a virtual tape server
US6061753A (en) * 1998-01-27 2000-05-09 Emc Corporation Apparatus and method of accessing target devices across a bus utilizing initiator identifiers
US6219751B1 (en) 1998-04-28 2001-04-17 International Business Machines Corporation Device level coordination of access operations among multiple raid control units
US6247157B1 (en) * 1998-05-13 2001-06-12 Intel Corporation Method of encoding data signals for storage
US6052799A (en) * 1998-05-15 2000-04-18 International Business Machines Corporation System and method for recovering a directory for a log structured array
US6122754A (en) * 1998-05-22 2000-09-19 International Business Machines Corporation Method and system for data recovery using a distributed and scalable data structure
JP2000003255A (en) * 1998-06-12 2000-01-07 Nec Corp Disk array device
US6317839B1 (en) * 1999-01-19 2001-11-13 International Business Machines Corporation Method of and apparatus for controlling supply of power to a peripheral device in a computer system
US6466540B1 (en) 1999-05-05 2002-10-15 International Business Machines Corporation Self-healing coupler for a serial raid device
US7389466B1 (en) * 1999-08-12 2008-06-17 Texas Instruments Incorporated ECC in computer system with associated mass storage device, and method for operating same
JP2001167040A (en) 1999-12-14 2001-06-22 Hitachi Ltd Memory subsystem and memory control unit
US6684209B1 (en) * 2000-01-14 2004-01-27 Hitachi, Ltd. Security method and system for storage subsystem
US7657727B2 (en) * 2000-01-14 2010-02-02 Hitachi, Ltd. Security for logical unit in storage subsystem
JP4651230B2 (en) 2001-07-13 2011-03-16 株式会社日立製作所 Storage system and access control method to logical unit
JP4719957B2 (en) * 2000-05-24 2011-07-06 株式会社日立製作所 Storage control device, storage system, and storage system security setting method
US6546458B2 (en) * 2000-12-29 2003-04-08 Storage Technology Corporation Method and apparatus for arbitrarily large capacity removable media
US20020178162A1 (en) * 2001-01-29 2002-11-28 Ulrich Thomas R. Integrated distributed file system with variable parity groups
US6990547B2 (en) * 2001-01-29 2006-01-24 Adaptec, Inc. Replacing file system processors by hot swapping
US20020124137A1 (en) * 2001-01-29 2002-09-05 Ulrich Thomas R. Enhancing disk array performance via variable parity based load balancing
US6862692B2 (en) * 2001-01-29 2005-03-01 Adaptec, Inc. Dynamic redistribution of parity groups
US7054927B2 (en) 2001-01-29 2006-05-30 Adaptec, Inc. File system metadata describing server directory information
US6990667B2 (en) 2001-01-29 2006-01-24 Adaptec, Inc. Server-independent object positioning for load balancing drives and servers
US20020138559A1 (en) * 2001-01-29 2002-09-26 Ulrich Thomas R. Dynamically distributed file system
US6957351B2 (en) * 2001-07-03 2005-10-18 International Business Machines Corporation Automated disk drive library with removable media powered via contactless coupling
US7152142B1 (en) 2002-10-25 2006-12-19 Copan Systems, Inc. Method for a workload-adaptive high performance storage system with data protection
DE10313892B4 (en) * 2003-03-27 2007-04-19 Fujitsu Siemens Computers Gmbh Arrangement and method for exchanging mass memories
US7484050B2 (en) * 2003-09-08 2009-01-27 Copan Systems Inc. High-density storage systems using hierarchical interconnect
US7428691B2 (en) * 2003-11-12 2008-09-23 Norman Ken Ouchi Data recovery from multiple failed data blocks and storage units
JP4493321B2 (en) * 2003-11-19 2010-06-30 株式会社日立製作所 Disk array device and data saving method
WO2006052830A2 (en) * 2004-11-05 2006-05-18 Trusted Data Corporation Storage system condition indicator and method
US7873782B2 (en) * 2004-11-05 2011-01-18 Data Robotics, Inc. Filesystem-aware block storage system, apparatus, and method
US7661058B1 (en) 2006-04-17 2010-02-09 Marvell International Ltd. Efficient raid ECC controller for raid systems
JP2008197886A (en) * 2007-02-13 2008-08-28 Nec Corp Storage device and control method therefor
JP5111965B2 (en) 2007-07-24 2013-01-09 株式会社日立製作所 Storage control device and control method thereof
US9244779B2 (en) 2010-09-30 2016-01-26 Commvault Systems, Inc. Data recovery operations, such as recovery from modified network data management protocol data
US9069799B2 (en) 2012-12-27 2015-06-30 Commvault Systems, Inc. Restoration of centralized data storage manager, such as data storage manager in a hierarchical data storage system
US9563509B2 (en) 2014-07-15 2017-02-07 Nimble Storage, Inc. Methods and systems for storing data in a redundant manner on a plurality of storage units of a storage system
US10101913B2 (en) 2015-09-02 2018-10-16 Commvault Systems, Inc. Migrating data to disk without interrupting running backup operations

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4667326A (en) * 1984-12-20 1987-05-19 Advanced Micro Devices, Inc. Method and apparatus for error detection and correction in systems comprising floppy and/or hard disk drives
US4761785A (en) * 1986-06-12 1988-08-02 International Business Machines Corporation Parity spreading to enhance storage access
US4817035A (en) * 1984-03-16 1989-03-28 Cii Honeywell Bull Method of recording in a disk memory and disk memory system
US4825403A (en) * 1983-05-16 1989-04-25 Data General Corporation Apparatus guaranteeing that a controller in a disk drive system receives at least some data from an invalid track sector

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE790654A (en) * 1971-10-28 1973-04-27 Siemens Ag TREATMENT SYSTEM WITH SYSTEM UNITS
US3805039A (en) * 1972-11-30 1974-04-16 Raytheon Co High reliability system employing subelement redundancy
US4467421A (en) * 1979-10-18 1984-08-21 Storage Technology Corporation Virtual storage system and method
US4494155A (en) * 1982-11-08 1985-01-15 Eastman Kodak Company Adaptive redundance in data recording
CA1263194A (en) * 1985-05-08 1989-11-21 W. Daniel Hillis Storage system using multiple mechanically-driven storage units
US4722085A (en) * 1986-02-03 1988-01-26 Unisys Corp. High capacity disk storage system having unusually high fault tolerance level and bandpass
US4870643A (en) * 1987-11-06 1989-09-26 Micropolis Corporation Parallel drive array storage system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4825403A (en) * 1983-05-16 1989-04-25 Data General Corporation Apparatus guaranteeing that a controller in a disk drive system receives at least some data from an invalid track sector
US4817035A (en) * 1984-03-16 1989-03-28 Cii Honeywell Bull Method of recording in a disk memory and disk memory system
US4667326A (en) * 1984-12-20 1987-05-19 Advanced Micro Devices, Inc. Method and apparatus for error detection and correction in systems comprising floppy and/or hard disk drives
US4761785A (en) * 1986-06-12 1988-08-02 International Business Machines Corporation Parity spreading to enhance storage access
US4761785B1 (en) * 1986-06-12 1996-03-12 Ibm Parity spreading to enhance storage access

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0422030A4 *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0515499A1 (en) * 1990-02-13 1992-12-02 Storage Technology Corporation Disk drive memory
EP0515499A4 (en) * 1990-02-13 1994-02-16 Storage Technology Corporation
US5758054A (en) * 1990-03-02 1998-05-26 Emc Corporation Non-volatile memory storage of write operation identifier in data storage device
US5475697A (en) * 1990-03-02 1995-12-12 Mti Technology Corporation Non-volatile memory storage of write operation identifier in data storage device
US5469453A (en) * 1990-03-02 1995-11-21 Mti Technology Corporation Data corrections applicable to redundant arrays of independent disks
WO1991013404A1 (en) * 1990-03-02 1991-09-05 Sf2 Corporation Data storage apparatus and method
WO1991014982A1 (en) * 1990-03-29 1991-10-03 Sf2 Corporation Methods and apparatus for assigning signatures to members of a set of mass storage devices
US5325497A (en) * 1990-03-29 1994-06-28 Micro Technology, Inc. Method and apparatus for assigning signatures to identify members of a set of mass storage devices
EP0450801A3 (en) * 1990-03-30 1994-02-16 Ibm
EP0450801A2 (en) * 1990-03-30 1991-10-09 International Business Machines Corporation Disk drive array storage system
AU654482B2 (en) * 1990-04-16 1994-11-10 Storage Technology Corporation A disk memory system
WO1991016711A1 (en) * 1990-04-16 1991-10-31 Storage Technology Corporation Logical track write scheduling system for a parallel disk drive array data storage subsystem
EP0467079A3 (en) * 1990-06-19 1994-02-09 Fujitsu Ltd
US5721861A (en) * 1990-06-19 1998-02-24 Fujitsu Limited Array disc memory equipment capable of confirming logical address positions for disc drive modules installed therein
EP0467079A2 (en) * 1990-06-19 1992-01-22 Fujitsu Limited Disc array storage system
EP0482819A2 (en) * 1990-10-23 1992-04-29 Emc Corporation On-line reconstruction of a failed redundant array system
EP0482819A3 (en) * 1990-10-23 1993-01-13 Array Technology Corporation On-line reconstruction of a failed redundant array system
EP0485110A2 (en) * 1990-11-09 1992-05-13 Emc Corporation Logical partitioning of a redundant array storage system
EP0485110A3 (en) * 1990-11-09 1993-01-13 Array Technology Corporation Logical partitioning of a redundant array storage system
US6757839B2 (en) 1991-01-31 2004-06-29 Hitachi, Ltd. Storage unit subsystem
US6874101B2 (en) 1991-01-31 2005-03-29 Hitachi, Ltd. Storage unit subsystem
US7320089B2 (en) 1991-01-31 2008-01-15 Hitachi, Ltd. Storage unit subsystem
US6532549B2 (en) 1991-01-31 2003-03-11 Hitachi, Ltd. Storage unit subsystem
US6327673B1 (en) 1991-01-31 2001-12-04 Hitachi, Ltd. Storage unit subsystem
EP0508441A3 (en) * 1991-04-11 1995-05-03 Mitsubishi Electric Corp
US5655150A (en) * 1991-04-11 1997-08-05 Mitsubishi Denki Kabushiki Kaisha Recording device having alternative recording units operated in three different conditions depending on activities in maintenance diagnosis mechanism and recording sections
EP0508441A2 (en) * 1991-04-11 1992-10-14 Mitsubishi Denki Kabushiki Kaisha Recording device having short data writing time
GB2270791B (en) * 1992-09-21 1996-07-17 Grass Valley Group Disk-based digital video recorder
GB2270791A (en) * 1992-09-21 1994-03-23 Grass Valley Group Video disk storage array
US5721950A (en) * 1992-11-17 1998-02-24 Starlight Networks Method for scheduling I/O transactions for video data storage unit to maintain continuity of number of video streams which is limited by number of I/O transactions
US5734925A (en) * 1992-11-17 1998-03-31 Starlight Networks Method for scheduling I/O transactions in a data storage system to maintain the continuity of a plurality of video streams
US5754882A (en) * 1992-11-17 1998-05-19 Starlight Networks Method for scheduling I/O transactions for a data storage system to maintain continuity of a plurality of full motion video streams
US5915081A (en) * 1993-05-21 1999-06-22 Mitsubishi Denki Kabushiki Kaisha Arrayed recording apparatus with selectably connectable spare disks
GB2278228A (en) * 1993-05-21 1994-11-23 Mitsubishi Electric Corp An arrayed recording apparatus
US5732239A (en) * 1994-05-19 1998-03-24 Starlight Networks Method for operating a disk storage system which stores video data so as to maintain the continuity of a plurality of video streams
EP0701198A1 (en) * 1994-05-19 1996-03-13 Starlight Networks, Inc. Method for operating an array of storage units
US5802394A (en) * 1994-06-06 1998-09-01 Starlight Networks, Inc. Method for accessing one or more streams in a video storage system using multiple queues and maintaining continuity thereof

Also Published As

Publication number Publication date
AU647931B2 (en) 1994-03-31
AU1802592A (en) 1992-08-27
JP2831072B2 (en) 1998-12-02
ATE113739T1 (en) 1994-11-15
AU647280B2 (en) 1994-03-17
EP0422030A4 (en) 1991-02-18
EP0422030A1 (en) 1991-04-17
CA1322409C (en) 1993-09-21
DE68919219D1 (en) 1994-12-08
EP0422030B1 (en) 1994-11-02
US4914656A (en) 1990-04-03
AU1809392A (en) 1992-07-30
AU621126B2 (en) 1992-03-05
AU3698489A (en) 1990-01-23
DE68919219T2 (en) 1995-05-11
JPH03505935A (en) 1991-12-19

Similar Documents

Publication Publication Date Title
US4989206A (en) Disk drive memory
US4989205A (en) Disk drive memory
US4914656A (en) Disk drive memory
AU645064B2 (en) Disk drive memory
US5566316A (en) Method and apparatus for hierarchical management of data storage elements in an array storage device
US5124987A (en) Logical track write scheduling system for a parallel disk drive array data storage subsystem
US5146588A (en) Redundancy accumulator for disk drive array memory
US6154853A (en) Method and apparatus for dynamic sparing in a RAID storage system
US5430855A (en) Disk drive array memory system using nonuniform disk drives
US5412661A (en) Two-dimensional disk array
US5210866A (en) Incremental disk backup system for a dynamically mapped data storage subsystem
US5007053A (en) Method and apparatus for checksum address generation in a fail-safe modular memory
EP0521630A2 (en) DASD array hierarchies
KR19990051729A (en) Structure of Raid System with Dual Array Controllers
GB2298308A (en) A disk storage array with a spiralling distribution of redundancy data

Legal Events

Date Code Title Description
AK Designated states; Kind code of ref document: A1; Designated state(s): AU JP
AL Designated countries for regional patents; Kind code of ref document: A1; Designated state(s): AT BE CH DE FR GB IT LU NL SE
WWE Wipo information: entry into national phase; Ref document number: 1989906506; Country of ref document: EP
WWP Wipo information: published in national office; Ref document number: 1989906506; Country of ref document: EP
WWG Wipo information: grant in national office; Ref document number: 1989906506; Country of ref document: EP