|Publication number||US20020144057 A1|
|Publication type||Application|
|Application number||US 10/061,081|
|Publication date||Oct. 3, 2002|
|Filing date||Jan. 29, 2002|
|Priority date||Jan. 30, 2001|
|Also published as||US7007141, WO2003065360A1|
|Inventors||Kai Li, Howard Lee|
|Original Assignee||Data Domain|
 This application claims priority of U.S. provisional patent application No. 60/265,180, filed Jan. 30, 2001 and entitled “System Architecture and Methods of Building Low-Power, Dynamically Reconfigurable, And Reliable Online Archival System,” which is hereby incorporated by reference for all purposes.
 The present invention relates generally to data storage, and more specifically, to an online archival disk-based data storage system with algorithms for reducing power consumption, improving disk longevity and reliability, and maintaining data integrity.
 With the increasing popularity of Internet commerce and network centric computing, businesses and other entities are becoming more and more reliant on information. Protecting critical data from loss due to human errors, software errors, system crashes, virus attack and the like is therefore of primary importance. Data archival systems are typically used in information systems to restore information in the event of a failure or error. Tape drives and/or write-able CD drives have historically been the storage medium of choice for data archival systems. Magnetic disk based archival storage systems have generally not been considered for long term storage because the lifetime of disks is relatively short and their power consumption is high compared to magnetic tape or write-able CDs.
 Magnetic disks are typically used as primary storage for information infrastructures and as storage drives in personal computers, laptop computers, servers, and the like. A number of power saving techniques have been proposed for laptop computers. Software controlled power saving modes have been used to control power consumption during periods of inactivity. Adaptive algorithms have also been proposed that analyze access patterns to determine when to spin disks up or down to reduce power consumption. Such algorithms, however, usually focus on reducing the power consumption of laptop computers, whose disks are specifically designed to spin up and spin down more times than required during the typical life expectancy of a laptop computer. Disks for desktops or servers, by contrast, are usually engineered to handle only a limited number of starts and stops. Applying the same power conservation methods used with laptop computers to disk-based archival systems would therefore shorten disk lifetime. Furthermore, these power saving techniques do not address the problem of checking or maintaining the integrity of data stored on disks for extended periods of time.
 An archival disk-based data storage system that reduces power consumption, improves disk longevity and reliability, and maintains data integrity for extended periods of time is therefore needed.
 To achieve the foregoing, and in accordance with the purpose of the present invention, a disk-based archival storage system is disclosed. The system according to one embodiment includes a storage unit configured to store archival data, the storage unit including at least one spindle of disks configured to magnetically store archival data, an interconnect, and a control unit configured to process requests over the interconnect to either archive or retrieve data from the storage unit. In one embodiment, the system includes a plurality of the storage units, each including at least one spindle of disks. The control unit controls the storage unit(s) in a master-slave relationship. Specifically, the control unit is capable of issuing commands to selectively cause the storage unit(s) to shut down or power up, enter a running mode or a standby mode, cause the spindle(s) of disks to either spin up or spin down, and perform a data integrity check of all the archival data stored in the storage system. In various other embodiments, the control unit runs algorithms that extend the lifetime of the disk spindles, optimize power consumption, and perform data migration in the event a data integrity check identifies correctable errors. Hence, for the first time, the present invention provides a disk-based storage system that can practically be used for data archival purposes.
 The invention, together with further advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a diagram of an exemplary information infrastructure in which the archival disk-based data storage system (hereafter storage system) of the present invention may be used.
FIG. 2 is a system diagram of the storage system of the present invention.
FIG. 3 is a system diagram of a storage unit provided in the storage system of the present invention.
FIG. 4 is a system diagram of a power controller provided in the storage system of the present invention.
FIG. 5a is a flow diagram illustrating how the control unit of the archival disk-based data storage system manages the storage units with a competitive algorithm to process requests according to the present invention.
FIG. 5b is a flow diagram illustrating how the control unit of the storage system manages the storage units with a competitive algorithm to optimize disk lifetime and power consumption according to the present invention.
FIG. 6a is a flow diagram illustrating how the control unit of the storage system manages the storage units with an adaptive competitive algorithm to process requests according to the present invention.
FIG. 6b is a flow diagram illustrating how the control unit of the storage system manages the storage units with an adaptive competitive algorithm to optimize disk lifetime and power consumption according to the present invention.
FIG. 7 is a flow diagram illustrating how the control unit of the storage system of the present invention performs data integrity checking and migration.
 Referring to FIG. 1, a diagram of an exemplary information infrastructure in which the archival disk-based data storage system of the present invention may be used is shown. The information infrastructure 10 includes a plurality of clients 12 and a server cluster 14 including one or more servers coupled together by a network 16, a primary storage location 18, the archival disk-based data storage system (hereafter “storage system”) 20, and a network connection 19 coupling the primary storage location 18 and the storage system 20. The clients 12 can be any type of client such as but not limited to a personal computer, a “thin” client, a personal digital assistant, a web enabled appliance, or a web enabled cell phone. The server(s) of server cluster 14 may include any type of server(s) configured as either a file server, a database server, or a combination thereof. Likewise, the network 16 can be any type of network. The primary storage location may be configured in any number of different arrangements, such as a storage array network, network attached storage, or a combination thereof. The primary storage location 18 may be either separate or part of the server cluster 14. The network connection 19 can be any type of network connection, such as fiber channel, Ethernet, or SCSI.
 Referring to FIG. 2, a system diagram of the storage system 20 is shown. The storage system 20 includes a control unit 22, an interconnect 24, a plurality of storage units (SUs) 26, and a power controller 28. The control unit 22 is a standard computer such as a personal computer that interfaces with primary storage location 18 over network 19. The control unit 22 also operates as a master with respect to the storage units 26 and sends tasks to the storage units 26, receives results from the storage units 26, and controls the working modes of storage units 26. The interconnect 24 can be either a custom-designed interconnect or a standard local area network capable of transmitting special commands or packets to the storage units 26.
 Referring to FIG. 3, a system diagram of a storage unit 26 is shown. Each storage unit 26 includes a controller 30 and one or more spindles of magnetic disks 32. The storage units 26 are slaves with respect to the control unit 22. In response to commands from the control unit 22 over the interconnect 24, the controller 30 executes software that directs the storage unit 26 to shut down or power up, change its mode between running and standby (sleep mode), and either spin up or spin down some or all of the magnetic disks 32. The control unit 22 also commands the controller 30 to periodically perform data integrity checks of the data stored on its disks 32. According to various embodiments of the invention, the magnetic disks 32 may assume a number of different configurations, such as a Redundant Array of Independent Disks (RAID) or individual disks in either a logical or physical arrangement.
 Referring to FIG. 4, a system diagram of the power controller 28 is shown. The power controller includes a power input 40 for receiving power, a command input 42 for receiving an on/off command from the control unit 22, an Input ID 44 for receiving an identity number input corresponding to one of the storage units 26, and a number of power outputs 46 coupled to the storage units 26 respectively. In response to an on/off command and an identity number received from the control unit 22 at inputs 42 and 44, the power controller 28 can selectively provide power from input 40 to the storage units 26 through power outputs 46 respectively.
 The control unit 22 is responsible for moving archived and retrieved data between the primary storage location 18 and the storage units 26. The control unit 22 maintains a directory of all the archived data stored in the storage system 20. The directory includes a map of the data blocks for each of the storage units 26 in the system 20. Each time data is either archived or retrieved, the accessed data block(s) and storage unit(s) 26 are updated in the directory. The control unit 22 also includes management software that controls the physical operation of the storage units 26 and the power controller 28. For example, the control unit 22, under the direction of the management software, issues commands to determine which storage units 26 should be used, how long each storage unit 26 should run, and when a storage unit 26 should do a data integrity check. Power on/off commands along with an identity number are sent to the inputs 42 and 44 of power controller 28. Commands and/or packets are sent over the interconnect 24 by the control unit 22 to instruct an individual storage unit 26 to perform the requested task. In response, the controller 30 of the individual storage unit 26 executes software to perform the task.
 An objective of the management software in control unit 22 is to maximize the lifetime of the storage units 26 and minimize their power consumption while providing a desirable response time. Keeping the storage units 26 running all the time provides the best response time, but consumes the maximum amount of power and shortens the lifetime of disks 32. Simply turning off the storage units 26 immediately after each request and turning them on for each request is also a poor solution in terms of response time, lifetime of disks 32, and power consumption. This scenario provides the worst response time because the storage units 26 are turned off as soon as the current archival or retrieval job is complete. The lifetime of the disks 32 is shortened because most disks, other than those used for laptops, are engineered to handle only a limited number of starts and stops (typically less than 50,000). Power consumption is not necessarily reduced because it takes much more power to spin up a disk than to perform normal operations. A strategy that truly optimizes disk lifetime, minimizes power consumption, and provides desirable response times would therefore require advance knowledge of request arrival times. Since it is impossible to know when future requests are going to occur, the best one can do is to derive an optimal offline strategy after the fact.
 The present invention uses a competitive algorithm implemented in the management software on the control unit 22. Using this algorithm guarantees performance within a factor of two of the optimal offline case. H is the amount of time a storage unit 26 keeps running while waiting for another request before powering off or entering standby. In other words, H is set to the duration of time for which the life cost and power cost of an idle spinning disk approximately equal the life cost and power cost of a disk spin up and spin down cycle. Equation (1), given after the following parameter definitions, can therefore be used to define the value of H:
 C_SU: the cost of the storage unit
 C_W: the cost per watt
 L: the spin lifetime
 N: the total number of start-and-stops
 T_Up: the time taken to spin up
 W_RW: the number of watts consumed for read or write operations, and
 W_Up: the number of watts consumed for a spin up.
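 A form consistent with these definitions, offered as an illustrative reconstruction of equation (1) rather than a verbatim reproduction, balances the life cost and power cost of one spin up and spin down cycle against the life cost and power cost of keeping an idle disk spinning for H seconds:

\[
H \;=\; \frac{\dfrac{C_{SU}}{N} + C_W \, W_{Up} \, T_{Up}}{\dfrac{C_{SU}}{L} + C_W \, W_{RW}} \qquad (1)
\]

 The numerator approximates the cost of one start-and-stop cycle (amortized wear on the start-and-stop budget plus the energy used to spin up), and the denominator approximates the cost per second of spinning idle (amortized wear on the spin lifetime plus the power drawn while running), so their ratio is the idle time whose cost matches one cycle.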
 Among these parameters, L and N are variable parameters that are initialized to the spin lifetime and start-and-stop limit as defined by the disk manufacturer. These values will decrease over time as the disks consume their spin lifetime and start-and-stop limits.
 As noted, an objective of the disk-based archival storage system 20 is to extend the lifetime of its disks. Each disk typically has a practical spin lifetime of three to five years. The error rate of a disk typically starts to increase significantly when the actual run time exceeds the spin lifetime of the disk. An important consideration therefore is to keep track of the remaining spin lifetime of a disk or a set of disks and to use this information to determine when to spin a disk down to extend its lifetime. A simple algorithm to extend disk lifetime is to spin down the disk as soon as a request is complete. Such an algorithm preserves the remaining spin lifetime, but typically provides an unacceptable response time for the next request. An improved algorithm that would generally provide better response times is to keep the disk spinning for a small amount of time after each request. Since requests often have temporal locality, this algorithm improves response times at the expense of spin lifetime. Furthermore, when a disk exceeds its start-and-stop limit, its error rate will typically increase significantly. Disks for desktops or servers usually have a limit of less than 50,000 start-and-stop cycles. To extend disk lifetime, the start-and-stop limit of a disk should therefore also be considered.
 As is described in detail below, the present invention provides an algorithm that yields excellent response times while also conserving both the remaining run time and the remaining start-and-stop budget of the disks. With the algorithm of the present invention, a disk is kept spinning after each request for an amount of time whose cost is equal to that of one start-and-stop cycle. Since the remaining spin lifetime and the remaining start-and-stop limit change over time, this spin time needs to be recalculated after the completion of each request. In addition to extending lifetime, the algorithms of the present invention have the added benefit of reducing power consumption within an archival storage system 20.
 Referring to FIG. 5a, a flow diagram 100 illustrating how the control unit 22 manages the storage units 26 with a competitive algorithm to process requests according to one embodiment of the invention is shown. For each storage unit (SU) 26, the control unit 22 maintains several parameters, including the current threshold value H, the remaining spin lifetime L, the remaining number of start-and-stops N, and the time stamp of the last request T (step 102). When the control unit 22 receives either an archival or retrieval request (step 104), it first allocates a storage unit 26 for an archival request or finds the appropriate storage unit 26 for a retrieval request using the directory of all the archived data stored in the storage system 20 (step 106). Thereafter the control unit 22 determines if the storage unit 26 is on (diamond 108). If the storage unit 26 is off or in standby mode, the control unit 22 issues commands to either power on or wake up the storage unit 26 (step 110). When the storage unit 26 is ready, the request is sent (step 112) to that storage unit 26. If the storage unit 26 is already on (diamond 108), the request is sent immediately to that storage unit 26 (step 112). After the request is processed by the storage unit 26, the values of SU.L and SU.T are updated. SU.L, the remaining spin lifetime, is calculated from the equation SU.L = SU.L − (Time() − SU.T), where SU.L on the right-hand side is the previous spin lifetime value and Time() − SU.T is the elapsed time since the previous request. SU.T is then set to the time stamp of the current request. When another request occurs, control is returned back to step 104.
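 The per-request bookkeeping of flow diagram 100 can be illustrated with a short sketch. This is not the patent's implementation; the StorageUnit fields and the power_on_or_wake and send_request helpers are hypothetical stand-ins for the commands the control unit 22 issues over the interconnect 24 and to the power controller 28.

```python
import time
from dataclasses import dataclass


@dataclass
class StorageUnit:
    """Hypothetical per-unit state kept by the control unit 22 (step 102)."""
    h: float        # current threshold SU.H, in seconds
    l: float        # remaining spin lifetime SU.L, in seconds
    n: int          # remaining start-and-stops SU.N
    t: float        # SU.T, time stamp of the last request
    running: bool   # True if spinning, False if off or in standby


def handle_request(su: StorageUnit, request: bytes) -> None:
    """Process one archival or retrieval request on the allocated unit (steps 104-112)."""
    if not su.running:              # diamond 108: unit is off or in standby
        power_on_or_wake(su)        # step 110
        su.running = True
    send_request(su, request)       # step 112
    now = time.time()
    su.l -= now - su.t              # SU.L = SU.L - (Time() - SU.T)
    su.t = now                      # SU.T = time stamp of the current request


# Hypothetical stand-ins for commands sent over the interconnect 24.
def power_on_or_wake(su: StorageUnit) -> None: ...
def send_request(su: StorageUnit, request: bytes) -> None: ...
```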
 Referring to FIG. 5b, a flow diagram 200 illustrating how the control unit 22 manages the storage units 26 with a constant competitive algorithm to optimize disk lifetime and power consumption according to one embodiment of the invention is shown. The control unit 22 checks the status of all the running storage units 26 every k seconds (step 202). During this check, the control unit 22 sequences through the storage units 26, one at a time, and identifies which are running (step 204). For each running storage unit 26, the control unit 22 computes an individual threshold SU.H using equation (1) as defined above (step 206). The control unit 22 then checks to determine if the threshold SU.H for each running storage unit 26 is greater than the elapsed time since the previous request, Time() − SU.T (step 208). If yes, control is returned to step 204. If the elapsed time has exceeded the threshold SU.H, the control unit 22 turns off that storage unit 26 or issues a command to place it in standby mode. The values for SU.L and SU.N are also updated (step 210). The remaining spin lifetime SU.L is calculated as described above. The remaining number of start-and-stops SU.N is calculated by decrementing the previous value of SU.N by one. Finally, in decision diamond 212, it is determined whether the remaining lifetime SU.L or the remaining number of start-and-stops SU.N has become too small, as determined by the manufacturer of the disks 32. If no, control is returned to step 204. If either value is too small, a warning is generated (step 214) indicating that the storage unit 26, or at least its disks 32, should be replaced. After all the storage units have been checked, control is returned to step 202 and k seconds elapse before the above steps are repeated.
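 Continuing the sketch above, the periodic check of flow diagram 200 might look as follows. The numeric defaults, the manufacturer limits, and the power_off_or_standby and warn_replace helpers are hypothetical placeholders; compute_threshold uses the reconstructed form of equation (1) given earlier.

```python
# Hypothetical manufacturer limits below which a warning is raised (diamond 212).
MIN_SPIN_LIFETIME = 30 * 24 * 3600   # seconds
MIN_START_STOPS = 500


def compute_threshold(su: StorageUnit,
                      c_su: float = 300.0,   # cost of the storage unit (hypothetical)
                      c_w: float = 0.10,     # cost per watt (hypothetical)
                      t_up: float = 10.0,    # seconds to spin up (hypothetical)
                      w_rw: float = 8.0,     # watts while reading/writing (hypothetical)
                      w_up: float = 24.0) -> float:   # watts during spin up (hypothetical)
    """SU.H per the reconstructed equation (1), using the unit's remaining L and N."""
    return (c_su / su.n + c_w * w_up * t_up) / (c_su / su.l + c_w * w_rw)


def idle_check(units: list[StorageUnit]) -> None:
    """One pass of the periodic check in flow diagram 200, run every k seconds (step 202)."""
    now = time.time()
    for su in units:                        # step 204
        if not su.running:
            continue
        su.h = compute_threshold(su)        # step 206
        if now - su.t <= su.h:              # step 208: still within the idle threshold
            continue
        power_off_or_standby(su)            # step 210
        su.running = False
        su.l -= now - su.t                  # spin lifetime consumed while idling
        su.t = now
        su.n -= 1                           # one start-and-stop cycle used
        if su.l < MIN_SPIN_LIFETIME or su.n < MIN_START_STOPS:   # diamond 212
            warn_replace(su)                # step 214


# Hypothetical stand-ins for the power/standby command and the replacement warning.
def power_off_or_standby(su: StorageUnit) -> None: ...
def warn_replace(su: StorageUnit) -> None: ...
```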
 Referring to FIG. 6a, a flow diagram 300 illustrating how the control unit 22 may manage the storage units 26 with an adaptive competitive algorithm to process requests according to another embodiment of the present invention is shown. With this embodiment, an adaptive algorithm is used that dynamically adjusts the value of H for each storage unit 26 based on the frequency and timing of requests. The adaptive algorithm is based on the assumption that there is a high probability that the wait time for the next request will exceed the time equivalent of a spin up and down cycle if the previous wait time for a request also exceeded the spin up and down cycle time. In situations where request arrivals tend to have temporal locality, this algorithm will achieve better results than the previous competitive algorithm.
 The flow chart 300 is similar to flow chart 100 of FIG. 5a. Steps 302-308 are identical to steps 102-108 of FIG. 5a respectively and therefore are not described in detail herein. The main difference between the two flow charts 100 and 300 involves the use of a low threshold Hmin and a high threshold Hmax for each storage unit 26. These values are initialized in step 302 so that Hmax = SU.H and Hmin = Hmax/10. At decision diamond 308, if the storage unit 26 to be accessed (in response to an archival or retrieval request) is off, then the current value of SU.H for that storage unit 26 is compared to Hmin (step 310). If the current value of SU.H is greater than Hmin, then the current value is decremented (step 312) before the storage unit 26 is turned on or woken up (step 314). If the current value of SU.H is less than Hmin, then the current value is not decremented and the storage unit 26 is turned on or woken up (step 314). Thereafter the request is sent to the storage unit 26 (step 316). On the other hand, if the storage unit 26 is on, then the current value of SU.H is compared to Hmax (step 318). If the current value is less than Hmax, the current value is incremented (step 320) and then the request is sent to the storage unit 26 (step 316). Otherwise the request is sent directly to the storage unit 26 (step 316). After the request is received by the storage unit 26, the values of SU.L and SU.T are updated in a similar manner as described above. SU.H is adjusted only between Hmax and Hmin in order to guarantee that the performance remains within a factor of two of the optimal offline case.
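 Again continuing the sketch, the adaptive variant might be expressed as below. The step size H_STEP is a hypothetical parameter, since the text specifies only that SU.H is decremented and incremented between Hmin and Hmax.

```python
H_STEP = 1.0   # hypothetical adjustment granularity for SU.H, in seconds


def handle_request_adaptive(su: StorageUnit, request: bytes,
                            h_min: float, h_max: float) -> None:
    """Adaptive request handler (flow diagram 300).

    SU.H shrinks toward Hmin when the unit is found off (the wait exceeded the
    threshold) and grows toward Hmax when the unit is found still running."""
    if not su.running:                          # diamond 308
        if su.h > h_min:                        # step 310
            su.h = max(h_min, su.h - H_STEP)    # step 312
        power_on_or_wake(su)                    # step 314
        su.running = True
    elif su.h < h_max:                          # step 318
        su.h = min(h_max, su.h + H_STEP)        # step 320
    send_request(su, request)                   # step 316
    now = time.time()
    su.l -= now - su.t                          # update SU.L and SU.T as before
    su.t = now
```

 For example, with Hmax initialized from equation (1) and Hmin = Hmax/10 (step 302), a burst of closely spaced requests drives SU.H toward Hmax, while long quiet periods drive it toward Hmin.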
FIG. 6b is a flow diagram 400 illustrating how the control unit 22 of the archival disk-based data storage system manages the storage units 26 with an adaptive competitive algorithm to optimize disk lifetime and power consumption according to the present invention. FIG. 6b is identical to FIG. 5b except that in step 406, Hmax and Hmin are recomputed, so that the value of SU.H remains within the limits of these two thresholds. Otherwise, steps 408-414 are identical to steps 208-214 of FIG. 5b.
 The present invention thus describes several approaches to extend the lifetime of disk storage in a storage unit 26. The first approach keeps track of, and uses, the remaining spin life of a storage unit 26 to determine when to spin it up and down to extend the lifetime of the disk(s) in the storage unit 26. The second approach uses both the remaining spin life and the remaining start-and-stop limit of a storage unit 26 to determine when to spin it up and down. The third approach uses life cost and power cost as a combined measure of spin life, start-and-stop limit, and power consumption to determine when to spin the storage unit 26 up and down, improving both the lifetime and the power consumption of the storage unit 26. This application describes two algorithms using the third approach: a competitive algorithm and an adaptive competitive algorithm. Both algorithms have the property that their results are within a factor of two of the optimal offline case.
 The storage system 20 ideally needs to maintain the integrity of its data for a long period of time. This is challenging for two reasons. First, disks 32 often have errors that the hardware cannot detect; the error rate of current disk drive technology is typically 1 in 10^13 or 10^14. With RAID, for example, only detectable errors can be corrected. Second, detectable errors can be detected only when the data is accessed. Thus, there may be intervening catastrophic disk failures that cannot be corrected even if they are detectable.
 To detect errors that the hardware cannot detect, the controller 30 of each storage unit 26 uses an algorithm to compute and store an error correction code (ECC) for each data block stored on its disks 32. When the data block is later accessed, the storage unit 26 re-computes the ECC and compares it with the code stored with the data. If they are identical, it is assumed there are no errors. On the other hand, if they are not identical, the controller re-computes the ECC value yet again. If the ECC values are still different, the storage unit 26 invokes correction code to correct the error and the data is stored in a new location. Whenever data is migrated (or scrubbed) to a new location, the directory of all the archived data stored in the storage system 20, maintained by the control unit 22, is updated.
 Referring to FIG. 7, a flow diagram 500 illustrating how the control unit 22 performs data integrity checking and migration according to the present invention is shown. The data integrity check processes one object at a time (step 502). To check data integrity efficiently, the algorithm sorts the object's data blocks by location (step 504) and then checks one data block at a time (step 506). For each block, integrity errors are identified by calculating the ECC code (step 508). If there is no error, the data block is rewritten to the same location (step 520). If there are errors, the algorithm checks whether the errors are correctable (step 510). If the errors are not correctable, it logs the errors and goes on to check the next block (step 522). For correctable errors, it tries to find a new location for data scrubbing (step 512). If a new location is available on the same storage unit 26, the data is scrubbed and the directory is updated. If a new location cannot be found on the same storage unit 26, the storage unit 26 informs the control unit 22 that this object needs to be migrated to another storage unit 26 (step 524). If a new location is then found on another storage unit 26, the data is migrated to that storage unit 26 and the directory in the control unit 22 is updated before the next block is checked (step 514). When the data integrity check process completes, the control unit 22 is notified of the completion (step 516) and then shuts down the storage unit 26 or puts the unit into standby mode (step 518).
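 A compact sketch of the per-object check in flow diagram 500 follows. CRC-32 stands in for the ECC computed by the controller 30, and the correctable and find_new_location helpers are hypothetical placeholders for the storage unit's correction and allocation logic.

```python
import zlib


def check_object(blocks: list[dict]) -> list[tuple]:
    """Check one object's blocks (steps 502-514); each block is a dict with
    'location', 'data' (bytes), and 'ecc' keys. Returns the action chosen per block."""
    actions = []
    for block in sorted(blocks, key=lambda b: b["location"]):          # step 504
        if zlib.crc32(block["data"]) == block["ecc"]:                   # steps 506-508
            actions.append((block["location"], "rewrite in place"))     # step 520
        elif not correctable(block):                                     # step 510
            actions.append((block["location"], "log error"))             # step 522
        elif (new_loc := find_new_location(block)) is not None:          # step 512
            actions.append((block["location"], f"scrub to {new_loc}"))   # step 514
        else:
            actions.append((block["location"], "migrate via control unit"))  # step 524
    return actions


# Hypothetical stand-ins for the storage unit's correction and allocation logic.
def correctable(block: dict) -> bool: ...
def find_new_location(block: dict) -> int | None: ...
```

 In practice, the scrub and migrate actions would also trigger the directory update in the control unit 22 described above.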
 According to one embodiment, the control unit 22 schedules each storage unit 26 to perform a data integrity check of its data once every time period P. Since data integrity checks consume the spin lifetime and power of disks 32, P should be chosen based on a desired percentage p of the total spin lifetime and the number of start-and-stops. Accordingly, P may be set based on the following equation:
 where S is the size of the storage unit and BW is the bandwidth of checking data integrity.
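 Each integrity check keeps the disks spinning for roughly S/BW seconds and consumes one start-and-stop cycle, so one illustrative form consistent with these definitions (a reconstruction, not necessarily the exact expression of the original filing) is:

\[
P \;\ge\; \frac{1}{p}\cdot\frac{S}{BW}
\]

 which limits the fraction of time the disks spend spinning for integrity checks to at most p; an analogous lower bound on P follows from reserving at most the fraction p of the start-and-stop budget N for integrity checks.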
 Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. For instance, the storage system 20 can be designed without a power controller 28. In such embodiments, the control unit 22 would not be capable of powering off the storage units 26; power would be conserved only by placing the storage units into standby mode. Typically the decision to either power off a disk or place it into standby mode is a trade-off between power consumption and response time. If power consumption is more important than response time, the disks 32 should be powered off. If response time is more important, then the disks should be placed into standby mode. The controller 30 can be a computer used to control the storage unit 26. Therefore, the described embodiments should be taken as illustrative and not restrictive, and the invention should not be limited to the details given herein but should be defined by the following claims and their full scope of equivalents.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US3689891 *||2 nov. 1970||5 sept. 1972||Texas Instruments Inc||Memory system|
|US4084231 *||18 déc. 1975||11 avr. 1978||International Business Machines Corporation||System for facilitating the copying back of data in disc and tape units of a memory hierarchial system|
|US4145739 *||20 juin 1977||20 mars 1979||Wang Laboratories, Inc.||Distributed data processing system|
|US4532802 *||31 mai 1984||6 août 1985||International Business Machines Corporation||Apparatus for analyzing the interface between a recording disk and a read-write head|
|US4980896 *||20 nov. 1989||25 déc. 1990||Hampshire Instruments, Inc.||X-ray lithography system|
|US4987502 *||6 oct. 1986||22 janv. 1991||Eastman Kodak Company||Anti-wear disk drive system|
|US4993785 *||19 janv. 1990||19 févr. 1991||Boitabloc, Societe Anonyme||Writing support|
|US5124987 *||16 avr. 1990||23 juin 1992||Storage Technology Corporation||Logical track write scheduling system for a parallel disk drive array data storage subsystem|
|US5134602 *||27 sept. 1990||28 juil. 1992||International Business Machines Corporation||Calibrating optical disk recorders to some parameters during disk spin up while deferring calibration of other parameters|
|US5155835 *||19 nov. 1990||13 oct. 1992||Storage Technology Corporation||Multilevel, hierarchical, dynamically mapped data storage subsystem|
|US5197055 *||21 mai 1990||23 mars 1993||International Business Machines Corporation||Idle demount in an automated storage library|
|US5210866 *||12 sept. 1990||11 mai 1993||Storage Technology Corporation||Incremental disk backup system for a dynamically mapped data storage subsystem|
|US5345347 *||18 févr. 1992||6 sept. 1994||Western Digital Corporation||Disk drive with reduced power modes|
|US5402200 *||17 sept. 1993||28 mars 1995||Conner Peripherals, Inc.||Low-power hard disk drive system architecture|
|US5423046 *||17 déc. 1992||6 juin 1995||International Business Machines Corporation||High capacity data storage system using disk array|
|US5442608 *||4 mars 1994||15 août 1995||Mitsubishi Electric Corp||Disk apparatus having a power consumption reducing mechanism|
|US5452277 *||30 déc. 1993||19 sept. 1995||International Business Machines Corporation||Adaptive system for optimizing disk drive power consumption|
|US5469533 *||10 juil. 1992||21 nov. 1995||Microsoft Corporation||Resource-oriented printer system and method of operation|
|US5481733 *||15 juin 1994||2 janv. 1996||Panasonic Technologies, Inc.||Method for managing the power distributed to a disk drive in a laptop computer|
|US5493670 *||1 déc. 1994||20 févr. 1996||Panasonic Technologies, Inc.||Adaptive disk spin-down method for managing the power distributed to a disk drive in a laptop computer|
|US5517649 *||19 avr. 1994||14 mai 1996||Maxtor Corporation||Adaptive power management for hard disk drives|
|US5682273 *||22 sept. 1995||28 oct. 1997||International Business Machines Corporation||Disk drive for portable computer with adaptive demand-driven power management|
|US5745458 *||18 oct. 1996||28 avr. 1998||Hewlett-Packard Company||Overlapped spin-up process for optical disk drive|
|US5774292 *||13 avr. 1995||30 juin 1998||International Business Machines Corporation||Disk drive power management system and method|
|US5784610 *||21 nov. 1994||21 juil. 1998||International Business Machines Corporation||Check image distribution and processing system and method|
|US5821924 *||26 oct. 1995||13 oct. 1998||Elonex I.P. Holdings, Ltd.||Computer peripherals low-power-consumption standby system|
|US5870264 *||30 juin 1997||9 févr. 1999||Restle; Wilfried||Method and arrangement for significantly increasing the lifetime of magnetic disk storage devices|
|US5900007 *||7 août 1996||4 mai 1999||International Business Machines Corporation||Data storage disk array having a constraint function for spatially dispersing disk files in the disk array|
|US5954820 *||26 sept. 1997||21 sept. 1999||International Business Machines Corporation||Portable computer with adaptive demand-driven power management|
|US5961613 *||2 sept. 1997||5 oct. 1999||Ast Research, Inc.||Disk power manager for network servers|
|US6115509 *||10 mars 1994||5 sept. 2000||International Business Machines Corp||High volume document image archive system and method|
|US6430005 *||7 juin 1995||6 août 2002||Syquest Technology, Inc.||Removable cartridge disk drive with a receiver for receiving a cartridge housing a hard disk|
|US6470071 *||31 janv. 2001||22 oct. 2002||General Electric Company||Real time data acquisition system including decoupled host computer|
|US6600703 *||26 avr. 2001||29 juil. 2003||International Business Machines Corporation||Magazine for a plurality of removable hard disk drives|
|US6732125 *||8 sept. 2000||4 mai 2004||Storage Technology Corporation||Self archiving log structured volume with intrinsic data protection|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7035972||26 juin 2003||25 avr. 2006||Copan Systems, Inc.||Method and apparatus for power-efficient high-capacity scalable storage system|
|US7057981||29 janv. 2004||6 juin 2006||Hitachi, Ltd.||Disk array system and method for controlling disk array system|
|US7080201||11 sept. 2003||18 juil. 2006||Hitachi, Ltd.||Disk array apparatus and method for controlling the same|
|US7200074||30 déc. 2004||3 avr. 2007||Hitachi, Ltd.||Disk array system and method for controlling disk array system|
|US7203135||30 déc. 2004||10 avr. 2007||Hitachi, Ltd.||Disk array system and method for controlling disk array system|
|US7210004 *||14 avr. 2005||24 avr. 2007||Copan Systems||Method and system for background processing of data in a storage system|
|US7210005 *||9 févr. 2006||24 avr. 2007||Copan Systems, Inc.||Method and apparatus for power-efficient high-capacity scalable storage system|
|US7281088||31 mars 2004||9 oct. 2007||Hitachi, Ltd.||Disk array apparatus and disk array apparatus controlling method|
|US7310713||19 déc. 2005||18 déc. 2007||Hitachi, Ltd.||Storage system having dynamic volume allocation function|
|US7315965||4 févr. 2004||1 janv. 2008||Network Appliance, Inc.||Method and system for storing data using a continuous data protection system|
|US7325159||4 févr. 2004||29 janv. 2008||Network Appliance, Inc.||Method and system for data recovery in a continuous data protection system|
|US7330931||30 déc. 2005||12 févr. 2008||Copan Systems, Inc.||Method and system for accessing auxiliary data in power-efficient high-capacity scalable storage system|
|US7353406 *||11 févr. 2004||1 avr. 2008||Hitachi, Ltd.||Disk array optimizing the drive operation time|
|US7355806||18 juil. 2005||8 avr. 2008||Hitachi, Ltd.||Disk array unit|
|US7360017 *||19 mars 2004||15 avr. 2008||Hitachi, Ltd.||Storage control device for longevity of the disk spindles based upon access of hard disk drives|
|US7366870||21 févr. 2006||29 avr. 2008||Hitachi, Ltd.||System and method for accessing an offline storage unit through an online storage unit|
|US7373456||27 nov. 2006||13 mai 2008||Hitachi, Ltd.||Disk array apparatus and disk array apparatus controlling method|
|US7373559||8 sept. 2004||13 mai 2008||Copan Systems, Inc.||Method and system for proactive drive replacement for high availability storage systems|
|US7380060 *||9 mars 2007||27 mai 2008||Copan Systems, Inc.||Background processing of data in a storage system|
|US7380088 *||4 févr. 2005||27 mai 2008||Dot Hill Systems Corp.||Storage device method and apparatus|
|US7392364||1 oct. 2004||24 juin 2008||Hitachi, Ltd.||Storage system having dynamic volume allocation function|
|US7398364 *||28 avr. 2005||8 juil. 2008||Hitachi, Ltd.||Switching method of data replication mode|
|US7398399 *||12 déc. 2003||8 juil. 2008||International Business Machines Corporation||Apparatus, methods and computer programs for controlling performance of operations within a data processing system or network|
|US7404035 *||22 nov. 2005||22 juil. 2008||Hitachi, Ltd.||System for controlling spinning of disk|
|US7426617||5 févr. 2004||16 sept. 2008||Network Appliance, Inc.||Method and system for synchronizing volumes in a continuous data protection system|
|US7433218||19 juil. 2005||7 oct. 2008||Marvell International Ltd.||Flash memory module|
|US7434090||30 sept. 2004||7 oct. 2008||Copan System, Inc.||Method and apparatus for just in time RAID spare drive pool management|
|US7434097||3 juin 2004||7 oct. 2008||Copan System, Inc.||Method and apparatus for efficient fault-tolerant disk drive replacement in raid storage systems|
|US7447121||20 mars 2007||4 nov. 2008||Hitachi, Ltd.||Disk array system|
|US7453774||30 déc. 2004||18 nov. 2008||Hitachi, Ltd.||Disk array system|
|US7454529 *||2 août 2002||18 nov. 2008||Netapp, Inc.||Protectable data storage system and a method of protecting and/or managing a data storage system|
|US7457800||6 oct. 2005||25 nov. 2008||Burnside Acquisition, Llc||Storage system for randomly named blocks of data|
|US7457813||6 oct. 2005||25 nov. 2008||Burnside Acquisition, Llc||Storage system for randomly named blocks of data|
|US7457981||21 juil. 2006||25 nov. 2008||Hitachi, Ltd.||Anomaly notification control in disk array|
|US7460382||2 mars 2005||2 déc. 2008||Marvell International Ltd.||Flash memory module|
|US7461203||14 févr. 2005||2 déc. 2008||Hitachi, Ltd.||Disk array apparatus and method for controlling the same|
|US7475283||13 mars 2007||6 janv. 2009||Hitachi, Ltd.||Anomaly notification control in disk array|
|US7480765||12 mai 2006||20 janv. 2009||Hitachi, Ltd.||Storage unit and circuit for shaping communication signal|
|US7483284||2 mars 2005||27 janv. 2009||Marvell International Ltd.||Flash memory module|
|US7484050||8 sept. 2004||27 janv. 2009||Copan Systems Inc.||High-density storage systems using hierarchical interconnect|
|US7516346 *||25 oct. 2005||7 avr. 2009||Nec Laboratories America, Inc.||System and method for dynamically changing the power mode of storage disks based on redundancy and system load|
|US7523258||14 févr. 2005||21 avr. 2009||Hitachi, Ltd.||Disk array apparatus and method for controlling the same|
|US7554758||20 avr. 2006||30 juin 2009||Hitachi, Ltd.||Disk array unit|
|US7581126 *||23 janv. 2006||25 août 2009||Kabushiki Kaisha Toshiba||Information recording apparatus|
|US7587548||14 févr. 2005||8 sept. 2009||Hitachi, Ltd.||Disk array apparatus and method for controlling the same|
|US7600051 *||10 juin 2004||6 oct. 2009||International Business Machines Corporation||Autonomic hardware-level storage device data integrity checking|
|US7650533||20 avr. 2006||19 janv. 2010||Netapp, Inc.||Method and system for performing a restoration in a continuous data protection system|
|US7657768||27 févr. 2008||2 févr. 2010||Hitachi, Ltd.||Disk array optimizing the drive operation time|
|US7669016||20 déc. 2005||23 févr. 2010||Hitachi, Ltd.||Memory control device and method for controlling the same|
|US7671485||26 févr. 2007||2 mars 2010||Hitachi, Ltd.||Storage system|
|US7685362||29 juil. 2008||23 mars 2010||Hitachi, Ltd.||Storage unit and circuit for shaping communication signal|
|US7689835||6 mai 2008||30 mars 2010||International Business Machines Corporation||Computer program product and computer system for controlling performance of operations within a data processing system or networks|
|US7720817||4 févr. 2005||18 mai 2010||Netapp, Inc.||Method and system for browsing objects on a protected volume in a continuous data protection system|
|US7752401||25 janv. 2006||6 juil. 2010||Netapp, Inc.||Method and apparatus to automatically commit files to WORM status|
|US7752669||31 juil. 2008||6 juil. 2010||International Business Machines Corporation||Method and computer program product for identifying or managing vulnerabilities within a data processing network|
|US7757058||30 avr. 2008||13 juil. 2010||Hitachi, Ltd.||Storage system having dynamic volume allocation function|
|US7774610||14 sept. 2005||10 août 2010||Netapp, Inc.||Method and apparatus for verifiably migrating WORM data|
|US7783606||28 avr. 2006||24 août 2010||Netapp, Inc.||Method and system for remote data recovery|
|US7797582||3 août 2007||14 sept. 2010||Netapp, Inc.||Method and system for storing data using a continuous data protection system|
|US7823010||20 oct. 2008||26 oct. 2010||Hitachi, Ltd.||Anomaly notification control in disk array|
|US7865665||30 déc. 2004||4 janv. 2011||Hitachi, Ltd.||Storage system for checking data coincidence between a cache memory and a disk drive|
|US7870402 *||9 juil. 2007||11 janv. 2011||Hitachi, Ltd.||Control method of storage system, storage system, and storage apparatus|
|US7873804 *||17 août 2007||18 janv. 2011||International Business Machines Corporation||Apparatus for facilitating disaster recovery|
|US7882081||30 août 2002||1 févr. 2011||Netapp, Inc.||Optimized disk repository for the storage and retrieval of mostly sequential data|
|US7904679||4 févr. 2005||8 mars 2011||Netapp, Inc.||Method and apparatus for managing backup data|
|US7908526||8 avr. 2008||15 mars 2011||Silicon Graphics International||Method and system for proactive drive replacement for high availability storage systems|
|US7975113||15 janv. 2010||5 juil. 2011||Hitachi, Ltd.||Memory control device and method for controlling the same|
|US7991974||9 juin 2010||2 août 2011||Hitachi, Ltd.||Storage system having dynamic volume allocation function|
|US8001343 *||6 juin 2006||16 août 2011||Fujitsu Limited||Storage device with power control function|
|US8015442||15 sept. 2010||6 sept. 2011||Hitachi, Ltd.||Anomaly notification control in disk array|
|US8024306||16 mai 2007||20 sept. 2011||International Business Machines Corporation||Hash-based access to resources in a data processing network|
|US8078809||11 mars 2008||13 déc. 2011||Hitachi, Ltd.||System for accessing an offline storage unit through an online storage unit|
|US8140754||3 janv. 2008||20 mars 2012||Hitachi, Ltd.||Methods and apparatus for managing HDD's spin-down and spin-up in tiered storage systems|
|US8151046||13 févr. 2009||3 avr. 2012||Hitachi, Ltd.||Disk array apparatus and method for controlling the same|
|US8161317||2 juin 2009||17 avr. 2012||Hitachi, Ltd.||Storage system and control method thereof|
|US8176346 *||1 mai 2009||8 mai 2012||Canon Kabushiki Kaisha||Information processing apparatus with power saving mode and method for controlling information processing apparatus|
|US8200898||1 oct. 2010||12 juin 2012||Hitachi, Ltd.||Storage apparatus and method for controlling the same|
|US8301852||13 nov. 2008||30 oct. 2012||International Business Machines Corporation||Virtual storage migration technique to minimize spinning disks|
|US8356139 *||24 mars 2009||15 janv. 2013||Hitachi, Ltd.||Storage system for maintaining hard disk reliability|
|US8365013||5 août 2011||29 janv. 2013||Hitachi, Ltd.||Anomaly notification control in disk array|
|US8402211||24 août 2007||19 mars 2013||Hitachi, Ltd.||Disk array apparatus and disk array apparatus controlling method|
|US8412986||19 mars 2012||2 avr. 2013||Hitachi, Ltd.||Storage system and control method thereof|
|US8423739||6 févr. 2008||16 avr. 2013||International Business Machines Corporation||Apparatus, system, and method for relocating logical array hot spots|
|US8429342||8 mai 2012||23 avr. 2013||Hitachi, Ltd.||Drive apparatus and method for controlling the same|
|US8468300||9 déc. 2010||18 juin 2013||Hitachi, Ltd.||Storage system having plural controllers and an expansion housing with drive units|
|US8516204||14 juin 2011||20 août 2013||Hitachi, Ltd.||Memory control device and method for controlling the same|
|US8627130 *||8 oct. 2010||7 janv. 2014||Bridgette, Inc.||Power saving archive system|
|US8677162||7 déc. 2010||18 mars 2014||International Business Machines Corporation||Reliability-aware disk power management|
|US8868950||27 févr. 2013||21 oct. 2014||International Business Machines Corporation||Reliability-aware disk power management|
|US8914340 *||6 févr. 2008||16 déc. 2014||International Business Machines Corporation||Apparatus, system, and method for relocating storage pool hot spots|
|US8929018||29 mai 2012||6 janv. 2015||Hitachi, Ltd.||Disk array unit|
|US20040260967 *||3 juin 2004||23 déc. 2004||Copan Systems, Inc.||Method and apparatus for efficient fault-tolerant disk drive replacement in raid storage systems|
|US20050055501 *||8 sept. 2004||10 mars 2005||Copan Systems, Inc.||High-density storage systems using hierarchical interconnect|
|US20050060618 *||8 sept. 2004||17 mars 2005||Copan Systems, Inc.||Method and system for proactive drive replacement for high availability storage systems|
|US20050111249 *||11 févr. 2004||26 mai 2005||Hitachi, Ltd.||Disk array optimizing the drive operation time|
|US20050132184 *||12 déc. 2003||16 juin 2005||International Business Machines Corporation||Apparatus, methods and computer programs for controlling performance of operations within a data processing system or network|
|US20050138223 *||10 juin 2004||23 juin 2005||International Business Machines Corp.||Autonomic hardware-level storage device data integrity checking|
|US20050144383 *||19 mars 2004||30 juin 2005||Seiichi Higaki||Memory control device and method for controlling the same|
|US20050149672 *||14 févr. 2005||7 juil. 2005||Katsuyoshi Suzuki||Disk array apparatus and method for controlling the same|
|US20050160221 *||31 mars 2004||21 juil. 2005||Takashi Yamazaki||Disk array apparatus and disk array apparatus controlling method|
|US20050210304 *||8 mars 2005||22 sept. 2005||Copan Systems||Method and apparatus for power-efficient high-capacity scalable storage system|
|US20050227569 *||3 juin 2005||13 oct. 2005||Matsushita Electric Industrial Co., Ltd.||Light-emitting semiconductor device, light-emitting system and method for fabricating light-emitting semiconductor device|
|US20050243610 *||14 avr. 2005||3 nov. 2005||Copan Systems||Method and system for background processing of data in a storage system|
|US20050259345 *||18 juil. 2005||24 nov. 2005||Kazuo Hakamata||Disk array unit|
|US20050268119 *||26 juin 2003||1 déc. 2005||Aloke Guha||Method and apparatus for power-efficient high-capacity scalable storage system|
|US20090198748 *||6 févr. 2008||6 août 2009||Kevin John Ash||Apparatus, system, and method for relocating storage pool hot spots|
|US20090313431 *||17 déc. 2009||Hajime Takasugi||Disk Array Recording Apparatus and Recording Control Method Thereof|
|US20110087912 *||8 oct. 2010||14 avr. 2011||Bridgette, Inc. Dba Cutting Edge Networked Storage||Power saving archive system|
|US20110264854 *||24 mars 2009||27 oct. 2011||Hitachi, Ltd.||Storage system|
|US20140181061 *||21 déc. 2012||26 juin 2014||Hong Jiang||Data distribution in a cloud computing system|
|USRE45350||24 nov. 2010||20 janv. 2015||Permabit Technology Corporation||Storage system for randomly named blocks of data|
|EP1540450A2 *||26 juin 2003||15 juin 2005||Copan Systems, Inc.||Method and apparatus for power-efficient high-capacity scalable storage system|
|EP1860556A2 *||30 août 2006||28 nov. 2007||Hitachi, Ltd.||Storage system and control method thereof|
|EP1909164A2||29 mars 2007||9 avr. 2008||Hitachi, Ltd.||Storage control device|
|EP1909164A3 *||29 mars 2007||7 juil. 2010||Hitachi, Ltd.||Storage control device|
|EP2077495A2 *||14 oct. 2008||8 juil. 2009||Hitachi Ltd.||Methods and apparatus for managing HDD`s spin-down and spin-up in tiered storage systems|
|EP2527986A1 *||21 janv. 2010||28 nov. 2012||Fujitsu Limited||Information processing apparatus, drive control program and drive control method|
|WO2004025628A2||26 juin 2003||25 mars 2004||Copan Systems Inc||Method and apparatus for power-efficient high-capacity scalable storage system|
|U.S. Classification||711/112, G9B/19.027, 711/161, 713/320, 714/5.11|
|International Classification||G06F12/00, G11B19/20, G06F3/06, G11B19/00, G11B15/18, G11B17/00, G11B20/00, G11B11/00, G11B15/00|
|Cooperative Classification||G06F3/0683, G06F2211/1088, G06F11/1076, G06F3/0614, Y02B60/1246, G06F3/0634, G06F3/0625, G11B19/20, G06F3/0647|
|European Classification||G06F11/10R, G06F3/06A2W, G06F3/06A6L4, G06F3/06A4C4, G06F3/06A2R, G06F3/06A4H2, G11B19/20|
|Apr. 22, 2002||AS||Assignment|
Owner name: DATA DOMAIN, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, KAI;LEE, HOWARD;REEL/FRAME:012851/0770;SIGNING DATES FROM 20020321 TO 20020403
|Aug. 27, 2009||FPAY||Fee payment|
Year of fee payment: 4
|Feb. 25, 2010||AS||Assignment|
Owner name: DATA DOMAIN LLC,DELAWARE
Free format text: CONVERSION;ASSIGNOR:DATA DOMAIN, INC.;REEL/FRAME:023985/0768
Effective date: 20091218
|Mar. 3, 2010||AS||Assignment|
Owner name: DATA DOMAIN HOLDING, INC.,MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DATA DOMAIN LLC;REEL/FRAME:024025/0978
Effective date: 20091222
|Mar. 11, 2010||AS||Assignment|
Owner name: EMC CORPORATION,MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DATA DOMAIN HOLDING, INC.;REEL/FRAME:024072/0829
Effective date: 20091231
|Mar. 14, 2013||FPAY||Fee payment|
Year of fee payment: 8