US6904508B2 - Recovery of dynamic maps and data managed thereby - Google Patents

Recovery of dynamic maps and data managed thereby

Info

Publication number
US6904508B2
Authority
US
United States
Prior art keywords
data
data elements
processing
elements
anchor points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US09/752,253
Other versions
US20020087673A1 (en)
Inventor
Stephen S. Selkirk
Charles A. Milligan
Perry Merritt
Edward T. Gladstone
Kenneth Willis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle America Inc
Original Assignee
Storage Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Storage Technology Corp filed Critical Storage Technology Corp
Priority to US09/752,253 priority Critical patent/US6904508B2/en
Assigned to STORAGE TECHNOLOGY CORPORATION reassignment STORAGE TECHNOLOGY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GLADSTONE, EDWARD T., WILLIS, KENNETH, MERRITT, PERRY, MILLIGAN, CHARLES A., SELKIRK, STEPHEN S.
Publication of US20020087673A1 publication Critical patent/US20020087673A1/en
Priority to US11/098,840 priority patent/US7146485B1/en
Application granted granted Critical
Publication of US6904508B2 publication Critical patent/US6904508B2/en
Assigned to SUN MICROSYSTEMS, INC. reassignment SUN MICROSYSTEMS, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: STORAGE TECHNOLOGY CORPORATION
Assigned to Oracle America, Inc. reassignment Oracle America, Inc. MERGER AND CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: Oracle America, Inc., ORACLE USA, INC., SUN MICROSYSTEMS, INC.
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1415Saving, restoring, recovering or retrying at system level
    • G06F11/1435Saving, restoring, recovering or retrying at system level using file system or storage system metadata
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10Indexing scheme relating to G06F11/10
    • G06F2211/1002Indexing scheme relating to G06F11/1076
    • G06F2211/1038LFS, i.e. Log Structured File System used in RAID systems with parity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10Indexing scheme relating to G06F11/10
    • G06F2211/1002Indexing scheme relating to G06F11/1076
    • G06F2211/104Metadata, i.e. metadata associated with RAID systems with parity

Abstract

A mechanism is provided for storing self-defining data and mapping elements with either a fixed set of allowed structures or types or with the structures and types determined by rules. Recovery is enhanced by the use of backward and forward pointers between data and mapping elements for the data elements in the order written by the management algorithm. Recovery is also enhanced by the use of companion pointers with metadata. The companion pointers may include pointers to data or mapping elements that are part of the same structural grouping. The metadata may describe the structural grouping. The metadata may also include pointers to the previous and/or next versions of the same elements. Recovery of the data and/or mapping structures is achieved by the reverse application of the management algorithm.

Description

CROSS REFERENCE TO PROVISIONAL AND RELATED APPLICATIONS
This application claims the benefit of the filing date of corresponding U.S. Provisional Patent Application Ser. No. 60/212,284, entitled “System for providing a policy-based demand and use of functions like virtual volumes, instant copy, RAID, etc.”, filed Jun. 19, 2000. In addition, the present invention is related to applications entitled A SYSTEM TO SUPPORT DYNAMICALLY FLEXIBLE DATA DEFINITIONS AND STORAGE REQUIREMENTS, Ser. No. 09/751,635, EFFECTING INSTANT COPIES IN A DYNAMICALLY MAPPED SYSTEM, Ser. No. 09/884,294, USING CURRENT RECOVERY MECHANISMS TO IMPLEMENT DYNAMIC MAPPING OPERATIONS, Ser. No. 09/800,714, DYNAMICALLY CHANGEABLE VIRTUAL MAPPING SCHEME, Ser. No. 09/751,772, FLOATING VIRTUALIZATION LAYERS, Ser. No. 09/752,071, and SELF-DEFINING DATA UNITS, Ser. No. 09/751,641, which are filed even date hereof, assigned to the same assignee, and incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to an improved data processing system and, in particular, to recovery of virtualization structures. Still more particularly, the present invention provides a method and apparatus for recovery of virtualization mapping structures utilizing multiple techniques simultaneously and in parallel.
2. Description of Related Art
Maps are used in a disk controller to convert a host based Logical Unit (LUN) and Logical Block Address (LBA) to a controller based LUN and LBA. A mapping system is necessary for a disk controller to provide features such as virtual volumes, data compression, and snapshot. In fact, maps are used in current controller designs to facilitate the use of Redundant Array of Independent Disk (RAID) devices.
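As a rough illustration of the translation such a map performs, the following Python sketch models a host (LUN, LBA) to controller (LUN, LBA) lookup; the class and method names (VirtualizationMap, MapEntry, translate) are illustrative assumptions and not structures taken from the patent.

```python
# Hypothetical sketch of the lookup a mapped controller performs.
# Names (VirtualizationMap, MapEntry, translate) are illustrative, not from the patent.
from dataclasses import dataclass


@dataclass(frozen=True)
class MapEntry:
    controller_lun: int
    controller_lba: int


class VirtualizationMap:
    """Translates a host-visible (LUN, LBA) pair to a controller (LUN, LBA) pair."""

    def __init__(self) -> None:
        self._entries: dict[tuple[int, int], MapEntry] = {}

    def set(self, host_lun: int, host_lba: int, entry: MapEntry) -> None:
        self._entries[(host_lun, host_lba)] = entry

    def translate(self, host_lun: int, host_lba: int) -> MapEntry:
        return self._entries[(host_lun, host_lba)]


# Example: host LUN 0, LBA 4096 is backed by controller LUN 7, LBA 1024.
vmap = VirtualizationMap()
vmap.set(0, 4096, MapEntry(controller_lun=7, controller_lba=1024))
assert vmap.translate(0, 4096) == MapEntry(7, 1024)
```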
A problem that arises when using a map-based architecture is where to store the maps. Current map designs use anywhere from four megabytes for a very simple map to dynamic mapping systems that use twelve megabytes or more. As the sizes of disks and of system configurations increase, it is not inconceivable that these systems will require maps that are several gigabytes in size.
These large structures make recovery of the data, in the case of a lost or failed virtual map, a time-consuming and complicated process. In addition, some simple and straightforward mapping schemes are vulnerable to loss of data even if only a small portion of the map is corrupted or lost. In some cases, the recovery takes so long that the customer may consider the data lost even if it can eventually be recovered.
Thus, it would be advantageous to provide a method and apparatus for recovery of dynamic maps and data managed thereby.
SUMMARY OF THE INVENTION
The present invention provides a mechanism for storing self-defining data and mapping elements with either a fixed set of allowed structures or types or with the structures and types determined by rules. Recovery is enhanced by the use of backward and forward pointers between data and mapping elements for the data elements in the order written by the management algorithm. Recovery is also enhanced by the use of companion pointers with metadata. The companion pointers may include pointers to data or mapping elements that are part of the same structural grouping. For example these pointers may point to the elements that make up a redundancy stripe or the elements that make up a mapping sub-tree. The metadata may describe the structural grouping. The metadata may also include pointers to the previous and/or next versions of the same elements. For example, the metadata may include a pointer to the previous older version of a data block or to the location where the next version of the data block will be stored.
Recovery of data or mapping structures is achieved by reverse application of the management algorithm. For example, if a log structured file algorithm is used for storing the elements, then the whole structure may be recovered by reading the log backwards. Recovery is enhanced by the use of multi-processing and by the use of a binary fracturing algorithm. For example, forward and backward pointers may be used and different processors may be assigned to begin recovery at different partitions of the structure, each recovering a part of the whole.
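The binary fracturing idea can be pictured with a small Python sketch that halves the physical range into partitions and hands each partition to a separate worker; the function names (fracture, scan_range, recover_in_parallel), the fixed depth, and the use of threads are assumptions made purely for illustration.

```python
# A minimal sketch of parallel "binary fracturing": split the physical range in half
# repeatedly and let separate workers recover each partition. All names and the
# depth-2 split are assumptions made for illustration only.
from concurrent.futures import ThreadPoolExecutor


def fracture(lo: int, hi: int, depth: int) -> list[tuple[int, int]]:
    """Recursively halve the range [lo, hi) into 2**depth partitions."""
    if depth == 0 or hi - lo <= 1:
        return [(lo, hi)]
    mid = (lo + hi) // 2
    return fracture(lo, mid, depth - 1) + fracture(mid, hi, depth - 1)


def scan_range(lo: int, hi: int) -> dict[int, int]:
    """Stand-in for per-partition recovery: pretend each block maps to itself."""
    return {block: block for block in range(lo, hi)}


def recover_in_parallel(total_blocks: int, workers: int = 4) -> dict[int, int]:
    partitions = fracture(0, total_blocks, depth=2)   # four partitions
    recovered: dict[int, int] = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(lambda p: scan_range(*p), partitions):
            recovered.update(partial)                 # merge the partial maps
    return recovered


print(len(recover_in_parallel(16)))   # 16 entries recovered across four workers
```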
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
FIG. 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented;
FIG. 2 is a block diagram of a storage subsystem in accordance with a preferred embodiment of the present invention;
FIG. 3 is a block diagram of a data structure in accordance with a preferred embodiment of the present invention;
FIG. 4 illustrates the logic flow for finding a data element to be processed in accordance with a preferred embodiment of the present invention; and
FIG. 5 illustrates the logic flow for processing the meta data of a data element in order to rebuild a virtual mapping table in accordance with a preferred embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
With reference now to the figures, FIG. 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented. Distributed data processing system 100 is a network of computers in which the present invention may be implemented. Distributed data processing system 100 contains a network 102, which is the medium used to provide communications links between various devices and computers connected together within distributed data processing system 100. Network 102 may include permanent connections, such as wire or fiber optic cables, or temporary connections made through telephone connections.
In the depicted example, a server 104 is connected to network 102 along with storage subsystem 106. In addition, clients 108, 110, and 112 also are connected to network 102. These clients 108, 110, and 112 may be, for example, personal computers or network computers. For purposes of this application, a network computer is any computer, coupled to a network, which receives a program or other application from another computer coupled to the network. In the depicted example, server 104 provides data, such as boot files, operating system images, and applications to clients 108-112. Clients 108, 110, and 112 are clients to server 104. Distributed data processing system 100 may include additional servers, clients, and other devices not shown. Distributed data processing system 100 may be implemented as one or more of a number of different types of networks, such as, for example, an intranet, a local area network (LAN), or a wide area network (WAN). Network 102 contains various links, such as, for example, fiber optic links, packet switched communication links, enterprise systems connection (ESCON) fibers, small computer system interface (SCSI) cable, wireless communication links. In these examples, storage subsystem 106 may be connected to server 104 using ESCON fibers. FIG. 1 is intended as an example and not as an architectural limitation for the present invention.
Turning next to FIG. 2, a block diagram of a storage subsystem is depicted in accordance with a preferred embodiment of the present invention. Storage subsystem 200 may be used to implement storage subsystem 106 in FIG. 1. As illustrated in FIG. 2, storage subsystem 200 includes storage devices 202, interface 204, interface 206, cache memory 208, processors 210-224, and shared memory 226.
Interfaces 204 and 206 in storage subsystem 200 provide a communication gateway through which communication between a data processing system and storage subsystem 200 may occur. In this example, interfaces 204 and 206 may be implemented using a number of different mechanisms, such as ESCON cards, SCSI cards, fiber channel interfaces, modems, network interfaces, or a network hub. Although the depicted example illustrates the use of two interface units, any number of interface cards may be used depending on the implementation.
In this example, storage subsystem 200 is a shared virtual array. Storage subsystem 200 is a virtual storage system in that each physical storage device in storage subsystem 200 may be represented to a data processing system, such as server 104 in FIG. 1, as a number of virtual devices. In this example, storage devices 202 are a set of disk drives set up as a redundant array of independent disks (RAID) system. Of course, storage devices other than disk drives may be used. For example, optical drives may be used within storage devices 202. Further, a mixture of different device types may be used, such as disk drives and tape drives.
Data being transferred between interfaces 204 and 206 and storage devices 202 are temporarily placed into cache memory 208. Additionally, cache memory 208 may be accessed by processors 210-224, which are used to handle reading and writing data for storage devices 202. Shared memory 226 is used by processors 210-224 to handle and track the reading and writing of data to storage devices 202. In particular, processors 210-224 are used to execute instructions for routines used in snapshot copy operations.
The present invention manages virtual storage facilities comprising an organization of computer equipment, for example, a host network, data transfer means, storage controller means and permanent storage means, and attachment means connecting these devices together. The data storage facilities also may include management information associated with data units such that the management information provides an inventory of capabilities with upper and lower boundaries that may limit the options available to store the data and still meet a user's criteria. For purposes of this application, a data unit is a logical entity known to an owning entity that is composed of a number of data elements and meta-data, and a data element is a grouping of data bits or bytes that the subsystem chooses to manage as a consistent set. Such management information may be independent of attributes or characteristics of the elements of the physical storage subsystem actually used to store the data objects, but may consist of imputed associations with those attributes through, for example, changeable rule sets, processes or algorithms. These rule sets, processes or algorithms may be changed by user demand or via processes that may monitor data object usage and manipulation. The storage of data objects may be adjusted to comply with modifications in, for example, the rule sets, processes or algorithms.
With reference to FIG. 3, a block diagram of a data structure is illustrated in accordance with a preferred embodiment of the present invention. Data structure 300 includes data elements D1 301, D2 302, D3 303, D4 304, D5 305, D6 306, and D7 307. Each data element includes metadata uniquely associated with the data such that installation management criteria, boundaries, and preferences for each data unit and attributes for the data units are maintained. This metadata may include time sequencing of metadata (time stamp), location of stored data, structure definition pointers including size parameters, pointers to related metadata units, management rules, sequencing rules, and management functions invoked to accomplish management rules.
The management rules may include performance criteria, reliability criteria, availability criteria, and capacity criteria. The sequencing rules may include logical rules, time rules, and structure rules. Management functions may include RAID, parity, multiple parity, and other known functions that may be invoked to accomplish management rules. Management rules, sequencing rules, and management functions may be stored in the metadata as pointers to the rules or functions.
Furthermore, each data element may include pointers to the next or previous version in a time sequence. For example, data element D1 301 includes a pointer to the next version of updated data, D2 302. Consequently, data element D2 302 includes a pointer to the previous version, D1 301. Each data element may include pointers to the next or previous data element in a logical sequence, such as a next track in a sequence. For example, data element D3 303 may include a pointer to D4 304 as the next track and D4 304 may include a pointer to D5 305 as the next track. Conversely, D5 305 may include a pointer to D4 304 as the previous data element in the logical sequence and D4 304 may include a pointer to D3 303 as the previous data element.
Data elements D2 302 and D4 304 may include metadata to indicate that they are mirrored with pointers to the mirrored copies. Therefore, one can get twice the read performance and improved availability. Data elements D5 305, D6 306, and D7 307 may include metadata to indicate that they are part of a RAID stripe and the available read bandwidth is three drives.
The metadata may be stored separate from the data. Thus, each data element may include a virtual address (VA) pointing to the host view of the stored data. For example, D1 301 includes VA 311, D2 302 includes VA 312, D3 303 includes VA 313, D4 304 includes VA 314, D5 305 includes VA 315, D6 306 includes VA 316, and D7 307 includes VA 317.
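A possible in-memory shape for such a data element, with its version pointers, logical sequence pointers, redundancy-group companions, and virtual address, is sketched below in Python; the field names are illustrative assumptions, as the patent does not prescribe a concrete layout.

```python
# A sketch of one possible in-memory layout for a data element and its metadata,
# following FIG. 3. Field names are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DataElement:
    name: str                                 # e.g. "D4"
    virtual_address: int                      # host view of the stored data (VA)
    prev_version: Optional[str] = None        # older version of the same data
    next_version: Optional[str] = None        # newer version of the same data
    prev_logical: Optional[str] = None        # previous element in the logical sequence
    next_logical: Optional[str] = None        # next element in the logical sequence
    companions: list[str] = field(default_factory=list)        # mirror or RAID-stripe members
    management_rules: list[str] = field(default_factory=list)  # e.g. ["logical_sequence", "mirror"]


# D4 from FIG. 3: part of the logical sequence D3 <-> D4 <-> D5 and mirrored with D2.
d4 = DataElement(
    name="D4", virtual_address=314,
    prev_logical="D3", next_logical="D5",
    companions=["D2"], management_rules=["logical_sequence", "mirror"],
)
```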
The data elements in FIG. 3 may be mapping elements. Mapping elements may include forward and backward pointers to other mapping elements. If the mapping tables are lost or corrupted, then the mapping may be recovered by finding one or more of the data elements, rebuilding the mapping by following all of the links to the other data elements, and reestablishing the mapping entries with the virtual address stored in each data element.
FIG. 4 illustrates the logic flow for finding a data element to be processed in accordance with a preferred embodiment of the present invention. In this example, the operation begins by either identifying a virtual space for which mapping has been lost (step 420) or identifying a physical space not covered by a virtual map (step 430). If an anchor point is recorded (step 422:YES), then the anchor points are processed to find data elements in a managed order, using such techniques as, for example, sequential addresses, sorting by address, or sorting by query frequency (step 440). To enhance recovery for specific virtual LUNs or searching through virtual LUNs, a set of logical sequence pointers for the data elements used to store the data units of the virtual LUNs is maintained, stored, for example, in a linked list. Pointers to at least one data element or an associated metadata unit in the linked list are kept in safe storage. Such a pointer may be called an anchor point. Then, when recovery is needed or enhanced searching is requested and a request from a server is received for a data unit in that LUN, the recovery or searching may use the pointer in safe storage to locate one element and follow the sequence of pointers to the requested element. Additional anchor points may be stored to improve recovery or searching speed. The order of processing the discovered anchor points may be optimized for the desired recovery; for example, if seeking a specific data unit, the anchor points are sorted by their addresses, with those closer to the requested data unit processed first.
Then processing is launched for each data element found (step 460) as further explained in FIG. 5.
Returning to step 430, in which a physical space is identified which is not covered by a virtual map, a determination is then made as to whether or not an anchor point exists (step 432). If an anchor point does exist (step 432:YES), then the anchor point is processed to find data elements in a managed order which is sequential, sorted by address, and/or sorted by query frequency (step 440). If an anchor point does not exist (step 432:NO), then a data element is found in the physical space. This may be accomplished by a sequential scan, by selecting a random entry, or by a binary search (step 450), and then processing is launched for each data element found (step 460), which is further explained in FIG. 5.
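A compact sketch of this selection logic, assuming hypothetical helpers (order_anchor_points, find_elements) and a simple integer address model, might look as follows.

```python
# A sketch of the FIG. 4 flow: prefer recorded anchor points, otherwise fall back to
# scanning physical space. Helper names and the integer-address model are assumptions.
import random
from typing import Iterable, Optional


def order_anchor_points(anchors: list[int], requested: Optional[int] = None) -> list[int]:
    """Process anchors in a managed order; if a specific data unit is sought,
    visit the anchors closest to its address first (step 440)."""
    if requested is None:
        return sorted(anchors)                        # simple sequential order
    return sorted(anchors, key=lambda a: abs(a - requested))


def find_elements(anchors: list[int], physical_space: list[int],
                  requested: Optional[int] = None) -> Iterable[int]:
    if anchors:                                       # steps 422/432: an anchor point exists
        for anchor in order_anchor_points(anchors, requested):
            yield anchor                              # step 440: follow from each anchor
    else:                                             # step 432:NO -- no anchor point
        yield random.choice(physical_space)           # step 450: random entry (or scan / binary search)


for element in find_elements(anchors=[120, 40, 88],
                             physical_space=list(range(256)), requested=90):
    print(element)   # 88, 120, 40 -- each is then launched into the FIG. 5 processing (step 460)
```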
FIG. 5 illustrates the logic flow for processing the meta data of a data element in order to rebuild a virtual mapping table in accordance with a preferred embodiment of the present invention. Expanding on FIG. 4, in this example, the operation begins with a data element being added to the processing queue (step 505). The discovered data element(s) is/are sorted (step 510). Then a determination is made as to whether or not the processing queue is empty (step 511). If the processing queue is empty (step 511:YES), the operation terminates. If the processing queue is not empty (step 511:NO) then a determination is made as to whether or not the data element has the location of meta data (step 514). If the data element has the location of the meta data (step 514:YES), the associated meta data is retrieved. Then a determination is made as to whether or not the data element is already recovered (step 517). If the data element is already recovered (step 517:YES), then the operation returns to step 511 in which a determination is made as to whether or not the processing queue is empty. Returning to step 517, if the data element is not already recovered (step 517:NO), then the management rules and/or links are determined (step 516).
Returning to step 514, if the data element does not have the location of the meta data (step 514:NO), then a determination is made as to whether or not the meta data is with the data (step 515). If the meta data is not with the data (step 515:NO), then the determination is made as to whether or not the data element is already recovered (step 517). If the meta data is with the data (step 515:YES), then the management rules and/or links are determined (step 516). A mapping table or structure with a virtual address is updated (step 518). Then a determination is made as to whether or not there are more management rules and/or links in selected order (step 520). If there are no more management rules and/or links in selected order (step 520:NO), then the data element is marked as recovered (step 524). The operation then returns to the step in which a determination is made as to whether or not the processing queue is empty (step 511).
If there are more management rules and/or links in selected order (step 520:YES), then a process is initiated to locate another data element or elements using the management rules and/or links according to an ordering priority scheme, which may be, for example, sequential, physical address specific, virtual address specific, forward direction specific, backward direction specific, or direction specific in both a forward and backward direction (step 522). If the meta data or data elements are stored in a sequential table, a process is initiated to locate additional elements by incrementing to the next table entry, and another process is initiated to locate additional elements by decrementing to the previous table entry. If there are pointers to a previous or next element in a logical sequence, a process is initiated to locate additional elements by following the “next” pointer, and another process is initiated to locate additional elements using the “previous” pointer. If there are companion pointers to elements associated in a redundancy group, a process is initiated to locate additional elements using the companion pointer(s). If there is a request from a host server to access a particular addressed data unit while recovery is needed or in progress, the recovery may be optimized (directed) for recovery of the requested data unit. For example, if the currently located data element in the recovery process has both “previous” and “next” logical sequence pointers, the pointer that points toward the requested data element may be selected and a process initiated to locate additional elements using that pointer. If the currently located data elements are in a sequential table structure and the requested data element also fits within the table structure, a binary search algorithm may be used on the table structure to locate the requested data element.
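Building on the DataElement sketch above, the queue-driven portion of FIG. 5 could be condensed into the following Python sketch; the serial loop and the dictionary used as the rebuilt map are simplifying assumptions, and in practice each enqueued element may be handed to a parallel worker as described.

```python
# A condensed sketch of the FIG. 5 loop, building on the DataElement sketch above:
# pull an element from the processing queue, skip it if already recovered, record its
# virtual address in the rebuilt map, then enqueue every neighbor reachable through
# its links (steps 511-528). Serial here for clarity.
from collections import deque


def rebuild_map(elements: dict[str, DataElement], start: str) -> dict[int, str]:
    rebuilt: dict[int, str] = {}              # VA -> data element name (the recovered map)
    recovered: set[str] = set()
    queue: deque[str] = deque([start])        # step 505: seed the processing queue
    while queue:                              # step 511: loop until the queue is empty
        name = queue.popleft()
        if name in recovered or name not in elements:
            continue                          # step 517: already recovered (or not found)
        elem = elements[name]
        rebuilt[elem.virtual_address] = elem.name     # step 518: update the mapping table
        recovered.add(name)                           # step 524: mark as recovered
        for link in (elem.prev_logical, elem.next_logical,
                     elem.prev_version, elem.next_version, *elem.companions):
            if link and link not in recovered:
                queue.append(link)                    # steps 522/528: follow links, re-queue finds
    return rebuilt
```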
If a primary management rule for the location of data elements is a log structured file (LSF) system and the goal is to maximize the portion of the map recovered per unit time, then the LSF may be processed in a reverse manner. In this case, the anchor point may have a pointer to the last LSF log entry, and each log entry may contain pointers to the previous log entry and to the meta data and/or data elements updated with that log entry.
The map recovery may then proceed by processing all the data elements associated with the last LSF log entry, then locating the previous log entry and processing all the data elements associated with it, and following this process for each previous log entry. Since the probability of a valid mapping to data elements may decrease with the age of the log entry, this should maximize the recovery of valid map entries.
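A minimal sketch of this reverse LSF walk, under the assumption of a simple in-memory log-entry layout (LogEntry, recover_from_lsf), is shown below; newer entries are applied first so that stale mappings from older entries never overwrite them.

```python
# A minimal sketch of reverse LSF recovery: start at the anchor's pointer to the last
# log entry and walk the "previous" pointers backwards, keeping only the newest mapping
# seen for each virtual address. The entry layout and names are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LogEntry:
    prev: Optional["LogEntry"]                # pointer to the previous log entry
    updates: dict[int, int]                   # virtual address -> physical location


def recover_from_lsf(last_entry: LogEntry) -> dict[int, int]:
    recovered: dict[int, int] = {}
    entry: Optional[LogEntry] = last_entry
    while entry is not None:                  # newest entries are processed first
        for va, location in entry.updates.items():
            recovered.setdefault(va, location)    # older entries never overwrite newer ones
        entry = entry.prev
    return recovered


# Three log entries, oldest to newest; VA 10 was rewritten by the newest entry.
e1 = LogEntry(prev=None, updates={10: 100, 11: 101})
e2 = LogEntry(prev=e1, updates={12: 102})
e3 = LogEntry(prev=e2, updates={10: 200})
assert recover_from_lsf(e3) == {10: 200, 11: 101, 12: 102}
```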
A determination is made as to whether the process in step 522 is able to find a data element (step 526). If a data element is not found (step 526:NO), then the operation returns to step 520 to determine whether there are more management rules or links in selected order. If the process did find a data element (step 526:YES), then the process sends each found data element back through the process (can be parallel execution) (step 528) and returns to step 520 to determine whether there are more management rules or links in selected order.
As an example of applying the process described in FIGS. 4 and 5 to the virtual map structure described in FIG. 3, assume that D4 304 in FIG. 3 is a found data element. Also, assuming a full recovery (i.e., not a recovery directed at a specific requested data element), the process determines that the data element mapping has not been recovered. The process determines that there are two management rules associated with D4 304 in FIG. 3; one is a logical sequence and the other is a mirrored redundancy group. The mapping table is updated with the virtual address (VA) 314 for D4 304. A process (P1) is initiated to locate the next data element using the “next” logical sequence pointer. A process (P2) is initiated to locate the previous element using the “prev” logical sequence pointer. A process (P3) is initiated to locate the companion in the mirror using the “prev” redundancy group pointer. D4 304 is marked as having been recovered, which may be indicated in the mapping structure. Process P1 locates element D5 305. Process P1 determines that element D5 305 has not been recovered. Process P1 determines that element D5 305 has two associated management rules. One is a logical sequence and the other is a redundancy group (RAID). The mapping table is updated with the virtual address (VA) 315 for data element D5 305. A process (P4) is initiated to locate the next element in logical sequence. A process (P5) is initiated to locate the previous element in logical sequence. A process (P6) is initiated to locate the companion element in the redundancy group using the “next” redundancy group pointer. D5 305 is marked as having been recovered. Process P6 locates element D4 304, determines that element D4 304 has already been recovered, and the process ends.
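The fan-out of processes P1 through P6 can be approximated, again using the hypothetical DataElement sketch, by locating each wave of linked elements in parallel while keeping the bookkeeping serial; the thread pool and wave-by-wave structure are assumptions for illustration only.

```python
# One way the P1-P6 fan-out could be expressed, again using the hypothetical DataElement
# sketch: each wave of linked elements is located in parallel while the bookkeeping stays
# serial. The thread pool and wave structure are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor


def recover_parallel(elements: dict[str, DataElement], start: str) -> dict[int, str]:
    rebuilt: dict[int, str] = {}
    recovered: set[str] = set()
    frontier = [start]                                    # e.g. the found element D4
    with ThreadPoolExecutor(max_workers=4) as pool:
        while frontier:
            # Locate the current wave in parallel (like P1/P2/P3 launched for D4).
            located = list(pool.map(lambda n: elements.get(n), frontier))
            frontier = []
            for elem in located:
                if elem is None or elem.name in recovered:
                    continue                              # e.g. P6 finds D4 already recovered
                rebuilt[elem.virtual_address] = elem.name
                recovered.add(elem.name)
                frontier.extend(link for link in (elem.prev_logical, elem.next_logical,
                                                  elem.prev_version, elem.next_version,
                                                  *elem.companions)
                                if link and link not in recovered)
    return rebuilt
```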
It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media such as a floppy disc, a hard disk drive, a RAM, and CD-ROMs, and transmission-type media such as digital and analog communications links.
The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (20)

1. A method for recovery of compromised instances of virtual mapping systems, comprising the steps of:
identifying a plurality of physical data elements for which a mapping table has been compromised;
determining whether anchor points exist for the plurality of physical data elements;
processing anchor points to find the data elements in managed order if anchor points exist; and
launching processing for each data element found to recover the compromised mapping table.
2. The method of claim 1, wherein the anchor points are logical pointers, each logical pointer pointing to a data element used to store a data unit and metadata associated with the data unit, the metadata specifying at least one management rule associated with the data unit.
3. The method of claim 1, wherein the managed order comprises data elements that are sorted by their physical address.
4. The method of claim 1, wherein the managed order comprises data elements that are sorted by query frequency.
5. The method of claim 1, further comprising:
finding data elements in physical space if anchor points do not exist.
6. The method of claim 5, wherein the step of finding data elements in physical space comprises at least one of a sequential scan, random entry, and a binary search.
7. The method of claim 1, wherein the step of launching processing for each data element found comprises launching processing of a first element and a second element in parallel.
8. The method of claim 1, wherein the step of launching processing for each data element found comprises processing at least one forward pointer from a found anchor point and processing in parallel at least one backward pointer from an alternate anchor point.
9. The method of claim 1, wherein at least some of the data elements include a first pointer to a previous version of the data element and a second pointer to a next version of the data element.
10. The method of claim 1, wherein at least some of the data elements include a first pointer to a next data element of a logical sequence.
11. The method of claim 10, wherein the logical sequence is a track logical sequence, and wherein the next data element is a next track in the track logical sequence.
12. The method of claim 1, wherein at least one of the data elements has associated metadata that indicates at least two management rules associated with the at least one data element.
13. The method of claim 12, wherein the at least two management rules are a logical sequence and a mirrored redundancy group.
14. An apparatus for recovery of compromised instances of virtual mapping systems, comprising:
identification means for identifying a plurality of physical data elements for which a mapping table has been compromised;
determination means for determining whether anchor points exist for the plurality of physical data elements;
first processing means for processing anchor points to find the data elements in managed order if anchor points exist; and
second processing means for processing each data element found to recover the compromised mapping table.
15. The apparatus of claim 14, wherein the anchor points are logical pointers, each logical pointer pointing to a data element used to store a data unit and metadata associated with the data unit, the metadata specifying at least one management rule associated with the data unit.
16. The apparatus of claim 14, wherein the managed order comprises data elements that are sorted by their physical address.
17. The apparatus of claim 14, wherein the managed order comprises data elements that are sorted by query frequency.
18. The apparatus of claim 14, further comprising:
means for finding data elements in physical space if anchor points do not exist.
19. The apparatus of claim 18, wherein the means for finding data elements in physical space comprises at least one of a sequential scan, random entry, and a binary search.
20. A method of recovering a virtual mapping of data elements comprising steps of:
responsive to a determination that the virtual mapping is compromised, traversing the data elements to find links to other of the data elements; and
reestablishing mapping entries with virtual addresses stored in the data elements.
US09/752,253 2000-06-19 2000-12-30 Recovery of dynamic maps and data managed thereby Expired - Lifetime US6904508B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/752,253 US6904508B2 (en) 2000-06-19 2000-12-30 Recovery of dynamic maps and data managed thereby
US11/098,840 US7146485B1 (en) 2000-12-30 2005-04-05 Rebuilding of dynamic maps and data managed thereby

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US21228400P 2000-06-19 2000-06-19
US09/752,253 US6904508B2 (en) 2000-06-19 2000-12-30 Recovery of dynamic maps and data managed thereby

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/098,840 Continuation US7146485B1 (en) 2000-12-30 2005-04-05 Rebuilding of dynamic maps and data managed thereby

Publications (2)

Publication Number Publication Date
US20020087673A1 US20020087673A1 (en) 2002-07-04
US6904508B2 true US6904508B2 (en) 2005-06-07

Family

ID=34636008

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/752,253 Expired - Lifetime US6904508B2 (en) 2000-06-19 2000-12-30 Recovery of dynamic maps and data managed thereby

Country Status (1)

Country Link
US (1) US6904508B2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030078965A1 (en) * 2001-08-22 2003-04-24 Cocotis Thomas A. Output management system and method for enabling printing via wireless devices
US7493265B2 (en) * 2001-12-11 2009-02-17 Sas Institute Inc. Integrated biomedical information portal system and method
US8356019B1 (en) * 2002-12-11 2013-01-15 Altera Corporation Method and apparatus for utilizing patterns in data to reduce file size
US8078687B1 (en) * 2006-09-06 2011-12-13 Marvell International Ltd. System and method for data management
ATE519158T1 (en) * 2007-01-04 2011-08-15 Sandisk Il Ltd RECOVERING A FAILED DATA TRANSFER BETWEEN A HOST AND A DATA STORAGE DEVICE
KR101555022B1 (en) * 2008-12-02 2015-09-23 삼성전자주식회사 Memory device memory system having ita and mapping information recovering method thereof
US9703655B1 (en) * 2014-11-25 2017-07-11 Scale Computing Inc. Recovery after data loss in a reliable distributed computing system
US9785517B2 (en) * 2015-06-10 2017-10-10 International Business Machines Corporation Rebuilding damaged areas of a volume table using a volume data set
US10481818B2 (en) * 2018-02-28 2019-11-19 Micron Technology, Inc. Meta data processing during startup of storage devices
US11093317B1 (en) * 2020-01-27 2021-08-17 EMC IP Holding Company LLC Managing uncorrectable user data
US11137945B1 (en) * 2020-04-09 2021-10-05 EMC IP Holding Company LLC Recovery of last allocated logical block address

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5008786A (en) * 1985-09-11 1991-04-16 Texas Instruments Incorporated Recoverable virtual memory having persistant objects
US6038639A (en) * 1997-09-09 2000-03-14 Storage Technology Corporation Data file storage management system for snapshot copy operations
US6105115A (en) * 1997-12-31 2000-08-15 Intel Corporation Method and apparatus for managing a memory array
US6549519B1 (en) * 1998-01-23 2003-04-15 Alcatel Internetworking (Pe), Inc. Network switching device with pipelined search engines
US6529995B1 (en) * 1999-06-18 2003-03-04 Storage Technology Corporation Method and apparatus for maintaining and restoring mapping table entries and data in a raid system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030214326A1 (en) * 2002-02-11 2003-11-20 Craimer Stephen G. Distributed dynamically optimizable processing communications and storage system
US20040165304A1 (en) * 2003-02-25 2004-08-26 International Business Machines Corporation Method, system, and program for maintaining a directory for data written to a storage medium
US7143232B2 (en) * 2003-02-25 2006-11-28 International Business Machines Corporation Method, system, and program for maintaining a directory for data written to a storage medium
US11587673B2 (en) 2012-08-28 2023-02-21 Delos Living Llc Systems, methods and articles for enhancing wellness associated with habitable environments
US11763401B2 (en) 2014-02-28 2023-09-19 Delos Living Llc Systems, methods and articles for enhancing wellness associated with habitable environments
US11338107B2 (en) 2016-08-24 2022-05-24 Delos Living Llc Systems, methods and articles for enhancing wellness associated with habitable environments
US11668481B2 (en) 2017-08-30 2023-06-06 Delos Living Llc Systems, methods and articles for assessing and/or improving health and well-being
US11649977B2 (en) 2018-09-14 2023-05-16 Delos Living Llc Systems and methods for air remediation
US11844163B2 (en) 2019-02-26 2023-12-12 Delos Living Llc Method and apparatus for lighting in an office environment
US11898898B2 (en) 2019-03-25 2024-02-13 Delos Living Llc Systems and methods for acoustic monitoring

Also Published As

Publication number Publication date
US20020087673A1 (en) 2002-07-04

Similar Documents

Publication Publication Date Title
US6973556B2 (en) Data element including metadata that includes data management information for managing the data element
US6904508B2 (en) Recovery of dynamic maps and data managed thereby
US10936503B2 (en) Device access point mobility in a scale out storage system
US7441096B2 (en) Hierarchical storage management system
US7716183B2 (en) Snapshot preserved data cloning
US8190850B1 (en) Virtual block mapping for relocating compressed and/or encrypted file data block blocks
US9460102B1 (en) Managing data deduplication in storage systems based on I/O activities
US8478729B2 (en) System and method for controlling the storage of redundant electronic files to increase storage reliability and space efficiency
US8200631B2 (en) Snapshot reset method and apparatus
US7930559B1 (en) Decoupled data stream and access structures
US7865677B1 (en) Enhancing access to data storage
US6532527B2 (en) Using current recovery mechanisms to implement dynamic mapping operations
KR100439675B1 (en) An efficient snapshot technique for shated large storage
US8615641B2 (en) System and method for differential backup
US8510526B2 (en) Storage apparatus and snapshot control method of the same
US8046534B2 (en) Managing snapshots in storage systems
EP1653359A2 (en) Data duplication operations in storage networks
US10809932B1 (en) Managing data relocations in storage systems
US20140040661A1 (en) Data recovery for failed memory device of memory device array
US20060106893A1 (en) Incremental backup operations in storage networks
US20070260592A1 (en) Hierarchical storage management of metadata
US20030061456A1 (en) Apparatus and methods for copying, backing up and restoring logical objects in a computer storage system by transferring blocks out of order or in parallel
US20150301758A1 (en) Metadata for data storage array
EP1653360A2 (en) Recovery operations in storage networks
JP2003518659A (en) Apparatus and method for operating a computer storage system

Legal Events

Date Code Title Description
AS Assignment

Owner name: STORAGE TECHNOLOGY CORPORATION, COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SELKIRK, STEPHEN S.;MILLIGAN, CHARLES A.;MERRITT, PERRY;AND OTHERS;REEL/FRAME:011922/0749;SIGNING DATES FROM 20001229 TO 20010526

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: SUN MICROSYSTEMS, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:STORAGE TECHNOLOGY CORPORATION;REEL/FRAME:037695/0010

Effective date: 20061222

Owner name: ORACLE AMERICA, INC., CALIFORNIA

Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:ORACLE USA, INC.;SUN MICROSYSTEMS, INC.;ORACLE AMERICA, INC.;REEL/FRAME:037695/0299

Effective date: 20100212

FPAY Fee payment

Year of fee payment: 12