US20100241807A1 - Virtualized data storage system cache management - Google Patents

Virtualized data storage system cache management

Info

Publication number
US20100241807A1
US20100241807A1 (application US12/730,192)
Authority
US
United States
Prior art keywords
storage
storage block
virtual
block
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/730,192
Inventor
David Tze-Si Wu
Steven McCanne
Michael J. Demmer
Nitin Gupta
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Riverbed Technology LLC
Original Assignee
Riverbed Technology LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Riverbed Technology LLC filed Critical Riverbed Technology LLC
Priority to US12/730,192
Assigned to RIVERBED TECHNOLOGY, INC. reassignment RIVERBED TECHNOLOGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DEMMER, MICHAEL, GUPTA, NITIN, MCCANNE, STEVEN, WU, DAVID
Publication of US20100241807A1
Assigned to MORGAN STANLEY & CO. LLC reassignment MORGAN STANLEY & CO. LLC SECURITY AGREEMENT Assignors: OPNET TECHNOLOGIES, INC., RIVERBED TECHNOLOGY, INC.
Assigned to RIVERBED TECHNOLOGY, INC. reassignment RIVERBED TECHNOLOGY, INC. RELEASE OF PATENT SECURITY INTEREST Assignors: MORGAN STANLEY & CO. LLC, AS COLLATERAL AGENT
Assigned to JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT reassignment JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT PATENT SECURITY AGREEMENT Assignors: RIVERBED TECHNOLOGY, INC.
Assigned to RIVERBED TECHNOLOGY, INC. reassignment RIVERBED TECHNOLOGY, INC. RELEASE OF SECURITY INTEREST IN PATENTS Assignors: BARCLAYS BANK PLC
Assigned to RIVERBED TECHNOLOGY, INC. reassignment RIVERBED TECHNOLOGY, INC. CORRECTIVE ASSIGNMENT TO CORRECT THE CONVEYING PARTY NAME PREVIOUSLY RECORDED ON REEL 035521 FRAME 0069. ASSIGNOR(S) HEREBY CONFIRMS THE RELEASE OF SECURITY INTEREST IN PATENTS. Assignors: JPMORGAN CHASE BANK, N.A.
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/18 File system types
    • G06F16/188 Virtual file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/172 Caching, prefetching or hoarding of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/60 Details of cache memory
    • G06F2212/6024 History based prefetching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/0643 Management of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0653 Monitoring storage devices or systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • the present invention relates generally to data storage systems, and systems and methods to improve storage efficiency, compactness, performance, reliability, and compatibility.
  • Enterprises often span geographical locations, including multiple corporate sites, branch offices, and data centers, all of which are generally connected over a wide-area network (WAN).
  • while servers are often run in a data center and accessed over the network, there are also cases in which servers need to be run in distributed locations at the “edges” of the network.
  • These network edge locations are generally referred to as branch locations in this application, regardless of the purposes of these locations.
  • the need to operate servers at branch locations may arise for a variety of reasons, including efficiently handling large amounts of newly written data and ensuring service availability during WAN outages.
  • the branch data storage requires maintenance and administration, including proper sizing for future growth, data snapshots, archives, and backups, and replacements and/or upgrades of storage hardware and software when the storage hardware or software fails or branch data storage requirements change.
  • branch data storage is more expensive and inefficient than consolidated data storage at a centralized data center.
  • Organizations often require on-site personnel at each branch location to configure and upgrade each branch's data storage, and to manage data backups and data retention. Additionally, organizations often purchase excess storage capacity for each branch location to allow for upgrades and growing data storage requirements. Because branch locations are serviced infrequently, due to their numbers and geographic dispersion, organizations often deploy enough data storage at each branch location to allow for months or years of storage growth. However, this excess storage capacity often sits unused for months or years until it is needed, unnecessarily driving up costs.
  • FIG. 1 illustrates a virtualized data storage system architecture according to an embodiment of the invention
  • FIGS. 2A-2B illustrate methods of prefetching storage blocks to improve virtualized data storage system performance according to embodiments of the invention
  • FIG. 3 illustrates a method of processing storage block write requests to improve virtualized data storage system performance according to an embodiment of the invention
  • FIGS. 4A-4C illustrate write order preservation policies according to embodiments of the invention
  • FIG. 5 illustrates an arrangement for recursively applying transformations and optimizations to improve virtualized data storage system performance according to an embodiment of the invention
  • FIG. 6 illustrates a method of creating a data storage snapshot in a virtualized data storage system according to an embodiment of the invention.
  • FIG. 7 illustrates an example computer system capable of implementing a virtualized data storage system device according to an embodiment of the invention.
  • An embodiment of the invention uses virtual storage arrays to consolidate branch location-specific data storage at data centers connected with branch locations via wide area networks.
  • the virtual storage array appears to a storage client as local branch data storage; however, embodiments of the invention actually store the virtual storage array data at a data center connected with the branch location via a wide-area network.
  • a branch storage client accesses the virtual storage array using storage block based protocols.
  • Embodiments of the invention overcome the bandwidth and latency limitations of the wide area network between branch locations and the data center by predicting storage blocks likely to be requested in the future by the branch storage client and prefetching and caching these predicted storage blocks at the branch location. When this prediction is successful, storage block requests from the branch storage client may be fulfilled in whole or in part from the branch location's storage block cache. As a result, the latency and bandwidth restrictions of the wide-area network are hidden from the storage client.
  • the branch location storage client uses storage block-based protocols to specify reads, writes, modifications, and/or deletions of storage blocks.
  • servers and higher-level applications typically access data in terms of files in a structured file system, relational database, or other high-level data structure.
  • Each entity in the high-level data structure such as a file or directory, or database table, node, or row, may be spread out over multiple storage blocks at various non-contiguous locations in the storage device.
  • prefetching storage blocks based solely on their locations in the storage device is unlikely to be effective in hiding wide-area network latency and bandwidth limits from storage clients.
  • An embodiment of the invention leverages an understanding of the semantics and structure of the high-level data structures associated with the storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. To do this, an embodiment of the invention determines the association between requested storage blocks and the corresponding high-level data structure entities, such as files, directories, or database elements. Once this embodiment has identified one or more of the high-level data structure entities associated with a requested storage block, this embodiment of the invention identifies additional portions of the same or other high-level data structure entities that are likely to be accessed by the storage client. This embodiment of the invention then identifies the additional storage blocks corresponding to these additional high-level data structure entities. The additional storage blocks are then prefetched and cached at the branch location.
  • Another embodiment of the invention analyzes a selected high-level data structure entity to identify portions of the same or other high-level data structure entities that are likely to be accessed by the storage client. This embodiment of the invention then identifies the additional storage blocks corresponding to these additional high-level data structure entities. The additional storage blocks are then prefetched and cached at the branch location. This embodiment of the invention may also identify additional high-level data structure entities to analyze based on its analysis of previously selected high-level data structure entities.
  • embodiments of the invention may identify corresponding high-level data structure entities directly from requests for storage blocks. Additionally, embodiments of the invention may successively apply any number of successive transformations to storage block requests to identify associated high-level data structure entities. These successive transformations may include transformations to intermediate level data structure entities. Intermediate and high-level data structure entities may include virtual machine data structures, such as virtual machine file system files, virtual machine file system storage blocks, virtual machine storage structures, and virtual machine disk images.
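  • By way of illustration only, the following Python sketch (not part of the patent; all names and layer choices are assumptions) models these successive transformations as a chain of resolver functions, each mapping an address in one layer to the containing entity and offset in the next layer up:

    from typing import Callable, List, Tuple

    # Each resolver maps (entity, offset) in one layer to the containing
    # entity and offset in the next layer up (assumed interface).
    Resolver = Callable[[str, int], Tuple[str, int]]

    def resolve_entity(block_addr: int, resolvers: List[Resolver]) -> Tuple[str, int]:
        """Apply each transformation in turn, starting from a raw block address."""
        entity, offset = "lun0", block_addr * 4096      # assume 4 KB blocks
        for resolve in resolvers:
            entity, offset = resolve(entity, offset)
        return entity, offset

    # Example chain: physical LUN -> VM disk image file -> guest file system file.
    def lun_to_vmdk(entity, offset):
        return ("vm_images/branch.vmdk", offset - 1048576)   # skip toy VMFS metadata

    def vmdk_to_guest_file(entity, offset):
        return ("/var/mail/inbox", offset % (1 << 20))       # toy guest FS lookup

    print(resolve_entity(512, [lun_to_vmdk, vmdk_to_guest_file]))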
  • FIG. 1 illustrates a virtualized data storage system architecture 100 according to an embodiment of the invention.
  • Virtualized data storage system architecture 100 includes a data center 101 connected with at least one branch network location 102 via a wide-area network (WAN) 130 .
  • Each branch location 102 includes at least one storage client 139 , such as a file server, application server, database server, or storage area network (SAN) interface.
  • a storage client 139 may be connected with a local-area network (LAN) 151 , including routers, switches, and other wired or wireless network devices, for connecting with server and client systems and other devices 152 .
  • typical branch location installations also required a local physical data storage device for the storage client.
  • a prior typical branch location LAN installation may include a file server for storing data for the client systems and application servers, such as database servers and e-mail servers.
  • this branch location's data storage is located at the branch location site and connected directly with the branch location LAN or SAN.
  • the branch location physical data storage device previously could not be located at the data center 101 , because the intervening WAN 130 is too slow and has high latency, making storage accesses unacceptably slow for storage clients.
  • An embodiment of the invention allows for storage consolidation of branch location-specific data storage at data centers connected with branch locations via wide area networks.
  • This embodiment of the invention overcomes the bandwidth and latency limitations of the wide area network between branch locations and the data center.
  • an embodiment of the invention includes virtual storage arrays.
  • the branch location 102 includes a virtual storage array interface device 135 .
  • the virtual storage array interface device 135 presents a virtual storage array 137 to branch location users, such as the branch location storage client 139 .
  • a virtual storage array 137 can be used for the same purposes as a local storage area network or other data storage device.
  • a virtual storage array 137 may be used in conjunction with a file server for general-purpose data storage, in conjunction with a database server for database application storage, or in conjunction with an e-mail server for e-mail storage.
  • the virtual storage array 137 stores its data at a data center 101 connected with the branch location 102 via a wide area network 130 . Multiple separate virtual storage arrays, from different branch locations, may store their data in the same data center and, as described below, on the same physical storage devices.
  • An organization can manage and control access to their data storage at a central data center, rather than at large numbers of separate branch locations. This increases the reliability and performance of an organization's data storage. This also reduces the personnel required at branch location offices to provision, maintain, and backup data storage. It also enables organizations to implement more effective backup systems, data snapshots, and disaster recovery for their data storage. Furthermore, organizations can plan for storage growth more efficiently, by consolidating their storage expansion for multiple branch locations and reducing the amount of excess unused storage. Additionally, an organization can apply optimizations such as compression or data deduplication over the data from multiple branch locations stored at the data center, reducing the total amount of storage required by the organization.
  • virtual storage array interface 135 may be a stand-alone computer system or network appliance or built into other computer systems or network equipment as hardware and/or software.
  • a branch location virtual storage array interface 135 may be implemented as a software application or other executable code running on a client system or application server.
  • a branch location virtual storage array interface 135 includes one or more storage array network interfaces and supports one or more storage block network protocols to connect with one or more storage clients 139 via a local storage area network (SAN) 138 .
  • Examples of storage array network interfaces suitable for use with embodiments of the invention include Ethernet, Fibre Channel, IP, and InfiniBand interfaces.
  • Examples of storage array network protocols include ATA, Fibre Channel Protocol, and SCSI.
  • Various combinations of storage array network interfaces and protocols are suitable for use with embodiments of the invention, including iSCSI, HyperSCSI, Fibre Channel over Ethernet, and iFCP.
  • an embodiment of the branch location virtual storage array interface can use the branch location LAN's physical connections and networking equipment for communicating with client systems and application services.
  • separate connections and networking equipment such as Fibre Channel networking equipment, is used to connect the branch location virtual storage array interface with client systems and/or application services.
  • branch location virtual storage array interface 135 allows storage clients to access data in the virtual storage array via storage block protocols, unlike file servers that utilize file-based protocols.
  • the virtual storage array 137 may be accessed by any type of storage client in the same manner as a local physical storage device or storage array.
  • applications executed by the storage client 139 or other client and server systems 152 may access the virtual storage array in the same manner as a local physical storage device or storage array.
  • the storage client 139 is included in a file server that also provides a network file interface to the virtual storage array 137 to client systems and other application servers.
  • the branch location virtual storage array interface 135 is integrated as hardware and/or software with an application server, such as a file server, database server, or e-mail server.
  • the branch location virtual storage array interface 135 can include application server interfaces, such as a network file interface, for interfacing with other application servers and/or client systems.
  • a branch location virtual storage array interface 135 presents a virtual storage array 137 to one or more storage clients 139 .
  • the virtual storage array 137 appears to be a local storage array, having its physical data storage at the branch location 102 .
  • the branch location virtual storage array interface 135 actually stores and retrieves data from physical data storage devices located at the data center 101 . Because virtual storage array data accesses must travel via the WAN 130 between the data center 101 LAN and a branch location 102 LAN, the virtual storage array 137 is subject to the latency and bandwidth restrictions of the WAN 130 .
  • the branch location virtual storage array interface 135 includes a virtual storage array cache 145 , which is used to ameliorate the effects of the WAN 130 on virtual storage array 137 performance.
  • the virtual storage array cache 145 includes a storage block read cache 147 and a storage block write cache 149 .
  • the storage block read cache 147 is adapted to store local copies of storage blocks requested by storage client 139 .
  • the virtualized data storage system architecture 100 may attempt to predict which storage blocks will be requested by the storage client 139 in the future and preemptively send these predicted storage blocks from the data center 101 to the branch location 102 via WAN 130 for storage in the storage block read cache 147 . If this prediction is partially or wholly correct, then when the storage client 139 eventually requests one or more of these prefetched storage blocks from the virtual storage array 137 , an embodiment of the virtual storage array interface 135 can fulfill this request using local copies of the requested storage blocks from the storage block read cache 147 .
  • the latency and bandwidth restrictions of WAN 130 are hidden from the storage client 139 .
  • the virtual storage array 137 appears to perform storage block read operations as if the physical data storage were located at the branch location 102 .
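  • As an illustrative sketch of this read path (assumed names and a stub WAN fetch, not the patent's implementation), the branch interface can be modeled as a cache lookup that falls back to a WAN retrieval on a miss:

    class BlockReadCache:
        """Toy branch-side storage block read cache (addr -> bytes)."""
        def __init__(self):
            self.blocks = {}

        def get(self, addr):
            return self.blocks.get(addr)      # local copy, or None on a miss

        def put(self, addr, data):
            self.blocks[addr] = data

    def read_block(addr, cache, fetch_from_data_center):
        """Serve from the branch cache when possible; otherwise cross the WAN."""
        data = cache.get(addr)
        if data is not None:
            return data                       # hit: WAN latency is hidden
        data = fetch_from_data_center(addr)   # miss: pay the WAN round trip
        cache.put(addr, data)                 # keep a copy for future reads
        return data

    cache = BlockReadCache()
    read_block(42, cache, lambda addr: b"\0" * 4096)   # miss: fetched, then cached
    read_block(42, cache, lambda addr: b"\0" * 4096)   # hit: served locally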
  • the storage block write cache 149 is adapted to store local copies of new or updated storage blocks written by the storage client 139 .
  • the storage block write cache 149 temporarily stores new or updated storage blocks written by the storage client 139 until these storage blocks are copied back to physical data storage at the data center 101 via WAN 130 .
  • the bandwidth and latency of the WAN 130 is hidden from the storage client 139 .
  • the virtual storage array 137 appears to perform storage block write operations as if the physical data storage were located at the branch location 102 .
  • the virtual storage array cache 145 includes non-volatile and/or redundant data storage, so that data in new or updated storage blocks are protected from system failures until they can be transferred over the WAN 130 and stored in physical data storage at the data center 101 .
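  • A minimal sketch of this write-back behavior (the queue, thread, and names are illustrative assumptions) acknowledges writes once stored locally and drains them to the data center in the background:

    import queue
    import threading

    class BlockWriteCache:
        """Toy write-back cache: ack locally, copy to the data center later."""
        def __init__(self, send_to_data_center):
            self.pending = queue.Queue()   # new/updated blocks awaiting WAN copy
            self.send = send_to_data_center
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, addr, data):
            # A real device would store this durably (non-volatile/redundant)
            # so the write survives failures until it reaches the data center.
            self.pending.put((addr, data))
            return "ack"                   # client sees local-storage latency

        def _drain(self):
            while True:
                addr, data = self.pending.get()
                self.send(addr, data)      # copy back over the WAN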
  • the branch location virtual storage array interface 135 operates in conjunction with a data center virtual storage array interface 107 .
  • the data center virtual storage array interface 107 is located on the data center 101 LAN and may communicate with one or more branch location virtual storage array interfaces via the data center 101 LAN, the WAN 130 , and their respective branch location LANs.
  • Data communications between virtual storage array interfaces can be in any form and/or protocol used for carrying data over wired and wireless data communications networks, including TCP/IP.
  • data center virtual storage array interface 107 is connected with one or more physical data storage devices 103 to store and retrieve data for one or more virtual storage arrays, such as virtual storage array 137 .
  • a data center virtual storage array interface 107 accesses a physical storage array network interface, which in turn accesses physical data storage array 103 a on a storage array network (SAN) 105 .
  • the data center virtual storage array interface 107 includes one or more storage array network interfaces and supports one or more storage array network protocols for directly connecting with a physical storage array network 105 and its physical data storage array 103 a. Examples of storage array network interfaces suitable for use with embodiments of the invention include Ethernet, Fibre Channel, IP, and InfiniBand interfaces.
  • Examples of storage array network protocols include ATA, Fibre Channel Protocol, and SCSI.
  • Various combinations of storage array network interfaces and protocols are suitable for use with embodiments of the invention, including iSCSI, HyperSCSI, Fibre Channel over Ethernet, and iFCP.
  • Embodiments of the data center virtual storage array interface 107 may connect with the physical storage array interface and/or directly with the physical storage array network 105 using the Ethernet network of the data center LAN and/or separate data communications connections, such as a Fibre Channel network.
  • data center virtual storage array interface 107 may store and retrieve data for one or more virtual storage arrays, such as virtual storage array 137 , using a network storage device, such as file server 103 b.
  • File server 103 b may be connected with the data center virtual storage array interface 107 via local-area network (LAN) 115 , such as an Ethernet network, and communicate using a network file system protocol, such as NFS, SMB, or CIFS.
  • Embodiments of the data center virtual storage array interface 107 may utilize a number of different arrangements to store and retrieve virtual storage array data with physical data storage array 103 a or file server 103 b.
  • the virtual data storage array 137 presents a virtualized logical storage unit, such as an iSCSI or Fibre Channel logical unit number (LUN), to storage client 139 .
  • This virtual logical storage unit is mapped to a corresponding logical storage unit 104 a on physical data storage array 103 a.
  • Data center virtual storage array interface 107 stores and retrieves data for this virtualized logical storage unit using a non-virtual logical storage unit 104 a provided by physical data storage array 103 a.
  • the data center virtual data storage array interface 107 supports multiple branch locations and maps each storage client's virtualized logical storage unit to a different non-virtual logical storage unit provided by physical data storage array 103 a.
  • virtual data storage array interface 107 maps a virtualized logical storage unit to a virtual machine file system 104 b, which is provided by the physical data storage array 103 a.
  • Virtual machine file system 104 b is adapted to store one or more virtual machine disk images 113 , each representing the configuration and optionally state and data of a virtual machine.
  • Each of the virtual machine disk images 113 such as virtual machine disk images 113 a and 113 b, includes one or more virtual machine file systems to store applications and data of a virtual machine.
  • to a virtual machine, its virtual machine disk image 113 within the virtual machine file system 104 b appears as a logical storage unit.
  • the complete virtual machine file system 104 b appears to the data center virtual storage array interface 107 as a single logical storage unit.
  • virtual data storage array interface 107 maps a virtualized logical storage unit to a logical storage unit or file system 104 c provided by the file server 103 b.
  • storage clients can interact with virtual storage arrays in the same manner that they would interact with physical storage arrays. This includes issuing storage commands to the branch location virtual storage interface using storage array network protocols such as iSCSI or Fibre Channel protocol.
  • Most storage array network protocols organize data according to storage blocks, each of which has a unique storage address or location.
  • a storage block's unique storage address may include a logical unit number (using the SCSI protocol) or other representation of a logical volume.
  • the virtual storage array provided by a branch location virtual storage interface allows a storage client to access storage blocks by their unique storage address within the virtual storage array.
  • an embodiment of the invention allows arbitrary mappings between the unique storage addresses of storage blocks in the virtual storage array and the corresponding unique storage addresses in one or more physical data storage devices 103 .
  • the mapping between virtual and physical storage addresses may be performed by a branch location virtual storage array interface 135 and/or by the data center virtual storage array interface 107 .
  • storage blocks in the virtual storage array may be of a different size and/or structure than the corresponding storage blocks in a physical storage array or data storage device. For example, if data compression is applied to the storage data, then the physical storage array data blocks may be smaller than the storage blocks of the virtual storage array to take advantage of data storage savings.
  • the branch location and/or data center virtual storage array interfaces map one or more virtual storage array storage blocks to one or more physical storage array storage blocks.
  • a virtual storage array storage block can correspond with a fraction of a physical storage array storage block, a single physical storage array storage block, or multiple physical storage array storage blocks, as required by the configuration of the virtual and physical storage arrays.
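  • The following arithmetic sketch (block sizes are assumptions chosen for illustration) shows how one virtual storage array block can map onto several smaller physical storage blocks:

    VIRT_BLOCK = 4096   # bytes per virtual storage array block (assumed)
    PHYS_BLOCK = 1024   # bytes per physical block, e.g. smaller after compression

    def virt_to_phys(virt_addr: int) -> range:
        """Return the range of physical blocks backing one virtual block."""
        byte_offset = virt_addr * VIRT_BLOCK
        first = byte_offset // PHYS_BLOCK
        count = VIRT_BLOCK // PHYS_BLOCK    # one virtual block spans 4 physical
        return range(first, first + count)

    print(list(virt_to_phys(10)))           # physical blocks 40..43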
  • the branch location and data center virtual storage array interfaces may reorder or regroup storage operations from storage clients to improve efficiency of data optimizations such as data compression. For example, if two storage clients are simultaneously accessing the same virtual storage array, then these storage operations will be intermixed when received by the branch location virtual storage array interface.
  • An embodiment of the branch location and/or data center virtual storage array interface can reorder or regroup these storage operations according to storage client, type of storage operation, data or application type, or any other attribute or criteria to improve virtual storage array performance and efficiency.
  • a virtual storage array interface can group storage operations by storage client and apply data compression to each storage client's operations separately, which is likely to provide greater data compression than compressing all storage operations together.
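  • A toy sketch of this regrouping (zlib stands in for whatever codec an implementation might use; the tuple format is assumed) groups intermixed operations by storage client before compressing each client's stream separately:

    import zlib
    from collections import defaultdict

    def compress_by_client(operations):
        """operations: iterable of (client_id, payload_bytes) in arrival order."""
        groups = defaultdict(list)
        for client_id, payload in operations:
            groups[client_id].append(payload)          # regroup per client
        return {client: zlib.compress(b"".join(payloads))
                for client, payloads in groups.items()}

    ops = [("clientA", b"AAAA" * 100), ("clientB", b"BBBB" * 100),
           ("clientA", b"AAAA" * 100)]
    # Similar data compresses better when grouped per client.
    print({c: len(blob) for c, blob in compress_by_client(ops).items()})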
  • an embodiment of the virtualized data storage system architecture 100 attempts to predict which storage blocks will be requested by a storage client in the near future, prefetches these storage blocks from the physical data storage devices 103 , and forwards these to the branch location 102 for storage in the storage block read cache 147 .
  • when this prediction is successful, storage block requests may be fulfilled in whole or in part from the block read cache 147 , and the latency and bandwidth restrictions of the WAN 130 are hidden from the storage client.
  • An embodiment of the virtualized data storage system architecture 100 includes a storage block access optimizer 120 to select storage blocks for prefetching to storage clients.
  • the storage block access optimizer 120 is located at the data center 101 and is connected or incorporated into the data center virtual data storage array interface 107 .
  • the storage block access optimizer 120 may be located at the branch location 102 and be connected with or incorporated into the branch location virtual data storage interface 135 .
  • storage devices such as physical data storage arrays and the virtual data storage array are accessed using storage block-based protocols.
  • a storage block is a sequence of bytes or bits of data.
  • Data storage devices represent their data storage as a set of storage blocks that may be used to store and retrieve data.
  • the set of storage blocks is an abstraction of the underlying hardware of a physical or virtual data storage device.
  • Storage clients use storage block-based protocols to specify reads, writes, modifications, and/or deletions of storage blocks.
  • servers and higher-level applications typically access data in terms of files in a structured file system, relational database, or other high-level data structure.
  • Each entity in the high-level data structure such as a file or directory, or database table, node, or row, may be spread out over multiple storage blocks at various non-contiguous locations in the storage device.
  • prefetching storage blocks based solely on their location in the storage device is unlikely to be effective in hiding WAN latency and bandwidth limits from storage clients.
  • the storage block access optimizer 120 leverages an understanding of the semantics and structure of the high-level data structures associated with the storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. To do this, the storage block access optimizer 120 must be able to determine the association between storage blocks and their corresponding high-level data structure entities.
  • An embodiment of the storage block access optimizer 120 uses an inferred storage structure database (ISSD) 123 to match storage blocks with their associated entity in the high-level data structure. For example, given a specific storage block location, the storage block access optimizer 120 may use the ISSD 123 to identify the file or directory in a file system, or the database table, record, or node, that is using this storage block to store some or all of its data.
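  • As a toy illustration (the dict layout is an assumption, not the patent's data structure), an ISSD can be thought of as a map from storage block address to the owning entity and the offset within it:

    # Block address -> (owning high-level entity, offset within that entity).
    issd = {
        1000: ("inode:52 /docs/report.doc", 0),
        1001: ("inode:52 /docs/report.doc", 4096),
        2000: ("db:orders/btree-node-17", 0),
    }

    def entity_for_block(addr):
        """Identify the file, directory, or database entity using this block."""
        return issd.get(addr)

    print(entity_for_block(1000))   # ('inode:52 /docs/report.doc', 0)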
  • the storage block access optimizer 120 may employ a number of different techniques to predict which additional storage blocks are likely to be requested by a storage client. For example, storage block access optimizer 120 may observe requests from a storage client 139 for storage blocks from the virtual data storage array 137 , identify the high-level data structure entities associated with the requested storage blocks, and select additional storage blocks associated with these or other high-level data structure entities for prefetching. These types of storage block prefetching techniques are referred to as reactive prefetching.
  • the storage block access optimizer 120 may analyze entities in the high-level data structures, such as files, directories, or database entities, to identify specific entities or portions thereof that are likely to be requested by the storage client 139 . Using the ISSD 123 , the storage block access optimizer 120 identifies storage blocks corresponding with these identified entities or portions thereof and prefetches these storage blocks for storage in the block read cache 147 at the branch location 102 . These types of storage block prefetching techniques are referred to as policy-based prefetching. Further examples of reactive and policy-based prefetching are discussed below. Embodiments of the storage block access optimizer 120 may utilize any combination of reactive and policy-based prefetching techniques to select storage blocks to be prefetched and stored in the block read cache 147 at the branch location 102 .
  • the branch location 102 and data center location 101 may optionally include network optimizers 125 for improving the performance of data communications over the WAN between branches and/or the data center.
  • Network optimizers 125 can improve actual and perceived WAN network performance using techniques including compressing data communications; anticipating and prefetching data; caching frequently accessed data; shaping and restricting network traffic; and optimizing usage of network protocols.
  • network optimizers 125 may be used in conjunction with virtual data storage array interfaces 107 and 135 to further improve virtual storage array 137 performance for storage blocks accessed via the WAN 130 .
  • network optimizers 125 may ignore or pass-through virtual storage array 137 data traffic, relying on the virtual storage array interfaces 107 and 135 at the data center 101 and branch location 102 to optimize WAN performance.
  • a data center virtual storage array interface 107 may be connected directly between WAN 130 and a physical data storage array 103 , eliminating the need for a data center LAN.
  • a branch location virtual storage array interface 135 implemented for example in the form of a software application executed by a storage client computer system, may be connected directly with WAN 130 , such as the internet, eliminating the need for a branch location LAN.
  • the data center and branch location virtual data storage array interfaces 107 and 135 may be combined into a single unit, which may be located at the branch location 102 .
  • FIGS. 2A-2B illustrate methods of prefetching storage blocks to improve virtualized data storage system performance according to embodiments of the invention.
  • FIG. 2A illustrates a method 200 of performing reactive prefetching of storage blocks according to an embodiment of the invention.
  • Step 205 receives a storage block read request from a storage client at the branch location.
  • the storage block read request may be received by a branch location virtual data storage array interface.
  • decision block 210 determines if the requested storage block has been previously retrieved and stored in the storage block read cache at the branch location. If so, step 220 retrieves the requested storage block from the storage block read cache and returns it to the requesting storage client. In an embodiment, if the system includes a data center virtual storage array interface, then step 220 also forwards the storage block read request back to the data center virtual storage array interface for use in identifying additional storage blocks likely to be requested by the storage client in the future.
  • step 215 retrieves the requested storage block via a WAN connection from the virtual storage array data located in a physical data storage at the data center.
  • a branch location virtual storage array interface forwards the storage block read request to the data center virtual storage array interface via the WAN connection.
  • the data center virtual storage array interface then retrieves the requested storage block from the physical storage array and returns it to the branch location virtual storage array interface, which in turn provides this requested storage block to the storage client.
  • a copy of the retrieved storage block may be stored in the storage block read cache for future accesses.
  • steps 225 to 250 prefetch additional storage blocks likely to be requested by the storage client in the near future.
  • Step 225 identifies the high-level data structure entity associated with the requested storage block.
  • Typical block storage protocols, such as iSCSI and FCP, specify block read requests using a storage block address or identifier.
  • these storage block read requests do not include any identification of the high-level data structure, such as a file, directory, or database entity, that is associated with this storage block. Therefore, an embodiment of step 225 accesses an ISSD to identify the high-level data structure associated with the requested storage block.
  • step 225 provides the ISSD with the storage block address or identifier.
  • the ISSD returns an identifier of the high-level data structure entity associated with the requested storage block.
  • the identifier of the high-level data structure entity may be an inode or similar file system identifier or a database storage structure identifier, such as a database table or B-tree node.
  • the ISSD also includes a location within the high-level data structure entity corresponding with the requested storage block.
  • step 225 may provide a storage block identifier to the ISSD and in response receive the inode or other file system identifier for a file stored in this storage block. Additionally, the ISSD can return an offset, index, or other file location indicator that specifies the portion of this file stored in the storage block.
  • step 230 identifies additional high-level data structure entities or portions thereof that are likely to be requested by the storage client.
  • A variety of techniques for identifying additional high-level data structure entities or portions thereof for prefetching may be used by embodiments of step 230 . Some of these are described in detail in co-pending U.S. patent application Ser. No. ______ [Attorney Docket Number R001420US], entitled “Virtual Data Storage System Optimizations”, filed ______, which is incorporated by reference herein for all purposes.
  • One example technique is to prefetch portions of the high-level data structure entity based on their adjacency or close proximity to the identified portion of the entity. For example, if step 225 determines that the requested storage block corresponds with a portion of a file from file offset 0 up to offset 4095, then step 230 may identify a second portion of this same file beginning with offset 4096 for prefetching. It should be noted that although these two portions are adjacent in the high-level data structure entity, their corresponding storage blocks may be non-contiguous.
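  • An illustrative sketch of this adjacency technique (the reverse index and block size are assumptions) maps the next portions of the same file back to their possibly non-contiguous storage blocks:

    BLOCK = 4096   # assumed storage block size

    # Reverse index: (entity, file offset) -> storage block address (assumed).
    reverse_issd = {
        ("fileA", 0): 1000,
        ("fileA", 4096): 7342,   # adjacent in the file, non-contiguous on disk
        ("fileA", 8192): 1002,
    }

    def prefetch_candidates(entity, offset, lookahead=2):
        """Blocks holding the next `lookahead` portions of the same entity."""
        blocks = []
        for i in range(1, lookahead + 1):
            addr = reverse_issd.get((entity, offset + i * BLOCK))
            if addr is not None:
                blocks.append(addr)
        return blocks

    print(prefetch_candidates("fileA", 0))   # [7342, 1002]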
  • Another example technique is to identify the type of high-level data structure entity, such as a file of a specific format, a directory in a file system, or a database table, and apply one or more heuristics to identify additional portions of this high-level data structure entity or a related high-level data structure entity for prefetching.
  • For example, applications employing a specific type of file may frequently access data at a specific location within these files, such as at the beginning or end of the file.
  • step 230 may identify these frequently accessed portions of the file for prefetching.
  • step 230 monitors the times at which high-level data structure entities are accessed.
  • High-level data structure entities that are accessed at approximately the same time are associated together by the virtual storage array architecture. If any one of these associated high-level data structure entities is later accessed again, an embodiment of step 230 identifies one or more associated high-level data structure entities that were previously accessed at approximately the same time as the requested high-level data structure entity for prefetching.
  • a storage client may have previously requested storage blocks from files A, B, and C at approximately the same time, such as within a minute of each other. Based on this previous access pattern, if step 225 determines that a requested storage block is associated with file A, step 230 may identify all or portions of files B and C for prefetching.
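  • A sketch of this time-based association (the one-minute window follows the example above; the data structures are assumptions) links entities accessed within a window so that a later access to one nominates its companions for prefetching:

    from collections import defaultdict

    WINDOW = 60.0                    # seconds; matches the example above
    recent = []                      # (timestamp, entity) of recent accesses
    companions = defaultdict(set)    # entity -> entities accessed near it

    def record_access(ts, entity):
        """Link entities accessed within WINDOW seconds of each other."""
        recent[:] = [(t, e) for t, e in recent if ts - t <= WINDOW]
        for _, other in recent:
            if other != entity:
                companions[entity].add(other)
                companions[other].add(entity)
        recent.append((ts, entity))

    record_access(0.0, "fileA")
    record_access(10.0, "fileB")
    record_access(20.0, "fileC")
    print(companions["fileA"])   # {'fileB', 'fileC'}: prefetch when fileA recurs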
  • step 230 analyzes the high-level data structure entity associated with the requested storage block to identify related portions of the same or other high-level data structure entity for prefetching.
  • application files may include references to additional files, such as overlay files or dynamically loaded libraries.
  • a database table may include references to other database tables.
  • Step 230 identifies all or portions of one or more high-level data structure entities for prefetching based on the high-level data structure entity associated with the requested storage block.
  • storage clients specify data access requests in terms of storage blocks, not high-level data structure entities such as files, directories, or database tables.
  • step 235 identifies one or more storage blocks corresponding with the high-level data structure entities identified for prefetching in step 230 .
  • step 235 provides the ISSD with identifiers for one or more high-level data structure entities, such as the inodes of files or similar identifiers for other types of file systems or database storage structures.
  • step 235 also provides an offset, file location, or other type of address to identify a specific portion of a high-level data structure entity to be prefetched.
  • the ISSD returns an identifier of one or more storage blocks associated with the high-level data structure entities. These identified storage blocks are used to store the high-level data structure entities or portions thereof.
  • Decision block 240 determines if the storage blocks identified in step 235 have already been stored in the storage block read cache located at the branch location.
  • the storage block access optimizer at the data center maintains a record of all of the storage blocks that have copies stored in the storage block read cache.
  • the storage block access optimizer queries the branch location virtual storage array interface to determine if copies of these identified storage blocks have already been stored in the storage block read cache.
  • decision block 240 and the determination of whether an additional storage block has been previously retrieved and cached may be omitted. Instead, this embodiment can send all of the additional storage blocks identified by step 235 to the branch location virtual storage array interface to be cached. This embodiment can be used when WAN latency, rather than WAN bandwidth limitations, is the overriding concern.
  • method 200 proceeds from decision block 240 back to step 205 to await receipt of further storage block requests.
  • step 245 retrieves these uncached storage blocks from the virtual storage array data located in a physical data storage on the data center LAN.
  • the retrieved storage blocks are sent via the WAN connection from the data center location to the branch location.
  • the data center virtual storage array interface receives a request for the uncached identified storage blocks from the storage block access optimizer and, in response, accesses the physical data storage array to retrieve these storage blocks.
  • the data center virtual storage array interface then forwards these storage blocks to the branch location virtual storage array interface via the WAN connection.
  • Step 250 stores the storage blocks identified for prefetching in the storage block read cache.
  • the branch location virtual storage array interface receives one or more storage blocks from the data center virtual storage array interface via the WAN connection and stores these storage blocks in the storage block read cache.
  • method 200 proceeds to step 205 to await receipt of further storage block requests.
  • the storage blocks added to the storage block read cache in previous iterations of method 200 may be available for fulfilling storage block read requests.
  • Method 200 may be performed by a branch virtual data storage array interface, by a data center virtual data storage array interface, or by both virtual data storage array interfaces working in concert.
  • steps 205 to 220 of method 200 may be performed by a branch location virtual storage array interface and steps 225 to 250 of method 200 may be performed by a data center virtual storage array interface.
  • all of the steps of method 200 may be performed by a branch location virtual storage array interface.
  • FIG. 2B illustrates a method 255 of performing policy-based prefetching of storage blocks according to an embodiment of the invention.
  • Step 260 selects a high-level data structure entity for analysis. Examples of selected high-level data structure entities include a file, a directory, and other file system entities such as an inode, as well as database entities such as tables, records, and B-tree nodes or other structures.
  • Step 265 analyzes the selected high-level data structure entity to identify additional portions of the same high-level data structure entity or all or portions of additional high-level data structure entities that are likely to be requested by the storage client.
  • A variety of techniques for identifying additional high-level data structure entities or portions thereof for prefetching may be used by embodiments of step 265 . Some of these are described in detail in co-pending U.S. patent application Ser. No. ______ [Attorney Docket Number R001420US], entitled “Virtual Data Storage System Optimizations”, filed ______, which is incorporated by reference herein for all purposes.
  • One example technique is to identify the type of entity, such as a file of a specific format, a directory in a file system, or a database table, and apply one or more heuristics to identify additional portions of this high-level data structure entity or a related high-level data structure entity for prefetching. For example, applications employing a specific type of file may frequently access data at a specific location within these files, such as at the beginning or end of the file. Using knowledge of this application or entity-specific behavior, step 265 may identify the beginning or end portions of these types of files for prefetching.
  • step 265 analyzes the high-level data structure entity associated with the requested storage block to identify related portions of the same or other high-level data structure entity for prefetching.
  • application files may include references to additional files, such as overlay files or dynamically loaded libraries.
  • a database table may include references to other database tables. Step 265 may use an analysis of this high-level data structure entity to identify additional referenced high-level data structure entities.
  • the referenced high-level data structure entities may be prefetched.
  • step 265 may analyze application, virtual machine, or operating system specific files or other high-level data structure entities to identify additional high-level data structure entities for prefetching.
  • step 265 may analyze application or operating system log files to identify the sequence of files accessed during operations such as system or application start-up. These identified files may then be selected for prefetching.
  • step 270 identifies all or portions of one or more high-level data structure entities for prefetching based on the high-level data structure entity associated with the requested storage block.
  • storage clients specify data access requests in terms of storage blocks, not high-level data structure entities such as files, directories, or database tables.
  • step 270 provides the ISSD with identifiers for one or more high-level data structure entities, such as the inodes of files or similar identifiers for other types of file systems or database storage structures.
  • step 270 also provides an offset, file location, or other type of address to identify a specific portion of a high-level data structure entity to be prefetched.
  • the ISSD returns an identifier of one or more storage blocks associated with the high-level data structure entities. These storage blocks are used to store the high-level data structure entities or portions thereof.
  • Decision block 275 determines if the storage blocks identified in step 270 have already been stored in the storage block read cache located at the branch location.
  • the storage block access optimizer at the data center maintains a record of all of the storage blocks that have copies stored in the storage block read cache.
  • the storage block access optimizer queries the branch location virtual storage array interface to determine if copies of these identified storage blocks have already been stored in the storage block read cache.
  • decision block 275 and the determination of whether an additional storage block has been previously retrieved and cached may be omitted. Instead, this embodiment can send all of the additional storage blocks identified by step 270 to the branch location virtual storage array interface to be cached. This embodiment can be used when WAN latency, rather than WAN bandwidth limitations, is the overriding concern.
  • step 280 determines if there are additional high-level data structure entities that should be included in the analysis of method 255 , based on the results of step 265 . For example, if steps 260 and 265 analyze a first file and identify a second file that should be prefetched, step 280 may include this second file in a list of high-level data structure entities to be analyzed by method 255 , potentially identifying additional files from the analysis of this second file.
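  • A sketch of this recursive analysis (the worklist structure and analyze() contract are assumptions) processes a queue of entities, nominating uncached blocks for prefetch and enqueueing newly referenced entities:

    from collections import deque

    def policy_prefetch(seed_entities, analyze, cached):
        """analyze(entity) -> (candidate_blocks, referenced_entities); assumed."""
        worklist = deque(seed_entities)
        seen = set(seed_entities)
        to_fetch = []
        while worklist:
            entity = worklist.popleft()
            blocks, referenced = analyze(entity)
            to_fetch += [b for b in blocks if b not in cached]  # skip cached
            for ref in referenced:           # e.g. a library named in app files
                if ref not in seen:
                    seen.add(ref)
                    worklist.append(ref)     # analyze the second file too
        return to_fetch

    refs = {"app.exe": ([10, 11], ["lib.dll"]), "lib.dll": ([12], [])}
    print(policy_prefetch(["app.exe"], lambda e: refs[e], cached={11}))  # [10, 12]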
  • step 285 retrieves these uncached storage blocks from the virtual storage array data located in a physical data storage on the data center LAN.
  • the retrieved storage blocks are sent via the WAN connection from the data center location to the branch location.
  • the data center virtual storage array interface receives a request for the uncached identified storage blocks from the storage block access optimizer and accesses the physical data storage array to retrieve these storage blocks.
  • the data center virtual storage array interface then forwards these storage blocks to the branch location virtual storage array interface via the WAN connection.
  • Step 290 stores the storage blocks identified for prefetching in the storage block read cache.
  • the branch location virtual storage array interface receives one or more storage blocks from the data center virtual storage array interface via the WAN connection and stores these storage blocks in the storage block read cache.
  • method 255 proceeds to step 285 .
  • the storage blocks added to the storage block read cache in previous iterations of method 255 may be available for fulfilling storage block read requests.
  • step 280 proceeds to step 260 to select another high-level data structure entity for analysis.
  • steps 285 and 290 may be performed asynchronously or in parallel with further iterations of method 255 .
  • a storage block access optimizer may direct the data center virtual storage array interface to retrieve one or more storage blocks. While this operation is being performed, the storage block access optimizer may continue with the execution of method 255 by proceeding to optional step 280 to identify further high-level data structure entities for analysis, and/or returning to step 260 for an additional iteration of method 255 .
  • step 290 may be performed in the background and in parallel to transfer these storage blocks via the WAN to the branch location for storage in the storage block read cache.
  • Method 255 may be performed by a branch virtual data storage array interface, by a data center virtual data storage array interface, or by both virtual data storage array interfaces working in concert.
  • steps 260 to 285 of method 255 may be performed by a data center virtual storage array interface.
  • all of the steps of method 255 may be performed by a branch location virtual storage array interface.
  • Embodiments of both methods 200 and 255 utilize the ISSD to identify high-level data structure entities from storage blocks and/or to identify storage blocks from their associated high-level data structure entities.
  • An embodiment of the invention creates the ISSD by initially searching high-level data structure entities, such as a master file table, allocation table or tree, or other types of file system metadata structures, to identify the high-level data structure entities corresponding with the storage blocks.
  • An embodiment of the invention may further recursively analyze other high-level data structure entities, such as inodes, directory structures, files, and database tables and nodes, that are referenced by the master file table or other high-level data structures.
  • This initial analysis may be performed by either the branch location or data center virtual storage array interface as a preprocessing activity or in the background while processing storage client requests.
  • the ISSD may be updated frequently or infrequently, depending upon the desired prefetching performance.
  • Embodiments of the invention may update the ISSD by periodically scanning the high-level data structure entities or by monitoring storage client activity for changes or additions to the virtual storage array, which is then used to update the affected portions of the ISSD.
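  • An illustrative sketch of building such a database (the extent-list format and block size are assumptions; real file system metadata varies widely) inverts each file's block extents into a block-to-entity map:

    def build_issd(master_file_table, block_size=4096):
        """master_file_table: iterable of (entity, [(start_block, count), ...])."""
        issd = {}
        for entity, extents in master_file_table:
            offset = 0
            for start, count in extents:
                for i in range(count):
                    issd[start + i] = (entity, offset)   # block -> (entity, offset)
                    offset += block_size
        return issd

    mft = [("inode:52 /docs/report.doc", [(1000, 2), (7342, 1)])]
    print(build_issd(mft)[7342])   # ('inode:52 /docs/report.doc', 8192)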
  • As described above, embodiments of the invention prefetch storage blocks from the data center storage array and cache these storage blocks in a storage block cache located at the branch location.
  • The storage block cache may be smaller than the virtual storage array.
  • Thus, the branch or data center virtual storage array interface may occasionally need to evict or remove some storage blocks from the storage block cache to make room for other prefetched storage blocks.
  • In an embodiment, the branch virtual storage array interface may use any cache replacement scheme or policy known in the art, such as a least recently used (LRU) cache management policy.
  • In a further embodiment, the replacement policy of the storage block cache is based on an understanding of the relationship between storage blocks and corresponding high-level data structure entities, such as file system or database entities. In this embodiment, even though the storage block cache operates on the basis of storage blocks, the storage block cache replacement policies determine whether to retain or evict storage blocks based on their associations with files or other high-level data structure entities.
  • For example, an embodiment of the virtual storage interface uses information associating storage blocks with corresponding files to evict all of the storage blocks associated with a single file, rather than evicting some storage blocks from one file and some from another file.
  • Thus, storage blocks are not necessarily evicted based on their own usage alone, but on the overall usage of their associated file or other high-level data structure entity.
  • Similarly, the storage block cache may elect to preferentially retain storage blocks including file system metadata and/or directory structures over other storage blocks that include file data only.
  • In another example, the storage block cache may identify files or other high-level data structure entities that have not been accessed recently, and then use the ISSD to identify and select the storage blocks corresponding with these infrequently used files for eviction, as in the sketch following this paragraph.
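  • The following Python sketch illustrates one possible file-granularity eviction policy of this kind; the cache capacity accounting and the `issd` interface are simplifying assumptions, not the disclosed implementation.

```python
import time

class FileAwareReadCache:
    """Sketch: blocks are cached individually, but the eviction victim is
    the least recently used *file*; all of that file's cached blocks,
    found via the ISSD, are removed together."""

    def __init__(self, capacity, issd):
        self.capacity = capacity
        self.issd = issd
        self.blocks = {}            # block id -> cached data
        self.file_last_use = {}     # entity id -> time of last access

    def read(self, block_id):
        entity = self.issd.entity_for(block_id)
        self.file_last_use[entity] = time.monotonic()
        return self.blocks.get(block_id)

    def insert(self, block_id, data):
        while len(self.blocks) >= self.capacity and self.file_last_use:
            # Evict every cached block of the coldest file in one step,
            # rather than shaving blocks off several different files.
            coldest = min(self.file_last_use, key=self.file_last_use.get)
            for b in self.issd.blocks_for(coldest):
                self.blocks.pop(b, None)
            del self.file_last_use[coldest]
        self.blocks[block_id] = data
        self.file_last_use.setdefault(self.issd.entity_for(block_id),
                                      time.monotonic())
```

  • A fuller policy along these lines could additionally weight storage blocks containing file system metadata or directory structures so that they are retained preferentially, as described above.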
  • In addition to these eviction policies, an embodiment of the virtual array storage system can also include cache policies to preferentially retain or “pin” specific storage blocks in the storage block cache, regardless of their usage or other factors. These cache retention policies can ensure that specific storage blocks are always accessible at the branch location, even at times when the WAN is unavailable, since copies of these storage blocks will always exist in the storage block cache.
  • For example, a user, administrator, or administrative application may specify all or a portion of the virtual storage array for preferential retention or pinning in the storage block cache.
  • Upon receiving a request to pin some or all of the virtual storage array data in the storage block cache, the virtual storage array system determines whether the storage block cache has sufficient additional capacity to store the specified storage blocks. If the storage block cache has sufficient capacity, the virtual storage array system reserves space in the storage block cache for the specified storage blocks; otherwise, the request is denied.
  • The cache may also initiate a proactive prefetch process to retrieve from the data center via the WAN any requested storage blocks that are not already in the storage block cache. For large pinning requests, such as an entire virtual storage array, it may take hours or days for this proactive prefetch to be completed.
  • In an embodiment, this proactive prefetching of pinned storage blocks may be performed asynchronously and at a lower priority than storage clients' requests for virtual storage array read operations, associated prefetching (discussed above), and virtual storage array write operations (discussed below). This embodiment may be used to deploy data to a new branch location.
  • In this scenario, the virtual storage array data is copied asynchronously via the WAN to the branch location storage block cache.
  • Although this data transfer may take some time to complete, storage clients at the new branch location can access virtual storage array data immediately using the virtual storage array read and write operations, with the above-described storage block prefetching hiding the bandwidth and latency limitations of the WAN when storage clients access storage blocks that have yet to be copied to the branch location.
  • In a further embodiment, the storage block cache may allow users, administrators, and administration applications to directly specify the pinning of high-level data structure entities, such as files or database elements, as opposed to specifying storage blocks for pinning in the storage block cache.
  • In this embodiment, the virtual storage array uses the ISSD to identify the storage blocks corresponding with the specified high-level data structure entities.
  • Similarly, the virtual storage array may allow users, administrators, and administrative applications to specify only a portion of high-level data structure entities for pinning, such as file metadata and frequently used indices within high-level data structure entities. The virtual storage array then uses the associations between storage blocks and high-level data structure entities from the ISSD to identify the specific storage blocks to be pinned in the storage block cache, as illustrated by the sketch following this paragraph.
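  • A minimal sketch of such entity-level pinning follows; the capacity check, reservation call, and background prefetch interface are hypothetical names chosen for illustration, not disclosed APIs.

```python
def pin_entities(entity_ids, issd, cache, wan):
    """Sketch: resolve pinned entities to storage blocks via the ISSD,
    verify the cache can hold them, reserve the space, and schedule a
    low-priority background prefetch for blocks not yet cached."""
    blocks = set()
    for entity_id in entity_ids:
        blocks |= set(issd.blocks_for(entity_id))

    if not cache.has_capacity_for(len(blocks)):
        raise RuntimeError("pin request denied: insufficient cache capacity")

    cache.reserve(blocks)                                # mark blocks non-evictable
    missing = [b for b in blocks if not cache.contains(b)]
    # Proactive prefetch runs below normal read/write priority; pinning an
    # entire virtual storage array this way may take hours or days.
    wan.prefetch_background(missing, priority="low")
    return len(missing)
```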
  • FIG. 3 illustrates a method 300 of processing storage block write requests to improve virtualized data storage system performance according to an embodiment of the invention.
  • An embodiment of method 300 starts with step 305 receiving a storage block write request from a storage client within the branch location LAN.
  • The storage block write request may be received from a storage client by a branch location virtual storage interface.
  • Next, decision block 310 determines whether the storage block write cache in the virtual storage array cache at the branch location is capable of accepting additional write requests or is full.
  • In an embodiment, the virtual storage array cache may use some or all of its storage as a storage block write cache for pending virtual storage array write operations.
  • If the storage block write cache can accept additional write requests, step 315 stores the storage block write request, including the storage block data to be written, in the storage block write cache.
  • Step 320 then sends a write acknowledgement to the storage client. Following receipt of this write acknowledgement, the storage client treats its storage block write request as complete and can continue to operate normally.
  • Step 325 then transfers the queued written storage block via the WAN to the physical storage array at the data center LAN. This transfer may occur in the background and asynchronously with the operation of storage clients.
  • While a written storage block is queued at the branch location, a storage client may wish to access this storage block for a read or an additional write.
  • In this case, the virtual storage array interface intercepts the storage block access request.
  • For a read request, the virtual storage array interface provides the storage client with the previously queued storage block.
  • For an additional write request, the virtual storage array interface will update the queued storage block data and send a write acknowledgement to the storage client for this additional storage block access.
  • Conversely, if decision block 310 determines that the storage block write cache is full, step 330 immediately transfers the storage block via the WAN to the physical storage array at the data center LAN.
  • The branch location virtual storage array interface then receives a write confirmation that the storage block write operation is complete. This confirmation may be received from a data center virtual storage array interface or directly from a physical storage array or other data storage device.
  • Upon receiving this confirmation, step 340 sends a write acknowledgement to the storage client, allowing the storage client to resume normal operation; the sketch following this paragraph summarizes this write path.
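  • The write path of method 300 might be summarized by the following Python sketch, in which the write cache, WAN link, and client objects are illustrative stand-ins rather than disclosed interfaces.

```python
def handle_write(request, write_cache, wan, client):
    """Sketch of method 300: acknowledge immediately from the branch write
    cache when possible (write-back); otherwise fall through to a
    synchronous write over the WAN (write-through)."""
    if not write_cache.is_full():                         # cf. decision block 310
        write_cache.enqueue(request.block, request.data)  # cf. step 315
        client.acknowledge(request)                       # cf. step 320: ack now
        # cf. step 325: queued blocks drain to the data center asynchronously
    else:
        wan.write_block(request.block, request.data)      # cf. step 330
        wan.wait_for_confirmation(request.block)          # data center confirms
        client.acknowledge(request)                       # cf. step 340
```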
  • In an embodiment, a branch location virtual storage array interface may throttle storage block read and/or write requests from storage clients to prevent the virtual storage array cache from filling up under typical usage scenarios.
  • Embodiments of the virtual storage array use write order preservation to maintain data consistency.
  • In an embodiment, the storage block write cache tracks the order in which write requests are received and can use this ordering information when forwarding the storage block write requests to the physical storage array via the WAN, as described by step 325.
  • FIGS. 4A-4C illustrate three write order preservation policies according to an embodiment of the invention.
  • FIG. 4A illustrates the contents of an example storage block write WAN queue 400 .
  • Storage block write WAN queue 400 is used by embodiments of a branch virtual storage array interface to schedule the transmission of storage blocks written by storage clients at the branch location from the storage block write cache to the physical storage array at the data center location.
  • In this example, a sequence of ten write operations from one or more branch storage clients is recorded.
  • For each write operation, the storage block write WAN queue 400 includes a reference to the storage block written by that operation. For example, the first or earliest write operation received, write operation 1, is a write to storage block 4, and the last or most recent write operation received, write operation 10, is a write to storage block 5.
  • A first write order preservation policy preserves the semantics of the original file system, database, or other high-level data structure entity by forwarding all block write requests over the WAN to the physical storage array in the same order that they were received by the virtual array storage cache.
  • Under this policy, the branch virtual storage array interface will communicate written storage blocks to the physical storage array at the data center via the WAN in the same sequence as shown in example storage block write WAN queue 400.
  • As a result, the image of the file system or database that exists on the physical storage array is always an internally consistent replica of the modifications made by storage clients at some point in time.
  • Additionally, snapshots of the virtual storage array data, such as snapshots A and B, are guaranteed to be internally consistent, because they include all of the write operations prior to the snapshot time.
  • However, if storage clients write to the same storage block multiple times, this write order preservation policy requires the storage block write cache to keep track of the multiple versions of these storage blocks and forward all of the write operations to these different versions of the storage block in the order received.
  • This policy also requires more WAN bandwidth, because every version of a storage block in the storage block write WAN queue must be forwarded to the data center, even if these versions are superseded by more recent versions of the storage block already in the storage block write WAN queue. For example, in storage block write WAN queue 400, storage block 3 is written to in write operations 2, 4, and 7. Thus, the storage block write cache must transmit all three of these versions of storage block 3 in the order that they were received.
  • A second write order preservation policy forwards only the most recent version of each written storage block to the data center. FIG. 4B illustrates an example storage block WAN transmission order 405 according to this embodiment of the invention.
  • Example storage block WAN transmission order 405 is based on the example storage block write WAN queue 400 shown in FIG. 4A.
  • In example storage block WAN transmission order 405, only the most recent version of each storage block in storage block write WAN queue 400 is communicated to the data center via the WAN. For example, write operation 5 in storage block write WAN queue 400 is the most recent version of storage block 4.
  • Similarly, write operations 7, 8, 9, and 10 in storage block write WAN queue 400 are the most recent versions of storage blocks 3, 1, 2, and 5, respectively.
  • Thus, write operations 5, 7, 8, 9, and 10 are the only write operations in storage block write WAN queue 400 that need to be transmitted to the physical storage array at the data center, as shown by example storage block WAN transmission order 405.
  • The remaining storage block write operations in the storage block write WAN queue 400 may be discarded.
  • The most recent version policy shown by FIG. 4B reduces the WAN bandwidth required, because multiple versions of the same storage block need not be transmitted.
  • However, the virtual storage array data on the physical storage array may not be internally consistent until all of the write operations in the storage block write cache have been processed and, as necessary, transmitted back to the physical storage device at the data center.
  • Additionally, this policy does not preserve consistent snapshots of the virtual storage array, because some write operations prior to a snapshot may be omitted from the storage block WAN transmission order 405 if there are further writes to the same storage block after the snapshot time. For example, write operations 1, 2, and 3 from the storage block write WAN queue 400, which occur before the time of snapshot A, are omitted from the storage block WAN transmission order 405. Thus, snapshot A will not be internally consistent, because it is missing the most recent versions of storage blocks 4, 3, and 1 prior to the time of snapshot A. The sketch following this paragraph illustrates this coalescing policy.
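  • A short Python sketch of this most recent version policy follows; the queue contents are reconstructed from the description of FIG. 4A, and the final assertion reproduces transmission order 405.

```python
# Storage block write WAN queue 400 from FIG. 4A, reconstructed from the
# text: (write operation number, storage block written).
QUEUE_400 = [(1, 4), (2, 3), (3, 4), (4, 3), (5, 4),
             (6, 2), (7, 3), (8, 1), (9, 2), (10, 5)]

def most_recent_versions(queue):
    """FIG. 4B policy sketch: transmit only the latest write to each
    storage block; superseded versions are discarded to save bandwidth."""
    latest = {}                       # storage block -> most recent write op
    for op, block in queue:
        latest[block] = op
    keep = set(latest.values())
    return [op for op, _ in queue if op in keep]

# Matches example transmission order 405 (write operations 5, 7, 8, 9, 10).
assert most_recent_versions(QUEUE_400) == [5, 7, 8, 9, 10]
```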
  • A third write order preservation policy coalesces superseded storage block versions only within each snapshot interval, preserving consistency at snapshot times while still reducing WAN bandwidth. FIG. 4C illustrates an example storage block WAN transmission order 410 according to this embodiment of the invention.
  • Example storage block WAN transmission order 410 is based on the example storage block write WAN queue 400 shown in FIG. 4A.
  • In example storage block WAN transmission order 410, the most recent version of each storage block before each snapshot time in storage block write WAN queue 400 is communicated to the data center via the WAN.
  • In this example, storage block write WAN queue 400 includes two snapshot times, snapshot A and snapshot B.
  • For each snapshot time, an embodiment of the storage block write cache forwards only the most recent version of each storage block updated by write operations prior to that snapshot time.
  • For example, storage block 4 is updated by write operations 1 and 3, and storage block 3 is updated by write operation 2, prior to snapshot time A.
  • Thus, the storage block WAN transmission order 410 output by the storage block write cache will include write operations 2 and 3 to update storage blocks 3 and 4, reflecting the most recent updates of these storage blocks prior to snapshot time A.
  • Write operation 1 is omitted because write operation 3 is a more recent update of the same storage block before snapshot time A.
  • Similarly, the storage block WAN transmission order 410 includes write operations 5, 6, and 7, reflecting the most recent updates of storage blocks 4, 2, and 3, respectively, prior to snapshot time B.
  • Under this policy, the storage block WAN transmission order 410 may include multiple versions of the same storage block if there are one or more snapshots between the associated write operations. For example, write operations 3 and 5 are both included in storage block WAN transmission order 410 because they update storage block 4 prior to and following snapshot time A.
  • Finally, the storage block WAN transmission order 410 includes write operations 8, 9, and 10, which are the most recent updates to storage blocks 1, 2, and 5, respectively, following snapshot time B.
  • Although the physical storage array may contain an inconsistent view of the virtual storage array data at some arbitrary points in time, this embodiment ensures that the virtual storage array data will be internally consistent at the times of snapshots; the sketch following this paragraph reproduces this snapshot-consistent transmission order.
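  • The snapshot-consistent policy of FIG. 4C can be sketched as follows; again the queue and snapshot positions are reconstructed from the text (snapshots A and B falling after write operations 3 and 7), and the assertion reproduces transmission order 410.

```python
QUEUE_400 = [(1, 4), (2, 3), (3, 4), (4, 3), (5, 4),
             (6, 2), (7, 3), (8, 1), (9, 2), (10, 5)]
SNAPSHOT_TIMES = [3, 7]   # snapshots A and B fall after write ops 3 and 7

def snapshot_consistent_order(queue, snapshot_times):
    """FIG. 4C policy sketch: coalesce superseded block versions only
    within each snapshot interval, so every snapshot still receives the
    most recent pre-snapshot version of each storage block."""
    boundaries = snapshot_times + [max(op for op, _ in queue)]
    order, start = [], 0
    for end in boundaries:
        interval = [(op, blk) for op, blk in queue if start < op <= end]
        latest = {blk: op for op, blk in interval}   # last write per block
        keep = set(latest.values())
        order.extend(op for op, _ in interval if op in keep)
        start = end
    return order

# Write operations 1 and 4 are each superseded within their own snapshot
# interval, so they are the only writes omitted from transmission order 410.
assert snapshot_consistent_order(QUEUE_400, SNAPSHOT_TIMES) == [2, 3, 5, 6, 7, 8, 9, 10]
```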
  • As described above, the data of a virtual storage array may be stored in a physical storage array or other data storage device.
  • In some applications, the physical storage blocks used by the virtual storage array belong to a virtual machine file system, such as VMFS.
  • FIG. 5 illustrates an example arrangement 500 for successively applying transformations and optimizations to improve virtualized data storage system performance according to an embodiment of the invention.
  • In this arrangement, successive levels of translation may be used to convert storage block requests to corresponding intermediate level data structure entities and then into corresponding high-level data structure entities.
  • Example arrangement 500 includes a physical data storage system 505 , such as a physical data storage array or file server.
  • The physical data storage system 505 may be associated with a file system or volume manager that provides an interface for accessing physical storage blocks.
  • In this example, a virtual storage array interface receives a request for a virtual storage array storage block from a storage client. This request for a virtual storage array storage block is converted by one or more virtual storage array interfaces to a request 507 for a corresponding physical storage block in the physical data storage system 505.
  • Example arrangement 500 also includes a physical storage block to virtual machine storage structure translation module 510.
  • Module 510 maps a given physical storage block to a corresponding portion of a virtual machine storage structure 515 .
  • For example, virtual machine storage structure 515 may be a VMFS storage volume.
  • The VMFS storage volume appears as a logical storage unit, such as a LUN, to the virtual storage array interface.
  • The VMFS storage volume may include multiple virtual machine disk images.
  • Although the VMFS storage volume appears as a single logical storage unit to the storage client, each disk image within the VMFS storage volume appears to a virtual machine application as a separate virtual logical storage unit.
  • Thus, module 510 may identify a portion of a virtual logical storage unit within the VMFS storage volume as corresponding with the requested physical storage block.
  • Example arrangement 500 also includes a second translation module 520. Module 520 maps the identified portion of a virtual machine storage structure, such as a virtual logical storage unit within a VMFS storage volume, to one or more corresponding virtual file system storage blocks within a virtual file system 525.
  • Virtual file system 525 may be any type of file system implemented within a virtual logical storage unit. Examples of virtual file systems include FAT, NTFS, and the ext family of file systems.
  • For example, a virtual logical storage unit may be a disk image used by a virtual machine application. The disk image represents the contents of a virtual data storage device as virtual storage blocks. The virtual storage blocks in this disk image are organized according to the virtual file system 525.
  • However, virtual machine applications and their hosted applications typically access data in terms of files in the virtual file system 525, rather than storage blocks.
  • High-level data structure entities within the virtual file system, such as files or directories, may be spread out over multiple non-contiguous virtual storage blocks in the virtual file system 525.
  • To address this, a virtual file system inferred storage structure database 530 and a virtual file system block access optimizer 532 leverage an understanding of the semantics and structure of the high-level data structures associated with the virtual storage blocks to predict which virtual storage blocks are likely to be requested by a storage client in the near future.
  • The virtual file system ISSD 530 and virtual file system block access optimizer 532 are similar to the ISSD and block access optimizer, respectively, for physical data storage discussed above.
  • In operation, the virtual file system block access optimizer 532 receives an identification of one or more virtual storage blocks in the virtual file system 525 that correspond with the requested physical storage block in request 507.
  • The virtual file system block access optimizer 532 uses the virtual file system ISSD 530 to identify one or more virtual file system high-level data structure entities, such as virtual file system files, corresponding with these virtual file system storage blocks.
  • Next, the virtual file system block access optimizer 532 uses its knowledge of the high-level data structure entities and reactive and/or policy-based prefetching techniques to identify one or more additional high-level data structure entities, or portions thereof, for prefetching.
  • The virtual file system block access optimizer 532 then uses the virtual file system ISSD 530 to identify additional virtual storage blocks in the virtual file system 525 corresponding with these additional high-level data structure entities or portions thereof.
  • These additional virtual storage blocks in the virtual file system 525 are selected for prefetching.
  • A request 533 for these virtual file system storage blocks is generated.
  • Module 520 then translates the prefetch request 533 for virtual file system storage blocks into an equivalent prefetch request 535 for a portion of the virtual machine storage structure.
  • Similarly, module 510 translates the prefetch request 535 for a portion of the virtual machine storage structure into an equivalent prefetch request 537 for physical storage blocks in the physical data storage system 505.
  • The physical storage blocks indicated by request 537 correspond with the virtual file system storage blocks from request 533.
  • These requested physical storage blocks may be retrieved from the physical data storage system 505 and communicated via the WAN to a branch location virtual storage array interface for storage in a storage block read cache.
  • Arrangement 500 is one example for successively applying transformations and optimizations to improve virtualized data storage system performance according to an embodiment of the invention. Further embodiments of the invention may apply any number of successive transformations to physical storage blocks to identify associated high-level data structure entities. Additionally, once one or more associated high-level data structure entities have been identified, embodiments of the invention may apply optimizations at the level of high-level data structure entities or at any lower level of abstraction. For example, optimizations may be performed at the level of virtual machine file system files, virtual machine file system storage blocks, virtual machine storage structures, physical storage blocks, and/or at any other intermediate data structure level of abstraction.
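  • A compressed Python sketch of this translation chain follows; the mapping objects collapse modules 510 and 520 into single calls and use hypothetical method names chosen for illustration.

```python
def physical_block_to_guest_entities(pblock, vmfs_map, guest_issd):
    """Sketch of arrangement 500 run upward: physical storage block ->
    location within the virtual machine storage structure (e.g. a VMFS
    disk image) -> high-level entities in the guest's virtual file system."""
    vdisk, vblock = vmfs_map.to_virtual(pblock)        # cf. modules 510/520
    return guest_issd.entities_for(vdisk, vblock)      # cf. virtual file system ISSD 530

def prefetch_via_guest_fs(pblock, vmfs_map, guest_issd, optimizer):
    """Predict at the guest file system level, then translate the chosen
    virtual blocks back down to physical blocks for prefetching
    (cf. requests 533 -> 535 -> 537 in FIG. 5)."""
    entities = physical_block_to_guest_entities(pblock, vmfs_map, guest_issd)
    vblocks = optimizer.predict_blocks(entities)       # cf. optimizer 532
    return [vmfs_map.to_physical(vd, vb) for vd, vb in vblocks]
```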
  • FIG. 6 illustrates a method 600 of creating a data storage snapshot in a virtualized data storage system according to an embodiment of the invention.
  • Method 600 begins with step 605, which initiates a virtual storage array checkpoint.
  • A virtual storage array checkpoint may be initiated automatically by a branch location virtual storage array interface according to a schedule or based on criteria, such as the amount of data changed since the last checkpoint.
  • Alternatively, a virtual storage array checkpoint may be initiated in response to a request for a virtual storage array snapshot from a system administrator or administration application.
  • In response, step 610 sets the branch location virtual storage array interface to a quiescent state. This entails completing any pending operations with storage clients (though not necessarily background operations between the branch location and data center virtual storage array interfaces, such as transferring new or updated storage blocks from the storage block write cache to the data center via the WAN). While in the quiescent state, the branch location virtual storage interface will not accept any new storage operations from storage clients.
  • Once the quiescent state is reached, step 615 identifies new or updated storage blocks in the branch location virtual storage array cache. These new or updated storage blocks include data that has been created or updated by storage clients but has yet to be transferred via the WAN back to the data center LAN for storage in the physical data storage array.
  • An embodiment of step 615 also creates a checkpoint data structure.
  • The checkpoint data structure specifies a time of checkpoint creation and the set of new and updated storage blocks at that moment in time.
  • Step 620 then reactivates the branch location's virtual storage array.
  • Once reactivated, the branch location virtual storage array interface can resume servicing storage operations from storage clients. Additionally, the branch location virtual storage array interface may resume transferring new or updated storage blocks via the WAN to the data center LAN for storage in the physical data storage array.
  • In an embodiment, the virtual storage array cache may maintain a copy of an updated storage block even after a copy is transferred back to the data center LAN for storage. This allows subsequent snapshots to be created based on this data.
  • In a further embodiment, the branch location virtual storage array interface preserves the updated storage blocks specified by the checkpoint data structure from further changes. If a storage client attempts to update a storage block that is associated with a checkpoint, an embodiment of the branch location virtual storage array interface creates a duplicate of this storage block in the virtual storage array cache to store the updated data. By making a copy of this storage block, rather than replacing it with further updated data, this embodiment preserves the data of this storage block at the time of the checkpoint for potential future reference.
  • An embodiment of method 600 may initiate one or more additional virtual storage array checkpoints at later times or in response to criteria or conditions.
  • Embodiments of the branch location virtual storage array interface may maintain any arbitrary number of checkpoint data structures and automatically delete outdated checkpoint data structures. For example, a branch location virtual storage interface may maintain only the most recently created checkpoint data structure, or checkpoint data structures from the beginning of the most recent business day and the most recent hour.
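  • One way to sketch this checkpoint and copy-on-write behavior in Python is shown below; the in-memory dictionaries are illustrative simplifications of the virtual storage array cache, not the disclosed data structures.

```python
import time

class CheckpointingWriteCache:
    """Sketch of method 600's checkpoint handling: a checkpoint captures
    the set of new/updated blocks at one moment, and later writes leave
    each checkpoint's frozen copies untouched (copy-on-write)."""

    def __init__(self):
        self.dirty = {}          # block id -> current data awaiting WAN transfer
        self.checkpoints = []    # (timestamp, {block id -> frozen data})

    def create_checkpoint(self):
        # cf. steps 610-620: quiesce, capture the dirty set, then resume.
        frozen = dict(self.dirty)            # snapshot of the current versions
        self.checkpoints.append((time.time(), frozen))
        return frozen

    def write(self, block_id, data):
        # New data goes into the live cache; any frozen checkpoint copy of
        # this block is preserved unchanged for later snapshot creation.
        self.dirty[block_id] = data
```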
  • At any time, a system administrator or administration application may request a snapshot of the virtual storage array data.
  • A snapshot of the virtual storage array data represents the complete set of virtual storage array data at a specific moment in time.
  • Step 625 receives a snapshot request.
  • In response, step 630 transfers a copy of the appropriate checkpoint data structure from the branch location virtual storage array interface to the data center virtual storage interface. Additionally, step 630 transfers a copy of any updated storage blocks specified by this checkpoint data structure from the branch location virtual storage array interface to the data center virtual storage array interface for storage in the physical storage array.
  • Using the checkpoint data structure and these updated storage blocks, the data center virtual storage array interface creates a snapshot of the data of the virtual storage array.
  • The snapshot includes a copy of all of the virtual storage array data in the physical data storage array unchanged from the time of creation of the checkpoint data structure.
  • The snapshot also includes a copy of the updated storage blocks specified by the checkpoint data structure.
  • An embodiment of the data center virtual storage array interface may store the snapshot in the physical storage array or using a data backup system.
  • In an embodiment, the data center virtual storage array interface automatically sends storage operations to the physical storage array interface to create a snapshot from a checkpoint data structure. These storage operations can be carried out in the background by the data center virtual storage array interface in addition to translating virtual storage array operations from one or more branch location virtual storage array interfaces into corresponding physical storage array operations.
  • Embodiments of the invention can implement virtual storage array interfaces at the branch and/or data center as standalone devices or as part of other devices, computer systems, or applications.
  • FIG. 7 illustrates an example computer system capable of implementing a virtual storage array interface according to an embodiment of the invention.
  • FIG. 7 is a block diagram of a computer system 2000 , such as a personal computer or other digital device, suitable for practicing an embodiment of the invention.
  • Embodiments of computer system 2000 may include dedicated networking devices, such as wireless access points, network switches, hubs, routers, hardware firewalls, network traffic optimizers and accelerators, network attached storage devices, storage array network interfaces, and combinations thereof.
  • Computer system 2000 includes a central processing unit (CPU) 2005 for running software applications and optionally an operating system.
  • CPU 2005 may be comprised of one or more processing cores.
  • CPU 2005 may execute virtual machine software applications to create one or more virtual processors capable of executing additional software applications and optional additional operating systems.
  • Virtual machine applications can include interpreters, recompilers, and just-in-time compilers to assist in executing software applications within virtual machines.
  • In an embodiment, one or more CPUs 2005 or associated processing cores can include virtualization-specific hardware, such as additional register sets, memory address manipulation hardware, additional virtualization-specific processor instructions, and virtual machine state maintenance and migration hardware.
  • Memory 2010 stores applications and data for use by the CPU 2005 .
  • Examples of memory 2010 include dynamic and static random access memory.
  • Storage 2015 provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, ROM memory, and CD-ROM, DVD-ROM, Blu-ray, or other magnetic, optical, or solid state storage devices.
  • In an embodiment, storage 2015 includes multiple storage devices configured to act as a storage array for improved performance and/or reliability.
  • In a further embodiment, storage 2015 includes a storage array network utilizing a storage array network interface and storage array network protocols to store and retrieve data. Examples of storage array network interfaces suitable for use with embodiments of the invention include Ethernet, Fibre Channel, IP, and InfiniBand interfaces. Examples of storage array network protocols include ATA, Fibre Channel Protocol, and SCSI. Various combinations of storage array network interfaces and protocols are suitable for use with embodiments of the invention, including iSCSI, HyperSCSI, Fibre Channel over Ethernet, and iFCP.
  • Optional user input devices 2020 communicate user inputs from one or more users to the computer system 2000 , examples of which may include keyboards, mice, joysticks, digitizer tablets, touch pads, touch screens, still or video cameras, and/or microphones.
  • In an embodiment, user input devices may be omitted and computer system 2000 may present a user interface to a user over a network, for example using a web page or network management protocol and network management software applications.
  • Computer system 2000 includes one or more network interfaces 2025 that allow computer system 2000 to communicate with other computer systems via an electronic communications network, and may include wired or wireless communication over local area networks and wide area networks such as the Internet.
  • Computer system 2000 may support a variety of networking protocols at one or more levels of abstraction.
  • For example, computer system 2000 may support networking protocols at one or more layers of the seven-layer OSI network model.
  • An embodiment of network interface 2025 includes one or more wireless network interfaces adapted to communicate with wireless clients and with other wireless networking devices using radio waves, for example using the 802.11 family of protocols, such as 802.11a, 802.11b, 802.11g, and 802.11n.
  • An embodiment of the computer system 2000 may also include a wired networking interface, such as one or more Ethernet connections to communicate with other networking devices via local or wide-area networks.
  • the components of computer system 2000 including CPU 2005 , memory 2010 , data storage 2015 , user input devices 2020 , and network interface 2025 are connected via one or more data buses 2060 . Additionally, some or all of the components of computer system 2000 , including CPU 2005 , memory 2010 , data storage 2015 , user input devices 2020 , and network interface 2025 may be integrated together into one or more integrated circuits or integrated circuit packages. Furthermore, some or all of the components of computer system 2000 may be implemented as application specific integrated circuits (ASICS) and/or programmable logic.
  • Embodiments of the invention can be used with any number of network connections and may be added to any type of network device, client or server computer, or other computing device in addition to the computer illustrated above.
  • Various combinations or sub-combinations of the above-disclosed invention can be advantageously made.
  • the block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.

Abstract

Virtual storage arrays consolidate branch data storage at data centers connected via wide area networks. Virtual storage arrays appear to storage clients as local data storage; however, virtual storage arrays actually store data at the data center. Virtual storage arrays overcome bandwidth and latency limitations of the wide area network by predicting and prefetching storage blocks, which are then cached at the branch location. Virtual storage arrays leverage an understanding of the semantics and structure of high-level data structures associated with storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. Virtual storage arrays determine the association between requested storage blocks and corresponding high-level data structure entities to predict additional high-level data structure entities that are likely to be accessed. From this, the virtual storage array identifies the additional storage blocks for prefetching.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is related to U.S. Provisional Patent Application No. 61/162,463, entitled “Virtualized Data Storage Over Wide-Area Networks”, filed Mar. 23, 2009; U.S. patent application Ser. No. ______ [Attorney Docket Number R001110US], entitled “Virtualized Data Storage Over Wide-Area Networks”, filed ______; U.S. patent application Ser. No. ______ [Attorney Docket Number R001410US], entitled “Virtualized Data Storage System Architecture”, filed ______; and U.S. patent application Ser. No. ______ [Attorney Docket Number R001420US], entitled “Virtual Data Storage System Optimizations”, filed ______; all of which are incorporated by reference herein for all purposes.
  • BACKGROUND
  • The present invention relates generally to data storage systems, and systems and methods to improve storage efficiency, compactness, performance, reliability, and compatibility. Enterprises often span geographical locations, including multiple corporate sites, branch offices, and data centers, all of which are generally connected over a wide-area network (WAN). Although in many cases servers are run in a data center and accessed over the network, there are also cases in which servers need to be run in distributed locations at the “edges” of the network. These network edge locations are generally referred to as branch locations in this application, regardless of the purposes of these locations. The need to operate servers at branch locations may arise for a variety of reasons, including efficiently handling large amounts of newly written data and ensuring service availability during WAN outages.
  • The need to run servers at branch locations in a network, as opposed to a centralized data center location, leads to a corresponding requirement for data storage for those servers at the branch locations, both to store the operating system data for branch servers and, in some cases, user or application data. The branch data storage requires maintenance and administration, including proper sizing for future growth, data snapshots, archives, and backups, and replacements and/or upgrades of storage hardware and software when the storage hardware or software fails or branch data storage requirements change.
  • Although the maintenance and administration of data storage in general incurs additional costs, branch data storage is more expensive and inefficient than consolidated data storage at a centralized data center. Organizations often require on-site personnel at each branch location to configure and upgrade each branch's data storage, and to manage data backups and data retention. Additionally, organizations often purchase excess storage capacity for each branch location to allow for upgrades and growing data storage requirements. Because branch locations are serviced infrequently, due to their numbers and geographic dispersion, organizations often deploy enough data storage at each branch location to allow for months or years of storage growth. However, this excess storage capacity often sits unused for months or years until it is needed, unnecessarily driving up costs.
  • Although the consolidation of information technology infrastructure decreases costs and improves management efficiency, branch data storage is rarely consolidated at a central data center location, because the intervening WAN is slow and has high latency, making storage accesses unacceptably slow for branch client systems and application servers. Thus, organizations have previously been unable to consolidate data storage from multiple branches.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention will be described with reference to the drawings, in which:
  • FIG. 1 illustrates a virtualized data storage system architecture according to an embodiment of the invention;
  • FIGS. 2A-2B illustrate methods of prefetching storage blocks to improve virtualized data storage system performance according to embodiments of the invention;
  • FIG. 3 illustrates a method of processing storage block write requests to improve virtualized data storage system performance according to an embodiment of the invention;
  • FIGS. 4A-4C illustrate write order preservation policies according to embodiments of the invention;
  • FIG. 5 illustrates an arrangement for recursively applying transformations and optimizations to improve virtualized data storage system performance according to an embodiment of the invention;
  • FIG. 6 illustrates a method of creating a data storage snapshot in a virtualized data storage system according to an embodiment of the invention; and
  • FIG. 7 illustrates an example computer system capable of implementing a virtualized data storage system device according to an embodiment of the invention.
  • SUMMARY
  • An embodiment of the invention uses virtual storage arrays to consolidate branch location-specific data storage at data centers connected with branch locations via wide area networks. The virtual storage array appears to a storage client as a local branch data storage; however, embodiments of the invention actually store the virtual storage array data at a data center connected with the branch location via a wide-area network. In embodiments of the invention, a branch storage client accesses the virtual storage array using storage block based protocols.
  • Embodiments of the invention overcome the bandwidth and latency limitations of the wide area network between branch locations and the data center by predicting storage blocks likely to be requested in the future by the branch storage client and prefetching and caching these predicted storage blocks at the branch location. When this prediction is successful, storage block requests from the branch storage client may be fulfilled in whole or in part from the branch location's storage block cache. As a result, the latency and bandwidth restrictions of the wide-area network are hidden from the storage client.
  • The branch location storage client uses storage block-based protocols to specify reads, writes, modifications, and/or deletions of storage blocks. However, servers and higher-level applications typically access data in terms of files in a structured file system, relational database, or other high-level data structure. Each entity in the high-level data structure, such as a file or directory, or database table, node, or row, may be spread out over multiple storage blocks at various non-contiguous locations in the storage device. Thus, prefetching storage blocks based solely on their locations in the storage device is unlikely to be effective in hiding wide-area network latency and bandwidth limits from storage clients.
  • An embodiment of the invention leverages an understanding of the semantics and structure of the high-level data structures associated with the storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. To do this, an embodiment of the invention determines the association between requested storage blocks and the corresponding high-level data structure entities, such as files, directories, or database elements. Once this embodiment has identified one or more of the high-level data structure entities associated with a requested storage block, this embodiment of the invention identifies additional portions of the same or other high-level data structure entities that are likely to be accessed by the storage client. This embodiment of the invention then identifies the additional storage blocks corresponding to these additional high-level data structure entities. The additional storage blocks are then prefetched and cached at the branch location.
  • Another embodiment of the invention analyzes a selected high-level data structure entity to identify portions of the same or other high-level data structure entities that are likely to be accessed by the storage client. This embodiment of the invention then identifies the additional storage blocks corresponding to these additional high-level data structure entities. The additional storage blocks are then prefetched and cached at the branch location. This embodiment of the invention may also identify additional high-level data structure entities to analyze based on its analysis of previously selected high-level data structure entities.
  • Further embodiments of the invention may identify corresponding high-level data structure entities directly from requests for storage blocks. Additionally, embodiments of the invention may successively apply any number of successive transformations to storage block requests to identify associated high-level data structure entities. These successive transformations may include transformations to intermediate level data structure entities. Intermediate and high-level data structure entities may include virtual machine data structures, such as virtual machine file system files, virtual machine file system storage blocks, virtual machine storage structures, and virtual machine disk images.
  • DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
  • FIG. 1 illustrates a virtualized data storage system architecture 100 according to an embodiment of the invention. Virtualized data storage system architecture 100 includes a data center 101 connected with at least one branch network location 102 via a wide-area network (WAN) 130. Each branch location 102 includes at least one storage client 139, such as a file server, application server, database server, or storage area network (SAN) interface. A storage client 139 may be connected with a local-area network (LAN) 151, including routers, switches, and other wired or wireless network devices, for connecting with server and client systems and other devices 152.
  • Previously, typical branch location installations also required a local physical data storage device for the storage client. For example, a prior typical branch location LAN installation may include a file server for storing data for the client systems and application servers, such as database servers and e-mail servers. In prior systems, this branch location's data storage is located at the branch location site and connected directly with the branch location LAN or SAN. The branch location physical data storage device previously could not be located at the data center 101, because the intervening WAN 130 is too slow and has high latency, making storage accesses unacceptably slow for storage clients.
  • An embodiment of the invention allows for storage consolidation of branch location-specific data storage at data centers connected with branch locations via wide area networks. This embodiment of the invention overcomes the bandwidth and latency limitations of the wide area network between branch locations and the data center. To this end, an embodiment of the invention includes virtual storage arrays.
  • In an embodiment, the branch location 102 includes a virtual storage array interface device 135. The virtual storage array interface device 135 presents a virtual storage array 137 to branch location users, such as the branch location storage client 139. A virtual storage array 137 can be used for the same purposes as a local storage area network or other data storage device. For example, a virtual storage array 137 may be used in conjunction with a file server for general-purpose data storage, in conjunction with a database server for database application storage, or in conjunction with an e-mail server for e-mail storage. However, the virtual storage array 137 stores its data at a data center 101 connected with the branch location 102 via a wide area network 130. Multiple separate virtual storage arrays, from different branch locations, may store their data in the same data center and, as described below, on the same physical storage devices.
  • Because the data storage of multiple branch locations is consolidated at a data center, the efficiency, reliability, cost-effectiveness, and performance of data storage is improved. An organization can manage and control access to their data storage at a central data center, rather than at large numbers of separate branch locations. This increases the reliability and performance of an organization's data storage. This also reduces the personnel required at branch location offices to provision, maintain, and backup data storage. It also enables organizations to implement more effective backup systems, data snapshots, and disaster recovery for their data storage. Furthermore, organizations can plan for storage growth more efficiently, by consolidating their storage expansion for multiple branch locations and reducing the amount of excess unused storage. Additionally, an organization can apply optimizations such as compression or data deduplication over the data from multiple branch locations stored at the data center, reducing the total amount of storage required by the organization.
  • In an embodiment, virtual storage array interface 135 may be a stand-alone computer system or network appliance or built into other computer systems or network equipment as hardware and/or software. In a further embodiment, a branch location virtual storage array interface 135 may be implemented as a software application or other executable code running on a client system or application server.
  • In an embodiment, a branch location virtual storage array interface 135 includes one or more storage array network interfaces and supports one or more storage block network protocols to connect with one or more storage clients 139 via a local storage area network (SAN) 138. Examples of storage array network interfaces suitable for use with embodiments of the invention include Ethernet, Fibre Channel, IP, and InfiniBand interfaces. Examples of storage array network protocols include ATA, Fibre Channel Protocol, and SCSI. Various combinations of storage array network interfaces and protocols are suitable for use with embodiments of the invention, including iSCSI, HyperSCSI, Fibre Channel over Ethernet, and iFCP. In cases where the storage array network interface uses Ethernet, an embodiment of the branch location virtual storage array interface can use the branch location LAN's physical connections and networking equipment for communicating with client systems and application services. In other embodiments, separate connections and networking equipment, such as Fibre Channel networking equipment, is used to connect the branch location virtual storage array interface with client systems and/or application services.
  • It should be noted that the branch location virtual storage array interface 135 allows storage clients to access data in the virtual storage array via storage block protocols, unlike file servers that utilize file-based protocols. Thus, the virtual storage array 137 may be accessed by any type of storage client in the same manner as a local physical storage device or storage array. Furthermore, applications executed by the storage client 139 or other client and server systems 152 may access the virtual storage array in the same manner as a local physical storage device or storage array.
  • In an embodiment, the storage client 139 is included in a file server that also provides a network file interface to the virtual storage array 137 to client systems and other application servers. In a further embodiment, the branch location virtual storage array interface 135 is integrated as hardware and/or software with an application server, such as a file server, database server, or e-mail server. In this embodiment, the branch location virtual storage array interface 135 can include application server interfaces, such as a network file interface, for interfacing with other application servers and/or client systems.
  • A branch location virtual storage array interface 135 presents a virtual storage array 137 to one or more storage clients 139. To the storage client 139, the virtual storage array 137 appears to be a local storage array, having its physical data storage at the branch location 102. However, the branch location virtual storage array interface 135 actually stores and retrieves data from physical data storage devices located at the data center 101. Because virtual storage array data accesses must travel via the WAN 130 between the data center 101 LAN to a branch location 102 LAN, the virtual storage array 137 is subject to the latency and bandwidth restrictions of the WAN 130.
  • In an embodiment, the branch location virtual storage array interface 135 includes a virtual storage array cache 145, which is used to ameliorate the effects of the WAN 130 on virtual storage array 137 performance. In an embodiment, the virtual storage array cache 145 includes a storage block read cache 147 and a storage block write cache 149.
  • The storage block read cache 147 is adapted to store local copies of storage blocks requested by storage client 139. As described in detail below, the virtualized data storage system architecture 100 may attempt to predict which storage blocks will be requested by the storage client 139 in the future and preemptively send these predicted storage blocks from the data center 101 to the branch 102 via WAN 130 for storage in the storage block read cache 147. If this prediction is partially or wholly correct, then when the storage client 139 eventually requests one or more of these prefetched storage blocks from the virtual storage array 137, an embodiment of the virtual storage array interface 135 can fulfill this request using local copies of the requested storage blocks from the storage block read cache 147. By fulfilling access requests using prefetched local copies of storage blocks from the storage block read cache 147, the latency and bandwidth restrictions of WAN 130 are hidden from the storage client 139. Thus, from the perspective of the storage client 139, the virtual storage array 137 appears to perform storage block read operations as if the physical data storage were located at the branch location 102.
  • Similarly, the storage block write cache 149 is adapted to store local copies of new or updated storage blocks written by the storage client 139. As described in detail below, the storage block write cache 149 temporarily stores new or updated storage blocks written by the storage client 139 until these storage blocks are copied back to physical data storage at the data center 101 via WAN 130. By temporarily storing new and updated storage blocks locally at the branch location 102, the bandwidth and latency of the WAN 130 is hidden from the storage client 139. Thus, from the perspective of the storage client 139, the virtual storage array 137 appears to perform storage block write operations as if the physical data storage were located at the branch location 102.
  • In an embodiment, the virtual storage array cache 145 includes non-volatile and/or redundant data storage, so that data in new or updated storage blocks are protected from system failures until they can be transferred over the WAN 130 and stored in physical data storage at the data center 101.
  • In an embodiment, the branch location virtual storage array interface 135 operates in conjunction with a data center virtual storage array interface 107. The data center virtual storage array interface 107 is located on the data center 101 LAN and may communicate with one or more branch location virtual storage array interfaces via the data center 101 LAN, the WAN 130, and their respective branch location LANs. Data communications between virtual storage array interfaces can be in any form and/or protocol used for carrying data over wired and wireless data communications networks, including TCP/IP.
  • In an embodiment, data center virtual storage array interface 107 is connected with one or more physical data storage devices 103 to store and retrieve data for one or more virtual storage arrays, such as virtual storage array 137. To this end, an embodiment of a data center virtual storage array interface 107 accesses a physical storage array network interface, which in turn accesses physical data storage array 103 a on a storage array network (SAN) 105. In another embodiment, the data center virtual storage array interface 107 includes one or more storage array network interfaces and supports one or more storage array network protocols for directly connecting with a physical storage array network 105 and its physical data storage array 103 a. Examples of storage array network interfaces suitable for use with embodiments of the invention include Ethernet, Fibre Channel, IP, and InfiniBand interfaces. Examples of storage array network protocols include ATA, Fibre Channel Protocol, and SCSI. Various combinations of storage array network interfaces and protocols are suitable for use with embodiments of the invention, including iSCSI, HyperSCSI, Fibre Channel over Ethernet, and iFCP. Embodiments of the data center virtual storage array interface 107 may connect with the physical storage array interface and/or directly with the physical storage array network 105 using the Ethernet network of the data center LAN and/or separate data communications connections, such as a Fibre Channel network.
  • In another embodiment, data center virtual storage array interface 107 may store and retrieve data for one or more virtual storage arrays, such as virtual storage array 137, using a network storage device, such as file server 103 b. File server 103 b may be connected with data center virtual storage array interface 107 via local-area network (LAN) 115, such as an Ethernet network, and communicate using a network file system protocol, such as NFS, SMB, or CIFS.
  • Embodiments of the data center virtual storage array interface 107 may utilize a number of different arrangements to store and retrieve virtual storage array data with physical data storage array 103 a or file server 103 b. In one embodiment, the virtual data storage array 137 presents a virtualized logical storage unit, such as an iSCSI or Fibre Channel logical unit number (LUN), to storage client 139. This virtual logical storage unit is mapped to a corresponding logical storage unit 104 a on physical data storage array 103 a. Data center virtual storage array interface 107 stores and retrieves data for this virtualized logical storage unit using a non-virtual logical storage unit 104 a provided by physical data storage array 103 a. In a further embodiment, the data center virtual data storage array interface 107 supports multiple branch locations and maps each storage client's virtualized logical storage unit to a different non-virtual logical storage unit provided by physical data storage array 103 a.
  • In another embodiment, virtual data storage array interface 107 maps a virtualized logical storage unit to a virtual machine file system 104 b, which is provided by the physical data storage array 103 a. Virtual machine file system 104 b is adapted to store one or more virtual machine disk images 113, each representing the configuration and, optionally, the state and data of a virtual machine. Each of the virtual machine disk images 113, such as virtual machine disk images 113 a and 113 b, includes one or more virtual machine file systems to store applications and data of a virtual machine. To a virtual machine application, its virtual machine disk image 113 within the virtual machine file system 104 b appears as a logical storage unit. However, the complete virtual machine file system 104 b appears to the data center virtual storage array interface 107 as a single logical storage unit.
  • In another embodiment, virtual data storage array interface 107 maps a virtualized logical storage unit to a logical storage unit or file system 104 c provided by the file server 103 b.
  • As described above, storage clients can interact with virtual storage arrays in the same manner that they would interact with physical storage arrays. This includes issuing storage commands to the branch location virtual storage interface using storage array network protocols such as iSCSI or Fibre Channel protocol. Most storage array network protocols organize data according to storage blocks, each of which has a unique storage address or location. A storage block's unique storage address may include a logical unit number (in the SCSI protocol) or another representation of a logical volume.
  • In an embodiment, the virtual storage array provided by a branch location virtual storage interface allows a storage client to access storage blocks by their unique storage address within the virtual storage array. However, because one or more virtual storage arrays actually store their data within one or more of the physical data storage devices 103, an embodiment of the invention allows arbitrary mappings between the unique storage addresses of storage blocks in the virtual storage array and the corresponding unique storage addresses in one or more physical data storage devices 103. In an embodiment, the mapping between virtual and physical storage addresses may be performed by the branch location virtual storage array interface 135 and/or by the data center virtual storage array interface 107. Furthermore, there may be multiple levels of mapping between the addresses of storage blocks in the virtual storage array and their corresponding addresses in the physical storage device.
  • In an embodiment, storage blocks in the virtual storage array may be of a different size and/or structure than the corresponding storage blocks in a physical storage array or data storage device. For example, if data compression is applied to the storage data, then the physical storage array data blocks may be smaller than the storage blocks of the virtual storage array to take advantage of data storage savings. In an embodiment, the branch location and/or data center virtual storage array interfaces map one or more virtual storage array storage blocks to one or more physical storage array storage blocks. Thus, a virtual storage array storage block can correspond with a fraction of a physical storage array storage block, a single physical storage array storage block, or multiple physical storage array storage blocks, as required by the configuration of the virtual and physical storage arrays.
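  • For illustration only, the following minimal Python sketch shows how one virtual storage block address might resolve to one or more physical block addresses when the block sizes differ; the 4096-byte and 512-byte sizes and the function itself are assumptions of this sketch, not the concrete implementation.

```python
# Hypothetical sketch: map one virtual storage array block address to the
# physical storage array block addresses that back it.

VIRTUAL_BLOCK_SIZE = 4096   # bytes per virtual storage array block (assumed)
PHYSICAL_BLOCK_SIZE = 512   # bytes per physical storage array block (assumed)

def map_virtual_block(virtual_address: int) -> list[int]:
    """One virtual block may correspond to a fraction of, exactly one,
    or several physical blocks, depending on the configured sizes."""
    start_byte = virtual_address * VIRTUAL_BLOCK_SIZE
    end_byte = start_byte + VIRTUAL_BLOCK_SIZE
    first = start_byte // PHYSICAL_BLOCK_SIZE
    last = (end_byte - 1) // PHYSICAL_BLOCK_SIZE
    return list(range(first, last + 1))

# Example: with an 8:1 size ratio, virtual block 3 maps to physical
# blocks 24 through 31.
assert map_virtual_block(3) == list(range(24, 32))
```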
  • In a further embodiment, the branch location and data center virtual storage array interfaces may reorder or regroup storage operations from storage clients to improve efficiency of data optimizations such as data compression. For example, if two storage clients are simultaneously accessing the same virtual storage array, then these storage operations will be intermixed when received by the branch location virtual storage array interface. An embodiment of the branch location and/or data center virtual storage array interface can reorder or regroup these storage operations according to storage client, type of storage operation, data or application type, or any other attribute or criteria to improve virtual storage array performance and efficiency. For example, a virtual storage array interface can group storage operations by storage client and apply data compression to each storage client's operations separately, which is likely to provide greater data compression than compressing all storage operations together.
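  • As a sketch of the regrouping idea just described (illustrative only; the operation format and the use of zlib are assumptions of this sketch), intermixed operations can be batched per storage client before compression:

```python
import zlib
from collections import defaultdict

def compress_by_client(operations):
    """operations: iterable of (client_id, payload_bytes) tuples, in the
    intermixed order received by the branch virtual storage array interface."""
    batches = defaultdict(list)
    for client_id, payload in operations:   # regroup by storage client
        batches[client_id].append(payload)
    # Compress each client's operations as one stream; self-similar data
    # from a single client typically compresses better than a mixed stream.
    return {client_id: zlib.compress(b"".join(payloads))
            for client_id, payloads in batches.items()}
```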
  • As described above, an embodiment of the virtualized data storage system architecture 100 attempts to predict which storage blocks will be requested by a storage client in the near future, prefetches these storage blocks from the physical data storage devices 103, and forwards them to the branch location 102 for storage in the storage block read cache 147. When this prediction is successful and storage block requests can be fulfilled in whole or in part from the storage block read cache 147, the latency and bandwidth restrictions of the WAN 130 are hidden from the storage client. An embodiment of the virtualized data storage system architecture 100 includes a storage block access optimizer 120 to select storage blocks for prefetching to storage clients. In an embodiment, the storage block access optimizer 120 is located at the data center 101 and is connected with or incorporated into the data center virtual data storage array interface 107. In an alternate embodiment, the storage block access optimizer 120 may be located at the branch location 102 and be connected with or incorporated into the branch location virtual data storage interface 135.
  • As discussed above, storage devices such as physical data storage arrays and the virtual data storage array are accessed using storage block-based protocols. A storage block is a sequence of bytes or bits of data. Data storage devices represent their data storage as a set of storage blocks that may be used to store and retrieve data. The set of storage blocks is an abstraction of the underlying hardware of a physical or virtual data storage device. Storage clients use storage block-based protocols to specify reads, writes, modifications, and/or deletions of storage blocks. However, servers and higher-level applications typically access data in terms of files in a structured file system, relational database, or other high-level data structure. Each entity in the high-level data structure, such as a file or directory, or database table, node, or row, may be spread out over multiple storage blocks at various non-contiguous locations in the storage device. Thus, prefetching storage blocks based solely on their location in the storage device is unlikely to be effective in hiding WAN latency and bandwidth limits from storage clients.
  • In an embodiment, the storage block access optimizer 120 leverages an understanding of the semantics and structure of the high-level data structures associated with the storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. To do this, the storage block access optimizer 120 must be able to determine the association between storage blocks and their corresponding high-level data structure entities. An embodiment of the storage block access optimizer 120 uses an inferred storage structure database (ISSD) 123 to match storage blocks with their associated entity in the high-level data structure. For example, given a specific storage block location, the storage block access optimizer 120 may use the ISSD 123 to identify the file or directory in a file system, or the database table, record, or node, that is using this storage block to store some or all of its data.
  • Once the storage block access optimizer 120 has identified the high-level data structure entity associated with a storage block, the storage block access optimizer 120 may employ a number of different techniques to predict which additional storage blocks are likely to be requested by a storage client. For example, storage block access optimizer 120 may observe requests from a storage client 139 for storage blocks from the virtual data storage array 137, identify the high-level data structure entities associated with the requested storage blocks, and select additional storage blocks associated with these or other high-level data structure entities for prefetching. These types of storage block prefetching techniques are referred to as reactive prefetching. In another example, the storage block access optimizer 120 may analyze entities in the high-level data structures, such as files, directories, or database entities, to identify specific entities or portions thereof that are likely to be requested by the storage client 139. Using the ISSD 123, the storage block access optimizer 120 identifies storage blocks corresponding with these identified entities or portions thereof and prefetches these storage blocks for storage in the block read cache 147 at the branch location 102. These types of storage block prefetching techniques are referred to as policy-based prefetching. Further examples of reactive and policy-based prefetching are discussed below. Embodiments of the storage block access optimizer 120 may utilize any combination of reactive and policy-based prefetching techniques to select storage blocks to be prefetched and stored in the block read cache 147 at the branch location 102.
  • In a further embodiment, the branch location 102 and data center location 101 may optionally include network optimizers 125 for improving the performance of data communications over the WAN between branches and/or the data center. Network optimizers 125 can improve actual and perceived WAN network performance using techniques including compressing data communications; anticipating and prefetching data; caching frequently accessed data; shaping and restricting network traffic; and optimizing usage of network protocols. In an embodiment, network optimizers 125 may be used in conjunction with virtual data storage array interfaces 107 and 135 to further improve virtual storage array 137 performance for storage blocks accessed via the WAN 130. In other embodiments, network optimizers 125 may ignore or pass through virtual storage array 137 data traffic, relying on the virtual storage array interfaces 107 and 135 at the data center 101 and branch location 102 to optimize WAN performance.
  • Further embodiments of the invention may be used in different network architectures. For example, a data center virtual storage array interface 107 may be connected directly between WAN 130 and a physical data storage array 103, eliminating the need for a data center LAN. Similarly, a branch location virtual storage array interface 135, implemented for example in the form of a software application executed by a storage client computer system, may be connected directly with WAN 130, such as the internet, eliminating the need for a branch location LAN. In another example, the data center and branch location virtual data storage array interfaces 107 and 135 may be combined into a single unit, which may be located at the branch location 102.
  • FIGS. 2A-2B illustrate methods of prefetching storage blocks to improve virtualized data storage system performance according to embodiments of the invention. FIG. 2A illustrates a method 200 of performing reactive prefetching of storage blocks according to an embodiment of the invention. Step 205 receives a storage block read request from a storage client at the branch location. In an embodiment, the storage block read request may be received by a branch location virtual data storage array interface.
  • In response to the receipt of the storage block read request in step 205, decision block 210 determines if the requested storage block has been previously retrieved and stored in the storage block read cache at the branch location. If so, step 220 retrieves the requested storage block from the storage block read cache and returns it to the requesting storage client. In an embodiment, if the system includes a data center virtual storage array interface, then step 220 also forwards the storage block read request back to the data center virtual storage array interface for use in identifying additional storage blocks likely to be requested by the storage client in the future.
  • If the storage block read cache at the branch location does not include the requested storage block, step 215 retrieves the requested storage block via a WAN connection from the virtual storage array data located in a physical data storage at the data center. In an embodiment, a branch location virtual storage array interface forwards the storage block read request to the data center virtual storage array interface via the WAN connection. The data center virtual storage array interface then retrieves the requested storage block from the physical storage array and returns it to the branch location virtual storage array interface, which in turn provides this requested storage block to the storage client. In a further embodiment of step 215, a copy of the retrieved storage block may be stored in the storage block read cache for future accesses.
  • During and/or following the retrieval of the requested storage block from the virtual storage array or virtual storage array cache, steps 225 to 250 prefetch additional storage blocks likely to be requested by the storage client in the near future. Step 225 identifies the high-level data structure entity associated with the requested storage block. Typical block storage protocols, such as iSCSI and FCP, specify block read requests using a storage block address or identifier. However, these storage block read requests do not include any identification of the high-level data structure, such as a file, directory, or database entity, that is associated with this storage block. Therefore, an embodiment of step 225 accesses an ISSD to identify the high-level data structure associated with the requested storage block.
  • In an embodiment, step 225 provides the ISSD with the storage block address or identifier. In response, the ISSD returns an identifier of the high-level data structure entity associated with the requested storage block. The identifier of the high-level data structure entity may be an inode or similar file system identifier or a database storage structure identifier, such as a database table or B-tree node. In a further embodiment, the ISSD also includes a location within the high-level data structure entity corresponding with the requested storage block. For example, step 225 may provide a storage block identifier to the ISSD and in response receive the inode or other file system identifier for a file stored in this storage block. Additionally, the ISSD can return an offset, index, or other file location indicator that specifies the portion of this file stored in the storage block.
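  • The ISSD lookups described for steps 225 and 235 might be sketched as follows; the class, method names, and in-memory tables are hypothetical, since the patent specifies only the lookups themselves, not a data structure:

```python
class InferredStorageStructureDatabase:
    """Illustrative ISSD: maps storage blocks to high-level entities
    (e.g., inodes or database nodes) and back."""

    def __init__(self):
        self.block_to_entity = {}    # block address -> (entity id, offset)
        self.entity_to_blocks = {}   # entity id -> {offset: block address}

    def lookup_entity(self, block_address):
        """Step 225: block address -> (entity id, offset within entity)."""
        return self.block_to_entity.get(block_address)

    def lookup_blocks(self, entity_id, offsets=None):
        """Step 235: entity id (and optional offsets) -> storage blocks."""
        blocks = self.entity_to_blocks.get(entity_id, {})
        if offsets is None:
            return sorted(blocks.values())
        return [blocks[o] for o in offsets if o in blocks]
```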
  • Using the identification of the high-level data structure entity and optionally the location provided by the ISSD, step 230 identifies additional high-level data structure entities or portions thereof that are likely to be requested by the storage client. There are a number of different techniques for identifying additional high-level data structure entities or portions thereof for prefetching that may be used by embodiments of step 230. Some of these are described in detail in co-pending U.S. patent application Ser. No. ______ [Attorney Docket Number R001420US], entitled “Virtual Data Storage System Optimizations”, filed ______, which is incorporated by reference herein for all purposes.
  • One example technique is to prefetch portions of the high-level data structure entity based on their adjacency or close proximity to the identified portion of the entity. For example, if step 225 determines that the requested storage block corresponds with a portion of a file from file offset 0 up to offset 4095, then step 230 may identify a second portion of this same file beginning with offset 4096 for prefetching. It should be noted that although these two portions are adjacent in the high-level data structure entity, their corresponding storage blocks may be non-contiguous.
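  • Using the illustrative ISSD class above, the adjacency technique might look like the following sketch; the 4096-byte region size and the lookahead parameter are assumptions:

```python
REGION = 4096  # bytes of a file covered by one storage block (assumed)

def adjacent_prefetch_candidates(issd, block_address, lookahead=2):
    """Identify blocks holding the file regions just after the one read.
    The returned blocks may be non-contiguous on the storage device."""
    hit = issd.lookup_entity(block_address)
    if hit is None:
        return []
    entity_id, offset = hit
    next_offsets = [offset + REGION * i for i in range(1, lookahead + 1)]
    return issd.lookup_blocks(entity_id, next_offsets)
```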
  • Another example technique is to identify the type of high-level data structure entity, such as a file of a specific format, a directory in a file system, or a database table, and apply one or more heuristics to identify additional portions of this high-level data structure entity or a related high-level data structure entity for prefetching. For example, applications employing a specific type of file may frequently access data at a specific location within these files, such as at the beginning or end of the file. Using knowledge of this application or entity-specific behavior, step 230 may identify these frequently accessed portions of the file for prefetching.
  • Yet another example technique monitors the times at which high-level data structure entities are accessed. High-level data structure entities that are accessed at approximately the same time are associated together by the virtual storage array architecture. If any one of these associated high-level data structure entities is later accessed again, an embodiment of step 230 identifies one or more associated high-level data structure entities that were previously accessed at approximately the same time as the requested high-level data structure entity for prefetching. For example, a storage client may have previously requested storage blocks from files A, B, and C at approximately the same time, such as within a minute of each other. Based on this previous access pattern, if step 225 determines that a requested storage block is associated with file A, step 230 may identify all or portions of files B and C for prefetching.
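  • A minimal sketch of this temporal-association technique, assuming a 60-second window (matching the "within a minute" example) and simple in-memory structures:

```python
import time
from collections import defaultdict

WINDOW = 60.0  # seconds; "approximately the same time" (assumed)

class CoAccessTracker:
    def __init__(self):
        self.recent = []                      # (timestamp, entity_id)
        self.associates = defaultdict(set)    # entity_id -> co-accessed ids

    def record_access(self, entity_id, now=None):
        now = time.time() if now is None else now
        # Keep only accesses within the association window.
        self.recent = [(t, e) for t, e in self.recent if now - t <= WINDOW]
        for _, other in self.recent:
            if other != entity_id:
                self.associates[entity_id].add(other)
                self.associates[other].add(entity_id)
        self.recent.append((now, entity_id))

    def prefetch_candidates(self, entity_id):
        return sorted(self.associates.get(entity_id, ()))

# Files A, B, and C accessed within a minute of one another...
t = CoAccessTracker()
t.record_access("A", 0.0); t.record_access("B", 10.0); t.record_access("C", 20.0)
# ...so a later request touching file A nominates B and C for prefetching.
assert t.prefetch_candidates("A") == ["B", "C"]
```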
  • In still another example technique, step 230 analyzes the high-level data structure entity associated with the requested storage block to identify related portions of the same or other high-level data structure entity for prefetching. For example, application files may include references to additional files, such as overlay files or dynamically loaded libraries. Similarly, a database table may include references to other database tables. Once step 225 identifies the high-level data structure entity associated with a requested storage block, step 230 may use an analysis of this high-level data structure entity to identify additional referenced high-level data structure entities. The referenced high-level data structure entities may be prefetched. In an embodiment, the analysis of high-level data structure entities for references to other high-level data structure entities may be performed asynchronously with method 200.
  • Step 230 identifies all or portions of one or more high-level data structure entities for prefetching based on the high-level data structure entity associated with the requested storage block. However, as discussed above, storage clients specify data access requests in terms of storage blocks, not high-level data structure entities such as files, directories, or database tables. Thus, step 235 identifies one or more storage blocks corresponding with the high-level data structure entities identified for prefetching in step 230. In an embodiment, step 235 provides the ISSD with identifiers for one or more high-level data structure entities, such as the inodes of files or similar identifiers for other types of file systems or database storage structures. Optionally, step 235 also provides an offset, file location, or other type of address identifying a specific portion of a high-level data structure entity to be prefetched. In response, the ISSD returns an identifier of one or more storage blocks associated with the high-level data structure entities. These identified storage blocks are used to store the high-level data structure entities or portions thereof.
  • Decision block 240 determines if the storage blocks identified in step 235 have already been stored in the storage block read cache located at the branch location. In an embodiment, the storage block access optimizer at the data center maintains a record of all of the storage blocks that have copies stored in the storage block read cache. In an alternate embodiment, the storage block access optimizer queries the branch location virtual storage array interface to determine if copies of these identified storage blocks have already been stored in the storage block read cache.
  • In still a further embodiment, decision block 240 and the determination of whether an additional storage block has been previously retrieved and cached may be omitted. Instead, this embodiment can send all of the additional storage blocks identified by step 235 to the branch location virtual storage array interface to be cached. This embodiment can be used when WAN latency, rather than WAN bandwidth limitations, is the overriding concern.
  • If all of the identified storage blocks from step 235 are already stored in the storage block read cache, then method 200 proceeds from decision block 240 back to step 205 to await receipt of further storage block requests.
  • If some or all of the storage blocks identified in step 235 are not already stored in the storage block read cache, then step 245 retrieves these uncached storage blocks from the virtual storage array data located in a physical data storage on the data center LAN. The retrieved storage blocks are sent via the WAN connection from the data center location to the branch location. In an embodiment of step 245, the data center virtual storage array interface receives a request for the uncached identified storage blocks from the storage block access optimizer and, in response, accesses the physical data storage array to retrieve these storage blocks. The data center virtual storage array interface then forwards these storage blocks to the branch location virtual storage array interface via the WAN connection.
  • Step 250 stores the storage blocks identified for prefetching in the storage block read cache. In an embodiment of step 250, the branch location virtual storage array interface receives one or more storage blocks from the data center virtual storage array interface via the WAN connection and stores these storage blocks in the storage block read cache. Following step 250, method 200 proceeds to step 205 to await receipt of further storage block requests. The storage blocks added to the storage block read cache in previous iterations of method 200 may be available for fulfilling storage block read requests.
  • Method 200 may be performed by a branch virtual data storage array interface, by a data center virtual data storage array interface, or by both virtual data storage array interfaces working in concert. For example, steps 205 to 220 of method 200 may be performed by a branch location virtual storage array interface and steps 225 to 250 of method 200 may be performed by a data center virtual storage array interface. In another example, all of the steps of method 200 may be performed by a branch location virtual storage array interface.
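  • Putting the pieces together, the flow of method 200 might be sketched as follows; read_cache, fetch_over_wan, and identify_related are assumed stand-ins for the storage block read cache, the WAN retrieval path, and steps 225-235:

```python
def handle_read(block_address, read_cache, fetch_over_wan, identify_related):
    # Steps 205-220: serve from the branch read cache when possible.
    if block_address in read_cache:
        data = read_cache[block_address]
    else:
        # Step 215: retrieve via the WAN; keep a copy for future reads.
        data = fetch_over_wan([block_address])[block_address]
        read_cache[block_address] = data
    # Steps 225-235: identify related blocks; decision block 240 skips
    # blocks already cached at the branch location.
    candidates = [b for b in identify_related(block_address)
                  if b not in read_cache]
    if candidates:
        # Steps 245-250: prefetch the uncached blocks over the WAN.
        read_cache.update(fetch_over_wan(candidates))
    return data
```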
  • FIG. 2B illustrates a method 255 of performing policy-based prefetching of storage blocks according to an embodiment of the invention. Step 260 selects a high-level data structure entity for analysis. Examples of selected high-level data structure entities include files, directories, and other file system entities such as inodes, as well as database entities such as tables, records, and B-tree nodes or other structures.
  • Step 265 analyzes the selected high-level data structure entity to identify additional portions of the same high-level data structure entity or all or portions of additional high-level data structure entities that are likely to be requested by the storage client. There are a number of different techniques for identifying additional high-level data structure entities or portions thereof for prefetching that may be used by embodiments of step 265. Some of these are described in detail in co-pending U.S. patent application Ser. No. ______ [Attorney Docket Number R001420US], entitled “Virtual Data Storage System Optimizations”, filed ______, which is incorporated by reference herein for all purposes.
  • One example technique is to identify the type of entity, such as a file of a specific format, a directory in a file system, or a database table, and apply one or more heuristics to identify additional portions of this high-level data structure entity or a related high-level data structure entity for prefetching. For example, applications employing a specific type of file may frequently access data at a specific location within these files, such as at the beginning or end of the file. Using knowledge of this application or entity-specific behavior, step 265 may identify the beginning or end portions of these types of files for prefetching.
  • In another example technique, step 265 analyzes the selected high-level data structure entity to identify related portions of the same or other high-level data structure entities for prefetching. For example, application files may include references to additional files, such as overlay files or dynamically loaded libraries. Similarly, a database table may include references to other database tables. Step 265 may use an analysis of this high-level data structure entity to identify additional referenced high-level data structure entities. The referenced high-level data structure entities may be prefetched.
  • In still another example technique, step 265 may analyze application, virtual machine, or operating system specific files or other high-level data structure entities to identify additional high-level data structure entities for prefetching. For example, step 265 may analyze application or operating system log files to identify the sequence of files accessed during operations such as system or application start-up. These identified files may then be selected for prefetching.
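  • A log-driven variant of this technique might be sketched as follows; the log format shown is purely hypothetical:

```python
def startup_prefetch_list(log_lines):
    """Recover the order in which files were opened during start-up from
    an application or operating system log, for later prefetching."""
    files, seen = [], set()
    for line in log_lines:
        parts = line.split()
        # Hypothetical log format: "<timestamp> open <path>"
        if len(parts) == 3 and parts[1] == "open" and parts[2] not in seen:
            seen.add(parts[2])
            files.append(parts[2])
    return files
```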
  • Once step 265 has identified one or more high-level data structure entities or portions thereof for prefetching, step 270 identifies one or more storage blocks corresponding with these entities. As discussed above, storage clients specify data access requests in terms of storage blocks, not high-level data structure entities such as files, directories, or database tables. In an embodiment, step 270 provides the ISSD with identifiers for one or more high-level data structure entities, such as the inodes of files or similar identifiers for other types of file systems or database storage structures. Optionally, step 270 also provides an offset, file location, or other type of address identifying a specific portion of a high-level data structure entity to be prefetched. In response, the ISSD returns an identifier of one or more storage blocks associated with the high-level data structure entities. These storage blocks are used to store the high-level data structure entities or portions thereof.
  • Decision block 275 determines if the storage blocks identified in step 270 have already been stored in the storage block read cache located at the branch location. In an embodiment, the storage block access optimizer at the data center maintains a record of all of the storage blocks that have copies stored in the storage block read cache. In an alternate embodiment, the storage block access optimizer queries the branch location virtual storage array interface to determine if copies of these identified storage blocks have already been stored in the storage block read cache.
  • In still a further embodiment, decision block 275 and the determination of whether an additional storage block has been previously retrieved and cached may be omitted. Instead, this embodiment can send all of the additional storage blocks identified by step 270 to the branch location virtual storage array interface to be cached. This embodiment can be used when WAN latency, rather than WAN bandwidth limitations, is the overriding concern.
  • If all of the identified storage blocks from step 270 are already stored in the storage block read cache, then method 255 proceeds from decision block 275 to step 280. Optional step 280 determines if there are additional high-level data structure entities that should be included in the analysis of method 255, based on the results of step 265. For example, if steps 260 and 265 analyze a first file and identify a second file that should be prefetched, step 280 may include this second file in a list of high-level data structure entities to be analyzed by method 255, potentially identifying additional files from the analysis of this second file.
  • If some or all of the storage blocks identified in step 270 are not already stored in the storage block read cache, then step 285 retrieves these uncached storage blocks from the virtual storage array data located in a physical data storage on the data center LAN. The retrieved storage blocks are sent via the WAN connection from the data center location to the branch location. In an embodiment of step 285, the data center virtual storage array interface receives a request for the uncached identified storage blocks from the storage block access optimizer and accesses the physical data storage array to retrieve these storage blocks. The data center virtual storage array interface then forwards these storage blocks to the branch location virtual storage array interface via the WAN connection.
  • Step 290 stores the storage blocks identified for prefetching in the storage block read cache. In an embodiment of step 290, the branch location virtual storage array interface receives one or more storage blocks from the data center virtual storage array interface via the WAN connection and stores these storage blocks in the storage block read cache. Following step 290, method 255 proceeds to optional step 280. The storage blocks added to the storage block read cache in previous iterations of method 255 may be available for fulfilling storage block read requests.
  • Following step 280 or, if step 280 is omitted, decision block 275 or step 290, an embodiment of method 255 proceeds to step 260 to select another high-level data structure entity for analysis.
  • In an embodiment, steps 285 and 290 may be performed asynchronously or in parallel with further iterations of method 255. For example, a storage block access optimizer may direct the data center virtual storage array interface to retrieve one or more storage blocks. While this operation is being performed, the storage block access optimizer may continue with the execution of method 255 by proceeding to optional step 280 to identify further high-level data structure entities for analysis, and/or returning to step 260 for an additional iteration of method 255. When the data center virtual storage array interface has completed its retrieval of one or more storage blocks as requested, step 290 may be performed in the background and in parallel to transfer these storage blocks via the WAN to the branch location for storage in the storage block read cache.
  • Method 255 may be performed by a branch virtual data storage array interface, by a data center virtual data storage array interface, or by both virtual data storage array interfaces working in concert. For example, steps 260 to 285 of method 255 may be performed by a data center virtual storage array interface. In another example, all of the steps of method 255 may be performed by a branch location virtual storage array interface.
  • Embodiments of both methods 200 and 255 utilize the ISSD to identify high-level data structure entities from storage blocks and/or to identify storage blocks from their associated high-level data structure entities. An embodiment of the invention creates the ISSD by initially searching high-level data structure entities, such as a master file table, allocation table or tree, or other types of file system metadata structures, to identify the high-level data structure entities corresponding with the storage blocks. An embodiment of the invention may further recursively analyze other high-level data structure entities, such as inodes, directory structures, files, and database tables and nodes, that are referenced by the master file table or other high-level data structures. This initial analysis may be performed by either the branch location or data center virtual storage array interface as a preprocessing activity or in the background while processing storage client requests. In an embodiment, the ISSD may be updated frequently or infrequently, depending upon the desired prefetching performance. Embodiments of the invention may update the ISSD by periodically scanning the high-level data structure entities or by monitoring storage client activity for changes or additions to the virtual storage array, which is then used to update the affected portions of the ISSD.
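  • A sketch of this initial ISSD construction, assuming a toy metadata interface (a read_entry accessor returning an entry with a block map and child references) in place of a real master file table, and reusing the illustrative ISSD class above:

```python
def build_issd(issd, fs, root_entity):
    """Recursively walk file system metadata, recording which storage
    blocks each high-level entity occupies."""
    stack = [root_entity]
    while stack:
        entity_id = stack.pop()
        entry = fs.read_entry(entity_id)        # assumed metadata accessor
        for offset, block in entry.block_map.items():
            issd.block_to_entity[block] = (entity_id, offset)
            issd.entity_to_blocks.setdefault(entity_id, {})[offset] = block
        # Directories, tables, etc. reference further entities to analyze.
        stack.extend(entry.children)
```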
  • As described above, embodiments of the invention prefetch storage blocks from the data center storage array and cache these storage blocks in a storage block cache located at the branch location. In some embodiments, the storage block cache may be smaller than the virtual storage array. Thus, when the storage block cache is full, the branch or data center virtual storage array interface may need to occasionally evict or remove some storage blocks from the storage block cache to make room for other prefetched storage blocks. In an embodiment, the branch virtual storage array interface may use any cache replacement scheme or policy known in the art, such as a least recently used (LRU) cache management policy.
  • In another embodiment, the storage block cache replacement policy of the storage block cache is based on an understanding of the relationship between storage blocks and corresponding high-level data structure entities, such as file system or database entities. In this embodiment, even though the storage block cache operates on the basis of storage blocks, the storage block cache replacement policies determine whether to retain or evict storage blocks in the storage block cache based on their associations to files or other high level data structure entities.
  • For example, when a virtual storage array interface needs to evict storage blocks from the storage block cache to create free space for other prefetched storage blocks, an embodiment of the virtual storage interface uses information associating storage blocks with corresponding files to evict all of the storage blocks associated with a single file, rather than evicting some storage blocks from one file and some from another file. In this example, storage blocks are not necessarily evicted based on their own usage alone, but on the overall usage of their associated file or other high-level data structure entity.
  • As another example, the storage block cache may elect to preferentially retain storage blocks including file system metadata and/or directory structures over other storage blocks that include file data only.
  • In yet another example, the storage block cache may identify files or other high-level data structure entities that have not been accessed recently, and then use the ISSD to identify and select the storage blocks corresponding with these infrequently used files for eviction.
  • Although these examples of storage block cache replacement policies are discussed with reference to file and file systems, similar techniques can be applied to databases and other types of high-level data structure entities.
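  • As an illustration of these policies, file-granularity eviction might be sketched as follows; the cache and access-time structures are assumptions of this sketch:

```python
def evict_by_file(read_cache, issd, last_file_access, needed_blocks):
    """Evict all blocks of the least recently used files, rather than the
    least recently used individual blocks, until enough space is freed."""
    freed = 0
    # Oldest-accessed files first; a policy like the ones above might also
    # exempt file system metadata from this loop entirely.
    for entity_id in sorted(last_file_access, key=last_file_access.get):
        for block in issd.entity_to_blocks.get(entity_id, {}).values():
            if read_cache.pop(block, None) is not None:
                freed += 1
        if freed >= needed_blocks:
            break
    return freed
```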
  • In addition to selectively evicting storage blocks based on their associated high-level data structure entities, an embodiment of the virtual storage array system can also include cache policies to preferentially retain or “pin” specific storage blocks in the storage block cache, regardless of their usage or other factors. These cache retention policies can ensure that specific storage blocks are always accessible at the branch location, even at times when the WAN is unavailable, since copies of these storage blocks will always exist in the storage block cache.
  • In this embodiment, a user, administrator, or administrative application may specify all or a portion of the virtual storage array for preferential retention or pinning in the storage block cache. Upon receiving a request to pin some or all of the virtual storage array data in the storage block cache, the virtual storage array system needs to determine if the storage block cache has sufficient additional capacity to store the specified storage blocks. If the storage block cache has sufficient capacity, the virtual storage array system reserves space in the storage block cache for the specified storage blocks; otherwise, the request is denied.
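  • The admission check for a pinning request might be sketched as follows; the names and the block-count units are assumptions of this sketch:

```python
def request_pin(blocks_to_pin, cache_capacity, pinned, resident):
    """Grant the pin and return the blocks still to be fetched over the
    WAN, or return None to deny the request for lack of capacity."""
    new_blocks = set(blocks_to_pin) - pinned
    if len(pinned) + len(new_blocks) > cache_capacity:
        return None               # deny: cache cannot hold the pinned set
    pinned |= new_blocks          # reserve space for the pinned blocks
    # Pinned blocks not yet cached are retrieved later by a low-priority,
    # asynchronous prefetch (see below), possibly over hours or days.
    return new_blocks - set(resident)
```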
  • If the storage block cache has sufficient capacity to satisfy the pinning request, the cache also may initiate a proactive prefetch process to retrieve any requested storage blocks that are not already in the storage block cache from the data center via the WAN. For large pinning requests, such as an entire virtual storage array, it may take hours or days for this proactive prefetch to be completed. In a further embodiment, this proactive prefetching of pinned storage blocks may be performed asynchronously and at a lower priority than storage clients' requests for virtual storage array read operations, associated prefetching (discussed above), and the virtual storage array write operations (discussed below). This embodiment may be used to deploy data to a new branch location. For example, upon activation of the branch storage array interface, the virtual storage array data is copied asynchronously via the WAN to the branch location storage block cache. Although this data transfer may take some time to complete, storage clients at this new branch location can access virtual storage array data immediately using the virtual storage array read and write operations, with the above-described storage block prefetching hiding the bandwidth and latency limitations of the WAN when storage clients access storage blocks that have yet to be copied to the branch location.
  • In another embodiment, the storage block cache may allow users, administrators, and administrative applications to directly specify the pinning of high-level data structure entities, such as files or database elements, as opposed to specifying storage blocks for pinning in the storage block cache. In this embodiment, the virtual storage array uses the ISSD to identify storage blocks corresponding with the specified high-level data structure entities. In a further embodiment, the virtual storage array may allow users, administrators, and administrative applications to specify only a portion of high-level data structure entities for pinning, such as file metadata and frequently used indices within high-level data structure entities. The virtual storage array then uses the associations between storage blocks and high-level data structure entities from the ISSD to identify specific storage blocks to be pinned in the storage block cache.
  • Similarly, the virtual storage array cache can be used to hide latency and bandwidth limitations of the WAN during virtual storage array writes. FIG. 3 illustrates a method 300 of processing storage block write requests to improve virtualized data storage system performance according to an embodiment of the invention.
  • An embodiment of method 300 starts with step 305 receiving a storage block write request from a storage client within the branch location LAN. The storage block write request may be received from a storage client by a branch location virtual storage interface.
  • In response to the receipt of the storage block write request, decision block 310 determines if the storage block write cache in the virtual storage array cache at the branch location is capable of accepting additional write requests or is full. In an embodiment, the virtual storage array cache may use some or all of its storage as a storage block write cache for pending virtual storage array write operations.
  • If the storage block write cache in the virtual storage array cache can accept an additional storage block write request, then step 315 stores the storage block write request, including the storage block data to be written, in the storage block write cache. Step 320 then sends a write acknowledgement to the storage client. Following the storage client's receipt of this write acknowledgement, the storage client believes its storage block write request is complete and can continue to operate normally. Step 325 then transfers the queued written storage block via the WAN to the physical storage array at the data center LAN. This transfer may occur in the background and asynchronously with the operation of storage clients.
  • While a storage block write request is queued in the storage block write cache and waiting to be transferred to the data center, a storage client may wish to access this storage block for a read or an additional write. In this situation, the virtual storage array interface intercepts the storage block access request. In the case of a storage block read, the virtual storage array interface provides the storage client with the previously queued storage block. In the case of a storage block write, the virtual storage array interface will update the queued storage block data and send a write acknowledgement to the storage client for this additional storage block access.
  • Conversely, if decision block 310 determines that the storage block write cache cannot accept an additional storage block write request, then step 330 immediately transfers the storage block via the WAN to the physical storage array at the data center LAN. In an embodiment of step 335, the branch location virtual storage array interface receives a write confirmation that the storage block write operation is complete. This confirmation may be received from a data center virtual storage array interface or directly from a physical storage array or other data storage device. Following completion of this transfer, step 340 sends a write acknowledgement to the storage client, allowing the storage client to resume normal operation.
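  • A compact sketch of method 300's two write paths; the queue and WAN interfaces shown are assumed stand-ins:

```python
def handle_write(block_address, data, write_cache, capacity, send_over_wan):
    if len(write_cache) < capacity:          # decision block 310
        # Steps 315-320: queue the write and acknowledge immediately; a
        # background task (step 325) later drains the queue over the WAN.
        # A later read or write of the same block is served from, or
        # applied to, this queued entry.
        write_cache[block_address] = data
        return "ack"
    # Steps 330-340: cache full, so transfer synchronously and acknowledge
    # only after the data center confirms the write.
    send_over_wan({block_address: data})
    return "ack"
```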
  • In a further embodiment, a branch location virtual storage array interface may throttle storage block read and/or write requests from storage clients to prevent the virtual storage array cache from filling up under typical usage scenarios.
  • To prevent data loss or corruption in the face of unexpected events such as power failures, typical file systems and databases issue data writes to block storage devices in a specific order and with certain dependencies to maintain internal consistency of structures and ensure the desired semantics for modifications. For example, most transactional databases employ write ahead logging techniques when modifying index structures, so that in case of failure, any operations that are logged but not completed can be replayed upon restart.
  • Embodiments of the virtual storage array use write order preservation to maintain data consistency. In these embodiments, the storage block cache tracks the order in which write requests are received and can use this ordering information when forwarding the storage block write requests to the physical storage array via the WAN, as described by step 325.
  • FIGS. 4A-4C illustrate three write order preservation policies according to an embodiment of the invention. FIG. 4A illustrates the contents of an example storage block write WAN queue 400. Storage block write WAN queue 400 is used by embodiments of a branch virtual storage array interface to schedule the transmission of storage blocks written by storage clients at the branch location from the storage block write cache to the physical storage array at the data center location. In the example storage block write WAN queue 400, a sequence of ten write operations from one or more branch storage clients is recorded. For each write operation in this example sequence, the storage block write WAN queue 400 includes a reference to the storage block written by this write operation. For example, the first or earliest write operation received, write operation 1, is a write to storage block 4, and the last or most recent write operation received, write operation 10, is a write to storage block 5.
  • In an embodiment of the invention, a first write order preservation policy is to preserve the semantics of the original file system, database, or other high-level data structure entity by forwarding all block write requests over the WAN to the physical storage array in the same order that they were received by the virtual storage array cache. Thus, the branch virtual storage array interface will communicate written storage blocks to the physical storage array at the data center via the WAN in the same sequence as shown in example storage block write WAN queue 400.
  • When using this policy, the image of the file system or database that exists on the physical storage array is always an internally consistent replica of the modifications made by storage clients at some point in time. Additionally, snapshots of the virtual storage array data, such as snapshots A and B, are guaranteed to be internally consistent, because they include all of the write operations prior to the snapshot time. However, if the same storage blocks are written to multiple times prior to their transfer to the physical storage array, this write order preservation policy requires the storage block write cache to keep track of multiple versions of these storage blocks and forward all of the write operations to these different versions of the storage block in the order received. Moreover, this policy requires more WAN bandwidth because every version of a storage block in the storage block write WAN queue must be forwarded to the data center, even if these versions are superseded by more recent versions of the storage block already in the storage block write WAN queue. For example, in storage block write WAN queue 400, storage block 3 is written to in write operations 2, 4, and 7. Thus, the storage block write cache must transmit all three of these versions of storage block 3 in the order that they were received.
  • In another embodiment of the invention, a second write order preservation policy forwards only the most recently written version of each storage block in the storage block write cache. FIG. 4B illustrates an example storage block WAN transmission order 405 according to this embodiment of the invention. Example storage block WAN transmission order 405 is based on the example storage block write WAN queue 400 shown in FIG. 4A. In example storage block WAN transmission order 405, only the most recent versions of each storage block in storage block write WAN queue 400 are communicated to the data center via the WAN. For example, write operation 5 in storage block write WAN queue 400 is the most recent version of storage block 4. Similarly, write operations 7, 8, 9, and 10 in storage block write WAN queue 400 are the most recent versions of storage blocks 3, 1, 2, and 5, respectively. Thus, write operations 5, 7, 8, 9, and 10 are the only write operations in storage block write WAN queue 400 that need to be transmitted to the physical storage array at the data center, as shown by example storage block WAN transmission order 405. The remaining storage block write operations in the storage block write WAN queue 400 may be discarded.
  • The most recent version policy shown by FIG. 4B reduces the WAN bandwidth required, because multiple versions of the same storage block need not be transmitted. However, by ignoring the write ordering dependencies of the original sequence of write operations, the virtual storage array data on the physical storage array may not be internally consistent until all of the write operations in the storage block write cache have been processed, if necessary, and transmitted back to the physical storage device at the data center.
  • Additionally, this policy does not preserve consistent snapshots of the virtual storage array, because some write operations prior to a snapshot may be omitted from the storage block WAN transmission order 405 if there are further writes to the same storage block after the snapshot time. For example, write operations 1, 2, and 3 from the storage block write WAN queue 400, which occur before the time of snapshot A, are omitted from the storage block WAN transmission order 405. Thus, snapshot A will not be internally consistent because it is missing the most recent versions of storage blocks 4 and 3 prior to the time of snapshot A.
  • In another embodiment of the invention, a third write order preservation policy forwards the most recently written versions of storage blocks before each snapshot time. FIG. 4C illustrates an example storage block WAN transmission order 410 according to this embodiment of the invention. Example storage block WAN transmission order 410 is based on the example storage block write WAN queue 400 shown in FIG. 4A. In example storage block WAN transmission order 410, the most recent versions of each storage block before each snapshot time in storage block write WAN queue 400 are communicated to the data center via the WAN.
  • For example, storage block write WAN queue 400 includes two snapshot times, snapshot A and snapshot B. For each snapshot time, an embodiment of the storage block write cache forwards only the most recent version of storage blocks updated by write operations prior to this snapshot time. For example, storage block 4 is updated by write operations 1 and 3 and storage block 3 is updated by write operation 2 prior to snapshot time A. In this example, the storage block WAN transmission order 410 output by the storage block write cache will include write operations 2 and 3 to update storage blocks 3 and 4, reflecting the most recent updates of these storage blocks prior to the snapshot time A. In this example, write operation 1 is omitted because write operation 3 is a more recent update of the same storage block before the snapshot time A.
  • Similarly, the storage block WAN transmission order 410 includes write operations 5, 6, and 7, reflecting the most recent updates of storage blocks 4, 2, and 3, respectively, prior to the snapshot time B. In this example, the storage block WAN transmission order 410 may include multiple versions of the same storage block if there are one or more snapshots between the associated write operations. For example, write operations 3 and 5 are both included in storage block WAN transmission order 410 because they update storage block 4 prior to and following the snapshot time A.
  • Additionally, the storage block WAN transmission order 410 includes write operations 8, 9, and 10, which are the most recent updates to storage blocks 1, 2, and 5, respectively, following snapshot time B.
  • In this embodiment, although the physical storage array may contain an inconsistent view of the virtual storage array data at some arbitrary points in time, this embodiment ensures that the virtual storage array data will be internally consistent at the times of snapshots.
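  • The three policies can be illustrated against the queue of FIGS. 4A-4C with the following sketch; the queue representation, with "SNAPSHOT" markers separating snapshot epochs, is an assumption of this sketch:

```python
QUEUE = [(1, 4), (2, 3), (3, 4), "SNAPSHOT",          # snapshot A
         (4, 3), (5, 4), (6, 2), (7, 3), "SNAPSHOT",  # snapshot B
         (8, 1), (9, 2), (10, 5)]                     # (write op, block)

def policy_preserve_order(queue):
    """First policy: forward every write in the order received."""
    return [entry[0] for entry in queue if entry != "SNAPSHOT"]

def policy_most_recent(queue):
    """Second policy: forward only the last write to each storage block."""
    latest = {block: op for op, block in
              [e for e in queue if e != "SNAPSHOT"]}
    return sorted(latest.values())

def policy_per_snapshot(queue):
    """Third policy: last write to each block within each snapshot epoch."""
    order, latest = [], {}
    for entry in list(queue) + ["SNAPSHOT"]:
        if entry == "SNAPSHOT":          # epoch boundary: flush this epoch
            order.extend(sorted(latest.values()))
            latest = {}
        else:
            op, block = entry
            latest[block] = op
    return order

assert policy_most_recent(QUEUE) == [5, 7, 8, 9, 10]            # FIG. 4B
assert policy_per_snapshot(QUEUE) == [2, 3, 5, 6, 7, 8, 9, 10]  # FIG. 4C
```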
  • As discussed above, the data of a virtual storage array may be stored in a physical storage array or other data storage device. In some applications, such as with virtual machine applications, the physical storage blocks used by the virtual storage array belong to a virtual machine file system, such as VMFS. In these applications, there may be many layers of abstraction between virtual storage array storage blocks and the high-level data structure entities used by a virtual machine application and its hosted applications. Because of this, embodiments of the invention may perform multiple transformations to identify high-level data structure entities corresponding with given virtual storage array storage blocks and, once these high-level data structure entities are identified, may perform multiple optimizations to attempt to predict and prefetch virtual storage array storage blocks that will be requested by a storage client in the near future.
  • FIG. 5 illustrates an example arrangement 500 for successively applying transformations and optimizations to improve virtualized data storage system performance according to an embodiment of the invention. In example 500, successive levels of translation may be used to convert storage block requests to corresponding intermediate level data structure entities and then into corresponding high-level data structure entities. Example arrangement 500 includes a physical data storage system 505, such as a physical data storage array or file server. The physical data storage system 505 may be associated with a file system or volume manager that provides an interface for accessing physical storage blocks. In this example arrangement 500, a virtual storage array interface receives a request for a virtual storage array storage block from a storage client. This request for a virtual storage array storage block is converted by one or more virtual storage array interfaces to a request 507 for a corresponding physical storage block in the physical data storage system 505.
  • To identify additional physical storage blocks for prefetching, example arrangement 500 includes a physical storage block to virtual machine storage structure translation module 510. Module 510 maps a given physical storage block to a corresponding portion of a virtual machine storage structure 515. For example, virtual machine storage structure 515 may be a VMFS storage volume. The VMFS storage volume appears as a logical storage unit, such as a LUN, to the virtual storage array interface. In this example, the VMFS storage volume may include multiple virtual machine disk images. Although the VMFS storage volume appears as a single logical storage unit to the storage client, each disk image within the VMFS storage volume appears to a virtual machine application as a separate virtual logical storage unit. In this example, module 510 may identify a portion of a virtual logical storage unit within the VMFS storage volume as corresponding with the requested physical storage block.
  • Module 520 maps the identified portion of a virtual machine storage structure, such as a virtual logical storage unit within a VMFS storage volume, to one or more corresponding virtual file system storage blocks within a virtual file system 525. Virtual file system 525 may be any type of file system implemented within a virtual logical storage unit. Examples of virtual file systems include FAT, NTFS, and the ext family of file systems. For example, a virtual logical storage unit may be a disk image used by a virtual machine application. The disk image represents the data of a virtual data storage device as virtual storage blocks. The virtual storage blocks in this disk image are organized according to the virtual file system 525.
  • As with physical storage blocks and physical file systems, virtual machine applications and their hosted applications typically access data in terms of files in the virtual file system 525, rather than storage blocks. Moreover, high-level data structure entities within the virtual file system, such as files or directories, may be spread out over multiple non-contiguous virtual storage blocks in the virtual file system 525. Thus, a virtual file system inferred storage structure database 530 and virtual file system block access optimizer 532 leverage an understanding of the semantics and structure of the high-level data structures associated with the virtual storage blocks to predict which virtual storage blocks are likely to be requested by a storage client in the near future. The virtual file system ISSD 530 and virtual file system block access optimizer 532 are similar to the ISSD and block access optimizer, respectively, for physical data storage discussed above.
  • In arrangement 500, the virtual file system block access optimizer 532 receives an identification of one or more virtual storage blocks in the virtual file system 525 that correspond with the requested physical storage block in request 507. The virtual file system block access optimizer 532 uses the virtual file system ISSD 530 to identify one or more virtual file system high-level data structure entities, such as virtual file system files, corresponding with these virtual file system storage blocks. The virtual file system block access optimizer 532 uses its knowledge of the high-level data structure entities and reactive and/or policy-based prefetching techniques to identify one or more additional high-level data structure entities or portions thereof for prefetching. The virtual file system block access optimizer 532 then uses the virtual file system ISSD 530 to identify additional virtual storage blocks in the virtual file system 525 corresponding with these additional high-level data structure entities or portions thereof. The additional virtual storage blocks in the virtual file system 525 are selected for prefetching.
  • Once the virtual file system block access optimizer 532 has selected one or more virtual file system storage blocks for prefetching, a request 533 for these virtual file system storage blocks is generated. In an embodiment of arrangement 500, module 520 translates the prefetch request 533 for virtual file system storage blocks into an equivalent prefetch request 535 for a portion of the virtual machine storage structure. Then, module 510 translates the prefetch request 535 for a portion of the virtual machine storage structure into an equivalent prefetch request 537 for physical storage blocks in the physical data storage system 505. The physical storage blocks indicated by request 537 correspond with the virtual file system storage blocks from request 533. These requested physical storage blocks may be retrieved from the physical data storage system 505 and communicated via the WAN to a branch location virtual storage array interface for storage in a storage block read cache.
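Translating the prefetch selection back down the stack (requests 533, 535, and 537) is then the inverse of the earlier mappings. A minimal sketch, reusing the hypothetical Extent records and translator objects from the sketches above:

```python
def fs_blocks_to_physical(fs_blocks, guest_fs, extents, disk_image):
    """Invert modules 520 and 510 for a set of guest file system blocks,
    returning the physical blocks to fetch from data storage 505."""
    phys = []
    for b in fs_blocks:
        image_block = b + guest_fs.partition_start      # invert module 520
        for e in extents:                               # invert module 510
            if (e.disk_image == disk_image and
                    e.image_start <= image_block < e.image_start + e.length):
                phys.append(e.phys_start + (image_block - e.image_start))
                break
    return phys
```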
  • Arrangement 500 is one example of successively applying transformations and optimizations to improve virtualized data storage system performance according to an embodiment of the invention. Further embodiments of the invention may apply any number of successive transformations to physical storage blocks to identify associated high-level data structure entities. Additionally, once one or more associated high-level data structure entities have been identified, embodiments of the invention may apply optimizations at the level of high-level data structure entities or at any lower level of abstraction. For example, optimizations may be performed at the level of virtual machine file system files, virtual machine file system storage blocks, virtual machine storage structures, physical storage blocks, and/or at any other intermediate data structure level of abstraction.
  • FIG. 6 illustrates a method 600 of creating a data storage snapshot in a virtualized data storage system according to an embodiment of the invention. Method 600 begins with step 605, which initiates a virtual storage array checkpoint. A virtual storage array checkpoint may be initiated automatically by a branch location virtual storage array interface according to a schedule or based on criteria, such as the amount of data changed since the last checkpoint. In a further embodiment, a virtual storage array checkpoint may be initiated in response to a request for a virtual storage array snapshot from a system administrator or administration application.
  • To create a virtual storage array checkpoint, step 610 sets the branch location virtual storage array interface to a quiescent state. This entails completing any pending operations with storage clients (though not necessarily background operations between the branch location and data center virtual storage array interfaces, such as transferring new or updated storage blocks from the storage block write cache to the data center via the WAN). While in the quiescent state, the branch location virtual storage array interface will not accept any new storage operations from storage clients.
  • Once the branch location virtual storage array interface is set to a quiescent state, step 615 identifies new or updated storage blocks in the branch location virtual storage array cache. These new or updated storage blocks include data that has been created or updated by storage clients but has yet to be transferred via the WAN back to the data center LAN for storage in the physical data storage array.
  • Once all of the updated storage blocks have been identified, step 615 creates a checkpoint data structure. The checkpoint data structure specifies a time of checkpoint creation and the set of new and updated storage blocks at that moment in time. Following the creation of the checkpoint data structure, step 620 reactivates the branch location's virtual storage array. The branch location virtual storage array interface can resume servicing storage operations from storage clients. Additionally, the branch location virtual storage array interface may resume transferring new or updated storage blocks via the WAN to the data center LAN for storage in the physical data storage array. In a further embodiment, the virtual storage array cache may maintain a copy of an updated storage block even after a copy is transferred back to the data center LAN for storage. This allows subsequent snapshots to be created based on this data.
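A minimal sketch of steps 605 through 620, assuming a hypothetical branch interface object that tracks the set of new or updated ("dirty") blocks not yet written back across the WAN; the drain_pending_ops placeholder stands in for completing in-flight client operations:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Checkpoint:
    created_at: float        # time of checkpoint creation
    dirty_blocks: frozenset  # new/updated blocks at that moment

class BranchArrayInterface:
    def __init__(self):
        self.dirty = set()       # blocks written but not yet sent back
        self.quiescent = False
        self.checkpoints = []

    def drain_pending_ops(self):
        pass  # placeholder: wait for in-flight client operations to finish

    def create_checkpoint(self):
        self.quiescent = True            # step 610: refuse new client ops
        self.drain_pending_ops()
        cp = Checkpoint(time.time(), frozenset(self.dirty))  # step 615
        self.checkpoints.append(cp)
        self.quiescent = False           # step 620: resume servicing clients
        return cp
```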
  • In an embodiment, following the reactivation of the virtual storage array, the branch location virtual storage array interface preserves the updated storage blocks specified by the checkpoint data structure from further changes. If a storage client attempts to update a storage block that is associated with a checkpoint, an embodiment of the branch location virtual storage array interface creates a duplicate of this storage block in the virtual storage array cache to store the updated data. By making a copy of this storage block, rather than replacing it with further updated data, this embodiment preserves the data of this storage block at the time of the checkpoint for potential future reference.
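Continuing the sketch above, the copy-on-write behavior might look like the following; the versioned cache key is an assumption of this sketch, not a detail given by the specification:

```python
class CowBranchArrayInterface(BranchArrayInterface):
    def __init__(self):
        super().__init__()
        self.cache = {}  # block number (or versioned key) -> block contents

    def write_block(self, block_no, data):
        pinned = any(block_no in cp.dirty_blocks for cp in self.checkpoints)
        if pinned and block_no in self.cache:
            # preserve the checkpoint-time contents under a versioned key
            # instead of overwriting them with the new data
            version = self.checkpoints[-1].created_at
            self.cache[(block_no, version)] = self.cache[block_no]
        self.cache[block_no] = data
        self.dirty.add(block_no)
```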
  • Optionally, an embodiment of method 600 may initiate one or more additional virtual storage array checkpoints at later times or in response to criteria or conditions. Embodiments of the branch location virtual storage array interface may maintain any arbitrary number of checkpoint data structures and automatically delete outdated checkpoint data structures. For example, a branch location virtual storage array interface may maintain only the most recently created checkpoint data structure, or checkpoint data structures from the beginning of the most recent business day and the most recent hour.
  • At some point, a system administrator or administration application may request a snapshot of the virtual storage array data. A snapshot of the virtual storage array data represents the complete set of virtual storage array data at a specific moment of time. Step 625 receives a snapshot request. In response to a snapshot request, step 630 transfers a copy of the appropriate checkpoint data structure from the branch location virtual storage array interface to the data center virtual storage array interface. Additionally, step 630 transfers a copy of any updated storage blocks specified by this checkpoint data structure from the branch location virtual storage array interface to the data center virtual storage array interface for storage in the physical storage array.
  • In an embodiment of step 630, the data center virtual storage array interface creates a snapshot of the data of the virtual storage array. The snapshot includes a copy of all of the virtual storage array data in the physical data storage array unchanged from the time of creation of the checkpoint data structure. The snapshot also includes a copy of the updated storage blocks specified by the checkpoint data structure. An embodiment of the data center virtual storage array interface may store the snapshot in the physical storage array or using a data backup system. In an embodiment, the data center virtual storage array interface automatically sends storage operations to the physical storage array interface to create a snapshot from a checkpoint data structure. These storage operations can be carried out in the background by the data center virtual storage array interface in addition to translating virtual storage array operations from one or more branch location virtual storage array interfaces into corresponding physical storage array operations.
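As a sketch of the snapshot assembly in step 630, assuming hypothetical inputs base_blocks (the virtual storage array data already resident in the physical array) and updated_blocks (the blocks shipped with the checkpoint data structure):

```python
def build_snapshot(base_blocks, checkpoint, updated_blocks):
    """Combine unchanged data with the checkpoint's updated blocks into a
    complete point-in-time image of the virtual storage array."""
    snapshot = dict(base_blocks)  # data unchanged since checkpoint creation
    for block_no in checkpoint.dirty_blocks:
        snapshot[block_no] = updated_blocks[block_no]
    return {"created_at": checkpoint.created_at, "blocks": snapshot}
```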
  • Embodiments of the invention can implement virtual storage array interfaces at the branch and/or data center as standalone devices or as part of other devices, computer systems, or applications. FIG. 7 illustrates an example computer system capable of implementing a virtual storage array interface according to an embodiment of the invention. FIG. 7 is a block diagram of a computer system 2000, such as a personal computer or other digital device, suitable for practicing an embodiment of the invention. Embodiments of computer system 2000 may include dedicated networking devices, such as wireless access points, network switches, hubs, routers, hardware firewalls, network traffic optimizers and accelerators, network attached storage devices, storage array network interfaces, and combinations thereof.
  • Computer system 2000 includes a central processing unit (CPU) 2005 for running software applications and optionally an operating system. CPU 2005 may comprise one or more processing cores. In a further embodiment, CPU 2005 may execute virtual machine software applications to create one or more virtual processors capable of executing additional software applications and optional additional operating systems. Virtual machine applications can include interpreters, recompilers, and just-in-time compilers to assist in executing software applications within virtual machines. Additionally, one or more CPUs 2005 or associated processing cores can include virtualization specific hardware, such as additional register sets, memory address manipulation hardware, additional virtualization-specific processor instructions, and virtual machine state maintenance and migration hardware.
  • Memory 2010 stores applications and data for use by the CPU 2005. Examples of memory 2010 include dynamic and static random access memory. Storage 2015 provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, ROM memory, and CD-ROM, DVD-ROM, Blu-ray, or other magnetic, optical, or solid state storage devices. In an embodiment, storage 2015 includes multiple storage devices configured to act as a storage array for improved performance and/or reliability. In a further embodiment, storage 2015 includes a storage array network utilizing a storage array network interface and storage array network protocols to store and retrieve data. Examples of storage array network interfaces suitable for use with embodiments of the invention include Ethernet, Fibre Channel, IP, and InfiniBand interfaces. Examples of storage array network protocols include ATA, Fibre Channel Protocol, and SCSI. Various combinations of storage array network interfaces and protocols are suitable for use with embodiments of the invention, including iSCSI, HyperSCSI, Fibre Channel over Ethernet, and iFCP.
  • Optional user input devices 2020 communicate user inputs from one or more users to the computer system 2000, examples of which may include keyboards, mice, joysticks, digitizer tablets, touch pads, touch screens, still or video cameras, and/or microphones. In an embodiment, user input devices may be omitted and computer system 2000 may present a user interface to a user over a network, for example using a web page or network management protocol and network management software applications.
  • Computer system 2000 includes one or more network interfaces 2025 that allow computer system 2000 to communicate with other computer systems via an electronic communications network, and may include wired or wireless communication over local area networks and wide area networks such as the Internet. Computer system 2000 may support a variety of networking protocols at one or more levels of abstraction. For example, computer system 2000 may support networking protocols at one or more layers of the seven-layer OSI network model. An embodiment of network interface 2025 includes one or more wireless network interfaces adapted to communicate with wireless clients and with other wireless networking devices using radio waves, for example using the 802.11 family of protocols, such as 802.11a, 802.11b, 802.11g, and 802.11n.
  • An embodiment of the computer system 2000 may also include a wired networking interface, such as one or more Ethernet connections to communicate with other networking devices via local or wide-area networks.
  • The components of computer system 2000, including CPU 2005, memory 2010, data storage 2015, user input devices 2020, and network interface 2025 are connected via one or more data buses 2060. Additionally, some or all of the components of computer system 2000, including CPU 2005, memory 2010, data storage 2015, user input devices 2020, and network interface 2025 may be integrated together into one or more integrated circuits or integrated circuit packages. Furthermore, some or all of the components of computer system 2000 may be implemented as application specific integrated circuits (ASICs) and/or programmable logic.
  • Further embodiments can be envisioned to one of ordinary skill in the art after reading the attached documents. For example, embodiments of the invention can be used with any number of network connections and may be added to any type of network device, client or server computer, or other computing device in addition to the computer illustrated above. In other embodiments, combinations or sub-combinations of the above disclosed invention can be advantageously made. The block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.
  • The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.

Claims (20)

1. A method of optimizing a block storage protocol access to a block storage device via a wide area network, the method comprising:
receiving a first storage block for storage in a storage block cache at a first network location;
determining if the storage block cache has sufficient capacity to store the first storage block; and
in response to the determination that the storage block cache does not have sufficient capacity for the first storage block:
selecting a first high-level data structure entity;
identifying at least a second storage block associated with the high-level data structure entity and stored in the storage block cache; and
removing at least the second storage block from the storage block cache.
2. The method of claim 1, wherein selecting a first high-level data structure entity comprises:
selecting the first high-level data structure entity based on its infrequent access by a storage client.
3. The method of claim 1, wherein selecting a first high-level data structure entity comprises:
selecting a third storage block stored in the storage block cache; and
selecting the first high-level data structure entity based on its correspondence with the third storage block.
4. The method of claim 3, wherein the third storage block is selected based on its infrequent access by a storage client.
5. The method of claim 4, comprising:
removing the third storage block from the storage block cache.
6. The method of claim 1, wherein the second storage block is identified as an infrequently accessed portion of the first high-level data structure entity.
7. The method of claim 6, wherein the second storage block does not include metadata of the first high-level data structure entity.
8. The method of claim 1, wherein the first storage block is received from a storage client in association with a storage write operation.
9. The method of claim 1, wherein the first storage block is received from a data storage in association with a storage block prefetching operation.
10. The method of claim 9, wherein the data storage is connected with a wide-area network at a first network location and the storage block cache is connected with the wide-area network at a second network location.
11. A method of optimizing a block storage protocol access to a block storage device via a wide area network, the method comprising:
receiving a storage block cache replacement policy;
selecting at least a portion of a first high-level data structure entity identified by the storage block cache replacement policy;
identifying at least a first storage block associated with the selected portion of the first high-level data structure entity; and
selecting the first storage block for retention in a storage block cache connected with a wide-area network at a first network location.
12. The method of claim 11, wherein a copy of the first storage block is stored in the storage block cache.
13. The method of claim 11, wherein a copy of the first storage block is not stored in the storage block cache, the method comprising:
retrieving the first storage block from a data storage connected with the wide area network at a second network location.
14. The method of claim 11, wherein the selected portion of the first high-level data structure entity is identified as a frequently accessed portion of the first high-level data structure entity by the storage block cache replacement policy.
15. The method of claim 14, wherein the first storage block includes metadata of the first high-level data structure entity.
16. A method of optimizing a block storage protocol write access to a block storage device via a wide area network, the method comprising:
receiving a sequence of storage block write operations;
selecting a first storage block write operation included in the sequence of storage block write operations, wherein the first storage block operation includes a first version of a storage block;
determining if the sequence of storage block write operations includes a second storage block write operation including a second version of the storage block, wherein the second storage block write operation is more recent than the first storage block write operation;
in response to the determination that the sequence of storage block write operations does not include the second storage block write operation including the second version of the storage block, communicating the first version of the storage block via a wide area network to a data storage connected with the wide area network at a first network location; and
in response to the determination that the sequence of storage block write operations includes the second storage block write operation including the second version of the storage block, communicating the second version of the storage block via the wide area network to the data storage.
17. The method of claim 16, comprising:
in response to receiving the sequence of storage block write operations, caching the sequence of storage block write operations in a storage block cache; and
following the communication of the second version of the storage block to the data storage, removing the first storage block write operation and the first version of the storage block from the storage block cache.
18. The method of claim 17, wherein the storage block cache is connected with the wide-area network at a second network location, the method comprising:
following the communication of the second version of the storage block to the data storage, retaining the first version of the storage block in the storage block cache for read access by a storage client connected with the wide-area network at the second network location.
19. The method of claim 16, wherein determining if the sequence of storage block write operations includes the second storage block write operation including the second version of the storage block comprises:
searching the sequence of storage block write operations from a time associated with the first storage block write operation up to a snapshot time.
20. The method of claim 16, wherein determining if the sequence of storage block write operations includes the second storage block write operation including the second version of the storage block comprises:
searching the sequence of storage block write operations from a time associated with the first storage block write operation up to an end of the sequence of storage block write operations.
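For illustration only (the claims, not this sketch, define the method), the write coalescing of claims 16 through 20 can be pictured as keeping only the most recent version of each storage block, optionally bounded by a snapshot time; all names below are hypothetical:

```python
def coalesce_writes(write_ops, until=None):
    """write_ops: iterable of (timestamp, block_no, data), oldest first.
    Returns the most recent version of each block. `until` bounds the
    search at a snapshot time as in claim 19; claim 20 searches to the
    end of the sequence, i.e. until=None."""
    latest = {}
    for ts, block_no, data in write_ops:
        if until is not None and ts > until:
            break                   # stop at the snapshot time
        latest[block_no] = data     # a newer write supersedes older versions
    return latest                   # one version per block crosses the WAN
```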
US12/730,192 2009-03-23 2010-03-23 Virtualized data storage system cache management Abandoned US20100241807A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/730,192 US20100241807A1 (en) 2009-03-23 2010-03-23 Virtualized data storage system cache management

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16246309P 2009-03-23 2009-03-23
US12/730,192 US20100241807A1 (en) 2009-03-23 2010-03-23 Virtualized data storage system cache management

Publications (1)

Publication Number Publication Date
US20100241807A1 true US20100241807A1 (en) 2010-09-23

Family

ID=42738538

Family Applications (5)

Application Number Title Priority Date Filing Date
US12/730,179 Abandoned US20100241726A1 (en) 2009-03-23 2010-03-23 Virtualized Data Storage Over Wide-Area Networks
US12/730,185 Active 2031-12-29 US10831721B2 (en) 2009-03-23 2010-03-23 Virtualized data storage system architecture
US12/730,198 Active 2031-11-14 US9348842B2 (en) 2009-03-23 2010-03-23 Virtualized data storage system optimizations
US12/730,192 Abandoned US20100241807A1 (en) 2009-03-23 2010-03-23 Virtualized data storage system cache management
US16/849,888 Active 2030-07-03 US11593319B2 (en) 2009-03-23 2020-04-15 Virtualized data storage system architecture

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US12/730,179 Abandoned US20100241726A1 (en) 2009-03-23 2010-03-23 Virtualized Data Storage Over Wide-Area Networks
US12/730,185 Active 2031-12-29 US10831721B2 (en) 2009-03-23 2010-03-23 Virtualized data storage system architecture
US12/730,198 Active 2031-11-14 US9348842B2 (en) 2009-03-23 2010-03-23 Virtualized data storage system optimizations

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/849,888 Active 2030-07-03 US11593319B2 (en) 2009-03-23 2020-04-15 Virtualized data storage system architecture

Country Status (3)

Country Link
US (5) US20100241726A1 (en)
EP (1) EP2411918B1 (en)
WO (1) WO2010111312A2 (en)

Families Citing this family (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100241726A1 (en) 2009-03-23 2010-09-23 Riverbed Technology, Inc. Virtualized Data Storage Over Wide-Area Networks
US8151033B2 (en) * 2009-05-28 2012-04-03 Red Hat, Inc. Mechanism for virtual logical volume management
US9135031B1 (en) * 2010-04-28 2015-09-15 Netapp, Inc. System and method for determining storage resources of a virtual machine in a virtual server environment
US9253548B2 (en) 2010-05-27 2016-02-02 Adobe Systems Incorporated Optimizing caches for media streaming
US10423577B2 (en) 2010-06-29 2019-09-24 International Business Machines Corporation Collections for storage artifacts of a tree structured repository established via artifact metadata
US10394757B2 (en) 2010-11-18 2019-08-27 Microsoft Technology Licensing, Llc Scalable chunk store for data deduplication
US8495108B2 (en) 2010-11-30 2013-07-23 International Business Machines Corporation Virtual node subpool management
US8364641B2 (en) * 2010-12-15 2013-01-29 International Business Machines Corporation Method and system for deduplicating data
US8380681B2 (en) 2010-12-16 2013-02-19 Microsoft Corporation Extensible pipeline for data deduplication
US8645335B2 (en) 2010-12-16 2014-02-04 Microsoft Corporation Partial recall of deduplicated files
EP2659405B1 (en) * 2010-12-29 2021-02-03 Amazon Technologies, Inc. Receiver-side data deduplication in data systems
US8892707B2 (en) 2011-04-13 2014-11-18 Netapp, Inc. Identification of virtual applications for backup in a cloud computing system
US9244933B2 (en) 2011-04-29 2016-01-26 International Business Machines Corporation Disk image introspection for storage systems
US9721089B2 (en) * 2011-05-06 2017-08-01 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for efficient computer forensic analysis and data access control
US8683466B2 (en) * 2011-05-24 2014-03-25 Vmware, Inc. System and method for generating a virtual desktop
WO2012166139A1 (en) 2011-06-02 2012-12-06 Hewlett-Packard Development Company, L.P. Network virtualization
US9158632B1 (en) * 2011-06-30 2015-10-13 Emc Corporation Efficient file browsing using key value databases for virtual backups
US8949829B1 (en) 2011-06-30 2015-02-03 Emc Corporation Virtual machine disaster recovery
US8843443B1 (en) 2011-06-30 2014-09-23 Emc Corporation Efficient backup of virtual data
US8849769B1 (en) * 2011-06-30 2014-09-30 Emc Corporation Virtual machine file level recovery
US9229951B1 (en) 2011-06-30 2016-01-05 Emc Corporation Key value databases for virtual backups
US9710338B1 (en) * 2011-06-30 2017-07-18 EMC IP Holding Company LLC Virtual machine data recovery
US8671075B1 (en) 2011-06-30 2014-03-11 Emc Corporation Change tracking indices in virtual machines
US9311327B1 (en) * 2011-06-30 2016-04-12 Emc Corporation Updating key value databases for virtual backups
US8849777B1 (en) 2011-06-30 2014-09-30 Emc Corporation File deletion detection in key value databases for virtual backups
US8490092B2 (en) 2011-07-06 2013-07-16 Microsoft Corporation Combined live migration and storage migration using file shares and mirroring
US8990171B2 (en) * 2011-09-01 2015-03-24 Microsoft Corporation Optimization of a partially deduplicated file
US8966172B2 (en) 2011-11-15 2015-02-24 Pavilion Data Systems, Inc. Processor agnostic data storage in a PCIE based shared storage enviroment
US20130159382A1 (en) * 2011-12-15 2013-06-20 Microsoft Corporation Generically presenting virtualized data
US9223607B2 (en) 2012-01-17 2015-12-29 Microsoft Technology Licensing, Llc System for replicating or migrating virtual machine operations log by throttling guest write iOS based on destination throughput
US9652182B2 (en) 2012-01-31 2017-05-16 Pavilion Data Systems, Inc. Shareable virtual non-volatile storage device for a server
US9773006B1 (en) * 2012-02-15 2017-09-26 Veritas Technologies Llc Techniques for managing non-snappable volumes
US9110595B2 (en) 2012-02-28 2015-08-18 AVG Netherlands B.V. Systems and methods for enhancing performance of software applications
US20130232215A1 (en) * 2012-03-05 2013-09-05 Riverbed Technology, Inc. Virtualized data storage system architecture using prefetching agent
US9813353B1 (en) * 2012-06-07 2017-11-07 Open Invention Network Llc Migration of files contained on virtual storage to a cloud storage infrastructure
US9223799B1 (en) * 2012-06-29 2015-12-29 Emc Corporation Lightweight metadata sharing protocol for location transparent file access
US9047195B2 (en) * 2012-07-05 2015-06-02 Hitachi, Ltd. Computer system with virtualization mechanism and management table, cache control method and computer program
US9971787B2 (en) * 2012-07-23 2018-05-15 Red Hat, Inc. Unified file and object data storage
US9600206B2 (en) 2012-08-01 2017-03-21 Microsoft Technology Licensing, Llc Request ordering support when switching virtual disk replication logs
US9766873B2 (en) * 2012-08-17 2017-09-19 Tripwire, Inc. Operating system patching and software update reconciliation
US9189396B2 (en) * 2012-08-24 2015-11-17 Dell Products L.P. Snapshot coordination
US9262329B2 (en) 2012-08-24 2016-02-16 Dell Products L.P. Snapshot access
US10489295B2 (en) * 2012-10-08 2019-11-26 Sandisk Technologies Llc Systems and methods for managing cache pre-fetch
US9753980B1 (en) 2013-02-25 2017-09-05 EMC IP Holding Company LLC M X N dispatching in large scale distributed system
US9984083B1 (en) 2013-02-25 2018-05-29 EMC IP Holding Company LLC Pluggable storage system for parallel query engines across non-native file systems
US10902081B1 (en) * 2013-05-06 2021-01-26 Veeva Systems Inc. System and method for controlling electronic communications
US9430031B2 (en) * 2013-07-29 2016-08-30 Western Digital Technologies, Inc. Power conservation based on caching
US9280780B2 (en) 2014-01-27 2016-03-08 Umbel Corporation Systems and methods of generating and using a bitmap index
US9378021B2 (en) * 2014-02-14 2016-06-28 Intel Corporation Instruction and logic for run-time evaluation of multiple prefetchers
US10185636B2 (en) * 2014-08-15 2019-01-22 Hitachi, Ltd. Method and apparatus to virtualize remote copy pair in three data center configuration
US9697227B2 (en) 2014-10-27 2017-07-04 Cohesity, Inc. Concurrent access and transactions in a distributed file system
US9565269B2 (en) 2014-11-04 2017-02-07 Pavilion Data Systems, Inc. Non-volatile memory express over ethernet
US9712619B2 (en) 2014-11-04 2017-07-18 Pavilion Data Systems, Inc. Virtual non-volatile memory express drive
US9628350B2 (en) 2014-11-05 2017-04-18 Amazon Technologies, Inc. Dynamic scaling of storage volumes for storage client file systems
JP2016115286A (en) * 2014-12-17 2016-06-23 株式会社リコー Information processing apparatus and information processing method
US10095707B1 (en) 2014-12-19 2018-10-09 EMC IP Holding Company LLC Nearline cloud storage based on FUSE framework
US10095710B1 (en) * 2014-12-19 2018-10-09 EMC IP Holding Company LLC Presenting cloud based storage as a virtual synthetic
US9753814B1 (en) 2014-12-19 2017-09-05 EMC IP Holding Company LLC Application level support for selectively accessing files in cloud-based storage
US10235463B1 (en) * 2014-12-19 2019-03-19 EMC IP Holding Company LLC Restore request and data assembly processes
US10120765B1 (en) 2014-12-19 2018-11-06 EMC IP Holding Company LLC Restore process using incremental inversion
US10129357B2 (en) 2015-08-21 2018-11-13 International Business Machines Corporation Managing data storage in distributed virtual environment
US10069896B2 (en) * 2015-11-01 2018-09-04 International Business Machines Corporation Data transfer via a data storage drive
US10067711B2 (en) 2015-11-01 2018-09-04 International Business Machines Corporation Data transfer between data storage libraries
US9607104B1 (en) 2016-04-29 2017-03-28 Umbel Corporation Systems and methods of using a bitmap index to determine bicliques
US10901943B1 (en) * 2016-09-30 2021-01-26 EMC IP Holding Company LLC Multi-tier storage system with direct client access to archive storage tier
US11003658B2 (en) * 2016-11-21 2021-05-11 International Business Machines Corporation Selectively retrieving data from remote share nothing computer clusters
US10067876B2 (en) * 2017-01-09 2018-09-04 Splunk, Inc. Pre-fetching data from buckets in remote storage for a cache
US10594771B2 (en) * 2017-02-09 2020-03-17 International Business Machines Corporation Distributed file transfer with high performance
US10528329B1 (en) * 2017-04-27 2020-01-07 Intuit Inc. Methods, systems, and computer program product for automatic generation of software application code
US10467122B1 (en) 2017-04-27 2019-11-05 Intuit Inc. Methods, systems, and computer program product for capturing and classification of real-time data and performing post-classification tasks
US10705796B1 (en) 2017-04-27 2020-07-07 Intuit Inc. Methods, systems, and computer program product for implementing real-time or near real-time classification of digital data
US10467261B1 (en) 2017-04-27 2019-11-05 Intuit Inc. Methods, systems, and computer program product for implementing real-time classification and recommendations
CN110582758B (en) * 2017-06-08 2022-11-29 日立数据管理有限公司 Fast recall of geographically distributed object data
US11477280B1 (en) * 2017-07-26 2022-10-18 Pure Storage, Inc. Integrating cloud storage services
US11526469B1 (en) * 2017-07-31 2022-12-13 EMC IP Holding Company LLC File system reorganization in the presence of inline compression
GB2569651A (en) * 2017-12-22 2019-06-26 Veea Systems Ltd Edge computing system
CN109460862B (en) * 2018-10-22 2021-04-27 郑州大学 Method for solving multi-objective optimization problem based on MAB (multi-object-based) hyperheuristic algorithm
US11321114B2 (en) * 2019-07-19 2022-05-03 Vmware, Inc. Hypervisor assisted application virtualization
US11782610B2 (en) * 2020-01-30 2023-10-10 Seagate Technology Llc Write and compare only data storage
US11150840B2 (en) * 2020-02-09 2021-10-19 International Business Machines Corporation Pinning selected volumes within a heterogeneous cache
WO2021225080A1 (en) * 2020-05-08 2021-11-11 ソニーグループ株式会社 Information processing device, information processing method, and program
US11755219B1 (en) 2022-05-26 2023-09-12 International Business Machines Corporation Block access prediction for hybrid cloud storage

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5947424A (en) * 1995-08-01 1999-09-07 Tolco, Incorporated Pipe support assembly with retaining strap
US6442565B1 (en) * 1999-08-13 2002-08-27 Hiddenmind Technology, Inc. System and method for transmitting data content in a computer network
US6718454B1 (en) * 2000-04-29 2004-04-06 Hewlett-Packard Development Company, L.P. Systems and methods for prefetch operations to reduce latency associated with memory access
ATE381191T1 (en) * 2000-10-26 2007-12-15 Prismedia Networks Inc METHOD AND SYSTEM FOR MANAGING DISTRIBUTED CONTENT AND CORRESPONDING METADATA
US20020156973A1 (en) * 2001-01-29 2002-10-24 Ulrich Thomas R. Enhanced disk array
US20020161860A1 (en) * 2001-02-28 2002-10-31 Benjamin Godlin Method and system for differential distributed data file storage, management and access
US7685126B2 (en) * 2001-08-03 2010-03-23 Isilon Systems, Inc. System and methods for providing a distributed file system utilizing metadata to track information about data stored throughout the system
US6868439B2 (en) * 2002-04-04 2005-03-15 Hewlett-Packard Development Company, L.P. System and method for supervising use of shared storage by multiple caching servers physically connected through a switching router to said shared storage via a robust high speed connection
US7359890B1 (en) * 2002-05-08 2008-04-15 Oracle International Corporation System load based adaptive prefetch
JP4244572B2 (en) * 2002-07-04 2009-03-25 ソニー株式会社 Cache device, cache data management method, and computer program
JP4124331B2 (en) * 2002-09-17 2008-07-23 株式会社日立製作所 Virtual volume creation and management method for DBMS
JP4116413B2 (en) * 2002-12-11 2008-07-09 株式会社日立製作所 Prefetch appliance server
US7953819B2 (en) * 2003-08-22 2011-05-31 Emc Corporation Multi-protocol sharable virtual storage objects
JP2005148868A (en) * 2003-11-12 2005-06-09 Hitachi Ltd Data prefetch in storage device
KR100899462B1 (en) * 2004-07-21 2009-05-27 비치 언리미티드 엘엘씨 Distributed storage architecture based on block map caching and vfs stackable file system modules
US7849257B1 (en) * 2005-01-06 2010-12-07 Zhe Khi Pak Method and apparatus for storing and retrieving data
US7937404B2 (en) * 2005-02-04 2011-05-03 Hewlett-Packard Development Company, L.P. Data processing system and method
AU2006239882B2 (en) * 2005-04-25 2009-10-29 Network Appliance, Inc. System and method for caching network file systems
US7386662B1 (en) * 2005-06-20 2008-06-10 Symantec Operating Corporation Coordination of caching and I/O management in a multi-layer virtualized storage environment
US7386675B2 (en) * 2005-10-21 2008-06-10 Isilon Systems, Inc. Systems and methods for using excitement values to predict future access to resources
EP2153340A4 (en) 2007-05-08 2015-10-21 Riverbed Technology Inc A hybrid segment-oriented file server and wan accelerator
US8903938B2 (en) * 2007-06-18 2014-12-02 Amazon Technologies, Inc. Providing enhanced data retrieval from remote locations
US7702857B2 (en) * 2007-08-22 2010-04-20 International Business Machines Corporation Adjusting parameters used to prefetch data from storage into cache
US9323680B1 (en) * 2007-09-28 2016-04-26 Veritas Us Ip Holdings Llc Method and apparatus for prefetching data
US20100241726A1 (en) 2009-03-23 2010-09-23 Riverbed Technology, Inc. Virtualized Data Storage Over Wide-Area Networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7441012B2 (en) * 1999-06-11 2008-10-21 Microsoft Corporation Network file system
US20040068627A1 (en) * 2002-10-04 2004-04-08 Stuart Sechrest Methods and mechanisms for proactive memory management
US20100199043A1 (en) * 2002-10-04 2010-08-05 Microsoft Corporation Methods and mechanisms for proactive memory management
US7631148B2 (en) * 2004-01-08 2009-12-08 Netapp, Inc. Adaptive file readahead based on multiple factors
US20080140937A1 (en) * 2006-12-12 2008-06-12 Sybase, Inc. System and Methodology Providing Multiple Heterogeneous Buffer Caches

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100312827A1 (en) * 2009-06-09 2010-12-09 International Business Machines Corporation Method and apparatus to enable protocol verification
US8417765B2 (en) * 2009-06-09 2013-04-09 International Business Machines Corporation Method and apparatus to enable protocol verification
US20110197052A1 (en) * 2010-02-08 2011-08-11 Microsoft Corporation Fast Machine Booting Through Streaming Storage
US10025509B2 (en) 2010-02-08 2018-07-17 Microsoft Technology Licensing, Llc Background migration of virtual storage
US9081510B2 (en) 2010-02-08 2015-07-14 Microsoft Technology Licensing, Llc Background migration of virtual storage
US8751780B2 (en) * 2010-02-08 2014-06-10 Microsoft Corporation Fast machine booting through streaming storage
US20110238775A1 (en) * 2010-03-23 2011-09-29 Riverbed Technology, Inc. Virtualized Data Storage Applications and Optimizations
US8504670B2 (en) * 2010-03-23 2013-08-06 Riverbed Technology, Inc. Virtualized data storage applications and optimizations
US9767271B2 (en) 2010-07-15 2017-09-19 The Research Foundation For The State University Of New York System and method for validating program execution at run-time
US20120089786A1 (en) * 2010-10-06 2012-04-12 Arvind Pruthi Distributed cache coherency protocol
US9043560B2 (en) * 2010-10-06 2015-05-26 Toshiba Corporation Distributed cache coherency protocol
US20120144391A1 (en) * 2010-12-02 2012-06-07 International Business Machines Corporation Provisioning a virtual machine
US9483484B1 (en) * 2011-05-05 2016-11-01 Veritas Technologies Llc Techniques for deduplicated data access statistics management
US9342254B2 (en) * 2011-06-04 2016-05-17 Microsoft Technology Licensing, Llc Sector-based write filtering with selective file and registry exclusions
US20120311263A1 (en) * 2011-06-04 2012-12-06 Microsoft Corporation Sector-based write filtering with selective file and registry exclusions
WO2013095622A1 (en) * 2011-12-23 2013-06-27 Empire Technology Develpment Llc Optimization of resource utilization in a collection of devices
KR101554113B1 (en) * 2011-12-23 2015-09-17 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Optimization of resource utilization in a collection of devices
US9736043B2 (en) 2011-12-23 2017-08-15 Empire Technology Development Llc Optimization of resource utilization in a collection of devices
CN103917966A (en) * 2011-12-23 2014-07-09 英派尔科技开发有限公司 Optimization of resource utilization in a collection of devices
US8959228B2 (en) 2011-12-23 2015-02-17 Empire Technology Development Llc Optimization of resource utilization in a collection of devices
US9984079B1 (en) * 2012-01-13 2018-05-29 Amazon Technologies, Inc. Managing data storage using storage policy specifications
US20180267979A1 (en) * 2012-01-13 2018-09-20 Amazon Technologies, Inc. Managing data storage using storage policy specifications
US8874799B1 (en) 2012-03-31 2014-10-28 Emc Corporation System and method for improving cache performance
US8914584B1 (en) 2012-03-31 2014-12-16 Emc Corporation System and method for improving cache performance upon detection of a LUN control event
US8914585B1 (en) 2012-03-31 2014-12-16 Emc Corporation System and method for obtaining control of a logical unit number
US9336157B1 (en) 2012-03-31 2016-05-10 Emc Corporation System and method for improving cache performance
US8554954B1 (en) * 2012-03-31 2013-10-08 Emc Corporation System and method for improving cache performance
US10606754B2 (en) * 2012-04-16 2020-03-31 International Business Machines Corporation Loading a pre-fetch cache using a logical volume mapping
US20130275679A1 (en) * 2012-04-16 2013-10-17 International Business Machines Corporation Loading a pre-fetch cache using a logical volume mapping
US10397223B2 (en) * 2012-08-20 2019-08-27 Alcatel Lucent Method for establishing an authorized communication between a physical object and a communication device enabling a write access
US20150215320A1 (en) * 2012-08-20 2015-07-30 Alcatel Lucent Method for establishing an authorized communication between a physical object and a communication device enabling a write access
US9767284B2 (en) 2012-09-14 2017-09-19 The Research Foundation For The State University Of New York Continuous run-time validation of program execution: a practical approach
US9552495B2 (en) 2012-10-01 2017-01-24 The Research Foundation For The State University Of New York System and method for security and privacy aware virtual machine checkpointing
US9069782B2 (en) 2012-10-01 2015-06-30 The Research Foundation For The State University Of New York System and method for security and privacy aware virtual machine checkpointing
US10324795B2 (en) 2012-10-01 2019-06-18 The Research Foundation for the State University of New York System and method for security and privacy aware virtual machine checkpointing
US9910614B2 (en) 2013-01-08 2018-03-06 Lyve Minds, Inc. Storage network data distribution
US9727268B2 (en) 2013-01-08 2017-08-08 Lyve Minds, Inc. Management of storage in a storage network
US9372726B2 (en) 2013-01-09 2016-06-21 The Research Foundation For The State University Of New York Gang migration of virtual machines using cluster-wide deduplication
US9298634B2 (en) * 2013-03-06 2016-03-29 Gregory RECUPERO Client spatial locality through the use of virtual request trackers
US20140258639A1 (en) * 2013-03-06 2014-09-11 Gregory RECUPERO Client spatial locality through the use of virtual request trackers
US20140281153A1 (en) * 2013-03-15 2014-09-18 Saratoga Speed, Inc. Flash-based storage system including reconfigurable circuitry
US9304902B2 (en) 2013-03-15 2016-04-05 Saratoga Speed, Inc. Network storage system using flash storage
US9286225B2 (en) * 2013-03-15 2016-03-15 Saratoga Speed, Inc. Flash-based storage system including reconfigurable circuitry
US20140325141A1 (en) * 2013-04-30 2014-10-30 VMware Inc. Trim support for a solid-state drive in a virtualized environment
US9983992B2 (en) * 2013-04-30 2018-05-29 VMware Inc. Trim support for a solid-state drive in a virtualized environment
US10642529B2 (en) 2013-04-30 2020-05-05 VMware, Inc. Trim support for a solid-state drive in a virtualized environment
US9678678B2 (en) * 2013-12-20 2017-06-13 Lyve Minds, Inc. Storage network data retrieval
US20150177999A1 (en) * 2013-12-20 2015-06-25 Lyve Minds, Inc. Storage network data retrieval
US10313236B1 (en) 2013-12-31 2019-06-04 Sanmina Corporation Method of flow based services for flash storage
US9509604B1 (en) 2013-12-31 2016-11-29 Sanmina Corporation Method of configuring a system for flow based services for flash storage and associated information structure
US9823842B2 (en) 2014-05-12 2017-11-21 The Research Foundation For The State University Of New York Gang migration of virtual machines using cluster-wide deduplication
US10156986B2 (en) 2014-05-12 2018-12-18 The Research Foundation For The State University Of New York Gang migration of virtual machines using cluster-wide deduplication
US9672180B1 (en) 2014-08-06 2017-06-06 Sanmina Corporation Cache memory management system and method
US9384147B1 (en) 2014-08-13 2016-07-05 Saratoga Speed, Inc. System and method for cache entry aging
US9715428B1 (en) 2014-09-24 2017-07-25 Sanmina Corporation System and method for cache data recovery
US20160366094A1 (en) * 2015-06-10 2016-12-15 Cisco Technology, Inc. Techniques for implementing ipv6-based distributed storage space
US11588783B2 (en) * 2015-06-10 2023-02-21 Cisco Technology, Inc. Techniques for implementing IPV6-based distributed storage space
US9535608B1 (en) 2015-09-28 2017-01-03 International Business Machines Corporation Memory access request for a memory protocol
US10521262B2 (en) 2015-09-28 2019-12-31 International Business Machines Corporation Memory access request for a memory protocol
US9507628B1 (en) 2015-09-28 2016-11-29 International Business Machines Corporation Memory access request for a memory protocol
US11586462B2 (en) 2015-09-28 2023-02-21 International Business Machines Corporation Memory access request for a memory protocol
US11281589B2 (en) * 2018-08-30 2022-03-22 Micron Technology, Inc. Asynchronous forward caching memory systems and methods

Also Published As

Publication number Publication date
US20200242088A1 (en) 2020-07-30
US20100241726A1 (en) 2010-09-23
US10831721B2 (en) 2020-11-10
EP2411918A2 (en) 2012-02-01
WO2010111312A3 (en) 2010-12-29
EP2411918B1 (en) 2018-07-11
US20100241673A1 (en) 2010-09-23
EP2411918A4 (en) 2013-08-21
US11593319B2 (en) 2023-02-28
WO2010111312A2 (en) 2010-09-30
US20100241654A1 (en) 2010-09-23
US9348842B2 (en) 2016-05-24

Similar Documents

Publication Publication Date Title
US11593319B2 (en) Virtualized data storage system architecture
US8504670B2 (en) Virtualized data storage applications and optimizations
US20130232215A1 (en) Virtualized data storage system architecture using prefetching agent
US10296494B2 (en) Managing a global namespace for a distributed filesystem
US8788628B1 (en) Pre-fetching data for a distributed filesystem
US8799413B2 (en) Distributing data for a distributed filesystem across multiple cloud storage systems
US8799414B2 (en) Archiving data for a distributed filesystem
US9811662B2 (en) Performing anti-virus checks for a distributed filesystem
US9852150B2 (en) Avoiding client timeouts in a distributed filesystem
US9852149B1 (en) Transferring and caching a cloud file in a distributed filesystem
US9811532B2 (en) Executing a cloud command for a distributed filesystem
US8805968B2 (en) Accessing cached data from a peer cloud controller in a distributed filesystem
US9678968B1 (en) Deleting a file from a distributed filesystem
US9678981B1 (en) Customizing data management for a distributed filesystem
US9679040B1 (en) Performing deduplication in a distributed filesystem
US9824095B1 (en) Using overlay metadata in a cloud controller to generate incremental snapshots for a distributed filesystem
US9804928B2 (en) Restoring an archived file in a distributed filesystem
US8805967B2 (en) Providing disaster recovery for a distributed filesystem
US9792298B1 (en) Managing metadata and data storage for a cloud controller in a distributed filesystem
US9613064B1 (en) Facilitating the recovery of a virtual machine using a distributed filesystem
US20160170885A1 (en) Cached volumes at storage gateways
US20120084261A1 (en) Cloud-based disaster recovery of backup data and metadata
US20090150462A1 (en) Data migration operations in a distributed file system
US20050071560A1 (en) Autonomic block-level hierarchical storage management for storage networks
US20090150449A1 (en) Open file migration operations in a distributed file system

Legal Events

Date Code Title Description
AS Assignment

Owner name: RIVERBED TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WU, DAVID;MCCANNE, STEVEN;DEMMER, MICHAEL;AND OTHERS;SIGNING DATES FROM 20100323 TO 20100329;REEL/FRAME:024482/0630

AS Assignment

Owner name: MORGAN STANLEY & CO. LLC, MARYLAND

Free format text: SECURITY AGREEMENT;ASSIGNORS:RIVERBED TECHNOLOGY, INC.;OPNET TECHNOLOGIES, INC.;REEL/FRAME:029646/0060

Effective date: 20121218

AS Assignment

Owner name: RIVERBED TECHNOLOGY, INC., CALIFORNIA

Free format text: RELEASE OF PATENT SECURITY INTEREST;ASSIGNOR:MORGAN STANLEY & CO. LLC, AS COLLATERAL AGENT;REEL/FRAME:032113/0425

Effective date: 20131220

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT, NEW YORK

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:RIVERBED TECHNOLOGY, INC.;REEL/FRAME:032421/0162

Effective date: 20131220

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: RIVERBED TECHNOLOGY, INC., CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BARCLAYS BANK PLC;REEL/FRAME:035521/0069

Effective date: 20150424

AS Assignment

Owner name: RIVERBED TECHNOLOGY, INC., CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE CONVEYING PARTY NAME PREVIOUSLY RECORDED ON REEL 035521 FRAME 0069. ASSIGNOR(S) HEREBY CONFIRMS THE RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:035807/0680

Effective date: 20150424