US20050114595A1 - System and method for emulating operating system metadata to provide cross-platform access to storage volumes - Google Patents

System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Info

Publication number
US20050114595A1
US20050114595A1 (US Application No. 10/722,614)
Authority
US
United States
Prior art keywords
operating system
storage
metadata
host computer
storage device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/722,614
Inventor
Ronald Karr
Oleg Kiselev
Alex Miroschnichenko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Symantec Operating Corp
Original Assignee
Veritas Operating Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US10/722,614
Application filed by Veritas Operating Corp
Assigned to VERITAS OPERATING CORPORATION reassignment VERITAS OPERATING CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KARR, RONALD S., KISELEV, OLEG, MIROSCHNICHENKO, ALEX
Priority to JP2006541649A (JP4750040B2)
Priority to CNB2004800405835A (CN100552611C)
Priority to EP04811936A (EP1687706A1)
Priority to PCT/US2004/039306 (WO2005055043A1)
Publication of US20050114595A1
Priority to US11/156,821 (US20050235132A1)
Priority to US11/156,820 (US7669032B2)
Priority to US11/156,635 (US7689803B2)
Priority to US11/156,636 (US20050228950A1)
Assigned to SYMANTEC CORPORATION reassignment SYMANTEC CORPORATION CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS OPERATING CORPORATION
Assigned to SYMANTEC OPERATING CORPORATION reassignment SYMANTEC OPERATING CORPORATION CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 019872 FRAME 979. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNEE IS SYMANTEC OPERATING CORPORATION. Assignors: VERITAS OPERATING CORPORATION
Abandoned legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G06F 3/0607: Improving or facilitating administration by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0662: Virtualisation aspects
    • G06F 3/0664: Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • Various embodiments of the system and method disclosed herein provide cross-platform accessibility of logical storage volumes that are encapsulated within and exported by various storage devices or systems.
  • The storage devices or systems will export block devices using basic SCSI or other appropriate protocols in a way that is picked up by regular system disk drivers.
  • A logical volume which is defined externally to one or more hosts may be made available as that logical volume to the one or more hosts, even if those hosts use different operating systems and different kinds of disk drivers for accessing external storage.
  • The system and method may be used with various kinds of storage networks and intelligent disk arrays that are locally attached to systems using various kinds of I/O buses.
  • The system and method described herein may be referred to as “volume tunneling.”
  • FIG. 1 illustrates an exemplary Storage Area Network (SAN) environment with which embodiments of a system and method for providing cross-platform access to storage volumes may be implemented.
  • The SAN may be described as a high-speed, special-purpose network that interconnects one or more storage devices 104 (e.g. storage devices 104A, 104B, and 104C) with one or more associated host systems or servers 102 on behalf of a larger network of users.
  • This dedicated network may employ Fibre Channel technology.
  • A SAN may be part of the overall network of computing resources for an enterprise or other entity.
  • The one or more servers 102 and one or more storage devices 104 may be coupled via a fabric 100.
  • One or more client systems 106 may access the SAN by accessing one or more of the servers 102 via a network 108.
  • The client systems 106 may communicate with the server 102 to access data on the storage devices 104A, 104B, and 104C which are managed by server 102.
  • The client systems 106 may comprise server systems which manage a different set of storage devices. The client systems 106 may therefore also act as servers and may be coupled to other storage devices (not shown) through the fabric 100.
  • Network 108 may include wired or wireless communications connections separate from the Fibre Channel network.
  • Network 108 is representative of any local area network (LAN) such as an intranet or any wide area network (WAN) such as the Internet.
  • Network 108 may use a variety of wired or wireless connection mediums.
  • Wired mediums may include a modem connected to plain old telephone service (POTS), Ethernet, and Fibre Channel.
  • Wireless connection mediums include a satellite link, a modem link through a cellular service, or a wireless link such as Wi-Fi™, for example.
  • Storage devices may include any of one or more types of storage devices including, but not limited to, storage systems such as RAID (Redundant Array of Independent Disks) systems, disk arrays, JBODs (Just a Bunch Of Disks, used to refer to disks that are not configured according to RAID), tape devices, and optical storage devices. These devices may be products of any of a number of vendors including, but not limited to, Compaq, EMC, and Hitachi.
  • Clients 106 and server 102 may run any of a variety of operating systems, including, but not limited to, Solaris 2.6, 7 or 8, Microsoft Windows NT 4.0 (Server and Enterprise Server), Microsoft Windows 2000 (Server, Advanced Server and Datacenter Editions), and various versions of HP-UX.
  • Each server 102 may be connected to the fabric 100 via one or more Host Bus Adapters (HBAs).
  • HBAs Host Bus Adapters
  • The hardware that connects servers 102 to storage devices 104 in a SAN may be referred to as a fabric 100.
  • The SAN fabric 100 enables server-to-storage device connectivity through Fibre Channel switching technology.
  • The SAN fabric 100 hardware may include one or more switches (also referred to as fabric switches), bridges, hubs, or other devices such as routers, as well as the interconnecting cables (for Fibre Channel SANs, fibre optic cables).
  • SAN fabric 100 may include one or more distinct device interconnection structures (e.g. Fibre Channel Arbitrated Loops, Fibre Channel Fabrics, etc.) that collectively form the SAN fabric 100.
  • A SAN-aware file system may use the Network File System (NFS) protocol in providing access to shared files on the SAN.
  • Each server 102 may include a logical hierarchy of files (i.e. a directory tree) physically stored on one or more of storage devices 104 and accessible by the client systems 106 through the server 102.
  • These hierarchies of files, or portions or sub-trees of the hierarchies of files may be referred to herein as “file systems.”
  • The SAN components may be organized into one or more clusters to provide high availability, load balancing and/or parallel processing. For example, in FIG. 1, server 102 and clients 106A and 106B may be in a cluster.
  • In a traditional locally-attached storage model, each server is privately connected to one or more storage devices using SCSI or other storage interconnect technology. If a server is functioning as a file server, it can give other servers (its clients) on the network access to its locally attached files through the local area network. With a storage area network, storage devices are consolidated on their own high-speed network using a shared SCSI bus and/or a Fibre Channel switch/hub.
  • A SAN is a logical place to host files that may be shared between multiple systems.
  • A shared storage environment is one in which multiple servers may access the same set of data.
  • A challenge with this architecture is the maintenance of consistency between file data and file system data.
  • A common architecture for sharing file-based storage is the file server architecture (e.g., the SAN environment illustrated in FIG. 1).
  • In the file server architecture, one or more servers are connected to a large amount of storage (either attached locally or in a SAN) and provide other systems access to this storage.
  • FIG. 2 illustrates an exemplary computer system 106 with which embodiments of a system and method for providing cross-platform access to storage volumes may be implemented.
  • The client computer system 106 may include various hardware and software components.
  • The client computer system 106 includes a processor 220A coupled to a memory 230A, which is in turn coupled to storage 240.
  • Processor 220A is also coupled to a network interface 260.
  • The client computer system 106 may be connected to a network such as network 108 via a network connection 275.
  • The client computer system 106 includes operating system software 130.
  • The operating system software 130 is executable by processor 220A out of memory 230A.
  • The operating system software 130 may include an I/O or storage stack 250.
  • The I/O stack 250 may include one or more file systems, volume managers, and device drivers.
  • Processor 220A may be configured to execute instructions and to operate on data stored within memory 230A.
  • Processor 220A may operate in conjunction with memory 230A in a paged mode, such that frequently used pages of memory may be paged in and out of memory 230A from storage 240 according to conventional techniques.
  • Processor 220A is representative of any type of processor.
  • For example, processor 220A may be compatible with the x86 architecture, while in another embodiment processor 220A may be compatible with the SPARC™ family of processors.
  • Memory 230A is configured to store instructions and data.
  • Memory 230A may be implemented in various forms of random access memory (RAM) such as dynamic RAM (DRAM) or synchronous DRAM (SDRAM).
  • Other embodiments may be implemented using other types of suitable memory.
  • Storage 240 is configured to store instructions and data.
  • Storage 240 may be an example of any type of mass storage device or system.
  • Storage 240 may be implemented as one or more hard disks configured independently or as a disk storage system.
  • The disk storage system may be an example of a redundant array of inexpensive disks (RAID) system.
  • The disk storage system may also be a disk array or a JBOD (Just a Bunch Of Disks, a term used to refer to disks that are not configured according to RAID).
  • Storage 240 may also include tape drives, optical storage devices, or RAM disks, for example.
  • Network interface 260 may implement functionality to connect the client computer system 106 to a network such as network 108 via network connection 275.
  • Network interface 260 may include a hardware layer and a software layer which controls the hardware layer.
  • The software may be executable by processor 220A out of memory 230A to implement various network protocols such as TCP/IP and hypertext transport protocol (HTTP), for example.
  • While FIGS. 1 and 2 illustrate typical computer system and network storage architectures in which embodiments of the system and method for providing cross-platform access to storage volumes may be implemented, embodiments may also be implemented in other computer and network storage architectures, including other SAN architectures.
  • FIG. 3 illustrates an architecture of software and/or hardware components for providing cross-platform access to storage volumes according to one embodiment.
  • Embodiments of a server system 102 may be similar to the client system 106 illustrated in FIG. 2 .
  • The server system 102 may include a processor 220B coupled to a memory 230B, which is in turn coupled to optional storage.
  • The server computer system 102 may be connected to a network via a network connection.
  • The server computer system 102 may include operating system and/or server software which is executable by processor 220B out of memory 230B.
  • The server system 102 may “own” or manage data objects (e.g., files) on one or more storage devices 104.
  • The server system 102 may execute file server software to control access to the data it owns.
  • The server functionality may also be implemented in hardware and/or firmware.
  • The server 102 may take the form of a block-server appliance (e.g., on a SAN).
  • The server 102 may be referred to herein as a “storage virtualization controller” or “storage virtualizer.”
  • A storage virtualization controller is a computer system or other apparatus which is operable to emulate operating-system-specific storage metadata to enable cross-platform availability of managed storage devices 104.
  • Block I/O is typically the I/O which is tied directly to the disks 104 and used when direct and fast access to the physical drives themselves is required by the client application.
  • The client 106 may request the data from the block server 102 using the starting location of the data blocks and the number of blocks to be transferred.
  • The operating system 130 on the client 106 may provide a file system to help translate between block locations and file locations, as sketched below.
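As a rough illustration of this block-level addressing, the sketch below translates a byte range within a file extent into a (start block, block count) request. The names and the 512-byte block size are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

BLOCK_SIZE = 512  # assumed block size; typical for SCSI disks


@dataclass
class BlockReadRequest:
    start_block: int  # logical block address (LBA) of the first block
    num_blocks: int   # number of consecutive blocks to transfer


def file_range_to_block_request(file_offset: int, length: int,
                                extent_start_block: int) -> BlockReadRequest:
    """Roughly what a file system does: map a byte range of a file extent
    onto the block addresses it occupies, then issue one block request."""
    first = extent_start_block + file_offset // BLOCK_SIZE
    last = extent_start_block + (file_offset + length - 1) // BLOCK_SIZE
    return BlockReadRequest(start_block=first, num_blocks=last - first + 1)
```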
  • The memory 230B of the server 102 may store instructions which are executable by the processor 220B to implement the method and system for providing cross-platform access to storage as disclosed herein.
  • The instructions may include multiple components or layers.
  • Device drivers 256 may be used for I/O between the server and the storage devices 104.
  • Components such as an emulation layer 252 and/or storage virtualizer 254 may be used to provide uniform access to data for heterogeneous clients, as will be discussed in greater detail below.
  • The connection 109 between the block server 102 and the client 106 may comprise a channel protocol such as SCSI or Fibre Channel, which may also be connected directly to disks or disk controllers (e.g., RAID controllers). Since SCSI and Fibre Channel are protocols having relatively low overhead, they may provide a low-latency connection with high performance.
  • FIG. 4 illustrates examples of emulated storage volumes according to one embodiment.
  • One or more storage devices may include one or more storage volumes (e.g., volumes 105A, 105B, 105C, 105D, and 105E).
  • Because a volume is an abstract logical entity, it can start and end anywhere on a physical disk or in a partition, and it can be composed of space on physical disks on different devices, using a variety of organizational styles, including simple aggregation, mirroring, striping, and RAID-5.
  • Physical disk space can also be used within a volume or a collection of volumes for storing additional information, such as update logs, configuration information, or various kinds of tracking structures. Therefore, a single storage device 104 may comprise more than one logical volume 105, and a single volume 105 may span two or more physical devices 104.
  • A client computer system 106 may expect the data to be formatted in a certain way. In other words, each client 106 may expect the data to be associated with metadata that is specific to the particular operating system 130 executed by the client 106.
  • An “operating system” (“OS”) is a set of program instructions which, when executed, provide basic functions, such as data input/output (“I/O”), for a computer system.
  • “Metadata” generally refers to data that identifies the formatting, structure, origin, or other attributes of other data, for example, as needed to satisfy the native requirements of a particular operating system's block storage I/O subsystems.
  • OS-specific emulated storage volumes 107 may be created from the logical storage volumes 105 on the physical devices 104.
  • The Logical Unit presented for the external volume may be “decorated” with additional metadata which is supplied before and/or after the actual volume contents.
  • This metadata enables the operating system (OS) on the host to recognize that the storage device contains a partition or other virtual structure (e.g., a host-virtual object) that happens to map to the actual contents of the volume.
  • By supplying OS-specific metadata for a storage volume 105, an OS-specific emulated volume 107 may be created that satisfies the I/O expectations of a client running the specific OS.
  • The storage virtualizer may be configured to provide different sets of operating system metadata for different storage devices. If a storage device can be shared (either serially or concurrently) by two different types of operating systems, then the storage virtualizer may be configured to provide different sets of operating system metadata for the same storage device.
  • Multiple emulated volumes 107 may therefore be generated from a single volume 105.
  • For example, OS-specific volumes 107A, 107B, and 107C may emulate storage volume 105A for particular respective operating systems.
  • Emulated volume 107D may emulate volume 105D for the Solaris operating system.
  • Emulated volumes 107E and 107F may emulate volume 105E for the Windows NT and HP-UX operating systems, respectively.
  • FIG. 5 is a flowchart which illustrates a method for emulating operating system metadata to provide cross-platform access to storage volumes according to one embodiment. The method may be performed in whole or in part by a storage virtualization controller or server 102 (as illustrated in FIG. 3).
  • The actual blocks of an external volume are made available as a range of blocks within the storage device.
  • Additional blocks of the storage device are created in order to satisfy the OS-specific needs of a partitioning disk driver or host-based virtualization layer in the host.
  • These additional blocks contain operating system metadata to satisfy the OS-specific needs of the host.
  • The operating system metadata may effectively emulate the storage volume as it would look if hosted under a particular operating system (e.g., Solaris, Windows NT, HP-UX, etc.), thereby generating an emulated storage volume.
  • An “emulated storage volume” refers to a storage volume which imitates a storage volume hosted under a particular operating system.
  • The phrase “hosted under an operating system” refers to the host computer running the particular operating system and/or the volume being formatted using the particular operating system. The overall block layout this produces is sketched below.
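The following sketch shows one way the presented device could be laid out: generated metadata blocks surrounding the unmodified blocks of the real volume. It is a minimal illustration under assumed names and a 512-byte block size, not the patent's implementation:

```python
class EmulatedVolume:
    """Present a backing volume as an OS-recognizable block device by
    'decorating' it with emulated metadata before and after the actual
    volume contents:

      LBA [0, P)        -> generated leading OS metadata (label, partition table, ...)
      LBA [P, P+N)      -> the N unmodified blocks of the real volume
      LBA [P+N, P+N+S)  -> generated trailing OS metadata (e.g., a backup label)
    """

    def __init__(self, backing, volume_blocks, prefix_blocks, suffix_blocks,
                 prefix_gen, suffix_gen):
        self.backing = backing        # object exposing read_block(lba) for the real volume
        self.n = volume_blocks
        self.p = prefix_blocks
        self.s = suffix_blocks
        self.prefix_gen = prefix_gen  # callable(relative_lba) -> one 512-byte metadata block
        self.suffix_gen = suffix_gen

    @property
    def total_blocks(self):
        return self.p + self.n + self.s

    def read_block(self, lba):
        if lba < self.p:
            return self.prefix_gen(lba)                    # synthesized leading metadata
        if lba < self.p + self.n:
            return self.backing.read_block(lba - self.p)   # unmodified volume blocks
        if lba < self.total_blocks:
            return self.suffix_gen(lba - self.p - self.n)  # synthesized trailing metadata
        raise ValueError("LBA out of range")
```

Because the metadata is deterministic for a known host OS, the generator callables can synthesize blocks on each read; a stored-metadata variant (discussed below) would simply replace the callables with a lookup into pre-programmed blocks.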
  • The contents of the metadata blocks may be sent to the host in 303.
  • The host (e.g., a disk driver or host-based virtualization layer) is able to recognize that the storage device contains an addressable object (e.g., a partition or host-virtual object) whose offset and length correspond to the range of blocks that map the actual external volume within the storage device.
  • A driver on the host may use a form of operating system metadata to locate the actual volume contents within the presented storage device.
  • The driver may be located in a layer above the basic disk driver.
  • The contents of the metadata blocks may satisfy the needs of the OS disk driver stack.
  • The contents of the metadata blocks may also allow the storage device to contain unmolested blocks that map to the external volume and that can be located within the storage device by the special driver.
  • A structure similar to a partition table may identify a partition that maps a volume, in which case the OS metadata may be used to locate the volume.
  • Alternatively, additional data may identify a location within the emulated volume where a layered driver could find the volume.
  • The additional data may be in-band within the emulated volume, may come from in-band SCSI data (such as mode pages), or may come from an out-of-band source (e.g., communicated over a network from another computer, or over a network other than the storage network from the virtualizing device).
  • The metadata blocks may either be stored and logically concatenated with the volume, or generated on the fly in response to requests to read blocks from the storage device. Because the contents of the metadata blocks are predictable (if the software and OS on the host are known), it may be more efficient to generate the operating system metadata on the fly. Nevertheless, storing the metadata blocks may allow the format of the blocks to be programmed externally by an agent outside of the storage virtualizer. This external agent may understand the format of an arbitrary operating system, including one that was not known when the storage virtualization software was written and deployed.
  • The method shown in FIG. 5 may be performed in response to an I/O request from the host computer system.
  • For example, a data request may be received from a client computer system 106.
  • The data request may comprise a request to access a set of data which is stored on one or more storage volumes 105 and whose access is controlled by the server 102.
  • The client system 106 may send the data request in response to a request from the client's application software to read, write, delete, change ownership or security attributes, or perform another I/O function on a data object (e.g., a file) that is managed by the server 102.
  • A client's operating system software 130 may handle the sending and receiving of data regarding the data request and any responses from the server 102.
  • A data request may include a file identifier (e.g., a file name, file handle, or file ID number) to identify the requested file, along with any additional information which is useful for performing the desired operation.
  • For example, a “read” request may include the file handle, an offset into the file, a length for the read, and a destination address or buffer for the read.
  • The operating system metadata may comprise different information for different operating systems.
  • The operating system metadata may comprise information which is located at the beginning of the emulated storage volume and information which is located at the end of the emulated storage volume.
  • The operating system metadata may identify the set of data as being stored at a particular location on the emulated storage volume.
  • The operating system metadata may comprise an identifier for the emulated storage volume.
  • For example, a Solaris volume typically comprises an identifying VTOC (virtual table of contents) at a particular location on the volume, usually in partition 1 with a copy in the last two cylinders.
  • The operating system metadata may comprise a cylinder alignment and/or cylinder size for the emulated storage volume.
  • The operating system metadata may include appropriate OS-specific boot code if the volume is intended to be bootable. Typically, a boot program does little more than determine where to find the real operating system and then transfer control to that address.
  • A Windows NT or Windows 2000 volume may include the following characteristics: a “magic number” (i.e., numbers that the OS expects to see, usually at or near the start of the disk); a Master Boot Record (MBR) comprising all 512 bytes of sector zero; a fixed size and position on a disk or over a set of disks; a particular cylinder alignment, organized as a number of 512-byte blocks that can be read or written by the OS; one or more subdisks, grouped (in Windows 2000) into sets called plexes; and, optionally, a file system.
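As a concrete illustration of such decoration, the sketch below builds a plausible 512-byte MBR for sector zero, with one partition entry pointing at the block range that maps the encapsulated volume. The 0x55AA signature and the 16-byte partition-entry layout are standard MBR structure; the partition type code, the CHS placeholders, and the function name are illustrative choices, not the patent's:

```python
import struct


def make_mbr(part_start_lba: int, part_num_sectors: int,
             part_type: int = 0x07) -> bytes:
    """Synthesize sector zero: an MBR describing one partition that maps
    the encapsulated volume. Minimal sketch: no boot code, one entry."""
    sector = bytearray(512)
    # The four 16-byte partition entries start at offset 446.
    entry = struct.pack(
        "<B3sB3sII",
        0x00,              # status: not bootable (0x80 would mark it active)
        b"\xfe\xff\xff",   # CHS start placeholder; modern stacks use the LBA fields
        part_type,         # partition type code (0x07 chosen here for illustration)
        b"\xfe\xff\xff",   # CHS end placeholder
        part_start_lba,    # LBA of the partition's first sector
        part_num_sectors,  # partition length in sectors
    )
    sector[446:462] = entry
    sector[510:512] = b"\x55\xaa"  # the "magic number" the OS expects in sector zero
    return bytes(sector)
```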
  • As another example, a Solaris volume includes a VTOC (virtual table of contents), typically in partition 1.
  • The VTOC may be emulated by operating system metadata generated in 305.
  • The VTOC may include a layout version identifier, a volume name, the sector size (in bytes) of the volume, the number of partitions in the volume, the free space on the volume, partition headers (each including an ID tag, permission flags, the partition's beginning sector number, and the number of blocks in the partition), and other suitable information.
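A sketch of emulating such a table follows. It packs the fields just listed into one block, but the byte layout is deliberately illustrative: it is not the exact Solaris on-disk VTOC format, and all names are assumptions.

```python
import struct


def make_vtoc_like(volume_name: bytes, sector_size: int, partitions) -> bytes:
    """Pack a VTOC-style table: layout version, volume name, sector size,
    partition count, then per-partition (tag, flags, start sector, blocks).
    Illustrative layout only; a real emulation would match the target OS
    byte-for-byte, including its magic numbers and checksums."""
    buf = bytearray(512)
    struct.pack_into("<I", buf, 0, 1)                 # layout version identifier
    buf[4:12] = volume_name[:8].ljust(8, b"\x00")     # volume name, padded
    struct.pack_into("<I", buf, 12, sector_size)      # sector size in bytes
    struct.pack_into("<I", buf, 16, len(partitions))  # number of partitions
    offset = 20
    for tag, flags, start_sector, num_blocks in partitions:
        struct.pack_into("<HHII", buf, offset, tag, flags, start_sector, num_blocks)
        offset += 12
    return bytes(buf)


# One partition whose extent maps the encapsulated volume (values illustrative):
vtoc = make_vtoc_like(b"extvol", 512, [(0x02, 0x00, 16, 2_000_000)])
```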
  • System-to-system variations in operating system metadata may be configured in a variety of ways.
  • For example, a storage device may be configured to respond with a particular metadata format to particular specified hosts (e.g., particular specified I/O controllers).
  • A management infrastructure layer may communicate with host-based software, determine the necessary metadata format to use for a particular host, and communicate the information to the storage virtualizer.
  • Alternatively, an extended I/O request (e.g., a vendor-unique SCSI request) may be used to directly inform the storage virtualizer of the required metadata format. A simple dispatch keyed on the requesting host is sketched below.
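However the format is configured, the virtualizer ends up choosing a metadata generator per host. A hypothetical registry keyed by initiator, reusing the earlier make_mbr and make_vtoc_like sketches (all identifiers here are assumptions):

```python
PREFIX_BLOCKS = 16     # assumed number of leading metadata blocks
METADATA_FORMATS = {}  # initiator (host port / controller ID) -> OS format name


def register_host(initiator_id: str, os_format: str) -> None:
    """Record the format for a host, e.g. as reported by a management
    layer or by a vendor-unique SCSI request from host-based software."""
    METADATA_FORMATS[initiator_id] = os_format


def leading_metadata(initiator_id: str, volume_blocks: int) -> bytes:
    """Return the first emulated metadata block appropriate to the host."""
    fmt = METADATA_FORMATS.get(initiator_id, "raw")
    if fmt == "windows":
        return make_mbr(PREFIX_BLOCKS, volume_blocks)
    if fmt == "solaris":
        return make_vtoc_like(b"extvol", 512,
                              [(0x02, 0x00, PREFIX_BLOCKS, volume_blocks)])
    return b"\x00" * 512  # no decoration for hosts that consume the raw LU
```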
  • A layered driver that locates the external volume within the storage device may be used instead of system-to-system variations in operating system metadata.
  • The layered driver may locate the external volume within the storage device at a standard offset.
  • Alternatively, the layered driver may locate the external volume by reading an offset encoded within some block of the storage device that does have a known offset, or encoded within some property of the disk (e.g., within a SCSI mode page). A sketch of the encoded-offset variant follows.
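A minimal sketch of the encoded-offset variant: the virtualizer writes a locator block at a well-known LBA, and the layered driver reads it back to find the encapsulated volume. The magic value, the LBA, and the field layout are all hypothetical.

```python
import struct

LOCATOR_MAGIC = b"XVOL"  # hypothetical signature
LOCATOR_LBA = 1          # assumed well-known block that the layered driver reads


def make_locator_block(volume_offset_lba: int, volume_num_blocks: int) -> bytes:
    """Virtualizer side: encode where the real volume lives inside the
    presented storage device."""
    block = bytearray(512)
    block[0:4] = LOCATOR_MAGIC
    struct.pack_into("<QQ", block, 8, volume_offset_lba, volume_num_blocks)
    return bytes(block)


def find_volume(read_block):
    """Layered-driver side: read the known block and recover (offset, length),
    or return None to fall back to probing a standard offset."""
    block = read_block(LOCATOR_LBA)
    if block[0:4] != LOCATOR_MAGIC:
        return None
    return struct.unpack_from("<QQ", block, 8)
```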
  • Preferred, suggested, or mandatory names may be associated with storage devices.
  • Naming a block device may allow applications and file systems to know what they are accessing. Moreover, naming a block device may allow the administrator of an environment to understand the relationship between the device that the file system is using and the virtual volume defined by the block virtualization.
  • Block devices may be named by using software running on the host with the file system. That software might use symbolic links, alternate device files, or similar mechanisms to create an alias of a device node normally presented by the disk driver (as sketched below). Alternatively, a local driver that sits above the disk driver may perform remapping to provide a local name space and devices that are coordinated with the external virtualization.
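The symbolic-link mechanism can be as simple as the following; both device paths are hypothetical examples:

```python
import os


def publish_alias(os_device: str = "/dev/sdq",
                  stable_name: str = "/dev/extvol/payroll") -> None:
    """Alias the node the disk driver presents under a stable,
    virtualization-aware name that applications can rely on."""
    os.makedirs(os.path.dirname(stable_name), exist_ok=True)
    if os.path.islink(stable_name):
        os.unlink(stable_name)  # refresh a stale alias
    os.symlink(os_device, stable_name)
```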
  • A management communication layer may provide coordination of the naming function.
  • The name may be communicated through the basic properties of the external virtualized Logical Unit (e.g., through an extended SCSI mode page).
  • Alternatively, the name may be provided in a part of the emulated volume that resides outside the address space dedicated to the encapsulated volume.
  • The method set forth in FIG. 5 may be performed a plurality of times, each time for a different client computer and/or different operating system. As illustrated in FIG. 4, a plurality of OS-specific emulated volumes 107 may be generated for a single volume 105.
  • The storage volume may be moved from a host (e.g., a LAN-connected computer system) to the network (e.g., the SAN) prior to performing the method of FIG. 5.
  • Alternatively, the method and system described herein may be implemented using host-based storage devices rather than off-host, SAN-based storage.
  • In that case, the method of FIG. 5 may be performed to generate emulated storage volumes 107 for storage volumes 105 which are managed by a host computer system (typically on a LAN).
  • Emulated storage volumes can be dynamic: volumes can be created, grown, shrunk, deleted, and snapshotted. In one embodiment, emulated storage volumes can be assigned to and unassigned from systems. These operations may typically be performed synchronously and online.
  • One or more Logical Units may be created in advance of their use by client systems.
  • The pre-provisioned Logical Units may contain the appropriate emulated metadata and may be pre-assigned to client systems. The presence of the emulated metadata may permit the storage stack on client systems to recognize these pre-provisioned Logical Units.
  • A volume may then be assigned to a pre-provisioned Logical Unit.
  • The assignment may increase the size of the Logical Unit to include the volume, and it may adjust the emulated metadata as necessary to point to the volume and to reflect the new volume and Logical Unit sizes.
  • The storage stack (e.g., the disk driver) may be altered to recognize the new Logical Unit size and/or re-read the emulated metadata (e.g., to re-read the emulated OS partition table).
  • The pre-provisioned Logical Units may be given a size that matches the maximum Logical Unit size.
  • Alternatively, the pre-provisioned Logical Units may be given a size that matches the maximum emulated volume size. In either case, the pre-provisioned Logical Unit would largely comprise unmapped blocks.
  • An emulated volume may be mapped to multiple Logical Units to support larger volumes. In one embodiment, multiple emulated volumes may be mapped to one Logical Unit to reduce the number of pre-provisioned Logical Unit assignments. The pre-provisioning flow is sketched below.
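A sketch of the pre-provisioning idea, under assumed names: the Logical Unit is advertised at its maximum size so the host stack discovers it once, reads as zeros while unmapped, and is bound to a volume (with regenerated metadata) online.

```python
class PreProvisionedLU:
    """A Logical Unit created before any volume is assigned to it."""

    def __init__(self, lun: int, max_blocks: int):
        self.lun = lun
        self.max_blocks = max_blocks  # advertised maximum size; mostly unmapped at first
        self.volume = None            # backing volume, absent until assignment
        self.metadata = b""           # emulated metadata, regenerated on assignment

    def assign(self, volume, metadata_gen) -> None:
        """Bind a volume online and regenerate the emulated metadata so the
        host's re-read partition table points at the new contents."""
        self.volume = volume
        self.metadata = metadata_gen(volume)

    def read_block(self, lba: int) -> bytes:
        if lba == 0 and self.metadata:
            return self.metadata[:512]  # emulated metadata block
        if self.volume is not None and 1 <= lba <= self.volume.num_blocks:
            return self.volume.read_block(lba - 1)
        return b"\x00" * 512            # unmapped blocks read as zeros
```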
  • A carrier medium may include storage media or memory media such as magnetic or optical media (e.g., disk or CD-ROM), volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.) and ROM, as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.

Abstract

A method and system are provided for emulating operating system metadata to provide cross-platform access to storage volumes. The method may include generating operating system metadata for a storage device, wherein the operating system metadata emulates a storage volume hosted under a first operating system. The method may further include making the operating system metadata available to a host computer system, wherein the host computer system runs the first operating system. The operating system metadata enables the host computer system to recognize the storage device as the storage volume hosted under the first operating system.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention is related to the field of storage management and, more particularly, to the integration of off-host storage virtualization with hosts in a heterogeneous computing environment.
  • 2. Description of the Related Art
  • Modern enterprise computing environments are characterized by pressures for continuous availability and growth. These factors are driving the adoption of new approaches to computing and storage. Enterprise computing environments are increasingly using computer clusters, Storage Area Networks (SANs), and other centralized storage mechanisms to simplify storage, improve availability, and handle escalating demands for data and applications.
  • Clustering may be defined as the use of multiple computers (e.g., PCs or UNIX workstations), multiple storage devices, and redundant interconnections to form what appears to external users as a single and highly available system. Clustering may be used for load balancing and parallel processing as well as for high availability.
  • The storage area network (SAN) model places storage on its own dedicated network, removing data storage from the main user network. This dedicated network most commonly uses Fibre Channel technology as a versatile, high-speed transport. The SAN includes one or more hosts that provide a point of interface with LAN users, as well as (in the case of large SANs) one or more fabric switches, SAN hubs and other devices to accommodate a large number of storage devices. The hardware (e.g. fabric switches, hubs, bridges, routers, cables, etc.) that connects workstations and servers to storage devices in a SAN is referred to as a “fabric.” The SAN fabric may enable server-to-storage device connectivity through Fibre Channel switching technology to a wide range of servers and storage devices.
  • The versatility of the SAN model enables organizations to perform tasks that were previously difficult to implement, such as LAN-free and server-free tape backup, storage leasing, and full-motion video services. SAN deployment promises numerous advantages, including cost management through storage consolidation, higher availability of data, better performance and seamless management of online and offline data. In addition, the LAN is relieved of the overhead of disk access and tape backup, data availability becomes less server-dependent, and downtime incurred by service and maintenance tasks affects more granular portions of the available storage system.
  • A block server enables a computer system to take its storage and “serve” that storage onto a SAN (e.g., as virtual SCSI disks). As used herein, a “block server” comprises a hardware or software entity that provides a collection of linearly addressed blocks of uniform size that can be read or written. A block server may also be referred to herein as a “block device,” “block-server appliance,” or “appliance.” The block server and its virtual disks may have many advantages for ease of management and consolidation of storage. For example, a block server may provide the ability to easily allocate and reallocate storage on a SAN: the right amount of storage to the right computer system at the right time. This functionality may be thought of as “repurposing” the storage from the application server perspective. A block server may also provide the ability to consolidate storage behind the SAN (i.e., managing the storage pool). For example, this consolidation can be employed with pre-existing Fibre Channel or SCSI storage not on a SAN. One could then “repurpose” non-SAN storage and move it into a managed SAN environment.
  • A block device differs from a file in that it does not require use of a file system, and is typically less dynamic. A block device presented by an operating system offers relatively few primitives: open, close, read, write, plus a few miscellaneous control and query primitives. File systems provide a richer set of primitives, including support for creating and removing files, appending to files, creating and removing directories, etc. Typical interfaces to block devices also allow for higher raw throughput and greater concurrency than typical interfaces to single files of a file system. Block devices residing on hardware devices typically present some form of SCSI interface, though other interfaces are possible.
  • A basic block device comprises a simple array of blocks. The prototypical block device is a single disk drive presenting all of its sectors as an indexed array of blocks. Disk arrays and volume managers introduce virtualization of blocks, creating some number of virtual block devices. In block virtualization, one or more layers of software and/or hardware rearrange blocks from one or more disks, add various kinds of functions, and present the aggregation to a layer above as if it were essentially a collection of basic disk drives (i.e., presenting the more complex structure as if it were simple arrays of blocks). Block virtualization can add aggregation (striping and spanning), mirroring and other forms of redundancy, some performance optimizations, snapshots, replication, and various capabilities for online reorganization. Block virtualization provides all these capabilities without affecting the structure of the file systems and applications that use them.
  • As used herein, a “logical volume” comprises a virtualized block device that is presented directly for use by a file system, database, or other applications that can directly use block devices. This differs from block devices implemented in hardware devices, or below system disk drivers, in that those devices do not present a direct system device that can be opened for direct use. Instead, a system-dependent disk driver is typically used to access the device. The disk driver is generally unaware of the hardware virtualization, but adds some limited virtualization of its own (often just segmenting in the form of partitions). The disk driver also forms an abstraction barrier that makes it more difficult for applications and file systems to cooperate with the underlying virtualization in advanced ways.
  • As used herein, a “logical or physical disk [drive]” (also referred to as a “physical volume”) comprises a disk drive (physical) or a device (logical) presented by a hardware block virtualizer to look like a disk drive. Disk arrays present logical disks. Virtualizing host bus adapters and many virtualizing switches also present logical disks. Upper layers of virtualization typically run on top of logical disks.
  • Distributed block virtualization may distribute a description of how a virtual block device (for example, a logical volume or a virtual Logical Unit) relates to underlying storage, as well as how distributed block virtualization components might relate in order to accomplish certain functions. As used herein, “distributed [block] virtualization” typically refers to what is commonly called “out-of-band” virtualization. Block virtualization is basically the concept of defining a more complex structure between a consumer of a block device and the underlying block storage. The block device presented is often called a logical volume. Distributed block virtualization communicates that structure between systems so that several systems can share parts of the underlying storage that is managed above by a virtualizer, and so that the implementation of some of the block virtualization operations can be distributed and coordinated.
  • Traditionally, block storage architectures included the following layers in the I/O stack of the host computer system: file system, swap, or database; logical partitioning in a disk driver; and an interconnect-dependent I/O driver. A storage device (or other storage subsystem outside of the host) typically included an interconnect-dependent target firmware and one or more disks. Logical partitioning in the disk driver was developed as a way of subdividing the space from disk drives so that several smaller file systems (or a raw swap device) could be created on a single disk drive.
  • Operating systems differ in how they perform logical partitioning. Most operating systems (e.g., Solaris, Windows, and Linux) use the simple partitioning described above. Some UNIX-based operating systems from HP and IBM, however, do not implement simple logical partitioning in their disk drivers; instead, these operating systems include a virtualization layer that defines logical volumes that can subdivide or span disks in more flexible ways. These logical volumes may be referred to as “host-virtual objects.” The virtualization layer used with host-virtual objects may employ complex on-disk metadata, potentially spread across several disks and potentially including additional metadata stored elsewhere, to define a virtual structure. For both simple partitions and host-virtual objects, the storage management function of dividing storage is implemented in host software, and underlying disk storage subsystems are expected basically to supply raw storage containers which are not used directly by file systems or applications.
  • Evolution in this storage management structure has occurred through increased complexity of each layer, or by introducing intermediate layers that emulate the relationships below and above and leaving surrounding layers unchanged. Most commonly, block virtualization layers have been added in at various places in the I/O stack. Block virtualization layers have been added between the file system and partitioning disk driver, between the target firmware and the disk (e.g., disk arrays), and between the interconnect driver in the system and the physical bus (e.g., virtualizing host bus adapters). More recently, there has been a trend toward adding virtualization into the physical interconnect itself (e.g., virtualizing SAN switches and block-server appliances).
  • If block virtualization is implemented outside of the host, and if preservation of the structure of the storage stack is required, then external virtualization may be used to present standard SCSI LUNs (with optional extensions) to the host. A Logical Unit is a block device with an interface presented on a storage bus or a storage network as described, for example, by the SCSI standard. A LUN (logical unit number) is a unique identifier used on a SCSI bus to distinguish between devices that share the same bus; these numbers are used by a connected host or device to address Logical Units over a given interface. As used herein, the term “storage device” is intended to include SCSI Logical Unit devices as well as other suitable hardware devices.
  • In the process of presenting the blocks of the block-virtual volumes directly to file systems or block-consuming applications, several problems may be encountered. First, without a consistent name for a block device, applications and file systems may not know what they are accessing, and the administrator of an environment may not understand the relationship between the device that the file system is using and the virtual volume defined by the block virtualization.
  • Second, any partitioning or virtualization schemes that are specific to a particular operating system may interfere with cross-platform (e.g., cross-operating-system) access to external storage volumes. Modern enterprise computing environments may include computer systems and storage devices from many different vendors. The computer systems may be operating under different operating systems, each of which may use its own file system. Different file systems may each feature their own proprietary sets of metadata relating to their underlying data objects. For these reasons and more, offering uniform access to storage devices in a heterogeneous computing environment can be problematic.
  • SUMMARY OF THE INVENTION
  • Various embodiments of a method and system for emulating operating system metadata to provide cross-platform access to storage volumes are disclosed. In one embodiment, the method may include generating operating system metadata for a storage device, wherein the operating system metadata emulates a storage volume hosted under a first operating system. The method may further include making the operating system metadata available to a host computer system, wherein the host computer system runs the first operating system. The operating system metadata enables the host computer system to recognize the storage device as the storage volume hosted under the first operating system.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an exemplary enterprise computing system, including a storage area network, with which embodiments of a system and method for providing cross-platform access to storage volumes may be implemented.
  • FIG. 2 illustrates an exemplary computer system with which embodiments of a system and method for providing cross-platform access to storage volumes may be implemented.
  • FIG. 3 illustrates an architecture of software and/or hardware components for providing cross-platform access to storage volumes according to one embodiment.
  • FIG. 4 illustrates examples of emulated storage volumes according to one embodiment.
  • FIG. 5 is a flowchart which illustrates a method for emulating operating system metadata to provide cross-platform access to storage volumes according to one embodiment.
  • While the invention is described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • Various embodiments of the system and method disclosed herein provide cross-platform accessibility of logical storage volumes that are encapsulated within and exported by various storage devices or systems. Typically, the storage devices or systems will export block devices using basic SCSI or other appropriate protocols in a way that is picked up by regular system disk drivers. Using the system and method disclosed herein, a logical volume which is defined externally to one or more hosts may be made available as that logical volume to the one or more hosts, even if those hosts use different operating systems and different kinds of disk drivers for accessing external storage. The system and method may be used with various kinds of storage networks and intelligent disk arrays that are locally attached to systems using various kinds of I/O buses. The system and method described herein may be referred to as “volume tunneling.”
  • FIG. 1 illustrates an exemplary Storage Area Network (SAN) environment with which embodiments of a system and method for providing cross-platform access to storage volumes may be implemented. In one embodiment, the SAN may be described as a high-speed, special-purpose network that interconnects one or more storage devices 104 (e.g., storage devices 104A, 104B, and 104C) with one or more associated host systems or servers 102 on behalf of a larger network of users. This dedicated network may employ Fibre Channel technology. A SAN may be part of the overall network of computing resources for an enterprise or other entity. The one or more servers 102 and one or more storage devices 104 may be coupled via a fabric 100. One or more client systems 106 may access the SAN by accessing one or more of the servers 102 via a network 108. The client systems 106 may communicate with the server 102 to access data on the storage devices 104A, 104B, and 104C which are managed by server 102. The client systems 106 may comprise server systems which manage a different set of storage devices; the client systems 106 may therefore also act as servers and may be coupled to other storage devices (not shown) through the fabric 100.
  • Network 108 may include wired or wireless communications connections separate from the Fibre Channel network. For example, network 108 is representative of any local area network (LAN), such as an intranet, or any wide area network (WAN), such as the Internet. Network 108 may use a variety of wired or wireless connection mediums. For example, wired mediums may include a modem connected to plain old telephone service (POTS), Ethernet, and Fibre Channel; wireless connection mediums may include a satellite link, a modem link through a cellular service, or a wireless link such as Wi-Fi™.
  • Storage devices may include any of one or more types of storage devices including, but not limited to, storage systems such as RAID (Redundant Array of Independent Disks) systems, disk arrays, JBODs (Just a Bunch Of Disks, used to refer to disks that are not configured according to RAID), tape devices, and optical storage devices. These devices may be products of any of a number of vendors including, but not limited to, Compaq, EMC, and Hitachi. Clients 106 and server 102 may run any of a variety of operating systems, including, but not limited to, Solaris 2.6, 7 or 8, Microsoft Windows NT 4.0 (Server and Enterprise Server), Microsoft Windows 2000 (Server, Advanced Server and Datacenter Editions), and various versions of HP-UX. Each server 102 may be connected to the fabric 100 via one or more Host Bus Adapters (HBAs).
  • The hardware that connects servers 102 to storage devices 104 in a SAN may be referred to as a fabric 100. The SAN fabric 100 enables server-to-storage device connectivity through Fibre Channel switching technology. The SAN fabric 100 hardware may include one or more switches (also referred to as fabric switches), bridges, hubs, or other devices such as routers, as well as the interconnecting cables (for Fibre Channel SANs, fibre optic cables). SAN fabric 100 may include one or more distinct device interconnection structures (e.g. Fibre Channel Arbitrated Loops, Fibre Channel Fabrics, etc.) that collectively form the SAN fabric 100.
  • In one embodiment, a SAN-aware file system may use the Network File System (NFS) protocol in providing access to shared files on the SAN. Using NFS, each server 102 may include a logical hierarchy of files (i.e. a directory tree) physically stored on one or more of storage devices 104 and accessible by the client systems 106 through the server 102. These hierarchies of files, or portions or sub-trees of the hierarchies of files, may be referred to herein as “file systems.” In one embodiment, the SAN components may be organized into one or more clusters to provide high availability, load balancing and/or parallel processing. For example, in FIG. 1, server 102 and clients 106A and 106B may be in a cluster.
  • In traditional storage architecture, each server is privately connected to one or more storage devices using SCSI or other storage interconnect technology. If a server is functioning as a file server, it can give other servers (its clients) on the network access to its locally attached files through the local area network. With a storage area network, storage devices are consolidated on their own high-speed network using a shared SCSI bus and/or a fibre channel switch/hub. A SAN is a logical place to host files that may be shared between multiple systems.
  • A shared storage environment is one in which multiple servers may access the same set of data. A challenge with this architecture is the maintenance of consistency between file data and file system data. A common architecture for sharing file-based storage is the file server architecture (e.g., the SAN environment illustrated in FIG. 1). In the file server architecture, one or more servers are connected to a large amount of storage (either attached locally or in a SAN) and provide other systems access to this storage.
  • FIG. 2 illustrates an exemplary computer system 106 with which embodiments of a system and method for providing cross-platform access to storage volumes may be implemented. Generally speaking, the client computer system 106 may include various hardware and software components. In the illustrated embodiment, the client computer system 106 includes a processor 220A coupled to a memory 230A, which is in turn coupled to storage 240. In addition, processor 220A is coupled to a network interface 260. The client computer system 106 may be connected to a network such as network 108 via a network connection 275. Further, the client computer system 106 includes operating system software 130. The operating system software 130 is executable by processor 220A out of memory 230A. The operating system software 130 may include an I/O or storage stack 250. Typically, the I/O stack 250 may include one or more file systems, volume managers, and device drivers.
  • Processor 220A may be configured to execute instructions and to operate on data stored within memory 230A. In one embodiment, processor 220A may operate in conjunction with memory 230A in a paged mode, such that frequently used pages of memory may be paged in and out of memory 230A from storage 240 according to conventional techniques. It is noted that processor 220A is representative of any type of processor. For example, in one embodiment, processor 220A may be compatible with the x86 architecture, while in another embodiment processor 220A may be compatible with the SPARC™ family of processors.
  • Memory 230A is configured to store instructions and data. In one embodiment, memory 230A may be implemented in various forms of random access memory (RAM) such as dynamic RAM (DRAM) or synchronous DRAM (SDRAM). However, it is contemplated that other embodiments may be implemented using other types of suitable memory.
  • Storage 240 is configured to store instructions and data. Storage 240 may be an example of any type of mass storage device or system. For example, in one embodiment, storage 240 may be implemented as one or more hard disks configured independently or as a disk storage system. In one embodiment, the disk storage system may be an example of a redundant array of inexpensive disks (RAID) system. In an alternative embodiment, the disk storage system may be a disk array or Just a Bunch Of Disks (JBOD, used to refer to disks that are not configured according to RAID). In yet other embodiments, storage 240 may include tape drives, optical storage devices, or RAM disks, for example.
  • Network interface 260 may implement functionality to connect the client computer system 106 to a network such as network 108 via network connection 275. For example, network interface 260 may include a hardware layer and a software layer which controls the hardware layer. The software may be executable by processor 220A out of memory 230A to implement various network protocols such as TCP/IP and the Hypertext Transfer Protocol (HTTP), for example.
  • While FIGS. 1 and 2 illustrate typical computer system and network storage architectures in which embodiments of the system and method for providing cross-platform access to storage volumes may be implemented, embodiments may be implemented in other computer and network storage architectures including other SAN architectures.
  • FIG. 3 illustrates an architecture of software and/or hardware components for providing cross-platform access to storage volumes according to one embodiment. Embodiments of a server system 102 may be similar to the client system 106 illustrated in FIG. 2. The server system 102 may include a processor 220B coupled to a memory 230B which is in turn coupled to optional storage. The server computer system 102 may be connected to a network via a network connection. In one embodiment, the server computer system 102 may include operating system and/or server software which is executable by processor 220B out of memory 230B. Using the operating system software and/or server software, the server system 102 may “own” or manage data objects (e.g., files) on one or more storage devices 104. In one embodiment, the server system 102 may execute file server software to control access to the data it owns. In another embodiment, the server functionality may be implemented in hardware and/or firmware.
  • In one embodiment, the server 102 may take the form of a block-server appliance (e.g., on a SAN). The server 102 may be referred to herein as a “storage virtualization controller” or “storage virtualizer.” As used herein, a storage virtualization controller is a computer system or other apparatus which is operable to emulate operating-system-specific storage metadata to enable cross-platform availability of managed storage devices 104.
  • Storage devices are usually designed to provide data to servers 102 using one of two methods, either block-level or file-level access. Block I/O is typically the I/O which is tied directly to the disks 104 and used when direct and fast access to the physical drives themselves is required by the client application. The client 106 may request the data from the block server 102 using the starting location of the data blocks and the number of blocks to be transferred. In one embodiment, the operating system 130 on the client 106 may provide a file system to help translate the block locations into file locations.
  • The memory 230B of the server 102 may store instructions which are executable by the processor 220B to implement the method and system for providing cross-platform access to storage as disclosed herein. The instructions may include multiple components or layers. For example, device drivers 256 may be used for I/O between the server and the storage devices 104. Components such as an emulation layer 252 and/or storage virtualizer 254 may be used to provide uniform access to data for heterogeneous clients, as will be discussed in greater detail below.
  • The connection 109 between the block server 102 and the client 106 may comprise a channel protocol such as SCSI or fibre channel which may also be connected directly to disks or disk controllers (e.g., RAID controllers). Since SCSI and fibre channel are protocols having relatively low overhead, they may provide a low latency connection with high performance.
  • FIG. 4 illustrates examples of emulated storage volumes according to one embodiment. One or more storage devices (e.g., devices 104A, 104B, and 104C) may include one or more storage volumes (e.g., volumes 105A, 105B, 105C, 105D, and 105E). Because a volume is an abstract logical entity, it can start and end anywhere on a physical disk or in a partition and be composed of space on physical disks on different devices, using a variety of organizational styles, including simple aggregation, mirroring, striping, and RAID-5. Physical disk space can also be used within a volume or a collection of volumes for storing additional information, such as update logs, configuration information, or various kinds of tracking structures. Therefore, a single storage device 104 may comprise more than one logical volume 105, and a single volume 105 may span two or more physical devices 104.
  • When reading data from storage through its own I/O stack 250, a client computer system 106 may expect the data to be formatted in a certain way. In other words, each client 106 may expect the data to be associated with metadata that is specific to the particular operating system 130 executed by the client 106. As used herein, an “operating system” (“OS”) is a set of program instructions which, when executed, provide basic functions, such as data input/output (“I/O”), for a computer system. As used herein, “metadata” generally refers to data that identifies the formatting, structure, origin, or other attributes of other data, for example, as needed to satisfy the native requirements of a particular operating system's block storage I/O subsystem.
  • Using the method and system for emulating operating system metadata disclosed herein, OS-specific emulated storage volumes 107 may be created from the logical storage volumes 105 on the physical devices 104. In one embodiment, the Logical Unit presented for the external volume may be “decorated” with additional metadata which is supplied before and/or after the actual volume contents. This metadata enables the operating system (OS) on the host to recognize that the storage device contains a partition or other virtual structure (e.g., a host-virtual object) that happens to map to the actual contents of the volume. By generating OS-specific metadata for a storage volume 105, an OS-specific emulated volume 107 may be created that satisfies the I/O expectations of a client running the specific OS.
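  • As a concrete illustration of this “decoration,” consider the following sketch (in Python; all names are hypothetical illustrations, not the implementation described herein), which models an emulated device as header metadata blocks, followed by the unmodified volume blocks, followed by trailer metadata blocks, with the partition offset and length mapping exactly onto the volume range:

        # Schematic model of a "decorated" Logical Unit: OS metadata blocks
        # are supplied before and after the unmodified contents of the
        # external volume. All names here are illustrative.
        BLOCK_SIZE = 512

        def build_emulated_device(volume_blocks, header_blocks, trailer_blocks):
            """Logically concatenate metadata and volume into one block space."""
            header_len = len(header_blocks)
            layout = {
                "partition_offset": header_len,            # first block of the volume
                "partition_length": len(volume_blocks),    # number of volume blocks
                "total_blocks": header_len + len(volume_blocks) + len(trailer_blocks),
            }
            blocks = list(header_blocks) + list(volume_blocks) + list(trailer_blocks)
            return layout, blocks

        # A 4-block volume decorated with 2 header blocks and 1 trailer block.
        volume = [b"\xAA" * BLOCK_SIZE for _ in range(4)]
        header = [b"META".ljust(BLOCK_SIZE, b"\x00") for _ in range(2)]
        trailer = [b"META".ljust(BLOCK_SIZE, b"\x00")]
        layout, device = build_emulated_device(volume, header, trailer)
        assert device[layout["partition_offset"]] is volume[0]   # volume untouched
        print(layout)   # {'partition_offset': 2, 'partition_length': 4, 'total_blocks': 7}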
  • Different operating systems may have different requirements in accessing volumes on storage devices, and thus different sets of operating system metadata may be used for different operating systems. Therefore, the storage virtualizer may be configured to provide different sets of operating system metadata for different storage devices. If a storage device can be shared (either serially or concurrently) by two different types of operating systems, then the storage virtualizer may be configured to provide different sets of operating system metadata for the same storage device.
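  • One way to organize such per-OS metadata generation, sketched below with hypothetical generator names, is a registry keyed by operating system type, so that the same storage device can be offered with one metadata format to one host and another format to a different host:

        # Hypothetical registry mapping an OS type to a metadata generator.
        # The generators are placeholders for the OS-specific formats
        # discussed below (e.g., an emulated MBR or an emulated VTOC).
        def generate_windows_metadata(volume_blocks):
            return [b"MBR-PLACEHOLDER".ljust(512, b"\x00")]

        def generate_solaris_metadata(volume_blocks):
            return [b"VTOC-PLACEHOLDER".ljust(512, b"\x00")]

        METADATA_GENERATORS = {
            "windows_nt": generate_windows_metadata,
            "solaris": generate_solaris_metadata,
        }

        def metadata_for(os_type, volume_blocks):
            return METADATA_GENERATORS[os_type](volume_blocks)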
  • In one embodiment, multiple emulated volumes 107 may therefore be generated from a single volume 105. For example, OS-specific volumes 107A, 107B, and 107C may emulate storage volume 105A for particular respective operating systems. Emulated volume 107D may emulate volume 105D for the Solaris operating system. Emulated volumes 107E and 107F may emulate volume 105E for the Windows NT and HP-UX operating systems, respectively.
  • FIG. 5 is a flowchart which illustrates a method for emulating operating system metadata to provide cross-platform access to storage volumes according to one embodiment. The method may be performed in whole or part by a storage virtualization controller or server 102 (as illustrated in FIG. 3).
  • The actual blocks of an external volume (e.g., a volume defined in an external virtualizer) are made available as a range of blocks within the storage device. In 301, additional blocks of the storage device (typically before and/or after the volume) are created; these blocks contain operating system metadata that satisfies the OS-specific needs of a partitioning disk driver or host-based virtualization layer on the host. When associated with a storage volume 105, the operating system metadata may effectively emulate the storage volume as it would look if hosted under a particular operating system (e.g., Solaris, Windows NT, HP-UX, etc.), thereby generating an emulated storage volume. An “emulated storage volume” refers to a storage volume which imitates a storage volume hosted under a particular operating system. As used herein, the phrase “hosted under an operating system” refers to the host computer running the particular operating system and/or the volume being formatted using the particular operating system.
  • The contents of the metadata blocks may be sent to the host in 303. Using the metadata, the host (e.g., a disk driver or host-based virtualization layer) is able to recognize that the storage device contains an addressable object (e.g., a partition or host-virtual object) whose offset and length correspond to the range of blocks that map the actual external volume within the storage device.
  • In one embodiment, a driver on the host may use a form of operating system metadata to locate the actual volume contents within the presented storage device. The driver may be located in a layer above the basic disk driver. In this embodiment, the contents of the metadata blocks may satisfy the needs of the OS disk driver stack. Furthermore, the contents of the metadata blocks may allow the storage device to contain unmolested blocks that map to the external volume and that can be located within the storage device by the special driver. In one embodiment, a structure similar to a partition table may identify a partition that maps a volume, in which case the OS metadata may be used to locate the volume. In an alternative embodiment, additional data may identify a location within the emulated volume where a layered driver could find the volume. The additional data may be in-band within the emulated data, from in-band SCSI data (such as mode pages), or from an out-of-band source (e.g., communicated over a network from another computer or over a network other than the storage network from the virtualizing device).
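  • The sketch below illustrates one of these alternatives: a descriptor stored at a known block of the emulated device records the offset and length of the encapsulated volume, and a layered driver reads that block to locate the volume. The descriptor format (a magic string plus two 64-bit integers) is invented here purely for illustration:

        import struct

        # Hypothetical descriptor: 8-byte magic, then the volume's offset and
        # length as 64-bit little-endian block counts, stored at a known LBA.
        DESCRIPTOR_MAGIC = b"VOLTUNL\x00"
        DESCRIPTOR_LBA = 1

        def write_descriptor(device_blocks, offset, length):
            payload = DESCRIPTOR_MAGIC + struct.pack("<QQ", offset, length)
            device_blocks[DESCRIPTOR_LBA] = payload.ljust(512, b"\x00")

        def locate_volume(device_blocks):
            """What a driver layered above the basic disk driver might do."""
            block = device_blocks[DESCRIPTOR_LBA]
            if not block.startswith(DESCRIPTOR_MAGIC):
                raise ValueError("no encapsulated volume descriptor found")
            offset, length = struct.unpack_from("<QQ", block, len(DESCRIPTOR_MAGIC))
            return offset, length

        device = [b"\x00" * 512 for _ in range(16)]
        write_descriptor(device, offset=4, length=10)
        print(locate_volume(device))   # (4, 10)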
  • The metadata blocks may either be stored and logically concatenated with the volume, or generated on the fly in response to requests to read blocks from the storage device. Because the contents of the metadata blocks are predictable (if the software and OS on the host are known), it may be more efficient to generate the operating system metadata on the fly. Nevertheless, storing the metadata blocks may allow the format of the blocks to be programmed externally by an agent outside of the storage virtualizer. This external agent may understand the format of an arbitrary operating system, including one that was not known when the storage virtualization software was written and deployed.
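  • A minimal sketch of the on-the-fly alternative follows (hypothetical names; single-block reads for brevity): reads that fall within the volume range pass through to the backing volume, while reads of the surrounding metadata range are synthesized from the known, predictable format rather than fetched from storage:

        BLOCK_SIZE = 512

        class EmulatedLun:
            """Synthesizes metadata blocks on read instead of storing them."""
            def __init__(self, backing_volume, header_len, gen_metadata):
                self.volume = backing_volume     # list of volume blocks
                self.header_len = header_len     # metadata blocks before the volume
                self.gen = gen_metadata          # lba -> synthesized block

            def read_block(self, lba):
                vol_start = self.header_len
                vol_end = vol_start + len(self.volume)
                if vol_start <= lba < vol_end:
                    return self.volume[lba - vol_start]   # pass through
                return self.gen(lba)                      # generate on the fly

        def predictable_metadata(lba):
            # Placeholder for the deterministic, OS-specific bytes (partition
            # table, label, etc.) that would really occupy this LBA.
            return (b"META%08d" % lba).ljust(BLOCK_SIZE, b"\x00")

        lun = EmulatedLun([b"\xAA" * BLOCK_SIZE] * 4, header_len=2,
                          gen_metadata=predictable_metadata)
        assert lun.read_block(0).startswith(b"META")
        assert lun.read_block(2) == b"\xAA" * BLOCK_SIZE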
  • In one embodiment, the method shown in FIG. 5 may be performed in response to an I/O request from the host computer system. A data request may be received from a client computer system 106. The data request may comprise a request to access a set of data which is stored on one or more storage volumes 105 and whose access is controlled by the server 102. For example, the client system 106 may send the data request in response to a request from the client's application software to read, write, delete, change ownership or security attributes, or perform another I/O function on a data object (e.g., a file) that is managed by the server 102. A client's operating system software 130, including an I/O stack 250, may handle the sending and receiving of data regarding the data request and any responses from the server 102. A data request may include a file identifier (e.g., a file name, file handle, or file ID number) to identify the requested file and any additional information which is useful for performing the desired operation. For example, a “read” request may include the file handle, an offset into the file, a length for the read, and a destination address or buffer of the read.
  • The operating system metadata may comprise different information for different operating systems. For example, the operating system metadata may comprise information which is located at the beginning of the emulated storage volume and information which is located at the end of the emulated storage volume. The operating system metadata may identify the set of data as being stored at a particular location on the emulated storage volume. The operating system metadata may comprise an identifier for the emulated storage volume. For example, a Solaris volume typically comprises an identifying VTOC (virtual table of contents) at a particular location on the volume, usually in partition 1 with a copy in the last two cylinders. The operating system metadata may comprise a cylinder alignment and/or cylinder size for the emulated storage volume. The operating system metadata may include appropriate OS-specific boot code if the volume is intended to be bootable. Typically, a boot program does little more than determine where to find the real operating system and then transfer control to that address.
  • In one embodiment, a Windows NT or Windows 2000 volume may include the following characteristics: a “magic number” (i.e., a value that the OS expects to see, usually at or near the start of the disk); a Master Boot Record (MBR) comprising all 512 bytes of sector zero; a fixed size and position on a disk or over a set of disks; a particular cylinder alignment, organized as a number of 512-byte blocks that can be read or written by the OS; one or more subdisks, which Windows 2000 organizes into groups of subdisks called plexes; and, optionally, a file system. These characteristics may be emulated by the operating system metadata generated in 305.
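  • For a sense of what such emulated metadata looks like in bytes, the sketch below builds a minimal, generic Master Boot Record: 446 bytes left for boot code, a single 16-byte partition entry whose starting LBA and sector count map onto the encapsulated volume, and the 0x55AA signature in the last two bytes of sector zero. This is the standard generic MBR layout, offered as an illustration rather than the exact emulation used by any particular embodiment:

        import struct

        def build_mbr(volume_start_lba, volume_sectors, part_type=0x07):
            """Build a minimal 512-byte MBR whose first partition entry
            points at the encapsulated volume. CHS fields are zeroed for
            brevity; real drivers may also require plausible CHS values."""
            mbr = bytearray(512)
            entry = struct.pack(
                "<B3sB3sII",
                0x80,               # status: active/bootable
                b"\x00\x00\x00",    # CHS start (ignored here)
                part_type,          # partition type (0x07 is commonly NTFS)
                b"\x00\x00\x00",    # CHS end (ignored here)
                volume_start_lba,   # first LBA of the mapped volume
                volume_sectors,     # number of sectors in the volume
            )
            mbr[446:462] = entry            # first of the four partition entries
            mbr[510:512] = b"\x55\xAA"      # the "magic number" the OS expects
            return bytes(mbr)

        sector0 = build_mbr(volume_start_lba=63, volume_sectors=2000000)
        assert len(sector0) == 512 and sector0[510:512] == b"\x55\xAA"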
  • In one embodiment, a Solaris volume includes a VTOC (virtual table of contents), typically in partition 1. The VTOC may be emulated by operating system metadata generated in 305. The VTOC may include a layout version identifier, a volume name, the sector size (in bytes) of the volume, the number of partitions in the volume, the free space on the volume, partition headers (each including an ID tag, permission flags, the partition's beginning sector number, and the number of blocks in the partition), and other suitable information.
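  • The sketch below models the VTOC fields listed above as a plain data structure; it captures the information content only, not Solaris's exact on-disk label encoding:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class PartitionHeader:
            id_tag: int          # tag identifying the partition's intended use
            perm_flags: int      # permission flags
            start_sector: int    # partition's beginning sector number
            num_blocks: int      # number of blocks in the partition

        @dataclass
        class EmulatedVtoc:
            layout_version: int
            volume_name: str
            sector_size: int     # in bytes
            free_space: int      # free space on the volume
            partitions: List[PartitionHeader] = field(default_factory=list)

            @property
            def num_partitions(self) -> int:
                return len(self.partitions)

        vtoc = EmulatedVtoc(layout_version=1, volume_name="tunneled_vol",
                            sector_size=512, free_space=0,
                            partitions=[PartitionHeader(0x02, 0x00, 1024, 2000000)])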
  • System-to-system variations in operating system metadata may be configured in a variety of ways. First, a storage device may be configured to respond with a particular metadata format to particular specified hosts (e.g., particular specified I/O controllers). Second, a management infrastructure layer may communicate with host-based software, determine the necessary metadata format to use for a particular host, and communicate the information to the storage virtualizer. Third, an extended I/O request (e.g., a vendor-unique SCSI request) may be used to directly inform the storage virtualizer of the required metadata format.
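  • All three configuration paths reduce to the storage virtualizer maintaining a mapping from host identity to metadata format, populated statically, by a management layer, or by an extended I/O request, and consulted when a host accesses the device. A toy sketch (all identifiers hypothetical):

        # Hypothetical host-to-format table inside the storage virtualizer.
        host_metadata_format = {}   # initiator id -> metadata format name

        def configure_host(initiator_id, os_format):
            host_metadata_format[initiator_id] = os_format

        # Entries might be set by static per-host configuration, pushed down
        # by a management layer, or set via an extended I/O request.
        configure_host("wwn:10:00:00:00:c9:22:fc:01", "solaris_vtoc")
        configure_host("wwn:10:00:00:00:c9:22:fc:02", "windows_mbr")

        def format_for(initiator_id):
            return host_metadata_format.get(initiator_id, "raw")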
  • In one embodiment, a layered driver that locates the external volume within the storage device may be used instead of system-to-system variations in operating system metadata. In a cross-platform data sharing environment, it may be possible to define a consistent format that satisfies the needs of disk drivers on various operating systems (e.g., Solaris, HP-UX, AIX, and Linux). With those OS-specific needs satisfied, a layered driver may locate the external volume within the storage device by locating the external volume at a standard offset. Alternatively, the layered driver may locate the external volume within the storage device by encoding the offset within some block of the storage device that does have a known offset or by encoding that offset within some property of the disk (e.g., within a SCSI mode page).
  • In one embodiment, preferred, suggested, or mandatory names may be associated with storage devices. Naming a block device may allow applications and file systems to know what they are accessing. Moreover, naming a block device may allow the administrator of an environment to understand the relationship between the device that the file system is using and the virtual volume defined by the block virtualization. Block devices may be named by using software running on the host with the file system. That software might use symbolic links, alternate device files, or similar mechanisms to create an alias of a device node normally presented by the disk driver. Alternately, a local driver that sits above the disk driver may perform remapping to provide a local name space and devices that are coordinated with the external virtualization.
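  • As a small illustration of the symbolic-link approach mentioned above, host software could publish a stable alias for the device node presented by the disk driver; the paths and naming scheme below are hypothetical:

        import os

        def publish_volume_alias(device_node, volume_name, alias_dir="/dev/xvol"):
            """Create a stable alias (e.g., /dev/xvol/payroll_db) for the
            device node presented by the disk driver (e.g., /dev/dsk/c2t0d3s2).
            Paths are hypothetical; real naming schemes vary by OS."""
            os.makedirs(alias_dir, exist_ok=True)
            alias = os.path.join(alias_dir, volume_name)
            if os.path.islink(alias):
                os.unlink(alias)          # refresh a stale alias
            os.symlink(device_node, alias)
            return alias

        # Example (requires privileges on a real system):
        # publish_volume_alias("/dev/dsk/c2t0d3s2", "payroll_db")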
  • In one embodiment, a management communication layer may provide coordination of the naming function. In one embodiment, the basic properties of the external virtualized Logical Unit (e.g., through an extended SCSI mode page) may be extended to include a suggested or preferred name. In another embodiment, the name may be provided in a part of the emulated volume that resides outside the address space dedicated to the encapsulated volume.
  • The method set forth in FIG. 5 may be performed a plurality of times, each for a different client computer and/or different operating system. As illustrated in FIG. 4, a plurality of OS-specific emulated volumes 107 may be generated for a single volume 105.
  • In one embodiment, the storage volume may be moved from a host (e.g., a LAN-connected computer system) to the network (e.g., the SAN) prior to performing the method of FIG. 5. In another embodiment, the method and system described herein may be implemented using host-based storage devices rather than off-host, SAN-based storage. In this embodiment, the method of FIG. 5 may be performed to generate emulated storage volumes 107 for storage volumes 105 which are managed by a host computer system (typically on a LAN).
  • It is noted that the steps described above in conjunction with the description of FIG. 5 are numbered for discussion purposes only and that the steps may be numbered differently in alternative embodiments.
  • In one embodiment, emulated storage volumes can be dynamic: volumes can be created, grown, shrunk, deleted, and snapshotted. In one embodiment, emulated storage volumes can be assigned to and unassigned from systems. These operations may typically be performed synchronously and online.
  • With Logical Units, on the other hand, these various dynamic operations may carry undesirable overhead. The operations may be at least partially asynchronous, having unbounded completion times and ambiguous failures. On some operating systems, a system reboot may be required to complete some of these operations.
  • To reduce the overhead, one or more Logical Units may be created in advance of their use by client systems. The pre-provisioned Logical Units may contain the appropriate emulated metadata and may be pre-assigned to client systems. The presence of the emulated metadata may permit the storage stack on client systems to recognize these pre-provisioned Logical Units.
  • To associate an emulated storage volume with a client system, the volume may be assigned to a pre-provisioned Logical Unit. The assignment may increase the size of the Logical Unit to include the volume, and the assignment may adjust the emulated metadata as necessary to point to the volume and to adjust to the new volume and Logical Unit size.
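  • A sketch of that assignment step follows (object names are illustrative): the pre-provisioned Logical Unit grows to cover the newly mapped volume, and its emulated metadata is regenerated so that it points at the volume's range:

        class PreProvisionedLun:
            """Illustrative LUN that carries emulated metadata blocks and
            can later be assigned an external volume."""
            def __init__(self, header_len, regenerate_metadata):
                self.header_len = header_len      # emulated metadata blocks
                self.total_blocks = header_len    # no volume mapped yet
                self.regenerate = regenerate_metadata
                self.metadata = regenerate_metadata(offset=0, length=0)

            def assign_volume(self, volume_blocks):
                # Grow the LUN to include the volume...
                self.total_blocks = self.header_len + volume_blocks
                # ...and adjust the emulated metadata to point at it.
                self.metadata = self.regenerate(offset=self.header_len,
                                                length=volume_blocks)

        def regen(offset, length):
            return {"partition_offset": offset, "partition_length": length}

        lun = PreProvisionedLun(header_len=2, regenerate_metadata=regen)
        lun.assign_volume(volume_blocks=2048)
        print(lun.total_blocks, lun.metadata)
        # 2050 {'partition_offset': 2, 'partition_length': 2048}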
  • In one embodiment, the storage stack (e.g., the disk driver) may be altered to recognize the new Logical Unit size and/or re-read the emulated metadata (e.g., to re-read the emulated OS partition table). In one embodiment, the pre-provisioned Logical Units may be given a size that matches the maximum Logical Unit size. Alternatively, the pre-provisioned Logical Units may be given a size that matches the maximum emulated volume size. In either case, the pre-provisioned Logical Unit would largely comprise unmapped blocks.
  • In one embodiment, an emulated volume may be mapped to multiple Logical Units to support larger volumes. In one embodiment, multiple emulated volumes may be mapped to one Logical Unit to reduce the number of pre-provisioned Logical Unit assignments.
  • Various embodiments may further include receiving, sending, or storing instructions and/or data implemented in accordance with the foregoing description upon a carrier medium. Generally speaking, a carrier medium may include storage media or memory media such as magnetic or optical media (e.g., disk or CD-ROM), volatile or non-volatile media such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
  • The various methods as illustrated in the Figures and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the method steps may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
  • Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended that the invention embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.

Claims (28)

1. A storage subsystem, comprising:
at least one storage device; and
a storage virtualization controller, wherein the storage virtualization controller is communicatively coupled to the at least one storage device, and wherein the storage virtualization controller is operable to:
generate operating system metadata for the at least one storage device, wherein the operating system metadata emulates a storage volume hosted under a first operating system; and
send the operating system metadata to a host computer system, wherein the host computer system runs the first operating system, and wherein the operating system metadata enables the host computer system to recognize the storage device as the storage volume hosted under the first operating system.
2. The storage subsystem of claim 1,
wherein the operating system metadata enables a block storage I/O stack in the first operating system on the host computer system to recognize the storage device as a partition.
3. The storage subsystem of claim 1,
wherein the operating system metadata enables a block storage I/O stack in the first operating system on the host computer system to recognize the storage device as a host-virtual object.
4. The storage subsystem of claim 1,
wherein the operating system metadata enables a driver on the host computer system to recognize the storage device as an enclosed volume, wherein the driver is layered above a block storage I/O stack in the first operating system.
5. The storage subsystem of claim 1,
wherein the storage virtualization controller is operable to configure the operating system metadata in response to a requirement of the first operating system.
6. The storage subsystem of claim 1,
wherein a management environment is configured to supply operating system types and operating system metadata configuration requirements to the storage virtualization controller, wherein the operating system types comprise the first operating system.
7. The storage subsystem of claim 1,
wherein in generating the operating system metadata for the storage device, the storage virtualization controller is operable to add a storage property to identify an offset and a length of the storage volume.
8. The storage subsystem of claim 1,
wherein an operation is provided to configure operating system types and operating system metadata configuration requirements for generating the operating system metadata, wherein the operating system types comprise the first operating system.
9. The storage subsystem of claim 1,
wherein the storage virtualization controller is operable to receive user input to select one of a plurality of operating system types for the operating system metadata, wherein the operating system types comprise the first operating system.
10. The storage subsystem of claim 1,
wherein the storage virtualization controller is operable to send an operating system metadata configuration instruction to the storage device through a vendor-unique I/O request to the storage device.
11. The storage subsystem of claim 1,
wherein the operating system metadata emulates a storage volume hosted under a first operating system and one or more additional operating systems; and
wherein the operating system metadata enables a layered driver on the host computer system to recognize the storage device.
12. The storage subsystem of claim 1,
using a layered driver on the host computer system to provide access to a storage volume mapped within a Logical Unit, wherein the Logical Unit is provided by an external device or an external virtualization layer.
13. The storage subsystem of claim 1,
wherein a management environment is configured to supply a preferred name of the storage device to software on the host computer system.
14. A method comprising:
generating operating system metadata for a storage device, wherein the operating system metadata emulates a storage volume hosted under a first operating system; and
sending the operating system metadata to a host computer system, wherein the host computer system runs the first operating system, and wherein the operating system metadata enables the host computer system to recognize the storage device as the storage volume hosted under the first operating system.
15. The method of claim 14,
wherein the operating system metadata enables a block storage I/O stack in the first operating system on the host computer system to recognize the storage device as a partition.
16. The method of claim 14,
wherein the operating system metadata enables a block storage I/O stack in the first operating system on the host computer system to recognize the storage device as a host-virtual object.
17. The method of claim 14,
wherein the operating system metadata enables a driver on the host computer system to recognize the storage device as an enclosed volume, wherein the driver is layered above a block storage I/O stack in the first operating system.
18. The method of claim 14, further comprising:
configuring the generating the operating system metadata in response to a requirement of the first operating system.
19. The method of claim 14,
wherein the generating the operating system metadata for the storage device is performed by a storage virtualizer; and
wherein a management environment is configured to supply operating system types and operating system metadata configuration requirements to the storage virtualizer, wherein the operating system types comprise the first operating system.
20. The method of claim 14,
wherein the generating the operating system metadata for the storage device comprises adding a storage property to identify an offset and a length of the storage volume.
21. The method of claim 14,
wherein an operation is provided to configure operating system types and operating system metadata configuration requirements for the generating the operating system metadata, wherein the operating system types comprise the first operating system.
22. The method of claim 14, further comprising:
receiving user input to select one of a plurality of operating system types for the operating system metadata, wherein the operating system types comprise the first operating system.
23. The method of claim 14, further comprising:
sending an operating system metadata configuration instruction to the storage device through a vendor-unique I/O request to the storage device.
24. The method of claim 14,
wherein the operating system metadata emulates a storage volume hosted under a first operating system and one or more additional operating systems; and
wherein the operating system metadata enables a layered driver on the host computer system to recognize the storage device.
25. The method of claim 14,
using a layered driver on the host computer system to provide access to a storage volume mapped within a Logical Unit, wherein the Logical Unit is provided by an external device or an external virtualization layer.
26. The method of claim 14,
wherein a management environment is configured to supply a preferred name of the storage device to software on the host computer system.
27. A carrier medium comprising program instructions, wherein the program instructions are computer-executable to implement:
generating operating system metadata for a storage device, wherein the operating system metadata emulates a storage volume hosted under a first operating system; and
sending the operating system metadata to a host computer system, wherein the host computer system runs the first operating system, and wherein the operating system metadata enables the host computer system to recognize the storage device as the storage volume hosted under the first operating system.
28. A system comprising:
means for generating operating system metadata for a storage device, wherein the operating system metadata emulates a storage volume hosted under a first operating system; and
means for sending the operating system metadata to a host computer system, wherein the host computer system runs the first operating system, and wherein the operating system metadata enables the host computer system to recognize the storage device as the storage volume hosted under the first operating system.
US10/722,614 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes Abandoned US20050114595A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
US10/722,614 US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
JP2006541649A JP4750040B2 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata enabling cross-platform access to storage volumes
CNB2004800405835A CN100552611C (en) 2003-11-26 2004-11-22 Emulating operating system metadata is to provide the system and method to the cross-platform access of storage volume
EP04811936A EP1687706A1 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
PCT/US2004/039306 WO2005055043A1 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US11/156,821 US20050235132A1 (en) 2003-11-26 2005-06-20 System and method for dynamic LUN mapping
US11/156,636 US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume
US11/156,635 US7689803B2 (en) 2003-11-26 2005-06-20 System and method for communication using emulated LUN blocks in storage virtualization environments
US11/156,820 US7669032B2 (en) 2003-11-26 2005-06-20 Host-based virtualization optimizations in storage environments employing off-host storage virtualization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/722,614 US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/039306 Continuation-In-Part WO2005055043A1 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Related Child Applications (4)

Application Number Title Priority Date Filing Date
US11/156,821 Continuation-In-Part US20050235132A1 (en) 2003-11-26 2005-06-20 System and method for dynamic LUN mapping
US11/156,635 Continuation-In-Part US7689803B2 (en) 2003-11-26 2005-06-20 System and method for communication using emulated LUN blocks in storage virtualization environments
US11/156,820 Continuation-In-Part US7669032B2 (en) 2003-11-26 2005-06-20 Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US11/156,636 Continuation-In-Part US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume

Publications (1)

Publication Number Publication Date
US20050114595A1 true US20050114595A1 (en) 2005-05-26

Family

ID=34592023

Family Applications (4)

Application Number Title Priority Date Filing Date
US10/722,614 Abandoned US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US11/156,635 Expired - Fee Related US7689803B2 (en) 2003-11-26 2005-06-20 System and method for communication using emulated LUN blocks in storage virtualization environments
US11/156,636 Abandoned US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume
US11/156,821 Abandoned US20050235132A1 (en) 2003-11-26 2005-06-20 System and method for dynamic LUN mapping

Family Applications After (3)

Application Number Title Priority Date Filing Date
US11/156,635 Expired - Fee Related US7689803B2 (en) 2003-11-26 2005-06-20 System and method for communication using emulated LUN blocks in storage virtualization environments
US11/156,636 Abandoned US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume
US11/156,821 Abandoned US20050235132A1 (en) 2003-11-26 2005-06-20 System and method for dynamic LUN mapping

Country Status (5)

Country Link
US (4) US20050114595A1 (en)
EP (1) EP1687706A1 (en)
JP (1) JP4750040B2 (en)
CN (1) CN100552611C (en)
WO (1) WO2005055043A1 (en)

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050216680A1 (en) * 2004-03-25 2005-09-29 Itzhak Levy Device to allow multiple data processing channels to share a single disk drive
US20050262150A1 (en) * 2004-05-21 2005-11-24 Computer Associates Think, Inc. Object-based storage
US20060123062A1 (en) * 2001-12-19 2006-06-08 Emc Corporation Virtual file system
US20060161754A1 (en) * 2005-01-20 2006-07-20 Dewey Douglas W Apparatus, system, and method for validating logical volume configuration
US20060179343A1 (en) * 2005-02-08 2006-08-10 Hitachi, Ltd. Method and apparatus for replicating volumes between heterogenous storage systems
US20060282438A1 (en) * 2005-06-10 2006-12-14 Microsoft Corporation Performing a deletion of a node in a tree data storage structure
US20070028138A1 (en) * 2005-07-29 2007-02-01 Broadcom Corporation Combined local and network storage interface
US20070038749A1 (en) * 2005-07-29 2007-02-15 Broadcom Corporation Combined local and network storage interface
US20070226270A1 (en) * 2006-03-23 2007-09-27 Network Appliance, Inc. Method and apparatus for concurrent read-only access to filesystem
US20070277015A1 (en) * 2006-05-23 2007-11-29 Matthew Joseph Kalos Apparatus, system, and method for presenting a storage volume as a virtual volume
US20090182961A1 (en) * 2008-01-11 2009-07-16 International Business Machines Corporation Methods, apparatuses, and computer program products for protecting pre-staged provisioned data in a storage system
US20090204759A1 (en) * 2008-02-13 2009-08-13 International Business Machines Corporation On-line volume coalesce operation to enable on-line storage subsystem volume consolidation
US20090216944A1 (en) * 2008-02-22 2009-08-27 International Business Machines Corporation Efficient validation of writes for protection against dropped writes
US20090307716A1 (en) * 2008-06-09 2009-12-10 David Nevarez Block storage interface for virtual memory
US20100082715A1 (en) * 2008-09-30 2010-04-01 Karl Dohm Reduced-Resource Block Thin Provisioning
US7761738B2 (en) 2006-09-07 2010-07-20 International Business Machines Corporation Establishing communications across virtual enclosure boundaries
US20110055476A1 (en) * 2008-03-27 2011-03-03 Christ Bryan E RAID Array Access By A RAID Array-unaware Operating System
US7945657B1 (en) * 2005-03-30 2011-05-17 Oracle America, Inc. System and method for emulating input/output performance of an application
US8055842B1 (en) 2008-09-26 2011-11-08 Nvidia Corporation Using raid with large sector size ATA mass storage devices
CN102567217A (en) * 2012-01-04 2012-07-11 北京航空航天大学 MIPS platform-oriented memory virtualization method
US8516190B1 (en) * 2008-09-26 2013-08-20 Nvidia Corporation Reporting logical sector alignment for ATA mass storage devices
US8677023B2 (en) 2004-07-22 2014-03-18 Oracle International Corporation High availability and I/O aggregation for server environments
US8756338B1 (en) * 2010-04-29 2014-06-17 Netapp, Inc. Storage server with embedded communication agent
WO2014100472A1 (en) * 2012-12-21 2014-06-26 Atlantis Computing, Inc. Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment
US8996800B2 (en) 2011-07-07 2015-03-31 Atlantis Computing, Inc. Deduplication of virtual machine files in a virtualized desktop environment
US9069472B2 (en) 2012-12-21 2015-06-30 Atlantis Computing, Inc. Method for dispersing and collating I/O's from virtual machines for parallelization of I/O access and redundancy of storing virtual machine data
US9083550B2 (en) 2012-10-29 2015-07-14 Oracle International Corporation Network virtualization over infiniband
US9183232B1 (en) 2013-03-15 2015-11-10 MiMedia, Inc. Systems and methods for organizing content using content organization rules and robust content information
US9250946B2 (en) 2013-02-12 2016-02-02 Atlantis Computing, Inc. Efficient provisioning of cloned virtual machine images using deduplication metadata
US9298758B1 (en) 2013-03-13 2016-03-29 MiMedia, Inc. Systems and methods providing media-to-media connection
US9331963B2 (en) 2010-09-24 2016-05-03 Oracle International Corporation Wireless host I/O using virtualized I/O controllers
US9372865B2 (en) 2013-02-12 2016-06-21 Atlantis Computing, Inc. Deduplication metadata access in deduplication file system
US9465521B1 (en) 2013-03-13 2016-10-11 MiMedia, Inc. Event based media interface
US9471590B2 (en) 2013-02-12 2016-10-18 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US9606748B2 (en) 2011-05-04 2017-03-28 International Business Machines Corporation Importing pre-existing data of a prior storage solution into a storage pool for use with a new storage solution
CN107122127A (en) * 2008-05-29 2017-09-01 威睿公司 Unloading is operated to the storage of storage hardware
US9813283B2 (en) 2005-08-09 2017-11-07 Oracle International Corporation Efficient data transfer between servers and remote peripherals
US20170357462A1 (en) * 2016-06-08 2017-12-14 Intel Corporation Method and apparatus for improving performance of sequential logging in a storage device
US9912713B1 (en) 2012-12-17 2018-03-06 MiMedia LLC Systems and methods for providing dynamically updated image sets for applications
US9973446B2 (en) 2009-08-20 2018-05-15 Oracle International Corporation Remote shared server peripherals over an Ethernet network for resource virtualization
US10013188B2 (en) 2014-12-17 2018-07-03 Fujitsu Limited Storage control device and method of copying data in failover in storage cluster
US20180217763A1 (en) * 2017-01-27 2018-08-02 Wyse Technology L.L.C. Attaching a windows file system to a remote non-windows disk stack
US10257301B1 (en) 2013-03-15 2019-04-09 MiMedia, Inc. Systems and methods providing a drive interface for content delivery
US10459882B2 (en) * 2008-09-05 2019-10-29 Commvault Systems, Inc. Image level copy or restore, such as image level restore without knowledge of data object metadata
US10754829B2 (en) 2017-04-04 2020-08-25 Oracle International Corporation Virtual configuration systems and methods
US10776329B2 (en) 2017-03-28 2020-09-15 Commvault Systems, Inc. Migration of a database management system to cloud storage
US10831778B2 (en) 2012-12-27 2020-11-10 Commvault Systems, Inc. Application of information management policies based on operation with a geographic entity
US10838821B2 (en) 2017-02-08 2020-11-17 Commvault Systems, Inc. Migrating content and metadata from a backup system
US10860401B2 (en) 2014-02-27 2020-12-08 Commvault Systems, Inc. Work flow management for an information management system
US11093336B2 (en) 2013-03-11 2021-08-17 Commvault Systems, Inc. Browsing data stored in a backup format
US11409608B2 (en) * 2020-12-29 2022-08-09 Advanced Micro Devices, Inc. Providing host-based error detection capabilities in a remote execution device
US20240004563A1 (en) * 2022-07-01 2024-01-04 Dell Products, L.P. Performance Efficient and Resilient Creation of Network Attached Storage Obects

Families Citing this family (254)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9603582D0 (en) 1996-02-20 1996-04-17 Hewlett Packard Co Method of accessing service resource items that are for use in a telecommunications system
US8032701B1 (en) * 2004-03-26 2011-10-04 Emc Corporation System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network
US7769722B1 (en) 2006-12-08 2010-08-03 Emc Corporation Replication and restoration of multiple data storage object types in a data network
US7669032B2 (en) * 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20050114595A1 (en) * 2003-11-26 2005-05-26 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US7461141B2 (en) * 2004-01-30 2008-12-02 Applied Micro Circuits Corporation System and method for performing driver configuration operations without a system reboot
US7409495B1 (en) * 2004-12-22 2008-08-05 Symantec Operating Corporation Method and apparatus for providing a temporal storage appliance with block virtualization in storage networks
US8161318B2 (en) * 2005-02-07 2012-04-17 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US8543542B2 (en) * 2005-02-07 2013-09-24 Mimosa Systems, Inc. Synthetic full copies of data and dynamic bulk-to-brick transformation
US7778976B2 (en) * 2005-02-07 2010-08-17 Mimosa, Inc. Multi-dimensional surrogates for data management
US8799206B2 (en) * 2005-02-07 2014-08-05 Mimosa Systems, Inc. Dynamic bulk-to-brick transformation of data
US7657780B2 (en) * 2005-02-07 2010-02-02 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US8271436B2 (en) * 2005-02-07 2012-09-18 Mimosa Systems, Inc. Retro-fitting synthetic full copies of data
US8918366B2 (en) * 2005-02-07 2014-12-23 Mimosa Systems, Inc. Synthetic full copies of data and dynamic bulk-to-brick transformation
US7870416B2 (en) * 2005-02-07 2011-01-11 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US8275749B2 (en) * 2005-02-07 2012-09-25 Mimosa Systems, Inc. Enterprise server version migration through identity preservation
US8812433B2 (en) * 2005-02-07 2014-08-19 Mimosa Systems, Inc. Dynamic bulk-to-brick transformation of data
US7917475B2 (en) * 2005-02-07 2011-03-29 Mimosa Systems, Inc. Enterprise server version migration through identity preservation
US7774514B2 (en) * 2005-05-16 2010-08-10 Infortrend Technology, Inc. Method of transmitting data between storage virtualization controllers and storage virtualization controller designed to implement the method
US7802000B1 (en) * 2005-08-01 2010-09-21 Vmware Virtual network in server farm
KR101340176B1 (en) * 2005-08-25 2013-12-10 Silicon Image, Inc. Smart scalable storage switch architecture
US20070083653A1 (en) * 2005-09-16 2007-04-12 Balasubramanian Chandrasekaran System and method for deploying information handling system images through fibre channel
JP2007094578A (en) * 2005-09-27 2007-04-12 Fujitsu Ltd Storage system and its component replacement processing method
US7765187B2 (en) * 2005-11-29 2010-07-27 Emc Corporation Replication of a consistency group of data storage objects from servers in a data network
US8572330B2 (en) * 2005-12-19 2013-10-29 Commvault Systems, Inc. Systems and methods for granular resource management in a storage network
JP4474356B2 (en) * 2005-12-27 2010-06-02 Fujitsu Limited Computer system and storage virtualization apparatus
JP4797636B2 (en) * 2006-01-16 2011-10-19 Hitachi, Ltd. Complex information platform apparatus and information processing apparatus configuration method thereof
US8533409B2 (en) * 2006-01-26 2013-09-10 Infortrend Technology, Inc. Method of managing data snapshot images in a storage system
US20070180287A1 (en) * 2006-01-31 2007-08-02 Dell Products L. P. System and method for managing node resets in a cluster
US20070180167A1 (en) * 2006-02-02 2007-08-02 Seagate Technology Llc Dynamic partition mapping in a hot-pluggable data storage apparatus
JP2007265001A (en) * 2006-03-28 2007-10-11 Hitachi Ltd Storage device
JP5037881B2 (en) 2006-04-18 2012-10-03 Hitachi, Ltd. Storage system and control method thereof
US20080140888A1 (en) * 2006-05-30 2008-06-12 Schneider Automation Inc. Virtual Placeholder Configuration for Distributed Input/Output Modules
US7904681B1 (en) * 2006-06-30 2011-03-08 Emc Corporation Methods and systems for migrating data with minimal disruption
US8416954B1 (en) 2008-09-30 2013-04-09 Emc Corporation Systems and methods for accessing storage or network based replicas of encrypted volumes with no additional key management
US8261068B1 (en) 2008-09-30 2012-09-04 Emc Corporation Systems and methods for selective encryption of operating system metadata for host-based encryption of data at rest on a logical unit
US7536503B1 (en) * 2006-06-30 2009-05-19 Emc Corporation Methods and systems for preserving disk geometry when migrating existing data volumes
US7610483B2 (en) * 2006-07-25 2009-10-27 Nvidia Corporation System and method to accelerate identification of hardware platform classes
US8909746B2 (en) * 2006-07-25 2014-12-09 Nvidia Corporation System and method for operating system installation on a diskless computing platform
US9003000B2 (en) * 2006-07-25 2015-04-07 Nvidia Corporation System and method for operating system installation on a diskless computing platform
US10013268B2 (en) * 2006-08-29 2018-07-03 Prometric Inc. Performance-based testing system and method employing emulation and virtualization
US8095715B1 (en) * 2006-09-05 2012-01-10 Nvidia Corporation SCSI HBA management using logical units
US7584378B2 (en) 2006-09-07 2009-09-01 International Business Machines Corporation Reconfigurable FC-AL storage loops in a data storage system
US8332613B1 (en) * 2006-09-29 2012-12-11 Emc Corporation Methods and systems for managing I/O requests to minimize disruption required for data encapsulation and de-encapsulation
JP2008090657A (en) * 2006-10-03 2008-04-17 Hitachi Ltd Storage system and control method
JP2008112399A (en) * 2006-10-31 2008-05-15 Fujitsu Ltd Storage virtualization switch and computer system
US7975135B2 (en) * 2006-11-23 2011-07-05 Dell Products L.P. Apparatus, method and product for selecting an iSCSI target for automated initiator booting
US9104599B2 (en) 2007-12-06 2015-08-11 Intelligent Intellectual Property Holdings 2 Llc Apparatus, system, and method for destaging cached data
US8706968B2 (en) 2007-12-06 2014-04-22 Fusion-Io, Inc. Apparatus, system, and method for redundant write caching
US8443134B2 (en) 2006-12-06 2013-05-14 Fusion-Io, Inc. Apparatus, system, and method for graceful cache device degradation
US8019940B2 (en) 2006-12-06 2011-09-13 Fusion-Io, Inc. Apparatus, system, and method for a front-end, distributed raid
US8489817B2 (en) 2007-12-06 2013-07-16 Fusion-Io, Inc. Apparatus, system, and method for caching data
US8706833B1 (en) * 2006-12-08 2014-04-22 Emc Corporation Data storage server having common replication architecture for multiple storage object types
JP4813385B2 (en) * 2007-01-29 2011-11-09 Hitachi, Ltd. Control device that controls multiple logical resources of a storage system
US7840790B1 (en) * 2007-02-16 2010-11-23 Vmware, Inc. Method and system for providing device drivers in a virtualization system
JP5104855B2 (en) * 2007-03-23 2012-12-19 Fujitsu Limited Load distribution program, load distribution method, and storage management apparatus
CN100547566C (en) * 2007-06-28 2009-10-07 Memoright Storage Technology (Shenzhen) Co., Ltd. Logical stripe control method for a multi-channel flash memory device
US8635429B1 (en) 2007-06-29 2014-01-21 Symantec Corporation Method and apparatus for mapping virtual drives
US7568051B1 (en) * 2007-06-29 2009-07-28 Emc Corporation Flexible UCB
US8738871B1 (en) * 2007-06-29 2014-05-27 Symantec Corporation Method and apparatus for mapping virtual drives
US8176405B2 (en) * 2007-09-24 2012-05-08 International Business Machines Corporation Data integrity validation in a computing environment
US20090089498A1 (en) * 2007-10-02 2009-04-02 Michael Cameron Hay Transparently migrating ongoing I/O to virtualized storage
US20090119452A1 (en) * 2007-11-02 2009-05-07 Crossroads Systems, Inc. Method and system for a sharable storage device
US9519540B2 (en) 2007-12-06 2016-12-13 Sandisk Technologies Llc Apparatus, system, and method for destaging cached data
US7836226B2 (en) 2007-12-06 2010-11-16 Fusion-Io, Inc. Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment
WO2009070898A1 (en) * 2007-12-07 2009-06-11 Scl Elements Inc. Auto-configuring multi-layer network
US8032689B2 (en) * 2007-12-18 2011-10-04 Hitachi Global Storage Technologies Netherlands, B.V. Techniques for data storage device virtualization
US8028062B1 (en) * 2007-12-26 2011-09-27 Emc Corporation Non-disruptive data mobility using virtual storage area networks with split-path virtualization
JP2009238114A (en) * 2008-03-28 2009-10-15 Hitachi Ltd Storage management method, storage management program, storage management apparatus, and storage management system
US7979260B1 (en) * 2008-03-31 2011-07-12 Symantec Corporation Simulating PXE booting for virtualized machines
GB2460841B (en) * 2008-06-10 2012-01-11 Virtensys Ltd Methods of providing access to I/O devices
US8073674B2 (en) * 2008-09-23 2011-12-06 Oracle America, Inc. SCSI device emulation in user space facilitating storage virtualization
US8510352B2 (en) * 2008-10-24 2013-08-13 Microsoft Corporation Virtualized boot block with discovery volume
US8166314B1 (en) 2008-12-30 2012-04-24 Emc Corporation Selective I/O to logical unit when encrypted, but key is not available or when encryption status is unknown
US8417969B2 (en) * 2009-02-19 2013-04-09 Microsoft Corporation Storage volume protection supporting legacy systems
US8073886B2 (en) * 2009-02-20 2011-12-06 Microsoft Corporation Non-privileged access to data independent of filesystem implementation
US8074038B2 (en) 2009-05-12 2011-12-06 Microsoft Corporation Converting luns into files or files into luns in real time
US9015198B2 (en) * 2009-05-26 2015-04-21 Pi-Coral, Inc. Method and apparatus for large scale data storage
US8238538B2 (en) 2009-05-28 2012-08-07 Comcast Cable Communications, Llc Stateful home phone service
US8495289B2 (en) * 2010-02-24 2013-07-23 Red Hat, Inc. Automatically detecting discrepancies between storage subsystem alignments
US8539124B1 (en) * 2010-03-31 2013-09-17 Emc Corporation Storage integration plugin for virtual servers
US8560825B2 (en) * 2010-06-30 2013-10-15 International Business Machines Corporation Streaming virtual machine boot services over a network
US8261003B2 (en) * 2010-08-11 2012-09-04 Lsi Corporation Apparatus and methods for managing expanded capacity of virtual volumes in a storage system
JP2012058912A (en) * 2010-09-07 2012-03-22 Nec Corp Logical unit number management device, logical unit number management method and program therefor
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
CN101986655A (en) * 2010-10-21 2011-03-16 Inspur (Beijing) Electronic Information Industry Co., Ltd. Storage network and data reading and writing method thereof
US8458145B2 (en) * 2011-01-20 2013-06-04 Infinidat Ltd. System and method of storage optimization
US8966184B2 (en) 2011-01-31 2015-02-24 Intelligent Intellectual Property Holdings 2, LLC. Apparatus, system, and method for managing eviction of data
US9141527B2 (en) 2011-02-25 2015-09-22 Intelligent Intellectual Property Holdings 2 Llc Managing cache pools
US8838931B1 (en) * 2012-03-30 2014-09-16 Emc Corporation Techniques for automated discovery and performing storage optimizations on a component external to a data storage system
US9152404B2 (en) 2011-07-13 2015-10-06 Z124 Remote device filter
US20130268559A1 (en) 2011-07-13 2013-10-10 Z124 Virtual file system remote search
US8909891B2 (en) * 2011-07-21 2014-12-09 International Business Machines Corporation Virtual logical volume for overflow storage of special data sets
US8589640B2 (en) 2011-10-14 2013-11-19 Pure Storage, Inc. Method for maintaining multiple fingerprint tables in a deduplicating storage system
US20130268703A1 (en) * 2011-09-27 2013-10-10 Z124 Rules based hierarchical data virtualization
US9767032B2 (en) 2012-01-12 2017-09-19 Sandisk Technologies Llc Systems and methods for cache endurance
US9251086B2 (en) 2012-01-24 2016-02-02 SanDisk Technologies, Inc. Apparatus, system, and method for managing a cache
US9158568B2 (en) 2012-01-30 2015-10-13 Hewlett-Packard Development Company, L.P. Input/output operations at a virtual block device of a storage server
US9626284B2 (en) 2012-02-09 2017-04-18 Vmware, Inc. Systems and methods to test programs
US9946559B1 (en) * 2012-02-13 2018-04-17 Veritas Technologies Llc Techniques for managing virtual machine backups
US9098325B2 (en) 2012-02-28 2015-08-04 Hewlett-Packard Development Company, L.P. Persistent volume at an offset of a virtual block device of a storage server
US10831728B2 (en) 2012-05-29 2020-11-10 International Business Machines Corporation Application-controlled sub-LUN level data migration
US10817202B2 (en) 2012-05-29 2020-10-27 International Business Machines Corporation Application-controlled sub-LUN level data migration
US10831727B2 (en) 2012-05-29 2020-11-10 International Business Machines Corporation Application-controlled sub-LUN level data migration
US8856484B2 (en) * 2012-08-14 2014-10-07 Infinidat Ltd. Mass storage system and methods of controlling resources thereof
US9116623B2 (en) * 2012-08-14 2015-08-25 International Business Machines Corporation Optimizing storage system behavior in virtualized cloud computing environments by tagging input/output operation data to indicate storage policy
US9454670B2 (en) 2012-12-03 2016-09-27 International Business Machines Corporation Hybrid file systems
US20140164581A1 (en) * 2012-12-10 2014-06-12 Transparent Io, Inc. Dispersed Storage System with Firewall
US9280359B2 (en) * 2012-12-11 2016-03-08 Cisco Technology, Inc. System and method for selecting a least cost path for performing a network boot in a data center network environment
US10445229B1 (en) * 2013-01-28 2019-10-15 Radian Memory Systems, Inc. Memory controller with at least one address segment defined for which data is striped across flash memory dies, with a common address offset being used to obtain physical addresses for the data in each of the dies
US20140359612A1 (en) * 2013-06-03 2014-12-04 Microsoft Corporation Sharing a Virtual Hard Disk Across Multiple Virtual Machines
US9176890B2 (en) 2013-06-07 2015-11-03 Globalfoundries Inc. Non-disruptive modification of a device mapper stack
US9871889B1 (en) * 2014-03-18 2018-01-16 EMC IP Holding Company LLC Techniques for automated capture of configuration data for simulation
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US9218244B1 (en) 2014-06-04 2015-12-22 Pure Storage, Inc. Rebuilding data across storage nodes
US11068363B1 (en) 2014-06-04 2021-07-20 Pure Storage, Inc. Proactively rebuilding data in a storage cluster
US11652884B2 (en) 2014-06-04 2023-05-16 Pure Storage, Inc. Customized hash algorithms
US8850108B1 (en) 2014-06-04 2014-09-30 Pure Storage, Inc. Storage cluster
US10574754B1 (en) 2014-06-04 2020-02-25 Pure Storage, Inc. Multi-chassis array with multi-level load balancing
US11960371B2 (en) 2014-06-04 2024-04-16 Pure Storage, Inc. Message persistence in a zoned system
US9836234B2 (en) 2014-06-04 2017-12-05 Pure Storage, Inc. Storage cluster
US9003144B1 (en) 2014-06-04 2015-04-07 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US9367243B1 (en) 2014-06-04 2016-06-14 Pure Storage, Inc. Scalable non-uniform storage sizes
US9213485B1 (en) 2014-06-04 2015-12-15 Pure Storage, Inc. Storage system architecture
US11604598B2 (en) 2014-07-02 2023-03-14 Pure Storage, Inc. Storage cluster with zoned drives
US11886308B2 (en) 2014-07-02 2024-01-30 Pure Storage, Inc. Dual class of service for unified file and object messaging
US9021297B1 (en) 2014-07-02 2015-04-28 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
US8868825B1 (en) 2014-07-02 2014-10-21 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US9836245B2 (en) 2014-07-02 2017-12-05 Pure Storage, Inc. Non-volatile RAM and flash memory in a non-volatile solid-state storage
US10114757B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US10853311B1 (en) 2014-07-03 2020-12-01 Pure Storage, Inc. Administration through files in a storage system
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US9811677B2 (en) 2014-07-03 2017-11-07 Pure Storage, Inc. Secure data replication in a storage grid
US8874836B1 (en) 2014-07-03 2014-10-28 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US9082512B1 (en) 2014-08-07 2015-07-14 Pure Storage, Inc. Die-level monitoring in a storage cluster
US9558069B2 (en) 2014-08-07 2017-01-31 Pure Storage, Inc. Failure mapping in a storage array
US9483346B2 (en) 2014-08-07 2016-11-01 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US10983859B2 (en) 2014-08-07 2021-04-20 Pure Storage, Inc. Adjustable error correction based on memory health in a storage unit
US9766972B2 (en) 2014-08-07 2017-09-19 Pure Storage, Inc. Masking defective bits in a storage array
US9495255B2 (en) 2014-08-07 2016-11-15 Pure Storage, Inc. Error recovery in a storage cluster
US10079711B1 (en) 2014-08-20 2018-09-18 Pure Storage, Inc. Virtual file server with preserved MAC address
US10001927B1 (en) * 2014-09-30 2018-06-19 EMC IP Holding Company LLC Techniques for optimizing I/O operations
US9389789B2 (en) 2014-12-15 2016-07-12 International Business Machines Corporation Migration of executing applications and associated stored data
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US11294893B2 (en) 2015-03-20 2022-04-05 Pure Storage, Inc. Aggregation of queries
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10178169B2 (en) * 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
US10846275B2 (en) 2015-06-26 2020-11-24 Pure Storage, Inc. Key management in a storage device
CN107710160B (en) * 2015-07-08 2021-06-22 Hitachi, Ltd. Computer and storage area management method
US10983732B2 (en) 2015-07-13 2021-04-20 Pure Storage, Inc. Method and system for accessing a file
US11232079B2 (en) 2015-07-16 2022-01-25 Pure Storage, Inc. Efficient distribution of large directories
US10579275B2 (en) * 2015-07-27 2020-03-03 Hitachi, Ltd. Storage system and storage control method
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US11341136B2 (en) 2015-09-04 2022-05-24 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
US10853266B2 (en) 2015-09-30 2020-12-01 Pure Storage, Inc. Hardware assisted data lookup methods
US10762069B2 (en) 2015-09-30 2020-09-01 Pure Storage, Inc. Mechanism for a system where data and metadata are located closely together
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US9965184B2 (en) 2015-10-19 2018-05-08 International Business Machines Corporation Multiple storage subpools of a virtual storage pool in a multiple processor environment
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution
US10261690B1 (en) 2016-05-03 2019-04-16 Pure Storage, Inc. Systems and methods for operating a storage system
EP3308316B1 (en) * 2016-07-05 2020-09-02 Viirii, LLC Operating system independent, secure data storage subsystem
US11861188B2 (en) 2016-07-19 2024-01-02 Pure Storage, Inc. System having modular accelerators
US10768819B2 (en) 2016-07-22 2020-09-08 Pure Storage, Inc. Hardware support for non-disruptive upgrades
US11449232B1 (en) 2016-07-22 2022-09-20 Pure Storage, Inc. Optimal scheduling of flash operations
US9672905B1 (en) 2016-07-22 2017-06-06 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US10216420B1 (en) 2016-07-24 2019-02-26 Pure Storage, Inc. Calibration of flash channels in SSD
US11080155B2 (en) 2016-07-24 2021-08-03 Pure Storage, Inc. Identifying error types among flash memory
US11604690B2 (en) 2016-07-24 2023-03-14 Pure Storage, Inc. Online failure span determination
US11734169B2 (en) 2016-07-26 2023-08-22 Pure Storage, Inc. Optimizing spool and memory space management
US10366004B2 (en) 2016-07-26 2019-07-30 Pure Storage, Inc. Storage system with elective garbage collection to reduce flash contention
US11797212B2 (en) 2016-07-26 2023-10-24 Pure Storage, Inc. Data migration for zoned drives
US11886334B2 (en) 2016-07-26 2024-01-30 Pure Storage, Inc. Optimizing spool and memory space management
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US11422719B2 (en) 2016-09-15 2022-08-23 Pure Storage, Inc. Distributed file deletion and truncation
US10756816B1 (en) 2016-10-04 2020-08-25 Pure Storage, Inc. Optimized fibre channel and non-volatile memory express access
US9747039B1 (en) 2016-10-04 2017-08-29 Pure Storage, Inc. Reservations over multiple paths on NVMe over fabrics
US11550481B2 (en) 2016-12-19 2023-01-10 Pure Storage, Inc. Efficiently writing data in a zoned drive storage system
US9747158B1 (en) 2017-01-13 2017-08-29 Pure Storage, Inc. Intelligent refresh of 3D NAND
US11955187B2 (en) 2017-01-13 2024-04-09 Pure Storage, Inc. Refresh of differing capacity NAND
US10979223B2 (en) 2017-01-31 2021-04-13 Pure Storage, Inc. Separate encryption for a solid-state drive
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US11016667B1 (en) 2017-04-05 2021-05-25 Pure Storage, Inc. Efficient mapping for LUNs in storage memory with holes in address space
US10141050B1 (en) 2017-04-27 2018-11-27 Pure Storage, Inc. Page writes for triple level cell flash memory
US10516645B1 (en) 2017-04-27 2019-12-24 Pure Storage, Inc. Address resolution broadcasting in a networked device
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US10524022B2 (en) * 2017-05-02 2019-12-31 Seagate Technology Llc Data storage system with adaptive data path routing
US11467913B1 (en) 2017-06-07 2022-10-11 Pure Storage, Inc. Snapshots with crash consistency in a storage system
US11947814B2 (en) 2017-06-11 2024-04-02 Pure Storage, Inc. Optimizing resiliency group formation stability
US11138103B1 (en) 2017-06-11 2021-10-05 Pure Storage, Inc. Resiliency groups
US11782625B2 (en) 2017-06-11 2023-10-10 Pure Storage, Inc. Heterogeneity supportive resiliency groups
US10425473B1 (en) 2017-07-03 2019-09-24 Pure Storage, Inc. Stateful connection reset in a storage cluster with a stateless load balancer
US10402266B1 (en) 2017-07-31 2019-09-03 Pure Storage, Inc. Redundant array of independent disks in a direct-mapped flash storage system
US10877827B2 (en) 2017-09-15 2020-12-29 Pure Storage, Inc. Read voltage optimization
US10210926B1 (en) 2017-09-15 2019-02-19 Pure Storage, Inc. Tracking of optimum read voltage thresholds in nand flash devices
US10545687B1 (en) 2017-10-31 2020-01-28 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US11024390B1 (en) 2017-10-31 2021-06-01 Pure Storage, Inc. Overlapping RAID groups
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US10515701B1 (en) 2017-10-31 2019-12-24 Pure Storage, Inc. Overlapping raid groups
US10496330B1 (en) 2017-10-31 2019-12-03 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10990566B1 (en) 2017-11-20 2021-04-27 Pure Storage, Inc. Persistent file locks in a storage system
US10929053B2 (en) 2017-12-08 2021-02-23 Pure Storage, Inc. Safe destructive actions on drives
US10719265B1 (en) 2017-12-08 2020-07-21 Pure Storage, Inc. Centralized, quorum-aware handling of device reservation requests in a storage system
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
US10976948B1 (en) 2018-01-31 2021-04-13 Pure Storage, Inc. Cluster expansion mechanism
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US10733053B1 (en) 2018-01-31 2020-08-04 Pure Storage, Inc. Disaster recovery for high-bandwidth distributed archives
US11036596B1 (en) 2018-02-18 2021-06-15 Pure Storage, Inc. System for delaying acknowledgements on open NAND locations until durability has been confirmed
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US11385792B2 (en) 2018-04-27 2022-07-12 Pure Storage, Inc. High availability controller pair transitioning
US10931450B1 (en) 2018-04-27 2021-02-23 Pure Storage, Inc. Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers
US10853146B1 (en) 2018-04-27 2020-12-01 Pure Storage, Inc. Efficient data forwarding in a networked device
US20190362075A1 (en) * 2018-05-22 2019-11-28 Fortinet, Inc. Preventing users from accessing infected files by using multiple file storage repositories and a secure data transfer agent logically interposed therebetween
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US11438279B2 (en) 2018-07-23 2022-09-06 Pure Storage, Inc. Non-disruptive conversion of a clustered service from single-chassis to multi-chassis
US11868309B2 (en) 2018-09-06 2024-01-09 Pure Storage, Inc. Queue management for data relocation
US11354058B2 (en) 2018-09-06 2022-06-07 Pure Storage, Inc. Local relocation of data stored at a storage device of a storage system
US11500570B2 (en) 2018-09-06 2022-11-15 Pure Storage, Inc. Efficient relocation of data utilizing different programming modes
US11520514B2 (en) 2018-09-06 2022-12-06 Pure Storage, Inc. Optimized relocation of data based on data characteristics
US11036856B2 (en) 2018-09-16 2021-06-15 Fortinet, Inc. Natively mounting storage for inspection and sandboxing in the cloud
US10454498B1 (en) 2018-10-18 2019-10-22 Pure Storage, Inc. Fully pipelined hardware engine design for fast and efficient inline lossless data compression
US10976947B2 (en) 2018-10-26 2021-04-13 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US11714572B2 (en) 2019-06-19 2023-08-01 Pure Storage, Inc. Optimized data resiliency in a modular storage system
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US11893126B2 (en) 2019-10-14 2024-02-06 Pure Storage, Inc. Data deletion for a multi-tenant environment
CN112748848A (en) * 2019-10-29 2021-05-04 EMC IP Holding Company LLC Method, apparatus and computer program product for storage management
US11847331B2 (en) 2019-12-12 2023-12-19 Pure Storage, Inc. Budgeting open blocks of a storage unit based on power loss prevention
US11704192B2 (en) 2019-12-12 2023-07-18 Pure Storage, Inc. Budgeting open blocks based on power loss protection
US11416144B2 (en) 2019-12-12 2022-08-16 Pure Storage, Inc. Dynamic use of segment or zone power loss protection in a flash device
US10990537B1 (en) 2020-01-07 2021-04-27 International Business Machines Corporation Logical to virtual and virtual to physical translation in storage class memory
US11188432B2 (en) 2020-02-28 2021-11-30 Pure Storage, Inc. Data resiliency by partially deallocating data blocks of a storage device
US11507297B2 (en) 2020-04-15 2022-11-22 Pure Storage, Inc. Efficient management of optimal read levels for flash storage systems
US11256587B2 (en) 2020-04-17 2022-02-22 Pure Storage, Inc. Intelligent access to a storage device
US11416338B2 (en) 2020-04-24 2022-08-16 Pure Storage, Inc. Resiliency scheme to enhance storage performance
US11474986B2 (en) 2020-04-24 2022-10-18 Pure Storage, Inc. Utilizing machine learning to streamline telemetry processing of storage media
US11768763B2 (en) 2020-07-08 2023-09-26 Pure Storage, Inc. Flash secure erase
US11681448B2 (en) 2020-09-08 2023-06-20 Pure Storage, Inc. Multiple device IDs in a multi-fabric module storage system
US11513974B2 (en) 2020-09-08 2022-11-29 Pure Storage, Inc. Using nonce to control erasure of data blocks of a multi-controller storage system
US11487455B2 (en) 2020-12-17 2022-11-01 Pure Storage, Inc. Dynamic block allocation to optimize storage system performance
US11847324B2 (en) 2020-12-31 2023-12-19 Pure Storage, Inc. Optimizing resiliency groups for data regions of a storage system
US11614880B2 (en) 2020-12-31 2023-03-28 Pure Storage, Inc. Storage system with selectable write paths
EP4281879A1 (en) 2021-01-25 2023-11-29 Volumez Technologies Ltd. Remote online volume cloning method and system
US11630593B2 (en) 2021-03-12 2023-04-18 Pure Storage, Inc. Inline flash memory qualification in a storage system
US11507597B2 (en) 2021-03-31 2022-11-22 Pure Storage, Inc. Data replication to meet a recovery point objective
US11832410B2 (en) 2021-09-14 2023-11-28 Pure Storage, Inc. Mechanical energy absorbing bracket apparatus
US11816363B2 (en) * 2021-11-04 2023-11-14 International Business Machines Corporation File based virtual disk management

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6044367A (en) * 1996-08-02 2000-03-28 Hewlett-Packard Company Distributed I/O store
US6240416B1 (en) * 1998-09-11 2001-05-29 Ambeo, Inc. Distributed metadata system and method
US6311213B2 (en) * 1998-10-27 2001-10-30 International Business Machines Corporation System and method for server-to-server data storage in a network environment
US20020103889A1 (en) * 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
US20020156984A1 (en) * 2001-02-20 2002-10-24 Storageapps Inc. System and method for accessing a storage area network as network attached storage
US20030177330A1 (en) * 2002-03-13 2003-09-18 Hideomi Idei Computer system
US20040030822A1 (en) * 2002-08-09 2004-02-12 Vijayan Rajan Storage virtualization by layering virtual disk objects on a file system
US6792557B1 (en) * 1999-10-22 2004-09-14 Hitachi, Ltd. Storage area network system
US6889309B1 (en) * 2002-04-15 2005-05-03 Emc Corporation Method and apparatus for implementing an enterprise virtual storage system
US7020760B2 (en) * 2002-12-16 2006-03-28 International Business Machines Corporation Hybrid logical block virtualization system for a storage area network

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5193184A (en) * 1990-06-18 1993-03-09 Storage Technology Corporation Deleted data file space release system for a dynamically mapped virtual data storage subsystem
US5829053A (en) * 1996-05-10 1998-10-27 Apple Computer, Inc. Block storage memory management system and method utilizing independent partition managers and device drivers
US6493811B1 (en) * 1998-01-26 2002-12-10 Computer Associates Think, Inc. Intelligent controller accessed through addressable virtual space
US6434637B1 (en) * 1998-12-31 2002-08-13 Emc Corporation Method and apparatus for balancing workloads among paths in a multi-path computer system based on the state of previous I/O operations
US6347371B1 (en) * 1999-01-25 2002-02-12 Dell USA, L.P. System and method for initiating operation of a computer system
US6370605B1 (en) 1999-03-04 2002-04-09 Sun Microsystems, Inc. Switch based scalable performance storage architecture
US6467023B1 (en) 1999-03-23 2002-10-15 Lsi Logic Corporation Method for logical unit creation with immediate availability in a raid storage environment
US6779016B1 (en) * 1999-08-23 2004-08-17 Terraspring, Inc. Extensible computing system
JP4651230B2 (en) * 2001-07-13 2011-03-16 Hitachi, Ltd. Storage system and access control method to logical unit
US6658563B1 (en) * 2000-05-18 2003-12-02 International Business Machines Corporation Virtual floppy diskette image within a primary partition in a hard disk drive and method for booting system with virtual diskette
US6532527B2 (en) * 2000-06-19 2003-03-11 Storage Technology Corporation Using current recovery mechanisms to implement dynamic mapping operations
US6912537B2 (en) * 2000-06-20 2005-06-28 Storage Technology Corporation Dynamically changeable virtual mapping scheme
AU2002230585A1 (en) * 2000-11-02 2002-05-15 Pirus Networks Switching system
US6871245B2 (en) * 2000-11-29 2005-03-22 Radiant Data Corporation File system translators and methods for implementing the same
JP4187403B2 (en) * 2000-12-20 2008-11-26 International Business Machines Corporation Data recording system, data recording method, and network system
WO2002065309A1 (en) * 2001-02-13 2002-08-22 Candera, Inc. System and method for policy based storage provisioning and management
JP4105398B2 (en) * 2001-02-28 2008-06-25 Hitachi, Ltd. Information processing system
US6779063B2 (en) * 2001-04-09 2004-08-17 Hitachi, Ltd. Direct access storage system having plural interfaces which permit receipt of block and file I/O requests
US20040015864A1 (en) * 2001-06-05 2004-01-22 Boucher Michael L. Method and system for testing memory operations of computer program
US6782401B2 (en) * 2001-07-02 2004-08-24 Sepaton, Inc. Method and apparatus for implementing a reliable open file system
US7433948B2 (en) * 2002-01-23 2008-10-07 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network
US7548975B2 (en) * 2002-01-09 2009-06-16 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network through a virtual enclosure
US6934799B2 (en) * 2002-01-18 2005-08-23 International Business Machines Corporation Virtualization of iSCSI storage
US6954852B2 (en) * 2002-04-18 2005-10-11 Ardence, Inc. System for and method of network booting of an operating system to a client computer using hibernation
US7188194B1 (en) * 2002-04-22 2007-03-06 Cisco Technology, Inc. Session-based target/LUN mapping for a storage area network and associated method
US6973587B1 (en) * 2002-05-03 2005-12-06 American Megatrends, Inc. Systems and methods for out-of-band booting of a computer
US7100089B1 (en) 2002-09-06 2006-08-29 3Pardata, Inc. Determining differences between snapshots
US7263593B2 (en) * 2002-11-25 2007-08-28 Hitachi, Ltd. Virtualization controller and data transfer control method
US7797392B2 (en) * 2002-11-26 2010-09-14 International Business Machines Corporation System and method for efficiently supporting multiple native network protocol implementations in a single system
US6816917B2 (en) 2003-01-15 2004-11-09 Hewlett-Packard Development Company, L.P. Storage system with LUN virtualization
US7606239B2 (en) * 2003-01-31 2009-10-20 Brocade Communications Systems, Inc. Method and apparatus for providing virtual ports with attached virtual devices in a storage area network
US6990573B2 (en) * 2003-02-05 2006-01-24 Dell Products L.P. System and method for sharing storage to boot multiple servers
US7984108B2 (en) * 2003-10-08 2011-07-19 Unisys Corporation Computer system para-virtualization using a hypervisor that is implemented in a partition of the host system
US7669032B2 (en) * 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20050114595A1 (en) 2003-11-26 2005-05-26 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US20050125538A1 (en) * 2003-12-03 2005-06-09 Dell Products L.P. Assigning logical storage units to host computers
US8190714B2 (en) * 2004-04-15 2012-05-29 Raytheon Company System and method for computer cluster virtualization using dynamic boot images and virtual disk

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6044367A (en) * 1996-08-02 2000-03-28 Hewlett-Packard Company Distributed I/O store
US6240416B1 (en) * 1998-09-11 2001-05-29 Ambeo, Inc. Distributed metadata system and method
US6311213B2 (en) * 1998-10-27 2001-10-30 International Business Machines Corporation System and method for server-to-server data storage in a network environment
US6792557B1 (en) * 1999-10-22 2004-09-14 Hitachi, Ltd. Storage area network system
US20020103889A1 (en) * 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
US20020156984A1 (en) * 2001-02-20 2002-10-24 Storageapps Inc. System and method for accessing a storage area network as network attached storage
US6606690B2 (en) * 2001-02-20 2003-08-12 Hewlett-Packard Development Company, L.P. System and method for accessing a storage area network as network attached storage
US20030177330A1 (en) * 2002-03-13 2003-09-18 Hideomi Idei Computer system
US6889309B1 (en) * 2002-04-15 2005-05-03 Emc Corporation Method and apparatus for implementing an enterprise virtual storage system
US20040030822A1 (en) * 2002-08-09 2004-02-12 Vijayan Rajan Storage virtualization by layering virtual disk objects on a file system
US7020760B2 (en) * 2002-12-16 2006-03-28 International Business Machines Corporation Hybrid logical block virtualization system for a storage area network

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060123062A1 (en) * 2001-12-19 2006-06-08 Emc Corporation Virtual file system
US20050216680A1 (en) * 2004-03-25 2005-09-29 Itzhak Levy Device to allow multiple data processing channels to share a single disk drive
US20050262150A1 (en) * 2004-05-21 2005-11-24 Computer Associates Think, Inc. Object-based storage
US8677023B2 (en) 2004-07-22 2014-03-18 Oracle International Corporation High availability and I/O aggregation for server environments
US9264384B1 (en) * 2004-07-22 2016-02-16 Oracle International Corporation Resource virtualization mechanism including virtual host bus adapters
US7493462B2 (en) * 2005-01-20 2009-02-17 International Business Machines Corporation Apparatus, system, and method for validating logical volume configuration
US20060161754A1 (en) * 2005-01-20 2006-07-20 Dewey Douglas W Apparatus, system, and method for validating logical volume configuration
US20060179343A1 (en) * 2005-02-08 2006-08-10 Hitachi, Ltd. Method and apparatus for replicating volumes between heterogeneous storage systems
US7519851B2 (en) * 2005-02-08 2009-04-14 Hitachi, Ltd. Apparatus for replicating volumes between heterogeneous storage systems
US7945657B1 (en) * 2005-03-30 2011-05-17 Oracle America, Inc. System and method for emulating input/output performance of an application
US20060282438A1 (en) * 2005-06-10 2006-12-14 Microsoft Corporation Performing a deletion of a node in a tree data storage structure
US7630998B2 (en) * 2005-06-10 2009-12-08 Microsoft Corporation Performing a deletion of a node in a tree data storage structure
US20070038749A1 (en) * 2005-07-29 2007-02-15 Broadcom Corporation Combined local and network storage interface
US20070028138A1 (en) * 2005-07-29 2007-02-01 Broadcom Corporation Combined local and network storage interface
US8433770B2 (en) 2005-07-29 2013-04-30 Broadcom Corporation Combined local and network storage interface
US9813283B2 (en) 2005-08-09 2017-11-07 Oracle International Corporation Efficient data transfer between servers and remote peripherals
US20070226270A1 (en) * 2006-03-23 2007-09-27 Network Appliance, Inc. Method and apparatus for concurrent read-only access to filesystem
US7904492B2 (en) * 2006-03-23 2011-03-08 Network Appliance, Inc. Method and apparatus for concurrent read-only access to filesystem
US20070277015A1 (en) * 2006-05-23 2007-11-29 Matthew Joseph Kalos Apparatus, system, and method for presenting a storage volume as a virtual volume
US7617373B2 (en) 2006-05-23 2009-11-10 International Business Machines Corporation Apparatus, system, and method for presenting a storage volume as a virtual volume
US7761738B2 (en) 2006-09-07 2010-07-20 International Business Machines Corporation Establishing communications across virtual enclosure boundaries
US20090182961A1 (en) * 2008-01-11 2009-07-16 International Business Machines Corporation Methods, apparatuses, and computer program products for protecting pre-staged provisioned data in a storage system
US8055867B2 (en) 2008-01-11 2011-11-08 International Business Machines Corporation Methods, apparatuses, and computer program products for protecting pre-staged provisioned data in a storage system
US8074020B2 (en) 2008-02-13 2011-12-06 International Business Machines Corporation On-line volume coalesce operation to enable on-line storage subsystem volume consolidation
US20090204759A1 (en) * 2008-02-13 2009-08-13 International Business Machines Corporation On-line volume coalesce operation to enable on-line storage subsystem volume consolidation
US20090216944A1 (en) * 2008-02-22 2009-08-27 International Business Machines Corporation Efficient validation of writes for protection against dropped writes
US20110055476A1 (en) * 2008-03-27 2011-03-03 Christ Bryan E RAID Array Access By A RAID Array-unaware Operating System
CN107122127A (en) * 2008-05-29 2017-09-01 VMware, Inc. Offloading storage operations to storage hardware
US20090307716A1 (en) * 2008-06-09 2009-12-10 David Nevarez Block storage interface for virtual memory
US8893160B2 (en) 2008-06-09 2014-11-18 International Business Machines Corporation Block storage interface for virtual memory
US10459882B2 (en) * 2008-09-05 2019-10-29 Commvault Systems, Inc. Image level copy or restore, such as image level restore without knowledge of data object metadata
US11392542B2 (en) * 2008-09-05 2022-07-19 Commvault Systems, Inc. Image level copy or restore, such as image level restore without knowledge of data object metadata
US8055842B1 (en) 2008-09-26 2011-11-08 Nvidia Corporation Using raid with large sector size ATA mass storage devices
US8516190B1 (en) * 2008-09-26 2013-08-20 Nvidia Corporation Reporting logical sector alignment for ATA mass storage devices
US20100082715A1 (en) * 2008-09-30 2010-04-01 Karl Dohm Reduced-Resource Block Thin Provisioning
US10880235B2 (en) 2009-08-20 2020-12-29 Oracle International Corporation Remote shared server peripherals over an ethernet network for resource virtualization
US9973446B2 (en) 2009-08-20 2018-05-15 Oracle International Corporation Remote shared server peripherals over an Ethernet network for resource virtualization
US8756338B1 (en) * 2010-04-29 2014-06-17 Netapp, Inc. Storage server with embedded communication agent
US9331963B2 (en) 2010-09-24 2016-05-03 Oracle International Corporation Wireless host I/O using virtualized I/O controllers
US9606748B2 (en) 2011-05-04 2017-03-28 International Business Machines Corporation Importing pre-existing data of a prior storage solution into a storage pool for use with a new storage solution
US9606747B2 (en) 2011-05-04 2017-03-28 International Business Machines Corporation Importing pre-existing data of a prior storage solution into a storage pool for use with a new storage solution
US8996800B2 (en) 2011-07-07 2015-03-31 Atlantis Computing, Inc. Deduplication of virtual machine files in a virtualized desktop environment
CN102567217A (en) * 2012-01-04 2012-07-11 Beihang University MIPS platform-oriented memory virtualization method
US9083550B2 (en) 2012-10-29 2015-07-14 Oracle International Corporation Network virtualization over infiniband
US9912713B1 (en) 2012-12-17 2018-03-06 MiMedia LLC Systems and methods for providing dynamically updated image sets for applications
US9277010B2 (en) 2012-12-21 2016-03-01 Atlantis Computing, Inc. Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment
US9069472B2 (en) 2012-12-21 2015-06-30 Atlantis Computing, Inc. Method for dispersing and collating I/O's from virtual machines for parallelization of I/O access and redundancy of storing virtual machine data
WO2014100472A1 (en) * 2012-12-21 2014-06-26 Atlantis Computing, Inc. Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment
US11409765B2 (en) 2012-12-27 2022-08-09 Commvault Systems, Inc. Application of information management policies based on operation with a geographic entity
US10831778B2 (en) 2012-12-27 2020-11-10 Commvault Systems, Inc. Application of information management policies based on operation with a geographic entity
US9250946B2 (en) 2013-02-12 2016-02-02 Atlantis Computing, Inc. Efficient provisioning of cloned virtual machine images using deduplication metadata
US9471590B2 (en) 2013-02-12 2016-10-18 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US9372865B2 (en) 2013-02-12 2016-06-21 Atlantis Computing, Inc. Deduplication metadata access in deduplication file system
US11093336B2 (en) 2013-03-11 2021-08-17 Commvault Systems, Inc. Browsing data stored in a backup format
US9298758B1 (en) 2013-03-13 2016-03-29 MiMedia, Inc. Systems and methods providing media-to-media connection
US9465521B1 (en) 2013-03-13 2016-10-11 MiMedia, Inc. Event based media interface
US10257301B1 (en) 2013-03-15 2019-04-09 MiMedia, Inc. Systems and methods providing a drive interface for content delivery
US9183232B1 (en) 2013-03-15 2015-11-10 MiMedia, Inc. Systems and methods for organizing content using content organization rules and robust content information
US10860401B2 (en) 2014-02-27 2020-12-08 Commvault Systems, Inc. Work flow management for an information management system
US10013188B2 (en) 2014-12-17 2018-07-03 Fujitsu Limited Storage control device and method of copying data in failover in storage cluster
US20170357462A1 (en) * 2016-06-08 2017-12-14 Intel Corporation Method and apparatus for improving performance of sequential logging in a storage device
US10296250B2 (en) * 2016-06-08 2019-05-21 Intel Corporation Method and apparatus for improving performance of sequential logging in a storage device
US20180217763A1 (en) * 2017-01-27 2018-08-02 Wyse Technology L.L.C. Attaching a Windows file system to a remote non-Windows disk stack
US10620835B2 (en) * 2017-01-27 2020-04-14 Wyse Technology L.L.C. Attaching a Windows file system to a remote non-Windows disk stack
US10838821B2 (en) 2017-02-08 2020-11-17 Commvault Systems, Inc. Migrating content and metadata from a backup system
US11467914B2 (en) 2017-02-08 2022-10-11 Commvault Systems, Inc. Migrating content and metadata from a backup system
US10776329B2 (en) 2017-03-28 2020-09-15 Commvault Systems, Inc. Migration of a database management system to cloud storage
US11520755B2 (en) 2017-03-28 2022-12-06 Commvault Systems, Inc. Migration of a database management system to cloud storage
US11354280B2 (en) 2017-04-04 2022-06-07 Oracle International Corporation Virtual configuration systems and methods
US10754829B2 (en) 2017-04-04 2020-08-25 Oracle International Corporation Virtual configuration systems and methods
US11409608B2 (en) * 2020-12-29 2022-08-09 Advanced Micro Devices, Inc. Providing host-based error detection capabilities in a remote execution device
US20240004563A1 (en) * 2022-07-01 2024-01-04 Dell Products, L.P. Performance Efficient and Resilient Creation of Network Attached Storage Objects
US11907551B2 (en) * 2022-07-01 2024-02-20 Dell Products, L.P. Performance efficient and resilient creation of network attached storage objects

Also Published As

Publication number Publication date
US20050235132A1 (en) 2005-10-20
US20050228937A1 (en) 2005-10-13
JP4750040B2 (en) 2011-08-17
US20050228950A1 (en) 2005-10-13
CN1906569A (en) 2007-01-31
JP2007516523A (en) 2007-06-21
US7689803B2 (en) 2010-03-30
WO2005055043A1 (en) 2005-06-16
EP1687706A1 (en) 2006-08-09
CN100552611C (en) 2009-10-21

Similar Documents

Publication Publication Date Title
US20050114595A1 (en) System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US7424592B1 (en) System and method for implementing volume sets in a storage system
US7873700B2 (en) Multi-protocol storage appliance that provides integrated support for file and block access protocols
US7930473B2 (en) System and method for supporting file and block access to storage object on a storage appliance
US7743035B2 (en) System and method for restoring a virtual disk from a snapshot
US9563469B2 (en) System and method for storage and deployment of virtual machines in a virtual server environment
US8321643B1 (en) System and methods for automatically re-signaturing multi-unit data storage volumes in distributed data storage systems
US7676628B1 (en) Methods, systems, and computer program products for providing access to shared storage by computing grids and clusters with large numbers of nodes
JP4175764B2 (en) Computer system
Teigland et al. Volume Managers in Linux.
US7457982B2 (en) Writable virtual disk of read-only snapshot file objects
US7415506B2 (en) Storage virtualization and storage management to provide higher level storage services
EP1949214B1 (en) System and method for optimizing multi-pathing support in a distributed storage system environment
US20120011176A1 (en) Location independent scalable file and block storage
EP1880324A2 (en) System and method for restriping data across a plurality of volumes
WO2002065275A1 (en) Storage virtualization system and methods
EP1763734A2 (en) System and method for supporting block-based protocols on a virtual storage appliance executing within a physical storage appliance
US7921262B1 (en) System and method for dynamic storage device expansion support in a storage virtualization environment
US10620843B2 (en) Methods for managing distributed snapshot for low latency storage and devices thereof
US7293152B1 (en) Consistent logical naming of initiator groups
US7783611B1 (en) System and method for managing file metadata during consistency points
US7506111B1 (en) System and method for determining a number of overwritten blocks between data containers
Brinkmann et al. Storage management as means to cope with exponential information growth

Legal Events

Date Code Title Description
AS Assignment

Owner name: VERITAS OPERATING CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KARR, RONALD S.;KISELEV, OLEG;MIROSCHNICHENKO, ALEX;REEL/FRAME:014750/0706

Effective date: 20031111

AS Assignment

Owner name: SYMANTEC CORPORATION, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:019872/0979

Effective date: 20061030

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION

AS Assignment

Owner name: SYMANTEC OPERATING CORPORATION, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 019872 FRAME 979. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNEE IS SYMANTEC OPERATING CORPORATION;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:027819/0462

Effective date: 20061030