WO2005055043A1 - System and method for emulating operating system metadata to provide cross-platform access to storage volumes - Google Patents

System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Info

Publication number
WO2005055043A1
Authority
WO
WIPO (PCT)
Prior art keywords
host
storage device
logical volume
storage
metadata
Prior art date
Application number
PCT/US2004/039306
Other languages
French (fr)
Inventor
Ronald S. Karr
Oleg Kiselev
Alex Miroschnichenko
Algaia Kong
Original Assignee
Veritas Operating Corporation
Priority date
Filing date
Publication date
Application filed by Veritas Operating Corporation filed Critical Veritas Operating Corporation
Priority to JP2006541649A priority Critical patent/JP4750040B2/en
Priority to EP04811936A priority patent/EP1687706A1/en
Publication of WO2005055043A1 publication Critical patent/WO2005055043A1/en
Priority to US11/156,635 priority patent/US7689803B2/en
Priority to US11/156,820 priority patent/US7669032B2/en
Priority to US11/156,821 priority patent/US20050235132A1/en
Priority to US11/156,636 priority patent/US20050228950A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0607 Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662 Virtualisation aspects
    • G06F3/0664 Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • TITLE SYSTEM AND METHOD FOR EMULATING OPERATING SYSTEM METADATA TO PROVIDE CROSS-PLATFORM ACCESS TO STORAGE VOLUMES
  • This invention relates to computer systems and, more particularly, to off-host virtualization within storage environments.
  • Such software and hardware may also be configured to present physical storage devices as virtual storage devices (e.g., virtual SCSI disks) to computer hosts, and to add storage features not present in individual storage devices to the storage model.
  • Examples of such added storage features include features to increase fault tolerance, such as data mirroring, snapshot/fixed image creation, or data parity, and features to increase data access performance, such as disk striping.
  • the added storage features may be referred to as storage virtualization features, and the software and/or hardware providing the virtual storage devices and the added storage features may be termed "virtualizers" or “virtualization controllers”.
  • Virtualization may be performed within computer hosts, such as within a volume manager layer of a storage software stack at the host, and/or in devices external to the host, such as virtualization switches or virtualization appliances. Such external devices providing virtualization may be termed "off-host" virtualizers, and may be utilized in order to offload processing required for virtualization from the host. Off-host virtualizers may be connected to the external physical storage devices for which they provide virtualization functions via a variety of interconnects, such as Fiber Channel links, Internet Protocol (IP) networks, and the like.
  • Some of the storage software layers may form part of the operating system in use at the host, and may differ from one operating system to another.
  • a layer such as the disk driver layer for a given operating system may be configured to expect certain types of configuration information for the disk to be laid out in a specific format, for example in a header (located at the first few blocks of the disk) containing disk partition layout information.
  • the storage stack software layers used to access local physical disks may also be utilized to access external storage devices presented as virtual storage devices by off-host virtualizers. Therefore, it may be desirable for an off-host virtualizer to provide configuration information for the virtual storage devices in formats expected by the different operating systems that may be in use at the hosts in the storage environment. In addition, it may be desirable for the off-host virtualizer to implement a technique to flexibly and dynamically map storage within external physical storage devices to the virtual storage devices presented to the host storage software layers, e.g., without requiring a reboot of the host.
  • a system may include a first host and an off-host virtualizer, such as a virtualization switch or a virtualization appliance.
  • the off-host virtualizer may be configured to generate operating system metadata for a first virtual storage device, such as a virtual LUN, and to make the operating system metadata accessible to the first host.
  • the operating system metadata may be customized according to a requirement of the operating system in use at the host; that is, the off-host virtualizer may be capable of providing metadata customized for a number of different operating systems in use at different hosts.
  • the first host may be configured to specify the specific metadata requirements of its operating system to the off-host virtualizer.
  • the first host may include a storage software stack including a first layer, such as a disk driver layer, configured to use the operating system metadata provided by the off-host virtualizer to detect the existence of the first virtual storage device as an addressable storage device such as a partition.
  • the off-host virtualizer may be configured to aggregate storage within one or more physical storage devices into a logical volume, and to map the logical volume to the first virtual storage device or virtual LUN.
  • the off-host virtualizer may also be configured to provide logical volume metadata to a second layer of the storage software stack (such as an intermediate driver layer between a disk driver layer and a file system layer), allowing the second layer to perform I/O operations on the logical volume.
  • the first virtual storage device may be initially unmapped to physical storage.
  • the recognition of the unmapped first virtual storage device as an addressable storage device may occur during a system initialization stage prior to an initiation of production I/O operations.
  • an unmapped or "blank" virtual LUN may be prepared for subsequent dynamic mapping by the off-host virtualizer.
  • the unmapped LUN may be given an initial size equal to the maximum allowed LUN size supported by the operating system in use at the host, so that the size of the virtual LUN may not require modification after initialization.
  • multiple virtual LUNs may be pre-generated for use at a single host, for example in order to isolate storage for different applications, or to accommodate limits on maximum LUN sizes.
  • the system may also include two or more physical storage devices, and the off-host virtualizer may be configured to dynamically map physical storage from a first and a second physical storage device to a respective range of addresses within the first virtual storage device.
  • the off-host virtualizer may be configured to perform an N-to-1 mapping between the physical storage devices (which may be called physical LUNs) and virtual LUNs, allowing storage in the physical storage devices to be accessed from the host via the pre-generated virtual LUNs.
  • Configuration information regarding the location of the first and/or the second address ranges within the virtual LUN may be passed from the off-host virtualizer to a second layer of the storage stack at the host (e.g., an intermediate driver layer above a disk driver layer) using a variety of different mechanisms.
  • Such mechanisms may include, for example, the off-host virtualizer writing the configuration information to certain special blocks within the virtual LUN, sending messages to the host over a network, or special extended SCSI mode pages.
  • two or more different ranges of physical storage within a single physical storage device may be mapped to corresponding pre-generated virtual storage devices such as virtual LUNs and presented to corresponding hosts.
  • the off-host virtualizer may allow each host of a plurality of hosts to access a respective portion of a physical storage device through a respective virtual LUN.
  • the off-host virtualizer may also be configured to implement a security policy isolating the ranges of physical storage within the shared physical storage device; i.e., to allow a host to access only those regions to which the host has been granted access, and to prevent unauthorized accesses.
  • the off-host virtualizer may be further configured to aggregate storage within a physical storage device into a logical volume, dynamically map the logical volume to a range of addresses within a pre-generated virtual storage device, and pass logical volume metadata to the second layer of the storage stack, allowing I/O operations to be performed on the logical volume.
  • Storage from a single physical storage device may be aggregated into any desired number of different logical volumes, and any desired number of logical volumes may be mapped to a single virtual storage device or virtual LUN.
  • the off-host virtualizer may be further configured to provide volume-level security, i.e., to prevent unauthorized access from a host to a logical volume, even when the physical storage corresponding to the logical volume is part of a shared physical storage device.
  • physical storage from any desired number of physical storage devices may be aggregated into a logical volume, thereby allowing a single volume to extend over a larger address range than the maximum allowed size of a single physical storage device.
  • the virtual storage devices or virtual LUNs may be distributed among a number of independent front-end storage networks, such as fiber channel fabrics, and the physical storage devices backing the logical volumes may be distributed among a number of independent back-end storage networks.
  • a first host may access its virtual storage devices through a first storage network
  • a second host may access its virtual storage devices through a second storage network independent from the first (that is, reconfigurations and/or failures in the first storage network may not affect the second storage network).
  • the off-host virtualizer may access a first physical storage device through a third storage network, and a second physical storage device through a fourth storage network.
  • the ability of the off-host virtualizer to dynamically map storage across pre-generated virtual storage devices distributed among independent storage networks may support a robust and flexible storage environment.
  • FIG. 1a is a block diagram illustrating one embodiment of a computer system.
  • FIG. 1b is a block diagram illustrating an embodiment of a system configured to utilize off-host block virtualization.
  • FIG. 2a is a block diagram illustrating the addition of operating-system specific metadata to a virtual logical unit (LUN) encapsulating a source volume, according to one embodiment.
  • FIG. 2b is a block diagram illustrating an embodiment where an off-host virtualizer is configured to include a partition table within operating system metadata.
  • FIG. 2c is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage within physical storage devices into a logical volume and to map the logical volume to a virtual LUN.
  • FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer configured to create a plurality of virtual LUNs.
  • FIG. 4 is a flow diagram illustrating aspects of the operation of a host and an off-host virtualizer configured to generate operating system metadata.
  • FIG. 5 is a flow diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage in one or more physical storage devices into a logical volume, and to map the logical volume to a virtual LUN.
  • FIG. 6 is a block diagram illustrating an example of an unmapped virtual LUN according to one embodiment.
  • FIG. 7 is a block diagram illustrating an embodiment including an off-host virtualizer configured to create a plurality of unmapped virtual LUNs.
  • FIG. 8 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to dynamically map physical storage from within two different physical storage devices to a single virtual LUN.
  • FIG. 9 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to dynamically map physical storage from within a single physical storage device to two virtual LUNs assigned to different hosts.
  • FIG. 10 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage of a physical storage device into a logical volume and dynamically map the logical volume to a range of blocks of a virtual LUN.
  • FIG. 11 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to dynamically map multiple logical volumes to a single virtual LUN.
  • FIG. 12 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from a physical storage device into two logical volumes, and to dynamically map each of the two logical volumes to a different virtual LUN.
  • FIG. 13 is a block diagram illustrating an embodiment employing multiple storage networks.
  • FIG. 14 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from two physical storage devices into a single logical volume.
  • FIG. 15 is a flow diagram illustrating aspects of the operation of a host and an off-host virtualizer according to one embodiment where the off-host virtualizer is configured to support dynamic physical LUN tunneling.
  • FIG. 16 is a flow diagram illustrating aspects of the operation of a host and an off-host virtualizer according to one embodiment, where off-host virtualizer 180 is configured to support dynamic volume tunneling.
  • FIG. 17 is a block diagram illustrating an embodiment where an off-host virtualizer comprises a virtualization switch.
  • FIG. 18 is a block diagram illustrating an embodiment where an off-host virtualizer comprises a virtualization appliance.
  • FIG. 19 is a block diagram illustrating an exemplary host, according to one embodiment.
  • FIG. 20 is a block diagram illustrating a computer-accessible medium according to one embodiment.
  • FIG. 1a is a block diagram illustrating a computer system 100 according to one embodiment.
  • System 100 includes a host 110 coupled to a physical block device 120 via an interconnect 130.
  • Host 110 includes a traditional software storage stack 140A that may be used to perform I/O operations on physical block device 120 via interconnect 130.
  • a physical block device 120 may comprise any hardware entity that provides a collection of linearly addressed data blocks that can be read or written.
  • a physical block device may be a single disk drive configured to present all of its sectors as an indexed array of blocks.
  • the physical block device may be a disk array device, or a disk configured as part of a disk array device.
  • any suitable type of storage device may be configured as a block device, such as fixed or removable magnetic media drives (e.g., hard drives, floppy or Zip-based drives), writable or read-only optical media drives (e.g., CD or DVD), tape drives, solid-state mass storage devices, or any other type of storage device.
  • the interconnect 130 may utilize any desired storage connection technology, such as various variants of the Small Computer System Interface (SCSI) protocol, Fiber Channel, Internet Protocol (IP), Internet SCSI (iSCSI), or a combination of such storage networking technologies.
  • the software storage stack 140A may comprise layers of software within an operating system at host 110, and may be accessed by a client application to perform I/O (input/output) on a desired physical block device 120.
  • a client application may initiate an I/O request, for example as a request to read a block of data at a specified offset within a file.
  • the request may be received (e.g., in the form of a read() system call) at the file system layer 112, translated into a request to read a block within a particular device object (i.e., a software entity representing a storage device), and passed to the disk driver layer 114.
  • the disk driver layer 114 may then select the targeted physical block device 120 corresponding to the disk device object, and send a request to an address at the targeted physical block device over the interconnect 130 using the interconnect-dependent I/O driver layer 116.
  • a host bus adapter (such as a SCSI HBA) may be used to transfer the I/O request, formatted according to the appropriate storage protocol (e.g., SCSI), to a physical link of the interconnect (e.g., a SCSI bus).
  • an interconnect-dependent firmware layer 122 may receive the request, perform the desired physical I/O operation at the physical storage layer 124, and send the results back to the host over the interconnect.
  • the results (e.g., the desired blocks of the file to be read) may then be transferred through the various layers of storage stack 140A in reverse order (i.e., from the interconnect-dependent I/O driver to the file system) before being passed to the requesting client application.
  • a hierarchical scheme may be used for addressing storage within physical block devices 120.
  • an operating system may employ a four-level hierarchical addressing scheme of the form <"hba", "bus", "target", "lun"> for SCSI devices, including a SCSI HBA identifier ("hba"), a SCSI bus identifier ("bus"), a SCSI target identifier ("target"), and a logical unit identifier ("lun"), and may be configured to populate a device database with addresses for available SCSI devices during boot.
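  • As a rough, hypothetical illustration of the addressing scheme above (names and values are invented for this sketch and are not taken from any particular operating system), the four-level address can be modeled as a key into the device database populated during boot:

```python
from typing import Dict, NamedTuple

class ScsiAddress(NamedTuple):
    """Four-level SCSI address: <hba, bus, target, lun>."""
    hba: int
    bus: int
    target: int
    lun: int

# Hypothetical device database populated while probing each bus at boot.
device_db: Dict[ScsiAddress, str] = {}

def register_device(hba: int, bus: int, target: int, lun: int, name: str) -> None:
    device_db[ScsiAddress(hba, bus, target, lun)] = name

# Example: a disk discovered as LUN 0 of target 3 on bus 0 of HBA 1.
register_device(1, 0, 3, 0, "disk_c1t3d0")
print(device_db[ScsiAddress(1, 0, 3, 0)])
```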
  • Host 110 may include multiple SCSI HBAs, and a different SCSI adapter identifier may be used for each HBA.
  • the SCSI adapter identifiers may be numbers issued by the operating system kernel, for example based on the physical placement of the HBA cards relative to each other (i.e., based on slot numbers used for the adapter cards).
  • Each HBA may control one or more SCSI buses, and a unique SCSI bus number may be used to identify each SCSI bus within an HBA.
  • the HBA may be configured to probe each bus to identify the SCSI devices currently attached to the bus.
  • the number of devices (such as disks or disk arrays) that may be attached on a SCSI bus may be limited, e.g., to 15 devices excluding the HBA itself.
  • SCSI devices that may initiate I/O operations, such as the HBA, are termed SCSI initiators, while devices where the physical I/O may be performed are called SCSI targets.
  • Each target on the SCSI bus may identify itself to the HBA in response to the probe.
  • each target device may also accommodate up to a protocol-specific maximum number of "logical units" (LUNs) representing independently addressable units of physical storage within the target device, and may inform the HBA of the logical unit identifiers.
  • a target device may contain a single LUN (e.g., a LUN may represent an entire disk or even a disk array) in some embodiments.
  • the SCSI device configuration information such as the target device identifiers and LUN identifiers may be passed to the disk driver layer 114 by the HBAs.
  • disk driver layer 114 may utilize the hierarchical SCSI address described above.
  • When accessing a LUN, disk driver layer 114 may expect to see OS-specific metadata at certain specific locations within the LUN. For example, in many operating systems, the disk driver layer 114 may be responsible for implementing logical partitioning (i.e., subdividing the space within a physical disk into partitions, where each partition may be used for a smaller file system).
  • Metadata describing the layout of a partition may be stored in an operating- system dependent format, and in an operating system-dependent location, such as in a header or a trailer, within a LUN.
  • a virtual table of contents (VTOC) structure may be located in the first partition of a disk volume, and a copy of the VTOC may also be located in the last two cylinders of the volume.
  • the operating system metadata may include cylinder alignment and/or cylinder size information, as well as boot code if the volume is bootable.
  • Operating system metadata for various versions of Microsoft Windows™ may include a "magic number" (a special number or numbers that the operating system expects to find, usually at or near the start of a disk), subdisk layout information, etc. If the disk driver layer 114 does not find the metadata in the expected location and in the expected format, the disk driver layer may not be able to perform I/O operations at the LUN.
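  • The "expected metadata at an expected location" behavior described above can be sketched in a few lines; the magic value, layout, and 512-byte block size below are invented for illustration and do not correspond to any real operating system's on-disk format:

```python
import struct

# Hypothetical header layout: a 4-byte magic number at byte offset 0, followed
# by a 2-byte partition count. Real formats (e.g., a VTOC or a Windows disk
# header) differ per operating system and per location on the disk.
EXPECTED_MAGIC = 0x0000DABE  # assumed value for this sketch only

def driver_recognizes_lun(first_block: bytes) -> bool:
    """Return True if the emulated header matches what this driver expects."""
    magic, partition_count = struct.unpack_from("<IH", first_block, 0)
    return magic == EXPECTED_MAGIC and partition_count > 0

# An off-host virtualizer emulating this header would make the LUN recognizable.
header_block = struct.pack("<IH", EXPECTED_MAGIC, 3) + bytes(506)  # one 512-byte block
print(driver_recognizes_lun(header_block))   # True
print(driver_recognizes_lun(bytes(512)))     # False: expected metadata missing
```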
  • block virtualization refers to a process of creating or aggregating logical or virtual block devices out of one or more underlying physical or logical block devices, and making the virtual block devices accessible to block device consumers for storage operations.
  • storage within multiple physical block devices, e.g., in a fiber channel storage area network (SAN), may be aggregated and presented to a host as a single virtual storage device such as a virtual LUN (VLUN), as described below in further detail.
  • one or more layers of software may rearrange blocks from one or more block devices, such as disks, and add various kinds of functions.
  • the resulting rearranged collection of blocks may then be presented to a storage consumer, such as an application or a file system, as one or more aggregated devices with the appearance of one or more basic disk drives. That is, the more complex structure resulting from rearranging blocks and adding functionality may be presented as if it were one or more simple arrays of blocks, or logical block devices.
  • multiple layers of virtualization may be implemented.
  • one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices.
  • logical volume refers to a virtualized block device that may be presented directly for use by a file system, database, or other applications that can directly use block devices. Further details on block virtualization, and advanced storage features supported by block virtualization, are provided below.
  • Block virtualization may be implemented at various places within a storage stack and the associated storage environment, in both hardware and software.
  • a block virtualization layer, in the form of a volume manager such as the VERITAS Volume Manager™ from VERITAS Software Corporation, may be added between the disk driver layer 114 and the file system layer 112.
  • virtualization functionality may be added to host bus adapters, i.e., in a layer between the interconnect-dependent I/O driver layer 116 and interconnect 130.
  • Block virtualization may also be performed outside the host 110, e.g., in a virtualization appliance or a virtualizing switch, which may form part of the interconnect 130.
  • Such external devices providing block virtualization may be termed off-host virtualizers or off-host virtualization controllers.
  • block virtualization functionality may be implemented by an off-host virtualizer in cooperation with a host-based virtualizer. That is, some block virtualization functionality may be performed off-host, and other block virtualization features may be implemented at the host.
  • multiple devices external to the host 110 such as two or more virtualization switches, two or more virtualization appliances, or a combination of virtualization switches and virtualization appliances, may cooperate to provide block virtualization functionality; that is, the off-host virtualizer may include multiple cooperating devices.
  • off-host virtualizers may typically be implemented in a manner that allows the existing storage software layers to continue to operate, even when the storage devices being presented to the operating system are virtual rather than physical, and remote rather than local.
  • an off-host virtualizer may present a virtualized storage device to the disk driver layer as a virtual LUN. That is, as described below in further detail, an off-host virtualizer may encapsulate, or emulate the metadata for, a LUN when providing a host 110 access to a virtualized storage device.
  • one or more software modules or layers may be added to storage stack 140A to support additional forms of virtualization using virtual LUNs.
  • FIG. 1b is a block diagram illustrating an embodiment of system 100 configured to utilize off-host block virtualization.
  • the system may include an off-host virtualizer 180, such as a virtualization switch or a virtualization appliance, which may be included within interconnect 130 linking host 110 to physical block device 120.
  • Host 110 may comprise an enhanced storage software stack 140B, which may include an intermediate driver layer 113 between the disk driver layer 114 and file system layer 112.
  • off-host virtualizer 180 may be configured to map storage within physical block device 120, or multiple physical block devices 120, into a virtual storage device (e.g., a virtual LUN or VLUN), and present the virtual storage device to host 110.
  • the back-end physical block device 120 that is mapped to a virtual LUN may be termed a "physical LUN (PLUN)" in the subsequent description.
  • off-host virtualizer 180 may be configured to map storage from a back-end physical LUN directly to a VLUN, without any additional virtualization (i.e., without creating a logical volume). Such a technique of mapping a PLUN to a VLUN may be termed "PLUN tunneling" in the subsequent description.
  • off-host virtualizer 180 may be configured to aggregate storage within one or more physical block devices 120 as one or more logical volumes, and map the logical volumes within the address space of a VLUN presented to host 110.
  • volume tunneling or "logical volume tunneling” may be used herein to refer to the technique of mapping a logical volume through a VLUN.
  • Off-host virtualizer 180 may further be configured to provide intermediate driver layer 113 with metadata or configuration information on the tunneled logical volumes, allowing intermediate driver layer 113 to locate and perform I/O operations on the logical volumes located within the virtual LUN on behalf of clients such as file system layer 112 or other applications.
  • an aliasing technique such as symbolic links may be used to associate a name with a tunneled logical volume, allowing file systems, applications and system administrators to refer to the tunneled logical volume using a conventional or recommended naming scheme.
  • the selection of a name may also be coordinated between the host 110 and the off-host virtualizer 180, e.g., using a field in an extended SCSI mode page, or through an external virtualization management infrastructure including one or more additional devices and/or hosts.
  • File system layer 112 and applications (such as database management systems) configured to utilize intermediate driver layer 113 and lower layers of storage stack 140B may be termed "virtual storage clients" or “virtual storage consumers” herein.
  • While off-host virtualizer 180 is shown within interconnect 130 in the embodiment depicted in FIG. 1b, in other embodiments off-host virtualization may also be provided within physical block device 120 (e.g., by a virtualization layer between physical storage layer 124 and firmware layer 122), or at another device outside interconnect 130.
  • a number of independently configurable interconnects may be employed to link off-host virtualizer 180 to hosts 110 and back-end storage devices 120: e.g., one or more front-end interconnects 130 or storage area network (SAN) fabrics may link off-host virtualizer 180 to hosts 110, while one or more back-end interconnects 130 may link off-host virtualizer 180 to back-end storage devices.
  • FIG. 2a is a block diagram illustrating the addition of operating-system specific metadata to a virtual LUN 210 encapsulating a source volume 205, according to one embodiment.
  • the source volume 205 consists of N blocks, numbered 0 through (N-1).
  • the virtual LUN 210 may include two regions of inserted metadata: a header 215 containing H blocks of metadata, and a trailer 225 including T blocks of metadata.
  • blocks 220 of the virtual LUN 210 may be mapped to the source volume 205, thereby making the virtual LUN 210 a total of (H+N+T) blocks long (i.e., the virtual LUN may contain blocks numbered 0 through (H+N+T-1)).
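  • The encapsulation arithmetic implied by this layout can be illustrated with a short sketch (the sizes H, N and T are example values only): blocks H through H+N-1 of the virtual LUN map one-to-one onto source-volume blocks 0 through N-1, while the header and trailer blocks are served from the emulated metadata.

```python
from typing import Optional

H, N, T = 16, 1024, 16  # example sizes in blocks: header, source volume, trailer

def vlun_to_source_block(vlun_block: int) -> Optional[int]:
    """Map a virtual-LUN block number to a source-volume block number.

    Blocks 0..H-1 and H+N..H+N+T-1 hold emulated metadata and have no backing
    source block; blocks H..H+N-1 map to source blocks 0..N-1.
    """
    if not 0 <= vlun_block < H + N + T:
        raise ValueError("block outside the virtual LUN")
    if vlun_block < H or vlun_block >= H + N:
        return None  # metadata region, answered by the off-host virtualizer
    return vlun_block - H

print(vlun_to_source_block(H))          # 0: first source-volume block
print(vlun_to_source_block(H + N - 1))  # N-1: last source-volume block
print(vlun_to_source_block(0))          # None: emulated header block
```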
  • Operating-system specific metadata included in header 215 and/or trailer 225 may be used by disk driver layer 114 to recognize the virtual LUN 210 as an addressable storage device.
  • An "addressable storage device”, as used herein, is a storage device whose blocks can be accessed (e.g., from a device driver such as a disk driver) using an address including a device identifier (such as a logical unit identifier) and an offset within the device.
  • additional configuration information or logical volume metadata may also be included within header 215 and/or trailer 225. The lengths of header 215 and trailer 225, as well as the format and content of the metadata, may vary with the operating system in use at host 110.
  • Off-host virtualizer 180 may be configured to customize the generated operating system metadata (e.g., in header 215 and/or trailer 225) based on the specific requirements of the operating system in use at host 110.
  • the requirements imposed by different operating systems may differ in the type of information to be included in the metadata (e.g., whether the metadata includes a partition table), the format in which the information is maintained (e.g., the units in which a partition or volume size is specified such as kilobytes or 512-byte blocks, whether offsets are expressed in hexadecimal or decimal numerals, etc.), the location within the LUN where the metadata is to be found, etc.
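  • One way an off-host virtualizer might keep track of such per-operating-system differences is a small registry keyed by operating system identity; the sketch below is hypothetical, and every field name and value in it is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class MetadataRequirements:
    # Illustrative fields only; real requirements are defined by each OS.
    needs_partition_table: bool
    size_units: str          # e.g., "512-byte blocks" or "kilobytes"
    header_blocks: int       # how much header metadata the OS expects
    trailer_blocks: int

# Hypothetical registry, e.g., populated from host requests, a management
# infrastructure, or an external database keyed by OS identity.
REQUIREMENTS_BY_OS = {
    "os_a": MetadataRequirements(True, "512-byte blocks", 16, 2),
    "os_b": MetadataRequirements(False, "kilobytes", 1, 0),
}

def requirements_for(os_identity: str) -> MetadataRequirements:
    return REQUIREMENTS_BY_OS[os_identity]

print(requirements_for("os_a"))
```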
  • the requirements may be sent from the host 110 to the off-host-virtualizer 180.
  • a virtualization management infrastructure may be implemented using one or more devices and/or additional hosts (i.e., using devices and/or hosts other than host 110 and off-host virtualizer 180).
  • a device or host of the virtualization management infrastructure may be configured to communicate with the host 110, obtain the metadata requirements for the operating system from the host 110, and provide the requirements to the off-host virtualizer 180.
  • host 110 may be configured to provide the operating system requirements to off-host virtualizer, for example using an extension of a storage command (such as an extended SCSI request).
  • the storage command and/or the extension may be vendor unique (i.e., a storage command may need to be extended in different ways for different hardware storage vendors) in some embodiments.
  • only an identity of the operating system in use at the host 110 may be provided to the off-host virtualizer 180, and the off-host virtualizer may be configured to obtain the details of the operating system requirements using the operating system identity (e.g., from a database maintained at the off-host virtualizer 180 or from an external database).
  • Operating system identity and/or metadata requirements for a host 110 may also be specified to off-host virtualizer 180 manually in some embodiments, e.g., by a system administrator using a graphical user interface or a command-line interface.
  • a common metadata format shared by multiple operating systems may be used.
  • the metadata inserted within virtual LUN 210 may be stored in persistent storage, e.g., within some blocks of physical block device 120 or at storage within off-host virtualizer 180, in some embodiments, and logically concatenated with the mapped blocks 220.
  • the metadata may be maintained in non-persistent storage (e.g., within a memory at off-host virtualizer 180) and/or generated dynamically whenever a host 110 accesses the virtual LUN 210.
  • the metadata may be generated by an external agent other than off-host virtualizer 180.
  • the external agent may be capable of emulating metadata in a variety of formats for different operating systems, including operating systems that may not have been known when the off-host virtualizer 180 was deployed.
  • off-host virtualizer 180 may be configured to support more than one operating system; i.e., off-host virtualizer may logically insert metadata blocks corresponding to any one of a number of different operating systems when presenting virtual LUN 210 to a host 110, thereby allowing hosts with different operating systems to share access to a particular storage device 120.
  • off-host virtualizer 180 may be configured to generate operating system metadata including partition layout information.
  • FIG. 2b is a block diagram illustrating one embodiment where off-host virtualizer 180 is configured to include a partition table 255 within the generated operating system metadata header 215.
  • the partition table 255 may provide layout or mapping information describing the offset within the VLUN 210 at which the blocks mapped to a logical partition of a physical storage device 250 may be located.
  • In the depicted example, the partition table may contain three entries corresponding to Partition 1, Partition 2 and Partition 3 of physical storage device 250, respectively.
  • Off-host virtualizer may also be configured to map one or more logical volumes to the address space of a virtual LUN in some embodiments.
  • FIG. 2c is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to aggregate storage within physical storage devices 250A and 250B into a logical volume 260, and to map logical volume 260 to blocks 261 of a virtual LUN 210.
  • off-host virtualizer 180 may also be configured to generate metadata specific to the logical volume in such embodiments.
  • Such logical volume metadata 263 may include information (such as a starting offset, volume length, virtualization layer information such as number of mirrors, etc.) allowing intermediate driver layer 113 to perform I/O operations on the logical volume 260.
  • FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer 180 configured to create a plurality of virtual LUNs (VLUNs) 210.
  • In the depicted embodiment, hosts 110A, 110B and 110C may be coupled to off-host virtualizer 180 via interconnect 130A, while off-host virtualizer 180 may be coupled to storage devices 340A, 340B and 340C (collectively, storage devices 340) via interconnect 130B.
  • Storage devices 340 may include physical block devices 120 as well as virtual block devices (e.g., in embodiments employing multiple layers of virtualization, as described below).
  • Hosts 110A and 110B may utilize Operating System A, while host 110C may utilize Operating System B; that is, in general, each host among hosts 110 may support any of a number of operating systems.
  • off-host virtualizer 180 may be configured to generate metadata for a number of VLUNs 210A-210E.
  • VLUN 210A may be assigned to host 110A (i.e., the operating system metadata generated for VLUN 210A may be made accessible to host 110A, allowing host 110A to detect an existence of VLUN 210A as an addressable storage device).
  • VLUN 210B may be assigned to host 110B, while VLUNs 210C, 210D and 210E may all be assigned to host 110C.
  • Storage within storage device 340A may be mapped to both VLUN 210A and VLUN 210E, as indicated by the dotted arrows in FIG. 3.
  • each of two or more hosts with different operating systems may be given access to the same storage device (e.g., 340A) using different VLUNs (e.g., 210A and 210E) in the illustrated embodiment.
  • off-host virtualizer 180 may be configured to support any desired total number of VLUNs 210, to assign any desired VLUNs to a given host, and to map storage from any desired combination of storage devices 340 to a given VLUN.
  • FIG. 4 is a flow diagram illustrating aspects of the operation of a host 110 and an off-host virtualizer 180 configured to generate operating system metadata as described above.
  • Off-host virtualizer 180 may be configured to receive operating system metadata requirements (block 410) from host 110, e.g., using one of the techniques described above, such as an extended SCSI command.
  • the off-host virtualizer 180 may then generate operating system metadata (block 420) for a virtual storage device such as a VLUN 210 in accordance with the requirements, and make the operating system metadata accessible to the host 110 (block 430).
  • the metadata may be made accessible to host 110 via a message sent by off-host virtualizer 180 over interconnect 130A in some embodiments.
  • host 110 may be configured to send a query to off-host virtualizer 180, requesting information on the VLUNs assigned to the host 110, and the metadata may be provided in response to the query.
  • a first layer of the storage software stack (e.g., disk driver layer 114) at host 110 may be configured to use the metadata to detect the existence of the virtual storage device as an addressable storage device (block 440).
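  • The FIG. 4 flow (blocks 410-440) can be summarized in a compact sketch; the Host class, its methods, and the 512-byte block size are stand-ins invented for this sketch rather than elements of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vlun:
    header: bytes
    trailer: bytes

@dataclass
class Host:
    os_identity: str
    detected_vluns: List[Vlun] = field(default_factory=list)

    def metadata_requirements(self) -> dict:
        # Block 410 input: e.g., conveyed via an extended SCSI command.
        return {"os": self.os_identity, "header_blocks": 16, "trailer_blocks": 2}

    def disk_driver_detect(self, vlun: Vlun) -> bool:
        # Block 440: the disk driver layer recognizes the emulated metadata.
        recognized = len(vlun.header) > 0
        if recognized:
            self.detected_vluns.append(vlun)
        return recognized

def off_host_virtualizer_flow(host: Host) -> bool:
    reqs = host.metadata_requirements()                      # block 410
    vlun = Vlun(header=bytes(512 * reqs["header_blocks"]),    # block 420
                trailer=bytes(512 * reqs["trailer_blocks"]))
    return host.disk_driver_detect(vlun)                      # blocks 430/440

print(off_host_virtualizer_flow(Host("os_a")))  # True
```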
  • FIG. 5 is a flow diagram illustrating an embodiment where an off-host virtualizer 180 is configured to aggregate storage in one or more physical storage devices into a logical volume 260, and to map the logical volume to a VLUN 210.
  • As in FIG. 4, operating system metadata requirements may be received by off-host virtualizer 180 (block 510), and operating system metadata for a virtual storage device or VLUN 210 may be generated by off-host virtualizer 180 in accordance with the requirements (block 520).
  • Off-host virtualizer may be configured to aggregate storage within one or more physical storage devices 340 into a logical volume 260 (block 530) and to map the logical volume into the address space of the virtual storage device or VLUN (block 540).
  • the operating system metadata, as well as additional metadata specific to the logical volume, may be provided to a host 110 (block 550).
  • a first layer of a software storage stack at the host may be configured to use the operating system metadata to detect the existence of the virtual storage device (block 560), while a second layer (such as intermediate driver 113) may be configured to use the logical volume metadata to access and initiate I/O operations on the logical volume 260 (block 570).
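  • The division of labor implied by blocks 560-570 can be pictured as follows: the disk driver layer sees only the VLUN it detected from the operating system metadata, while the intermediate driver layer uses the logical volume metadata to address the tunneled volume inside it. The metadata fields below are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class VolumeMetadata:
    # Illustrative subset of logical volume metadata (cf. metadata 263 above).
    start_offset_blocks: int   # where the volume begins inside the VLUN
    length_blocks: int
    mirrors: int

def volume_block_to_vlun_block(md: VolumeMetadata, volume_block: int) -> int:
    """Translate a logical-volume block into the VLUN block handed to the disk driver."""
    if not 0 <= volume_block < md.length_blocks:
        raise ValueError("block outside the logical volume")
    return md.start_offset_blocks + volume_block

md = VolumeMetadata(start_offset_blocks=16, length_blocks=2048, mirrors=2)
print(volume_block_to_vlun_block(md, 0))  # 16: first volume block within the VLUN
```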
  • the basic techniques of PLUN tunneling and volume tunneling described above may be extended to allow dynamic association of PLUNs and/or logical volumes with VLUNs.
  • While logical volumes such as logical volume 260 may typically be created and dynamically reconfigured (e.g., grown or shrunk, imported to hosts 110 or exported from hosts 110) efficiently, similar configuration operations on LUNs (i.e., physical and/or virtual LUNs) may typically be fairly slow.
  • Some LUN reconfiguration operations may be at least partially asynchronous, and may have unbounded completion times and/or ambiguous failure states.
  • LUN reconfiguration may only be completed after a system reboot; for example, a newly created physical or virtual LUN may not be detected by the operating system without a reboot.
  • To avoid such LUN reconfiguration overhead, it may be advisable to generate unmapped virtual LUNs (i.e., to create operating system metadata for virtual LUNs that are not initially mapped to any physical LUNs or to logical volumes) and to pre-assign the unmapped virtual LUNs to hosts 110 as part of an initialization process.
  • the initialization process may be completed prior to performing storage operations on the virtual LUNs on behalf of applications.
  • the layers of the software storage stack 140B may be configured to detect the existence of the virtual LUNs as addressable storage devices.
  • off-host virtualizer 180 may dynamically map physical LUNs and/or logical volumes to the virtual LUNs (e.g., by modifying portions of the operating system metadata), as described below in further detail.
  • the term "dynamic mapping”, as used herein, refers to a mapping of a virtual storage device (such as a VLUN) that is performed by modifying one or more blocks of metadata, and/or by communicating via one or more messages to a host 110, without requiring a reboot of the host 110 to which the virtual storage device is presented.
  • FIG. 6 is a block diagram illustrating an example of an unmapped virtual LUN 230 according to one embodiment.
  • the unmapped virtual LUN 230 may include an operating system metadata header 215 and an operating system metadata trailer 225, as well as a region of unmapped blocks 235.
  • the size of the region of unmapped blocks (X blocks in the depicted example) may be set to a maximum permissible LUN or maximum logical volume size supported by an operating system, so that any subsequent mapping of a logical volume or physical LUN to the virtual LUN does not require an expansion of the size of the virtual LUN.
  • the unmapped virtual LUN may consist of only the emulated metadata (e.g., header 215 and/or trailer 225), and the size of the virtual LUN may be increased when the volume or physical LUN is dynamically mapped.
  • disk driver layer 114 may have to modify some of its internal data structures when the virtual LUN is expanded, and may have to re-read the emulated metadata in order to do so.
  • Off-host virtualizer 180 may be configured to send a metadata change notification message to disk driver layer 114 in order to trigger the re-reading of the metadata.
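  • A toy sketch of this pre-generated, initially unmapped VLUN approach is shown below: the data region is sized up front to a (hypothetical) operating-system maximum so that later mappings never require growing the LUN, and a dynamic mapping only updates metadata and notifies the host, with no reboot. All class and method names are assumptions made for the sketch:

```python
OS_MAX_LUN_BLOCKS = 2**32            # assumed 32-bit block-address limit
HEADER_BLOCKS, TRAILER_BLOCKS = 16, 2

class HostStub:
    def on_metadata_change(self, vlun: "UnmappedVlun") -> None:
        # e.g., triggered by a metadata change notification message, so that
        # the disk driver / intermediate driver layers re-read the metadata.
        print("re-reading emulated metadata;", len(vlun.mappings), "mapping(s)")

class UnmappedVlun:
    """Pre-generated VLUN whose data region is initially unmapped."""
    def __init__(self) -> None:
        self.data_blocks = OS_MAX_LUN_BLOCKS - HEADER_BLOCKS - TRAILER_BLOCKS
        self.mappings = []           # (vlun_start_block, length, backing_device)

    def dynamic_map(self, vlun_start: int, length: int, backing: str, host: HostStub) -> None:
        self.mappings.append((vlun_start, length, backing))
        host.on_metadata_change(self)

vlun = UnmappedVlun()
vlun.dynamic_map(HEADER_BLOCKS, 1024, "plun_340A", HostStub())
```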
  • FIG. 7 is a block diagram illustrating an embodiment including an off-host virtualizer 180 configured to create a plurality of unmapped virtual LUNs (VLUNs) 230.
  • more than one unmapped VLUN may be associated with a single host 110.
  • off-host virtualizer 180 may assign unmapped VLUNs 230A and 230B to host 110A, and unmapped VLUNs 230C, 230D and 230E to host 110B.
  • multiple VLUNs may be associated with a given host to allow for isolation of storage used for different applications, or to allow access to storage beyond the maximum allowable LUN size supported in the system.
  • Off-host virtualizer 180 may be configured to dynamically map physical and/or virtual storage from storage devices 340 to the unmapped virtual LUNs 230.
  • Hosts 110A and 110B may be configured to use different operating systems in some embodiments, and may utilize the same operating system in other embodiments.
  • the previously unmapped VLUN 230 may provide the same general functionality as described previously for VLUNs 210 provided in basic (i.e., non-dynamic) PLUN tunneling and basic volume tunneling.
  • A number of different techniques for dynamic mapping of physical storage and logical volumes to VLUNs are described below. It is noted that each such technique (e.g., a mapping from two PLUNs to a single VLUN) may also be implemented using basic PLUN tunneling or basic volume tunneling in some embodiments, i.e., without starting with unmapped VLUNs pre-assigned to hosts 110.
  • operating system-specific metadata (e.g., in header 215 and/or trailer 225 of FIG. 2b) generated by off-host virtualizer 180 may allow disk driver layer 114 at a host 110 to detect the existence of an unmapped virtual LUN 230 as an addressable storage device. After VLUN 230 has been recognized by disk driver layer 114, a block at any offset within the VLUN address space may be accessed by the disk driver layer 114, and thus by any other layer above the disk driver layer.
  • intermediate driver layer 113 may be configured to communicate with off-host virtualizer 180 by reading from, and/or writing to, a designated set of blocks emulated within VLUN 230.
  • Such a designated set of blocks may provide a mechanism for off-host virtualizer 180 to provide intermediate driver layer 113 with configuration information associated with logical volumes or physical LUNs mapped to VLUN 230 in some embodiments.
  • off-host virtualizer 180 may be configured to dynamically map storage from a back-end physical LUN directly to an unmapped VLUN 230, without any additional virtualization (i.e., without creating a logical volume).
  • Such a technique of dynamically mapping a PLUN to a VLUN 230 may be termed "dynamic PLUN tunneling" (in contrast to the basic PLUN tunneling described above, where unmapped VLUNs may not be used).
  • Each PLUN may be mapped to a corresponding VLUN 230 (i.e., a 1-to-1 mapping of PLUNs to VLUNs may be implemented by off-host virtualizer 180) in some embodiments. In other embodiments, as described below in conjunction with the description of FIG. 8, storage from multiple PLUNs may be mapped into subranges of a given VLUN 230.
  • both basic and dynamic PLUN tunneling may allow the off-host virtualizer 180 to act as an isolation layer between VLUNs 230 (the storage entities directly accessible to hosts 110) and back-end PLUNs, allowing the off-host virtualizer to hide details related to physical storage protocol implementation from the hosts.
  • the back-end PLUNs may implement a different version of a storage protocol (e.g., SCSI-3) than the version seen by hosts 110 (e.g., SCSI-2), and the off-host virtualizer may provide any needed translation between the two versions.
  • off-host virtualizer 180 may be configured to implement a cooperative access control mechanism for the back-end PLUNs, and the details of the mechanism may remain hidden from the hosts 110.
  • off-host virtualizer 180 may also be configured to increase the level of data sharing using PLUN tunneling (i.e., either basic PLUN tunneling or dynamic PLUN tunneling).
  • For example, some disk arrays may restrict the number of hosts that may concurrently log in to a PLUN; off-host virtualizer 180 may allow multiple hosts to access such disk arrays through a single login. That is, multiple hosts 110 may log in to the off-host virtualizer 180, while the off-host virtualizer may log in to a disk array PLUN once on behalf of the multiple hosts 110. Off-host virtualizer 180 may then pass on I/O requests from the multiple hosts 110 to the disk array PLUN using a single login.
  • The number of logins (i.e., distinct entities logged in) at the disk array PLUN may thereby be reduced as a result of PLUN tunneling, without reducing the number of hosts 110 from which I/O operations targeted at the disk array PLUN may be initiated.
  • the total number of hosts 110 that may access storage at a single disk array PLUN with login count restrictions may thereby be increased, thus increasing the overall level of data sharing.
  • FIG. 8 is a block diagram illustrating an embodiment where an off-host virtualizer 180 is configured to dynamically map physical storage from within two different physical storage devices 340A and 340B to a single VLUN 230B. That is, off-host virtualizer 180 may be configured to map a first range of physical storage from device 340A into a first region of mapped blocks 821A within VLUN 230B, and map a second range of physical storage from device 340B into a second region of mapped blocks 821B within VLUN 230B.
  • the first and second ranges of physical storage may each represent a respective PLUN, such as a disk array, or a respective subset of a PLUN.
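  • The N-to-1 dynamic mapping of FIG. 8 can be pictured as a range map consulted for each I/O request; the block numbers and device names below are illustrative stand-ins for mapped blocks 821A and 821B:

```python
from typing import List, Optional, Tuple

# Each entry: (vlun_start_block, length_blocks, physical_device, physical_start_block).
RANGE_MAP: List[Tuple[int, int, str, int]] = [
    (16,   1024, "plun_340A", 0),
    (2048, 4096, "plun_340B", 512),
]

def resolve(vlun_block: int) -> Optional[Tuple[str, int]]:
    """Return the (physical device, physical block) backing a VLUN block, if any."""
    for vstart, length, plun, pstart in RANGE_MAP:
        if vstart <= vlun_block < vstart + length:
            return plun, pstart + (vlun_block - vstart)
    return None  # still-unmapped region of the VLUN

print(resolve(20))         # ('plun_340A', 4)
print(resolve(2050))       # ('plun_340B', 514)
print(resolve(1_000_000))  # None
```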
  • Configuration information indicating the offsets within VLUN 230B at which mapped blocks 821A and 821B are located may be provided by off-host virtualizer 180 to intermediate driver layer 113 using a variety of mechanisms in different embodiments.
  • off-host virtualizer 180 may write the configuration information to a designated set of blocks within VLUN 230, and intermediate driver layer 113 may be configured to read the designated set of blocks, as described above.
  • off-host virtualizer 180 may send a message containing the configuration information to host 110A, either directly (over interconnect 350A or another network) or through an intermediate coordination server.
  • the configuration information may be supplied within a special SCSI mode page (i.e., intermediate driver layer 113 may be configured to read a special SCSI mode page containing configuration information updated by off-host virtualizer 180).
  • off-host virtualizer 180 may send a message to intermediate driver layer 113 requesting that intermediate driver layer read a special SCSI mode page containing the configuration information.
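  • The designated-block mechanism mentioned above can be sketched as follows; the block number and the JSON encoding are assumptions chosen only to make the example concrete:

```python
import json

CONFIG_BLOCK = 1      # hypothetical designated block number within the VLUN
BLOCK_SIZE = 512

class EmulatedVlun:
    """Toy VLUN whose designated block carries virtualizer-written configuration."""
    def __init__(self, total_blocks: int) -> None:
        self.blocks = {i: bytes(BLOCK_SIZE) for i in range(total_blocks)}

    def write_block(self, n: int, data: bytes) -> None:
        self.blocks[n] = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]

    def read_block(self, n: int) -> bytes:
        return self.blocks[n]

# Off-host virtualizer side: publish the offsets of the mapped regions.
vlun = EmulatedVlun(total_blocks=64)
config = {"mapped": [{"start": 16, "length": 1024, "backing": "plun_340A"}]}
vlun.write_block(CONFIG_BLOCK, json.dumps(config).encode())

# Intermediate driver side: read the designated block to learn the layout.
print(json.loads(vlun.read_block(CONFIG_BLOCK).rstrip(b"\0")))
```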
  • FIG. 9 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to dynamically map physical storage from within a single physical storage device 340A to two VLUNs 230 assigned to different hosts 110A and 110B.
  • a first range of physical storage 955A of physical storage device 340A may be mapped to a first range of mapped blocks 821A within VLUN 230B assigned to host 110A.
  • a second range of physical storage 955B of the same physical storage device 340A may be mapped to a second range of mapped blocks 821C of VLUN 230E assigned to host 110B.
  • off-host virtualizer 180 may be configured to prevent unauthorized access to physical storage range 955A from host 110B, and to prevent unauthorized access to physical storage 955B from host 110A.
  • off-host virtualizer 180 may also be configured to provide security for each range of physical storage 955A and 955B, e.g., in accordance with a specified security protocol.
  • the security protocol may allow I/O operations to a given VLUN 230 (and to its backing physical storage) from only a single host 110.
  • Off-host virtualizer 180 may be configured to maintain access rights information for the hosts 110 and VLUNs 230 in some embodiments, while in other embodiments security tokens may be provided to each host 110 indicating the specific VLUNs to which access from the host is allowed, and the security tokens may be included with I/O requests.
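  • The disclosure does not specify a token format; the sketch below uses an HMAC-based token purely as one possible realization of per-VLUN access tokens that hosts include with their I/O requests and that the off-host virtualizer verifies:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)   # held by the off-host virtualizer / security server

def issue_token(host_id: str, vlun_id: str) -> str:
    """Grant a host access to a specific VLUN; the token accompanies I/O requests."""
    return hmac.new(SECRET, f"{host_id}:{vlun_id}".encode(), hashlib.sha256).hexdigest()

def io_allowed(host_id: str, vlun_id: str, token: str) -> bool:
    # The off-host virtualizer checks the token before performing the I/O.
    return hmac.compare_digest(token, issue_token(host_id, vlun_id))

token = issue_token("host_110A", "vlun_230B")
print(io_allowed("host_110A", "vlun_230B", token))  # True
print(io_allowed("host_110B", "vlun_230B", token))  # False: unauthorized host
```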
  • off-host virtualizer 180 may be configured to aggregate physical storage into a logical volume, and dynamically map the logical volume to an address range within a VLUN 230.
  • Such a technique of dynamically mapping a logical volume to a VLUN 230 may be termed "dynamic volume tunneling" (in contrast to the basic volume tunneling described above, where unmapped VLUNs may not be used).
  • FIG. 10 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to aggregate storage 1055A of physical storage device 340A into a logical volume 1060A, and dynamically map logical volume 1060A to a range of blocks (designated as mapped volume 1065A in FIG. 10) of VLUN 230B.
  • configuration information or metadata associated with the tunneled logical volume 1060A may be provided to intermediate driver layer 113 using any of a variety of mechanisms, such as an extended SCSI mode page, emulated virtual blocks within VLUN 230A, and/or direct or indirect messages sent from off-host virtualizer 180 to host 110A. While logical volume 1060A is shown as being backed by a portion of a single physical storage device 340A in the depicted embodiment, in other embodiments logical volume 1060A may be aggregated from all the storage within a single physical storage device, or from storage of two or more physical devices.
  • logical volume 1060A may itself be aggregated from other logical storage devices rather than directly from physical storage devices.
  • Each host 110 (i.e., host 110B in addition to host 110A) may be presented with one or more VLUNs 230 in the depicted embodiment.
  • FIG. 11 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to dynamically map multiple logical volumes to a single VLUN 230.
  • off-host virtualizer 180 may be configured to aggregate storage range 1155C from physical storage device 340A, and physical storage range 1155B from physical storage device 340B, into a logical volume 1160A, and map logical volume 1160A to a first mapped volume region 1165A of VLUN 230B.
  • off-host virtualizer 180 may also aggregate physical storage range 1155B from physical storage device 340A into a second logical volume 1160B, and map logical volume 1160B to a second mapped volume region 1165B of VLUN 230B.
  • off-host virtualizer 180 may aggregate any suitable selection of physical storage blocks from one or more physical storage devices 340 into one or more logical volumes, and map the logical volumes to one or more of the pre-generated unmapped VLUNs 230.
  • FIG. 12 is a block diagram illustrating another embodiment, where off-host virtualizer 180 is configured to aggregate storage from physical storage device 340A into logical volumes 1260A and 1260B, and to dynamically map each of the two logical volumes to a different VLUN 230.
  • logical volume 1260A may be mapped to a first address range within VLUN 230B, accessible from host 110A, while logical volume 1260B may be mapped to a second address range within VLUN 230E, accessible from host 110B.
  • Off-host virtualizer 180 may further be configured to implement a security protocol to prevent unauthorized access and/or data corruption, similar to the security protocol described above for PLUN tunneling. Off-host virtualizer 180 may implement the security protocol at the logical volume level: that is, off-host virtualizer 180 may prevent unauthorized access to logical volumes 1260A (e.g., from host 110B) and 1260B (e.g., from host 110A) whose data may be stored within a single physical storage device 340A.
  • off-host virtualizer 180 may be configured to maintain access rights information for logical volumes 1260 to which each host 110 has been granted access.
  • security tokens may be provided to each host 110 (e.g., by off-host virtualizer 180, or by an external security server) indicating the specific logical volumes 1260 to which access from the host is allowed, and the security tokens may be included with I/O requests.
  • in storage area networks (SANs), providing access to a particular PLUN or logical volume from a host that did not previously have access to it may traditionally require SAN fabric reconfiguration, such as switch reconfigurations, recabling, rebooting, etc.; the techniques of dynamic PLUN tunneling and dynamic volume tunneling, described above, may allow such SAN reconfiguration operations to be simplified.
  • PLUN tunneling and volume tunneling may also support storage interconnection across independently configured storage networks (e.g., interconnection across multiple fiber channel fabrics).
  • FIG. 13 is a block diagram illustrating an embodiment employing multiple storage networks.
  • off-host virtualizer 180 may be configured to access physical storage device 340A via a first storage network 1310A, and to access physical storage device 340B via a second storage network 1310B.
  • Off-host virtualizer 180 may aggregate storage from physical storage device 340A into logical volume 1360A, and map logical volume 1360A to VLUN 230B.
  • off-host virtualizer 180 may aggregate storage from physical storage device 340B into logical volume 1360B, and map logical volume 1360B to VLUN 230E.
  • Host 110A may be configured to access VLUN 230A via a third storage network 1310C, and to access VLUN 230B via a fourth storage network 1310D.
  • each storage network 1310 (i.e., storage network 1310A, 1310B, 1310C, or 1310D) may be configured and operated independently of the others, so that a failure or a misconfiguration within a given storage network 1310 may not affect any other independent storage network 1310.
  • hosts 110 may include multiple HBAs, allowing each host to access multiple independent storage networks; for example, host 110A may include two HBAs in the embodiment depicted in FIG. 13.
  • while FIG. 13 depicts the use of multiple independent storage networks in conjunction with volume tunneling, in other embodiments multiple independent storage networks may also be used with PLUN tunneling, or with a combination of PLUN and volume tunneling.
  • the use of independent storage networks 1310 may be asymmetric: e.g., in one embodiment, multiple independent storage networks 1310 may be used for front-end connections (i.e., between off-host virtualizer 180 and hosts 110), while only a single storage network may be used for back-end connections (i.e., between off-host virtualizer 180 and physical storage devices 340). Any desired interconnection technology and/or protocol may be used to implement storage networks 1310, such as fiber channel, IP-based protocols, etc. For example, in one embodiment each storage network 1310 may be a fibre channel fabric. In another embodiment, the interconnect technology or protocol used within a first storage network 1310 may differ from the interconnect technology or protocol used within a second storage network 1310.
  • volume tunneling may also allow maximum LUN size limitations to be overcome.
  • the SCSI protocol may be configured to use a 32-bit unsigned integer as a LUN block address, thereby limiting the maximum amount of storage that can be accessed at a single LUN to 2 terabytes (for 512-byte blocks) or 32 terabytes (for 8-kilobyte blocks).
  • Volume tunneling may allow an intermediate driver layer 113 to access storage from multiple physical LUNs as a volume mapped to a single VLUN, thereby overcoming the maximum LUN size limitation.
  • off-host virtualizer 180 may be configured to aggregate storage from two physical storage devices 340A and 340B into a single logical volume 1460A, where the size of the volume 1460A exceeds the allowed maximum LUN size supported by the storage protocol in use at storage devices 340.
  • Off-host virtualizer 180 may further be configured to dynamically map logical volume 1460A to VLUN 230B, and to provide logical volume metadata to intermediate driver layer 113 at host 110A.
  • the logical volume metadata may include sufficient information for intermediate driver layer 113 to access a larger address space within VLUN 230B than the maximum allowed LUN size.
  • FIG. 15 is a flow diagram illustrating aspects of the operation of host 110 and off-host virtualizer 180 according to one embodiment, where off-host virtualizer 180 is configured to support dynamic PLUN tunneling.
  • off-host virtualizer 180 may be configured to generate operating system metadata for an unmapped virtual storage device (e.g., a VLUN) (block 1510) and make the metadata accessible to a host 110 (block 1515).
  • a first layer of a storage software stack at host 110, such as disk driver layer 114 of FIG. 1b, may be configured to use the O.S. metadata to detect the existence of the virtual storage device as an addressable storage device.
  • off-host virtualizer 180 may be configured to dynamically map physical storage from one or more back-end physical storage devices 340 (e.g., PLUNs) to an address range within the virtual storage device (block 1525).
  • FIG. 16 is a flow diagram illustrating aspects of the operation of a host 110 and an off-host virtualizer 180 according to one embodiment, where off-host virtualizer 180 is configured to support dynamic volume tunneling.
  • the first three blocks depicted in FIG. 16 may represent functionality similar to the first three blocks shown in FIG. 15. That is, off-host virtualizer 180 may be configured to receive operating system metadata requirements (block 1605), generate operating system metadata for an unmapped virtual storage device (e.g., a VLUN) (block 1610) and make the metadata accessible to a host 110 (block 1615).
  • a first layer of a storage software stack, such as disk driver layer 114 of FIG. 1b, may be configured to use the O.S. metadata to detect the existence of the virtual storage device as an addressable storage device.
  • off-host virtualizer 180 may be configured to aggregate storage at one or more physical storage devices 340 into a logical volume (block 1625), and to dynamically map the logical volume to an address range within the previously unmapped virtual storage device (block 1630).
  • Off-host virtualizer 180 may further be configured to provide logical volume metadata to a second layer of the storage software stack at host 110 (e.g., intermediate driver layer 113) (block 1635), allowing the second layer to locate the blocks of the logical volume and to perform desired I/O operations on the logical volume (block 1640).
  • off-host virtualizer 180 may implement numerous different types of storage functions using block virtualization.
  • a virtual block device such as a logical volume may implement device striping, where data blocks may be distributed among multiple physical or logical block devices, and/or device spanning, in which multiple physical or logical block devices may be joined to appear as a single large logical block device.
  • virtualized block devices may provide mirroring and other forms of redundant data storage, the ability to create a snapshot or static image of a particular block device at a point in time, and/or the ability to replicate data blocks among storage systems connected through a network such as a local area network (LAN) or a wide area network (WAN), for example. Additionally, in some embodiments virtualized block devices may implement certain performance optimizations, such as load distribution, and/or various capabilities for online reorganization of virtual device structure, such as online data migration between devices. In other embodiments, one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. More than one virtualization feature, such as striping and mirroring, may thus be combined within a single virtual block device in some embodiments, creating a logically hierarchical virtual storage device.
  • the off-host virtualizer 180 may provide functions such as configuration management of virtualized block devices and distributed coordination of block device virtualization. For example, after a reconfiguration of a logical volume shared by two hosts 110 (e.g., when the logical volume is expanded, or when a new mirror is added to the logical volume), the off-host virtualizer 180 may be configured to distribute metadata or a volume description indicating the reconfiguration to the two hosts 110.
  • the storage stacks at the hosts may be configured to interact directly with various storage devices 340 according to the volume description (i.e., to transform logical I/O requests into physical I/O requests using the volume description).
  • Distribution of a virtualized block device as a volume to one or more virtual device clients, such as hosts 110, may be referred to as distributed block virtualization.
  • multiple layers of virtualization may be employed, for example at the host level as well as at an off-host level, such as at a virtualization switch or at a virtualization appliance.
  • some aspects of virtualization may be visible to a virtual device consumer such as file system layer 112, while other aspects may be implemented transparently by the off-host level.
  • the virtualization details of one block device (e.g., one volume) may be visible to a virtual device consumer, while the virtualization details of another block device may be partially or entirely transparent to the virtual device consumer.
  • in general, a virtualizer such as off-host virtualizer 180 may be any type of device, external to host 110, that is capable of providing the virtualization functionality, including PLUN and volume tunneling, described above.
  • off-host virtualizer 180 may include a virtualization switch, a virtualization appliance, a special additional host dedicated to providing block virtualization, or an embedded system configured to use application specific integrated circuit (ASIC) or field-programmable gate array (FPGA) technology to provide block virtualization functionality.
  • FIG. 17 is a block diagram illustrating an embodiment where off-host virtualizer 180 comprises a virtualization switch 1710.
  • FIG. 18 is a block diagram illustrating another embodiment where off-host virtualizer 180 comprises a virtualization appliance 1810.
  • a virtualization switch 1710 may be an intelligent fibre channel switch, configured with sufficient processing capacity to perform virtualization functions in addition to providing fibre channel connectivity.
  • a virtualization appliance 1810 may be an intelligent device programmed to provide virtualization functions, such as mirroring, striping, snapshots, and the like.
  • An appliance may differ from a general-purpose computer in that the appliance software is normally customized for the function (such as virtualization) to which the appliance is dedicated, pre-installed by the vendor, and not easily modifiable by a user.
  • off-host block virtualization may be provided by a collection of cooperating devices, such as two or more virtualizing switches, instead of a single device. Such a collection of cooperating devices may be configured for failover, i.e., a standby cooperating device may be configured to take over the virtualization functions supported by a failed cooperating device.
  • An off-host virtualizer 180 may incorporate one or more processors, as well as volatile and/or non-volatile memory.
  • configuration information associated with virtualization may be maintained at a database separate from the off-host virtualizer 180, and may be accessed by off-host virtualizer over a network.
  • an off-host virtualizer may be programmable and/or configurable. Numerous other configurations of off-host virtualizer 180 are possible and contemplated.
  • FIG. 19 is a block diagram illustrating an exemplary host 110, according to one embodiment.
  • Host 110 may be any computer system, such as a server comprising one or more processors 1910 and one or more memories 1920, capable of supporting the storage software stack 140B described above.
  • host 110 may include one or more local storage devices 1930 (such as disks) as well as one or more network interfaces 1940 providing an interface to an interconnect 130. Portions of the storage software stack 140B may be resident in a memory 1920, and may be loaded into memory 1920 from storage devices 1930 as needed.
  • a host 110 may also be a diskless computer, configured to access storage from a remote location instead of using local storage devices 1930.
  • Various other components, such as video cards, monitors, mice, and the like, may also be included within host 110 in some embodiments.
  • the intermediate driver layer 113 may be included within a volume manager in some embodiments.
  • FIG. 20 is a block diagram illustrating a computer-accessible medium 2000 comprising virtualization software 2010 capable of providing the functionality of off-host virtualizer 180 and block storage software stack 140B described above.
  • Virtualization software 2010 may be provided to a computer system using a variety of computer-accessible media, including electronic media (e.g., flash memory), memory media such as RAM (e.g., SDRAM, RDRAM, SRAM, etc.), optical storage media such as CD-ROM, etc., as well as transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
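To make the dynamic volume tunneling bullets above more concrete, the sketch below models the kind of bookkeeping they describe: per-VLUN records of mapped volume regions, each backed by one or more physical extents, with block addresses translated on each access. All class names, identifiers and block numbers here are invented for illustration; the patent text does not prescribe any particular data structures.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PhysicalExtent:
    """A contiguous run of blocks on a back-end physical LUN (PLUN)."""
    plun_id: str
    start_block: int
    num_blocks: int

@dataclass
class LogicalVolume:
    """A logical volume aggregated from one or more physical extents."""
    name: str
    extents: List[PhysicalExtent]

    def resolve(self, vol_offset: int) -> Tuple[str, int]:
        """Translate a volume-relative block offset to (plun_id, plun_block)."""
        remaining = vol_offset
        for ext in self.extents:
            if remaining < ext.num_blocks:
                return ext.plun_id, ext.start_block + remaining
            remaining -= ext.num_blocks
        raise ValueError("offset beyond end of volume")

@dataclass
class MappedRegion:
    """A range of VLUN blocks dynamically mapped to a logical volume."""
    vlun_start: int
    num_blocks: int
    volume: LogicalVolume

@dataclass
class VirtualLUN:
    """A pre-generated VLUN: emulated metadata plus dynamically mapped regions."""
    vlun_id: str
    regions: List[MappedRegion] = field(default_factory=list)

    def map_volume(self, vlun_start: int, volume: LogicalVolume) -> None:
        size = sum(e.num_blocks for e in volume.extents)
        # Reject mappings that would overlap an existing region of this VLUN.
        for r in self.regions:
            if not (vlun_start + size <= r.vlun_start or
                    r.vlun_start + r.num_blocks <= vlun_start):
                raise ValueError("region overlaps an existing mapping")
        self.regions.append(MappedRegion(vlun_start, size, volume))

    def translate(self, vlun_block: int) -> Tuple[str, int]:
        """Route a VLUN block address to the backing PLUN block."""
        for r in self.regions:
            if r.vlun_start <= vlun_block < r.vlun_start + r.num_blocks:
                return r.volume.resolve(vlun_block - r.vlun_start)
        raise ValueError("block is not mapped")

# Two volumes tunneled through one VLUN, loosely echoing FIG. 11 (values are made up).
vol_a = LogicalVolume("1160A", [PhysicalExtent("PLUN-340A", 0, 1000),
                                PhysicalExtent("PLUN-340B", 500, 1000)])
vol_b = LogicalVolume("1160B", [PhysicalExtent("PLUN-340A", 5000, 2000)])
vlun = VirtualLUN("VLUN-230B")
vlun.map_volume(128, vol_a)      # first mapped volume region
vlun.map_volume(4096, vol_b)     # second mapped volume region
print(vlun.translate(128))       # -> ('PLUN-340A', 0)
print(vlun.translate(1200))      # -> ('PLUN-340B', 572)
```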

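The volume-level security protocol mentioned in the list above is described only in terms of access-rights information or security tokens accompanying I/O requests. Purely as an illustration, such a token could be realized as a keyed hash over the host and volume identifiers, checked by the off-host virtualizer before an I/O is honored; the scheme, key handling and names below are assumptions, not part of the original text.

```python
import hashlib
import hmac

SECRET = b"shared-secret-known-to-virtualizer"   # illustrative only

def issue_token(host_id: str, volume_name: str) -> str:
    """Issue a token granting host_id access to volume_name."""
    msg = f"{host_id}:{volume_name}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def check_io_request(host_id: str, volume_name: str, token: str) -> bool:
    """Verify, at the off-host virtualizer, that an I/O request carries a valid token."""
    expected = issue_token(host_id, volume_name)
    return hmac.compare_digest(expected, token)

tok = issue_token("host-110A", "volume-1260A")
assert check_io_request("host-110A", "volume-1260A", tok)       # authorized access
assert not check_io_request("host-110B", "volume-1260A", tok)   # token not valid for 110B
```
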
Abstract

A system and method for emulating operating system metadata to provide cross-platform access to storage volumes may include a first host and an off-host virtualizer. The off-host virtualizer may be configured to generate operating system metadata for a first virtual storage device, such as a virtual LUN, and to make the operating system metadata accessible to the first host. The metadata may be customized in accordance with a requirement of the operating system in use at the first host. The first host may include a storage software stack including a first layer, configured to use the operating system metadata to detect the existence of the first virtual storage device as an addressable storage device.

Description

TITLE: SYSTEM AND METHOD FOR EMULATING OPERATING SYSTEM METADATA TO PROVIDE CROSS-PLATFORM ACCESS TO STORAGE VOLUMES
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] This invention relates to computer systems and, more particularly, to off-host virtualization within storage environments.
Description of the Related Art
[0002] Many business organizations and governmental entities rely upon applications that access large amounts of data, often exceeding a terabyte of data, for mission-critical applications. Often such data is stored on many different storage devices, which may be heterogeneous in nature, including many different types of devices from many different manufacturers. [0003] Configuring individual applications that consume data, or application server systems that host such applications, to recognize and directly interact with each different storage device that may possibly be encountered in a heterogeneous storage environment would be increasingly difficult as the environment scaled in size and complexity. Therefore, in some storage environments, specialized storage management software and hardware may be used to provide a more uniform storage model to storage consumers. Such software and hardware may also be configured to present physical storage devices as virtual storage devices (e.g., virtual SCSI disks) to computer hosts, and to add storage features not present in individual storage devices to the storage model. For example, features to increase fault tolerance, such as data mirroring, snapshot/fixed image creation, or data parity, as well as features to increase data access performance, such as disk striping, may be implemented in the storage model via hardware or software. The added storage features may be referred to as storage virtualization features, and the software and/or hardware providing the virtual storage devices and the added storage features may be termed "virtualizers" or "virtualization controllers". Virtualization may be performed within computer hosts, such as within a volume manager layer of a storage software stack at the host, and/or in devices external to the host, such as virtualization switches or virtualization appliances. Such external devices providing virtualization may be termed "off-host" virtualizers, and may be utilized in order to offload processing required for virtualization from the host. Off-host virtualizers may be connected to the external physical storage devices for which they provide virtualization functions via a variety of interconnects, such as Fiber Channel links, Internet Protocol (IP) networks, and the like. [0004] Traditionally, storage software within a computer host consists of a number of layers, such as a file system layer, a disk driver layer, etc. Some of the storage software layers may form part of the operating system in use at the host, and may differ from one operating system to another. When accessing a physical disk, a layer such as the disk driver layer for a given operating system may be configured to expect certain types of configuration information for the disk to be laid out in a specific format, for example in a header (located at the first few blocks of the disk) containing disk partition layout information. The storage stack software layers used to access local physical disks may also be utilized to access external storage devices presented as virtual storage devices by off-host virtualizers. Therefore, it may be desirable for an off-host virtualizer to provide configuration information for the virtual storage devices in formats expected by the different operating systems that may be in use at the hosts in the storage environment. 
In addition, it may be desirable for the off-host virtualizer to implement a technique to flexibly and dynamically map storage within external physical storage devices to the virtual storage devices presented to the host storage software layers, e.g., without requiring a reboot of the host.
SUMMARY OF THE INVENTION [0005] Various embodiments of a system and method for emulating operating system metadata to provide cross-platform access to storage volumes are disclosed. According to a first embodiment, a system may include a first host and an off-host virtualizer, such as a virtualization switch or a virtualization appliance. The off-host virtualizer may be configured to generate operating system metadata for a first virtual storage device, such as a virtual LUN, and to make the operating system metadata accessible to the first host. The operating system metadata may be customized according to a requirement of the operating system in use at the host; that is, the off-host virtualizer may be capable of providing metadata customized for a number of different operating systems in use at different hosts. The first host may be configured to specify the specific metadata requirements of its operating system to the off-host virtualizer. The first host may include a storage software stack including a first layer, such as a disk driver layer, configured to use the operating system metadata provided by the off-host virtualizer to detect the existence of the first virtual storage device as an addressable storage device such as a partition. In another embodiment, the off-host virtualizer may be configured to aggregate storage within one or more physical storage devices into a logical volume, and to map the logical volume to the first virtual storage device or virtual LUN. In such an embodiment, the off-host virtualizer may also be configured to provide logical volume metadata to a second layer of the storage software stack (such as an intermediate driver layer between a disk driver layer and a file system layer), allowing the second layer to perform I/O operations on the logical volume.
[0006] In one embodiment, the first virtual storage device may be initially unmapped to physical storage. The recognition of the unmapped first virtual storage device as an addressable storage device may occur during a system initialization stage prior to an initiation of production I/O operations. In this way, an unmapped or "blank" virtual LUN may be prepared for subsequent dynamic mapping by the off-host virtualizer. The unmapped LUN may be given an initial size equal to the maximum allowed LUN size supported by the operating system in use at the host, so that the size of the virtual LUN may not require modification after initialization. In some embodiments, multiple virtual LUNs may be pre-generated for use at a single host, for example in order to isolate storage for different applications, or to accommodate limits on maximum LUN sizes.
[0007] In another embodiment, the system may also include two or more physical storage devices, and the off-host virtualizer may be configured to dynamically map physical storage from a first and a second physical storage device to a respective range of addresses within the first virtual storage device. For example, the off-host virtualizer may be configured to perform an N-to-1 mapping between the physical storage devices (which may be called physical LUNs) and virtual LUNs, allowing storage in the physical storage devices to be accessed from the host via the pre-generated virtual LUNs. Configuration information regarding the location of the first and/or the second address ranges within the virtual LUN (i.e., the regions of the virtual LUN that are mapped to the physical storage devices) may be passed from the off-host virtualizer to a second layer of the storage stack at the host (e.g., an intermediate driver layer above a disk driver layer) using a variety of different mechanisms. Such mechanisms may include, for example, the off-host virtualizer writing the configuration information to certain special blocks within the virtual LUN, sending messages to the host over a network, or special extended SCSI mode pages. In one embodiment, two or more different ranges of physical storage within a single physical storage device may be mapped to corresponding pre-generated virtual storage devices such as virtual LUNs and presented to corresponding hosts. That is, the off-host virtualizer may allow each host of a plurality of hosts to access a respective portion of a physical storage device through a respective virtual LUN. In such embodiments, the off-host virtualizer may also be configured to implement a security policy isolating the ranges of physical storage within the shared physical storage device; i.e., to allow a host to access only those regions to which the host has been granted access, and to prevent unauthorized accesses. [0008] In one embodiment, the off-host virtualizer may be further configured to aggregate storage within a physical storage device into a logical volume, dynamically map the logical volume to a range of addresses within a pre-generated virtual storage device, and pass logical volume metadata to the second layer of the storage stack, allowing I/O operations to be performed on the logical volume. Storage from a single physical storage device may be aggregated into any desired number of different logical volumes, and any desired number of logical volumes may be mapped to a single virtual storage device or virtual LUN. The off-host virtualizer may be further configured to provide volume-level security, i.e., to prevent unauthorized access from a host to a logical volume, even when the physical storage corresponding to the logical volume is part of a shared physical storage device. In addition, physical storage from any desired number of physical storage devices may be aggregated into a logical volume, thereby allowing a single volume to extend over a larger address range than the maximum allowed size of a single physical storage device. The virtual storage devices or virtual LUNs may be distributed among a number of independent front-end storage networks, such as fiber channel fabrics, and the physical storage devices backing the logical volumes may be distributed among a number of independent back-end storage networks.
For example, a first host may access its virtual storage devices through a first storage network, and a second host may access its virtual storage devices through a second storage network independent from the first (that is, reconfigurations and/or failures in the first storage network may not affect the second storage network). Similarly, the off-host virtualizer may access a first physical storage device through a third storage network, and a second physical storage device through a fourth storage network. The ability of the off-host virtualizer to dynamically map storage across pre-generated virtual storage devices distributed among independent storage networks may support a robust and flexible storage environment.
BRIEF DESCRIPTION OF THE DRAWINGS [0009] FIG. 1a is a block diagram illustrating one embodiment of a computer system.
[0010] FIG. 1b is a block diagram illustrating an embodiment of a system configured to utilize off-host block virtualization. [0011] FIG. 2a is a block diagram illustrating the addition of operating-system specific metadata to a virtual logical unit (LUN) encapsulating a source volume, according to one embodiment.
[0012] FIG. 2b is a block diagram illustrating an embodiment where an off-host virtualizer is configured to include a partition table within operating system metadata. [0013] FIG. 2c is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage within physical storage devices into a logical volume and to map the logical volume to a virtual LUN.
[0014] FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer configured to create a plurality of virtual LUNs.
[0015] FIG. 4 is a flow diagram illustrating aspects of the operation of a host and an off-host virtualizer configured to generate operating system metadata.
[0016] FIG. 5 is a flow diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage in one or more physical storage devices into a logical volume, and to map the logical volume to a virtual LUN.
[0017] FIG. 6 is a block diagram illustrating an example of an unmapped virtual LUN according to one embodiment.
[0018] FIG. 7 is a block diagram illustrating an embodiment including an off-host virtualizer configured to create a plurality of unmapped virtual LUNs. [0019] FIG. 8 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to dynamically map physical storage from within two different physical storage devices to a single virtual LUN.
[0020] FIG. 9 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to dynamically map physical storage from within a single physical storage device to two virtual LUNs assigned to different hosts. [0021] FIG. 10 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage of a physical storage device into a logical volume and dynamically map the logical volume to a range of blocks of a virtual LUN.
[0022] FIG. 11 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to dynamically map multiple logical volumes to a single virtual LUN. [0023] FIG. 12 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from a physical storage device into two logical volumes, and to dynamically map each of the two logical volumes to a different virtual LUN.
[0024] FIG. 13 is a block diagram illustrating an embodiment employing multiple storage networks.
[0025] FIG. 14 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from two physical storage devices into a single logical volume.
[0026] FIG. 15 is a flow diagram illustrating aspects of the operation of a host and an off-host virtualizer according to one embodiment where the off-host virtualizer is configured to support dynamic physical LUN tunneling.
[0027] FIG. 16 is a flow diagram illustrating aspects of the operation of a host and an off-host virtualizer according to one embodiment, where off-host virtualizer 180 is configured to support dynamic volume tunneling.
[0028] FIG. 17 is a block diagram illustrating an embodiment where an off-host virtualizer comprises a virtualization switch. [0029] FIG. 18 is a block diagram illustrating an embodiment where an off-host virtualizer comprises a virtualization appliance.
[0030] FIG. 19 is a block diagram illustrating an exemplary host, according to one embodiment.
[0031] FIG. 20 is a block diagram illustrating a computer-accessible medium according to one embodiment.
[0032] While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
DETAILED DESCRIPTION [0033] FIG. 1a is a block diagram illustrating a computer system 100 according to one embodiment. System 100 includes a host 110 coupled to a physical block device 120 via an interconnect 130. Host 110 includes a traditional software storage stack 140A that may be used to perform I/O operations on physical block device 120 via interconnect 130.
[0034] Generally speaking, a physical block device 120 may comprise any hardware entity that provides a collection of linearly addressed data blocks that can be read or written. For example, in one embodiment a physical block device may be a single disk drive configured to present all of its sectors as an indexed array of blocks. In another embodiment the physical block device may be a disk array device, or a disk configured as part of a disk array device. It is contemplated that any suitable type of storage device may be configured as a block device, such as fixed or removable magnetic media drives (e.g., hard drives, floppy or Zip-based drives), writable or read-only optical media drives (e.g., CD or DVD), tape drives, solid-state mass storage devices, or any other type of storage device. The interconnect 130 may utilize any desired storage connection technology, such as various variants of the Small Computer System Interface (SCSI) protocol, Fiber Channel, Internet Protocol (IP), Internet SCSI (iSCSI), or a combination of such storage networking technologies. The software storage stack 140A may comprise layers of software within an operating system at host 110, and may be accessed by a client application to perform I/O (input/output) on a desired physical block device 120.
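As a minimal illustration of the block-device abstraction described in paragraph [0034], a linearly addressed collection of fixed-size blocks can be modeled in memory as follows; the class and method names are invented for the example and are not taken from the patent.

```python
class BlockDevice:
    """A linearly addressed collection of fixed-size blocks, per paragraph [0034]."""

    def __init__(self, num_blocks: int, block_size: int = 512):
        self.block_size = block_size
        self.num_blocks = num_blocks
        self._blocks = [bytes(block_size) for _ in range(num_blocks)]

    def read_block(self, index: int) -> bytes:
        return self._blocks[index]

    def write_block(self, index: int, data: bytes) -> None:
        if len(data) != self.block_size:
            raise ValueError("data must be exactly one block long")
        self._blocks[index] = data

# Toy usage: write and read back the first block.
disk = BlockDevice(num_blocks=1024)
disk.write_block(0, b"\xaa" * 512)
assert disk.read_block(0)[0] == 0xAA
```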
[0035] In the traditional storage software stack for block device access, a client application may initiate an I/O request, for example as a request to read a block of data at a specified offset within a file. The request may be received (e.g., in the form of a read() system call) at the file system layer 112, translated into a request to read a block within a particular device object (i.e., a software entity representing a storage device), and passed to the disk driver layer 114. The disk driver layer 114 may then select the targeted physical block device 120 corresponding to the disk device object, and send a request to an address at the targeted physical block device over the interconnect 130 using the interconnect-dependent I/O driver layer 116. For example, a host bus adapter (HBA) (such as a SCSI HBA) may be used to transfer the I/O request, formatted according to the appropriate storage protocol (e.g., SCSI), to a physical link of the interconnect (e.g., a SCSI bus). At the physical block device 120, an interconnect-dependent firmware layer 122 may receive the request, perform the desired physical I/O operation at the physical storage layer 124, and send the results back to the host over the interconnect. The results (e.g., the desired blocks of the file to be read) may then be transferred through the various layers of storage stack 140A in reverse order (i.e., from the interconnect-dependent I/O driver to the file system) before being passed to the requesting client application. [0036] In some operating systems, a hierarchical scheme may be used for addressing storage within physical block devices 120. For example, an operating system may employ a four-level hierarchical addressing scheme of the form < "hba", "bus", "target", "lun"> for SCSI devices, including a SCSI HBA identifier ("hba"), a SCSI bus identifier ("bus"), a SCSI target identifier ("target"), and a logical unit identifier ("lun"), and may be configured to populate a device database with addresses for available SCSI devices during boot. Host 110 may include multiple SCSI HBAs, and a different SCSI adapter identifier may be used for each HBA. The SCSI adapter identifiers may be numbers issued by the operating system kernel, for example based on the physical placement of the HBA cards relative to each other (i.e., based on slot numbers used for the adapter cards). Each HBA may control one or more SCSI buses, and a unique SCSI bus number may be used to identify each SCSI bus within an HBA. During system initialization, or in response to special configuration commands, the HBA may be configured to probe each bus to identify the SCSI devices currently attached to the bus. Depending on the version of the SCSI protocol in use, the number of devices (such as disks or disk arrays) that may be attached on a SCSI bus may be limited, e.g., to 15 devices excluding the HBA itself. SCSI devices that may initiate I/O operations, such as the HBA, are termed SCSI initiators, while devices where the physical I/O may be performed are called SCSI targets. Each target on the SCSI bus may identify itself to the HBA in response to the probe. In addition, each target device may also accommodate up to a protocol-specific maximum number of "logical units" (LUNs) representing independently addressable units of physical storage within the target device, and may inform the HBA of the logical unit identifiers. A target device may contain a single LUN (e.g., a LUN may represent an entire disk or even a disk array) in some embodiments. 
The SCSI device configuration information, such as the target device identifiers and LUN identifiers, may be passed to the disk driver layer 114 by the HBAs. When issuing an I/O request, disk driver layer 114 may utilize the hierarchical SCSI address described above. [0037] When accessing a LUN, disk driver layer 114 may expect to see OS-specific metadata at certain specific locations within the LUN. For example, in many operating systems, the disk driver layer 114 may be responsible for implementing logical partitioning (i.e., subdividing the space within a physical disk into partitions, where each partition may be used for a smaller file system). Metadata describing the layout of a partition (e.g., a starting block offset for the partition within the LUN, and the length of a partition) may be stored in an operating system-dependent format, and in an operating system-dependent location, such as in a header or a trailer, within a LUN. In the Solaris™ operating system from Sun Microsystems, for example, a virtual table of contents (VTOC) structure may be located in the first partition of a disk volume, and a copy of the VTOC may also be located in the last two cylinders of the volume. In addition, the operating system metadata may include cylinder alignment and/or cylinder size information, as well as boot code if the volume is bootable. Operating system metadata for various versions of Microsoft Windows™ may include a "magic number" (a special number or numbers that the operating system expects to find, usually at or near the start of a disk), subdisk layout information, etc. If the disk driver layer 114 does not find the metadata in the expected location and in the expected format, the disk driver layer may not be able to perform I/O operations at the LUN.
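Paragraph [0037] implies that emulated metadata must follow whatever layout the host operating system expects (location, magic numbers, partition information). The sketch below only gestures at that idea with deliberately simplified, made-up per-OS profiles; real VTOC or Windows disk-label formats are far more involved, and every name and value here is an assumption.

```python
# Hypothetical, simplified per-OS layout profiles (not real on-disk formats).
OS_METADATA_PROFILES = {
    "os_a": {"header_blocks": 1, "trailer_blocks": 2, "magic": b"OSA1"},
    "os_b": {"header_blocks": 2, "trailer_blocks": 0, "magic": b"\x55\xaa"},
}

def emulate_metadata(os_name: str, block_size: int = 512):
    """Build emulated header/trailer blocks in the layout a given OS expects."""
    profile = OS_METADATA_PROFILES[os_name]
    header = bytearray(profile["header_blocks"] * block_size)
    header[: len(profile["magic"])] = profile["magic"]   # magic number near the start of the LUN
    trailer = bytes(profile["trailer_blocks"] * block_size)
    return bytes(header), trailer

hdr, trl = emulate_metadata("os_a")
assert hdr.startswith(b"OSA1") and len(trl) == 1024
```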
[0038] The relatively simple traditional storage software stack 140A has been enhanced over time to help provide advanced storage features, most significantly by introducing block virtualization layers. In general, block virtualization refers to a process of creating or aggregating logical or virtual block devices out of one or more underlying physical or logical block devices, and making the virtual block devices accessible to block device consumers for storage operations. For example, in one embodiment of block virtualization, storage within multiple physical block devices, e.g. in a fiber channel storage area network (SAN), may be aggregated and presented to a host as a single virtual storage device such as a virtual LUN (VLUN), as described below in further detail. In another embodiment, one or more layers of software may rearrange blocks from one or more block devices, such as disks, and add various kinds of functions. The resulting rearranged collection of blocks may then be presented to a storage consumer, such as an application or a file system, as one or more aggregated devices with the appearance of one or more basic disk drives. That is, the more complex structure resulting from rearranging blocks and adding functionality may be presented as if it were one or more simple arrays of blocks, or logical block devices. In some embodiments, multiple layers of virtualization may be implemented. In such embodiments, one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. As used herein, the term "logical volume" refers to a virtualized block device that may be presented directly for use by a file system, database, or other applications that can directly use block devices. Further details on block virtualization, and advanced storage features supported by block virtualization, are provided below.
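One of the block-rearranging functions mentioned in paragraph [0038] and in the feature list earlier in this document is device striping, in which data blocks are distributed across several underlying devices. A common round-robin translation from a logical block to a (device, block) pair is shown below purely as an illustrative sketch; the stripe-unit parameter and function name are assumptions, not details from the patent.

```python
def striped_translate(logical_block: int, stripe_unit: int, num_devices: int):
    """Map a logical block of a striped virtual device to (device_index, device_block).

    Blocks are distributed round-robin in runs of `stripe_unit` blocks, which is one
    common way of realizing device striping across multiple block devices.
    """
    stripe_number, offset_in_stripe = divmod(logical_block, stripe_unit)
    device_index = stripe_number % num_devices
    device_block = (stripe_number // num_devices) * stripe_unit + offset_in_stripe
    return device_index, device_block

# With a 4-block stripe unit across 2 devices, logical blocks 0-3 land on device 0,
# blocks 4-7 on device 1, blocks 8-11 back on device 0, and so on.
assert striped_translate(5, 4, 2) == (1, 1)
assert striped_translate(9, 4, 2) == (0, 5)
```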
[0039] Block virtualization may be implemented at various places within a storage stack and the associated storage environment, in both hardware and software. For example, a block virtualization layer in the form of a volume manager, such as the VERITAS Volume Manager™ from VERITAS Software Corporation, may be added between the disk driver layer 114 and the file system layer 112. In some storage environments, virtualization functionality may be added to host bus adapters, i.e., in a layer between the interconnect-dependent I/O driver layer 116 and interconnect 130. Block virtualization may also be performed outside the host 110, e.g., in a virtualization appliance or a virtualizing switch, which may form part of the interconnect 130. Such external devices providing block virtualization (i.e., devices that are not incorporated within host 110) may be termed off-host virtualizers or off-host virtualization controllers. In some storage environments, block virtualization functionality may be implemented by an off-host virtualizer in cooperation with a host-based virtualizer. That is, some block virtualization functionality may be performed off-host, and other block virtualization features may be implemented at the host. In another embodiment, multiple devices external to the host 110, such as two or more virtualization switches, two or more virtualization appliances, or a combination of virtualization switches and virtualization appliances, may cooperate to provide block virtualization functionality; that is, the off-host virtualizer may include multiple cooperating devices.
[0040] While additional layers may be added to the storage software stack 140A, it is generally difficult to remove or completely bypass existing storage software layers of operating systems. Therefore, off-host virtualizers may typically be implemented in a manner that allows the existing storage software layers to continue to operate, even when the storage devices being presented to the operating system are virtual rather than physical, and remote rather than local. For example, because disk driver layer 114 expects to deal with SCSI LUNs when performing I/O operations, an off-host virtualizer may present a virtualized storage device to the disk driver layer as a virtual LUN. That is, as described below in further detail, an off-host virtualizer may encapsulate, or emulate the metadata for, a LUN when providing a host 110 access to a virtualized storage device. In addition, as also described below, one or more software modules or layers may be added to storage stack 140A to support additional forms of virtualization using virtual LUNs.
[0041] FIG. 1b is a block diagram illustrating an embodiment of system 100 configured to utilize off-host block virtualization. As shown, the system may include an off-host virtualizer 180, such as a virtualization switch or a virtualization appliance, which may be included within interconnect 130 linking host 110 to physical block device 120. Host 110 may comprise an enhanced storage software stack 140B, which may include an intermediate driver layer 113 between the disk driver layer 114 and file system layer 112. In one embodiment, off-host virtualizer 180 may be configured to map storage within physical block device 120, or multiple physical block devices 120, into a virtual storage device (e.g., a virtual LUN or VLUN), and present the virtual storage device to host 110. [0042] The back-end physical block device 120 that is mapped to a virtual LUN may be termed a "physical LUN (PLUN)" in the subsequent description. In one embodiment, off-host virtualizer 180 may be configured to map storage from a back-end physical LUN directly to a VLUN, without any additional virtualization (i.e., without creating a logical volume). Such a technique of mapping a PLUN to a VLUN may be termed "PLUN tunneling" in the subsequent description. In another embodiment, off-host virtualizer 180 may be configured to aggregate storage within one or more physical block devices 120 as one or more logical volumes, and map the logical volumes within the address space of a VLUN presented to host 110. The terms "volume tunneling" or "logical volume tunneling" may be used herein to refer to the technique of mapping a logical volume through a VLUN. Off-host virtualizer 180 may further be configured to provide intermediate driver layer 113 with metadata or configuration information on the tunneled logical volumes, allowing intermediate driver layer 113 to locate and perform I/O operations on the logical volumes located within the virtual LUN on behalf of clients such as file system layer 112 or other applications. In some embodiments, an aliasing technique such as symbolic links may be used to associate a name with a tunneled logical volume, allowing file systems, applications and system administrators to refer to the tunneled logical volume using a conventional or recommended naming scheme. In one embodiment, the selection of a name may also be coordinated between the host 110 and the off-host virtualizer 180, e.g., using a field in an extended SCSI mode page, or through an external virtualization management infrastructure including one or more additional devices and/or hosts. File system layer 112 and applications (such as database management systems) configured to utilize intermediate driver layer 113 and lower layers of storage stack 140B may be termed "virtual storage clients" or "virtual storage consumers" herein. While off-host virtualizer 180 is shown within interconnect 130 in the embodiment depicted in FIG. 1b, it is noted that in other embodiments, off-host virtualization may also be provided within physical block device 120 (e.g., by a virtualization layer between physical storage layer 124 and firmware layer 122), or at another device outside interconnect 130.
As described below in further detail, in some embodiments a number of independently configurable interconnects may be employed to link off-host virtualizer 180 to hosts 110 and back-end storage devices 120: e.g., one or more front-end interconnects 130 or storage area network (SAN) fabrics may link off-host virtualizer 180 to hosts 110, while one or more back-end interconnects 130 may link off-host virtualizer 180 to back-end storage devices.
[0043] As described above, in some embodiments, disk driver layer 114 may expect certain operating system-specific metadata to be present at operating-system specific locations or offsets within a LUN. When presenting a virtual LUN to a host 110, therefore, in such embodiments off-host virtualizer 180 may logically insert the expected metadata at the expected locations. FIG. 2a is a block diagram illustrating the addition of operating-system specific metadata to a virtual LUN 210 encapsulating a source volume 205, according to one embodiment. As shown, the source volume 205 consists of N blocks, numbered 0 through (N-1). The virtual LUN 210 may include two regions of inserted metadata: a header 215 containing H blocks of metadata, and a trailer 225 including T blocks of metadata. Between the header 215 and the trailer 225, blocks 220 of the virtual LUN 210 may be mapped to the source volume 205, thereby making the virtual LUN 210 a total of (H+N+T) blocks long (i.e., the virtual LUN may contain blocks numbered 0 through (H+N+T-1)). Operating-system specific metadata included in header 215 and/or trailer 225 may be used by disk driver layer 114 to recognize the virtual LUN 210 as an addressable storage device. An "addressable storage device", as used herein, is a storage device whose blocks can be accessed (e.g., from a device driver such as a disk driver) using an address including a device identifier (such as a logical unit identifier) and an offset within the device. In some embodiments, additional configuration information or logical volume metadata may also be included within header 215 and/or trailer 225. The lengths of header 215 and trailer 225, as well as the format and content of the metadata, may vary with the operating system in use at host 110. It is noted that in some embodiments, the metadata may require only a header 215, or only a trailer 225, rather than both a header and a trailer; and in other embodiments, the metadata may be stored at any arbitrary offset within the LUN. [0044] Off-host virtualizer 180 may be configured to customize the generated operating system metadata (e.g., in header 215 and/or trailer 225) based on the specific requirements of the operating system in use at host 110. The requirements imposed by different operating systems may differ in the type of information to be included in the metadata (e.g., whether the metadata includes a partition table), the format in which the information is maintained (e.g., the units in which a partition or volume size is specified such as kilobytes or 512-byte blocks, whether offsets are expressed in hexadecimal or decimal numerals, etc.), the location within the LUN where the metadata is to be found, etc. In some embodiments, the requirements may be sent from the host 110 to the off-host virtualizer 180. For example, in one embodiment a virtualization management infrastructure may be implemented using one or more devices and/or additional hosts (i.e., using devices and/or hosts other than host 110 and off-host virtualizer 180). A device or host of the virtualization management infrastructure may be configured to communicate with the host 110, obtain the metadata requirements for the operating system from the host 110, and provide the requirements to the off-host virtualizer 180. In another embodiment, host 110 may be configured to provide the operating system requirements to off-host virtualizer, for example using an extension of a storage command (such as an extended SCSI request).
The storage command and/or the extension may be vendor unique (i.e., a storage command may need to be extended in different ways for different hardware storage vendors) in some embodiments. In one embodiment, only an identity of the operating system in use at the host 110 (e.g., the operating system name and version number) may be provided to the off-host virtualizer 180, and the off-host virtualizer may be configured to obtain the details of the operating system requirements using the operating system identity (e.g., from a database maintained at the off-host virtualizer 180 or from an external database). Operating system identity and/or metadata requirements for a host 110 may also be specified to off-host virtualizer 180 manually in some embodiments, e.g., by a system administrator using a graphical user interface or a command-line interface. In some embodiments, a common metadata format shared by multiple operating systems may be used.
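The layout of FIG. 2a described in paragraph [0043] — H emulated header blocks, then N blocks mapped to the source volume, then T emulated trailer blocks — suggests a simple routing decision for each VLUN block access. The following sketch illustrates that decision; the function name and the block counts used in the asserts are invented for the example.

```python
def route_vlun_read(block: int, H: int, N: int, T: int):
    """Decide how a read of one VLUN block could be serviced.

    The VLUN is assumed to consist of H emulated header blocks, N blocks mapped
    to the source volume, and T emulated trailer blocks, as in FIG. 2a.
    """
    if block < 0 or block >= H + N + T:
        raise ValueError("block outside VLUN")
    if block < H:
        return ("emulated_header", block)            # served from emulated metadata
    if block < H + N:
        return ("source_volume", block - H)          # forwarded to the source volume
    return ("emulated_trailer", block - H - N)       # served from emulated metadata

assert route_vlun_read(0, H=16, N=1000, T=4) == ("emulated_header", 0)
assert route_vlun_read(16, H=16, N=1000, T=4) == ("source_volume", 0)
assert route_vlun_read(1019, H=16, N=1000, T=4) == ("emulated_trailer", 3)
```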
[0045] The metadata inserted within virtual LUN 210 may be stored in persistent storage, e.g., within some blocks of physical block device 120 or at storage within off-host virtualizer 180, in some embodiments, and logically concatenated with the mapped blocks 220. In other embodiments, the metadata may be maintained in non-persistent storage (e.g., within a memory at off-host virtualizer 180) and/or generated dynamically whenever a host 110 accesses the virtual LUN 210. In some embodiments, the metadata may be generated by an external agent other than off-host virtualizer 180. The external agent may be capable of emulating metadata in a variety of formats for different operating systems, including operating systems that may not have been known when the off-host virtualizer 180 was deployed. In one embodiment, off-host virtualizer 180 may be configured to support more than one operating system; i.e., off-host virtualizer may logically insert metadata blocks corresponding to any one of a number of different operating systems when presenting virtual LUN 210 to a host 110, thereby allowing hosts with different operating systems to share access to a particular storage device 120. [0046] As stated earlier, in some embodiments off-host virtualizer 180 may be configured to generate operating system metadata including partition layout information. FIG. 2b is a block diagram illustrating one embodiment where off-host virtualizer 180 is configured to include a partition table 255 within the generated operating system metadata header 215. The partition table 255 may provide layout or mapping information describing the offset within the VLUN 210 at which the blocks mapped to a logical partition of a physical storage device 250 may be located. In the illustrated embodiment, the layout table may contain three entries corresponding to Partition 1, Partition 2 and Partition 3 of physical storage device 250, respectively.
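A partition table entry of the kind described in paragraph [0046] essentially records where within the VLUN the blocks of each logical partition begin and how long the partition is. The record below illustrates the idea; the field names, offsets and lengths are arbitrary placeholders rather than values taken from the figure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PartitionEntry:
    """One row of an emulated partition table such as table 255 in paragraph [0046]."""
    name: str
    start_block: int   # offset of the partition within the VLUN
    num_blocks: int    # length of the partition

# Three entries, echoing the Partition 1/2/3 example (values are placeholders).
partition_table: List[PartitionEntry] = [
    PartitionEntry("Partition 1", start_block=16, num_blocks=2048),
    PartitionEntry("Partition 2", start_block=2064, num_blocks=4096),
    PartitionEntry("Partition 3", start_block=6160, num_blocks=8192),
]

def locate(block_in_partition: int, entry: PartitionEntry) -> int:
    """Translate a partition-relative block number to a VLUN block number."""
    if block_in_partition >= entry.num_blocks:
        raise ValueError("block beyond end of partition")
    return entry.start_block + block_in_partition

assert locate(10, partition_table[1]) == 2074
```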
[0047] Off-host virtualizer may also be configured to map one or more logical volumes to the address space of a virtual LUN in some embodiments. FIG. 2c is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to aggregate storage within physical storage devices 250A and 250B into a logical volume 260, and to map logical volume 260 to blocks 261 of a virtual LUN 210. In addition to generating operating system metadata as described above, off-host virtualizer 180 may also be configured to generate metadata specific to the logical volume in such embodiments. Such logical volume metadata 263 may include information (such as a starting offset, volume length, virtualization layer information such as number of mirrors, etc.) allowing intermediate driver layer 113 to perform I/O operations on the logical volume 260. It is noted that in one embodiment, logical volume metadata may be included within header 215 or trailer 225, i.e., logical volume metadata intended for use by intermediate driver 113 may be co-located with operating system metadata intended for use by disk driver layer 114. While the logical volume 260 illustrated in FIG. 2c is backed by two physical storage devices 250A and 250B, in general, storage from any number of physical storage devices 250 (including from a single physical storage device) may be aggregated into a logical volume 260. [0048] FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer 180 configured to create a plurality of virtual LUNs (VLUNs) 210. In the illustrated embodiment, hosts 110A, 110B and 110C (collectively, hosts 110) may be coupled to off-host virtualizer 180 via interconnect 130A, and off-host virtualizer 180 may be coupled to storage devices 340A, 340B and 340C (collectively, storage devices 340) via interconnect 130B. Storage devices 340 may include physical block devices 120 as well as virtual block devices (e.g., in embodiments employing multiple layers of virtualization, as described below). Hosts 110A and 110B may utilize Operating System A, while host 110C may utilize Operating System B; that is, in general, each host among hosts 110 may support any of a number of operating systems. As shown, off-host virtualizer 180 may be configured to generate metadata for a number of VLUNs 210A-210E. VLUN 210A may be assigned to host 110A (i.e., the operating system metadata generated for VLUN 210A may be made accessible to host 110A, allowing host 110A to detect an existence of VLUN 210A as an addressable storage device). VLUN 210B may be assigned to host 110B, while VLUNs 210C, 210D and 210E may all be assigned to host 110C. Storage within storage device 340A may be mapped to both VLUN 210A and VLUN 210E, as indicated by the dotted arrows in FIG. 3. Thus, each of two or more hosts with different operating systems (e.g., host 110A and host 110C) may be given access to the same storage device (e.g., 340A) using different VLUNs (e.g., 210A and 210E) in the illustrated embodiment. In general, off-host virtualizer 180 may be configured to support any desired total number of VLUNs 210, to assign any desired VLUNs to a given host, and to map storage from any desired combination of storage devices 340 to a given VLUN. It is noted that certain operating systems may impose a limit on the total number of LUNs (or VLUNs) that may be visible to a host 110, and/or the size of a given LUN or VLUN, and that off-host virtualizer may be configured to comply with such limits in some embodiments. [0049] FIG. 
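Paragraph [0047] characterizes logical volume metadata 263 as containing a starting offset, a volume length, and virtualization-layer information such as a number of mirrors. A minimal record carrying that information to an intermediate driver layer might look like the sketch below; the field names and values are placeholders rather than a format defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class VolumeMetadata:
    """Logical-volume metadata of the kind described in paragraph [0047]."""
    volume_name: str
    start_block: int       # starting offset of the mapped volume within the VLUN
    num_blocks: int        # volume length, in blocks
    num_mirrors: int = 1   # example of virtualization-layer information

# Example record handed to the intermediate driver layer (values are arbitrary).
md = VolumeMetadata(volume_name="volume-260", start_block=16,
                    num_blocks=1_000_000, num_mirrors=2)
assert md.start_block + md.num_blocks <= 2**32   # fits a 32-bit block address space
```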
4 is a flow diagram illustrating aspects of the operation of a host 110 and an off-host virtualizer 180 configured to generate operating system metadata as described above. Off-host virtualizer 180 may be configured to receive operating system metadata requirements (block 410) from host 110, e.g., using one of the techniques described above, such as an extended SCSI command. The off-host virtualizer 180 may then generate operating system metadata (block 420) for a virtual storage device such as a VLUN 210 in accordance with the requirements, and make the operating system metadata accessible to the host 110 (block 430). The metadata may be made accessible to host 110 via a message sent by off-host virtualizer 180 over interconnect 130A in some embodiments. In other embodiments, host 110A may be configured to send a query to off-host virtualizer 180, requesting information on the VLUNs assigned to the host 110, and the metadata may be provided in response to the query. A first layer of the storage software stack (e.g., disk driver layer 114) at host 110 may be configured to use the metadata to detect the existence of the virtual storage device as an addressable storage device (block 440). It is noted that in some embodiments, where for example all the hosts 110 in the storage environment are configured to use the same operating system, the operations corresponding to block 410 may be omitted, i.e., there may be no need to provide operating system-specific requirements to an off-host virtualizer 180 in homogeneous environments where a single operating system is in use at all the hosts 110. [0050] FIG. 5 is a flow diagram illustrating an embodiment where an off-host virtualizer 180 is configured to aggregate storage in one or more physical storage devices into a logical volume 260, and to map the logical volume to a VLUN 210. As in FIG. 4, operating system metadata requirements may be received by off-host virtualizer 180 (block 510), and operating system metadata for a virtual storage device or VLUN 210 may be generated by off-host virtualizer in accordance with the requirements (block 520). Off-host virtualizer may be configured to aggregate storage within one or more physical storage devices 340 into a logical volume 260 (block 530) and to map the logical volume into the address space of the virtual storage device or VLUN (block 540). The operating system metadata, as well as additional metadata specific to the logical volume, may be provided to a host 110 (block 550). A first layer of a software storage stack at the host (such as disk driver layer 114) may be configured to use the operating system metadata to detect the existence of the virtual storage device (block 560), while a second layer (such as intermediate driver 113) may be configured to use the logical volume metadata to access and initiate I/O operations on the logical volume 260 (block 570).
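The flows of FIG. 4 and FIG. 5 (paragraphs [0049] and [0050]) can be read as a short sequence of steps. The runnable sketch below walks through that sequence with toy data structures; every function name, field and value is an assumption made for the example rather than an API from the patent.

```python
def receive_requirements(host_os: str) -> dict:
    # Block 510: the host communicates its operating system's metadata requirements.
    return {"os": host_os, "header_blocks": 1, "trailer_blocks": 1}   # placeholder values

def generate_os_metadata(req: dict) -> dict:
    # Block 520: emulate metadata in the format the host's OS expects.
    return {"header": bytes(512 * req["header_blocks"]),
            "trailer": bytes(512 * req["trailer_blocks"])}

def aggregate_volume(devices: list) -> dict:
    # Block 530: aggregate storage from one or more physical devices into a volume.
    return {"name": "volume-260", "size_blocks": sum(d["blocks"] for d in devices)}

def map_volume(vlun: dict, volume: dict, at_block: int) -> None:
    # Block 540: map the volume into the VLUN's address space.
    vlun["mapped"] = {"volume": volume["name"], "start": at_block,
                      "blocks": volume["size_blocks"]}

# Blocks 550-570 on the host side: the disk driver layer uses the OS metadata to
# detect the VLUN, and the intermediate driver layer uses the volume metadata for I/O.
vlun = {"id": "VLUN-210"}
vlun["os_metadata"] = generate_os_metadata(receive_requirements("os_a"))
map_volume(vlun, aggregate_volume([{"blocks": 4096}, {"blocks": 8192}]), at_block=1)
print(vlun["mapped"])   # {'volume': 'volume-260', 'start': 1, 'blocks': 12288}
```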
[0051] In some embodiments, the basic techniques of PLUN tunneling and volume tunneling described above may be extended to allow dynamic association of PLUNs and/or logical volumes with VLUNs. While logical volumes such as logical volume 260 may typically be created and dynamically reconfigured (e.g., grown or shrunk, imported to hosts 110 or exported from hosts 110) efficiently, similar configuration operations on LUNs (i.e., physical and/or virtual LUNs) may typically be fairly slow. Some LUN reconfiguration operations may be at least partially asynchronous, and may have unbounded completion times and/or ambiguous failure states. On many operating systems, LUN reconfiguration may only be completed after a system reboot; for example, a newly created physical or virtual LUN may not be detected by the operating system without a reboot. In order to be able to flexibly map logical volumes to virtual LUNs, while avoiding the delays and problems associated with LUN reconfigurations, therefore, it may be advisable to generate unmapped virtual LUNs (i.e., to create operating system metadata for virtual LUNs that are not initially mapped to any physical LUNs or to logical volumes) and pre-assign the unmapped virtual LUNs to hosts 110 as part of an initialization process. The initialization process may be completed prior to performing storage operations on the virtual LUNs on behalf of applications. During the initialization process (which may include a reboot of the system in some embodiments) the layers of the software storage stack 140B may be configured to detect the existence of the virtual LUNs as addressable storage devices. Subsequent to the initialization, off-host virtualizer 180 may dynamically map physical LUNs and/or logical volumes to the virtual LUNs (e.g., by modifying portions of the operating system metadata), as described below in further detail. The term "dynamic mapping", as used herein, refers to a mapping of a virtual storage device (such as a VLUN) that is performed by modifying one or more blocks of metadata, and/or by communicating via one or more messages to a host 110, without requiring a reboot of the host 110 to which the virtual storage device is presented.

[0052] FIG. 6 is a block diagram illustrating an example of an unmapped virtual LUN 230 according to one embodiment. As shown, the unmapped virtual LUN 230 may include an operating system metadata header 215 and an operating system metadata trailer 225, as well as a region of unmapped blocks 235. In some embodiments, the size of the region of unmapped blocks (X blocks in the depicted example) may be set to a maximum permissible LUN or maximum logical volume size supported by an operating system, so that any subsequent mapping of a logical volume or physical LUN to the virtual LUN does not require an expansion of the size of the virtual LUN. In one alternative embodiment, the unmapped virtual LUN may consist of only the emulated metadata (e.g., header 215 and/or trailer 225), and the size of the virtual LUN may be increased when the volume or physical LUN is dynamically mapped. In such embodiments, disk driver layer 114 may have to modify some of its internal data structures when the virtual LUN is expanded, and may have to re-read the emulated metadata in order to do so. Off-host virtualizer 180 may be configured to send a metadata change notification message to disk driver layer 114 in order to trigger the re-reading of the metadata.
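A hedged sketch of the unmapped virtual LUN layout of FIG. 6 follows, assuming a 512-byte block size, an illustrative 2-terabyte operating system LUN limit, and made-up header and trailer sizes; it merely shows how the region of X unmapped blocks could be dimensioned so that later mappings never require the VLUN to grow.

    # Hedged sketch of laying out an unmapped virtual LUN as in FIG. 6: an
    # emulated metadata header, a region of X unmapped blocks sized to the
    # largest LUN the operating system will accept, and an emulated trailer.
    # The block counts and the 2 TB ceiling are illustrative assumptions.
    BLOCK_SIZE = 512
    MAX_OS_LUN_BYTES = 2 * 1024**4          # assumed operating system limit (2 TB)
    HEADER_BLOCKS = 63                      # e.g., space for an emulated partition table
    TRAILER_BLOCKS = 16

    def unmapped_vlun_layout():
        unmapped_blocks = MAX_OS_LUN_BYTES // BLOCK_SIZE - HEADER_BLOCKS - TRAILER_BLOCKS
        return {
            "header": (0, HEADER_BLOCKS),                           # (start, length)
            "unmapped": (HEADER_BLOCKS, unmapped_blocks),
            "trailer": (HEADER_BLOCKS + unmapped_blocks, TRAILER_BLOCKS),
        }

    layout = unmapped_vlun_layout()
    print(layout["unmapped"])   # any later mapping fits without growing the VLUN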
[0053] FIG. 7 is a block diagram illustrating an embodiment including an off-host virtualizer 180 configured to create a plurality of unmapped virtual LUNs (VLUNs) 230. As shown, more than one unmapped VLUN may be associated with a single host 110. For example, off-host virtualizer 180 may assign unmapped VLUNs 230A and 230B to host 110A, and unmapped VLUNs 230C, 230D and 230E to host 110B. In some embodiments, multiple VLUNs may be associated with a given host to allow for isolation of storage used for different applications, or to allow access to storage beyond the maximum allowable LUN size supported in the system. Off-host virtualizer 180 may be configured to dynamically map physical and/or virtual storage from storage devices 340 to the unmapped virtual LUNs 230. Hosts 110A and 110B may be configured to use different operating systems in some embodiments, and may utilize the same operating system in other embodiments. After the physical or virtual storage has been dynamically mapped, the previously unmapped VLUN 230 may provide the same general functionality as described previously for VLUNs 210 provided in basic (i.e., non-dynamic) PLUN tunneling and basic volume tunneling. A number of different techniques for dynamic mapping of physical storage and logical volumes to VLUNs are described below. It is noted that each such technique (e.g., a mapping from two PLUNs to a single VLUN) may also be implemented using basic PLUN tunneling or basic volume tunneling in some embodiments, i.e., without starting with unmapped VLUNs pre-assigned to hosts 110.
[0054] As described earlier, operating system-specific metadata (e.g., in header 215 and/or trailer 225 of FIG. 2b) generated by off-host virtualizer 180 may allow disk driver layer 114 at a host 110 to detect the existence of an unmapped virtual LUN 230 as an addressable storage device. After VLUN 230 has been recognized by disk driver layer 114, a block at any offset within the VLUN address space may be accessed by the disk driver layer 114, and thus by any other layer above the disk driver layer. For example, intermediate driver layer 113 may be configured to communicate with off-host virtualizer 180 by reading from, and/or writing to, a designated set of blocks emulated within VLUN 230. Such designated blocks may provide a mechanism for off-host virtualizer 180 to provide intermediate driver layer 113 with configuration information associated with logical volumes or physical LUNs mapped to VLUN 230 in some embodiments.

[0055] In one embodiment, off-host virtualizer 180 may be configured to dynamically map storage from a back-end physical LUN directly to an unmapped VLUN 230, without any additional virtualization (i.e., without creating a logical volume). Such a technique of dynamically mapping a PLUN to a VLUN 230 may be termed "dynamic PLUN tunneling" (in contrast to the basic PLUN tunneling described above, where unmapped VLUNs may not be used). Each PLUN may be mapped to a corresponding VLUN 230 (i.e., a 1-to-1 mapping of PLUNs to VLUNs may be implemented by off-host virtualizer 180) in some embodiments. In other embodiments, as described below in conjunction with the description of FIG. 8, storage from multiple PLUNs may be mapped into subranges of a given VLUN 230. In general, both basic and dynamic PLUN tunneling may allow the off-host virtualizer 180 to act as an isolation layer between VLUNs 230 (the storage entities directly accessible to hosts 110) and back-end PLUNs, allowing the off-host virtualizer to hide details related to physical storage protocol implementation from the hosts. In one implementation, for example, the back-end PLUNs may implement a different version of a storage protocol (e.g., SCSI-3) than the version seen by hosts 110 (e.g., SCSI-2), and the off-host virtualizer may provide any needed translation between the two versions. In another implementation, off-host virtualizer 180 may be configured to implement a cooperative access control mechanism for the back-end PLUNs, and the details of the mechanism may remain hidden from the hosts 110.

[0056] In addition, off-host virtualizer 180 may also be configured to increase the level of data sharing using PLUN tunneling (i.e., either basic PLUN tunneling or dynamic PLUN tunneling). Disk array devices often impose limits on the total number of concurrent "logins", i.e., the total number of entities that may access a given disk array device. In a storage environment employing PLUN tunneling for disk arrays (i.e., where the PLUNs are disk array devices), off-host virtualizers 180 may allow multiple hosts to access the disk arrays through a single login. That is, for example, multiple hosts 110 may log in to the off-host virtualizer 180, while the off-host virtualizer may log in to a disk array PLUN once on behalf of the multiple hosts 110. Off-host virtualizer 180 may then pass on I/O requests from the multiple hosts 110 to the disk array PLUN using a single login.
The number of logins (i.e., distinct entities logged in) as seen by a disk array PLUN may thereby be reduced as a result of PLUN tunneling, without reducing the number of hosts 110 from which I/O operations targeted at the disk array PLUN may be initiated. The total number of hosts 110 that may access storage at a single disk array PLUN with login count restrictions may thereby be increased, thus increasing the overall level of data sharing.
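The login aggregation described in paragraph [0056] can be pictured with the following sketch, in which a proxy object holds a single login to a login-limited disk array and forwards I/O requests from several hosts over it. The class names, the login limit of four, and the request strings are assumptions for illustration.

    # Illustrative sketch (not the patented mechanism itself) of how a single
    # disk array login might be shared: hosts log in to the virtualizer, which
    # holds one login to the array and forwards their I/O requests over it.
    class DiskArray:
        MAX_LOGINS = 4                              # assumed array-imposed limit
        def __init__(self):
            self.logins = set()
        def login(self, initiator):
            if len(self.logins) >= self.MAX_LOGINS:
                raise RuntimeError("login limit reached")
            self.logins.add(initiator)
        def io(self, initiator, request):
            assert initiator in self.logins
            return f"completed {request}"

    class OffHostVirtualizerProxy:
        def __init__(self, array):
            self.array = array
            self.array.login("off-host-virtualizer")   # one login on behalf of all hosts
            self.hosts = set()
        def host_login(self, host):
            self.hosts.add(host)                        # host-side logins are unbounded here
        def forward_io(self, host, request):
            assert host in self.hosts
            return self.array.io("off-host-virtualizer", request)

    array = DiskArray()
    proxy = OffHostVirtualizerProxy(array)
    for name in ("host1", "host2", "host3", "host4", "host5", "host6"):
        proxy.host_login(name)
    print(proxy.forward_io("host6", "read block 42"), "| array logins:", len(array.logins))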
[0057] FIG. 8 is a block diagram illustrating an embodiment where an off-host virtualizer 180 is configured to dynamically map physical storage from within two different physical storage devices 340A and 340B to a single VLUN 230B. That is, off-host virtualizer 180 may be configured to map a first range of physical storage from device 340A into a first region of mapped blocks 821A within VLUN 230B, and map a second range of physical storage from device 340B into a second region of mapped blocks 821B within VLUN 230B. The first and second ranges of physical storage may each represent a respective PLUN, such as a disk array, or a respective subset of a PLUN. Configuration information indicating the offsets within VLUN 230B at which mapped blocks 821A and 821B are located may be provided by off-host virtualizer 180 to intermediate driver layer 113 using a variety of mechanisms in different embodiments. For example, in one embodiment, off-host virtualizer 180 may write the configuration information to a designated set of blocks within VLUN 230, and intermediate driver layer 113 may be configured to read the designated set of blocks, as described above. In another embodiment, off-host virtualizer 180 may send a message containing the configuration information to host 110A, either directly (over interconnect 350A or another network) or through an intermediate coordination server. In yet another embodiment, the configuration information may be supplied within a special SCSI mode page (i.e., intermediate driver layer 113 may be configured to read a special SCSI mode page containing configuration information updated by off-host virtualizer 180). Combinations of these techniques may be used in some embodiments: for example, in one embodiment off-host virtualizer 180 may send a message to intermediate driver layer 113 requesting that intermediate driver layer read a special SCSI mode page containing the configuration information.
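One possible shape for the configuration information of FIG. 8 is an offset table that the intermediate driver layer could use to translate a VLUN block address into a physical LUN and block. The table contents and field order below are assumptions, not a format defined by this application.

    # Hedged sketch of the kind of mapping table the intermediate driver layer
    # might consume for FIG. 8: two ranges of a virtual LUN backed by two
    # different physical LUNs.  Offsets and lengths are made-up examples.
    VLUN_MAP = [
        # (vlun_start_block, length_blocks, backing_plun, plun_start_block)
        (128,       1_000_000, "PLUN-340A", 0),
        (1_000_128, 500_000,   "PLUN-340B", 2048),
    ]

    def resolve(vlun_block):
        """Translate a VLUN block address to (physical LUN, physical block)."""
        for vstart, length, plun, pstart in VLUN_MAP:
            if vstart <= vlun_block < vstart + length:
                return plun, pstart + (vlun_block - vstart)
        raise LookupError("block is not mapped")

    print(resolve(1_000_200))   # -> ('PLUN-340B', 2120)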
[0058] FIG. 9 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to dynamically map physical storage from within a single physical storage device 340A to two VLUNs 230 assigned to different hosts 110A and 110B. As shown, a first range of physical storage 955A of physical storage device 340A may be mapped to a first range of mapped blocks 821A within VLUN 230B assigned to host 110A. A second range of physical storage 955B of the same physical storage device 340A may be mapped to a second range of mapped blocks 821C of VLUN 230E assigned to host 110B. In addition, in some embodiments, off-host virtualizer 180 may be configured to prevent unauthorized access to physical storage range 955A from host 110B, and to prevent unauthorized access to physical storage 955B from host 110A. Thus, in addition to allowing access to a single physical storage device 340A from multiple hosts 110, off-host virtualizer 180 may also be configured to provide security for each range of physical storage 955A and 955B, e.g., in accordance with a specified security protocol. In one embodiment, for example, the security protocol may allow I/O operations to a given VLUN 230 (and to its backing physical storage) from only a single host 110. Off-host virtualizer 180 may be configured to maintain access rights information for the hosts 110 and VLUNs 230 in some embodiments, while in other embodiments security tokens may be provided to each host 110 indicating the specific VLUNs to which access from the host is allowed, and the security tokens may be included with I/O requests.
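The token-based variant of the security protocol mentioned in paragraph [0058] might resemble the following sketch, in which a token binds a host to a specific VLUN and is checked before I/O is allowed. The HMAC construction, the shared secret, and the identifiers are illustrative assumptions.

    # Minimal sketch, assuming a token-based scheme: each host presents a token
    # naming the VLUN it may touch, and the off-host virtualizer rejects I/O
    # aimed at anything else.  The token format and checks are assumptions.
    import hmac, hashlib

    SECRET = b"example-shared-secret"   # assumed; a real deployment would not hard-code this

    def issue_token(host, vlun):
        return hmac.new(SECRET, f"{host}:{vlun}".encode(), hashlib.sha256).hexdigest()

    def authorize_io(host, vlun, token):
        expected = issue_token(host, vlun)
        return hmac.compare_digest(expected, token)

    token_110a = issue_token("host-110A", "VLUN-230B")
    print(authorize_io("host-110A", "VLUN-230B", token_110a))   # True
    print(authorize_io("host-110B", "VLUN-230B", token_110a))   # False: token bound to 110A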
[0059] In some embodiments off-host virtualizer 180 may be configured to aggregate physical storage into a logical volume, and dynamically map the logical volume to an address range within a VLUN 230. Such a technique of dynamically mapping a logical volume to a VLUN 230 may be termed "dynamic volume tunneling" (in contrast to the basic volume tunneling described above, where unmapped VLUNs may not be used). FIG. 10 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to aggregate storage 1055A of physical storage device 340A into a logical volume 1060A, and dynamically map logical volume 1060A to a range of blocks (designated as mapped volume 1065A in FIG. 10) of VLUN 230B. As in the case of PLUN tunneling, configuration information or metadata associated with the tunneled logical volume 1060A may be provided to intermediate driver layer 113 using any of a variety of mechanisms, such as an extended SCSI mode page, emulated virtual blocks within VLUN 230A, and/or direct or indirect messages sent from off-host virtualizer 180 to host 110A. While logical volume 1060A is shown as being backed by a portion of a single physical storage device 340A in the depicted embodiment, in other embodiments logical volume 1060A may be aggregated from all the storage within a single physical storage device, or from storage of two or more physical devices. In some embodiments employing multiple layers of virtualization, logical volume 1060A may itself be aggregated from other logical storage devices rather than directly from physical storage devices. In one embodiment, each host 110 (i.e., host 110B in addition to host 110A) may be provided access to logical volume 1060A via a separate VLUN, while in another embodiment different sets of logical volumes may be presented to different hosts 110.

[0060] FIG. 11 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to dynamically map multiple logical volumes to a single VLUN 230. As shown, off-host virtualizer 180 may be configured to aggregate storage range 1155C from physical storage device 340A, and physical storage range 1155B from physical storage device 340B, into a logical volume 1160A, and map logical volume 1160A to a first mapped volume region 1165A of VLUN 230B. In addition, off-host virtualizer 180 may also aggregate physical storage range 1155B from physical storage device 340A into a second logical volume 1160B, and map logical volume 1160B to a second mapped volume region 1165B of VLUN 230B. In general, off-host virtualizer 180 may aggregate any suitable selection of physical storage blocks from one or more physical storage devices 340 into one or more logical volumes, and map the logical volumes to one or more of the pre-generated unmapped VLUNs 230.

[0061] FIG. 12 is a block diagram illustrating another embodiment, where off-host virtualizer 180 is configured to aggregate storage from physical storage device 340A into logical volumes 1260A and 1260B, and to dynamically map each of the two logical volumes to a different VLUN 230. For example, as shown, logical volume 1260A may be mapped to a first address range within VLUN 230B, accessible from host 110A, while logical volume 1260B may be mapped to a second address range within VLUN 230E, accessible from host 110B. Off-host virtualizer 180 may further be configured to implement a security protocol to prevent unauthorized access and/or data corruption, similar to the security protocol described above for PLUN tunneling.
Off-host virtualizer 180 may implement the security protocol at the logical volume level: that is, off-host virtualizer 180 may prevent unauthorized access to logical volumes 1260A (e.g., from host 110B) and 1260B (e.g., from host 110A) whose data may be stored within a single physical storage device 340A. In one embodiment, off-host virtualizer 180 may be configured to maintain access rights information for logical volumes 1260 to which each host 110 has been granted access. In other embodiments security tokens may be provided to each host 110 (e.g., by off-host virtualizer 180, or by an external security server) indicating the specific logical volumes 1260 to which access from the host is allowed, and the security tokens may be included with I/O requests.
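For the dynamic volume tunneling of FIGS. 10-12, the intermediate driver layer effectively performs a two-level translation: volume-relative block to VLUN block, and VLUN block to whatever the disk driver layer addresses. The sketch below illustrates the first step under assumed table contents; the names and offsets are not taken from the figures.

    # Illustrative two-level translation for dynamic volume tunneling: the
    # intermediate driver layer turns a volume-relative block into a VLUN block
    # using the tunneled volume's metadata, and the disk driver layer then
    # addresses the VLUN itself.  The table contents are assumptions.
    MAPPED_VOLUMES = {
        # volume name -> (vlun, vlun_start_block, length_blocks)
        "volume-1160A": ("VLUN-230B", 128,       4_000_000),
        "volume-1160B": ("VLUN-230B", 4_000_128, 1_000_000),
    }

    def volume_to_vlun(volume, volume_block):
        vlun, start, length = MAPPED_VOLUMES[volume]
        if not 0 <= volume_block < length:
            raise LookupError("block outside volume")
        return vlun, start + volume_block

    # The intermediate driver would hand this (VLUN, block) pair to the disk
    # driver layer, which sees only an ordinary addressable storage device.
    print(volume_to_vlun("volume-1160B", 10))   # -> ('VLUN-230B', 4000138)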
[0062] Many storage environments utilize storage area networks (SANs), such as fibre channel fabrics, to access physical storage devices. SAN fabric reconfiguration (e.g., to provide access to a particular PLUN or logical volume from a particular host that did not previously have access to the desired PLUN or logical volume), which may require switch reconfigurations, recabling, rebooting, etc., may typically be fairly complex and error-prone. The techniques of dynamic PLUN tunneling and dynamic volume tunneling, described above, may allow a simplification of SAN reconfiguration operations. By associating pre-generated, unmapped VLUNs to hosts, and mapping PLUNs and logical volumes to VLUNs dynamically as needed, many reconfiguration operations may require only a change of a mapping table at a switch, and a recognition of new metadata by intermediate driver layer 113. Storage devices may be more easily shared across multiple hosts 110, or logically transferred from one host to another, using PLUN tunneling and/or volume tunneling. Allocation and/or provisioning of storage, e.g., from a pool maintained by a coordinating storage allocator, may also be simplified.

[0063] In addition to simplifying SAN configuration changes, PLUN tunneling and volume tunneling may also support storage interconnection across independently configured storage networks (e.g., interconnection across multiple fibre channel fabrics). FIG. 13 is a block diagram illustrating an embodiment employing multiple storage networks. As shown, off-host virtualizer 180 may be configured to access physical storage device 340A via a first storage network 1310A, and to access physical storage device 340B via a second storage network 1310B. Off-host virtualizer 180 may aggregate storage from physical storage device 340A into logical volume 1360A, and map logical volume 1360A to VLUN 230B. Similarly, off-host virtualizer 180 may aggregate storage from physical storage device 340B into logical volume 1360B, and map logical volume 1360B to VLUN 230E. Host 110A may be configured to access VLUN 230A via a third storage network 1310C, and to access VLUN 230B via a fourth storage network 1310D.

[0064] Each storage network 1310 (i.e., storage network 1310A, 1310B, 1310C, or 1310D) may be independently configurable: that is, a reconfiguration operation performed within a given storage network 1310 may not affect any other storage network 1310. A failure or a misconfiguration within a given storage network 1310 may also not affect any other independent storage network 1310. In some embodiments, hosts 110 may include multiple HBAs, allowing each host to access multiple independent storage networks. For example, host 110A may include two HBAs in the embodiment depicted in FIG. 13, with the first HBA allowing access to storage network 1310C, and the second HBA to storage network 1310D. In such an embodiment, host 110A may be provided full connectivity to back-end physical storage devices 340, while still maintaining the advantages of configuration isolation. While FIG. 13 depicts the use of multiple independent storage networks in conjunction with volume tunneling, in other embodiments multiple independent storage networks may also be used with PLUN tunneling, or with a combination of PLUN and volume tunneling.
In addition, it is noted that in some embodiments, the use of independent storage networks 1310 may be asymmetric: e.g., in one embodiment, multiple independent storage networks 1310 may be used for front-end connections (i.e., between off-host virtualizer 180 and hosts 110), while only a single storage network may be used for back-end connections (i.e., between off-host virtualizer 180 and physical storage devices 340). Any desired interconnection technology and/or protocol may be used to implement storage networks 1310, such as fibre channel, IP-based protocols, etc. For example, in one embodiment each storage network 1310 may be a fibre channel fabric. In another embodiment, the interconnect technology or protocol used within a first storage network 1310 may differ from the interconnect technology or protocol used within a second storage network 1310.
[0065] In one embodiment, volume tunneling may also allow maximum LUN size limitations to be overcome. For example, the SCSI protocol may be configured to use a 32-bit unsigned integer as a LUN block address, thereby limiting the maximum amount of storage that can be accessed at a single LUN to 2 terabytes (for 512-byte blocks) or 32 terabytes (for 8-kilobyte blocks). Volume tunneling may allow an intermediate driver layer 113 to access storage from multiple physical LUNs as a volume mapped to a single VLUN, thereby overcoming the maximum LUN size limitation. FIG. 14 is a block diagram illustrating an embodiment where off-host virtualizer 180 may be configured to aggregate storage from two physical storage devices 340A and 340B into a single logical volume 1460A, where the size of the volume 1460A exceeds the allowed maximum LUN size supported by the storage protocol in use at storage devices 340. Off-host virtualizer 180 may further be configured to dynamically map logical volume 1460A to VLUN 230B, and to provide logical volume metadata to intermediate driver layer 113 at host 110A. The logical volume metadata may include sufficient information for intermediate driver layer 113 to access a larger address space within VLUN 230B than the maximum allowed LUN size.
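The size arithmetic of paragraph [0065] can be checked directly; the short sketch below reproduces the 2-terabyte and 32-terabyte ceilings implied by a 32-bit block address.

    # Worked version of the size arithmetic in paragraph [0065]: a 32-bit block
    # address caps a single LUN at 2 TB with 512-byte blocks or 32 TB with
    # 8-kilobyte blocks, so a larger tunneled volume must span several PLUNs.
    MAX_BLOCKS = 2**32

    def max_lun_bytes(block_size):
        return MAX_BLOCKS * block_size

    print(max_lun_bytes(512) // 1024**4)    # 2  (terabytes, 512-byte blocks)
    print(max_lun_bytes(8192) // 1024**4)   # 32 (terabytes, 8-kilobyte blocks)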
[0066] FIG. 15 is a flow diagram illustrating aspects of the operation of host 110 and off-host virtualizer 180 according to one embodiment, where off-host virtualizer 180 is configured to support dynamic PLUN tunneling. After receiving operating system metadata requirements (block 1505), off-host virtualizer 180 may be configured to generate operating system metadata for an unmapped virtual storage device (e.g., a VLUN) (block 1510) and make the metadata accessible to a host 110 (block 1515). A first layer of a storage software stack at host 110, such as disk driver layer 114 of FIG. 1b, may be configured to use the O.S. metadata to detect the existence of the virtual storage device as an addressable storage device (e.g., as a LUN) (block 1520), for example during system initialization or boot. After the unmapped virtual storage device is detected, off-host virtualizer 180 may be configured to dynamically map physical storage from one or more back-end physical storage devices 340 (e.g., PLUNs) to an address range within the virtual storage device (block 1525).
[0067] FIG. 16 is a flow diagram illustrating aspects of the operation of a host 110 and an off-host virtualizer 180 according to one embodiment, where off-host virtualizer 180 is configured to support dynamic volume tunneling. The first three blocks depicted in FIG. 16 may represent functionality similar to the first three blocks shown in FIG. 15. That is, off-host virtualizer 180 may be configured to receive operating system metadata requirements (block 1605), generate operating system metadata for an unmapped virtual storage device (e.g., a VLUN) (block 1610) and make the metadata accessible to a host 110 (block 1615). A first layer of a storage software stack, such as disk driver layer 114 of FIG. 1b, may be configured to use the O.S. metadata to detect the existence of the virtual storage device as an addressable storage device (e.g., as a LUN) (block 1620), for example during system initialization or boot. In addition, off-host virtualizer 180 may be configured to aggregate storage at one or more physical storage devices 340 into a logical volume (block 1625), and to dynamically map the logical volume to an address range within the previously unmapped virtual storage device (block 1630). Off-host virtualizer 180 may further be configured to provide logical volume metadata to a second layer of the storage software stack at host 110 (e.g., intermediate driver layer 113) (block 1635), allowing the second layer to locate the blocks of the logical volume and to perform desired I/O operations on the logical volume (block 1640).

[0068] In various embodiments, off-host virtualizer 180 may implement numerous different types of storage functions using block virtualization. For example, in one embodiment a virtual block device such as a logical volume may implement device striping, where data blocks may be distributed among multiple physical or logical block devices, and/or device spanning, in which multiple physical or logical block devices may be joined to appear as a single large logical block device. In some embodiments, virtualized block devices may provide mirroring and other forms of redundant data storage, the ability to create a snapshot or static image of a particular block device at a point in time, and/or the ability to replicate data blocks among storage systems connected through a network such as a local area network (LAN) or a wide area network (WAN), for example. Additionally, in some embodiments virtualized block devices may implement certain performance optimizations, such as load distribution, and/or various capabilities for online reorganization of virtual device structure, such as online data migration between devices. In other embodiments, one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. More than one virtualization feature, such as striping and mirroring, may thus be combined within a single virtual block device in some embodiments, creating a logically hierarchical virtual storage device.
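Paragraph [0068] notes that features such as striping and mirroring may be combined hierarchically. The following sketch shows one assumed combination, striping across two mirrored pairs, purely to make the block-to-device fan-out concrete; the stripe unit and device names are invented for the example.

    # Hedged sketch of combining two virtualization features from paragraph
    # [0068]: blocks are striped across two mirrored pairs.  Stripe unit,
    # device names, and layout are illustrative assumptions.
    STRIPE_UNIT = 128                                   # blocks per stripe unit
    COLUMNS = [("disk0", "disk1"), ("disk2", "disk3")]  # each column is a mirrored pair

    def map_block(volume_block):
        """Return the (device, block) pairs that must be written for one volume block."""
        stripe_unit = volume_block // STRIPE_UNIT
        column = COLUMNS[stripe_unit % len(COLUMNS)]
        offset_in_unit = volume_block % STRIPE_UNIT
        physical_block = (stripe_unit // len(COLUMNS)) * STRIPE_UNIT + offset_in_unit
        return [(device, physical_block) for device in column]   # one write per mirror

    print(map_block(5))     # first stripe unit -> both plexes of column 0
    print(map_block(130))   # second stripe unit -> both plexes of column 1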
[0069] The off-host virtualizer 180, either alone or in cooperation with one or more other virtualizers such as a volume manager at host 110 or other off-host virtualizers, may provide functions such as configuration management of virtualized block devices and distributed coordination of block device virtualization. For example, after a reconfiguration of a logical volume shared by two hosts 110 (e.g., when the logical volume is expanded, or when a new mirror is added to the logical volume), the off-host virtualizer 180 may be configured to distribute metadata or a volume description indicating the reconfiguration to the two hosts 110. In one embodiment, once the volume description has been provided to the hosts, the storage stacks at the hosts may be configured to interact directly with various storage devices 340 according to the volume description (i.e., to transform logical I/O requests into physical I/O requests using the volume description). Distribution of a virtualized block device as a volume to one or more virtual device clients, such as hosts 110, may be referred to as distributed block virtualization.
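The distribution of a volume description after a reconfiguration, described in paragraph [0069], might look roughly like the following sketch, in which the virtualizer pushes an updated description to its virtual device clients after a mirror is added. The description format and push-style update are assumptions for illustration only.

    # Minimal sketch of distributing a volume description after a
    # reconfiguration, as described in paragraph [0069].  The description
    # format and the push-style update are assumptions for illustration.
    class HostVolumeClient:
        def __init__(self, name):
            self.name = name
            self.description = None
        def receive_description(self, description):
            self.description = description          # storage stack now talks to devices directly
        def physical_targets(self):
            return self.description["devices"] if self.description else []

    class DistributingVirtualizer:
        def __init__(self, hosts):
            self.hosts = hosts
            self.description = {"volume": "vol-260", "devices": ["340A"], "mirrors": 1}
        def reconfigure_add_mirror(self, device):
            self.description["devices"].append(device)
            self.description["mirrors"] += 1
            for host in self.hosts:                  # distribute the new description
                host.receive_description(dict(self.description))

    hosts = [HostVolumeClient("host-110A"), HostVolumeClient("host-110B")]
    virtualizer = DistributingVirtualizer(hosts)
    virtualizer.reconfigure_add_mirror("340B")
    print([h.physical_targets() for h in hosts])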
[0070] As noted previously, in some embodiments, multiple layers of virtualization may be employed, for example at the host level as well as at an off-host level, such as at a virtualization switch or at a virtualization appliance. In such embodiments, some aspects of virtualization may be visible to a virtual device consumer such as file system layer 112, while other aspects may be implemented transparently by the off-host level. Further, in some multilayer embodiments, the virtualization details of one block device (e.g., one volume) may be fully defined to a virtual device consumer (i.e., without further virtualization at an off-host level), while the virtualization details of another block device (e.g., another volume) may be partially or entirely transparent to the virtual device consumer.

[0071] In some embodiments, a virtualizer, such as off-host virtualizer 180, may be configured to distribute all defined logical volumes to each virtual device client, such as host 110, present within a system. Such embodiments may be referred to as symmetric distributed block virtualization systems. In other embodiments, specific volumes may be distributed only to respective virtual device consumers or hosts, such that at least one volume is not common to two virtual device consumers. Such embodiments may be referred to as asymmetric distributed block virtualization systems.

[0072] It is noted that off-host virtualizer 180 may be any type of device, external to host 110, that is capable of providing the virtualization functionality, including PLUN and volume tunneling, described above. For example, off-host virtualizer 180 may include a virtualization switch, a virtualization appliance, a special additional host dedicated to providing block virtualization, or an embedded system configured to use application specific integrated circuit (ASIC) or field-programmable gate array (FPGA) technology to provide block virtualization functionality. FIG. 17 is a block diagram illustrating an embodiment where off-host virtualizer 180 comprises a virtualization switch 1710, and FIG. 18 is a block diagram illustrating another embodiment where off-host virtualizer 180 comprises a virtualization appliance 1810. A virtualization switch 1710 may be an intelligent fibre channel switch, configured with sufficient processing capacity to perform virtualization functions in addition to providing fibre channel connectivity. A virtualization appliance 1810 may be an intelligent device programmed to provide virtualization functions, such as mirroring, striping, snapshots, and the like. An appliance may differ from a general-purpose computer in that the appliance software is normally customized for the function (such as virtualization) to which the appliance is dedicated, pre-installed by the vendor, and not easily modifiable by a user. In some embodiments, off-host block virtualization may be provided by a collection of cooperating devices, such as two or more virtualizing switches, instead of a single device. Such a collection of cooperating devices may be configured for failover, i.e., a standby cooperating device may be configured to take over the virtualization functions supported by a failed cooperating device. An off-host virtualizer 180 may incorporate one or more processors, as well as volatile and/or non-volatile memory.
In some embodiments, configuration information associated with virtualization may be maintained at a database separate from the off-host virtualizer 180, and may be accessed by the off-host virtualizer over a network. In one embodiment, an off-host virtualizer may be programmable and/or configurable. Numerous other configurations of off-host virtualizer 180 are possible and contemplated.
[0073] FIG. 19 is a block diagram illustrating an exemplary host 110, according to one embodiment. Host 110 may be any computer system, such as a server comprising one or more processors 1910 and one or more memories 1920, capable of supporting the storage software stack 140B described above. As shown, host 110 may include one or more local storage devices 1930 (such as disks) as well as one or more network interfaces 1940 providing an interface to an interconnect 130. Portions of the storage software stack 140B may be resident in a memory 1920, and may be loaded into memory 1920 from storage devices 1930 as needed. In some embodiments, a host 110 may also be a diskless computer, configured to access storage from a remote location instead of using local storage devices 1930. Various other components, such as video cards, monitors, mice, etc. may also be included within a host 110 in different embodiments. Any desired operating system may be used at a host 110, including various versions of Microsoft Windows™, Solaris™ from Sun Microsystems, various versions of Linux, other operating systems based on UNIX, and the like. The intermediate driver layer 113 may be included within a volume manager in some embodiments.
[0074] FIG. 20 is a block diagram illustrating a computer-accessible medium 2000 comprising virtualization software 2010 capable of providing the functionality of off-host virtualizer 180 and block storage software stack 140B described above. Virtualization software 2010 may be provided to a computer system using a variety of computer-accessible media including electronic media (e.g., flash memory), memory media such as RAM (e.g., SDRAM, RDRAM, SRAM, etc.), optical storage media such as CD-ROM, etc., as well as transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link.

[0075] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims

WHAT IS CLAIMED IS:
1. A system comprising: a first host; and an off-host virtualizer; wherein the off-host virtualizer is configured to: generate first operating system metadata for a first virtual storage device; and make the first operating system metadata accessible to the first host; wherein the first host comprises a storage software stack including a first layer; wherein the first layer is configured to use the first operating system metadata to detect an existence of the first virtual storage device as a first addressable storage device.
2. The system as recited in Claim 1, wherein the off-host virtualizer is further configured to customize the operating system metadata in response to a requirement of a particular operating system in use at the first host.
3. The system as recited in Claim 2, wherein the first host is configured to provide the requirement of the particular operating system to the off-host virtualizer.
4. The system as recited in Claim 1, further comprising a storage device, wherein the first addressable storage device is a partition of the storage device.
5. The system as recited in Claim 1, further comprising one or more physical storage devices, wherein the storage software stack includes a second layer, wherein the off-host virtualizer is further configured to: aggregate storage in the one or more physical storage devices into a logical volume; map the logical volume to an address space of the first virtual storage device; provide logical volume metadata for the logical volume to the second layer of the storage software stack; wherein the second layer of the storage software stack is configured to use the logical volume metadata to access the logical volume for I/O operations.
6. The system as recited in Claim 1, wherein the first virtual storage device is a virtual logical unit (LUN).
7. The system as recited in Claim 1, wherein the off-host virtualizer includes a virtualization switch.
8. The system as recited in Claim 1, wherein the off-host virtualizer includes a virtualization appliance.
9. The system as recited in Claim 1, wherein the first virtual storage device is unmapped to physical storage.
10. The system as recited in Claim 9, further comprising two or more physical storage devices, wherein the off-host virtualizer is further configured to dynamically map each physical storage device of the two or more physical storage devices to a respective address range within the first virtual storage device.
11. The system as recited in Claim 9, further comprising a physical storage device and a second host, wherein the off-host virtualizer is further configured to: generate second operating system metadata for a second virtual storage device, wherein the second virtual storage device is unmapped to physical storage; and make the second operating system metadata accessible to the second host; wherein the second host comprises a second storage software stack including a first layer; wherein the first layer of the second storage software stack is configured to use the second operating system metadata to detect an existence of the second virtual storage device as a second addressable storage device; wherein the off-host virtualizer is further configured to: dynamically map a first physical address range of the physical storage device to a first virtual address range within the first virtual storage device; dynamically map a second physical address range of the physical storage device to a second virtual address range within the second virtual storage device; and prevent access to the second physical address range from the first host.
12. The system as recited in Claim 9, further comprising a first physical storage device, wherein the storage software stack includes a second layer, wherein the off-host virtualizer is further configured to: aggregate a first range of physical storage within the first physical storage device into a first logical volume; dynamically map a first range of virtual storage within the first virtual storage device to the first logical volume; and provide first logical volume metadata for the first logical volume to the second layer of the storage software stack; wherein the second layer of the storage software stack is configured to use the first logical volume metadata to access the first logical volume for I/O operations.
13. The system as recited in Claim 12, wherein the off-host virtualizer is further configured to: aggregate a second range of physical storage within the first physical storage device into a second logical volume; dynamically map a second range of virtual storage within the first virtual storage device to the second logical volume; and provide additional logical volume metadata for the second logical volume to the second layer of the storage software stack; wherein the second layer of the storage software stack is configured to use the additional logical volume metadata to access the second logical volume for I/O operations.
14. The system as recited in Claim 12, further comprising a second host, wherein the off-host virtualizer is further configured to: generate second operating system metadata for a second virtual storage device, wherein the second virtual storage device is unmapped to physical storage; make the second operating system metadata accessible to the second host; aggregate a second range of physical storage within the first physical storage device into a second logical volume; and provide second logical volume metadata for the second logical volume to the second host; wherein the second host comprises a second storage software stack comprising a first and a second layer, wherein the first layer of the second software stack is configured to use the second operating system metadata to detect an existence of the second virtual storage device as a second addressable storage device, and wherein the second layer of the second software stack is configured to use the second logical volume metadata to access the second logical volume for I/O operations.
15. The system as recited in Claim 14, wherein the off-host virtualizer is further configured to prevent the second host from performing an I/O operation on the first logical volume.
16. The system as recited in Claim 14, wherein the first addressable storage device is accessible to the first host via a first storage network, and the second addressable storage device is accessible to the second host via a second storage network.
17. The system as recited in Claim 12, further comprising a second host and a second physical storage device, wherein the off-host virtualizer is further configured to: generate second operating system metadata for a second virtual storage device, wherein the second virtual storage device is unmapped to physical storage; make the second operating system metadata accessible to the second host; aggregate a second range of physical storage within the second physical storage device into a second logical volume; dynamically map a second range of virtual storage within the first virtual storage device to the second logical volume; and provide second logical volume metadata for the second logical volume to the second host; wherein the second host comprises a second storage software stack comprising a first and a second layer, wherein the first layer of the second software stack is configured to use the second operating system metadata to detect an existence of the second virtual storage device as a second addressable storage device, and wherein the second layer of the second software stack is configured to use the second logical volume metadata to access the second logical volume for I/O operations; wherein the first addressable storage device is accessible to the first host via a first storage network, the second addressable storage device is accessible to the second host via a second storage network, the first physical storage device is accessible to the off-host virtualizer via a third storage network, and the second physical storage device is accessible to the off-host virtualizer via a fourth storage network.
18. The system as recited in Claim 17, wherein each storage network of the first, second, third and fourth storage networks includes an independently configurable fibre channel fabric.
19. The system as recited in Claim 1, wherein the storage software stack includes a second layer, further comprising a first and a second physical storage device, wherein the off-host virtualizer is further configured to: aggregate a first range of storage within the first physical storage device and a second range of storage within the second physical storage device into a single logical volume; and provide logical volume metadata for the single logical volume to the second layer of the storage software stack; wherein the second layer of the storage software stack is configured to use the logical volume metadata to access the single logical volume for I/O operations.
20. A virtualizer operable to generate operating system metadata for a storage device, wherein the operating system metadata emulates a storage volume hosted under a particular operating system.
21. A method comprising: generating first operating system metadata for a first virtual storage device; making the first operating system metadata for the first virtual storage device accessible to a first host; and using the first operating system metadata at a first layer of a storage software stack at the first host to detect an existence of the first virtual storage device as a first addressable storage device.
22. The method as recited in Claim 21, further comprising: customizing the operating system metadata in response to a requirement of a particular operating system in use at the first host.
23. The method as recited in Claim 21, wherein the first virtual storage device is unmapped to physical storage.
24. The method as recited in Claim 23, further comprising: dynamically mapping a first and a second physical storage device to a respective address range within the first virtual storage device.
25. The method as recited in Claim 23, further comprising: aggregating a first range of physical storage within a first physical storage device into a first logical volume; dynamically mapping a first range of virtual storage within the first virtual storage device to the first logical volume; providing first logical volume metadata for the first logical volume to a second layer of the storage software stack; and using the first logical volume metadata to access the first logical volume for I/O operations from the second layer of the storage software stack.
26. The method as recited in Claim 25, further comprising: aggregating a second range of physical storage within the first physical storage device into a second logical volume; dynamically mapping a second range of virtual storage within the first virtual storage device to the second logical volume; providing additional logical volume metadata for the second logical volume to the second layer of the storage software stack; and using the additional logical volume metadata at the second layer of the storage software stack to access the second logical volume for I/O operations.
27. The method as recited in Claim 25, further comprising: generating second operating system metadata for a second virtual storage device, wherein the second virtual storage device is unmapped to physical storage; making the second operating system metadata accessible to a second host; using the second operating system metadata at a first layer of a storage software stack at the second host to detect an existence of the second virtual storage device as a second addressable storage device; aggregating a second range of physical storage within the first physical storage device into a second logical volume; dynamically mapping a second range of virtual storage within the second virtual storage device to the second logical volume; providing second logical volume metadata for the second logical volume to a second layer of the storage software stack at the second host; and using the second logical volume metadata to access the second logical volume for I/O operations from the second layer of the storage software stack at the second host.
28. The method as recited in Claim 27, further comprising: preventing the second host from performing an I/O operation on the first logical volume.
29. The method as recited in Claim 25, further comprising: generating second operating system metadata for a second virtual storage device, wherein the second virtual storage device is unmapped to physical storage; making the second operating system metadata accessible to a second host; using the second operating system metadata at a first layer of a storage software stack at the second host to detect an existence of the second virtual storage device as a second addressable storage device; aggregating a second range of physical storage within a second physical storage device into a second logical volume at an off-host virtualizer; dynamically mapping a second range of virtual storage within the second virtual storage device to the second logical volume; providing second logical volume metadata for the second logical volume to a second layer of the storage software stack at the second host; and using the second logical volume metadata to access the second logical volume for I/O operations from the second layer of the storage software stack at the second host; wherein the first range of physical storage is aggregated into the first logical volume by the off-host virtualizer; and wherein the first addressable storage device is accessible to the first host via a first storage network, the second addressable storage device is accessible to the second host via a second storage network, the first physical storage device is accessible to the off-host virtualizer via a third storage network, and the second physical storage device is accessible to the off-host virtualizer via a fourth storage network.
30. The method as recited in Claim 29, wherein each storage network of the first, second, third and fourth storage networks includes an independently configurable fibre channel fabric.
31. The method as recited in Claim 23, further comprising: aggregating a first range of storage within a first physical storage device and a second range of storage within a second physical storage device into a single logical volume; dynamically mapping a range of virtual storage within the first virtual storage device to the single logical volume; providing logical volume metadata for the single logical volume to a second layer of the storage software stack; and using the logical volume metadata to access the single logical volume for I/O operations from the second layer of the storage software stack.
32. A computer accessible medium comprising program instructions, wherein the instructions are executable to: generate first operating system metadata for a first virtual storage device; make the first operating system metadata for the first virtual storage device accessible to a first host; and use the first operating system metadata at a first layer of a storage software stack at the host to detect an existence of the first virtual storage device as a first addressable storage device.
33. The computer accessible medium as recited in Claim 32, wherein the instructions are further executable to: customize the operating system metadata in response to a requirement of a particular operating system in use at the first host.
34. The computer accessible medium as recited in Claim 32, wherein the first virtual storage device is unmapped to physical storage.
35. The computer accessible medium as recited in Claim 34, wherein the instructions are further executable to: map a first and a second physical storage device to a respective address range within the first virtual storage device.
36. The computer accessible medium as recited in Claim 34, wherein the instructions are further executable to: aggregate a first range of physical storage within a first physical storage device into a first logical volume; dynamically map a first range of virtual storage within the first virtual storage device to the first logical volume; provide first logical volume metadata for the first logical volume to a second layer of the storage software stack; and use the first logical volume metadata to access the first logical volume for I/O operations from the second layer of the storage software stack.
PCT/US2004/039306 2003-11-26 2004-11-22 System and method for emulating operating system metadata to provide cross-platform access to storage volumes WO2005055043A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2006541649A JP4750040B2 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata enabling cross-platform access to storage volumes
EP04811936A EP1687706A1 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US11/156,635 US7689803B2 (en) 2003-11-26 2005-06-20 System and method for communication using emulated LUN blocks in storage virtualization environments
US11/156,820 US7669032B2 (en) 2003-11-26 2005-06-20 Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US11/156,821 US20050235132A1 (en) 2003-11-26 2005-06-20 System and method for dynamic LUN mapping
US11/156,636 US20050228950A1 (en) 2003-11-26 2005-06-20 External encapsulation of a volume into a LUN to allow booting and installation on a complex volume

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/722,614 2003-11-26
US10/722,614 US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/722,614 Continuation-In-Part US20050114595A1 (en) 2003-11-26 2003-11-26 System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Publications (1)

Publication Number Publication Date
WO2005055043A1 true WO2005055043A1 (en) 2005-06-16

Family

ID=34592023

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/039306 WO2005055043A1 (en) 2003-11-26 2004-11-22 System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Country Status (5)

Country Link
US (4) US20050114595A1 (en)
EP (1) EP1687706A1 (en)
JP (1) JP4750040B2 (en)
CN (1) CN100552611C (en)
WO (1) WO2005055043A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008112399A (en) * 2006-10-31 2008-05-15 Fujitsu Ltd Storage virtualization switch and computer system
JP2009508192A (en) * 2005-08-25 2009-02-26 シリコン イメージ,インコーポレイテッド Smart scalable memory switch architecture
US10013188B2 (en) 2014-12-17 2018-07-03 Fujitsu Limited Storage control device and method of copying data in failover in storage cluster
WO2022157785A1 (en) * 2021-01-25 2022-07-28 Volumez Technologies Ltd. Method and system for orchestrating remote block device connection between servers
US11681614B1 (en) * 2013-01-28 2023-06-20 Radian Memory Systems, Inc. Storage device with subdivisions, subdivision query, and write operations

Families Citing this family (301)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9603582D0 (en) 1996-02-20 1996-04-17 Hewlett Packard Co Method of accessing service resource items that are for use in a telecommunications system
US8032701B1 (en) * 2004-03-26 2011-10-04 Emc Corporation System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network
US7024427B2 (en) * 2001-12-19 2006-04-04 Emc Corporation Virtual file system
US7769722B1 (en) 2006-12-08 2010-08-03 Emc Corporation Replication and restoration of multiple data storage object types in a data network
US20050114595A1 (en) * 2003-11-26 2005-05-26 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US7669032B2 (en) * 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US7461141B2 (en) * 2004-01-30 2008-12-02 Applied Micro Circuits Corporation System and method for performing driver configuration operations without a system reboot
US20050216680A1 (en) * 2004-03-25 2005-09-29 Itzhak Levy Device to allow multiple data processing channels to share a single disk drive
US7945657B1 (en) * 2005-03-30 2011-05-17 Oracle America, Inc. System and method for emulating input/output performance of an application
WO2005114374A2 (en) * 2004-05-21 2005-12-01 Computer Associates Think, Inc. Object-based storage
US9264384B1 (en) * 2004-07-22 2016-02-16 Oracle International Corporation Resource virtualization mechanism including virtual host bus adapters
US7409495B1 (en) * 2004-12-22 2008-08-05 Symantec Operating Corporation Method and apparatus for providing a temporal storage appliance with block virtualization in storage networks
US7493462B2 (en) * 2005-01-20 2009-02-17 International Business Machines Corporation Apparatus, system, and method for validating logical volume configuration
US7657780B2 (en) * 2005-02-07 2010-02-02 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US8543542B2 (en) * 2005-02-07 2013-09-24 Mimosa Systems, Inc. Synthetic full copies of data and dynamic bulk-to-brick transformation
US7917475B2 (en) * 2005-02-07 2011-03-29 Mimosa Systems, Inc. Enterprise server version migration through identity preservation
US8812433B2 (en) * 2005-02-07 2014-08-19 Mimosa Systems, Inc. Dynamic bulk-to-brick transformation of data
US8275749B2 (en) * 2005-02-07 2012-09-25 Mimosa Systems, Inc. Enterprise server version migration through identity preservation
US8271436B2 (en) * 2005-02-07 2012-09-18 Mimosa Systems, Inc. Retro-fitting synthetic full copies of data
US7778976B2 (en) * 2005-02-07 2010-08-17 Mimosa, Inc. Multi-dimensional surrogates for data management
US8799206B2 (en) * 2005-02-07 2014-08-05 Mimosa Systems, Inc. Dynamic bulk-to-brick transformation of data
US8918366B2 (en) * 2005-02-07 2014-12-23 Mimosa Systems, Inc. Synthetic full copies of data and dynamic bulk-to-brick transformation
US7870416B2 (en) * 2005-02-07 2011-01-11 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US8161318B2 (en) * 2005-02-07 2012-04-17 Mimosa Systems, Inc. Enterprise service availability through identity preservation
US7519851B2 (en) * 2005-02-08 2009-04-14 Hitachi, Ltd. Apparatus for replicating volumes between heterogenous storage systems
US7774514B2 (en) * 2005-05-16 2010-08-10 Infortrend Technology, Inc. Method of transmitting data between storage virtualization controllers and storage virtualization controller designed to implement the method
US7630998B2 (en) * 2005-06-10 2009-12-08 Microsoft Corporation Performing a deletion of a node in a tree data storage structure
US20070038749A1 (en) * 2005-07-29 2007-02-15 Broadcom Corporation Combined local and network storage interface
US8433770B2 (en) * 2005-07-29 2013-04-30 Broadcom Corporation Combined local and network storage interface
US7802000B1 (en) * 2005-08-01 2010-09-21 Vmware Virtual network in server farm
US9813283B2 (en) 2005-08-09 2017-11-07 Oracle International Corporation Efficient data transfer between servers and remote peripherals
US20070083653A1 (en) * 2005-09-16 2007-04-12 Balasubramanian Chandrasekaran System and method for deploying information handling system images through fibre channel
JP2007094578A (en) * 2005-09-27 2007-04-12 Fujitsu Ltd Storage system and its component replacement processing method
US7765187B2 (en) * 2005-11-29 2010-07-27 Emc Corporation Replication of a consistency group of data storage objects from servers in a data network
US8572330B2 (en) * 2005-12-19 2013-10-29 Commvault Systems, Inc. Systems and methods for granular resource management in a storage network
JP4474356B2 (en) * 2005-12-27 2010-06-02 Fujitsu Ltd Computer system and storage virtualization apparatus
JP4797636B2 (en) * 2006-01-16 2011-10-19 Hitachi Ltd Complex information platform apparatus and information processing apparatus configuration method thereof
US8533409B2 (en) * 2006-01-26 2013-09-10 Infortrend Technology, Inc. Method of managing data snapshot images in a storage system
US20070180287A1 (en) * 2006-01-31 2007-08-02 Dell Products L. P. System and method for managing node resets in a cluster
US20070180167A1 (en) * 2006-02-02 2007-08-02 Seagate Technology Llc Dynamic partition mapping in a hot-pluggable data storage apparatus
US7904492B2 (en) * 2006-03-23 2011-03-08 Network Appliance, Inc. Method and apparatus for concurrent read-only access to filesystem
JP2007265001A (en) * 2006-03-28 2007-10-11 Hitachi Ltd Storage device
JP5037881B2 (en) * 2006-04-18 2012-10-03 Hitachi Ltd Storage system and control method thereof
US7617373B2 (en) * 2006-05-23 2009-11-10 International Business Machines Corporation Apparatus, system, and method for presenting a storage volume as a virtual volume
US7987305B2 (en) * 2006-05-30 2011-07-26 Schneider Electric USA, Inc. Remote virtual placeholder configuration for distributed input/output modules
US8261068B1 (en) 2008-09-30 2012-09-04 Emc Corporation Systems and methods for selective encryption of operating system metadata for host-based encryption of data at rest on a logical unit
US8416954B1 (en) 2008-09-30 2013-04-09 Emc Corporation Systems and methods for accessing storage or network based replicas of encrypted volumes with no additional key management
US7904681B1 (en) * 2006-06-30 2011-03-08 Emc Corporation Methods and systems for migrating data with minimal disruption
US7536503B1 (en) * 2006-06-30 2009-05-19 Emc Corporation Methods and systems for preserving disk geometry when migrating existing data volumes
US9003000B2 (en) * 2006-07-25 2015-04-07 Nvidia Corporation System and method for operating system installation on a diskless computing platform
US7610483B2 (en) * 2006-07-25 2009-10-27 Nvidia Corporation System and method to accelerate identification of hardware platform classes
US8909746B2 (en) * 2006-07-25 2014-12-09 Nvidia Corporation System and method for operating system installation on a diskless computing platform
US10013268B2 (en) * 2006-08-29 2018-07-03 Prometric Inc. Performance-based testing system and method employing emulation and virtualization
US8095715B1 (en) * 2006-09-05 2012-01-10 Nvidia Corporation SCSI HBA management using logical units
US7761738B2 (en) 2006-09-07 2010-07-20 International Business Machines Corporation Establishing communications across virtual enclosure boundaries
US7584378B2 (en) 2006-09-07 2009-09-01 International Business Machines Corporation Reconfigurable FC-AL storage loops in a data storage system
US8332613B1 (en) * 2006-09-29 2012-12-11 Emc Corporation Methods and systems for managing I/O requests to minimize disruption required for data encapsulation and de-encapsulation
JP2008090657A (en) * 2006-10-03 2008-04-17 Hitachi Ltd Storage system and control method
US7975135B2 (en) * 2006-11-23 2011-07-05 Dell Products L.P. Apparatus, method and product for selecting an iSCSI target for automated initiator booting
US9104599B2 (en) 2007-12-06 2015-08-11 Intelligent Intellectual Property Holdings 2 Llc Apparatus, system, and method for destaging cached data
US8296337B2 (en) 2006-12-06 2012-10-23 Fusion-Io, Inc. Apparatus, system, and method for managing data from a requesting device with an empty data token directive
US8706968B2 (en) 2007-12-06 2014-04-22 Fusion-Io, Inc. Apparatus, system, and method for redundant write caching
US8443134B2 (en) 2006-12-06 2013-05-14 Fusion-Io, Inc. Apparatus, system, and method for graceful cache device degradation
US8489817B2 (en) 2007-12-06 2013-07-16 Fusion-Io, Inc. Apparatus, system, and method for caching data
US8706833B1 (en) * 2006-12-08 2014-04-22 Emc Corporation Data storage server having common replication architecture for multiple storage object types
JP4813385B2 (en) * 2007-01-29 2011-11-09 Hitachi Ltd Control device that controls multiple logical resources of a storage system
US7840790B1 (en) * 2007-02-16 2010-11-23 Vmware, Inc. Method and system for providing device drivers in a virtualization system
JP5104855B2 (en) * 2007-03-23 2012-12-19 Fujitsu Ltd Load distribution program, load distribution method, and storage management apparatus
CN100547566C (en) * 2007-06-28 2009-10-07 Memoright Memoritech (Shenzhen) Co., Ltd. Control method based on multi-passage flash memory apparatus logic strip
US8635429B1 (en) 2007-06-29 2014-01-21 Symantec Corporation Method and apparatus for mapping virtual drives
US8738871B1 (en) * 2007-06-29 2014-05-27 Symantec Corporation Method and apparatus for mapping virtual drives
US7568051B1 (en) * 2007-06-29 2009-07-28 Emc Corporation Flexible UCB
US8176405B2 (en) * 2007-09-24 2012-05-08 International Business Machines Corporation Data integrity validation in a computing environment
US20090089498A1 (en) * 2007-10-02 2009-04-02 Michael Cameron Hay Transparently migrating ongoing I/O to virtualized storage
US20090119452A1 (en) * 2007-11-02 2009-05-07 Crossroads Systems, Inc. Method and system for a sharable storage device
US7836226B2 (en) 2007-12-06 2010-11-16 Fusion-Io, Inc. Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment
US9519540B2 (en) 2007-12-06 2016-12-13 Sandisk Technologies Llc Apparatus, system, and method for destaging cached data
WO2009070898A1 (en) * 2007-12-07 2009-06-11 Scl Elements Inc. Auto-configuring multi-layer network
US8032689B2 (en) * 2007-12-18 2011-10-04 Hitachi Global Storage Technologies Netherlands, B.V. Techniques for data storage device virtualization
US8028062B1 (en) * 2007-12-26 2011-09-27 Emc Corporation Non-disruptive data mobility using virtual storage area networks with split-path virtualization
US8055867B2 (en) * 2008-01-11 2011-11-08 International Business Machines Corporation Methods, apparatuses, and computer program products for protecting pre-staged provisioned data in a storage system
US8074020B2 (en) * 2008-02-13 2011-12-06 International Business Machines Corporation On-line volume coalesce operation to enable on-line storage subsystem volume consolidation
US20090216944A1 (en) * 2008-02-22 2009-08-27 International Business Machines Corporation Efficient validation of writes for protection against dropped writes
DE112008003788T5 (en) * 2008-03-27 2011-02-24 Hewlett-Packard Co. Intellectual Property Administration, Fort Collins Raid array access by an operating system that does not know the RAID array
JP2009238114A (en) * 2008-03-28 2009-10-15 Hitachi Ltd Storage management method, storage management program, storage management apparatus, and storage management system
US7979260B1 (en) * 2008-03-31 2011-07-12 Symantec Corporation Simulating PXE booting for virtualized machines
US8745336B2 (en) * 2008-05-29 2014-06-03 Vmware, Inc. Offloading storage operations to storage hardware
US8893160B2 (en) * 2008-06-09 2014-11-18 International Business Machines Corporation Block storage interface for virtual memory
GB2460841B (en) 2008-06-10 2012-01-11 Virtensys Ltd Methods of providing access to I/O devices
US8725688B2 (en) * 2008-09-05 2014-05-13 Commvault Systems, Inc. Image level copy or restore, such as image level restore without knowledge of data object metadata
US8073674B2 (en) * 2008-09-23 2011-12-06 Oracle America, Inc. SCSI device emulation in user space facilitating storage virtualization
US8516190B1 (en) * 2008-09-26 2013-08-20 Nvidia Corporation Reporting logical sector alignment for ATA mass storage devices
US8055842B1 (en) 2008-09-26 2011-11-08 Nvidia Corporation Using raid with large sector size ATA mass storage devices
US20100082715A1 (en) * 2008-09-30 2010-04-01 Karl Dohm Reduced-Resource Block Thin Provisioning
US8510352B2 (en) 2008-10-24 2013-08-13 Microsoft Corporation Virtualized boot block with discovery volume
US8166314B1 (en) 2008-12-30 2012-04-24 Emc Corporation Selective I/O to logical unit when encrypted, but key is not available or when encryption status is unknown
US8417969B2 (en) * 2009-02-19 2013-04-09 Microsoft Corporation Storage volume protection supporting legacy systems
US8073886B2 (en) * 2009-02-20 2011-12-06 Microsoft Corporation Non-privileged access to data independent of filesystem implementation
US8074038B2 (en) 2009-05-12 2011-12-06 Microsoft Corporation Converting luns into files or files into luns in real time
US9015198B2 (en) * 2009-05-26 2015-04-21 Pi-Coral, Inc. Method and apparatus for large scale data storage
US8238538B2 (en) 2009-05-28 2012-08-07 Comcast Cable Communications, Llc Stateful home phone service
US9973446B2 (en) 2009-08-20 2018-05-15 Oracle International Corporation Remote shared server peripherals over an Ethernet network for resource virtualization
US8495289B2 (en) * 2010-02-24 2013-07-23 Red Hat, Inc. Automatically detecting discrepancies between storage subsystem alignments
US8539124B1 (en) * 2010-03-31 2013-09-17 Emc Corporation Storage integration plugin for virtual servers
US8756338B1 (en) * 2010-04-29 2014-06-17 Netapp, Inc. Storage server with embedded communication agent
US8560825B2 (en) * 2010-06-30 2013-10-15 International Business Machines Corporation Streaming virtual machine boot services over a network
US8261003B2 (en) * 2010-08-11 2012-09-04 Lsi Corporation Apparatus and methods for managing expanded capacity of virtual volumes in a storage system
JP2012058912A (en) * 2010-09-07 2012-03-22 Nec Corp Logical unit number management device, logical unit number management method and program therefor
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
US9331963B2 (en) 2010-09-24 2016-05-03 Oracle International Corporation Wireless host I/O using virtualized I/O controllers
CN101986655A (en) * 2010-10-21 2011-03-16 Inspur (Beijing) Electronic Information Industry Co., Ltd. Storage network and data reading and writing method thereof
US8458145B2 (en) * 2011-01-20 2013-06-04 Infinidat Ltd. System and method of storage optimization
US9092337B2 (en) 2011-01-31 2015-07-28 Intelligent Intellectual Property Holdings 2 Llc Apparatus, system, and method for managing eviction of data
US9141527B2 (en) 2011-02-25 2015-09-22 Intelligent Intellectual Property Holdings 2 Llc Managing cache pools
US9606747B2 (en) 2011-05-04 2017-03-28 International Business Machines Corporation Importing pre-existing data of a prior storage solution into a storage pool for use with a new storage solution
US8838931B1 (en) * 2012-03-30 2014-09-16 Emc Corporation Techniques for automated discovery and performing storage optimizations on a component external to a data storage system
US8996800B2 (en) 2011-07-07 2015-03-31 Atlantis Computing, Inc. Deduplication of virtual machine files in a virtualized desktop environment
US9152404B2 (en) 2011-07-13 2015-10-06 Z124 Remote device filter
US20130268559A1 (en) 2011-07-13 2013-10-10 Z124 Virtual file system remote search
US8909891B2 (en) * 2011-07-21 2014-12-09 International Business Machines Corporation Virtual logical volume for overflow storage of special data sets
US8589640B2 (en) 2011-10-14 2013-11-19 Pure Storage, Inc. Method for maintaining multiple fingerprint tables in a deduplicating storage system
US20130268703A1 (en) * 2011-09-27 2013-10-10 Z124 Rules based hierarchical data virtualization
CN102567217B (en) * 2012-01-04 2014-12-24 Beihang University MIPS platform-oriented memory virtualization method
US9767032B2 (en) 2012-01-12 2017-09-19 Sandisk Technologies Llc Systems and methods for cache endurance
US9251086B2 (en) 2012-01-24 2016-02-02 SanDisk Technologies, Inc. Apparatus, system, and method for managing a cache
US9158568B2 (en) 2012-01-30 2015-10-13 Hewlett-Packard Development Company, L.P. Input/output operations at a virtual block device of a storage server
US9239776B2 (en) * 2012-02-09 2016-01-19 Vmware, Inc. Systems and methods to simulate storage
US9946559B1 (en) * 2012-02-13 2018-04-17 Veritas Technologies Llc Techniques for managing virtual machine backups
US9098325B2 (en) 2012-02-28 2015-08-04 Hewlett-Packard Development Company, L.P. Persistent volume at an offset of a virtual block device of a storage server
US10817202B2 (en) 2012-05-29 2020-10-27 International Business Machines Corporation Application-controlled sub-LUN level data migration
US10831727B2 (en) 2012-05-29 2020-11-10 International Business Machines Corporation Application-controlled sub-LUN level data migration
US10831728B2 (en) 2012-05-29 2020-11-10 International Business Machines Corporation Application-controlled sub-LUN level data migration
US8856484B2 (en) * 2012-08-14 2014-10-07 Infinidat Ltd. Mass storage system and methods of controlling resources thereof
US9116623B2 (en) * 2012-08-14 2015-08-25 International Business Machines Corporation Optimizing storage system behavior in virtualized cloud computing environments by tagging input/output operation data to indicate storage policy
US9083550B2 (en) 2012-10-29 2015-07-14 Oracle International Corporation Network virtualization over infiniband
US9454670B2 (en) 2012-12-03 2016-09-27 International Business Machines Corporation Hybrid file systems
US20140164581A1 (en) * 2012-12-10 2014-06-12 Transparent Io, Inc. Dispersed Storage System with Firewall
US9280359B2 (en) * 2012-12-11 2016-03-08 Cisco Technology, Inc. System and method for selecting a least cost path for performing a network boot in a data center network environment
US9912713B1 (en) 2012-12-17 2018-03-06 MiMedia LLC Systems and methods for providing dynamically updated image sets for applications
US9277010B2 (en) 2012-12-21 2016-03-01 Atlantis Computing, Inc. Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment
US9069472B2 (en) 2012-12-21 2015-06-30 Atlantis Computing, Inc. Method for dispersing and collating I/O's from virtual machines for parallelization of I/O access and redundancy of storing virtual machine data
US9633216B2 (en) 2012-12-27 2017-04-25 Commvault Systems, Inc. Application of information management policies based on operation with a geographic entity
US9250946B2 (en) 2013-02-12 2016-02-02 Atlantis Computing, Inc. Efficient provisioning of cloned virtual machine images using deduplication metadata
US9471590B2 (en) 2013-02-12 2016-10-18 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US9372865B2 (en) 2013-02-12 2016-06-21 Atlantis Computing, Inc. Deduplication metadata access in deduplication file system
US9459968B2 (en) 2013-03-11 2016-10-04 Commvault Systems, Inc. Single index to query multiple backup formats
US9298758B1 (en) 2013-03-13 2016-03-29 MiMedia, Inc. Systems and methods providing media-to-media connection
US9465521B1 (en) 2013-03-13 2016-10-11 MiMedia, Inc. Event based media interface
US9183232B1 (en) 2013-03-15 2015-11-10 MiMedia, Inc. Systems and methods for organizing content using content organization rules and robust content information
US10257301B1 (en) 2013-03-15 2019-04-09 MiMedia, Inc. Systems and methods providing a drive interface for content delivery
US20140359612A1 (en) * 2013-06-03 2014-12-04 Microsoft Corporation Sharing a Virtual Hard Disk Across Multiple Virtual Machines
US9176890B2 (en) 2013-06-07 2015-11-03 Globalfoundries Inc. Non-disruptive modification of a device mapper stack
US9798596B2 (en) 2014-02-27 2017-10-24 Commvault Systems, Inc. Automatic alert escalation for an information management system
US9871889B1 (en) * 2014-03-18 2018-01-16 EMC IP Holding Company LLC Techniques for automated capture of configuration data for simulation
US9213485B1 (en) 2014-06-04 2015-12-15 Pure Storage, Inc. Storage system architecture
US11960371B2 (en) 2014-06-04 2024-04-16 Pure Storage, Inc. Message persistence in a zoned system
US10574754B1 (en) 2014-06-04 2020-02-25 Pure Storage, Inc. Multi-chassis array with multi-level load balancing
US11068363B1 (en) 2014-06-04 2021-07-20 Pure Storage, Inc. Proactively rebuilding data in a storage cluster
US11652884B2 (en) 2014-06-04 2023-05-16 Pure Storage, Inc. Customized hash algorithms
US9218244B1 (en) 2014-06-04 2015-12-22 Pure Storage, Inc. Rebuilding data across storage nodes
US9836234B2 (en) 2014-06-04 2017-12-05 Pure Storage, Inc. Storage cluster
US8850108B1 (en) 2014-06-04 2014-09-30 Pure Storage, Inc. Storage cluster
US9367243B1 (en) 2014-06-04 2016-06-14 Pure Storage, Inc. Scalable non-uniform storage sizes
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US9003144B1 (en) 2014-06-04 2015-04-07 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US11604598B2 (en) 2014-07-02 2023-03-14 Pure Storage, Inc. Storage cluster with zoned drives
US9021297B1 (en) 2014-07-02 2015-04-28 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
US9836245B2 (en) 2014-07-02 2017-12-05 Pure Storage, Inc. Non-volatile RAM and flash memory in a non-volatile solid-state storage
US11886308B2 (en) 2014-07-02 2024-01-30 Pure Storage, Inc. Dual class of service for unified file and object messaging
US8868825B1 (en) 2014-07-02 2014-10-21 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US10114757B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US9811677B2 (en) 2014-07-03 2017-11-07 Pure Storage, Inc. Secure data replication in a storage grid
US10853311B1 (en) 2014-07-03 2020-12-01 Pure Storage, Inc. Administration through files in a storage system
US8874836B1 (en) 2014-07-03 2014-10-28 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US9558069B2 (en) 2014-08-07 2017-01-31 Pure Storage, Inc. Failure mapping in a storage array
US9495255B2 (en) 2014-08-07 2016-11-15 Pure Storage, Inc. Error recovery in a storage cluster
US10983859B2 (en) 2014-08-07 2021-04-20 Pure Storage, Inc. Adjustable error correction based on memory health in a storage unit
US9483346B2 (en) 2014-08-07 2016-11-01 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US9766972B2 (en) 2014-08-07 2017-09-19 Pure Storage, Inc. Masking defective bits in a storage array
US9082512B1 (en) 2014-08-07 2015-07-14 Pure Storage, Inc. Die-level monitoring in a storage cluster
US10079711B1 (en) 2014-08-20 2018-09-18 Pure Storage, Inc. Virtual file server with preserved MAC address
US10001927B1 (en) * 2014-09-30 2018-06-19 EMC IP Holding Company LLC Techniques for optimizing I/O operations
US9389789B2 (en) 2014-12-15 2016-07-12 International Business Machines Corporation Migration of executing applications and associated stored data
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US11294893B2 (en) 2015-03-20 2022-04-05 Pure Storage, Inc. Aggregation of queries
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10178169B2 (en) * 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
WO2016197155A1 (en) * 2015-06-02 2016-12-08 Viirii, Llc Operating system independent, secure data storage subsystem
US10846275B2 (en) 2015-06-26 2020-11-24 Pure Storage, Inc. Key management in a storage device
WO2017006458A1 (en) * 2015-07-08 2017-01-12 Hitachi Ltd Computer and memory region management method
US10983732B2 (en) 2015-07-13 2021-04-20 Pure Storage, Inc. Method and system for accessing a file
US11232079B2 (en) 2015-07-16 2022-01-25 Pure Storage, Inc. Efficient distribution of large directories
JP6461347B2 (en) * 2015-07-27 2019-01-30 Hitachi Ltd Storage system and storage control method
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US11341136B2 (en) 2015-09-04 2022-05-24 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
US10853266B2 (en) 2015-09-30 2020-12-01 Pure Storage, Inc. Hardware assisted data lookup methods
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US10762069B2 (en) 2015-09-30 2020-09-01 Pure Storage, Inc. Mechanism for a system where data and metadata are located closely together
US9965184B2 (en) 2015-10-19 2018-05-08 International Business Machines Corporation Multiple storage subpools of a virtual storage pool in a multiple processor environment
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution
US10261690B1 (en) 2016-05-03 2019-04-16 Pure Storage, Inc. Systems and methods for operating a storage system
US10296250B2 (en) * 2016-06-08 2019-05-21 Intel Corporation Method and apparatus for improving performance of sequential logging in a storage device
US11861188B2 (en) 2016-07-19 2024-01-02 Pure Storage, Inc. System having modular accelerators
US11449232B1 (en) 2016-07-22 2022-09-20 Pure Storage, Inc. Optimal scheduling of flash operations
US9672905B1 (en) 2016-07-22 2017-06-06 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US10768819B2 (en) 2016-07-22 2020-09-08 Pure Storage, Inc. Hardware support for non-disruptive upgrades
US11604690B2 (en) 2016-07-24 2023-03-14 Pure Storage, Inc. Online failure span determination
US11080155B2 (en) 2016-07-24 2021-08-03 Pure Storage, Inc. Identifying error types among flash memory
US10216420B1 (en) 2016-07-24 2019-02-26 Pure Storage, Inc. Calibration of flash channels in SSD
US10366004B2 (en) 2016-07-26 2019-07-30 Pure Storage, Inc. Storage system with elective garbage collection to reduce flash contention
US11886334B2 (en) 2016-07-26 2024-01-30 Pure Storage, Inc. Optimizing spool and memory space management
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US11797212B2 (en) 2016-07-26 2023-10-24 Pure Storage, Inc. Data migration for zoned drives
US11734169B2 (en) 2016-07-26 2023-08-22 Pure Storage, Inc. Optimizing spool and memory space management
US11422719B2 (en) 2016-09-15 2022-08-23 Pure Storage, Inc. Distributed file deletion and truncation
US10756816B1 (en) 2016-10-04 2020-08-25 Pure Storage, Inc. Optimized fibre channel and non-volatile memory express access
US9747039B1 (en) 2016-10-04 2017-08-29 Pure Storage, Inc. Reservations over multiple paths on NVMe over fabrics
US11550481B2 (en) 2016-12-19 2023-01-10 Pure Storage, Inc. Efficiently writing data in a zoned drive storage system
US9747158B1 (en) 2017-01-13 2017-08-29 Pure Storage, Inc. Intelligent refresh of 3D NAND
US11955187B2 (en) 2017-01-13 2024-04-09 Pure Storage, Inc. Refresh of differing capacity NAND
US10620835B2 (en) * 2017-01-27 2020-04-14 Wyse Technology L.L.C. Attaching a windows file system to a remote non-windows disk stack
US10979223B2 (en) 2017-01-31 2021-04-13 Pure Storage, Inc. Separate encryption for a solid-state drive
US10838821B2 (en) 2017-02-08 2020-11-17 Commvault Systems, Inc. Migrating content and metadata from a backup system
US10776329B2 (en) 2017-03-28 2020-09-15 Commvault Systems, Inc. Migration of a database management system to cloud storage
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US10754829B2 (en) 2017-04-04 2020-08-25 Oracle International Corporation Virtual configuration systems and methods
US11016667B1 (en) 2017-04-05 2021-05-25 Pure Storage, Inc. Efficient mapping for LUNs in storage memory with holes in address space
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US10516645B1 (en) 2017-04-27 2019-12-24 Pure Storage, Inc. Address resolution broadcasting in a networked device
US10141050B1 (en) 2017-04-27 2018-11-27 Pure Storage, Inc. Page writes for triple level cell flash memory
US10524022B2 (en) * 2017-05-02 2019-12-31 Seagate Technology Llc Data storage system with adaptive data path routing
US11467913B1 (en) 2017-06-07 2022-10-11 Pure Storage, Inc. Snapshots with crash consistency in a storage system
US11947814B2 (en) 2017-06-11 2024-04-02 Pure Storage, Inc. Optimizing resiliency group formation stability
US11782625B2 (en) 2017-06-11 2023-10-10 Pure Storage, Inc. Heterogeneity supportive resiliency groups
US11138103B1 (en) 2017-06-11 2021-10-05 Pure Storage, Inc. Resiliency groups
US10425473B1 (en) 2017-07-03 2019-09-24 Pure Storage, Inc. Stateful connection reset in a storage cluster with a stateless load balancer
US10402266B1 (en) 2017-07-31 2019-09-03 Pure Storage, Inc. Redundant array of independent disks in a direct-mapped flash storage system
US10877827B2 (en) 2017-09-15 2020-12-29 Pure Storage, Inc. Read voltage optimization
US10210926B1 (en) 2017-09-15 2019-02-19 Pure Storage, Inc. Tracking of optimum read voltage thresholds in nand flash devices
US10496330B1 (en) 2017-10-31 2019-12-03 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US11024390B1 (en) 2017-10-31 2021-06-01 Pure Storage, Inc. Overlapping RAID groups
US10515701B1 (en) 2017-10-31 2019-12-24 Pure Storage, Inc. Overlapping raid groups
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US10545687B1 (en) 2017-10-31 2020-01-28 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10990566B1 (en) 2017-11-20 2021-04-27 Pure Storage, Inc. Persistent file locks in a storage system
US10719265B1 (en) 2017-12-08 2020-07-21 Pure Storage, Inc. Centralized, quorum-aware handling of device reservation requests in a storage system
US10929053B2 (en) 2017-12-08 2021-02-23 Pure Storage, Inc. Safe destructive actions on drives
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
US10976948B1 (en) 2018-01-31 2021-04-13 Pure Storage, Inc. Cluster expansion mechanism
US10733053B1 (en) 2018-01-31 2020-08-04 Pure Storage, Inc. Disaster recovery for high-bandwidth distributed archives
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US11036596B1 (en) 2018-02-18 2021-06-15 Pure Storage, Inc. System for delaying acknowledgements on open NAND locations until durability has been confirmed
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US10853146B1 (en) 2018-04-27 2020-12-01 Pure Storage, Inc. Efficient data forwarding in a networked device
US11385792B2 (en) 2018-04-27 2022-07-12 Pure Storage, Inc. High availability controller pair transitioning
US10931450B1 (en) 2018-04-27 2021-02-23 Pure Storage, Inc. Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers
US20190362075A1 (en) * 2018-05-22 2019-11-28 Fortinet, Inc. Preventing users from accessing infected files by using multiple file storage repositories and a secure data transfer agent logically interposed therebetween
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US11438279B2 (en) 2018-07-23 2022-09-06 Pure Storage, Inc. Non-disruptive conversion of a clustered service from single-chassis to multi-chassis
US11868309B2 (en) 2018-09-06 2024-01-09 Pure Storage, Inc. Queue management for data relocation
US11500570B2 (en) 2018-09-06 2022-11-15 Pure Storage, Inc. Efficient relocation of data utilizing different programming modes
US11520514B2 (en) 2018-09-06 2022-12-06 Pure Storage, Inc. Optimized relocation of data based on data characteristics
US11354058B2 (en) 2018-09-06 2022-06-07 Pure Storage, Inc. Local relocation of data stored at a storage device of a storage system
US11036856B2 (en) 2018-09-16 2021-06-15 Fortinet, Inc. Natively mounting storage for inspection and sandboxing in the cloud
US10454498B1 (en) 2018-10-18 2019-10-22 Pure Storage, Inc. Fully pipelined hardware engine design for fast and efficient inline lossless data compression
US10976947B2 (en) 2018-10-26 2021-04-13 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US11714572B2 (en) 2019-06-19 2023-08-01 Pure Storage, Inc. Optimized data resiliency in a modular storage system
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US11893126B2 (en) 2019-10-14 2024-02-06 Pure Storage, Inc. Data deletion for a multi-tenant environment
CN112748848A (en) * 2019-10-29 2021-05-04 EMC IP Holding Company LLC Method, apparatus and computer program product for storage management
US11704192B2 (en) 2019-12-12 2023-07-18 Pure Storage, Inc. Budgeting open blocks based on power loss protection
US11416144B2 (en) 2019-12-12 2022-08-16 Pure Storage, Inc. Dynamic use of segment or zone power loss protection in a flash device
US11847331B2 (en) 2019-12-12 2023-12-19 Pure Storage, Inc. Budgeting open blocks of a storage unit based on power loss prevention
US10990537B1 (en) 2020-01-07 2021-04-27 International Business Machines Corporation Logical to virtual and virtual to physical translation in storage class memory
US11188432B2 (en) 2020-02-28 2021-11-30 Pure Storage, Inc. Data resiliency by partially deallocating data blocks of a storage device
US11507297B2 (en) 2020-04-15 2022-11-22 Pure Storage, Inc. Efficient management of optimal read levels for flash storage systems
US11256587B2 (en) 2020-04-17 2022-02-22 Pure Storage, Inc. Intelligent access to a storage device
US11416338B2 (en) 2020-04-24 2022-08-16 Pure Storage, Inc. Resiliency scheme to enhance storage performance
US11474986B2 (en) 2020-04-24 2022-10-18 Pure Storage, Inc. Utilizing machine learning to streamline telemetry processing of storage media
US11768763B2 (en) 2020-07-08 2023-09-26 Pure Storage, Inc. Flash secure erase
US11681448B2 (en) 2020-09-08 2023-06-20 Pure Storage, Inc. Multiple device IDs in a multi-fabric module storage system
US11513974B2 (en) 2020-09-08 2022-11-29 Pure Storage, Inc. Using nonce to control erasure of data blocks of a multi-controller storage system
US11487455B2 (en) 2020-12-17 2022-11-01 Pure Storage, Inc. Dynamic block allocation to optimize storage system performance
US11409608B2 (en) * 2020-12-29 2022-08-09 Advanced Micro Devices, Inc. Providing host-based error detection capabilities in a remote execution device
US11614880B2 (en) 2020-12-31 2023-03-28 Pure Storage, Inc. Storage system with selectable write paths
US11847324B2 (en) 2020-12-31 2023-12-19 Pure Storage, Inc. Optimizing resiliency groups for data regions of a storage system
US11630593B2 (en) 2021-03-12 2023-04-18 Pure Storage, Inc. Inline flash memory qualification in a storage system
US11507597B2 (en) 2021-03-31 2022-11-22 Pure Storage, Inc. Data replication to meet a recovery point objective
US11832410B2 (en) 2021-09-14 2023-11-28 Pure Storage, Inc. Mechanical energy absorbing bracket apparatus
US11816363B2 (en) * 2021-11-04 2023-11-14 International Business Machines Corporation File based virtual disk management
US11907551B2 (en) * 2022-07-01 2024-02-20 Dell Products, L.P. Performance efficient and resilient creation of network attached storage objects

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5829053A (en) * 1996-05-10 1998-10-27 Apple Computer, Inc. Block storage memory management system and method utilizing independent partition managers and device drivers
US20020087544A1 (en) * 2000-06-20 2002-07-04 Selkirk Stephen S. Dynamically changeable virtual mapping scheme
US20020103889A1 (en) * 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
WO2002065249A2 (en) * 2001-02-13 2002-08-22 Candera, Inc. Storage virtualization and storage management to provide higher level storage services
US20020156984A1 (en) * 2001-02-20 2002-10-24 Storageapps Inc. System and method for accessing a storage area network as network attached storage
US6493811B1 (en) * 1998-01-26 2002-12-10 Computer Associates Think, Inc. Intelligent controller accessed through addressable virtual space
US20030004972A1 (en) * 2001-07-02 2003-01-02 Alexander Winokur Method and apparatus for implementing a reliable open file system

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5193184A (en) * 1990-06-18 1993-03-09 Storage Technology Corporation Deleted data file space release system for a dynamically mapped virtual data storage subsystem
US6044367A (en) * 1996-08-02 2000-03-28 Hewlett-Packard Company Distributed I/O store
US6240416B1 (en) * 1998-09-11 2001-05-29 Ambeo, Inc. Distributed metadata system and method
US6311213B2 (en) * 1998-10-27 2001-10-30 International Business Machines Corporation System and method for server-to-server data storage in a network environment
US6434637B1 (en) * 1998-12-31 2002-08-13 Emc Corporation Method and apparatus for balancing workloads among paths in a multi-path computer system based on the state of previous I/O operations
US6347371B1 (en) * 1999-01-25 2002-02-12 Dell Usa, L.P. System and method for initiating operation of a computer system
US6370605B1 (en) * 1999-03-04 2002-04-09 Sun Microsystems, Inc. Switch based scalable performance storage architecture
US6467023B1 (en) * 1999-03-23 2002-10-15 Lsi Logic Corporation Method for logical unit creation with immediate availability in a raid storage environment
US6779016B1 (en) * 1999-08-23 2004-08-17 Terraspring, Inc. Extensible computing system
EP1229435A4 (en) * 1999-10-22 2008-08-06 Hitachi Ltd Storage area network system
JP4651230B2 (en) * 2001-07-13 2011-03-16 Hitachi Ltd Storage system and access control method to logical unit
US6658563B1 (en) * 2000-05-18 2003-12-02 International Business Machines Corporation Virtual floppy diskette image within a primary partition in a hard disk drive and method for booting system with virtual diskette
US6532527B2 (en) * 2000-06-19 2003-03-11 Storage Technology Corporation Using current recovery mechanisms to implement dynamic mapping operations
AU2002220108A1 (en) * 2000-11-02 2002-05-15 Pirus Networks Switching system
US6871245B2 (en) * 2000-11-29 2005-03-22 Radiant Data Corporation File system translators and methods for implementing the same
JP4187403B2 (en) * 2000-12-20 2008-11-26 International Business Machines Corporation Data recording system, data recording method, and network system
JP4105398B2 (en) * 2001-02-28 2008-06-25 Hitachi Ltd Information processing system
US6779063B2 (en) * 2001-04-09 2004-08-17 Hitachi, Ltd. Direct access storage system having plural interfaces which permit receipt of block and file I/O requests
US20040015864A1 (en) * 2001-06-05 2004-01-22 Boucher Michael L. Method and system for testing memory operations of computer program
US7548975B2 (en) * 2002-01-09 2009-06-16 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network through a virtual enclosure
US7433948B2 (en) * 2002-01-23 2008-10-07 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network
US6934799B2 (en) * 2002-01-18 2005-08-23 International Business Machines Corporation Virtualization of iSCSI storage
EP1345113A3 (en) * 2002-03-13 2008-02-06 Hitachi, Ltd. Management server
US6889309B1 (en) * 2002-04-15 2005-05-03 Emc Corporation Method and apparatus for implementing an enterprise virtual storage system
US6954852B2 (en) * 2002-04-18 2005-10-11 Ardence, Inc. System for and method of network booting of an operating system to a client computer using hibernation
US7188194B1 (en) * 2002-04-22 2007-03-06 Cisco Technology, Inc. Session-based target/LUN mapping for a storage area network and associated method
US6973587B1 (en) * 2002-05-03 2005-12-06 American Megatrends, Inc. Systems and methods for out-of-band booting of a computer
US7107385B2 (en) * 2002-08-09 2006-09-12 Network Appliance, Inc. Storage virtualization by layering virtual disk objects on a file system
US7100089B1 (en) * 2002-09-06 2006-08-29 3Pardata, Inc. Determining differences between snapshots
US7263593B2 (en) * 2002-11-25 2007-08-28 Hitachi, Ltd. Virtualization controller and data transfer control method
US7797392B2 (en) * 2002-11-26 2010-09-14 International Business Machines Corporation System and method for efficiently supporting multiple native network protocol implementations in a single system
US7020760B2 (en) * 2002-12-16 2006-03-28 International Business Machines Corporation Hybrid logical block virtualization system for a storage area network
US6816917B2 (en) * 2003-01-15 2004-11-09 Hewlett-Packard Development Company, L.P. Storage system with LUN virtualization
US7606239B2 (en) * 2003-01-31 2009-10-20 Brocade Communications Systems, Inc. Method and apparatus for providing virtual ports with attached virtual devices in a storage area network
US6990573B2 (en) * 2003-02-05 2006-01-24 Dell Products L.P. System and method for sharing storage to boot multiple servers
JP2007510198A (en) * 2003-10-08 2007-04-19 Unisys Corporation Paravirtualization of computer systems using hypervisors implemented in host system partitions
US7669032B2 (en) * 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20050114595A1 (en) 2003-11-26 2005-05-26 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
US20050125538A1 (en) * 2003-12-03 2005-06-09 Dell Products L.P. Assigning logical storage units to host computers
US8190714B2 (en) * 2004-04-15 2012-05-29 Raytheon Company System and method for computer cluster virtualization using dynamic boot images and virtual disk

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5829053A (en) * 1996-05-10 1998-10-27 Apple Computer, Inc. Block storage memory management system and method utilizing independent partition managers and device drivers
US6493811B1 (en) * 1998-01-26 2002-12-10 Computer Associates Think, Inc. Intelligent controller accessed through addressable virtual space
US20020103889A1 (en) * 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
US20020087544A1 (en) * 2000-06-20 2002-07-04 Selkirk Stephen S. Dynamically changeable virtual mapping scheme
WO2002065249A2 (en) * 2001-02-13 2002-08-22 Candera, Inc. Storage virtualization and storage management to provide higher level storage services
US20020156984A1 (en) * 2001-02-20 2002-10-24 Storageapps Inc. System and method for accessing a storage area network as network attached storage
US20030004972A1 (en) * 2001-07-02 2003-01-02 Alexander Winokur Method and apparatus for implementing a reliable open file system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "IBM Storage Tank TM. A Distributed Storage System", IBM, 24 January 2002 (2002-01-24), XP002270407 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009508192A (en) * 2005-08-25 2009-02-26 Silicon Image, Inc. Smart scalable memory switch architecture
US8595434B2 (en) 2005-08-25 2013-11-26 Silicon Image, Inc. Smart scalable storage switch architecture
KR101340176B1 (en) * 2005-08-25 2013-12-10 실리콘 이미지, 인크. Smart scalable storage switch architecture
US9201778B2 (en) 2005-08-25 2015-12-01 Lattice Semiconductor Corporation Smart scalable storage switch architecture
JP2008112399A (en) * 2006-10-31 2008-05-15 Fujitsu Ltd Storage virtualization switch and computer system
US11681614B1 (en) * 2013-01-28 2023-06-20 Radian Memory Systems, Inc. Storage device with subdivisions, subdivision query, and write operations
US11748257B1 (en) * 2013-01-28 2023-09-05 Radian Memory Systems, Inc. Host, storage system, and methods with subdivisions and query based write operations
US10013188B2 (en) 2014-12-17 2018-07-03 Fujitsu Limited Storage control device and method of copying data in failover in storage cluster
WO2022157785A1 (en) * 2021-01-25 2022-07-28 Volumez Technologies Ltd. Method and system for orchestrating remote block device connection between servers
US11853557B2 (en) 2021-01-25 2023-12-26 Volumez Technologies Ltd. Shared drive storage stack distributed QoS method and system

Also Published As

Publication number Publication date
CN100552611C (en) 2009-10-21
CN1906569A (en) 2007-01-31
EP1687706A1 (en) 2006-08-09
JP2007516523A (en) 2007-06-21
US20050235132A1 (en) 2005-10-20
US7689803B2 (en) 2010-03-30
US20050228937A1 (en) 2005-10-13
JP4750040B2 (en) 2011-08-17
US20050114595A1 (en) 2005-05-26
US20050228950A1 (en) 2005-10-13

Similar Documents

Publication Publication Date Title
JP4750040B2 (en) System and method for emulating operating system metadata enabling cross-platform access to storage volumes
US11093155B2 (en) Automated seamless migration with signature issue resolution
US9563469B2 (en) System and method for storage and deployment of virtual machines in a virtual server environment
RU2302034C9 (en) Multi-protocol data storage device realizing integrated support of file access and block access protocols
US8819383B1 (en) Non-disruptive realignment of virtual data
US8069217B2 (en) System and method for providing access to a shared system image
US20090049160A1 (en) System and Method for Deployment of a Software Image
US20050289218A1 (en) Method to enable remote storage utilization
US20130167155A1 (en) File system independent content aware cache
US8984224B2 (en) Multiple instances of mapping configurations in a storage system or storage appliance
US20100146039A1 (en) System and Method for Providing Access to a Shared System Image
US8972657B1 (en) Managing active—active mapped logical volumes
US11860791B2 (en) Methods for managing input-output operations in zone translation layer architecture and devices thereof
US11853234B2 (en) Techniques for providing access of host-local storage to a programmable network interface component while preventing direct host CPU access
US11093144B1 (en) Non-disruptive transformation of a logical storage device from a first access protocol to a second access protocol
US11388135B2 (en) Automated management server discovery
US20220012208A1 (en) Configuring a file server
US11379246B2 (en) Automatic configuration of multiple virtual storage processors
US11334441B2 (en) Distribution of snaps for load balancing data node clusters
US11455116B2 (en) Reservation handling in conjunction with switching between storage access protocols
US11789624B1 (en) Host device with differentiated alerting for single points of failure in distributed storage systems
US11385824B2 (en) Automated seamless migration across access protocols for a logical storage device
US20230418500A1 (en) Migration processes utilizing mapping entry timestamps for selection of target logical storage devices

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 11156636

Country of ref document: US

Ref document number: 11156821

Country of ref document: US

Ref document number: 11156635

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006541649

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

WWE Wipo information: entry into national phase

Ref document number: 2004811936

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200480040583.5

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2004811936

Country of ref document: EP