US20060064558A1 - Internal mirroring operations in storage networks - Google Patents

Internal mirroring operations in storage networks

Info

Publication number
US20060064558A1
Authority
US
United States
Prior art keywords
storage
write
network
cache memory
cell
Prior art date
Legal status
Abandoned
Application number
US10/945,183
Inventor
Robert Cochran
Titus Davis
Current Assignee
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US10/945,183
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. Assignors: COCHRAN, ROBERT A.; DAVIS, TITUS E. (assignment of assignors' interest; see document for details)
Priority to JP2005266241A (published as JP2006092535A)
Publication of US20060064558A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16: Error detection or correction of the data by redundancy in hardware
    • G06F 11/20: Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053: where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056: redundant by mirroring
    • G06F 11/2058: by mirroring using more than 2 mirrored copies
    • G06F 11/2071: by mirroring using a plurality of controllers
    • G06F 11/2074: Asynchronous techniques
    • G06F 11/2076: Synchronous techniques


Abstract

An exemplary storage network and methods of operation are disclosed which make use of data-consistent internal mirrors within a storage device. The exemplary storage network comprises first, second, and third storage cells, each of which includes physical storage media and a storage controller that controls data operations with the storage media. The storage controllers are configured such that, in operation, write operations executed on the first storage cell are copied remotely in an ordered sequence to a cache memory in the second storage cell, write operations in the cache memory are mirrored onto a primary and secondary storage media in the second storage cell, and write operations mirrored to the secondary storage media are copied remotely to the third storage cell.

Description

    TECHNICAL FIELD
  • The described subject matter relates to electronic computing, and more particularly to systems and methods for managing storage in electronic computing systems.
  • BACKGROUND
  • Effective collection, management, and control of information have become a central component of modern business processes. To this end, many businesses, both large and small, now implement computer-based information management systems.
  • Data management is an important component of a computer-based information management system. Many users implement storage networks to manage data operations in computer-based information management systems. Storage networks have evolved in computing power and complexity to provide highly reliable, managed storage solutions that may be distributed across a wide geographic area.
  • Data redundancy is one aspect of reliability in storage networks. A single copy of data is vulnerable if the network element on which the data resides fails. If the vulnerable data or the network element on which it resides can be recovered, then the loss may be temporary. However, if either the data or the network element cannot be recovered then the vulnerable data may be lost permanently.
  • Storage networks implement remote copy procedures to provide data redundancy and failover procedures to provide data consistency in the event of a failure of one or more network elements. Remote copy procedures replicate one or more data sets resident on a first storage site onto at least a second storage site, and frequently onto a third storage site. Adroit resource management is desirable to balance the competing demands of reducing host response times and ensuring data consistency between multiple storage sites.
  • SUMMARY
  • In an exemplary implementation, a storage network is provided. The storage network comprises a first storage cell at a first location, the first storage cell including physical storage media and a storage controller that controls data transfer operations with the storage media; a second storage cell at a second location, the second storage cell including physical storage media and a storage controller that controls data transfer operations with the storage media; and a third storage cell at a third location, the third storage cell including physical storage media and a storage controller that controls data transfer operations with the storage media. In operation, write operations executed on the first storage cell are copied remotely in an ordered sequence to a cache memory in the second storage cell; write operations in the cache memory are mirrored onto a primary and secondary storage media in the second storage cell; and write operations mirrored to the secondary storage media are copied remotely to the third storage cell.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic illustration of an exemplary implementation of a networked computing system that utilizes a storage network.
  • FIG. 2 is a schematic illustration of an exemplary implementation of a storage network.
  • FIG. 3 is a schematic illustration of an exemplary implementation of a computing device that can be utilized to implement a host.
  • FIG. 4 is a schematic illustration of an exemplary implementation of a storage cell.
  • FIG. 5 is a flowchart illustrating operations in a first exemplary implementation for executing write operations in a storage network.
  • FIG. 6 is a flowchart illustrating operations in a second exemplary implementation for executing write operations in a storage network.
  • FIG. 7 is a flowchart illustrating operations in a third exemplary implementation for executing write operations in a storage network.
  • FIG. 8 is a flowchart illustrating operations in a fourth exemplary implementation for executing write operations in a storage network.
  • FIG. 9 is a schematic illustration of an exemplary implementation of a three-site data replication architecture.
  • DETAILED DESCRIPTION
  • Described herein are exemplary storage network architectures and methods for performing internal mirroring operations in storage networks. The methods described herein may be embodied as logic instructions on a computer-readable medium such as, e.g., firmware executable on a processor. When executed on a processor, the logic instructions cause the processor to be programmed as a special-purpose machine that implements the described methods.
  • Exemplary Network Architecture
  • FIG. 1 is a schematic illustration of an exemplary implementation of a networked computing system 100 that utilizes a storage network. The storage network comprises a storage pool 110, which comprises an arbitrarily large quantity of storage space. In practice, a storage pool 110 has a finite size limit determined by the particular hardware used to implement the storage pool 110. However, there are few theoretical limits to the storage space available in a storage pool 110.
  • A plurality of logical disks (also called logical units or LUs) 112 a, 112 b may be allocated within storage pool 110. Each LU 112 a, 112 b comprises a contiguous range of logical addresses that can be addressed by host devices 120, 122, 124 and 128 by mapping requests from the connection protocol used by the host device to the uniquely identified LU 112. As used herein, the term “host” refers to a computing system that utilizes storage on its own behalf, or on behalf of systems coupled to the host. For example, a host may be a supercomputer processing large databases or a transaction processing server maintaining transaction records. Alternatively, a host may be a file server on a local area network (LAN) or wide area network (WAN) that provides storage services for an enterprise. A file server may comprise one or more disk controllers and/or RAID controllers configured to manage multiple disk drives. A host connects to a storage network via a communication connection such as, e.g., a Fibre Channel (FC) connection.
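  • As a rough illustration of the LU addressing just described, the following sketch (editorial, not from the patent; the LU names, sizes, and block addresses are assumptions) maps a host request onto a uniquely identified LU and checks that it falls within the LU's contiguous range of logical addresses:

```python
# Minimal sketch (not from the patent): how a host-side request might be
# mapped onto a uniquely identified LU. LU identifiers and sizes are
# illustrative assumptions.

LU_TABLE = {
    "112a": {"start_lba": 0, "num_blocks": 1_000_000},
    "112b": {"start_lba": 0, "num_blocks": 2_000_000},
}

def map_request(lu_id: str, lba: int, length: int) -> tuple[str, int, int]:
    """Validate that a (LU, LBA, length) request falls inside the LU's
    contiguous range of logical addresses and return the resolved tuple."""
    lu = LU_TABLE[lu_id]
    if lba < 0 or lba + length > lu["num_blocks"]:
        raise ValueError("request falls outside the LU's logical address range")
    return (lu_id, lba, length)

print(map_request("112a", lba=4096, length=8))
```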
  • A host such as server 128 may provide services to other computing or data processing systems or devices. For example, client computer 126 may access storage pool 110 via a host such as server 128. Server 128 may provide file services to client 126, and may provide other services such as transaction processing services, email services, etc. Hence, client device 126 may or may not directly use the storage consumed by host 128.
  • Devices such as wireless device 120, and computers 122, 124, which are also hosts, may logically couple directly to LUs 112 a, 112 b. Hosts 120-128 may couple to multiple LUs 112 a, 112 b, and LUs 112 a, 112 b may be shared among multiple hosts. Each of the devices shown in FIG. 1 may include memory, mass storage, and a degree of data processing capability sufficient to manage a network connection.
  • FIG. 2 is a schematic illustration of an exemplary storage network 200 that may be used to implement a storage pool such as storage pool 110. Storage network 200 comprises a plurality of storage cells 210 a, 210 b, 210 c connected by a communication network 212. Storage cells 210 a, 210 b, 210 c may be implemented as one or more communicatively connected storage devices. Exemplary storage devices include the STORAGEWORKS line of storage devices commercially available from Hewlett-Packard Corporation of Palo Alto, Calif., USA. Communication network 212 may be implemented as a private, dedicated network such as, e.g., a Fibre Channel (FC) switching fabric. Alternatively, portions of communication network 212 may be implemented using public communication networks pursuant to a suitable communication protocol such as, e.g., the Internet Small Computer System Interface (iSCSI) protocol.
  • Client computers 214 a, 214 b, 214 c may access storage cells 210 a, 210 b, 210 c through a host, such as servers 216, 220, 230. Clients 214 a, 214 b, 214 c may be connected to file server 216 directly, or via a network 218 such as a Local Area Network (LAN) or a Wide Area Network (WAN). The number of storage cells 210 a, 210 b, 210 c that can be included in any storage network is limited primarily by the connectivity implemented in the communication network 212. A switching fabric comprising a single FC switch can interconnect 256 or more ports, providing a possibility of hundreds of storage cells 210 a, 210 b, 210 c in a single storage network.
  • Hundreds or even thousands of host computers 216, 220 may connect to storage network 200 to access data stored in storage cells 210 a, 210 b, 210 c. Hosts 216, 220 may be embodied as server computers. FIG. 3 is a schematic illustration of an exemplary computing device 330 that can be utilized to implement a host. Computing device 330 includes one or more processors or processing units 332, a system memory 334, and a bus 336 that couples various system components including the system memory 334 to processors 332. The bus 336 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. The system memory 334 includes read only memory (ROM) 338 and random access memory (RAM) 340. A basic input/output system (BIOS) 342, containing the basic routines that help to transfer information between elements within computing device 330, such as during start-up, is stored in ROM 338.
  • Computing device 330 further includes a hard disk drive 344 for reading from and writing to a hard disk (not shown), and may include a magnetic disk drive 346 for reading from and writing to a removable magnetic disk 348, and an optical disk drive 350 for reading from or writing to a removable optical disk 352 such as a CD ROM or other optical media. The hard disk drive 344, magnetic disk drive 346, and optical disk drive 350 are connected to the bus 336 by a SCSI interface 354 or some other appropriate interface. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for computing device 330. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 348 and a removable optical disk 352, other types of computer-readable media such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like, may also be used in the exemplary operating environment.
  • A number of program modules may be stored on the hard disk 344, magnetic disk 348, optical disk 352, ROM 338, or RAM 340, including an operating system 358, one or more application programs 360, other program modules 362, and program data 364. A user may enter commands and information into computing device 330 through input devices such as a keyboard 366 and a pointing device 368. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are connected to the processing unit 332 through an interface 370 that is coupled to the bus 336. A monitor 372 or other type of display device is also connected to the bus 336 via an interface, such as a video adapter 374.
  • Computing device 330 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 376. The remote computer 376 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computing device 330, although only a memory storage device 378 has been illustrated in FIG. 3. The logical connections depicted in FIG. 3 include a LAN 380 and a WAN 382.
  • When used in a LAN networking environment, computing device 330 is connected to the local network 380 through a network interface or adapter 384. When used in a WAN networking environment, computing device 330 typically includes a modem 386 or other means for establishing communications over the wide area network 382, such as the Internet. The modem 386, which may be internal or external, is connected to the bus 336 via a serial port interface 356. In a networked environment, program modules depicted relative to the computing device 330, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • Hosts 216, 220 may include host adapter hardware and software to enable a connection to communication network 212. The connection to communication network 212 may be through an optical coupling or more conventional conductive cabling depending on the bandwidth requirements. A host adapter may be implemented as a plug-in card on computing device 330. Hosts 216, 220 may implement any number of host adapters to provide as many connections to communication network 212 as the hardware and software support.
  • Generally, the data processors of computing device 330 are programmed by means of instructions stored at different times in the various computer-readable storage media of the computer. Programs and operating systems are distributed, for example, on floppy disks or CD-ROMs, or electronically, and are installed or loaded into the secondary memory of a computer. At execution, the programs are loaded at least partially into the computer's primary electronic memory.
  • FIG. 4 is a schematic illustration of an exemplary implementation of a storage cell 400, such as storage cell 210. Referring to FIG. 4, storage cell 400 includes two Network Storage Controllers (NSCs), also referred to as disk controllers, 410 a, 410 b to manage the operations and the transfer of data to and from one or more disk arrays 440, 442. NSCs 410 a, 410 b may be implemented as plug-in cards having a microprocessor 416 a, 416 b, and memory 418 a, 418 b. Each NSC 410 a, 410 b includes dual host adapter ports 412 a, 414 a, 412 b, 414 b that provide an interface to a host, i.e., through a communication network such as a switching fabric. In a Fibre Channel implementation, host adapter ports 412 a, 412 b, 414 a, 414 b may be implemented as FC N_Ports. Each host adapter port 412 a, 412 b, 414 a, 414 b manages the login and interface with a switching fabric, and is assigned a fabric-unique port ID in the login process. The architecture illustrated in FIG. 4 provides a fully-redundant storage cell; however, only a single NSC is required to implement a storage cell 210.
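  • The controller arrangement described above can be pictured with a brief sketch (an editorial illustration under stated assumptions; the class names and fabric address pool are invented, not part of the patent): a storage cell holds a redundant pair of NSCs, each NSC exposes dual host adapter ports, and each port receives a fabric-unique port ID when it logs in to the switching fabric:

```python
# Minimal sketch (assumptions, not the patent's firmware): a storage cell with
# two redundant NSCs, each exposing dual host adapter ports; each port is
# handed a fabric-unique port ID when it logs in to the switching fabric.
from itertools import count

_fabric_ids = count(0x010000)        # hypothetical fabric address pool

class HostAdapterPort:
    def __init__(self, name):
        self.name = name
        self.port_id = None          # assigned during fabric login

    def fabric_login(self):
        self.port_id = next(_fabric_ids)
        return self.port_id

class NSC:
    def __init__(self, label):
        self.label = label
        self.ports = [HostAdapterPort(label + "-p0"), HostAdapterPort(label + "-p1")]

class StorageCell:
    def __init__(self):
        # Fully redundant pair of controllers; a single NSC would also suffice.
        self.nscs = [NSC("410a"), NSC("410b")]

cell = StorageCell()
for nsc in cell.nscs:
    for port in nsc.ports:
        print(nsc.label, port.name, hex(port.fabric_login()))
```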
  • Each NSC 410 a, 410 b further includes a communication port 428 a, 428 b that enables a communication connection 438 between the NSCs 410 a, 410 b. The communication connection 438 may be implemented as a FC point-to-point connection, or pursuant to any other suitable communication protocol.
  • In an exemplary implementation, NSCs 410 a, 410 b further include a plurality of Fibre Channel Arbitrated Loop (FCAL) ports 420 a-426 a, 420 b-426 b that implement an FCAL communication connection with a plurality of storage devices, e.g., arrays of disk drives 440, 442. While the illustrated embodiment implements FCAL connections with the arrays of disk drives 440, 442, it will be understood that the communication connection with the arrays of disk drives 440, 442 may be implemented using other communication protocols. For example, rather than an FCAL configuration, a FC switching fabric may be used.
  • Exemplary Operations
  • Having described various components of an exemplary storage network, attention is now directed to operations of the storage network 200 and components thereof.
  • In operation, storage capacity provided by the arrays of disk drives 440, 442 in storage cells 210 a, 210 b, 210 c may be added to the storage pool 110. When an application requires storage capacity, logic instructions on a host computer 128 may establish a LU from storage capacity available on the arrays of disk drives 440, 442 in one or more storage cells 210 a, 210 b, 210 c. It will be appreciated that because a LU is a logical unit, not a physical unit, the physical storage space that constitutes the LU may be distributed across multiple storage cells 210 a, 210 b, 210 c. Data for the application may be stored on one or more LUs in the storage network.
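  • A minimal sketch of LU allocation from the pool (editorial illustration only; cell names and capacities are assumed) shows how a LU's physical extents may end up distributed across more than one storage cell:

```python
# Illustrative sketch only: allocating a LU from whatever capacity is
# available across several storage cells, so the LU's physical extents may be
# spread over more than one cell. Sizes and cell names are made up.

cells = {"210a": 500, "210b": 300, "210c": 800}   # free capacity, in GB

def allocate_lu(requested_gb, free_capacity):
    """Greedily carve extents out of the cells until the request is met."""
    extents, remaining = [], requested_gb
    for cell, free in free_capacity.items():
        if remaining <= 0:
            break
        take = min(free, remaining)
        if take > 0:
            extents.append((cell, take))
            free_capacity[cell] -= take
            remaining -= take
    if remaining > 0:
        raise RuntimeError("storage pool has insufficient capacity")
    return extents

print(allocate_lu(700, cells))   # e.g. [('210a', 500), ('210b', 200)]
```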
  • An application that needs access to data in the storage network may launch a read query to a host computer. In response to a read query, the host computer queries the NSC(s) on one or more storage cells in which the requested data resides. The NSC(s) retrieve the requested data from the storage media on which it resides and forward the data to the host computer, which in turn can forward the data to the requesting device.
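  • The read path above can be summarized in a short sketch (hypothetical layout and data; not the actual NSC protocol): the host resolves which cell owns each requested block, asks that cell's NSC for the data, and forwards the result to the requesting device:

```python
# A rough sketch (not the actual protocol) of the read path described above.

STORAGE_MEDIA = {          # hypothetical data laid out per storage cell
    "210a": {0: b"block-0", 1: b"block-1"},
    "210b": {2: b"block-2"},
}
LU_LAYOUT = {0: "210a", 1: "210a", 2: "210b"}   # block -> owning cell

def nsc_read(cell, block):
    """NSC side: fetch the block from the cell's storage media."""
    return STORAGE_MEDIA[cell][block]

def host_read(blocks):
    """Host side: query the NSC of each owning cell and forward the data."""
    return [nsc_read(LU_LAYOUT[b], b) for b in blocks]

print(host_read([0, 2]))   # data returned to the requesting device
```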
  • Storage network 200 may implement remote copy procedures to provide data redundancy for data stored in storage cells 210 a, 210 b, 210 c. By way of example, FIG. 9 is a schematic illustration of an exemplary network architecture for three-site data redundancy. Referring to FIG. 9, a LU resident on one or more disk arrays 912 at a first storage site 910 may implement synchronous replication with a remote copy resident on a disk array 922 at a second storage site 920. The second storage site 920 may maintain an internal mirror of disk array 922, e.g., on either the same disk array or a second disk array 926. The second storage site 920 may implement asynchronous replication with a disk array 932 at a third storage site 930. Referring briefly to FIG. 2, the storage sites 910, 920, and 930 may correspond to the respective storage cells 210 a, 210 b, and 210 c. During the remote copy process the information in the LU is transmitted across the switching fabric, sometimes referred to as a “network cloud,” to its destination storage cell.
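  • The FIG. 9 topology can be captured as a plain configuration structure, sketched below with editorial field names (none of which come from the patent): synchronous replication into site 920, an internal mirror between arrays 922 and 926, and asynchronous replication out of the secondary copy to site 930:

```python
# Sketch of the three-site topology in FIG. 9 as a plain data structure; the
# field names are editorial assumptions, not terminology from the patent.

THREE_SITE_TOPOLOGY = {
    "site_910": {"arrays": ["912"], "role": "primary"},
    "site_920": {
        "arrays": ["922", "926"],          # 926 holds the internal mirror of 922
        "role": "intermediate",
        "replication_in": {"from": "site_910", "mode": "synchronous"},
        "internal_mirror": {"primary": "922", "secondary": "926"},
        "replication_out": {"to": "site_930", "mode": "asynchronous",
                            "source_copy": "926"},
    },
    "site_930": {"arrays": ["932"], "role": "tertiary"},
}

for site, cfg in THREE_SITE_TOPOLOGY.items():
    print(site, cfg["role"], cfg["arrays"])
```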
  • Write Operations
  • An application can write data to the storage network 200 by launching a write request to a host computer 216, 220. In response to a write request, a host computer 216, 220 launches a write command to the NSC(s) 410 a, 410 b in one or more storage cells 210 a, 210 b, 210 c on which the data is to be written. The write command includes the data to be written to the storage network 200. In response to the write command, the NSC(s) 410 a, 410 b write the data onto the storage media. Referring again to FIG. 9, if the storage network 200 is configured to implement three-site data replication, then data from the write operation is written to a second storage site 920 on the storage network, typically in a synchronous fashion. An internal mirror write operation is performed between a primary copy on the first disk array 922 and a secondary copy, which may be on the first disk array 922 or the second disk array 926 at the second site 920. The write operation is then copied from the secondary copy to the third storage site 930.
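  • The end-to-end write propagation described above is sketched below (an editorial model with in-memory dictionaries standing in for the disk arrays; not the patent's firmware): the write lands at the first site, is copied synchronously to the primary copy at the second site, is internally mirrored to the secondary copy, and is then drained asynchronously to the third site:

```python
# An editorial sketch (not the patent's firmware) of the write path described
# above: synchronous copy to the second site, internal mirror at the second
# site, then asynchronous copy from the secondary copy to the third site.

site1, site2_primary, site2_secondary, site3 = {}, {}, {}, {}
async_queue = []                     # writes waiting to drain to the third site

def three_site_write(block, data):
    site1[block] = data                          # write at the first site
    site2_primary[block] = data                  # synchronous remote copy
    site2_secondary[block] = data                # internal mirror at site 2
    async_queue.append((block, data))            # queued for the third site

def drain_to_third_site():
    while async_queue:                           # asynchronous replication
        block, data = async_queue.pop(0)
        site3[block] = data

three_site_write(7, b"payload")
drain_to_third_site()
assert site1 == site2_primary == site2_secondary == site3
```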
  • FIGS. 5-8 are flowcharts illustrating exemplary methods for performing internal mirroring operations in a storage array. In exemplary implementations, the methods illustrated in FIGS. 5-8 may be implemented as firmware in a suitable processor such as, e.g., in the storage controller of a storage cell such as the second storage site 920 of a three-site storage architecture. However, the methods illustrated in FIGS. 5-8 may find suitable application in other network architectures.
  • FIG. 5 is a flowchart illustrating operations 500 in a first exemplary implementation for synchronously executing mirrored write operations on a storage device having a primary and secondary cache. At operation 510 a storage controller receives a write request, e.g., from a host computer over communication network 212. At operation 512 the storage controller initiates a write operation to write the request to the primary cache. At operation 514 the storage controller forwards the write operation to the secondary cache within the same storage device. At operation 516 the storage controller initiates a write operation on the secondary cache within the same storage device. In exemplary implementations write operations are written to the respective caches in an ordered queue that corresponds to the order of execution of the write operations. After the write operation is committed to secondary cache, the storage controller receives a write operation acknowledgment (operation 518) indicating that the write operation is complete. At operation 520 the storage controller sends an acknowledgment to the host computer indicating that the write operation is complete. At operation 522 the storage controller copies the write data from the primary and secondary caches to respective first and second LUs within the same storage device.
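  • A compact sketch of the FIG. 5 sequence follows (editorial assumptions: Python lists stand in for the ordered cache queues and dictionaries for the LUs; operation numbers refer to the flowchart): the host is acknowledged only after the write is committed to both caches, and both caches are then destaged to their respective LUs:

```python
# Sketch of the FIG. 5 sequence under stated assumptions (in-memory lists stand
# in for the caches and LUs; operation numbers follow the flowchart).

primary_cache, secondary_cache = [], []          # ordered write queues
lu_1, lu_2 = {}, {}

def sync_mirrored_write(block, data, ack_host):
    primary_cache.append((block, data))          # 512: write to primary cache
    secondary_cache.append((block, data))        # 514/516: forward and write to
                                                 # secondary cache (518: ack)
    ack_host("write complete")                   # 520: acknowledge the host
    for cache, lu in ((primary_cache, lu_1), (secondary_cache, lu_2)):
        while cache:                             # 522: destage caches to the LUs
            b, d = cache.pop(0)
            lu[b] = d

sync_mirrored_write(0, b"data", print)
assert lu_1 == lu_2 == {0: b"data"}
```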
  • FIG. 6 is a flowchart illustrating operations 600 in a second exemplary implementation for synchronously executing write operations on a storage device having a unified (i.e., shared) cache. At operation 610 a storage controller receives a write request, e.g., from a host computer over communication network 212. At operation 612 the storage controller initiates a write operation to write the request to the shared cache. At operation 614 the storage controller sends an acknowledgment to the host computer indicating that the write operation is complete. At operation 616 the storage controller places a block on incoming write operations. At operation 618 the storage controller copies the write data from the shared cache to respective first and second disk arrays, or LUs. At operation 620 the storage controller terminates the write block, thereby allowing subsequent write requests to be processed by the storage controller.
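  • The FIG. 6 sequence differs in that a single shared cache is used and incoming writes are blocked while the cache is destaged to both LUs; a sketch under the same editorial assumptions:

```python
# Sketch of the FIG. 6 sequence (assumptions as above): a single shared cache,
# with incoming writes blocked while the cache is destaged to both LUs.

shared_cache, lu_1, lu_2 = [], {}, {}
writes_blocked = False

def sync_shared_cache_write(block, data, ack_host):
    global writes_blocked
    if writes_blocked:
        raise RuntimeError("write block in effect; retry later")
    shared_cache.append((block, data))           # 612: write to shared cache
    ack_host("write complete")                   # 614: acknowledge the host
    writes_blocked = True                        # 616: block incoming writes
    while shared_cache:                          # 618: copy cache to both LUs
        b, d = shared_cache.pop(0)
        lu_1[b] = d
        lu_2[b] = d
    writes_blocked = False                       # 620: terminate the write block

sync_shared_cache_write(0, b"data", print)
assert lu_1 == lu_2 == {0: b"data"}
```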
  • FIG. 7 is a flowchart illustrating operations 700 in a third exemplary implementation for asynchronously executing write operations on a storage device having a unified (i.e., shared) cache. At operation 710 a storage controller receives a write request, e.g., from a host computer over communication network 212. At operation 712 the storage controller initiates a write operation to write the request to the shared cache. In exemplary implementations write operations are written to the cache in an ordered queue. At operation 714 the storage controller sends an acknowledgment to the host computer indicating that the write operation is complete. At operation 716 the storage controller copies the write data from the shared cache to respective first and second LUs.
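  • In the FIG. 7 sequence the host is acknowledged as soon as the write is queued in the shared cache, and destaging to the two LUs happens later without blocking new writes; sketched under the same editorial assumptions:

```python
# Sketch of the FIG. 7 sequence (assumptions as above): the host is acknowledged
# once the write is queued in the shared cache; destaging to the two LUs
# happens later, in queue order, without blocking new writes.

shared_cache, lu_1, lu_2 = [], {}, {}

def async_shared_cache_write(block, data, ack_host):
    shared_cache.append((block, data))           # 712: ordered queue in cache
    ack_host("write complete")                   # 714: acknowledge the host

def destage():
    while shared_cache:                          # 716: copy cache to both LUs
        b, d = shared_cache.pop(0)
        lu_1[b] = d
        lu_2[b] = d

async_shared_cache_write(0, b"data", print)
destage()
assert lu_1 == lu_2 == {0: b"data"}
```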
  • FIG. 8 is a flowchart illustrating operations 800 in a fourth exemplary implementation for asynchronously executing write operations on a storage device having a primary and a secondary cache. At operation 810 a storage controller receives a write request, e.g., from a host computer over communication network 212. At operation 812 the storage controller queues the write request in a primary cache in the storage cell. At operation 814 the storage controller sends an acknowledgment to the host computer indicating that the write operation is complete. At operation 816 the storage controller copies the write request from the primary cache to the secondary cache. After the write operation is committed to the secondary cache within the same storage device, the storage controller receives a write operation acknowledgment (operation 818) indicating that the write operation is complete. At operation 820 the storage controller copies the write data from the primary and secondary caches to respective first and second LUs within the same storage device.
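  • The FIG. 8 sequence acknowledges the host after the write is queued in the primary cache, then copies the write to the secondary cache and destages both caches; sketched under the same editorial assumptions:

```python
# Sketch of the FIG. 8 sequence (assumptions as above): the host is acknowledged
# after the write is queued in the primary cache; the copy to the secondary
# cache and the destage to the two LUs follow asynchronously.

primary_cache, secondary_cache, lu_1, lu_2 = [], [], {}, {}

def async_mirrored_write(block, data, ack_host):
    primary_cache.append((block, data))          # 812: queue in primary cache
    ack_host("write complete")                   # 814: acknowledge the host

def mirror_and_destage():
    secondary_cache.extend(primary_cache)        # 816: copy primary -> secondary
                                                 # 818: secondary cache ack
    for cache, lu in ((primary_cache, lu_1), (secondary_cache, lu_2)):
        while cache:                             # 820: destage caches to the LUs
            b, d = cache.pop(0)
            lu[b] = d

async_mirrored_write(0, b"data", print)
mirror_and_destage()
assert lu_1 == lu_2 == {0: b"data"}
```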
  • In one application, the operations illustrated in FIGS. 5-8 permit a storage cell in the position of the second storage site 920 in a three-site data replication architecture to perform internal data mirroring operations. This enables a fully synchronized three-site data recovery architecture, in which the data sets at both the second site and the third site are consistent with the data set at the primary data site. In addition, this permits the internal mirrors within the second data site to be fully data consistent while remaining consistent with the primary data site.
  • In another application, the operations of FIGS. 5-8 permit a remote mirror site to contain the secondary copy of an internal pair, while remaining geographically distant from the first storage site. U.S. Patent Application No. 2004/0024838 to Cochran, the disclosure of which is incorporated herein in its entirety, describes network architectures and operations for maintaining dominant and subordinate LUNs in remote copy operations. The operations of FIGS. 5-8 permit a subordinate LUN to maintain a secondary copy of the data as an internal pair. This secondary copy remains data consistent and available for data recovery, if required.
  • In addition to the specific embodiments explicitly set forth herein, other aspects and embodiments of the present invention will be apparent to those skilled in the art from consideration of the specification disclosed herein. It is intended that the specification and illustrated embodiments be considered as examples only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (19)

1. A storage network, comprising:
a first storage cell at a first location, the first storage cell including physical storage media and a storage controller that controls data transfer operations with the storage media;
a second storage cell at a second location, the second storage cell including physical storage media and a storage controller that controls data transfer operations with the storage media;
a third storage cell at a third location, the third storage cell including physical storage media and a storage media controller that controls data transfer operations with the storage media;
wherein:
write operations executed on the first storage cell are copied remotely in an ordered sequence to a cache memory in the second storage cell;
write operations in the cache memory are mirrored onto primary and secondary storage media in the second storage cell; and
write operations mirrored to the secondary storage media are copied remotely to the third storage cell.
2. The storage network of claim 1, wherein the first, second, and third storage cells are geographically distributed.
3. The storage network of claim 1, wherein the second storage cell comprises primary storage media and secondary storage media.
4. The storage network of claim 3, wherein the second storage cell comprises primary cache memory and secondary cache memory.
5. The storage network of claim 1, wherein write operations executed on the first storage cell are copied synchronously to a cache memory in the second storage cell.
6. The storage network of claim 5, wherein write operations copied to a cache memory in the second storage cell are subsequently written to a storage medium.
7. The storage network of claim 6, wherein write operations written to the storage medium in the second storage cell are copied asynchronously to a cache in the third storage cell.
8. The storage network of claim 7, wherein write operations copied to the cache in the third storage cell are subsequently written to a storage medium.
9. The storage network of claim 1, wherein write operations from the cache memory are written to the storage medium in the order executed at the first storage cell.
10. A method of managing write operations in a storage network, comprising:
executing a write operation at a first storage site in a storage network;
receiving the write request at a storage controller in a second storage site in the storage network;
storing the write request on first storage media in the second storage site; and
mirroring the write request onto a second storage media in the second storage site in an ordered sequence.
11. The method of claim 10, further comprising transmitting the write request to a third storage site in the storage network.
12. The method of claim 10, wherein mirroring the write request onto a second storage media in the second storage site in an ordered sequence comprises synchronously storing the write request in a first cache memory and a second cache memory.
13. The method of claim 12, wherein the storage controller acknowledges completion of the write request after the write request is stored in the first cache memory and the second cache memory.
14. The method of claim 12, further comprising blocking incoming write requests until the storage controller completes storing the write request in the first cache memory and the second cache memory.
15. The method of claim 10, wherein mirroring the write request onto a second storage media in the second storage site in an ordered sequence comprises copying the write request from the primary and secondary caches onto respective primary and secondary storage media.
16. The method of claim 11, further comprising storing the write request on storage media at the third storage site.
17. A storage controller, comprising:
an input port for receiving write operations from a first remote storage cell;
a processor for executing the write operations; and
an output port for forwarding the write operations to a second remote storage cell,
wherein the processor is configured to store the received write operations in cache memory in an ordered sequence corresponding to the order of execution of the write operations at the first remote storage cell.
18. The storage controller of claim 17, wherein the processor is further configured to copy write operations from the cache memory to respective first and second storage media.
19. The storage controller of claim 18, wherein the processor is further configured to transmit write operations executed on the second storage media to a remote storage site.
US10/945,183 2004-09-20 2004-09-20 Internal mirroring operations in storage networks Abandoned US20060064558A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/945,183 US20060064558A1 (en) 2004-09-20 2004-09-20 Internal mirroring operations in storage networks
JP2005266241A JP2006092535A (en) 2004-09-20 2005-09-14 Internal mirroring operation in storage network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/945,183 US20060064558A1 (en) 2004-09-20 2004-09-20 Internal mirroring operations in storage networks

Publications (1)

Publication Number Publication Date
US20060064558A1 true US20060064558A1 (en) 2006-03-23

Family

ID=36075333

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/945,183 Abandoned US20060064558A1 (en) 2004-09-20 2004-09-20 Internal mirroring operations in storage networks

Country Status (2)

Country Link
US (1) US20060064558A1 (en)
JP (1) JP2006092535A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6502205B1 (en) * 1993-04-23 2002-12-31 Emc Corporation Asynchronous remote data mirroring system
US20020016827A1 (en) * 1999-11-11 2002-02-07 Mccabe Ron Flexible remote data mirroring
US20020104008A1 (en) * 2000-11-30 2002-08-01 Cochran Robert A. Method and system for securing control-device-lun-mediated access to luns provided by a mass storage device
US6594745B2 (en) * 2001-01-31 2003-07-15 Hewlett-Packard Development Company, L.P. Mirroring agent accessible to remote host computers, and accessing remote data-storage devices, via a communcations medium
US20020103968A1 (en) * 2001-01-31 2002-08-01 Grover Rajiv K. Mirroring agent accessible to remote host computers, and accessing remote data-storage devices, via a communcations medium
US6606690B2 (en) * 2001-02-20 2003-08-12 Hewlett-Packard Development Company, L.P. System and method for accessing a storage area network as network attached storage
US20030074492A1 (en) * 2001-05-29 2003-04-17 Cochran Robert A. Method and system for efficient format, read, write, and initial copy processing involving sparse logical units
US20030079102A1 (en) * 2001-06-01 2003-04-24 Lubbers Clark E. System and method for generating point in time storage copy
US20020199073A1 (en) * 2001-06-11 2002-12-26 Keishi Tamura Method and system for backing up storage system data
US20030051111A1 (en) * 2001-08-08 2003-03-13 Hitachi, Ltd. Remote copy control method, storage sub-system with the method, and large area data storage system using them
US20030084241A1 (en) * 2001-10-22 2003-05-01 Lubbers Clark E. System and method for atomizing storage
US20030079092A1 (en) * 2001-10-23 2003-04-24 Cochran Robert A. Flexible allegiance storage and computing architecture
US20030120676A1 (en) * 2001-12-21 2003-06-26 Sanrise Group, Inc. Methods and apparatus for pass-through data block movement with virtual storage appliances
US20030145179A1 (en) * 2002-01-29 2003-07-31 Eran Gabber Method and apparatus for replicated storage
US7149919B2 (en) * 2003-05-15 2006-12-12 Hewlett-Packard Development Company, L.P. Disaster recovery system with cascaded resynchronization

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8006036B2 (en) * 2007-03-06 2011-08-23 Hitachi, Ltd. Storage system and data management method
US20080222359A1 (en) * 2007-03-06 2008-09-11 Hitachi, Ltd. Storage system and data management method
US8200897B2 (en) 2007-03-06 2012-06-12 Hitachi, Ltd. Storage system and data management method
US20100042795A1 (en) * 2007-05-01 2010-02-18 Fujitsu Limited Storage system, storage apparatus, and remote copy method
US8468314B2 (en) 2007-05-01 2013-06-18 Fujitsu Limited Storage system, storage apparatus, and remote copy method for storage apparatus in middle of plural storage apparatuses
US8732420B2 (en) 2008-07-08 2014-05-20 Hitachi, Ltd. Remote copy system and method
US8364919B2 (en) * 2008-07-08 2013-01-29 Hitachi, Ltd. Remote copy system and method
US20100011179A1 (en) * 2008-07-08 2010-01-14 Kazuhide Sano Remote copy system and method
US20100180094A1 (en) * 2009-01-09 2010-07-15 Fujitsu Limited Storage system, backup storage apparatus, and backup control method
US8862843B2 (en) 2009-01-09 2014-10-14 Fujitsu Limited Storage system, backup storage apparatus, and backup control method
US20100332756A1 (en) * 2009-06-30 2010-12-30 Yarch Mark A Processing out of order transactions for mirrored subsystems
US8909862B2 (en) * 2009-06-30 2014-12-09 Intel Corporation Processing out of order transactions for mirrored subsystems using a cache to track write operations
WO2017116844A1 (en) * 2015-12-28 2017-07-06 Netapp, Inc. Synchronous replication
US10496320B2 (en) 2015-12-28 2019-12-03 Netapp Inc. Synchronous replication

Also Published As

Publication number Publication date
JP2006092535A (en) 2006-04-06

Similar Documents

Publication Publication Date Title
US10965753B2 (en) Interconnect delivery process
US6073209A (en) Data storage controller providing multiple hosts with access to multiple storage subsystems
US8127088B2 (en) Intelligent cache management
US8205051B2 (en) Data processing system
US8990153B2 (en) Pull data replication model
JP4309354B2 (en) Write operation control in storage network
US7343517B2 (en) Systems for managing of system metadata and methods for recovery from an inconsistent copy set
US20060230243A1 (en) Cascaded snapshots
US20070094466A1 (en) Techniques for improving mirroring operations implemented in storage area networks and network based virtualization
US20070294314A1 (en) Bitmap based synchronization
EP1887470A2 (en) Backup system and method
US10872036B1 (en) Methods for facilitating efficient storage operations using host-managed solid-state disks and devices thereof
JP2006092535A (en) Internal mirroring operation in storage network
US7694079B2 (en) Tagged sequential read operations
JP2005216304A (en) Reproduction of data at multiple sites
US11221928B2 (en) Methods for cache rewarming in a failover domain and devices thereof
US11349924B1 (en) Mechanism for peer-to-peer communication between storage management systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COCHRAN, ROBERT A.;DAVIS, TITUS E.;REEL/FRAME:015824/0415

Effective date: 20040913

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION