US20030158869A1 - Incremental update control for remote copy - Google Patents
- Publication number
- US20030158869A1 (application Ser. No. 10/079,458)
- Authority
- US
- United States
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2074—Asynchronous techniques
- the present invention generally relates to remote database synchronization. More particularly, the present invention is directed to a system and method for providing asynchronous incremental database update from a primary site to a remote recovery site, which completely decouples database updates at the primary site from the transmission of the database updates to the remote recovery site, thereby facilitating efficient data backup of business-critical data and disaster recovery thereof.
- Efficient disaster recovery requires that updates to business-critical data at a primary site be synchronized at a location that is remote to the primary site (i.e., remote recovery site) in order to ensure safety of and uninterrupted access to the business-critical data.
- any updates since a last periodic backup may be lost, thus significantly impacting business operations.
- a key feature of the efficient disaster recovery is the frequency of resynchronization of the business-critical data from the primary site to the remote recovery site.
- resynchronization of data at a remote site principally involves two techniques: synchronous and asynchronous. Variants of the two techniques are also possible.
- in the synchronous technique, writes by an application host are forwarded to the remote site as part of the input/output (i.e., “I/O”) command processing.
- the application host writes await remote confirmation before signaling I/O completion to the application host.
- There is a write latency associated with the synchronous technique because the application host awaits completion confirmation, which is further exacerbated by a physical separation of the primary site from the remote recovery site.
- the synchronous technique is invariably limited to relatively short distances because of the detrimental effect of a round-trip propagation delay on the I/O response completion signaling. Furthermore, until the I/O response completion signaling is received at the primary site, the application host is unable to access the data at the primary site. In contrast to the synchronous technique, the asynchronous technique delivers application host writes over high-speed communication links to the remote recovery site while allowing the application host at the primary site to access the data. That is, the asynchronous technique signals I/O completion to the application host at the primary site before updating the remote recovery site.
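The trade-off between the two techniques can be illustrated with a minimal sketch (Python, purely illustrative; the class and method names are invented and do not appear in the patent):

```python
import collections

class ReplicatedVolume:
    """Toy model contrasting synchronous and asynchronous remote copy."""

    def __init__(self, remote_latency_ms):
        self.remote_latency_ms = remote_latency_ms  # round-trip delay to the remote site
        self.local_data = {}
        self.remote_data = {}
        self.pending = collections.deque()  # writes queued for async transmission

    def write_sync(self, track, data):
        """Synchronous: I/O completion is signaled only after the remote ack."""
        self.local_data[track] = data
        self.remote_data[track] = data           # simulate the remote transfer
        return self.remote_latency_ms            # host-visible latency grows with distance

    def write_async(self, track, data):
        """Asynchronous: complete immediately, queue the update for later transfer."""
        self.local_data[track] = data
        self.pending.append((track, data))       # may be lost in a disaster before draining
        return 0                                 # host sees no remote round-trip

    def drain(self):
        """Background transmission over the communication link."""
        while self.pending:
            track, data = self.pending.popleft()
            self.remote_data[track] = data
```

After `write_async`, the remote copy lags the local copy until `drain` runs, which is exactly the queued-write exposure the next passages describe.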
- the asynchronous technique is often utilized when the distance between the primary and remote recovery sites (as well as a possibly low-bandwidth telecommunication link) would introduce prohibitive latencies if performed synchronously.
- a long-distance communication link may become a bottleneck that forces local I/O writes to be queued for transmission to the remote site.
- the queuing of I/O writes at the primary site negatively affects efficient disaster recovery since the queued I/O writes may be destroyed in an above-described disaster before they are transmitted to the remote recovery site.
- the frequency for the resynchronization of the business-critical data from the primary site to the remote recovery site takes into account a space and a time dimension.
- the space dimension ultimately accounts for the amount of data
- the time dimension accounts for the time period when resynchronization occurs.
- a resynchronization that involves copying all of the data represents a full database backup, while an incremental database backup copies only a portion of the data that has changed since the last full or incremental database backup. Whether full or incremental, either backup method represents a time-consistent view of the data at the primary site.
- a particularly useful resynchronization system is the Peer-to-Peer Remote Copy (i.e., “PPRC”) system offered by International Business Machines Corporation (i.e., “IBM”), the assignee of the subject patent application.
- the PPRC provides synchronous copying of database updates from a primary Direct Access Storage Device (i.e., DASD) controller at a primary site to a remote DASD controller at the remote recovery site.
- the PPRC system includes a primary controller and an associated primary DASD at the primary site and a remote controller and an associated DASD at the remote recovery site.
- each of the controllers includes a non-volatile storage (i.e., “NVS”) for maintaining data in the event of power or system failure.
- the data is first written (or buffered) to the NVS of the primary controller at the primary site; the data is then transferred to the NVS in the remote controller at the remote recovery site.
- the data at the primary and remote NVS is destaged to the attached DASD storage devices (i.e., disk), i.e., the data is written from the NVS to the associated DASD storage device.
- a single DASD storage device may include more than one volume, or a single volume may span more than one DASD storage device.
- the remote recovery site's DASD volume(s) are synchronously updated with data updates to the primary DASD volume(s).
- One persistent problem with the PPRC system is that the volumes, which are synchronized between the primary and remote DASD storage devices, are unavailable for use while the PPRC data updates are serviced.
- the PPRC system does not consider the transfer of data to the remote recovery site complete until all the data updated at the DASD of the primary site has been updated at the DASD of the remote recovery site.
- data updates to the DASD of the primary site invariably delay response times to user requests to the volumes involved in the data updates because synchronous updates must be made to the DASD of the remote recovery site before the volumes involved in the updates are available to service the user requests. Response time delays may occur with respect to user requests to the DASD of the primary and remote recovery sites.
- the user requests to volumes of either the primary or remote recovery site's DASD are subject to the data updates between the primary and the remote recovery sites and must therefore wait until the completion of the data updates before the requests can access the updated data. Therefore there is a need in the art for providing a system and method that efficiently performs asynchronous incremental database updates from a primary site to a remote recovery site, thereby completely decoupling data updates at the primary site from the transmission of the data updates to the remote recovery site.
- a method for asynchronously transmitting one or more incremental database updates from a primary volume at a primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link comprising the steps of: destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the primary volume of the primary site are decoupled from transmission of the one or more incremental database updates to the remote volume at the remote site
- a system for asynchronously transmitting one or more incremental database updates from a primary volume at a primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link comprising: a local controller associated with the primary site comprising: a means for destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; a means for transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and a means for synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the primary volume of the primary site
- a controller associated with a primary site for asynchronously transmitting one or more incremental database updates from a primary volume at the primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link, the controller comprising: means for destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; means for transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and means for synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the primary volume of the primary site are decoupled from transmission of the one or more
- a program storage device tangibly embodying a program of instructions executable by a machine to perform a method for asynchronously transmitting one or more incremental database updates from a primary volume at a primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link, the method comprising the steps of: destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the
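The three claimed steps (destaging with bitmap recording, bitmap transfer, and synchronization) can be sketched as follows. This is an illustrative Python model only, not the patented controller implementation; all names are invented, and tracks are reduced to dictionary entries:

```python
class IncrementalRemoteCopy:
    """Sketch of the claimed bitmap-driven incremental update."""

    def __init__(self, track_count):
        self.primary = {}                       # primary volume contents, keyed by track
        self.remote = {}                        # remote volume contents
        self.first_bitmap = [0] * track_count   # records tracks overwritten by destaging
        self.second_bitmap = [0] * track_count  # drives transmission to the remote site

    def destage(self, track, data):
        """Step 1: destage modified data and flag the overwritten track."""
        self.primary[track] = data
        self.first_bitmap[track] = 1

    def transfer_bitmap(self):
        """Step 2: hand the modified-track map to the transmission side."""
        self.second_bitmap = self.first_bitmap[:]
        self.first_bitmap = [0] * len(self.first_bitmap)  # begin a new increment

    def synchronize(self):
        """Step 3: transmit only the tracks flagged in the second bitmap."""
        for track, flagged in enumerate(self.second_bitmap):
            if flagged:
                self.remote[track] = self.primary[track]
        self.second_bitmap = [0] * len(self.second_bitmap)
```

Note that destages arriving after `transfer_bitmap` land in the first bitmap for the next increment; this is the decoupling of primary-site updates from their transmission that the claims describe.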
- FIG. 1 is an exemplary system diagram for accomplishing asynchronous incremental database update from a primary site to a remote recovery site according to the present invention
- FIG. 2 is an exemplary method flowchart illustrating an initial setup performed for enabling the asynchronous incremental database update from a primary site to a remote recovery site of FIG. 1 according to the present invention
- FIG. 3 is an exemplary method flowchart illustrating the asynchronous incremental database update from a primary site to a remote recovery site according to the present invention
- FIG. 4 is an exemplary relationship table for representing the relationship between a pair of volumes of a FlashCopy pair at the primary site according to the present invention.
- FIG. 5 is a more detailed system diagram of the exemplary system in FIG. 1 for accomplishing the asynchronous incremental database update from a primary site to a remote recovery site according to the present invention.
- the present invention is directed to a method and system for providing remote asynchronous incremental data update. More particularly, the present invention is directed to providing an efficient mechanism for updating a remote copy of a database with asynchronous incremental updates to a local database, in which the data updates at the primary site are completely decoupled from the transmission of the data updates to the remote recovery site.
- FIG. 1 is an exemplary system diagram of a Remote FlashCopy system 100 for accomplishing the asynchronous incremental database update from a primary site 101 to a remote recovery site 103 according to the present invention.
- the Remote FlashCopy system 100 utilizes FlashCopy (i.e., “FC”) technology coupled with peer-to-peer remote copy (i.e., “PPRC”) technology to provide an asynchronous data update (i.e., database update) that obviates the above-identified limitations of the prior art.
- the Remote FlashCopy system 100 is asynchronous because the application host 102 at the primary site 101 does not have to wait for the data updates to be recorded at the remote recovery site 103 before an ending status for the data updates at the primary site 101 is presented to the application host 102 , i.e., ending status being presented immediately upon update of Volume A 106 .
- FIG. 1 there are depicted four volumes (i.e., logically designated as volumes A, B, C, D), which are utilized for above-identified asynchronous migration, backup and disaster recovery solution.
- the Remote FlashCopy system 100 is enabled to keep data on Volume D 124 time-consistent or synchronized with data on Volume A 106 , where both volumes are remote to one another.
- a peer-to-peer connection between Volume B 108 of the primary site 101 and Volume C 122 of the remote recovery site 103 is accomplished via two channel extenders 114 and at least one communication link 116 .
- the channel extenders 114 enable the peer-to-peer connection between Volume B 108 and Volume C 122 over longer distances. It is noted that the channel extenders 114 are not required, and the peer-to-peer connection between Volume B 108 and Volume C 122 may directly be established via the at least one communication link 116.
- the at least one communication link 116 may include any suitable communication links known in the art, including channel links, T1/T3 links, Fibre channel, International Business Machines (i.e., “IBM”) Enterprise System Connection (i.e., “ESCON”) links, and the like.
- the primary site 101 and remote recovery site 103 are depicted in FIG. 1 as being in remote geographic locations with regard to one another, one skilled in the art understands that the primary and the remote recovery sites may be located anywhere with respect to each other, i.e., at the same geographic location, at locations a short distance apart, or further at locations a long distance apart.
- the primary site 101 comprises the application host 102 , which is coupled to Volume A 106 and which updates the data stored on Volume A 106 .
- the primary site 101 further comprises a local Logical Subsystem (i.e., local “LSS”) 104 that includes a local FlashCopy pair of volumes, such as volumes A 106 and B 108 , which is involved in the data update at the primary site 101 and which also facilitates data update at the remote recovery site 103 .
- LSS 104 includes local DASD controller (depicted in FIG. 5 and described below) for managing access to both Volume A 106 and Volume B 108 .
- Volume A 106 is a source volume
- volume B 108 is a target volume.
- the remote recovery site 103 comprises a recovery host 118 that is coupled to Volume D 124 .
- the recovery host 118 may immediately begin accessing data from Volume D 124 at the remote recovery site 103 to recover from the failure or disaster of the primary site 101 .
- the remote recovery site 103 similarly further comprises a remote Logical Subsystem 120 (i.e., remote “LSS”) that includes a remote FlashCopy pair of volumes, such as volumes C 122 and D 124 .
- the remote LSS 120 also includes a remote DASD controller (depicted in FIG. 5 and described below) for managing access to Volume C 122 and Volume D 124 .
- Volume C 122 is a source volume and volume D 124 is a target volume.
- Volume B 108 of the primary site 101 and Volume C 122 of the remote recovery site 103 form a peer-to-peer remote copy (i.e., “PPRC”) pair in which Volume B 108 is a primary volume and Volume C 122 is a secondary volume.
- the local LSS 104 of primary site 101 and the remote LSS 120 of the remote recovery site 103 are depicted as being remote to one another, one skilled in the art understands that the local and remote subsystems 104 , 120 may be located anywhere with respect to one another, i.e., at the same location, or at locations a short distance apart, or further at locations a long distance apart, as particularly illustrated in the exemplary FIG. 1.
- in addition to managing data updates at the primary site 101 , the Remote FlashCopy system 100 further controls asynchronous incremental data updates at the remote recovery site 103 .
- the method for managing remote data updates at the remote recovery site is described in greater detail below with reference to FIGS. 2 and 3.
- At this point, an overview of the operation of the Remote FlashCopy system 100 is presented for context and clarity. At first, an initial copy of the database (or portions thereof) included in Volume A 106 is made to Volume D 124 , as will be described below with respect to FIG. 2.
- a FlashCopy from Volume A to Volume B sets all bits in the FlashCopy Bitmap 110 on Volume B to ‘ones’, thereby initializing the FlashCopy bitmap.
- the bits in the FlashCopy bitmap 110 represent the tracks of data on Volume A 106 that are updated.
- the setting of all bits to ‘ones’ represents the fact that all data is to be copied, and also represents the fact that all data to be copied is stored on Volume A.
- the FlashCopy performs a byte-for-byte virtual copy of data from Volume A 106 to Volume B 108 , i.e., no physical data is copied from Volume A to Volume B.
- the FlashCopy bitmap 110 on Volume B 108 represents a frozen image of data on Volume A 106 at a particular point in time, such as time T 0 .
- the FlashCopy bitmap 110 indicates the data on Volume A 106 that has changed since a last FlashCopy and further indicates the data that will be sent from the primary site 101 to the remote recovery site 103 .
- the FlashCopy bitmap 110 includes all ‘ones’, which indicates that all data is to be transferred to Volume C and that the data is stored on Volume A.
- the FlashCopy bitmap 110 is converted into a peer-to-peer remote copy (i.e., “PPRC”) bitmap 112 on Volume B 108 , which exists in a PPRC state with Volume C 122 , i.e., a PPRC session being established between Volume B 108 and Volume C 122 .
- each bit of the FlashCopy bitmap 110 is inverted into the PPRC bitmap 112 .
- the conversion inverts all ‘ones’ to all ‘zeroes’ in the PPRC bitmap 112 .
- the PPRC bitmap 112 is set up in order to transfer the data represented as changed on Volume A in the FlashCopy bitmap 110 to Volume C 122 via the PPRC session.
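The bitmap conversion can be sketched in a single Python function (illustrative only; a FlashCopy ‘0’, meaning the track changed since the last FlashCopy, becomes a PPRC ‘1’, meaning the track must be transmitted):

```python
def flashcopy_to_pprc(fc_bitmap):
    """Invert each FlashCopy bit into the corresponding PPRC bit:
    FC 0 (track changed since last FlashCopy) -> PPRC 1 (transmit track),
    FC 1 (track unchanged, data still on the source) -> PPRC 0 (skip)."""
    return [1 - bit for bit in fc_bitmap]
```

For example, a freshly initialized all-‘ones’ FlashCopy bitmap inverts to an all-‘zeroes’ PPRC bitmap, matching the description above.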
- a FlashCopy is performed from Volume C 122 to Volume D 124 , wherein Volume D 124 is time-consistent with Volume A 106 at time T 0 .
- Following the initial copy of the data from Volume A 106 to Volume D 124 , updates to the data on Volume A 106 are recorded in the FlashCopy bitmap 110 on Volume B 108 by setting a corresponding bit to a ‘zero’. That is, a ‘zero’ in the FlashCopy bitmap 110 indicates that Volume B 108 includes the data to be updated, whereas a ‘one’ represents that Volume B 108 does not include the data to be updated and that this data is instead included on Volume A 106 . It should be noted that the data on Volume A 106 is copied to Volume B 108 upon demand, i.e., when the particular data on Volume A 106 is to be overwritten (i.e., updated) with updated data upon destaging.
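The on-demand (copy-on-write) behavior described above can be sketched as follows (an illustrative Python sketch; the function name and dictionary-based volumes are invented for exposition):

```python
def overwrite_track(track, new_data, volume_a, volume_b, fc_bitmap):
    """Copy-on-write update of a FlashCopy source track (illustrative sketch).

    A '1' in the FlashCopy bitmap means the point-in-time data still lives on
    the source volume; before the overwrite, that data is preserved on the
    target volume and the bit is cleared to '0'."""
    if fc_bitmap[track] == 1:
        volume_b[track] = volume_a[track]  # preserve the point-in-time image on demand
        fc_bitmap[track] = 0
    volume_a[track] = new_data             # the overwrite can now proceed
```

A second overwrite of the same track copies nothing, since the target volume already holds the point-in-time image.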
- the FlashCopy on Volume B represents all of the changes to volume A with relationship to Volume B, where the data may either be located on Volume A or Volume B as particularly represented by the FlashCopy bitmap 110 .
- destaging is a process of asynchronously writing modified data from a nonvolatile cache to a disk in the background while read/write requests from a host system are serviced in the foreground. It should be noted that destaging may be based upon occupancy of the nonvolatile cache, such as when the cache is full, or may be user-initiated.
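A minimal sketch of occupancy-driven destaging follows (illustrative Python only; the threshold-based trigger and all names are assumptions for exposition, not the patent's cache management):

```python
def destage_if_needed(nvs_cache, volume, threshold, force=False):
    """Write modified tracks from the NVS cache to the disk volume when
    occupancy reaches a threshold, or when explicitly requested
    (user-initiated). The cache is modeled as a dict of dirty tracks."""
    if force or len(nvs_cache) >= threshold:
        for track, data in list(nvs_cache.items()):
            volume[track] = data        # background write to the DASD volume
            del nvs_cache[track]
```

Foreground reads and writes would continue against the cache while this background write-out runs.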
- the destaging is preferably automatically initiated. More particularly, with reference to FIG. 1, modified data that is cached in a non-volatile store (i.e., NVS) memory associated with the local logical subsystem 104 is destaged (i.e., written) to Volume A 106 during the initialization of the FlashCopy of Volume A 106 to Volume B 108 .
- the destaging of modified data causes a ‘one’ bit in the FlashCopy bitmap 110 in Volume B to be changed to a ‘zero’ bit by moving modified data from Volume A 106 to Volume B 108 .
- the FlashCopy bitmap 110 is then transferred to a PPRC bitmap 112 by inverting the bits in the FlashCopy bitmap 110 into the PPRC bitmap 112 . Therefore, the ‘ones’ in the PPRC bitmap 112 indicate that the tracks of data associated with these bits have been updated with the modified data and that the modified data is to be transmitted to Volume C 122 in order to synchronize Volume A 106 to Volume D 124 . More particularly, when the PPRC bitmap is updated, the application host 102 may read/write Volume A 106 since Volume B 108 includes a consistent point-in-time copy of Volume A 106 .
- a FlashCopy of Volume C 122 to Volume D 124 is performed, in which a virtual copy of data is performed as described above with regard to the FlashCopy of Volume A 106 to Volume B 108 .
- a FlashCopy Bitmap 126 records any updates received by and destaged at Volume C 122 .
- Volume D 124 has a time-consistent copy of the modified data at Volume B 108 , which represents Volume A 106 , i.e., thereby Volume A and Volume D are time-synchronized at that point in time.
- time-synchronized copies may be staggered for performing the incremental update more efficiently, according to the present invention.
- FIG. 2 is an exemplary method flowchart 200 illustrating the initial setup that enables the asynchronous incremental database update from the primary site 101 to the remote recovery site 103 illustrated in FIG. 1, according to the present invention.
- the application host 102 utilizes the data (i.e., reads, writes and updates the data) on Volume A 106 for business-critical purposes.
- the method for the initial setup according to the present invention begins at step 202 , where the application host 102 via a control manager (not shown) initiates an initial copy of data (total database or a subset thereof) at Volume A 106 to Volume D 124 (or a refresh of the database at Volume D 124 ).
- a user may implement via the control manager update policies at the application host 102 regarding database updates.
- the policies may include starting the copying at user discretion, copying at scheduled or specific times, and copying that is cycled periodically upon completion of previous iterations.
- the time it takes to copy the entire database is determined by a total number of volumes that the database spans, the physical distance between the primary site 101 and the remote recovery site 103 (i.e., distance between Volume B 108 and Volume C 122 ), and a number of communication links 116 between the sites 101 and 103 .
- for example, a 12-terabyte database with four IBM ESCON communication links 116 between the primary site 101 and the remote recovery site 103 , each link 116 running at a rate of 12.5 megabytes per second, will take about 3 days to copy.
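The 3-day figure can be checked directly (a back-of-the-envelope Python calculation, assuming decimal units for terabytes and megabytes):

```python
# Rough transfer-time check for the example above (decimal units assumed).
database_bytes = 12e12            # 12 terabytes
links = 4                         # four communication links
rate_per_link = 12.5e6            # 12.5 megabytes per second per link
seconds = database_bytes / (links * rate_per_link)
days = seconds / 86400            # roughly 2.8 days, i.e. about 3 days
```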
- at step 204 , Volume A 106 is accessed as a FlashCopy source and application volume by the application host 102 .
- a FlashCopy of Volume A 106 to Volume B 108 is then performed at step 206 , which sets up a relationship between Volume A and Volume B with regard to the data on Volume A 106 .
- the FlashCopy performs byte-for-byte virtual copy of data from Volume A 106 to Volume B 108 .
- the FlashCopy sets all the bits in the FlashCopy bitmap 110 on Volume B to ‘ones’ (i.e., initializing the FlashCopy bitmap), which represents that all of the data is stored on Volume A 106 and no data is stored on Volume B 108 .
- a relationship table illustrated in FIG. 4 below is also set up on Volume B, which among other things, identifies Volume A 106 and Volume B 108 and provides pointers to the location of data on Volume A 106 and where the data is to be copied on Volume B 108 . The relationship table will be described in further detail below with reference to FIG. 4.
- any updates by the application host 102 to the data in the database on Volume A 106 are maintained in the FlashCopy bitmap 110 and the relationship table at step 208 . That is, upon demand to overwrite (i.e., update) data on Volume A by the application host 102 , the data to be overwritten is copied from Volume A to Volume B 108 and the FlashCopy bitmap 110 is updated for the one or more tracks representing the updated data from a ‘one’ to a ‘zero’ in the FlashCopy bitmap 110 .
- the FlashCopy preserves the state of the data at the time when FlashCopy was initiated, i.e., time T 0 , by physically copying the data from Volume A to Volume B before any update to that data is possible on Volume A.
- the relationship table illustrated below in FIG. 4 is updated with a number of tracks of data to copy. Therefore, the FlashCopy bitmap 110 represents the changes to the relationship between Volume A 106 and Volume B 108 with regard to the data.
- at step 212 , Volume B 108 is accessed as a PPRC primary volume and Volume C 122 is accessed as a PPRC secondary volume, i.e., a PPRC connection or session between Volume B 108 and Volume C 122 is thereby established.
- a PPRC copy of all data from Volume B 108 to Volume C 122 is performed. Since this is the initial or first copy of data, the PPRC sets the PPRC bitmap 112 to all ‘ones’ to represent that all data is to be copied during the initial copy of data. In operation, the PPRC inspects the PPRC bitmap 112 to identify which tracks of data are to be copied (in this case all data represented by ‘ones’) to the remote Volume C 122 , and then transfers the data from the identified storage locations (i.e., tracks) to volume C 122 .
- it is noted that at the initialization of the FlashCopy bitmap 110 at step 206 , all of the data is stored on Volume A 106 , as represented by all ‘ones’ in the FlashCopy bitmap 110 . Therefore, during the PPRC session, PPRC reads Volume B 108 and inspects the FlashCopy bitmap 110 to determine whether the data is on Volume A 106 or Volume B 108 . All data is copied through Volume B 108 according to step 214 as specified in the FlashCopy bitmap 110 , which specifies that all data is stored on Volume A 106 . At step 216 , it is determined whether the PPRC copy of all data from Volume B 108 to Volume C 122 is complete.
- if the PPRC copy of data to Volume C 122 is not complete, the method continues at step 214 . Otherwise, the method continues at step 218 , where a FlashCopy of Volume C 122 to Volume D 124 is performed.
- the FlashCopy virtually copies the data on Volume C 122 to Volume D 124 .
- Volume D 124 also includes a FlashCopy bitmap 126 that represents where the data is actually stored, i.e., whether on Volume C 122 or Volume D 124 .
- at step 220 , it is determined whether the FlashCopy of Volume C 122 to Volume D 124 is complete. If the FlashCopy is complete, the method ends; otherwise, the method continues at step 218 .
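The initial setup of FIG. 2 (steps 206 through 218) can be condensed into an illustrative sketch (Python; volumes are modeled as dictionaries and the virtual FlashCopy is reduced to bitmap initialization, so this is exposition rather than the patented method):

```python
def initial_setup(volume_a, volume_b, volume_c, volume_d, tracks):
    """Sketch of FIG. 2's initial setup, with volumes modeled as dicts."""
    # Step 206: FlashCopy A -> B; all 'ones' means all data still lives on A.
    fc_bitmap = [1] * tracks
    # Steps 212-214: PPRC initial copy; all 'ones' means copy every track.
    pprc_bitmap = [1] * tracks
    for track in range(tracks):
        if pprc_bitmap[track]:
            # FC bit '1': read through to Volume A; '0': read from Volume B.
            source = volume_a if fc_bitmap[track] else volume_b
            volume_c[track] = source[track]
    # Step 218: FlashCopy C -> D (virtual; the bitmap says all data is on C).
    fc_bitmap_remote = [1] * tracks
    return fc_bitmap, pprc_bitmap, fc_bitmap_remote
```

After this sequence the remote side holds a time-consistent image of the primary volume, and subsequent increments need only transmit changed tracks.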
- FIG. 3 is an exemplary method flowchart 300 illustrating the asynchronous incremental database update from the primary site 101 to the remote recovery site 103 of FIG. 1 according to the present invention, after the initial setup is performed according to FIG. 2.
- the method of FIG. 3 represents a staggered sequence of time-consistent incremental updates (i.e., increments), such as at times T 0 , T 1 and T 2 , as will be pointed out in the following description. Staggering allows for efficiently performing the incremental update according to the present invention.
- the method begins at step 302 , when the application host 102 initiates a time-consistent incremental update of the data (i.e., an increment) on Volume A 106 at time T 1 , i.e., a current increment. It is noted that the time-consistent copy of data according to FIG. 2 represents a time-consistent increment at time T 0 . With regard to the aforementioned control manager, a user via the control manager of the application host 102 initiates the current increment. It should be noted that at this point, application host 102 updates of data (i.e., host I/O) on Volume A 106 are blocked until completion of step 318 , which is described below.
- modified data is destaged from the NVS to Volume A 106 , which forces an update to the FlashCopy bitmap 110 .
- a FlashCopy relationship between Volume A 106 and Volume B 108 , initializing the FlashCopy bitmap 110 , was established at step 206 of FIG. 2.
- the data on Volume A 106 that is to be overwritten with the modified data from the NVS is copied from Volume A 106 to Volume B 108 and the bits in FlashCopy bitmap 110 associated with the copied tracks of data are updated to ‘zeroes’ in the FlashCopy bitmap 110 .
- the data thus copied to Volume B 108 represents a previous increment in time, i.e., increment at time T 0 .
- the FlashCopy bitmap 110 is inspected, and data is copied from Volume A 106 to Volume B 108 only if the bits in the FlashCopy bitmap 110 associated with the tracks of data are set to ‘ones’, which means that the data prior to being overwritten is transferred to Volume B 108 .
- if the bits in the FlashCopy bitmap 110 associated with the tracks are ‘zeroes’, then the data to be overwritten is not copied to Volume B 108 , since Volume B already includes a copy of the data.
- the FlashCopy bitmap 110 thus represents a time-consistent increment of the data at Volume A at the current increment time T 1 .
- Volume B 108 physically includes data copied from Volume A 106 before it is overwritten during destaging of Volume A 106 .
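The copy-on-write destage mechanics described above can be sketched in a few lines (a minimal illustration, not actual controller microcode: volumes are modeled as Python dicts keyed by track number, and the FlashCopy bitmap as a list of bits):

```python
# Illustrative model of the copy-on-write destage: a '1' in the FlashCopy
# bitmap means the track's prior contents still reside only on Volume A,
# so they must be copied to Volume B before the overwrite.
NUM_TRACKS = 8

volume_a = {t: f"T0-data-{t}" for t in range(NUM_TRACKS)}
volume_b = {}                          # physically holds only copied tracks
flashcopy_bitmap = [1] * NUM_TRACKS    # all 'ones' after FlashCopy establish

def destage(track, modified_data):
    """Destage modified data from the NVS to Volume A, preserving the
    prior (time T0) contents of the track on Volume B on first overwrite."""
    if flashcopy_bitmap[track] == 1:
        volume_b[track] = volume_a[track]   # copy old data to Volume B
        flashcopy_bitmap[track] = 0         # T0 image of track now on Volume B
    volume_a[track] = modified_data         # overwrite Volume A

destage(2, "T1-data-2")
destage(2, "T1-data-2b")   # second overwrite: no further copy to Volume B
```

On the second destage of track 2 the bitmap bit is already ‘zero’, so Volume B keeps the time-T 0 image untouched, matching the behavior described above.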
- step 308 it is determined whether a previous PPRC resynchronization from Volume B 108 to Volume C 122 is complete, i.e., such as at step 216 described above with reference to FIG. 2 or step 324 described below with reference to FIG. 3.
- the resynchronization at step 308 represents a previous increment, i.e., increment at time T 0 .
- at step 310 it is further determined whether to wait for the resynchronization at time T 0 to complete or to end the resynchronization. Generally, the resynchronization will be allowed to complete at step 308 . Alternatively, if for instance the at least one communication link 116 between the primary site 101 and the remote recovery site 103 fails, the increment at time T 0 is ended and the method continues to step 326 . At step 312 , the updated FlashCopy bitmap 110 for the current increment (i.e., time T 1 ) is then transferred (i.e., bits in the bitmap representing tracks of data are inverted) to a PPRC bitmap 112 at Volume B 108 , which serves as a PPRC primary volume.
- a FlashCopy of Volume A 106 to Volume B 108 is then performed. It is noted that this FlashCopy is now for a subsequent increment at time T 2 , since the FlashCopy bitmap 110 for the current increment at time T 1 has now been transferred to PPRC bitmap 112 at step 312 , which now includes bits representing the tracks of data that have been updated for the current increment at time T 1 .
- step 318 the FlashCopy bitmap 110 for Volume A 106 is restored or initialized to all ‘ones’, and the application host 102 is allowed to resume updating data (i.e., host I/O) on Volume A 106 , which was blocked at step 302 described above.
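Steps 312 and 318 can be sketched as two small bitmap operations (an illustrative model only; in the real system the bitmaps live in controller memory and the transfer is performed by the microcode):

```python
# Illustrative model of steps 312 and 318. Transferring the FlashCopy
# bitmap inverts each bit into the PPRC bitmap: a '0' (track changed since
# the last increment) becomes a '1' (track to transmit to the remote site).

def transfer_to_pprc_bitmap(flashcopy_bitmap):
    """Step 312: invert the FlashCopy bitmap into the PPRC bitmap."""
    return [1 - bit for bit in flashcopy_bitmap]

def reinitialize(flashcopy_bitmap):
    """Step 318: restore the FlashCopy bitmap to all 'ones' so that
    host I/O on Volume A may resume for the next increment."""
    return [1] * len(flashcopy_bitmap)

fc_bitmap = [1, 0, 0, 1, 1, 0]          # '0' bits: tracks updated at time T1
pprc_bitmap = transfer_to_pprc_bitmap(fc_bitmap)
fc_bitmap = reinitialize(fc_bitmap)
```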
- a FlashCopy of Volume C 122 to Volume D 124 is performed for a previous time increment, i.e. increment at time T 0 .
- the FlashCopy may perform a virtual copy, which does not physically copy any tracks of data from Volume C 122 to Volume D 124 .
- data at Volume D 124 is a virtual copy of data on Volume A 106 at time T 0 .
- a resynchronization is performed, i.e., a PPRC copy of data indicated as changed in the PPRC bitmap 112 is performed from Volume B 108 to Volume C 122 for time increment T 1 , i.e., the current time increment.
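The resynchronization step can be sketched as follows (illustrative only; the PPRC transmission over the communication link is modeled here as a simple dictionary assignment):

```python
# Illustrative model of the PPRC resynchronization: only tracks whose
# PPRC bitmap bit is '1' are transmitted from Volume B (PPRC primary)
# to Volume C (PPRC secondary), and each bit is cleared once sent.

def resynchronize(pprc_bitmap, volume_b, volume_c):
    """Copy tracks flagged in the PPRC bitmap from Volume B to Volume C."""
    sent = []
    for track, bit in enumerate(pprc_bitmap):
        if bit == 1:
            volume_c[track] = volume_b[track]  # transmit over the PPRC link
            pprc_bitmap[track] = 0             # track is now in sync
            sent.append(track)
    return sent

volume_b = {0: "old-0", 1: "new-1", 2: "new-2"}
volume_c = {0: "old-0", 1: "old-1", 2: "old-2"}
pprc_bitmap = [0, 1, 1]
sent = resynchronize(pprc_bitmap, volume_b, volume_c)
```

Only the flagged tracks cross the link, which is what makes the update incremental rather than a full copy.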
- the method 300 waits for the next incremental update (i.e., increment at time T 2 ) and continues to maintain the FlashCopy bitmap 110 , which was initialized at step 318 , as the application host 102 updates Volume A after the incremental update at increment time T 1 .
- the incremental database update method ends at step 328 .
- a FlashCopy of Volume C 122 to Volume D 124 for the current increment at time T 1 is performed at step 320 .
- the incremental database update illustrated in FIG. 3 may be repeated an indefinite number of times, such as at increment times T 1 , T 2 , T 3 . . . Tn, where n is indefinite.
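One complete increment cycle, under simplifying assumptions (volumes modeled as dicts keyed by track, bitmaps as bit lists, and the virtual FlashCopy operations modeled as plain copies), might be sketched end to end as:

```python
# Illustrative end-to-end model of one incremental update cycle.
NUM_TRACKS = 4
volume_a = {t: f"v0-{t}" for t in range(NUM_TRACKS)}
volume_b = dict(volume_a)   # time-T0 image (modeled as a physical copy)
volume_c = dict(volume_a)   # PPRC secondary after the initial copy
volume_d = dict(volume_a)   # remote recovery image at time T0
fc_bitmap = [1] * NUM_TRACKS

# 1. Host updates are destaged to Volume A; prior data moves to Volume B
#    and the corresponding FlashCopy bitmap bits flip to '0'.
for track, data in [(1, "v1-1"), (3, "v1-3")]:
    if fc_bitmap[track] == 1:
        volume_b[track] = volume_a[track]
        fc_bitmap[track] = 0
    volume_a[track] = data

# 2. Transfer (invert) the FlashCopy bitmap into the PPRC bitmap, then
#    reinitialize the FlashCopy bitmap for the next increment.
pprc_bitmap = [1 - b for b in fc_bitmap]
fc_bitmap = [1] * NUM_TRACKS

# 3. New FlashCopy A->B: Volume B now represents the time-T1 image.
volume_b = dict(volume_a)

# 4. Resynchronize: send only flagged tracks from Volume B to Volume C.
for track, bit in enumerate(pprc_bitmap):
    if bit:
        volume_c[track] = volume_b[track]
        pprc_bitmap[track] = 0

# 5. FlashCopy C->D: Volume D is now time-consistent with A at time T1.
volume_d = dict(volume_c)
```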
- FIG. 4 is an exemplary relationship table 400 for representing a relationship between Volume A 106 and Volume B 108 of a FlashCopy pair at the primary site 101 according to the present invention.
- FIG. 4 particularly illustrates fields 402 - 408 in the relationship table 400 that are maintained by the DASD controller associated with the local LSS 104 of the primary site 101 .
- the relationship table 400 is generated at step 204 in FIG. 2.
- the relationship table 400 is accessed when the application host 102 updates Volume A 106 to determine target addresses for the updates on Volume B 108 .
- the first field, the target/source device address 402 identifies the address of the source DASD and target DASD involved in the copy operations.
- the second field, the source start field 404 identifies the first track in the source DASD from where data is to be copied.
- the third field, the number of tracks field 406 indicates the number of tracks to be copied.
- the fourth field, the target start field 408 indicates the first track to which data is copied in the target DASD. It should be noted that additional fields may be provided in the relationship table 400 as may be required for specific applications.
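A relationship-table entry covering the four fields described above might be modeled as follows (the field names and the track-mapping helper are assumptions for illustration, not the controller's actual layout):

```python
# Illustrative model of one relationship-table entry. Comments map each
# attribute to the fields described in the text; the mapping helper is a
# hypothetical convenience, not part of the table itself.
from dataclasses import dataclass

@dataclass
class RelationshipEntry:
    target_source_device: str   # first field: source/target DASD addresses
    source_start: int           # second field: first source track to copy
    number_of_tracks: int       # third field: how many tracks to copy
    target_start: int           # fourth field: first target track written

    def target_track_for(self, source_track):
        """Map a source track to its target track, if within the extent."""
        offset = source_track - self.source_start
        if 0 <= offset < self.number_of_tracks:
            return self.target_start + offset
        return None

entry = RelationshipEntry("A->B", source_start=100,
                          number_of_tracks=50, target_start=700)
```

Such an entry is what the controller would consult when the application host updates Volume A, to determine the target addresses for the updates on Volume B.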
- FIG. 5 is a more detailed system diagram 500 of the exemplary Remote FlashCopy system 100 of FIG. 1 for accomplishing the asynchronous incremental database update from a primary site to a remote recovery site according to the present invention.
- FIG. 5 particularly illustrates exemplary DASD controller units (i.e., controllers) 502 and 516 associated respectively with the local LSS 104 for the primary site 101 and the remote LSS 120 for the remote site 103 .
- the DASD controllers 502 and 516 include microcode for performing the asynchronous incremental database updates according to the present invention.
- each of the respective DASD controllers 502 and 516 includes an internal disk (not shown) that is specifically used by each respective controller for storing the microcode and loading the microcode into processor memory (not shown) associated with each DASD controller for execution.
- the local DASD controller 502 includes a host adapter 504 for enabling communication (i.e., read/write/update of data) between the application host 102 and the local DASD controller 502 .
- the remote DASD controller 516 likewise includes a host adapter 520 for enabling communication (i.e., read/write/update of data) between the recovery host 118 and the remote DASD controller 516 .
- the DASD controllers 502 and 516 include PPRC adapters 506 and 518 for establishing a PPRC session to enable transmission of database updates from the primary site 101 to the remote recovery site 103 according to the present invention.
- Cache 510 in the local DASD controller 502 caches the most recently accessed data from Volume A 106 , thereby providing improved performance of the application host 102 since data may be obtained from the cache 510 instead of the associated Volume A 106 if there is a cache hit.
- NVS 512 of the local DASD controller 502 buffers modified data until it is written to the associated primary Volume A 106 .
- Cache 524 and NVS 522 of the remote DASD controller 516 provide like functionality to that of the cache 510 and NVS 512 of the local DASD controller 502 .
- Device adapters 514 and 526 enable respective DASD controllers 502 and 516 to access data on the associated Volumes A-D (i.e., reference numbers 106 , 108 , 122 and 124 ).
- the local DASD controller 502 provides a memory area 508 for maintaining (i.e., storing and modifying) the FlashCopy bitmap 110 and the PPRC bitmap 112 and memory area 509 for maintaining the relationship table 400 according to the present invention.
- the remote DASD controller 516 likewise provides memory areas 528 and 529 for respectively maintaining a FlashCopy bitmap 126 and relationship table 400 . It is noted that the stored bitmaps and tables are read into the processor memory (not shown) associated with each respective DASD controller, modified according to the present invention, and the modified bitmaps and table are then written to each respective DASD controller.
Abstract
Description
- 1. Technical Field of the Invention
- The present invention generally relates to remote database synchronization. More particularly, the present invention is directed to a system and method for providing asynchronous incremental database update from a primary site to a remote recovery site, which completely decouples database updates at the primary site from the transmission of the database updates to the remote recovery site, thereby facilitating efficient data backup of business-critical data and disaster recovery thereof.
- 2. Description of the Prior Art
- In the contemporary business environment, which is so heavily dependent upon relatively uninterrupted access to various kinds of information (i.e., business-critical data), disaster recovery is often of critical importance. Explosive growth in e-commerce and data warehousing has resulted in an exponential growth of data storage, which has ripened the need for disaster recovery. Disaster recovery schemes guard the business-critical data in the event that an entire system or even a primary site storing the business-critical data is destroyed, such as, for example, by earthquakes, fires, explosions, hurricanes, and the like. System outages affecting availability of data may be financially devastating to businesses of many types. For example, brokerage firms and other financial institutions may lose millions of dollars per hour when their systems are down or destroyed. Ensuring uninterrupted access to the information and guaranteeing that business data are securely and remotely updated to avoid data loss in the event of an above-described disaster are critical for safeguarding the business-critical data and business operations.
- Efficient disaster recovery requires that updates to business-critical data at a primary site be synchronized at a location that is remote to the primary site (i.e., remote recovery site) in order to ensure safety of and uninterrupted access to the business-critical data. However, if business-critical data at the remote recovery site is not kept current with the business-critical data at the primary site, any updates since a last periodic backup may be lost, thus significantly impacting business operations. Thus, a key feature of the efficient disaster recovery is the frequency of resynchronization of the business-critical data from the primary site to the remote recovery site.
- Generally, resynchronization of data (i.e., database updates) at a remote site principally involves two techniques: synchronous and asynchronous. Variants of the two techniques are also possible. In the synchronous technique, writes by an application host are forwarded to the remote site as part of the input/output (i.e., “I/O”) command processing. Typically, the application host writes await remote confirmation before signaling I/O completion to the application host. There is a write latency associated with the synchronous technique because the application host awaits completion confirmation, which is further exacerbated by a physical separation of the primary site from the remote recovery site. Thus, the synchronous technique is invariably limited to relatively short distances because of the detrimental effect of a round-trip propagation delay on the I/O response completion signaling. Furthermore, until the I/O response completion signaling is received at the primary site, the application host is unable to access the data at the primary site. In contrast to the synchronous technique, the asynchronous technique delivers application host writes over high-speed communication links to the remote recovery site while allowing the application host at the primary site to access the data. That is, the asynchronous technique signals I/O completion to the application host at the primary site before updating the remote recovery site. The asynchronous technique is often utilized when the distance between the primary and the remote recovery sites (as well as possibly a relatively low-bandwidth telecommunication link) would introduce prohibitive latencies if performed synchronously. However, it is clearly evident that a long-distance communication link may become a bottleneck that forces local I/O writes to be queued for transmission to the remote site.
The queuing of I/O writes at the primary site negatively affects efficient disaster recovery since the queued I/O writes may be destroyed in an above-described disaster before they are transmitted to the remote recovery site.
- The frequency for the resynchronization of the business-critical data from the primary site to the remote recovery site takes into account a space and a time dimension. The space dimension ultimately accounts for the amount of data, while the time dimension accounts for the time period when resynchronization occurs. A resynchronization that involves copying all of the data represents a full database backup, while an incremental database backup copies only the portion of the data that has changed since the last full or incremental database backup. Whether full or incremental, either backup method represents a time-consistent view of the data at the primary site. While individual host application I/O writes may be synchronously or asynchronously transmitted to the remote recovery site as they are made at the primary site, doing so presents a cost inefficiency in that the communication link between the primary site and the remote recovery site must be maintained (i.e., reserved or leased) to transfer the application host writes on a continuous basis.
- A particularly useful resynchronization system is a Peer to Peer Remote Copy (i.e., “PPRC”) system offered by International Business Machines Corporation (i.e., “IBM”), the assignee of the subject patent application. The PPRC system provides synchronous copying of database updates from a primary Direct Access Storage Device (i.e., DASD) controller at a primary site to a remote DASD controller at the remote recovery site. That is, the PPRC system includes a primary controller and an associated primary DASD at the primary site and a remote controller and an associated DASD at the remote recovery site. Generally, each of the controllers includes a non-volatile storage (i.e., “NVS”) for maintaining data in the event of power or system failure. During resynchronization, the data is first written (or buffered) to the NVS of the primary controller at the primary site; the data is then transferred to the NVS in the remote controller at the remote recovery site. At later points in time, the data at the primary and remote NVS is destaged to the attached DASD storage devices (i.e., disks), i.e., the data is written from the NVS to the associated DASD storage device. It should be noted that a single DASD storage device may include more than one volume, or a single volume may span more than one DASD storage device. It should further be noted that with the PPRC system, the remote recovery site's DASD volume(s) are synchronously updated with data updates to the primary DASD volume(s).
- One persistent problem with the PPRC system is that the volumes, which are synchronized between the primary and remote DASD storage devices, are unavailable for use while the PPRC data updates are serviced. The PPRC system does not consider the transfer of data to the remote recovery site complete until all the data updated at the DASD of the primary site has been updated at the DASD of the remote recovery site. Thus, data updates to the DASD of the primary site invariably delay response times to user requests to the volumes involved in the data updates because synchronous updates must be made to the DASD of the remote recovery site before the volumes involved in the updates are available to service the user requests. Response time delays may occur with respect to user requests to the DASD of the primary and remote recovery sites.
- Therefore, the user requests to volumes of either the primary or the remote recovery site's DASD are subject to the data updates between the primary and the remote recovery sites and must wait until the completion of the data updates before the requests can access the updated data. There is accordingly a need in the art for a system and method that efficiently performs asynchronous incremental database updates from a primary site to a remote recovery site, thereby completely decoupling data updates at the primary site from the transmission of the data updates to the remote recovery site.
- It is therefore an object of the present invention to provide a system and method for performing data updates at a remote site asynchronously from data updates at a primary site, thereby completely decoupling the data updates at the primary site from the transmission of the data updates to the remote recovery site.
- According to an embodiment of the present invention, there is provided a method for asynchronously transmitting one or more incremental database updates from a primary volume at a primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link, the method comprising the steps of: destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the primary volume of the primary site are decoupled from transmission of the one or more incremental database updates to the remote volume at the remote site.
- According to another embodiment of the present invention, there is provided a system for asynchronously transmitting one or more incremental database updates from a primary volume at a primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link, the system comprising: a local controller associated with the primary site comprising: a means for destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; a means for transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and a means for synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the primary volume of the primary site are decoupled from transmission of the one or more incremental database updates to the remote volume at the remote site.
- According to a further embodiment of the present invention, there is provided a controller associated with a primary site for asynchronously transmitting one or more incremental database updates from a primary volume at the primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link, the controller comprising: means for destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; means for transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and means for synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the primary volume of the primary site are decoupled from transmission of the one or more incremental database updates to the remote volume at the remote site
- According to yet a further embodiment of the present invention, there is provided a program storage device, tangibly embodying a program of instructions executable by a machine to perform a method for asynchronously transmitting one or more incremental database updates from a primary volume at a primary site to a remote volume at a remote site, the primary site and the remote site interconnected by at least one communication link, the method comprising the steps of: destaging modified data to the primary volume for a current database update and updating one or more bits in a first bitmap at the primary site that indicate one or more tracks on the primary volume that are to be overwritten with the modified data; transferring the first bitmap to a second bitmap at the primary site for indicating the modified data that is to be transmitted to the remote volume at the remote site for the current database update; and synchronizing the primary volume at the primary site with the remote volume at the remote site for the current database update by transmitting the modified data to the remote volume as indicated by one or more bits in the second bitmap, wherein the one or more incremental database updates at the primary volume of the primary site are decoupled from transmission of the one or more incremental database updates to the remote volume at the remote site.
- The objects, features and advantages of the present invention will become apparent to one skilled in the art, in view of the following detailed description taken in combination with the attached drawings, in which:
- FIG. 1 is an exemplary system diagram for accomplishing asynchronous incremental database update from a primary site to a remote recovery site according to the present invention;
- FIG. 2 is an exemplary method flowchart illustrating an initial setup performed for enabling the asynchronous incremental database update from a primary site to a remote recovery site of FIG. 1 according to the present invention;
- FIG. 3 is an exemplary method flowchart illustrating the asynchronous incremental database update from a primary site to a remote recovery site according to the present invention;
- FIG. 4 is an exemplary relationship table for representing the relationship between a pair of volumes of a FlashCopy pair at the primary site according to the present invention; and
- FIG. 5 is a more detailed system diagram of the exemplary system in FIG. 1 for accomplishing the asynchronous incremental database update from a primary site to a remote recovery site according to the present invention.
- The present invention is directed to a method and system for providing remote asynchronous incremental data update. More particularly, the present invention is directed to providing an efficient mechanism for updating a remote copy of a database with asynchronous incremental updates to a local database, in which the data updates at the primary site are completely decoupled from the transmission of the data updates to the remote recovery site.
- FIG. 1 is an exemplary system diagram of a Remote FlashCopy
system 100 for accomplishing the asynchronous incremental database update from a primary site 101 to a remote recovery site 103 according to the present invention. The Remote FlashCopy system 100 utilizes FlashCopy (i.e., “FC”) technology coupled with peer-to-peer remote copy (i.e., “PPRC”) technology to provide an asynchronous data update (i.e., database update) that obviates the above-identified limitations of the prior art. Although it is contemplated that there may be one or more application hosts 102 and one or more recovery hosts 118, for ease and clarity of the following description, the one or more application hosts and the one or more recovery hosts will simply be referred to as the application host 102 and the recovery host 118, respectively. In operation, the Remote FlashCopy system 100 is asynchronous because the application host 102 at the primary site 101 does not have to wait for the data updates to be recorded at the remote recovery site 103 before an ending status for the data updates at the primary site 101 is presented to the application host 102, i.e., ending status being presented immediately upon update of Volume A 106. - Now referring to FIG. 1, there are depicted four volumes (i.e., logically designated as volumes A, B, C, D), which are utilized for the above-identified asynchronous migration, backup and disaster recovery solution. It should be noted that the number of volumes is in exemplary fashion limited to four for brevity and clarity, and that the number of volumes may significantly vary depending on the particular requirements for the
Remote FlashCopy system 100. The Remote FlashCopy system 100 is enabled to keep data on Volume D 124 time-consistent or synchronized with data on Volume A 106, where both volumes are remote to one another. A peer-to-peer connection between Volume B 108 of the primary site 101 and Volume C 122 of the remote recovery site 103 is accomplished via two channel extenders 114 and at least one communication link 116. The channel extenders 114 enable the peer-to-peer connection between Volume B 108 and Volume C 122 over longer distances. It is noted that the channel extenders 114 are not required, and the peer-to-peer connection between Volume B 108 and Volume C 122 may directly be established via the at least one communication link 116. The at least one communication link 116 may include any suitable communication links known in the art, including channel links, T1/T3 links, Fibre Channel, International Business Machines (i.e., “IBM”) Enterprise System Connection (i.e., “ESCON”) links, and the like. Notwithstanding the fact that the primary site 101 and remote recovery site 103 are depicted in FIG. 1 as being in remote geographic locations with regard to one another, one skilled in the art understands that the primary and the remote recovery sites may be located anywhere with respect to each other, i.e., at the same geographic location, at locations a short distance apart, or further at locations a long distance apart. - Further with reference to FIG. 1, the
primary site 101 comprises the application host 102 that is coupled to Volume A 106, which updates the data stored on Volume A 106. The primary site 101 further comprises a local Logical Subsystem (i.e., local “LSS”) 104 that includes a local FlashCopy pair of volumes, such as Volumes A 106 and B 108, which is involved in the data update at the primary site 101 and which also facilitates the data update at the remote recovery site 103. LSS 104 includes a local DASD controller (depicted in FIG. 5 and described below) for managing access to both Volume A 106 and Volume B 108. In the local FlashCopy pair, Volume A 106 is a source volume and Volume B 108 is a target volume. Similarly, the remote recovery site 103 comprises a recovery host 118 that is coupled to Volume D 124. Upon disaster or failure affecting the primary site 101, the recovery host 118 may immediately begin accessing data from Volume D 124 at the remote recovery site 103 to recover from the failure or disaster of the primary site 101. The remote recovery site 103 similarly further comprises a remote Logical Subsystem 120 (i.e., remote “LSS”) that includes a remote FlashCopy pair of volumes, such as Volumes C 122 and D 124. The remote LSS 120 also includes a remote DASD controller (depicted in FIG. 5 and described below) for managing access to Volume C 122 and Volume D 124. In the remote FlashCopy pair, Volume C 122 is a source volume and Volume D 124 is a target volume. Additionally, Volume B 108 of the primary site 101 and Volume C 122 of the remote recovery site 103 form a peer-to-peer remote copy (i.e., “PPRC”) pair in which Volume B 108 is a primary volume and Volume C 122 is a secondary volume. As particularly noted above regarding the fact that the local LSS 104 of the primary site 101 and the remote LSS 120 of the remote recovery site 103 are depicted as being remote to one another, one skilled in the art understands that the local and remote subsystems 104 and 120 may similarly be located anywhere with respect to one another. - Yet further with reference to FIG. 1, in addition to the
Remote FlashCopy system 100 managing data updates at the primary site 101, the Remote FlashCopy system 100 further controls asynchronous incremental data updates at a remote recovery site 103. The method for managing remote data updates at the remote recovery site is described in greater detail below with reference to FIGS. 2 and 3. At this point, an overview of the operation of the Remote FlashCopy system 100 is presented for context and clarity. At first, an initial copy of the database (or portions thereof) included in Volume A 106 is made to Volume D, as will be described below with respect to FIG. 2. To perform the initial copy, a FlashCopy from Volume A to Volume B sets all bits in the FlashCopy bitmap 110 on Volume B to ‘ones’, thereby initializing the FlashCopy bitmap. It should be noted that the bits in the FlashCopy bitmap 110 represent the tracks of data on Volume A 106 that are updated. The setting of all bits to ‘ones’ represents the fact that all data is to be copied, and also represents the fact that all data to be copied is stored on Volume A. It should further be noted that the FlashCopy performs a byte-for-byte virtual copy of data from Volume A 106 to Volume B 108, i.e., no physical data is copied from Volume A to Volume B. The FlashCopy bitmap 110 on Volume B 108 represents a frozen image of data on Volume A 106 at a particular point in time, such as time T0. The FlashCopy bitmap 110 indicates the data on Volume A 106 that has changed since a last FlashCopy and further indicates the data that will be sent from the primary site 101 to the remote recovery site 103. At the time T0, the FlashCopy bitmap 110 includes all ‘ones’, which indicates that all data is to be transferred to Volume C and that the data is stored on Volume A.
Subsequently, the FlashCopy bitmap 110 is converted into a peer-to-peer remote copy (i.e., “PPRC”) bitmap 112 on Volume B, which exists in a peer-to-peer remote copy (“PPRC”) state with Volume C 122, i.e., a PPRC session being established between Volume B 108 and Volume C 122. More particularly, the PPRC volumes (i.e., Volume B 108 and Volume C 122) are identified as participating in a PPRC state in which updates to the local primary PPRC volume (i.e., Volume B 108) are detected and therefore transmitted to the remote secondary volume (i.e., Volume C 122). During the conversion, each bit of the FlashCopy bitmap 110 is inverted into the PPRC bitmap 112. Thus, the conversion inverts all ‘ones’ to all ‘zeroes’ in the PPRC bitmap 112. The PPRC bitmap 112 is set up in order to transfer the data represented as changed on Volume A in the FlashCopy bitmap 110 to Volume C 122 via the PPRC session. Once the data is copied to Volume C, a FlashCopy is performed from Volume C 122 to Volume D 124, wherein Volume D 124 is time-consistent with Volume A 106 at time T0.
Volume A 106 to Volume D 124, updates to the data on Volume A 106 are recorded in the FlashCopy bitmap 110 on Volume B 108 by setting a corresponding bit to a ‘zero’. That is, a ‘zero’ in the FlashCopy bitmap 110 indicates that Volume B includes the data to be updated, whereas a ‘one’ represents that Volume B does not include the data to be updated and that this data is instead included on Volume A 106. It should be noted that the data on Volume A 106 is copied to Volume B 108 upon demand, i.e., when the particular data on Volume A is to be overwritten (i.e., updated) with updated data upon destaging. The FlashCopy on Volume B represents all of the changes to Volume A with relationship to Volume B, where the data may be located either on Volume A or Volume B as particularly represented by the FlashCopy bitmap 110. Subsequent to the initialization described above or a previous data update, at a time interval (e.g., 30 minutes, 1 hour, or the like) since the initialization or the previous update, destaging of all modified data for Volume A is initiated. In general, destaging is a process of asynchronously writing modified data from a nonvolatile cache to a disk in the background while read/write requests from a host system are serviced in the foreground. It should be noted that destaging may be based upon occupancy of the nonvolatile cache, such as when the cache is full, or may be user-initiated. - Still further with reference to FIG. 1, in the present invention, the destaging is preferably automatically initiated. More particularly with reference to FIG. 1, modified data that is cached in a non-volatile store (i.e., NVS) memory associated with the local
logical subsystem 104 is destaged (i.e., written) to Volume A 106 during the initialization of the FlashCopy of Volume A 106 to Volume B 108. The destaging of modified data causes a ‘one’ bit in the FlashCopy bitmap 110 in Volume B to be changed to a ‘zero’ bit by moving the data to be overwritten from Volume A 106 to Volume B 108. After the destaging, the FlashCopy bitmap 110 is then transferred to the PPRC bitmap 112 by inverting the bits in the FlashCopy bitmap 110 into the PPRC bitmap 112. Therefore, the ‘ones’ in the PPRC bitmap 112 indicate that the tracks of data associated with these bits have been updated with the modified data and that the modified data is to be transmitted to Volume C in order to synchronize Volume A 106 to Volume D 124. More particularly, when the PPRC bitmap is updated, the application host 102 may read/write Volume A 106 since Volume B 108 includes a consistent point-in-time copy of Volume A 106. Once the data indicated by the PPRC bitmap 112 as modified is transferred to Volume C, a FlashCopy of Volume C 122 to Volume D 124 is performed, in which a virtual copy of data is performed as described above with regard to the FlashCopy of Volume A 106 to Volume B 108. A FlashCopy bitmap 126 records any updates received by and destaged at Volume C 122. Thus, at this point Volume D 124 has a time-consistent copy of the modified data at Volume B 108, which represents Volume A 106, i.e., Volume A and Volume D are thereby time-synchronized at that point in time. As will be described in greater detail with reference to FIG. 3, time-synchronized copies may be staggered for performing the incremental update more efficiently, according to the present invention. - FIG. 2 is an
exemplary method flowchart 200 illustrating the initial setup that enables the asynchronous incremental database update from a primary site 101 to a remote recovery site 103 illustrated in FIG. 1, according to the present invention. It should be noted that the application host 102 utilizes the data (i.e., reads, writes and updates the data) on Volume A 106 for business-critical purposes. The method for the initial setup according to the present invention begins at step 202, where the application host 102 via a control manager (not shown) initiates an initial copy of data (total database or a subset thereof) at Volume A 106 to Volume D 124 (or a refresh of the database at Volume D 124). A user may implement, via the control manager, update policies at the application host 102 regarding database updates. The policies may include starting the copying at user discretion, copying at scheduled or specific times, and copying that is cycled periodically upon completion of previous iterations. The time it takes to copy the entire database is determined by the total number of volumes that the database spans, the physical distance between the primary site 101 and the remote recovery site 103 (i.e., the distance between Volume B 108 and Volume C 122), and the number of communication links 116 between the sites. For example, an initial copy of a large database over ESCON communication links 116 between the primary site 101 and remote recovery site 103, each link 116 running at a rate of 12.5 megabytes per second, may take about 3 days to complete. - Further with reference to FIG. 2, at
step 204, Volume A 106 is accessed as a FlashCopy source and application volume by the application host 102. A FlashCopy of Volume A 106 to Volume B 108 is then performed at step 206, which sets up a relationship between Volume A and Volume B with regard to the data on Volume A 106. As mentioned above, the FlashCopy performs a byte-for-byte virtual copy of data from Volume A 106 to Volume B 108. Although no physical data is copied from Volume A to Volume B at this point, the FlashCopy sets all the bits in the FlashCopy bitmap 110 on Volume B to ‘ones’ (i.e., initializing the FlashCopy bitmap), which represents that all of the data is stored on Volume A 106 and no data is stored on Volume B 108. For representing the relationship, a relationship table illustrated in FIG. 4 below is also set up on Volume B, which, among other things, identifies Volume A 106 and Volume B 108 and provides pointers to the location of data on Volume A 106 and where the data is to be copied on Volume B 108. The relationship table will be described in further detail below with reference to FIG. 4. Additionally, while the FlashCopy is in progress, any updates by the application host 102 to the data in the database on Volume A 106 are maintained in the FlashCopy bitmap 110 and the relationship table at step 208. That is, upon demand to overwrite (i.e., update) data on Volume A by the application host 102, the data to be overwritten is copied from Volume A to Volume B 108 and the FlashCopy bitmap 110 is updated from a ‘one’ to a ‘zero’ for the one or more tracks representing the updated data. Thus, the FlashCopy preserves the state of the data at the time when the FlashCopy was initiated, i.e., time T0, by physically copying the data from Volume A to Volume B before any update to that data is possible on Volume A. The relationship table illustrated below in FIG. 4 is updated with the number of tracks of data to copy. 
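The copy-on-write maintenance of step 208 can be sketched as follows; the names are hypothetical and a dictionary stands in for the physical tracks:

```python
# Sketch of the copy-on-write maintenance of step 208 (hypothetical
# names; a dict stands in for the physical tracks). Before a track on
# Volume A is overwritten, its old contents are copied to Volume B and
# the corresponding FlashCopy bitmap bit is updated from 'one' to 'zero'.

volume_a = {t: f"data-{t}@T0" for t in range(4)}  # assumed 4-track volume
volume_b = {}                                     # physically empty at T0
fc_bitmap = [1, 1, 1, 1]                          # all data still on A

def host_write(track, new_data):
    """Intercept an application-host update to Volume A."""
    if fc_bitmap[track] == 1:                # T0 image not yet preserved
        volume_b[track] = volume_a[track]    # copy old data A -> B first
        fc_bitmap[track] = 0                 # track now lives on Volume B
    volume_a[track] = new_data               # then apply the update

host_write(2, "data-2@T1")
assert volume_b[2] == "data-2@T0"            # T0 image preserved on B
assert fc_bitmap == [1, 1, 0, 1]
```

A second write to the same track performs no further copy, since Volume B already holds the point-in-time image.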
Therefore, the FlashCopy bitmap 110 represents the changes to the relationship between Volume A 106 and Volume B 108 with regard to the data. At step 210, it is determined whether the FlashCopy from Volume A 106 to Volume B 108 is logically complete, and if the FlashCopy is not complete the method continues at step 206. Otherwise, the method continues at step 212, where Volume B 108 is accessed as a PPRC primary volume and Volume C is accessed as a PPRC secondary volume, i.e., a PPRC connection or session between Volume B 108 and Volume C 122 is thereby established. - Yet further with reference to FIG. 2, at step 214 a PPRC copy of all data from
Volume B 108 to Volume C 122 is performed. Since this is the initial or first copy of data, the PPRC sets the PPRC bitmap 112 to all ‘ones’ to represent that all data is to be copied during the initial copy of data. In operation, the PPRC inspects the PPRC bitmap 112 to identify which tracks of data are to be copied (in this case all data, represented by ‘ones’) to the remote Volume C 122, and then transfers the data from the identified storage locations (i.e., tracks) to Volume C 122. It is noted that at the initialization of the FlashCopy bitmap 110 at step 206, all of the data is stored on Volume A 106 as particularly represented by all ‘ones’ in the FlashCopy bitmap 110. Therefore, during the PPRC session, the PPRC reads Volume B 108 and inspects the FlashCopy bitmap 110 to determine if the data is on Volume A 106 or Volume B 108. All data is copied from Volume B 108 according to step 214 as specified in the FlashCopy bitmap 110, which specifies that all data is stored on Volume A 106. At step 216, it is determined whether the PPRC copy of all data from Volume B 108 to Volume C 122 is complete. If the PPRC copy of data to Volume C is not complete, the method continues at step 214. Otherwise, the method continues at step 218, where a FlashCopy of Volume C 122 to Volume D 124 is performed. The FlashCopy virtually copies the data on Volume C 122 to Volume D 124. As mentioned above regarding the FlashCopy, Volume D 124 also includes a FlashCopy bitmap 126 that represents where the data is actually stored, i.e., whether on Volume C 122 or Volume D 124. At step 220, it is determined whether the FlashCopy of Volume C to Volume D is complete. If the FlashCopy is not complete, the method continues at step 218. At this point, all data on Volume D 124 is time-consistent with Volume A 106, i.e., such as at time T0. The initial setup for enabling the asynchronous incremental database update according to the present invention ends at step 222. - FIG. 3 is an
exemplary method flowchart 300 illustrating the asynchronous incremental database update from a primary site 101 to a remote recovery site 103 of FIG. 1 according to the present invention, after the initial setup is performed according to FIG. 2. Before FIG. 3 is described in detail, it is noted for clarity that the method of FIG. 3 represents a staggered sequence of time-consistent incremental updates (i.e., increments), such as at times T0, T1 and T2, as will be pointed out in the following description. Staggering allows for efficiently performing the incremental update according to the present invention. The method begins at step 302, when the application host 102 initiates a time-consistent incremental update of the data (i.e., an increment) on Volume A 106 at time T1, i.e., a current increment. It is noted that the time-consistent copy of data according to FIG. 2 represents a time-consistent increment at time T0. With regard to the aforementioned control manager, a user via the control manager of the application host 102 initiates the current increment. It should be noted that at this point, application host 102 updates of data (i.e., host I/O) on Volume A 106 are blocked until completion of step 318, which is described below. At step 304, modified data is destaged from the NVS to Volume A 106, which forces an update to the FlashCopy bitmap 110. It should be noted that a FlashCopy relationship between Volume A 106 and Volume B 108 initializing the FlashCopy bitmap 110 was established at step 206 of FIG. 2. During destaging of the modified data to Volume A 106 at step 304, the data on Volume A 106 that is to be overwritten with the modified data from the NVS is copied from Volume A 106 to Volume B 108 and the bits in the FlashCopy bitmap 110 associated with the copied tracks of data are updated to ‘zeroes’ in the FlashCopy bitmap 110. The data thus copied to Volume B 108 represents a previous increment in time, i.e., the increment at time T0. 
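The ordering of the steps of FIG. 3 can be sketched as an orchestration loop. The controller and its method names are placeholders; only the sequencing is taken from the text, in particular that host I/O to Volume A stays blocked from step 302 until step 318:

```python
# Orchestration sketch of one incremental update (steps 302-324 of
# FIG. 3). The controller methods are placeholders that merely record
# their names; only the ordering is taken from the text.

class Controller:
    """Stand-in that logs operation names (not real controller microcode)."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        return lambda: self.log.append(name)

def run_increment(ctrl):
    ctrl.block_host_io()                      # step 302
    ctrl.destage_nvs_to_volume_a()            # steps 304/306 (CoW to Volume B)
    ctrl.wait_for_previous_resync()           # steps 308/310 (increment T0)
    ctrl.invert_flashcopy_into_pprc_bitmap()  # step 312 (hand off increment T1)
    ctrl.flashcopy_a_to_b()                   # steps 314/316 (next increment T2)
    ctrl.reset_flashcopy_bitmap_to_ones()     # step 318 ...
    ctrl.unblock_host_io()                    # ... host I/O resumes here
    ctrl.flashcopy_c_to_d()                   # steps 320/322 (increment T0 to D)
    ctrl.start_pprc_resync()                  # step 324 (ship increment T1)

c = Controller()
run_increment(c)
assert c.log.index("unblock_host_io") > c.log.index("reset_flashcopy_bitmap_to_ones")
```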
Thus, the FlashCopy bitmap 110 is inspected and data is copied from Volume A 106 to Volume B 108 only if the bits in the FlashCopy bitmap 110 associated with the tracks of data are set to ‘ones’, which means that the data prior to being overwritten is transferred to Volume B 108. However, if the tracks in the FlashCopy bitmap 110 are ‘zeroes’, then the data to be overwritten is not copied to Volume B 108, since Volume B already includes a copy of the data. At this point, i.e., after the destaging at step 304, the FlashCopy bitmap 110 represents a time-consistent increment at time T1 of the data at Volume A. In addition, Volume B 108 physically includes data copied from Volume A 106 before it is overwritten during destaging of Volume A 106. At step 306, it is determined whether the destaging is complete, and if not complete, the method continues at step 304 until the destaging of modified data is complete. - Further with reference to FIG. 3, at step 308 it is determined whether a previous PPRC resynchronization from
Volume B 108 to Volume C 122 is complete, i.e., such as at step 216 described above with reference to FIG. 2 or step 324 described below with reference to FIG. 3. As noted above, during the PPRC synchronization data is physically copied from either Volume A 106 or Volume B 108 to Volume C 122, according to the bits in the PPRC bitmap 112. It is again noted that the resynchronization at step 308 represents a previous increment, i.e., the increment at time T0. At step 310, it is further determined whether to wait for the resynchronization at time T0 to complete or to end the resynchronization. Generally, the resynchronization will be allowed to complete at step 308. Alternatively, if for instance the at least one communication link 116 between the primary site 101 and the remote recovery site 103 fails, the increment at time T0 is ended and the method flowchart continues to step 326. At step 312, the updated FlashCopy bitmap 110 for the current increment (i.e., time T1) is then transferred (i.e., bits in the bitmap representing tracks of data are inverted) to the PPRC bitmap 112 at Volume B 108, which serves as a PPRC primary volume. At step 314, a FlashCopy of Volume A 106 to Volume B 108 is then performed. It is noted that this FlashCopy is now for a subsequent increment at time T2, since the FlashCopy bitmap 110 for the current increment at time T1 has now been transferred to the PPRC bitmap 112 at step 312, which now includes bits representing the tracks of data that have been updated for the current increment at time T1. At step 316, it is determined whether the FlashCopy of Volume A 106 to Volume B 108 is complete. If the FlashCopy is not complete, the method continues at step 314. Otherwise, at step 318 the FlashCopy bitmap 110 for Volume A 106 is restored or initialized to all ‘ones’, and the application host 102 is allowed to resume updating data (i.e., host I/O) on Volume A 106, which was blocked at step 302 as described above. - Yet further with reference to FIG. 
3, at step 320 a FlashCopy of
Volume C 122 to Volume D 124 is performed for a previous time increment, i.e., the increment at time T0. As noted above, the FlashCopy may perform a virtual copy, which does not physically copy any tracks of data from Volume C 122 to Volume D 124. Thereafter, at step 322, it is determined whether the FlashCopy of Volume C 122 to Volume D 124 is complete. If the FlashCopy is not complete, the method continues at step 320. Otherwise, the method continues at step 324. After the FlashCopy at step 320 is complete, data at Volume D 124 is a virtual copy of data on Volume A 106 at time T0. At step 324, a resynchronization is performed, i.e., a PPRC copy of data indicated as changed in the PPRC bitmap 112 is performed from Volume B 108 to Volume C 122 for time increment T1, i.e., the current time increment. At step 326, the method 300 waits for the next incremental update (i.e., the increment at time T2) and continues to maintain the FlashCopy bitmap 110, which was initialized at step 318, as the application host 102 updates Volume A after the incremental update at increment time T1. The incremental database update method ends at step 328. It is noted that during the next increment at time T2, a FlashCopy of Volume C 122 to Volume D 124 for the current increment at time T1 is performed at step 320. The incremental database update illustrated in FIG. 3 may be repeated an indefinite number of times, such as at increment times T1, T2, T3 . . . Tn, where n is indefinite. - FIG. 4 is an exemplary relationship table 400 for representing a relationship between a
Volume A 106 and Volume B 108 of a FlashCopy pair at the primary site 101 according to the present invention. FIG. 4 particularly illustrates fields 402-408 in the relationship table 400 that are maintained by the DASD controller associated with the local LSS 104 of the primary site 101. The relationship table 400 is generated at step 204 in FIG. 2. At step 208 of FIG. 2, the relationship table 400 is accessed when the application host 102 updates Volume A 106 to determine target addresses for the updates on Volume B 108. The first field, the target/source device address field 402, identifies the address of the source DASD and target DASD involved in the copy operations. The second field, the source start field 404, identifies the first track in the source DASD from where data is to be copied. The third field, the number of tracks field 406, indicates the number of tracks to be copied. The fourth field, the target start field 408, indicates the first track to which data is copied in the target DASD. It should be noted that additional fields may be provided in the relationship table 400 as may be required for specific applications. - FIG. 5 is a more detailed system diagram 500 of the exemplary
Remote FlashCopy system 100 of FIG. 1 for accomplishing the asynchronous incremental database update from a primary site to a remote recovery site according to the present invention. FIG. 5 particularly illustrates exemplary DASD controller units (i.e., controllers) 502 and 516 associated respectively with the local LSS 104 for the primary site 101 and the remote LSS 120 for the remote site 103. It is noted that each of the respective DASD controllers 502 and 516 includes an internal disk (not shown) that is specifically used by each respective controller for storing microcode and loading the microcode into processor memory (not shown) associated with each DASD controller for execution. The local DASD controller 502 includes a host adapter 504 for enabling communication (i.e., read/write/update of data) between the application host 102 and the local DASD controller 502. The remote DASD controller 516 likewise includes a host adapter for enabling communication (i.e., read/write/update of data) between the recovery host 118 and the remote DASD controller 516. The DASD controllers 502 and 516 further include PPRC adapters for transmitting data from the primary site 101 to the remote recovery site 103 according to the present invention. Cache 510 in the local DASD controller 502 caches the most recently accessed data from Volume A 106, thereby providing improved performance of the application host 102 since data may be obtained from the cache 510 instead of the associated Volume A 106 if there is a cache hit. 
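The resynchronization data path carried by the PPRC adapters can be sketched as follows (the names and volume contents are assumptions): as described above with reference to FIG. 2, the read side walks the PPRC bitmap 112 for changed tracks and consults the FlashCopy bitmap 110 to learn whether each track's consistent image still resides on Volume A or has been copied to Volume B.

```python
# Sketch of the resynchronization read path (hypothetical names and
# data). PPRC bitmap 112 flags the changed tracks; FlashCopy bitmap 110
# says whether each track's consistent image is still on Volume A ('1')
# or has been copied to Volume B ('0').

volume_a = {0: "a0@T2", 1: "a1@T1", 2: "a2", 3: "a3"}  # current contents
volume_b = {0: "a0@T1"}              # T1 image preserved by copy-on-write
fc_bitmap = [0, 1, 1, 1]             # track 0 copied to B after new FlashCopy
pprc_bitmap = [1, 1, 0, 0]           # tracks 0 and 1 changed in increment T1

def resync_to_volume_c():
    """Ship the time-T1 image of every changed track to Volume C."""
    volume_c = {}
    for track, changed in enumerate(pprc_bitmap):
        if changed:
            src = volume_a if fc_bitmap[track] == 1 else volume_b
            volume_c[track] = src[track]   # transfer over the PPRC link
    return volume_c

assert resync_to_volume_c() == {0: "a0@T1", 1: "a1@T1"}
```

Note that track 0, although further updated on Volume A after the new FlashCopy, is still shipped in its time-T1 form from Volume B, which is what keeps the remote copy time-consistent.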
As described above, NVS 512 of the local DASD controller 502 buffers modified data until it is written to the associated primary Volume A 106. Cache 524 and NVS 522 of the remote DASD controller 516 provide like functionality to that of the cache 510 and NVS 512 of the local DASD controller 502. Device adapters of the respective DASD controllers 502 and 516 connect each controller to its associated volumes. The local DASD controller 502 provides a memory area 508 for maintaining (i.e., storing and modifying) the FlashCopy bitmap 110 and the PPRC bitmap 112, and memory area 509 for maintaining the relationship table 400 according to the present invention. The remote DASD controller 516 likewise provides memory areas 528 and 529 for respectively maintaining a FlashCopy bitmap 126 and relationship table 400. It is noted that the stored bitmaps and tables are read into the processor memory (not shown) associated with each respective DASD controller, modified according to the present invention, and the modified bitmaps and table are then written back to each respective DASD controller. - While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that the foregoing and other changes in form and details may be made therein without departing from the spirit and scope of the invention.
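The per-increment bitmap maintenance held in memory area 508 reduces to a bitwise inversion of the FlashCopy bitmap 110 into the PPRC bitmap 112 (step 312 of FIG. 3) followed by reinitialization of the FlashCopy bitmap to all ‘ones’ (step 318); a minimal sketch under an assumed list-of-bits representation:

```python
# Sketch of the per-increment bitmap handoff (the list-of-bits
# representation is an assumption for illustration).

def handoff(fc_bitmap):
    """Invert FlashCopy bitmap 110 into PPRC bitmap 112 (step 312),
    then reinitialize the FlashCopy bitmap to all 'ones' (step 318)."""
    pprc_bitmap = [1 - b for b in fc_bitmap]  # 'zero' (changed) -> 'one' (send)
    new_fc_bitmap = [1] * len(fc_bitmap)      # all data back on Volume A
    return pprc_bitmap, new_fc_bitmap

pprc, fc = handoff([1, 0, 0, 1])
assert pprc == [0, 1, 1, 0]   # changed tracks flagged for PPRC transfer
assert fc == [1, 1, 1, 1]
```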
Claims (52)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/079,458 US7747576B2 (en) | 2002-02-20 | 2002-02-20 | Incremental update control for remote copy |
Publications (2)
Publication Number | Publication Date |
---|---|
US20030158869A1 true US20030158869A1 (en) | 2003-08-21 |
US7747576B2 US7747576B2 (en) | 2010-06-29 |
Family
ID=27733043
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/079,458 Active 2024-11-12 US7747576B2 (en) | 2002-02-20 | 2002-02-20 | Incremental update control for remote copy |
Country Status (1)
Country | Link |
---|---|
US (1) | US7747576B2 (en) |
Cited By (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040181639A1 (en) * | 2003-03-14 | 2004-09-16 | International Business Machines Corporation | Method, system, and program for establishing and using a point-in-time copy relationship |
US20050216681A1 (en) * | 2004-03-29 | 2005-09-29 | International Business Machines Corporation | Method, system, and article of manufacture for copying of data in a romote storage unit |
US20050251634A1 (en) * | 2004-05-05 | 2005-11-10 | International Business Machines Corporation | Point in time copy between data storage systems |
US20050278391A1 (en) * | 2004-05-27 | 2005-12-15 | Spear Gail A | Fast reverse restore |
US20060085485A1 (en) * | 2004-10-19 | 2006-04-20 | Microsoft Corporation | Protocol agnostic database change tracking |
US20060136526A1 (en) * | 2004-12-16 | 2006-06-22 | Childress Rhonda L | Rapid provisioning of a computer into a homogenized resource pool |
US20060174080A1 (en) * | 2005-02-03 | 2006-08-03 | Kern Robert F | Apparatus and method to selectively provide information to one or more computing devices |
US20060184650A1 (en) * | 2005-02-17 | 2006-08-17 | International Business Machines Corporation | Method for installing operating system on remote storage: flash deploy and install zone |
US7096330B1 (en) | 2002-07-29 | 2006-08-22 | Veritas Operating Corporation | Symmetrical data change tracking |
US7103796B1 (en) * | 2002-09-03 | 2006-09-05 | Veritas Operating Corporation | Parallel data change tracking for maintaining mirrored data consistency |
US20060230074A1 (en) * | 2005-03-30 | 2006-10-12 | International Business Machines Corporation | Method and system for increasing filesystem availability via block replication |
US7266644B2 (en) | 2003-10-24 | 2007-09-04 | Hitachi, Ltd. | Storage system and file-reference method of remote-site storage system |
US20070220322A1 (en) * | 2006-03-15 | 2007-09-20 | Shogo Mikami | Method for displaying pair state of copy pairs |
US20070220071A1 (en) * | 2006-03-15 | 2007-09-20 | Hitachi, Ltd. | Storage system, data migration method and server apparatus |
US7284104B1 (en) * | 2003-06-30 | 2007-10-16 | Veritas Operating Corporation | Volume-based incremental backup and recovery of files |
US20070282878A1 (en) * | 2006-05-30 | 2007-12-06 | Computer Associates Think Inc. | System and method for online reorganization of a database using flash image copies |
US20080098187A1 (en) * | 2006-10-18 | 2008-04-24 | Gal Ashour | System, method and computer program product for generating a consistent point in time copy of data |
US20080134163A1 (en) * | 2006-12-04 | 2008-06-05 | Sandisk Il Ltd. | Incremental transparent file updating |
US20080144471A1 (en) * | 2006-12-18 | 2008-06-19 | International Business Machines Corporation | Application server provisioning by disk image inheritance |
US20080168234A1 (en) * | 2007-01-08 | 2008-07-10 | International Business Machines Corporation | Managing write requests in cache directed to different storage groups |
US20080168220A1 (en) * | 2007-01-08 | 2008-07-10 | International Business Machines Corporation | Using different algorithms to destage different types of data from cache |
US7409510B2 (en) | 2004-05-27 | 2008-08-05 | International Business Machines Corporation | Instant virtual copy to a primary mirroring portion of data |
US20080228802A1 (en) * | 2007-03-14 | 2008-09-18 | Computer Associates Think, Inc. | System and Method for Rebuilding Indices for Partitioned Databases |
US20090094425A1 (en) * | 2007-10-08 | 2009-04-09 | Alex Winokur | Fast data recovery system |
US7620666B1 (en) | 2002-07-29 | 2009-11-17 | Symantec Operating Company | Maintaining persistent data change maps for fast data synchronization and restoration |
US20100023716A1 (en) * | 2008-07-23 | 2010-01-28 | Jun Nemoto | Storage controller and storage control method |
US7702670B1 (en) * | 2003-08-29 | 2010-04-20 | Emc Corporation | System and method for tracking changes associated with incremental copying |
US20110167044A1 (en) * | 2009-04-23 | 2011-07-07 | Hitachi, Ltd. | Computing system and backup method using the same |
US20110246731A1 (en) * | 2009-03-18 | 2011-10-06 | Hitachi, Ltd. | Backup system and backup method |
US20120011514A1 (en) * | 2010-07-12 | 2012-01-12 | International Business Machines Corporation | Generating an advanced function usage planning report |
US20120030503A1 (en) * | 2010-07-29 | 2012-02-02 | Computer Associates Think, Inc. | System and Method for Providing High Availability for Distributed Application |
US20120246424A1 (en) * | 2011-03-24 | 2012-09-27 | Hitachi, Ltd. | Computer system and data backup method |
US20130007389A1 (en) * | 2011-07-01 | 2013-01-03 | Futurewei Technologies, Inc. | System and Method for Making Snapshots of Storage Devices |
US20130073896A1 (en) * | 2011-09-19 | 2013-03-21 | Thomson Licensing | Method of exact repair of pairs of failed storage nodes in a distributed data storage system and corresponding device |
US8458127B1 (en) * | 2007-12-28 | 2013-06-04 | Blue Coat Systems, Inc. | Application data synchronization |
US20140075140A1 (en) * | 2011-12-30 | 2014-03-13 | Ingo Schmiegel | Selective control for commit lines for shadowing data in storage elements |
US9026849B2 (en) | 2011-08-23 | 2015-05-05 | Futurewei Technologies, Inc. | System and method for providing reliable storage |
US9032172B2 (en) | 2013-02-11 | 2015-05-12 | International Business Machines Corporation | Systems, methods and computer program products for selective copying of track data through peer-to-peer remote copy |
US20150169220A1 (en) * | 2013-12-13 | 2015-06-18 | Fujitsu Limited | Storage control device and storage control method |
US20150324280A1 (en) * | 2014-05-06 | 2015-11-12 | International Business Machines Corporation | Flash copy relationship management |
US9235348B2 (en) | 2010-08-19 | 2016-01-12 | International Business Machines Corporation | System, and methods for initializing a memory system |
US20160188426A1 (en) * | 2014-12-31 | 2016-06-30 | International Business Machines Corporation | Scalable distributed data store |
CN105938448A (en) * | 2015-03-03 | 2016-09-14 | 国际商业机器公司 | Method and device used for data replication |
WO2016180168A1 (en) * | 2015-05-11 | 2016-11-17 | 阿里巴巴集团控股有限公司 | Data copy method and device |
US20170177454A1 (en) * | 2015-12-21 | 2017-06-22 | International Business Machines Corporation | Storage System-Based Replication for Disaster Recovery in Virtualized Environments |
US9811430B1 (en) * | 2003-06-30 | 2017-11-07 | Veritas Technologies Llc | Method and system for incremental backup of data volumes |
US10146452B2 (en) * | 2016-04-22 | 2018-12-04 | International Business Machines Corporation | Maintaining intelligent write ordering with asynchronous data replication |
US10169747B2 (en) | 2010-07-12 | 2019-01-01 | International Business Machines Corporation | Advanced function usage detection |
US10168925B2 (en) * | 2016-08-18 | 2019-01-01 | International Business Machines Corporation | Generating point-in-time copy commands for extents of data |
US10235099B2 (en) * | 2016-08-18 | 2019-03-19 | International Business Machines Corporation | Managing point-in-time copies for extents of data |
US10423342B1 (en) * | 2017-03-30 | 2019-09-24 | Amazon Technologies, Inc. | Scaling events for hosting hierarchical data structures |
US10712953B2 (en) | 2017-12-13 | 2020-07-14 | International Business Machines Corporation | Management of a data written via a bus interface to a storage controller during remote copy operations |
CN111656340A (en) * | 2018-07-06 | 2020-09-11 | 斯诺弗雷克公司 | Data replication and data failover in a database system |
US11310137B2 (en) | 2017-02-05 | 2022-04-19 | Veritas Technologies Llc | System and method to propagate information across a connected set of entities irrespective of the specific entity type |
US11429640B2 (en) | 2020-02-28 | 2022-08-30 | Veritas Technologies Llc | Methods and systems for data resynchronization in a replication environment |
US11531604B2 (en) | 2020-02-28 | 2022-12-20 | Veritas Technologies Llc | Methods and systems for data resynchronization in a replication environment |
US11748319B2 (en) | 2017-02-05 | 2023-09-05 | Veritas Technologies Llc | Method and system for executing workload orchestration across data centers |
US11853575B1 (en) | 2019-06-08 | 2023-12-26 | Veritas Technologies Llc | Method and system for data consistency across failure and recovery of infrastructure |
US11928030B2 (en) | 2020-03-31 | 2024-03-12 | Veritas Technologies Llc | Optimize backup from universal share |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8095755B2 (en) * | 2006-10-18 | 2012-01-10 | International Business Machines Corporation | System, method and computer program product for generating a consistent point in time copy of data |
JP4679635B2 (en) * | 2008-12-29 | 2011-04-27 | 富士通株式会社 | Storage device, backup device, backup method and backup system |
US9563626B1 (en) * | 2011-12-08 | 2017-02-07 | Amazon Technologies, Inc. | Offline management of data center resource information |
US9092449B2 (en) * | 2012-10-17 | 2015-07-28 | International Business Machines Corporation | Bitmap selection for remote copying of updates |
US9262448B2 (en) * | 2013-08-12 | 2016-02-16 | International Business Machines Corporation | Data backup across physical and virtualized storage volumes |
CN105447033B (en) | 2014-08-28 | 2019-06-11 | 国际商业机器公司 | The method and apparatus of initial copy are generated in duplication initialization |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5504861A (en) * | 1994-02-22 | 1996-04-02 | International Business Machines Corporation | Remote data duplexing |
US20020053009A1 (en) * | 2000-06-19 | 2002-05-02 | Storage Technology Corporation | Apparatus and method for instant copy of data in a dynamically changeable virtual mapping environment |
US6446176B1 (en) * | 2000-03-09 | 2002-09-03 | Storage Technology Corporation | Method and system for transferring data between primary storage and secondary storage using a bridge volume and an internal snapshot copy of the data being transferred |
US20020178335A1 (en) * | 2000-06-19 | 2002-11-28 | Storage Technology Corporation | Apparatus and method for dynamically changeable virtual mapping scheme |
US6643671B2 (en) * | 2001-03-14 | 2003-11-04 | Storage Technology Corporation | System and method for synchronizing a data copy using an accumulation remote copy trio consistency group |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5692155A (en) | 1995-04-19 | 1997-11-25 | International Business Machines Corporation | Method and apparatus for suspending multiple duplex pairs during back up processing to insure storage devices remain synchronized in a sequence consistent order |
US6131148A (en) | 1998-01-26 | 2000-10-10 | International Business Machines Corporation | Snapshot copy of a secondary volume of a PPRC pair |
US6189079B1 (en) | 1998-05-22 | 2001-02-13 | International Business Machines Corporation | Data copy between peer-to-peer controllers |
US6253295B1 (en) | 1998-07-20 | 2001-06-26 | International Business Machines Corporation | System and method for enabling pair-pair remote copy storage volumes to mirror data in another pair of storage volumes |
US6237008B1 (en) | 1998-07-20 | 2001-05-22 | International Business Machines Corporation | System and method for enabling pair-pair remote copy storage volumes to mirror data in another storage volume |
2002-02-20 — US application US10/079,458 filed; granted as US7747576B2 (en), status: active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5504861A (en) * | 1994-02-22 | 1996-04-02 | International Business Machines Corporation | Remote data duplexing |
US6446176B1 (en) * | 2000-03-09 | 2002-09-03 | Storage Technology Corporation | Method and system for transferring data between primary storage and secondary storage using a bridge volume and an internal snapshot copy of the data being transferred |
US20020053009A1 (en) * | 2000-06-19 | 2002-05-02 | Storage Technology Corporation | Apparatus and method for instant copy of data in a dynamically changeable virtual mapping environment |
US20020178335A1 (en) * | 2000-06-19 | 2002-11-28 | Storage Technology Corporation | Apparatus and method for dynamically changeable virtual mapping scheme |
US6643671B2 (en) * | 2001-03-14 | 2003-11-04 | Storage Technology Corporation | System and method for synchronizing a data copy using an accumulation remote copy trio consistency group |
US6728736B2 (en) * | 2001-03-14 | 2004-04-27 | Storage Technology Corporation | System and method for synchronizing a data copy using an accumulation remote copy trio |
Cited By (108)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7552296B1 (en) | 2002-07-29 | 2009-06-23 | Symantec Operating Corporation | Symmetrical data change tracking |
US7096330B1 (en) | 2002-07-29 | 2006-08-22 | Veritas Operating Corporation | Symmetrical data change tracking |
US7620666B1 (en) | 2002-07-29 | 2009-11-17 | Symantec Operating Company | Maintaining persistent data change maps for fast data synchronization and restoration |
US7103796B1 (en) * | 2002-09-03 | 2006-09-05 | Veritas Operating Corporation | Parallel data change tracking for maintaining mirrored data consistency |
US7024530B2 (en) * | 2003-03-14 | 2006-04-04 | International Business Machines Corporation | Method, system, and program for establishing and using a point-in-time copy relationship |
US20040181639A1 (en) * | 2003-03-14 | 2004-09-16 | International Business Machines Corporation | Method, system, and program for establishing and using a point-in-time copy relationship |
US9811430B1 (en) * | 2003-06-30 | 2017-11-07 | Veritas Technologies Llc | Method and system for incremental backup of data volumes |
US7284104B1 (en) * | 2003-06-30 | 2007-10-16 | Veritas Operating Corporation | Volume-based incremental backup and recovery of files |
US7702670B1 (en) * | 2003-08-29 | 2010-04-20 | Emc Corporation | System and method for tracking changes associated with incremental copying |
US20070271414A1 (en) * | 2003-10-24 | 2007-11-22 | Yoji Nakatani | Storage system and file-reference method of remote-site storage system |
US7266644B2 (en) | 2003-10-24 | 2007-09-04 | Hitachi, Ltd. | Storage system and file-reference method of remote-site storage system |
US20050216681A1 (en) * | 2004-03-29 | 2005-09-29 | International Business Machines Corporation | Method, system, and article of manufacture for copying of data in a remote storage unit |
US7185157B2 (en) * | 2004-03-29 | 2007-02-27 | International Business Machines Corporation | Method, system, and article of manufacture for generating a copy of a first and a second set of volumes in a third set of volumes |
US20050251634A1 (en) * | 2004-05-05 | 2005-11-10 | International Business Machines Corporation | Point in time copy between data storage systems |
US7133989B2 (en) | 2004-05-05 | 2006-11-07 | International Business Machines Corporation | Point in time copy between data storage systems |
US20110047343A1 (en) * | 2004-05-27 | 2011-02-24 | International Business Machines Corporation | Data storage system for fast reverse restore |
US7409510B2 (en) | 2004-05-27 | 2008-08-05 | International Business Machines Corporation | Instant virtual copy to a primary mirroring portion of data |
US20090187613A1 (en) * | 2004-05-27 | 2009-07-23 | International Business Machines Corporation | Article of manufacture and system for fast reverse restore |
US7461100B2 (en) | 2004-05-27 | 2008-12-02 | International Business Machines Corporation | Method for fast reverse restore |
US20050278391A1 (en) * | 2004-05-27 | 2005-12-15 | Spear Gail A | Fast reverse restore |
US8005800B2 (en) | 2004-05-27 | 2011-08-23 | International Business Machines Corporation | Data storage system for fast reverse restore |
US7856425B2 (en) | 2004-05-27 | 2010-12-21 | International Business Machines Corporation | Article of manufacture and system for fast reverse restore |
US20060085485A1 (en) * | 2004-10-19 | 2006-04-20 | Microsoft Corporation | Protocol agnostic database change tracking |
US7487186B2 (en) * | 2004-10-19 | 2009-02-03 | Microsoft Corporation | Protocol agnostic database change tracking |
US20060136526A1 (en) * | 2004-12-16 | 2006-06-22 | Childress Rhonda L | Rapid provisioning of a computer into a homogenized resource pool |
US8862852B2 (en) * | 2005-02-03 | 2014-10-14 | International Business Machines Corporation | Apparatus and method to selectively provide information to one or more computing devices |
US20060174080A1 (en) * | 2005-02-03 | 2006-08-03 | Kern Robert F | Apparatus and method to selectively provide information to one or more computing devices |
US7698704B2 (en) | 2005-02-17 | 2010-04-13 | International Business Machines Corporation | Method for installing operating system on remote storage: flash deploy and install zone |
US20060184650A1 (en) * | 2005-02-17 | 2006-08-17 | International Business Machines Corporation | Method for installing operating system on remote storage: flash deploy and install zone |
US7487386B2 (en) * | 2005-03-30 | 2009-02-03 | International Business Machines Corporation | Method for increasing file system availability via block replication |
US20060230074A1 (en) * | 2005-03-30 | 2006-10-12 | International Business Machines Corporation | Method and system for increasing filesystem availability via block replication |
US20070220322A1 (en) * | 2006-03-15 | 2007-09-20 | Shogo Mikami | Method for displaying pair state of copy pairs |
US7725776B2 (en) * | 2006-03-15 | 2010-05-25 | Hitachi, Ltd. | Method for displaying pair state of copy pairs |
US20070220071A1 (en) * | 2006-03-15 | 2007-09-20 | Hitachi, Ltd. | Storage system, data migration method and server apparatus |
US20100229047A1 (en) * | 2006-03-15 | 2010-09-09 | Shogo Mikami | Method for displaying pair state of copy pairs |
US20070282878A1 (en) * | 2006-05-30 | 2007-12-06 | Computer Associates Think Inc. | System and method for online reorganization of a database using flash image copies |
US20080098187A1 (en) * | 2006-10-18 | 2008-04-24 | Gal Ashour | System, method and computer program product for generating a consistent point in time copy of data |
US7650476B2 (en) * | 2006-10-18 | 2010-01-19 | International Business Machines Corporation | System, method and computer program product for generating a consistent point in time copy of data |
US8589341B2 (en) * | 2006-12-04 | 2013-11-19 | Sandisk Il Ltd. | Incremental transparent file updating |
US20080134163A1 (en) * | 2006-12-04 | 2008-06-05 | Sandisk Il Ltd. | Incremental transparent file updating |
US20080144471A1 (en) * | 2006-12-18 | 2008-06-19 | International Business Machines Corporation | Application server provisioning by disk image inheritance |
US7945751B2 (en) | 2006-12-18 | 2011-05-17 | International Business Machines Corporation | Disk image inheritance |
US8127084B2 (en) | 2007-01-08 | 2012-02-28 | International Business Machines Corporation | Using different algorithms to destage different types of data from cache |
US7721043B2 (en) | 2007-01-08 | 2010-05-18 | International Business Machines Corporation | Managing write requests in cache directed to different storage groups |
US20080168220A1 (en) * | 2007-01-08 | 2008-07-10 | International Business Machines Corporation | Using different algorithms to destage different types of data from cache |
US20100174867A1 (en) * | 2007-01-08 | 2010-07-08 | International Business Machines Corporation | Using different algorithms to destage different types of data from cache |
US7783839B2 (en) | 2007-01-08 | 2010-08-24 | International Business Machines Corporation | Using different algorithms to destage different types of data from cache |
US20080168234A1 (en) * | 2007-01-08 | 2008-07-10 | International Business Machines Corporation | Managing write requests in cache directed to different storage groups |
US20080228802A1 (en) * | 2007-03-14 | 2008-09-18 | Computer Associates Think, Inc. | System and Method for Rebuilding Indices for Partitioned Databases |
US8694472B2 (en) * | 2007-03-14 | 2014-04-08 | Ca, Inc. | System and method for rebuilding indices for partitioned databases |
US7984327B2 (en) * | 2007-10-08 | 2011-07-19 | Axxana (Israel) Ltd. | Fast data recovery system |
US20090094425A1 (en) * | 2007-10-08 | 2009-04-09 | Alex Winokur | Fast data recovery system |
US8458127B1 (en) * | 2007-12-28 | 2013-06-04 | Blue Coat Systems, Inc. | Application data synchronization |
US8271753B2 (en) | 2008-07-23 | 2012-09-18 | Hitachi, Ltd. | Storage controller and storage control method for copying a snapshot using copy difference information |
US20100023716A1 (en) * | 2008-07-23 | 2010-01-28 | Jun Nemoto | Storage controller and storage control method |
US8661215B2 (en) | 2008-07-23 | 2014-02-25 | Hitachi Ltd. | System and method of acquiring and copying snapshots |
US8738874B2 (en) | 2008-07-23 | 2014-05-27 | Hitachi, Ltd. | Storage controller and storage control method for snapshot acquisition and remote copying of a snapshot |
US20110246731A1 (en) * | 2009-03-18 | 2011-10-06 | Hitachi, Ltd. | Backup system and backup method |
US20110167044A1 (en) * | 2009-04-23 | 2011-07-07 | Hitachi, Ltd. | Computing system and backup method using the same |
US8745006B2 (en) * | 2009-04-23 | 2014-06-03 | Hitachi, Ltd. | Computing system and backup method using the same |
US10289523B2 (en) * | 2010-07-12 | 2019-05-14 | International Business Machines Corporation | Generating an advanced function usage planning report |
US10169747B2 (en) | 2010-07-12 | 2019-01-01 | International Business Machines Corporation | Advanced function usage detection |
US20120011514A1 (en) * | 2010-07-12 | 2012-01-12 | International Business Machines Corporation | Generating an advanced function usage planning report |
US20120030503A1 (en) * | 2010-07-29 | 2012-02-02 | Computer Associates Think, Inc. | System and Method for Providing High Availability for Distributed Application |
US8578202B2 (en) * | 2010-07-29 | 2013-11-05 | Ca, Inc. | System and method for providing high availability for distributed application |
US9983819B2 (en) | 2010-08-19 | 2018-05-29 | International Business Machines Corporation | Systems and methods for initializing a memory system |
US9235348B2 (en) | 2010-08-19 | 2016-01-12 | International Business Machines Corporation | System, and methods for initializing a memory system |
US8555010B2 (en) * | 2011-03-24 | 2013-10-08 | Hitachi, Ltd. | Computer system and data backup method combining flashcopy and remote copy |
US20120246424A1 (en) * | 2011-03-24 | 2012-09-27 | Hitachi, Ltd. | Computer system and data backup method |
US9335931B2 (en) * | 2011-07-01 | 2016-05-10 | Futurewei Technologies, Inc. | System and method for making snapshots of storage devices |
US20130007389A1 (en) * | 2011-07-01 | 2013-01-03 | Futurewei Technologies, Inc. | System and Method for Making Snapshots of Storage Devices |
US9026849B2 (en) | 2011-08-23 | 2015-05-05 | Futurewei Technologies, Inc. | System and method for providing reliable storage |
US9104603B2 (en) * | 2011-09-19 | 2015-08-11 | Thomson Licensing | Method of exact repair of pairs of failed storage nodes in a distributed data storage system and corresponding device |
US20130073896A1 (en) * | 2011-09-19 | 2013-03-21 | Thomson Licensing | Method of exact repair of pairs of failed storage nodes in a distributed data storage system and corresponding device |
US20140075140A1 (en) * | 2011-12-30 | 2014-03-13 | Ingo Schmiegel | Selective control for commit lines for shadowing data in storage elements |
US20150234600A1 (en) * | 2013-02-11 | 2015-08-20 | International Business Machines Corporation | Selective copying of track data through peer-to-peer remote copy |
US10021148B2 (en) | 2013-02-11 | 2018-07-10 | International Business Machines Corporation | Selective copying of track data through peer-to-peer remote copy |
US9361026B2 (en) * | 2013-02-11 | 2016-06-07 | International Business Machines Corporation | Selective copying of track data based on track data characteristics through map-mediated peer-to-peer remote copy |
US9032172B2 (en) | 2013-02-11 | 2015-05-12 | International Business Machines Corporation | Systems, methods and computer program products for selective copying of track data through peer-to-peer remote copy |
US20150169220A1 (en) * | 2013-12-13 | 2015-06-18 | Fujitsu Limited | Storage control device and storage control method |
US10007602B2 (en) * | 2014-05-06 | 2018-06-26 | International Business Machines Corporation | Flash copy relationship management |
US20150324280A1 (en) * | 2014-05-06 | 2015-11-12 | International Business Machines Corporation | Flash copy relationship management |
US10747714B2 (en) | 2014-12-31 | 2020-08-18 | International Business Machines Corporation | Scalable distributed data store |
US20160188426A1 (en) * | 2014-12-31 | 2016-06-30 | International Business Machines Corporation | Scalable distributed data store |
US10089307B2 (en) * | 2014-12-31 | 2018-10-02 | International Business Machines Corporation | Scalable distributed data store |
US10108352B2 (en) * | 2015-03-03 | 2018-10-23 | International Business Machines Corporation | Incremental replication of a source data set |
CN105938448A (en) * | 2015-03-03 | 2016-09-14 | 国际商业机器公司 | Method and device used for data replication |
CN105938448B (en) * | 2015-03-03 | 2019-01-11 | 国际商业机器公司 | Method and apparatus for data duplication |
CN106302559A (en) * | 2015-05-11 | 2017-01-04 | 阿里巴巴集团控股有限公司 | A kind of data copy method and equipment |
WO2016180168A1 (en) * | 2015-05-11 | 2016-11-17 | 阿里巴巴集团控股有限公司 | Data copy method and device |
US11263231B2 (en) | 2015-05-11 | 2022-03-01 | Alibaba Group Holding Limited | Data copy method and device |
US10223222B2 (en) * | 2015-12-21 | 2019-03-05 | International Business Machines Corporation | Storage system-based replication for disaster recovery in virtualized environments |
US20170177454A1 (en) * | 2015-12-21 | 2017-06-22 | International Business Machines Corporation | Storage System-Based Replication for Disaster Recovery in Virtualized Environments |
US10146452B2 (en) * | 2016-04-22 | 2018-12-04 | International Business Machines Corporation | Maintaining intelligent write ordering with asynchronous data replication |
US10168925B2 (en) * | 2016-08-18 | 2019-01-01 | International Business Machines Corporation | Generating point-in-time copy commands for extents of data |
US10235099B2 (en) * | 2016-08-18 | 2019-03-19 | International Business Machines Corporation | Managing point-in-time copies for extents of data |
US10705765B2 (en) | 2016-08-18 | 2020-07-07 | International Business Machines Corporation | Managing point-in-time copies for extents of data |
US11310137B2 (en) | 2017-02-05 | 2022-04-19 | Veritas Technologies Llc | System and method to propagate information across a connected set of entities irrespective of the specific entity type |
US11748319B2 (en) | 2017-02-05 | 2023-09-05 | Veritas Technologies Llc | Method and system for executing workload orchestration across data centers |
US11086531B2 (en) | 2017-03-30 | 2021-08-10 | Amazon Technologies, Inc. | Scaling events for hosting hierarchical data structures |
US10423342B1 (en) * | 2017-03-30 | 2019-09-24 | Amazon Technologies, Inc. | Scaling events for hosting hierarchical data structures |
US10712953B2 (en) | 2017-12-13 | 2020-07-14 | International Business Machines Corporation | Management of a data written via a bus interface to a storage controller during remote copy operations |
CN111656340A (en) * | 2018-07-06 | 2020-09-11 | 斯诺弗雷克公司 | Data replication and data failover in a database system |
US11853575B1 (en) | 2019-06-08 | 2023-12-26 | Veritas Technologies Llc | Method and system for data consistency across failure and recovery of infrastructure |
US11429640B2 (en) | 2020-02-28 | 2022-08-30 | Veritas Technologies Llc | Methods and systems for data resynchronization in a replication environment |
US11531604B2 (en) | 2020-02-28 | 2022-12-20 | Veritas Technologies Llc | Methods and systems for data resynchronization in a replication environment |
US11847139B1 (en) | 2020-02-28 | 2023-12-19 | Veritas Technologies Llc | Methods and systems for data resynchronization in a replication environment |
US11928030B2 (en) | 2020-03-31 | 2024-03-12 | Veritas Technologies Llc | Optimize backup from universal share |
Also Published As
Publication number | Publication date |
---|---|
US7747576B2 (en) | 2010-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7747576B2 (en) | Incremental update control for remote copy | |
US7904684B2 (en) | System and article of manufacture for consistent copying of storage volumes | |
US9026696B1 (en) | Using I/O track information for continuous push with splitter for storage device | |
US7188222B2 (en) | Method, system, and program for mirroring data among storage sites | |
US7134044B2 (en) | Method, system, and program for providing a mirror copy of data | |
US7516356B2 (en) | Method for transmitting input/output requests from a first controller to a second controller | |
US7225307B2 (en) | Apparatus, system, and method for synchronizing an asynchronous mirror volume using a synchronous mirror volume | |
US7133986B2 (en) | Method, system, and program for forming a consistency group | |
US8521694B1 (en) | Leveraging array snapshots for immediate continuous data protection | |
US6950915B2 (en) | Data storage subsystem | |
US7188272B2 (en) | Method, system and article of manufacture for recovery from a failure in a cascading PPRC system | |
US5682513A (en) | Cache queue entry linking for DASD record updates | |
US8209282B2 (en) | Method, system, and article of manufacture for mirroring data at storage locations | |
US6941490B2 (en) | Dual channel restoration of data between primary and backup servers | |
US7921273B2 (en) | Method, system, and article of manufacture for remote copying of data | |
JP4074072B2 (en) | Remote copy system with data integrity | |
US7133983B2 (en) | Method, system, and program for asynchronous copy | |
US7451283B2 (en) | Method, system, and program for copying tracks between a primary storage and secondary storage | |
US7089446B2 (en) | Method, system, and article of manufacture for creating a consistent copy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICKA, WILLIAM FRANK;REEL/FRAME:012626/0869 Effective date: 20020205 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: TWITTER, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:032075/0404 Effective date: 20131230 |
|
REMI | Maintenance fee reminder mailed |
FPAY | Fee payment |
Year of fee payment: 4 |
|
SULP | Surcharge for late payment |
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.) |
|
FEPP | Fee payment procedure |
Free format text: 7.5 YR SURCHARGE - LATE PMT W/IN 6 MO, LARGE ENTITY (ORIGINAL EVENT CODE: M1555) |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552) Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |
|
AS | Assignment |
Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: SECURITY INTEREST;ASSIGNOR:TWITTER, INC.;REEL/FRAME:062079/0677 Effective date: 20221027 Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: SECURITY INTEREST;ASSIGNOR:TWITTER, INC.;REEL/FRAME:061804/0086 Effective date: 20221027 Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: SECURITY INTEREST;ASSIGNOR:TWITTER, INC.;REEL/FRAME:061804/0001 Effective date: 20221027 |