US20080082744A1 - Storage system having data comparison function - Google Patents

Storage system having data comparison function

Info

Publication number
US20080082744A1
Authority
US
United States
Prior art keywords
data
storage device
storage
write
storage system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/565,864
Inventor
Yutaka Nakagawa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Hitachi Ltd
Assigned to HITACHI, LTD. Assignment of assignors interest (see document for details). Assignors: NAKAGAWA, YUTAKA
Publication of US20080082744A1

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11C - STATIC STORES
    • G11C7/00 - Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/10 - Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
    • G11C7/1006 - Data managing, e.g. manipulating data before writing or reading out, data bus switches or control circuits therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11C - STATIC STORES
    • G11C13/00 - Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
    • G11C13/0002 - Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
    • G11C13/0021 - Auxiliary circuits
    • G11C13/0069 - Writing or programming circuits or methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/26 - Using a specific storage system architecture
    • G06F2212/261 - Storage comprising a plurality of storage devices
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11C - STATIC STORES
    • G11C13/00 - Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
    • G11C13/0002 - Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
    • G11C13/0021 - Auxiliary circuits
    • G11C13/0069 - Writing or programming circuits or methods
    • G11C2013/0076 - Write operation performed depending on read result
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11C - STATIC STORES
    • G11C2207/00 - Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
    • G11C2207/22 - Control and timing of internal memory operations
    • G11C2207/2245 - Memory devices with an internal cache buffer


Abstract

Before writing a first data from a host device to a storage device, a second data stored in a write destination location on the storage device is read and compared with the first data, and if they match, the first data is not written in the storage device.

Description

    CROSS-REFERENCE TO PRIOR APPLICATION
  • This application relates to and claims the benefit of priority from Japanese Patent Application No. 2006-266604, filed on Sep. 29, 2006, the entire disclosure of which is incorporated herein by reference.
  • BACKGROUND
  • The present invention relates to a storage system.
  • Storage systems using RAID (Redundant Arrays of Inexpensive Disks) technology, which increase the speed of processing read/write requests from a host by operating a plurality of storage devices in parallel and improve reliability through a redundant configuration, have been developed. The non-patent document by D. Patterson, et al., "A Case for Redundant Arrays of Inexpensive Disks (RAID)", Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, pp. 109-116, 1988, describes five RAID configurations, RAID 1 to RAID 5, in detail. In addition to these five, such configurations as RAID 0 and RAID 6 exist, and these configurations are used selectively according to the application.
  • Conventionally a storage device called HDD (Hard Disk Drive), which is a type of magnetic storage device, has been generally used for the storage device of the above mentioned storage system.
  • Other than the above mentioned HDD, a storage device using a storage medium called a flash memory, which is a type of non-volatile semiconductor memory, also exists. Recently a flash memory medium using a storage medium called a NAND type flash memory, of which capacity is increasing and price per unit capacity is decreasing, is used for general computer equipment.
  • Unlike HDD, a flash memory does not require time for moving a magnetic head, so overhead time required for data access can be decreased, and response performance can be improved compared with HDD.
  • However, each storage element of a flash memory has a limit on the erase count (guaranteed count) for overwriting data. Japanese Patent No. 3407317 discloses a technology for a storage device that reduces the imbalance in erase execution counts by managing the erase count of each erasing unit of the flash memory and writing data that would otherwise go to an area with a high erase count into an area with a low erase count, so as to suppress the deterioration of the flash memory.
  • By using the technology disclosed in Japanese Patent No. 3407317, the imbalance in erase execution counts among the storage elements can be decreased, so the time at which the erase count reaches the guaranteed count can be delayed. However, if this technology is used in a storage system having a flash memory, many I/Os may be generated in the storage system, the erase count may reach the guaranteed count (that is, the life may run out) in a short time, and the flash memory must then be replaced. Such a problem also occurs when a storage system has another type of storage device whose write count or erase count is limited.
  • A further characteristic of a flash memory is that its write performance is poor (i.e. slow) compared with its read performance. Other storage devices having this characteristic may also exist.
  • SUMMARY
  • With the foregoing in view, it is an object of the present invention to extend the life of a storage device installed in a storage system when the storage device has a limitation on its write count or erase count.
  • It is another object of the present invention to improve the write performance of a storage device installed in a storage system when its write performance is poor compared with its read performance.
  • A storage system of the present invention has a cache area and a data comparator, and a controller of the storage system executes the following processing. The controller writes a first data according to a write request received from a host device into the cache area, reads a second data from a write destination location in the storage device according to the write request, and writes the read second data into the cache area. The data comparator compares the first data and the second data written in the cache area. The controller does not write the first data in the storage device if the first data and the second data match as a result of the comparison, and writes the first data on the cache area into the storage device if the first data and the second data do not match.
  • The cache area can be created in a memory, for example. The controller and the data comparator can each be constructed from hardware, a computer program, or a combination thereof (e.g. a part is implemented by a computer program and the rest by hardware). The computer program is read and executed by a predetermined processor. A memory area on a hardware resource, such as a memory, may be used for the information processing performed when the computer program is read by the processor. The computer program may be installed on the computer from a recordable medium, such as a CD-ROM, or may be downloaded to the computer via a communication network.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram depicting a general configuration of the storage system;
  • FIG. 2 is a flow chart depicting an example of the first compare-write processing according to Embodiment 1;
  • FIG. 3 is a flow chart depicting an example of the second compare-write processing according to Embodiment 1;
  • FIG. 4 is a flow chart depicting an example of the third compare-write processing according to Embodiment 1;
  • FIG. 5 is a flow chart depicting the entire write processing when a device not executing compare-write processing exists;
  • FIG. 6 shows a configuration example of a table for managing the executability setting of compare-write processing;
  • FIG. 7 shows a user interface screen for setting the executability of compare-write processing;
  • FIG. 8 is a diagram depicting the data structure of the RAID 5 configuration;
  • FIG. 9 is a flow chart depicting the write processing in the RAID 5 configuration;
  • FIG. 10 is a flow chart depicting the first compare-write processing in the RAID 5 configuration according to Embodiment 2;
  • FIG. 11 is a diagram depicting a general configuration of the storage system according to Embodiment 3; and
  • FIG. 12 is a diagram depicting the compare-write processing in the RAID 1 configuration according to Embodiment 2.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • As examples of embodiments of the present invention, the first to third embodiments will now be described.
  • Embodiment 1
  • Embodiment 1 of the present invention will be described with reference to FIG. 1 to FIG. 7.
  • FIG. 1 shows a configuration example of the storage system.
  • This storage system 200 can be connected with one or a plurality of host computers 100 via a network 101. If necessary, the storage system 200 can be connected with one or a plurality of management computers 110 via a network 111. The network 101 can be a SAN (Storage Area Network), for example. The network 111 can be a LAN (Local Area Network), for example. The networks 101 and 111 need not be separate networks.
  • The host computer 100 is a computer device constructed as a workstation, mainframe, or personal computer, for example. The host computer 100 accesses the storage system 200 and reads/writes data.
  • The management computer 110 is a computer device which accesses the storage system 200 and manages the storage system 200. The management computer 110 and the host computer 100 may be the same computer devices.
  • The storage system 200 can roughly be divided into a storage controller 300 and a storage array 400.
  • The storage controller 300 can be comprised of a host interface 310, a management interface 320, a processor 330, a local memory 340, a cache memory 350, a data comparison circuit 360 and a storage array interface 370. The storage controller 300 can be one or a plurality of circuit boards, for example.
  • The host interface 310 is an interface for performing communication between the host computer 100 and storage system 200. The management interface 320 is an interface for performing communication between the management computer 110 and storage system 200. The storage array interface 370 is an interface for performing communication between the storage controller 300 and storage array 400.
  • The processor 330 controls communication between the host computer 100 and storage system 200, controls communication between the management computer 110 and storage system 200, controls communication between the storage controller 300 and storage array 400, and executes various programs stored in the local memory 340.
  • The local memory 340 stores various programs to be executed by the processor 330, and stores data required for controlling the storage system 200. The programs to be executed by the processor 330 include programs for implementing the later mentioned compare-write of data.
  • The cache memory 350 plays a role of a data buffer which temporarily stores data to be transferred from the host computer 100, management computer 110 or storage array 400 to the storage controller 300, or stores data required for controlling the storage system 200.
  • The data comparison circuit 360 is a circuit for judging whether two data match or mismatch in the later mentioned data compare-write processing. In the description of the embodiment, the data comparison circuit 360 is implemented as hardware, but may be implemented as a program which is stored in the local memory 340 and is executed by the processor 330.
  • The storage array 400 can be comprised of one or a plurality of storage devices 410. The storage device 410 is, for example, a flash memory, a hard disk drive, an optical disk, a magneto-optical disk, or a magnetic tape, but is not restricted to any particular type of device. A plurality of types of storage devices may coexist in the storage array.
  • When the storage system 200 receives a write request from the host computer 100, compare-write processing is executed. Several types of compare-write processing will now be described. In the following description, a case of storing data, of which a write is requested, in the storage device 410 a (device A in FIG. 1) will be considered. Here the write target data of the write request from the host computer 100 is called "new data", and the data already written at the storage destination address (an address in the storage device 410 a) of the new data is called "old data".
  • FIG. 2 shows an example of the flow of the first compare-write processing. In FIG. 2, “step” is abbreviated by “S”.
  • In the first compare-write processing, the entire new data and entire old data, not a part, are compared. In the following description, the data as a whole may be expressed as “entire data”.
  • First, in step 500, the processor 330, which reads and executes a predetermined computer program, writes the entire new data according to the received write request into the cache memory 350, reads the entire old data from the storage device 410 a, and writes the entire old data into the cache memory 350. Specifically, for example, the processor 330 specifies the above mentioned storage destination address from the write destination information specified by the received write request, and reads the entire old data from that storage destination address.
  • Then in step 510, the data comparison circuit 360 compares the entire new data and the entire old data on the cache memory 350. In this case, for example, the processor 330 may set, in the data comparison circuit 360, the respective write locations of the new data and the old data on the cache memory 350, so that the data comparison circuit 360 reads the entire new data and the entire old data from the addresses set at that time and compares them. Alternatively, the respective write locations of the new data and old data on the cache memory 350 may be predetermined, so that the data comparison circuit 360 reads the new data and old data from the predetermined locations.
  • In step 520, if the comparison result in step 510 is a match, processing advances to step 540. This is because it is unnecessary to write the entire new data, since the entire old data, of which contents are the same as the entire new data, already exists in the storage destination address. In step 540, the processor 330 sets the new data on the cache memory 350 to an erasable state, for example, and ends the compare-write processing. The erasable state means a data management state wherein writing of other data to the storage area of this data is enabled by clearing the overwrite inhibit flag, for example.
  • In step 520, if the comparison result in step 510 is a mismatch, processing advances to step 530. This is because the entire new data must be written to the storage destination address. In step 530, the processor 330 writes the new data in the storage device 410 a, and then processing advances to step 540.
  • Possible units for comparing the data in step 510 are, for example, one piece of data obtained by dividing the entire new data into one or more pieces, a multiple of the minimum write unit (the minimum data size of one write execution) of the storage device 410 a, a multiple of the minimum read unit (the minimum data size of one read execution) of the storage device 410 a, or a multiple of the minimum erase unit (the minimum data size of one erase execution) of the storage device 410 a.
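  • For illustration only, the following minimal Python sketch models the FIG. 2 flow (steps 500 to 540) against an in-memory byte array; the function and variable names are assumptions made for this example and are not taken from the patent.

```python
def first_compare_write(device: bytearray, cache: dict, addr: int, new_data: bytes) -> bool:
    cache["new"] = bytes(new_data)                            # step 500: entire new data into the cache
    cache["old"] = bytes(device[addr:addr + len(new_data)])   # step 500: entire old data into the cache
    wrote = False
    if cache["new"] != cache["old"]:                          # steps 510/520: compare the entire data
        device[addr:addr + len(new_data)] = cache["new"]      # step 530: mismatch, so write the new data
        wrote = True
    cache["new_erasable"] = True                              # step 540: new data on the cache set erasable
    return wrote

# Identical data is detected and the physical write is skipped.
dev = bytearray(b"ABCDEFGH")
print(first_compare_write(dev, {}, 0, b"ABCD"))   # False: match, no write issued
print(first_compare_write(dev, {}, 0, b"XYCD"))   # True: mismatch, new data written
```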
  • In the first compare-write processing described with reference to FIG. 2, the entire new data and entire old data are compared, but in the second compare-write processing to be described next, data is partially compared first, then the entire data is compared.
  • FIG. 3 shows an example of a flow of the second compare-write processing. Herein below, differences from the first compare-write processing will primarily be described, and description on redundant aspects will be omitted or simplified.
  • In step 600, the processor 330 writes new data in the cache memory 350, and reads old data from the storage device 410 a, and writes it in the cache memory 350.
  • Then in step 610, the data comparison circuit 360 compares a part of the new data and a part of the old data. Here the parts of the data to be compared are portions of data which exist in a same location of the respective entire data. For example, if a part of the new data is a portion of the new data which exists from the beginning to a predetermined position, a part of the old data to be compared with this is also a portion of the old data which exists from the beginning to the predetermined position. The comparison target position will be described later.
  • In step 620, if the partial data comparison result in step 610 is a mismatch, processing advances to step 650. In other words, the processor 330 writes the entire new data in the storage device 410 a. Then processing advances to step 660.
  • In step 620, if the partial data comparison result in step 610 is a match, processing advances to step 630. In other words, the data comparison circuit 360 compares the entire new data and the entire old data (the remaining part of data which was not compared may be compared).
  • In step 640, if the entire data comparison result in step 630 is a mismatch, processing advances to step 650. In other words, the processor 330 writes the new data in the storage device 410 a. Then processing advances to step 660.
  • In step 640, if the entire data comparison result in step 630 is a match, processing advances to step 660, since the new data need not be written.
  • In step 660, just like step 540, the processor 330 sets the new data on the cache memory 350 to the erasable state, and ends compare-write processing.
  • The comparison target position described in step 610 may be a data integrity code shown in Japanese Patent Application Laid-Open No. 2001-202295, a first part of the write data, end of the write data, or an arbitrary location of the write data.
  • The above mentioned second compare-write processing in FIG. 3 includes a partial data comparison which is not included in the first compare-write processing in FIG. 2. By this, when the data must be updated to the new data, representative data can be compared first, so the necessity of an update (that is, the necessity to write the new data in the storage device 410 a) can often be judged without waiting for the comparison of the entire data. Therefore the second compare-write processing is effective when the data volume that can be compared within a predetermined time is limited.
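  • An illustrative sketch of the FIG. 3 flow (steps 600 to 660) follows, under the same assumptions as the previous sketch; it further assumes that the leading bytes serve as the comparison target position and that probe_len does not exceed the data length.

```python
def second_compare_write(device: bytearray, cache: dict, addr: int,
                         new_data: bytes, probe_len: int = 8) -> bool:
    # Step 600: the entire new data and the entire old data are staged in the cache.
    cache["new"] = bytes(new_data)
    cache["old"] = bytes(device[addr:addr + len(new_data)])
    # Step 610: compare only a part first (here the leading probe_len bytes).
    part_match = cache["new"][:probe_len] == cache["old"][:probe_len]
    wrote = False
    # Steps 620-650: on a partial mismatch write immediately; otherwise compare the entire data.
    if not part_match or cache["new"] != cache["old"]:
        device[addr:addr + len(new_data)] = cache["new"]
        wrote = True
    cache["new_erasable"] = True                               # step 660
    return wrote
```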
  • In the second compare-write processing in FIG. 3, the entire old data is already read from the storage device 410 a before the partial data comparison (step 610) is performed, but in the third compare-write processing, described next, the entire old data is read only when comparison of the entire data becomes necessary.
  • FIG. 4 shows an example of the flow of the third compare-write processing.
  • First in step 700, the processor 330 writes the new data to the cache memory 350, and reads a part of the old data (partial data comparison target position) from the storage device 410 a, and writes it to the cache memory 350.
  • Then in step 710, the data comparison circuit 360 compares a part of the new data and the same part of the old data (that is a part of the old data which was read).
  • In step 720, if the partial data comparison result in step 710 is a mismatch, processing advances to step 760. In other words, the processor 330 writes the new data in the storage device 410 a. Then processing advances to step 770.
  • In step 720, if the partial data comparison result in step 710 is a match, processing advances to step 730. In other words, the processor 330 reads the entire old data of the write target area (entire old data which exists in the range where the entire new data is scheduled to be written) from the storage device 410 a, and writes it in the cache memory 350. Then processing advances to step 740. In other words, the data comparison circuit 360 compares the entire new data and the entire old data.
  • In step 750, if the entire data comparison result is a mismatch in step 740, processing advances to step 760. In other words, the processor 330 writes the new data in the storage device 410 a. Then processing advances to step 770.
  • In step 750, if the entire data comparison result in step 740 is a match, processing advances to step 770.
  • In step 770, just like step 540, the processor 330 sets the new data on the cache memory 350 to erasable state, and ends compare-write processing.
  • In the case of the third compare-write processing shown in FIG. 4, the volume of old data read from the storage device 410 a when the partial data comparison result is a mismatch is smaller than in the compare-write processing in FIG. 3. As a result, the time required to prepare for the comparison processing when the data must be updated to the new data can be decreased. Therefore the third compare-write processing is effective when the data update volume is high.
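  • An illustrative sketch of the FIG. 4 flow (steps 700 to 770), under the same assumptions as the sketches above, showing that the entire old data is read only after the partial comparison matches:

```python
def third_compare_write(device: bytearray, cache: dict, addr: int,
                        new_data: bytes, probe_len: int = 8) -> bool:
    cache["new"] = bytes(new_data)
    # Step 700: only the comparison-target part of the old data is read up front.
    cache["old_part"] = bytes(device[addr:addr + probe_len])
    wrote = False
    if cache["new"][:probe_len] != cache["old_part"]:          # steps 710/720: partial mismatch
        device[addr:addr + len(new_data)] = cache["new"]       # step 760: write the new data
        wrote = True
    else:
        # Step 730: the partial data matched, so now read the entire old data.
        cache["old"] = bytes(device[addr:addr + len(new_data)])
        if cache["new"] != cache["old"]:                       # steps 740/750: entire-data compare
            device[addr:addr + len(new_data)] = cache["new"]   # step 760
            wrote = True
    cache["new_erasable"] = True                               # step 770
    return wrote
```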
  • This compare-write processing can be applied to the entire storage area in the storage array 400, but may also be applied to only a part thereof. In this case, when the host computer 100 sends a write request to the storage system 200, the processor 330 refers to the later mentioned compare-write setting management table 900, for example, and judges whether the write target device is a compare target device (step 800), as shown in FIG. 5. If it is a compare target in step 800, one of the above mentioned first to third compare-write processings is executed (step 810), and if not, compare-write processing is not executed; in other words, normal write processing is executed (step 820).
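  • A minimal sketch of the FIG. 5 dispatch, with a plain dictionary standing in for the compare-write setting management table 900 (the names are assumptions for illustration):

```python
def handle_write(target_device: str, table_900: dict, compare_write, normal_write):
    # Step 800: look up the write target device in the compare-write setting table.
    if table_900.get(target_device, False):
        return compare_write()     # step 810: one of the first to third compare-write processings
    return normal_write()          # step 820: normal write processing
```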
  • FIG. 6 shows a configuration example of the compare-write setting management table 900.
  • In the compare-write setting management table 900, information on whether compare-write processing is executed or not is stored for each predetermined unit. Examples of the predetermined unit are storage system unit, logical device (LU) unit, physical device unit, and each type of storage device 410. The logical device is a logical storage device, which is set using the storage space of one or a plurality of storage devices 410, and is also called a “logical volume” or “logical unit”.
  • The setting values of the compare-write setting management table 900 can be changed by the processor 330 according to the internal state of the storage system 200, such as write count to the storage device 410, or can be changed by a user using the management computer 110, as mentioned later.
  • Specifically, for example, the processor 330 monitors at least one of the write count, erase count, write frequency (write count per unit time) and erase frequency (erase count per unit time) for each LU. If a value acquired by this monitoring exceeds a predetermined threshold, the processor 330 specifies the storage device having the LU whose value exceeded the threshold (by, for example, referring to a table in which the correspondence of LUs and storage devices is recorded), and sets the compare-write setting of the specified storage device to "ON".
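  • The monitoring described above can be sketched as follows; the threshold value and the table names are assumptions for illustration, not values from the patent.

```python
WRITE_COUNT_THRESHOLD = 100_000    # assumed value; the patent does not fix a threshold

def update_compare_setting(table_900: dict, lu_write_counts: dict, lu_to_device: dict) -> None:
    # For each LU, check the monitored write count against the threshold.
    for lu, count in lu_write_counts.items():
        if count > WRITE_COUNT_THRESHOLD:
            device = lu_to_device[lu]   # table recording the correspondence of LU and storage device
            table_900[device] = True    # set compare-write for the specified storage device to "ON"
```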
  • FIG. 7 shows a screen for the user to change a value of the compare-write setting management table 900 using the management computer 110.
  • On this screen, setting values are displayed for each unit of managing the executability of the compare-write processing, and the setting can be changed. The executability of the compare-write processing can be set not only through a graphical interface but also through another interface, such as a command line interface.
  • The present embodiment, which is configured as described above, can suppress the write count to the storage device. Therefore, in a storage system constructed with a storage device whose write count is limited, the life of the storage device can be extended. Also, in a storage system constructed with a storage device whose write performance is poorer than its read performance, the write performance can be improved.
  • Embodiment 2
  • Now Embodiment 2 of the present invention will be described with reference to FIG. 8 to FIG. 10. The present embodiment is a variant form of Embodiment 1, so description of the configuration overlapping with the above mentioned configuration is omitted or simplified, and the differences will primarily be described. In the present embodiment, a case in which stored data is made redundant among a plurality of storage devices 410 using RAID technology will be described.
  • FIG. 8 shows a configuration of RAID 5 to be used for description of Embodiment 2. First the general processing is described, then a method of using the compare-write processing of the present invention will be described.
  • Here a case of the 4D+1P configuration using five storage devices 410 a to 410 e (in other words, a RAID 5 group comprised of five storage devices) will be considered. In a data group for generating a certain parity, the data to be stored in the storage device 410 a is called D11, the data to be stored in the storage device 410 b is called D12, the data to be stored in the storage device 410 c is called D13, the data to be stored in the storage device 410 d is called D14, and the parity to be stored in the storage device 410 e is called P1. At this time, P1=D11 XOR D12 XOR D13 XOR D14 is established, where XOR indicates exclusive OR.
  • In this state, if D11 is updated to D11′, P1 also must be updated to P1′, and can be calculated based on P1′=D11 XOR D11′ XOR P1.
  • FIG. 9 shows this processing. First in step 1000, the processor 330 reads the data D11 (old data D11) stored in the storage device 410 a, and stores it in the cache memory 350. Then in step 1010, the processor 330 reads the parity P1 (old parity P1) stored in the storage device 410 e, and stores it in the cache memory 350. Then in step 1020, the processor 330 calculates a new parity P1′ using the old data D11, the new data D11′ and the old parity P1. Then the processor 330 writes the new data D11′ in the storage device 410 a in step 1030, and writes the new parity P1′ in the storage device 410 e in step 1040. The timing to execute step 1050 is arbitrary, such as before step 1000 or between step 1000 and step 1010.
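  • The parity relations used above can be checked with a short illustrative snippet; the one-byte values are arbitrary and serve only as a worked example.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Illustrative values for D11..D14 and the parity P1.
d11, d12, d13, d14 = b"\x11", b"\x22", b"\x33", b"\x44"
p1 = xor(xor(d11, d12), xor(d13, d14))          # P1 = D11 XOR D12 XOR D13 XOR D14

d11_new = b"\x55"                               # D11 is updated to D11'
p1_new = xor(xor(d11, d11_new), p1)             # P1' = D11 XOR D11' XOR P1
assert p1_new == xor(xor(d11_new, d12), xor(d13, d14))   # equals the parity recomputed from scratch
print(p1.hex(), p1_new.hex())
```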
  • Now the first compare-write processing according to the second embodiment will be described with reference to FIG. 10.
  • First in step 1100, the processor 330 writes the new data D11′ to the cache memory 350, reads the data D11 (old data D11) stored in the storage device 410 a, and writes it in the cache memory 350.
  • Then in step 1110, the processor 330 reads the parity P1 (old parity P1) stored in the storage device 410 e, and writes it in the cache memory 350.
  • Then in step 1120, the data comparison circuit 360 compares the new data D11′ and the old data D11.
  • If the judgment result in step 1130 is a match, processing advances to step 1140, since the data in the storage device 410 a does not need to be updated. In other words, the processor 330 sets the new data D11′ on the cache memory 350 to the erasable state.
  • If the judgment result in step 1130 is a mismatch, processing advances to step 1150. In other words, the processor 330 calculates a new parity P1′ using the old data D11, the new data D11′ and the old parity P1. Then the processor 330 performs the processing of writing the new data D11′ in the storage device 410 a (step 1160), the processing of writing the new parity P1′ in the storage device 410 e (step 1170), the processing to set the new data D11′ on the cache memory 350 to the erasable state (step 1180), and the processing to set the new parity P1′ on the cache memory 350 to the erasable state (step 1190), and ends the compare-write processing.
  • For the four steps from step 1160 to step 1190, the sequence can be changed, as long as the data on the cache memory 350 is set to the erasable state only after the processing to write that data has been performed.
  • For the timing of reading the parity in step 1110, any timing may be used as long as it is before the new parity P1′ is calculated in step 1150; moreover, the parity need not be read at all if the comparison result in step 1130 is a match.
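  • Putting the FIG. 10 flow together, a minimal illustrative sketch (the data and parity devices are modeled as byte arrays, and the names are assumed for this example) might look like:

```python
def raid5_compare_write(data_dev: bytearray, parity_dev: bytearray, cache: dict,
                        addr: int, new_d11: bytes) -> None:
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    n = len(new_d11)
    cache["new"] = bytes(new_d11)                            # step 1100: new data D11'
    cache["old"] = bytes(data_dev[addr:addr + n])            # step 1100: old data D11
    cache["old_parity"] = bytes(parity_dev[addr:addr + n])   # step 1110: old parity P1
    if cache["new"] == cache["old"]:                         # steps 1120/1130: data unchanged
        cache["new_erasable"] = True                         # step 1140: neither data nor parity rewritten
        return
    new_parity = xor(xor(cache["old"], cache["new"]), cache["old_parity"])  # step 1150: P1'
    data_dev[addr:addr + n] = cache["new"]                   # step 1160: write D11'
    parity_dev[addr:addr + n] = new_parity                   # step 1170: write P1'
    cache["new_erasable"] = True                             # step 1180
    cache["new_parity_erasable"] = True                      # step 1190
```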
  • In the above example, RAID 5 was used for description, but the present invention can also be constructed using another RAID level which generates a parity and error correction codes from the data, and stores them.
  • If identical data is written to a plurality of storage devices 410, as in the case of RAID 1, it is also possible, in the data comparison step of Embodiment 1, not to compare the data for every device, but to read the old data from only one of the storage devices 410 storing the copied data and compare it with the new data, so that the read count and the comparison count for the old data are decreased.
  • In Embodiment 2, the case of comparing the entire new data and the entire old data was shown, but it is also possible, as shown in Embodiment 1, to perform a partial data comparison first and then compare the entire data.
  • The present embodiment, which has the above configuration, can exhibit not only the same effect as Embodiment 1, but also can decrease overhead applied to compare-write processing using a RAID configuration.
  • Embodiment 3
  • Embodiment 3 of the present invention will now be described with reference to FIG. 11. The present embodiment is a variant form of Embodiment 1 and Embodiment 2, so description of the configuration overlapping with the above mentioned configurations is omitted or simplified, and the differences will primarily be described. In the present embodiment, a case in which the new data and old data are compared by the storage array will be described.
  • FIG. 11 shows a configuration example of the storage system according to Embodiment 3. The difference from FIG. 1 is that the data comparison circuit 360 is not in the storage controller 300, and that a device having an embedded data buffer 430, data comparison circuit 440 and processor 450 is used as the storage device 410.
  • In the present embodiment, reading the old data from the storage device 410 and comparing the new data and the old data, which are performed by the storage controller 300 in Embodiment 1, are performed by a storage device controller 420 in the storage device 410. In other words, after the processor 450 reads the old data from the storage area 460 into the data buffer 430, the data is compared using the data comparison circuit 440, and the new data is written to the storage area 460 only if necessary based on the comparison result.
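  • A minimal sketch of this device-side behavior follows; the class and member names are assumptions made for illustration and are not taken from the patent.

```python
class CompareWriteDevice:
    """Illustrative stand-in for a storage device 410 of Embodiment 3: the device
    controller itself reads the old data into its buffer, compares, and writes only
    when needed."""

    def __init__(self, size: int):
        self.area = bytearray(size)    # storage area 460
        self.buffer = {}               # data buffer 430
        self.compare_enabled = True    # executability, settable from the storage controller 300

    def write(self, addr: int, data: bytes) -> bool:
        if self.compare_enabled:
            self.buffer["old"] = bytes(self.area[addr:addr + len(data)])  # read old data into buffer 430
            if self.buffer["old"] == data:                                # data comparison circuit 440
                return False           # identical: the physical write is skipped
        self.area[addr:addr + len(data)] = data                           # write to storage area 460
        return True
```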
  • If the storage device 410 allows executability of compare-write processing to be set for the entire storage device 410 or for each predetermined unit of the storage area 460, the storage controller 300 can set that executability for the storage device 410 according to a setting received from the management computer 110 or to the access frequency of the storage device 410.
  • A RAID configuration may also be formed among the storage areas 460, for example, if (1) the storage device 410a is the unit of replacement when a part of the storage system fails, or (2) the storage area 460 is the replacement unit.
  • In the present embodiment, constructed as described above, the storage controller 300 need not read the old data or compare the new data with the old data every time the storage device 410 is written to, so the load on the storage controller 300 is shifted to the storage array 400.
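As a rough illustration of the device-side processing of Embodiment 3, the sketch below models a storage device controller that reads the old data into its buffer, compares it with the new data, and writes only on a mismatch, together with a per-area executability flag that the storage controller can set. All class and method names here are assumptions made for illustration, not interfaces defined by the patent.

```python
class StorageDeviceController:
    """Illustrative sketch (not the patent's implementation) of a storage device
    controller that performs compare-write internally, as in Embodiment 3."""

    def __init__(self, storage_area):
        self.storage_area = storage_area     # maps address -> stored block (stand-in for area 460)
        self.data_buffer = {}                # stand-in for the data buffer 430
        self.compare_write_enabled = {}      # per-area executability flags

    def set_compare_write(self, area_id, enabled: bool) -> None:
        # Set from the storage controller, e.g. per a management-computer setting
        # or the observed access frequency of this device (assumed policy).
        self.compare_write_enabled[area_id] = enabled

    def write(self, area_id, addr, new_data: bytes) -> None:
        if self.compare_write_enabled.get(area_id, False):
            old_data = self.storage_area.get(addr)   # read old data into the buffer
            self.data_buffer[addr] = old_data
            if old_data == new_data:                 # role of the comparison circuit 440
                return                               # identical data: suppress the write
        self.storage_area[addr] = new_data           # write only when actually needed
```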
  • The present invention is not limited to the above mentioned embodiments; those skilled in the art could make various additions and changes within the scope of the invention. For example, the storage controller may comprise a plurality of first controllers (e.g. controller boards) for controlling communication with a host device (e.g. a host computer or another storage system 1), a plurality of second controllers (e.g. controller boards) for controlling communication with a storage device, a cache memory for storing data exchanged between the host device and the storage device, a control memory for storing data for controlling the storage system, and a connector (e.g. a switch such as a crossbar switch) for connecting the first controllers, second controllers, cache memory and control memory to one another. In this case, one or both of the first controller and the second controller can perform processing as the storage controller, and the data comparison circuit may reside in any of the first controller, the second controller and the connector. The above mentioned processing executed by the processor 330 may be performed either by a processor installed in the first controller or by a processor installed in the second controller. The control memory is not essential; an area for storing the information that would otherwise be held in the control memory may be created in the cache memory instead.
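The variant architecture just described can be summarized, purely as an illustrative sketch with hypothetical placeholder classes, as a set of host-facing first controllers and device-facing second controllers joined by a connector, any of which may host the data comparison function; the cache memory is shared and the control memory is optional.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DataComparator:
    """Stand-in for the data comparison circuit; it may reside in a first
    controller, a second controller, or the connector."""
    def compare(self, a: bytes, b: bytes) -> bool:
        return a == b


@dataclass
class FirstController:            # controls communication with host devices
    comparator: Optional[DataComparator] = None


@dataclass
class SecondController:           # controls communication with storage devices
    comparator: Optional[DataComparator] = None


@dataclass
class Connector:                  # e.g. a crossbar switch joining the components
    comparator: Optional[DataComparator] = None


@dataclass
class StorageControllerVariant:
    first_controllers: List[FirstController] = field(default_factory=list)
    second_controllers: List[SecondController] = field(default_factory=list)
    connector: Connector = field(default_factory=Connector)
    cache_memory: dict = field(default_factory=dict)  # shared data cache
    control_memory: Optional[dict] = None             # optional; the cache may hold control data
```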

Claims (18)

1. A storage system which has a storage device and receives a write request sent from a host device and stores data according to the write request in the storage device, comprising:
a cache area;
a controller for writing a first data according to a write request received from the host device in the cache area, reading a second data from a write destination location in the storage device according to the received write request, and writing the read second data in the cache area; and
a data comparator for comparing the first data and the second data written in the cache area, wherein
the controller does not write the first data to the storage device if the first data and the second data match as a result of the comparison, and writes the first data that is in the cache area to the storage device if the first data and the second data do not match.
2. The storage system according to claim 1, wherein
the data comparator compares a part of the first data and a part of the second data, and if the part of the first data and the part of the second data match, the data comparator compares the remaining parts, and
the controller writes the first data in the storage device if the result of the comparison of the parts, or the result of the comparison of the remaining parts is a mismatch.
3. The storage system according to claim 1, wherein
the controller reads a part of the second data from the write destination location, and writes a part of the second data to the cache area,
the data comparator compares a part of the first data written in the cache area and a part of the second data written in the cache area, and
the controller writes the first data in the storage device if the part of the first data written in the cache area and the part of the second data written in the cache area mismatch.
4. The storage system according to claim 3, wherein
the controller reads the remaining part of the second data from the storage device and writes the same in the cache area if the result of the comparison is a match,
the data comparator compares the remaining part of the first data and the remaining part of the second data, and
the controller does not write the first data in the storage device if the remaining part of the first data and the remaining part of the second data match, and writes the first data in the storage device if the remaining part of the first data and the remaining part of the second data mismatch.
5. The storage system according to claim 3, wherein the data size of a part of the second data to be read from the storage device is not less than a minimum data size required for the comparison, and is a unit of reading of the storage device.
6. The storage system according to claim 1, wherein
the controller is constructed such that redundant data is generated based on data according to the write request, and the data and the redundant data are written in the storage device,
the data comparator compares the first data and the second data, and does not compare a first redundant data which is a redundant data of the first data and a second redundant data which is a redundant data of the second data, and
the controller does not write the first data and the first redundant data if the result of the comparison is a match, and writes the first data and the first redundant data in the storage device if the result of the comparison is a mismatch.
7. The storage system according to claim 6, wherein
the data comparator compares a part of the first data and a part of the second data, and if the part of the first data and the part of the second data match, the data comparator compares the remaining parts, and
the controller writes the first data and the first redundant data in the storage device if the result of the comparison of the parts or the result of comparison of the remaining parts is a mismatch.
8. The storage system according to claim 6, wherein
a plurality of the storage devices exist, the plurality of storage devices constitute a RAID group, a RAID level of the RAID group is a RAID level which requires generation of parity data, and the redundant data is the parity data.
9. The storage system according to claim 6, wherein the redundant data is an error correction code.
10. The storage system according to claim 1, wherein
a plurality of the storage devices exist, and the controller is constructed so that mirroring processing, to multiplex and write data in the plurality of storage devices, is performed, and if the write request is received, the controller selects one storage device out of the plurality of storage devices, and reads the second data from the write destination location in the selected storage device.
11. The storage system according to claim 1, wherein
a unit of the data to be compared is one of the following (1) to (4):
(1) size of one data when the first data is divided into one or more data;
(2) a multiple of minimum data size required for writing;
(3) a multiple of minimum data size required for reading; and
(4) a multiple of minimum erase size.
12. The storage system according to claim 1, wherein the controller writes only the part of the first data which does not match the second data in the storage device as a result of the comparison.
13. The storage system according to claim 1, further comprising a control unit for receiving a write request from the host device and writing data in the storage device, wherein the cache area, the controller and the data comparator are installed in the storage device.
14. The storage system according to claim 1, wherein the storage device is a flash memory device.
15. The storage system according to claim 1, wherein a plurality of the storage devices exist, and the plurality of storage devices are a plurality of storage areas in one storage device.
16. The storage system according to claim 1, wherein a plurality of the storage devices exist, and the controller reads the second data from the write destination location if the write destination location is a storage device requiring write suppression out of the plurality of storage devices, and writes the first data in the write destination location without reading the second data if not.
17. The storage system according to claim 16, wherein the storage device requiring the write suppression is a storage device in which at least one of write count, erase count, write frequency and erase frequency is a predetermined value or more.
18. A storage control method, comprising the steps of:
receiving a write request sent from a host device;
writing a first data according to the received write request in a cache area;
reading a second data from a write destination location in a storage device according to the received write request;
writing the read second data in the cache area;
comparing the first data and the second data written in the cache area; and
not writing the first data in the storage device if the first data and the second data match as a result of the comparison, and writing the first data that is in the cache area in the storage device if the first data and the second data do not match.
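The steps of the claimed method (claim 18), combined with the optional partial-first comparison of claims 2 to 5, could look roughly like the following sketch. The cache, storage device and request objects and their methods are illustrative assumptions, not definitions taken from the claims.

```python
def store_per_write_request(cache, storage_device, request) -> None:
    """Hypothetical walk-through of the claimed storage control method."""
    first_data = request.data                      # data according to the write request
    dest = request.write_destination
    cache.put(("new", dest), first_data)           # write the first data in the cache area

    # Optional partial-first comparison (claims 2-5): read and compare only a
    # part of the second data; fetch the remainder only if the parts match.
    part = storage_device.read(dest, length=min(512, len(first_data)))
    cache.put(("old", dest), part)
    if part != first_data[:len(part)]:
        storage_device.write(dest, first_data)     # parts mismatch: write the first data
        return

    rest = storage_device.read(dest, offset=len(part),
                               length=len(first_data) - len(part))
    cache.put(("old", dest), part + rest)
    if part + rest != first_data:
        storage_device.write(dest, first_data)     # remaining parts mismatch: write
    # full match: do not write the first data to the storage device
```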
US11/565,864 2006-09-29 2006-12-01 Storage system having data comparison function Abandoned US20080082744A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006-266604 2006-09-29
JP2006266604A JP4372134B2 (en) 2006-09-29 2006-09-29 Storage system with data comparison function

Publications (1)

Publication Number Publication Date
US20080082744A1 true US20080082744A1 (en) 2008-04-03

Family

ID=39286988

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/565,864 Abandoned US20080082744A1 (en) 2006-09-29 2006-12-01 Storage system having data comparison function

Country Status (2)

Country Link
US (1) US20080082744A1 (en)
JP (1) JP4372134B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010020648A (en) * 2008-07-12 2010-01-28 Hitachi Ulsi Systems Co Ltd Storage device
JP5468184B2 (en) * 2010-10-29 2014-04-09 エンパイア テクノロジー ディベロップメント エルエルシー Advanced data encoding with reduced erasure count for solid state drives

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6438665B2 (en) * 1996-08-08 2002-08-20 Micron Technology, Inc. System and method which compares data preread from memory cells to data to be written to the cells
US6320791B1 (en) * 1998-10-26 2001-11-20 Nec Corporation Writing apparatus for a non-volatile semiconductor memory device
US20050132040A1 (en) * 2002-05-08 2005-06-16 Adtron Corporation Method and apparatus for controlling storage medium exchange with a storage controller subsystem
US20060047872A1 (en) * 2004-08-31 2006-03-02 Yutaka Nakagawa Storage system has the function of preventing drive write error
US7366848B1 (en) * 2005-06-02 2008-04-29 Sun Microsystems, Inc. Reducing resource consumption by ineffective write operations in a shared memory system
US7409492B2 (en) * 2006-03-29 2008-08-05 Hitachi, Ltd. Storage system using flash memory modules logically grouped for wear-leveling and RAID

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070211762A1 (en) * 2006-03-07 2007-09-13 Samsung Electronics Co., Ltd. Method and system for integrating content and services among multiple networks
US20080183698A1 (en) * 2006-03-07 2008-07-31 Samsung Electronics Co., Ltd. Method and system for facilitating information searching on electronic devices
US8863221B2 (en) 2006-03-07 2014-10-14 Samsung Electronics Co., Ltd. Method and system for integrating content and services among multiple networks
US8200688B2 (en) 2006-03-07 2012-06-12 Samsung Electronics Co., Ltd. Method and system for facilitating information searching on electronic devices
US20090187723A1 (en) * 2006-04-27 2009-07-23 Nxp B.V. Secure storage system and method for secure storing
US20080133504A1 (en) * 2006-12-04 2008-06-05 Samsung Electronics Co., Ltd. Method and apparatus for contextual search and query refinement on consumer electronics devices
US8935269B2 (en) 2006-12-04 2015-01-13 Samsung Electronics Co., Ltd. Method and apparatus for contextual search and query refinement on consumer electronics devices
US8782056B2 (en) 2007-01-29 2014-07-15 Samsung Electronics Co., Ltd. Method and system for facilitating information searching on electronic devices
US20090055393A1 (en) * 2007-01-29 2009-02-26 Samsung Electronics Co., Ltd. Method and system for facilitating information searching on electronic devices based on metadata information
US8115869B2 (en) 2007-02-28 2012-02-14 Samsung Electronics Co., Ltd. Method and system for extracting relevant information from content metadata
US20080235393A1 (en) * 2007-03-21 2008-09-25 Samsung Electronics Co., Ltd. Framework for corrrelating content on a local network with information on an external network
US8510453B2 (en) * 2007-03-21 2013-08-13 Samsung Electronics Co., Ltd. Framework for correlating content on a local network with information on an external network
US20080266449A1 (en) * 2007-04-25 2008-10-30 Samsung Electronics Co., Ltd. Method and system for providing access to information of potential interest to a user
US9286385B2 (en) 2007-04-25 2016-03-15 Samsung Electronics Co., Ltd. Method and system for providing access to information of potential interest to a user
US8209724B2 (en) 2007-04-25 2012-06-26 Samsung Electronics Co., Ltd. Method and system for providing access to information of potential interest to a user
US8843467B2 (en) 2007-05-15 2014-09-23 Samsung Electronics Co., Ltd. Method and system for providing relevant information to a user of a device in a local network
US20080288641A1 (en) * 2007-05-15 2008-11-20 Samsung Electronics Co., Ltd. Method and system for providing relevant information to a user of a device in a local network
US8176068B2 (en) 2007-10-31 2012-05-08 Samsung Electronics Co., Ltd. Method and system for suggesting search queries on electronic devices
KR101411499B1 (en) 2008-05-19 2014-07-01 삼성전자주식회사 Variable resistance memory device and management method thereof
US20090327592A1 (en) * 2008-06-30 2009-12-31 KOREA POLYTECHNIC UNIVERSITY Industry and Academic Cooperation Foundation Clustering device for flash memory and method thereof
US20100070895A1 (en) * 2008-09-10 2010-03-18 Samsung Electronics Co., Ltd. Method and system for utilizing packaged content sources to identify and provide information based on contextual information
US8938465B2 (en) 2008-09-10 2015-01-20 Samsung Electronics Co., Ltd. Method and system for utilizing packaged content sources to identify and provide information based on contextual information
US9037776B2 (en) * 2009-06-11 2015-05-19 Samsung Electronics Co., Ltd. Storage device with flash memory and data storage method
US20100318879A1 (en) * 2009-06-11 2010-12-16 Samsung Electronics Co., Ltd. Storage device with flash memory and data storage method
US9032245B2 (en) 2011-08-30 2015-05-12 Samsung Electronics Co., Ltd. RAID data management method of improving data reliability and RAID data storage device
US8924351B2 (en) * 2012-12-14 2014-12-30 Lsi Corporation Method and apparatus to share a single storage drive across a large number of unique systems when data is highly redundant
US20140172797A1 (en) * 2012-12-14 2014-06-19 Lsi Corporation Method and Apparatus to Share a Single Storage Drive Across a Large Number of Unique Systems When Data is Highly Redundant
US9448921B2 (en) 2013-01-11 2016-09-20 Empire Technology Development Llc Page allocation for flash memories
US8891296B2 (en) 2013-02-27 2014-11-18 Empire Technology Development Llc Linear Programming based decoding for memory devices
US9424945B2 (en) 2013-02-27 2016-08-23 Empire Technology Development Llc Linear programming based decoding for memory devices
US20140281130A1 (en) * 2013-03-15 2014-09-18 The Boeing Company Accessing non-volatile memory through a volatile shadow memory
US11611723B2 (en) 2013-03-15 2023-03-21 James Carey Self-healing video surveillance system
JP2014182815A (en) * 2013-03-15 2014-09-29 Boeing Co Accessing non-volatile memory through volatile shadow memory
US10089224B2 (en) * 2013-03-15 2018-10-02 The Boeing Company Write caching using volatile shadow memory
US11032520B2 (en) 2013-03-15 2021-06-08 James Carey Self-healing video surveillance system
US11223803B2 (en) 2013-03-15 2022-01-11 James Carey Self-healing video surveillance system
US11683451B2 (en) 2013-03-15 2023-06-20 James Carey Self-healing video surveillance system
US9859925B2 (en) 2013-12-13 2018-01-02 Empire Technology Development Llc Low-complexity flash memory data-encoding techniques using simplified belief propagation
WO2015155824A1 (en) * 2014-04-07 2015-10-15 株式会社日立製作所 Storage system
US11853178B2 (en) 2015-03-05 2023-12-26 Kioxia Corporation Storage system
US11392466B2 (en) 2015-03-05 2022-07-19 Kioxia Corporation Storage system
KR20180081333A (en) * 2017-01-06 2018-07-16 삼성전자주식회사 Memory device comprising resistance change material method for operating the memory device
KR102646755B1 (en) 2017-01-06 2024-03-11 삼성전자주식회사 Memory device comprising resistance change material method for operating the memory device
US10929299B2 (en) 2017-06-15 2021-02-23 Fujitsu Limited Storage system, method and non-transitory computer-readable storage medium
US20230062773A1 (en) * 2021-08-24 2023-03-02 Kioxia Corporation Nonvolatile memory and memory system
US11726715B2 (en) 2021-10-11 2023-08-15 Western Digital Technologies, Inc. Efficient data path in compare command execution

Also Published As

Publication number Publication date
JP2008084270A (en) 2008-04-10
JP4372134B2 (en) 2009-11-25

Similar Documents

Publication Publication Date Title
US20080082744A1 (en) Storage system having data comparison function
US7975168B2 (en) Storage system executing parallel correction write
US9378093B2 (en) Controlling data storage in an array of storage devices
US6523087B2 (en) Utilizing parity caching and parity logging while closing the RAID5 write hole
US20140223223A1 (en) Storage system
US20070067666A1 (en) Disk array system and control method thereof
US20120023287A1 (en) Storage apparatus and control method thereof
US20120079317A1 (en) System and method for information handling system redundant storage rebuild
JPH09231017A (en) Data storage
US8386837B2 (en) Storage control device, storage control method and storage control program
JP2008015769A (en) Storage system and writing distribution method
JP2006195851A (en) Storage system
JP2005122338A (en) Disk array device having spare disk drive, and data sparing method
JP4884721B2 (en) Storage system and storage control method that do not require storage device format
US10338844B2 (en) Storage control apparatus, control method, and non-transitory computer-readable storage medium
JP2007035217A (en) Data saving processing method of disk storage device and disk storage system
US20100115310A1 (en) Disk array apparatus
US6564295B2 (en) Data storage array apparatus, method of controlling access to data storage array apparatus, and program and medium for data storage array apparatus
US20100057978A1 (en) Storage system and data guarantee method
JP2005309818A (en) Storage device, data reading method, and data reading program
US7962690B2 (en) Apparatus and method to access data in a raid array
US8375187B1 (en) I/O scheduling for flash drives
US6931499B2 (en) Method and apparatus for copying data between storage volumes of storage systems
US7293193B2 (en) Array controller for disk array, and method for rebuilding disk array
JP5949816B2 (en) Cache control device and control method therefor, storage device, and computer program

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NAKAGAWA, YUTAKA;REEL/FRAME:018572/0572

Effective date: 20061110

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION