WO2004111881A1 - System and method for utilizing compression in database caches to facilitate access to database information - Google Patents


Info

Publication number
WO2004111881A1
Authority
WO
WIPO (PCT)
Prior art keywords
database
cache
compressed
data
storage device
Application number
PCT/US2004/017259
Other languages
French (fr)
Inventor
Rob Reinauer
Ken White
Chunsheng Sun
Richard Arnold
Sunil Jacob
Desmond Tan
Kevin Lewis
Original Assignee
Pervasive Software, Inc.
Application filed by Pervasive Software, Inc.
Publication of WO2004111881A1 publication Critical patent/WO2004111881A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893 - Caches characterised by their organisation or structure
    • G06F12/0897 - Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0868 - Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/31 - Providing disk cache in a specific location of a storage system
    • G06F2212/311 - In host system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/40 - Specific encoding of data in memory or cache
    • G06F2212/401 - Compressed data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/46 - Caching storage objects of specific type in disk cache
    • G06F2212/465 - Structured object, e.g. database record
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 - Data processing: database and file management or data structures
    • Y10S707/99941 - Database schema or data structure
    • Y10S707/99942 - Manipulating data structure, e.g. compression, compaction, compilation

Abstract

A system and method are disclosed for utilizing compression in database caches to facilitate access to database information. In contrast with applying compression to the database that is stored on disk, the present invention achieves performance advantages by using compression within the main memory database cache used by a database management system to manage data transfers to and from a physical database file stored on a storage system or on a network-attached device or node. The disclosed system and method thereby provide a significant technical advantage by increasing the effective database cache size. And this effective increase in database cache size can greatly enhance the operations-per-second capability of a database management system by reducing unnecessary disk or network accesses, thereby reducing data access times.

Description

SYSTEM AND METHOD FOR UTILIZING COMPRESSION IN DATABASE CACHES TO FACILITATE ACCESS TO DATABASE INFORMATION
Inventors: Rob Reinauer, Ken White, Chunsheng Sun, Richard Arnold, Sunil Jacob, Desmond Tan and Kevin Lewis
TECHNICAL FIELD OF THE INVENTION
[0001] The present invention relates to the management of data and more particularly to database management, for example, in networked client-server environments.
BACKGROUND OF THE INVENTION
[0002] In prior systems, cache memory has been used to improve performance of central processing units (CPUs) in computer systems. This cache memory is typically small in size as compared to main memory and is used to hold a segment of main memory for use by the CPU in its operations. To the extent the CPU can use instructions in cache memory without having to pull new information from main memory, CPU performance is typically enhanced. For this reason, it is often desirable to increase the cache memory size. Limitations exist, however, that hinder the ability to add more physical memory. For example, many operating systems limit how much main memory and cache memory a system can physically access. To increase the effective size of CPU cache memory, therefore, prior solutions have proposed the use of cache compression within the cache memory.
[0003] With respect to database environments, a portion of the main memory for the computer system managing the database is often used by the database management application as a database cache for data being read from or written to the stored database file. The database cache provides a buffer between the access, create and modify instructions from the database users and the database file itself. In addition, the database cache can provide improved access times to client systems to the extent that the database management application can satisfy queries to the database from the data currently sitting in the database cache. The database file is typically stored on some physical media, such as one or more hard disks. With respect to the storage of large database files, prior work has focused on using data compression algorithms to reduce the size of the database files stored on hard drives. In addition, because most existing database access protocols operate on uncompressed data, prior work has also focused on protocols and query methodology for directly accessing the compressed data where the database file is compressed on disk. However, with increases in the speed of CPUs outpacing the speed of disk access, this disk compression can provide only limited improvement due to the physical limitations related to accessing data from a physical disk.
SUMMARY OF THE INVENTION
[0006] The present invention provides a system and method for utilizing compression in database caches to facilitate access to database information. In contrast with applying compression to the database that is stored on disk, the present invention achieves performance advantages by using compression within the main memory database cache used by a database management system to manage data transfers to and from a physical database file stored on a storage system or on a network-attached device or node. As discussed herein, the present invention provides a significant technical advantage by increasing the effective database cache size. And this effective increase in database cache size can greatly enhance the operations-per-second capability of a database management system by reducing unnecessary disk or network accesses, thereby reducing data access times.
[0007] In part, the present invention provides a solution that substantially eliminates or reduces disadvantages and problems associated with previously developed database cache management systems. More specifically, the present invention provides systems and methods for managing data in database caches. This system includes a first data storage location. This first data storage location typically comprises a disk or network resource. Additional data storage locations, typically in the form of local or cache memory, allow database management systems to more quickly access frequently used data from a database cache as opposed to disk. To increase the effective size of the database cache, data stored within the local memory may be compressed. This compression typically becomes desirable when decompressing compressed data from the database cache and supplying this data to the data user can occur more quickly than accessing the original data from disk. The database cache can include both data and instructions, and the data may be formatted into pages or other like data structures as known to those skilled in the art. Furthermore, these pages (data structures) may include one or more flags that indicate whether the page has been compressed.
[0008] Other embodiments may further optimize the present invention by utilizing a local uncompressed database cache and a local compressed database cache. In such an embodiment, data pages, such as the least recently used pages within the local database cache, move to the local compressed database cache when the local uncompressed database cache is full. This use of compressed and uncompressed database caches can enhance performance because the performance penalty for compression/decompression is typically significantly less than disk access costs, even when the compression penalty is incurred multiple times. By utilizing an uncompressed cache for the pages most likely to be re-used, the system can trade the fewest number of compression efforts for the highest number of avoided disk reads. In certain embodiments, the pages within the compressed database cache may only be compressed if the page actually shrinks when compressed and/or when the compressed database cache has become full. This procedure can be used to avoid unnecessary compression and decompression, thus conserving system resources.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] It is noted that the appended drawings illustrate only exemplary embodiments of the invention and are, therefore, not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0010] FIGURE 1 graphically depicts the cache curve demonstrating the benefit of effectively increasing the size of database cache;
[0011] FIGURE 2 depicts one general embodiment of the system and method provided by the present invention;
[0012] FIGURE 3 illustrates an embodiment of the present invention that utilizes two cache memories;
[0013] FIGURE 4 illustrates other embodiments of the present invention utilizing two cache memories;
[0014] FIGURE 5 depicts various structures within a block of compressed data; and
[0015] FIGURE 6 is a block diagram for a client-server database environment in which a database server utilizes compression within its database cache.
DETAILED DESCRIPTION OF THE INVENTION
[0016] The present invention compresses data within a database cache in order to effectively increase the size of the database cache. Increased processor speeds make this possible: the time now required to compress and decompress data and/or instructions is less than the time required to access the data from disk. Previously, the time spent to compress and decompress data and/or instructions exceeded the time required to access and retrieve data and/or instructions from disk. The present invention couples increased processor speeds with high performance compression algorithms as known to those skilled in the art, allowing data and/or instructions to be compressed and decompressed faster than the time required for disk access.
[0017] FIGURE 1 graphically depicts the improved performance of a system as a function of the percentage of data held in cache, assuming a standard hard disk drive is used for persistent storage. Cache curve 12 shows that the benefit, measured as the number of operations per second on the Y-axis, increases exponentially. Improved performance comes from the elimination of the disk access times that may be associated with each operation. Cache curve 12 shows that a change from 10 to 20 percent does not yield a large increase in the number of operations per second; however, an increase from 80 to 100 percent yields a much greater increase in operations per second. The Y-axis on FIGURE 1 has been left without a scale because the actual numbers depend on a number of factors. These factors include the time to acquire a given piece of information from disk, the time to acquire the same piece of information from cache, and the time required to process that piece of information on behalf of the client (where this time is considered the same regardless of the source of the piece of information). Given these values, the Y-axis value for a given percentage of the total information in cache for a given time period can be calculated using the following formula, where "td" represents the time to acquire information from disk, "tm" represents the time to acquire information from memory, "tp" represents the time to process a given piece of information, and "tt" represents the total time taken to run the test:
Y = tt / ((X * tm) + ((100 - X) * td) + tp)
Assuming current hard drive technology, whose access times are multiple orders of magnitude slower than typical RAM access times, and a reasonable "tp" value, Y will increase exponentially as X increases. The reason for this relationship is the large disparity between "tm" and "td."
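The relationship is easy to reproduce numerically. The following minimal Python sketch evaluates the formula above with hypothetical timings (5 ms per disk access, 100 ns per memory access, 10 microseconds of processing per operation); the percentage weights are normalized to fractions of one operation so the result comes out in operations per second, and all of the timing values are assumptions chosen only to illustrate the shape of cache curve 12:

    def ops_per_second(x_pct, tm, td, tp, tt=1.0):
        # Average cost of one operation when x_pct percent of accesses
        # are served from memory (tm) and the rest from disk (td), plus
        # tp seconds of processing per operation.
        avg_cost = (x_pct / 100.0) * tm + ((100.0 - x_pct) / 100.0) * td + tp
        return tt / avg_cost  # operations completed in tt seconds

    # Assumed timings: 5 ms disk access, 100 ns memory access, 10 us processing.
    for pct in (10, 20, 80, 100):
        print(pct, round(ops_per_second(pct, tm=100e-9, td=5e-3, tp=10e-6)))

Under these assumptions, moving from 10 to 20 percent cached gains only a few dozen operations per second, while moving from 80 to 100 percent gains nearly one hundred thousand, which is the behavior cache curve 12 depicts.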
[0018] The method and system of the present invention add incremental costs to the access of cache through various processes. Primarily, compression, decompression and cache management all add incremental costs. These costs previously exceeded those associated with disk I/O access. However, increased processor speeds have greatly reduced these costs while disk I/O costs remain relatively unchanged.
[0019] The present invention provides a system and method of effectively increasing the database cache or local memory for database management systems, which greatly improves the performance of many database applications. FIGURE 2 generally illustrates how the present invention handles data and/or instructions, which are contained within pages. For example, when user 26 requests access to page 22, cache manager 28 causes page 22 to be accessed from disk 24, and compression/decompression algorithm 30 compresses page 22 into cache 32. When page 22 is requested again, cache manager 28 directs compression/decompression algorithm 30 to decompress page 22 from cache 32, and page 22 is then provided to user 26. In this way, subsequent accesses to page 22 are satisfied from cache 32 rather than from disk 24.
[0020] If desired, as shown in FIGURE 3, two or more database caches may be used by a database management system, according to the present invention. Here, page 22, when first accessed, resides within uncompressed cache 34. This embodiment uses two caches: uncompressed cache 34 and compressed cache 36. Users always read data and/or instructions from uncompressed cache 34. Hence, if page 22 has been compressed, the compression/decompression algorithm 40 decompresses page 22 and delivers the page to uncompressed cache 34. Compression/decompression algorithm 40 compresses the least recently used (LRU) pages from uncompressed cache 34 into compressed cache 36. These actions are directed by cache manager 41. Other cache management strategies may be utilized, if desired, such as a least-recently-used strategy in which relative "ages" of the cached information are kept and the information that has remained unused for the longest time is replaced, a least-frequently-used strategy in which the number of times information has been used over some number of uses or period of time is tracked and the information that is least used is replaced, and a first-in-first-out strategy in which the first information added to the database cache is the first information to be replaced.
[0021] FIGURE 4, similar to FIGURE 3, again illustrates an embodiment of the present invention containing both an uncompressed cache 46 and a compressed cache 48. Here, page 42 initially resides on disk 44. When needed by user application 50, cache manager 52 directs that page 42 be retrieved from disk 44 and stored within uncompressed cache 46. Additional pages are stored in uncompressed cache 46 until uncompressed cache 46 has been filled. As uncompressed cache 46 fills, cache manager 52 directs that the LRU page, page 54, stored within uncompressed cache 46 be compressed via compression/decompression algorithm 56 and stored within compressed cache 48. LRU page 54 remains stored within compressed cache 48 until needed. When needed, compression/decompression algorithm 56 decompresses the page, which is then stored within uncompressed cache 46. When compressed cache 48 is full, cache manager 52 directs that the LRU page, page 58, within compressed cache 48 be deleted or written over, as represented by "trash" block 59. The decision to push the LRU pages from uncompressed cache 46 to compressed cache 48, and to delete the LRU page from compressed cache 48, follows from the observation that users are more likely to access recently accessed pages; this approach avoids the costs associated with repeatedly compressing and uncompressing frequently used pages. Although the present invention uses an LRU cache management technique, any cache management technique known to those skilled in the art may be used in its place. When a requested page has been deleted from compressed cache 48, that page must be read from disk 44.
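The two-level flow of FIGURE 4 can be modeled in a few lines of Python. This is an illustrative sketch only, not the patent's implementation: zlib stands in for compression/decompression algorithm 56, capacities are counted in pages rather than bytes, and read_page is an assumed callback that fetches a page from disk:

    import zlib
    from collections import OrderedDict

    class TwoLevelCache:
        def __init__(self, hot_capacity, cold_capacity, read_page):
            self.hot = OrderedDict()    # page_id -> uncompressed bytes
            self.cold = OrderedDict()   # page_id -> compressed bytes
            self.hot_capacity = hot_capacity
            self.cold_capacity = cold_capacity
            self.read_page = read_page  # called on a miss in both caches

        def get(self, page_id):
            if page_id in self.hot:                   # uncompressed hit
                self.hot.move_to_end(page_id)
                return self.hot[page_id]
            if page_id in self.cold:                  # compressed hit
                raw = zlib.decompress(self.cold.pop(page_id))
            else:                                     # miss: read from disk
                raw = self.read_page(page_id)
            self._install(page_id, raw)
            return raw

        def _install(self, page_id, raw):
            self.hot[page_id] = raw
            if len(self.hot) > self.hot_capacity:
                # Compress the LRU page of the uncompressed cache into
                # the compressed cache.
                victim_id, victim = self.hot.popitem(last=False)
                self.cold[victim_id] = zlib.compress(victim)
                if len(self.cold) > self.cold_capacity:
                    self.cold.popitem(last=False)     # the "trash" block 59

A production cache manager would budget by bytes rather than page counts and would write dirty pages back to disk before discarding them; the sketch shows only the read path described above.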
[0022] The present invention also provides the ability to compress and decompress asynchronously. Both compression and decompression impose cache management costs: moving data and/or instructions from one cache to another involves updating pointers within the cache manager, and the compression and decompression themselves require processor time. Compressing asynchronously queues uncompressed pages for compression when processor time becomes available. Asynchronous decompression similarly queues pages for decompression but requires predictive read-ahead.
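A minimal sketch of the asynchronous compression path follows, using a queue and a background thread; the names are hypothetical, zlib again stands in for the compression algorithm, and a real system would schedule the worker around processor idle time rather than running it continuously:

    import queue
    import threading
    import zlib

    compress_queue = queue.Queue()  # (page_id, raw bytes) awaiting compression
    compressed_cache = {}           # page_id -> compressed bytes
    cache_lock = threading.Lock()

    def compression_worker():
        # Drains the queue and compresses queued pages into the
        # compressed cache as processor time becomes available.
        while True:
            page_id, raw = compress_queue.get()
            packed = zlib.compress(raw)
            with cache_lock:
                compressed_cache[page_id] = packed
            compress_queue.task_done()

    threading.Thread(target=compression_worker, daemon=True).start()

    # Eviction path: hand the page off and return immediately, so the
    # only cost on the hot path is the queue insert (a memory copy).
    compress_queue.put((42, b"example page contents" * 100))
    compress_queue.join()           # wait for background work to finish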
[0023] Operating systems often already use predictive read-ahead when accessing files. This type of read-ahead assumes the user will request the page following the one being viewed. The drawback associated with predictive read-ahead occurs when the user does not request what was predicted, thus requiring additional resources to be expended.
[0024] Another embodiment of the present invention addresses the problem of pages that compress to a larger size than the original page (i.e., the page actually expands). This problem can occur with any compression technique; no algorithm is guaranteed to shrink every page.
[0025] In some instances, the cache management technique may store pages whose compressed form would be larger than their uncompressed size only within the uncompressed cache. This feature reduces or eliminates wasted memory but still consumes processing resources on the failed compression attempt.
[0026] Cache compression enables the cache to store more data pages. For example, if all data pages can be compressed to at least 1/2 their original size, then with the size of the cache held constant, the cache can hold twice as many pages. Although having to compress/decompress pages adds overhead (i.e., CPU utilization), this increase in overhead is small when compared to disk I/O access costs.
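The capacity claim is straightforward arithmetic; with assumed sizes:

    cache_bytes = 64 * 2**20   # assumed 64 MB database cache
    page_bytes = 4096          # assumed 4 KB database pages

    uncompressed_capacity = cache_bytes // page_bytes         # 16,384 pages
    compressed_capacity = cache_bytes // (page_bytes // 2)    # 32,768 pages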
[0027] There are a wide variety of different compression algorithms that can be used to compress data, including data in a database cache. For example, the Lempel-Ziv (LZ) compression algorithms provide a technique that encodes a streaming byte sequence using a dynamic table. Popular variations on this technique include the LZ78 and LZ77 families. It is noted that other compression algorithms could be used, as desired. Preferably, a high performance compression/decompression algorithm is used so that the processing overhead for the algorithm does not outweigh the access time benefits provided by the compressed database cache approach of the present invention. In addition, any of a number of well-understood variants of the LZ algorithm may be utilized, as desired, depending upon the particular application, as would be understood by one of skill in the art.
[0028] To improve the compression/decompression algorithm's performance, the algorithms may be modified with the following abilities. First, the algorithm does not compress the data page when the compression ratio is less than 2, and the page may be flagged to eliminate future attempts to compress the same page. Second, the algorithm writes out the compressed data in predetermined sizes such as 256 bytes (256B). For example, if a 1K data page compresses to 356B, the algorithm, when compressing, writes the first 256B of compressed data in one chunk and the remaining 100B in another chunk. It is noted that a trade-off exists with respect to the compression block size, and other compression block sizes may be utilized, as desired. In particular, the smaller the compression block size, the better able the system typically is to take advantage of reduced data size, but the more overhead the system will typically incur in managing the compressed blocks. Third, the algorithm provides a pointer to a compressed data header object that provides information about the compressed data pages.
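A sketch of these three modifications follows, with zlib standing in for the LZ variant and the header represented as a plain dictionary; the field names are assumptions based on the description of FIGURE 5 below:

    import zlib

    CHUNK = 256  # compression block size (256B), per the example above

    def compress_page(page):
        # Returns a header mapping the 256B chunks, or None when the
        # page misses the 2:1 ratio test; the caller flags such pages
        # so they are not repeatedly recompressed.
        packed = zlib.compress(page)
        if len(packed) * 2 > len(page):       # compression ratio < 2
            return None
        return {
            "orig_size": len(page),
            "chunks": [packed[i:i + CHUNK]
                       for i in range(0, len(packed), CHUNK)],
        }

    def decompress_page(header):
        # Gather the related 256B chunks and inflate to original size.
        page = zlib.decompress(b"".join(header["chunks"]))
        assert len(page) == header["orig_size"]
        return page

    header = compress_page(b"\x00" * 1024)    # a highly compressible page
    assert header is None or decompress_page(header) == b"\x00" * 1024

A 1K page that compresses to 356B produces two chunks here, of 256B and 100B, matching the example in the text.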
[0029] FIGURE 5 depicts the various structures that compressed data header object 70 uses to track the pieces of compressed data. As shown in this embodiment, the decompression algorithm gathers the related chunks of 256B compressed data and decompresses them into their original size.
[0030] The integration of cache compression into an existing application may require modifying some structures within the existing application. The main part of the integration work occurs within the two caches. The structures identified begin with the page pointer (page_ptr) 72. Page pointer 72 may include an additional pointer 74 returned by the compression algorithm when the compression is successful. If pointer 74 is NULL, the data page is not compressed.
[0031] As mentioned above, the compressed cache receives a pool of memory 76. The different pools in the compressed cache include a pool of 256B objects 78 that hold compressed data. Compressed data headers 70 serve as the map table or data pointer list for finding the different chunks of related compressed data. In the embodiment depicted, each header is limited to four 256B objects. It is noted, however, that this size could be altered, as desired, without departing from the present invention. And in addition to being any fixed number of blocks, this header could also contain a structure containing a variable number of blocks, such as a vector or linked list.
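The structures described in this and the preceding paragraph might be modeled as follows; this is a sketch with field names assumed from the figure labels, not the patent's actual layout:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class CompressedDataHeader:
        # Maps one page onto up to four 256-byte pool objects in the
        # depicted embodiment; a variable-length vector or linked list
        # could be substituted, as the text notes.
        orig_size: int = 0
        chunks: List[bytes] = field(default_factory=list)

    @dataclass
    class PagePtr:
        # The page pointer described above: 'compressed' is set by a
        # successful compression and is None (NULL) otherwise.
        page_id: int
        data: Optional[bytes] = None                    # uncompressed payload
        compressed: Optional[CompressedDataHeader] = None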
[0032] Synchronous compression and decompression manipulate the data on request. When a thread moves a page from the uncompressed cache to the compressed cache, the thread invokes the compression function to compress the data page directly into the compressed cache's pool of objects. Decompression occurs in the same manner: first the correct size buffer is located in the uncompressed cache, after which the pieces of related compressed data located in the compressed cache are decompressed directly into the uncompressed cache. Compressing on demand requires no additional memory copying, reducing the amount of overhead. The disadvantage arises during heavy paging between the uncompressed and compressed caches when the data pages do not meet the required compression ratio: a failed compression attempt is overhead in addition to the memory copy functions that must be invoked.
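The synchronous decompression path, in which related chunks are inflated directly into a buffer drawn from the uncompressed cache, might be sketched as follows; zlib's streaming interface stands in for the patent's algorithm, and the preallocated bytearray plays the role of the located buffer:

    import zlib

    def decompress_into(chunks, out_buffer):
        # Stream the related compressed chunks through one decompressor,
        # writing output directly into the preallocated page buffer.
        d = zlib.decompressobj()
        pos = 0
        for chunk in chunks:
            piece = d.decompress(chunk)
            out_buffer[pos:pos + len(piece)] = piece
            pos += len(piece)
        tail = d.flush()
        out_buffer[pos:pos + len(tail)] = tail

    page = bytearray(1024)                    # buffer from uncompressed cache
    packed = zlib.compress(b"\xab" * 1024)
    decompress_into([packed[:256], packed[256:]], page)
    assert bytes(page) == b"\xab" * 1024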
[0033] Asynchronous operation is typically best suited for compression. Data pages move from the uncompressed cache to the compressed cache, after which the compression function operates to compress the data. The advantage is that compression can happen at any time. If a heavy paging situation occurs, the overhead incurred on the critical path is only the memory copy function. When less busy, a thread in the compressed cache can start the compression of the data pages queued for compression. This approach will typically invoke an additional memory copy function.
[0034] As noted above, compression and decompression add a certain amount of overhead to the workings of an application. However, the idea trades this overhead for I/O by storing more data pages in cache. Not all database environments will experience a performance boost from cache compression: environments with small databases or CPU-bound systems may actually experience a negative impact on performance. Therefore, a setting for cache compression may be made available to the better-informed user within the application configuration or setup.
[0035] Returning to FIGURE 4, in this and other embodiments of the present invention cache compression does not occur until all the available primary cache has been filled with uncompressed pages. This avoids compression and decompression until needed.
[0036] Compression of the secondary cache may begin when the system becomes I/O bound or may be considered the permanent state of the secondary cache. In this way, if an entire database fits into the total available cache without compression, the processor costs associated with compression and decompression are avoided automatically. This may also imply that an asynchronous thread might try to compress all non-compressed pages before freeing the LRU pages. It would do this in order based on a cache management algorithm, such as MRU/LRU.
[0037] In summary, the present invention provides a system and method that substantially eliminates or reduces disadvantages and problems associated with previously developed database management systems. This system includes a first data storage location, typically a disk or network resource. Additional data storage locations, typically in the form of local or database cache, allow data users to more quickly access frequently used data from local or database cache as opposed to disk.
[0038] To increase the effective size of the database cache stored in memory, the present invention compresses data stored within the database cache. This compression has only now become desirable, as decompressing compressed data from local memory and supplying the decompressed data to the data user can occur more quickly than accessing the original data from disk. Thus, the present invention further includes a processor and instructions operable to decompress the compressed data more quickly than the time required to access the information or data from non-local memory.
[0039] FIGURE 6 is a block diagram for a client-server database environment 600 in which a database server 605 utilizes compression within its database cache 608 to manage the database. In the embodiment depicted, one or more client systems 604A, 604B ... 604C are connected through network 602 to a server-based database management system 605. The database cache 608 provides an interface between the client systems 604A, 604B ... 604C and the database 614 stored on the storage system 606. In its operations, the database management server 605 utilizes a database cache 608. As discussed above and according to the present invention, this database cache 608 includes a compressed cache 612 and an uncompressed cache 610. It is noted that in a client-server environment, the client systems 604A, 604B ... 604C can also utilize a local database cache. For example, client system 604C could include a local database cache 620 that provides an interface between the database related operations of the client system 604C and the database server 605. As such, the client system 604C could cache database information locally in its local database cache 620, thereby reducing the number of accesses the client system 604C needs to make to database server 605 and also reducing latency caused by network access through network 602. In addition, the client system 604C could also utilize compression with respect to its local database cache 620, according to the present invention. As such, the local database cache 620 would include a compressed cache 622 and an uncompressed cache 624. With respect to the database cache 608 and the local database cache 620, if utilized, the ratio of compressed to uncompressed cache can be selected, as desired, and the entire database cache can be compressed if this implementation is desired. In addition, a fixed ratio or a dynamic ratio could be used, as desired. It is noted that where a local database cache is used in addition to the server database cache, coherence between these two caches can be problematic. Solutions to this cache coherence problem are discussed, for example, in U.S. Patent Application No. 10/144,917, filed May 14, 2002, and entitled "SYSTEM AND METHOD OF MAINTAINING FUNCTIONAL CLIENT SIDE DATA CACHE COHERENCE," the entire text and all contents of which are hereby expressly incorporated by reference in its entirety.
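The ratio between the compressed and uncompressed portions reduces to a simple split of the cache budget. A fixed-ratio sketch with assumed numbers follows (a dynamic policy would instead adjust the fraction from observed hit rates):

    def split_cache(total_bytes, compressed_fraction):
        # compressed_fraction = 1.0 compresses the entire database cache.
        compressed_bytes = int(total_bytes * compressed_fraction)
        return total_bytes - compressed_bytes, compressed_bytes

    uncompressed_bytes, compressed_bytes = split_cache(64 * 2**20, 0.25)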
[0040] Further modifications and alternative embodiments of this invention will be apparent to those skilled in the art in view of this description. It will be recognized, therefore, that the present invention is not limited by these example arrangements. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the manner of carrying out the invention. It is to be understood that the forms of the invention herein shown and described are to be taken as the presently preferred embodiments. Various changes may be made in the implementations and architectures for database processing. For example, equivalent elements may be substituted for those illustrated and described herein, and certain features of the invention may be utilized independently of the use of other features, all as would be apparent to one skilled in the art after having the benefit of this description of the invention.

Claims

CLAIMS
What is claimed is:
1. A system for managing access to database information, comprising: a first data storage device configured to store a database; a second data storage device configured to store a database cache, at least a portion of the database cache comprising a compressed cache portion, the database cache also including an uncompressed cache portion where the compressed cache portion is less than the full database cache; and a database management system configured to control data transfers among the database, the compressed portion of the database cache and the uncompressed portion of the database cache, if any, to manage accesses to database information.
2. The system of claim 1, wherein the first data storage device comprises a disk drive.
3. The system of claim 1, wherein the second data storage device comprises memory within a computer system.
4. The system of claim 3, wherein the memory is configured to store database cache information in pages.
5. The system of claim 4, wherein each database cache page stored within memory is configured to include a flag indicating whether or not the page comprises compressed data.
6. The system of claim 1, wherein the entire database cache comprises compressed data and there is no uncompressed cache portion.
7. The system of claim 1, wherein the database management system comprises a server system within a client-server environment.
8. The system of claim 1, wherein the database management system is a client system within a client-server environment, wherein the first data storage device comprises a remote storage device coupled to a server system, and wherein the database cache is a local database cache utilized by the client system.
9. A database management system for managing access to database information in a client-server environment, comprising: a plurality of client systems, the client systems configured to access information in a database; a server system coupled to the client systems through a network, the server system configured to manage transfers of information between the client systems and the database; a first data storage device coupled to the server system and configured to store the database; and a second data storage device coupled to the server system and configured to store a database cache, at least a portion of the database cache comprising a compressed cache portion, the database cache also including an uncompressed cache portion where the compressed cache portion is less than the full database cache; wherein the server system is configured to control data transfers among the database, the compressed portion of the database cache and the uncompressed portion of the database cache, if any, to manage accesses to database information by the client systems.
10. The database management system of claim 9, wherein the first data storage device comprises a disk drive.
11. The database management system of claim 9, wherein the second data storage device comprises memory within a computer system.
12. The database management system of claim 9, wherein the entire database cache comprises compressed data and there is no uncompressed cache portion.
13. The database management system of claim 9, wherein one or more of the client systems utilize a local database cache to manage database information accessed by the client system, the local database cache including at least in part a compressed cache portion.
14. A method for managing access to database information, comprising: storing a database in a first data storage device; storing a database cache in a second data storage device, at least a portion of the database cache comprising a compressed cache portion, the database cache also including an uncompressed cache portion where the compressed cache portion is less than the full database cache; and controlling data transfers among the database, the compressed portion of the database cache and the uncompressed portion of the database cache, if any, to manage accesses to database information.
15. The method of claim 14, wherein the first data storage device comprises a disk drive.
16. The method of claim 14, wherein the second data storage device comprises memory within a computer system.
17. The method of claim 16, further comprising configuring the memory to store database cache information in pages.
18. The method of claim 17, further comprising utilizing for each stored database cache page a flag within the page to indicate whether or not the page comprises compressed data.
19. The method of claim 14, further comprising storing the entire database cache as compressed data such that there is no uncompressed cache portion.
20. The method of claim 14, wherein a server system within a client-server environment performs the controlling step.
PCT/US2004/017259 2003-05-28 2004-05-28 System and method for utilizing compression in database caches to facilitate access to database information WO2004111881A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/447,205 2003-05-28
US10/447,205 US7181457B2 (en) 2003-05-28 2003-05-28 System and method for utilizing compression in database caches to facilitate access to database information

Publications (1)

Publication Number Publication Date
WO2004111881A1 (en) 2004-12-23

Family ID=33551241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/017259 WO2004111881A1 (en) 2003-05-28 2004-05-28 System and method for utilizing compression in database caches to facilitate access to database information

Country Status (2)

Country Link
US (1) US7181457B2 (en)
WO (1) WO2004111881A1 (en)


Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7133054B2 (en) * 2004-03-17 2006-11-07 Seadragon Software, Inc. Methods and apparatus for navigating an image
US7546419B2 (en) * 2004-06-01 2009-06-09 Aguera Y Arcas Blaise Efficient data cache
US7042455B2 (en) * 2003-05-30 2006-05-09 Sand Codex Llc System and method for multiple node display
US7075535B2 (en) * 2003-03-05 2006-07-11 Sand Codex System and method for exact rendering in a zooming user interface
US7930434B2 (en) * 2003-03-05 2011-04-19 Microsoft Corporation System and method for managing communication and/or storage of image data
US7912299B2 (en) * 2004-10-08 2011-03-22 Microsoft Corporation System and method for efficiently encoding data
US7254271B2 (en) * 2003-03-05 2007-08-07 Seadragon Software, Inc. Method for encoding and serving geospatial or other vector data as images
US7761570B1 (en) 2003-06-26 2010-07-20 Nominum, Inc. Extensible domain name service
US7769826B2 (en) * 2003-06-26 2010-08-03 Nominum, Inc. Systems and methods of providing DNS services using separate answer and referral caches
US20060235941A1 (en) * 2005-03-29 2006-10-19 Microsoft Corporation System and method for transferring web page data
US7496589B1 (en) * 2005-07-09 2009-02-24 Google Inc. Highly compressed randomly accessed storage of large tables with arbitrary columns
US7668846B1 (en) 2005-08-05 2010-02-23 Google Inc. Data reconstruction from shared update log
US7548928B1 (en) 2005-08-05 2009-06-16 Google Inc. Data compression of large scale data stored in sparse tables
US20070088920A1 (en) * 2005-10-19 2007-04-19 Philip Garcia Managing data for memory, a data store, and a storage device
US7843911B2 (en) * 2005-11-15 2010-11-30 Nominum, Inc. Data grouping approach to telephone number management in domain name systems
US20070110051A1 (en) * 2005-11-15 2007-05-17 Nominum, Inc. Numeric approach to telephone number management in domain name systems
US20070110049A1 (en) * 2005-11-15 2007-05-17 Nominum, Inc. Data compression approach to telephone number management in domain name systems
US7512597B2 (en) * 2006-05-31 2009-03-31 International Business Machines Corporation Relational database architecture with dynamic load capability
US8077059B2 (en) * 2006-07-21 2011-12-13 Eric John Davies Database adapter for relational datasets
US7694016B2 (en) * 2007-02-07 2010-04-06 Nominum, Inc. Composite DNS zones
US8533661B2 (en) 2007-04-27 2013-09-10 Dell Products, Lp System and method for automated on-demand creation of a customized software application
US8805799B2 (en) * 2007-08-07 2014-08-12 International Business Machines Corporation Dynamic partial uncompression of a database table
US7747585B2 (en) * 2007-08-07 2010-06-29 International Business Machines Corporation Parallel uncompression of a partially compressed database table determines a count of uncompression tasks that satisfies the query
US20090043792A1 (en) * 2007-08-07 2009-02-12 Eric Lawrence Barsness Partial Compression of a Database Table Based on Historical Information
US7987161B2 (en) * 2007-08-23 2011-07-26 Thomson Reuters (Markets) Llc System and method for data compression using compression hardware
US8484351B1 (en) 2008-10-08 2013-07-09 Google Inc. Associating application-specific methods with tables used for data storage
US10430415B2 (en) * 2008-12-23 2019-10-01 International Business Machines Corporation Performing predicate-based data compression
US8843449B2 (en) 2009-06-16 2014-09-23 Bmc Software, Inc. Unobtrusive copies of actively used compressed indices
US8417892B1 (en) * 2009-08-28 2013-04-09 Google Inc. Differential storage and eviction for information resources from a browser cache
US8612374B1 (en) 2009-11-23 2013-12-17 F5 Networks, Inc. Methods and systems for read ahead of remote data
US8443149B2 (en) * 2010-09-01 2013-05-14 International Business Machines Corporation Evicting data from a cache via a batch file
US8566521B2 (en) * 2010-09-01 2013-10-22 International Business Machines Corporation Implementing cache offloading
US8645338B2 (en) 2010-10-28 2014-02-04 International Business Machines Corporation Active memory expansion and RDBMS meta data and tooling
US8583608B2 (en) * 2010-12-08 2013-11-12 International Business Machines Corporation Maximum allowable runtime query governor
KR20130027253A * 2011-09-07 2013-03-15 Samsung Electronics Co., Ltd. Method for compressing data
US9710282B2 (en) 2011-12-21 2017-07-18 Dell Products, Lp System to automate development of system integration application programs and method therefor
US8943076B2 (en) 2012-02-06 2015-01-27 Dell Products, Lp System to automate mapping of variables between business process applications and method therefor
US8805716B2 (en) 2012-03-19 2014-08-12 Dell Products, Lp Dashboard system and method for identifying and monitoring process errors and throughput of integration software
US8782103B2 (en) 2012-04-13 2014-07-15 Dell Products, Lp Monitoring system for optimizing integrated business processes to work flow
US9158782B2 2012-04-30 2015-10-13 Dell Products, Lp Cloud based master data management system with configuration advisor and method therefor
US9015106B2 (en) 2012-04-30 2015-04-21 Dell Products, Lp Cloud based master data management system and method therefor
US9606995B2 (en) 2012-04-30 2017-03-28 Dell Products, Lp Cloud based master data management system with remote data store and method therefor
US8589207B1 (en) 2012-05-15 2013-11-19 Dell Products, Lp System and method for determining and visually predicting at-risk integrated processes based on age and activity
US9069898B2 (en) 2012-05-31 2015-06-30 Dell Products, Lp System for providing regression testing of an integrated process development system and method therefor
US9092244B2 (en) 2012-06-07 2015-07-28 Dell Products, Lp System for developing custom data transformations for system integration application programs
WO2013186828A1 * 2012-06-11 2013-12-19 Hitachi, Ltd. Computer system and control method
US9779027B2 (en) * 2012-10-18 2017-10-03 Oracle International Corporation Apparatus, system and method for managing a level-two cache of a storage appliance
US9772949B2 (en) * 2012-10-18 2017-09-26 Oracle International Corporation Apparatus, system and method for providing a persistent level-two cache
US10642735B2 (en) 2013-03-15 2020-05-05 Oracle International Corporation Statement cache auto-tuning
CN104216914B 2013-06-04 2017-09-15 SAP SE Large-capacity data transmission
US9183074B2 (en) 2013-06-21 2015-11-10 Dell Products, Lp Integration process management console with error resolution interface
WO2015075837A1 * 2013-11-25 2015-05-28 Hitachi, Ltd. Storage device and control method therefor
JP6212137B2 * 2013-12-12 2017-10-11 Hitachi, Ltd. Storage device and storage device control method
US10558571B2 (en) * 2014-03-20 2020-02-11 Sybase, Inc. Second level database file cache for row instantiation
US10015274B2 (en) * 2015-12-31 2018-07-03 International Business Machines Corporation Enhanced storage clients
US10270465B2 (en) 2015-12-31 2019-04-23 International Business Machines Corporation Data compression in storage clients
JP6524945B2 * 2016-03-25 2019-06-05 NEC Corporation Control device, storage device, storage control method and computer program
US10482021B2 (en) 2016-06-24 2019-11-19 Qualcomm Incorporated Priority-based storage and access of compressed memory lines in memory in a processor-based system
US10084615B2 (en) * 2016-11-14 2018-09-25 Electronics And Telecommunications Research Institute Handover method and control transfer method
US10498858B2 (en) 2016-12-14 2019-12-03 Dell Products, Lp System and method for automated on-demand creation of and execution of a customized data integration software application
US11259169B2 * 2017-09-21 2022-02-22 Microsoft Technology Licensing, LLC Highly scalable home subscriber server
US10558364B2 (en) * 2017-10-16 2020-02-11 Alteryx, Inc. Memory allocation in a data analytics system
US11855898B1 (en) 2018-03-14 2023-12-26 F5, Inc. Methods for traffic dependent direct memory access optimization and devices thereof
JP2021043837A * 2019-09-13 2021-03-18 Kioxia Corporation Memory system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5794228A (en) * 1993-04-16 1998-08-11 Sybase, Inc. Database system with buffer manager providing per page native data compression and decompression

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5875454A 1996-07-24 1999-02-23 International Business Machines Corporation Compressed data cache storage system
US6115787A (en) 1996-11-05 2000-09-05 Hitachi, Ltd. Disc storage system having cache memory which stores compressed data
US6879266B1 (en) * 1997-08-08 2005-04-12 Quickshift, Inc. Memory module including scalable embedded parallel data compression and decompression engines
US6324621B2 (en) 1998-06-10 2001-11-27 International Business Machines Corporation Data caching with a partially compressed cache
US7283987B2 (en) 2001-03-05 2007-10-16 Sap Ag Compression scheme for improving cache behavior in database systems
US7054912B2 (en) * 2001-03-12 2006-05-30 Kabushiki Kaisha Toshiba Data transfer scheme using caching technique for reducing network load

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5794228A (en) * 1993-04-16 1998-08-11 Sybase, Inc. Database system with buffer manager providing per page native data compression and decompression

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
CATE V ET AL: "COMBINING THE CONCEPTS OF COMPRESSION AND CACHING FOR A TWO-LEVEL FILESYSTEM", COMPUTER ARCHITECTURE NEWS, ASSOCIATION FOR COMPUTING MACHINERY, NEW YORK, US, vol. 19, no. 2, 1 April 1991 (1991-04-01), pages 200 - 211, XP000203262, ISSN: 0163-5964 *
COCKSHOTT W P ET AL: "Data compression in database systems", DATABASE AND EXPERT SYSTEMS APPLICATIONS, 1998. PROCEEDINGS. NINTH INTERNATIONAL WORKSHOP ON VIENNA, AUSTRIA 26-28 AUG. 1998, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 26 August 1998 (1998-08-26), pages 981 - 990, XP010296751, ISBN: 0-8186-8353-8 *
FRENCH C D: "Teaching an OLTP database kernel advanced datawarehousing techniques", DATA ENGINEERING, 1997. PROCEEDINGS. 13TH INTERNATIONAL CONFERENCE ON BIRMINGHAM, UK 7-11 APRIL 1997, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 7 April 1997 (1997-04-07), pages 194 - 198, XP010218542, ISBN: 0-8186-7807-0 *
GRAEFE G ET AL: "Data compression and database performance", APPLIED COMPUTING, 1991. PROCEEDINGS OF THE 1991 SYMPOSIUM ON KANSAS CITY, MO, USA 3-5 APRIL 1991, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 3 April 1991 (1991-04-03), pages 22 - 27, XP010022657, ISBN: 0-8186-2136-2 *
LEE J-S ET AL: "Performance analysis of a selectively compressed memory system", MICROPROCESSORS AND MICROSYSTEMS, IPC BUSINESS PRESS LTD. LONDON, GB, vol. 26, no. 2, 17 March 2002 (2002-03-17), pages 63 - 76, XP004339935, ISSN: 0141-9331 *
MCDONALD I: "Distributed, configurable memory management in an operating system supporting quality of service", DISTRIBUTED COMPUTING SYSTEMS, 1999. PROCEEDINGS. 7TH IEEE WORKSHOP ON FUTURE TRENDS OF CAPE TOWN, SOUTH AFRICA 20-22 DEC. 1999, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 20 December 1999 (1999-12-20), pages 191 - 196, XP010367838, ISBN: 0-7695-0468-X *
WILSON P R ET AL: "The case for compressed caching in virtual memory systems", PROCEEDINGS OF THE 1999 USENIX ANNUAL TECHNICAL CONFERENCE USENIX ASSOC BERKELEY, CA, USA, 6 June 1999 (1999-06-06) - 11 June 1999 (1999-06-11), pages 101 - 116, XP002299640, ISBN: 1-880446-33-2 *
YANG J ET AL: "FREQUENT VALUE COMPRESSION IN DATA CACHES", PROCEEDINGS OF THE ANNUAL INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, 10 December 2000 (2000-12-10), pages 258 - 265, XP000994541 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102159539A * 2008-06-26 2011-08-17 Laboratorios Silanes S.A. de C.V. A new metformin glycinate salt for blood glucose control
CN102159539B * 2008-06-26 2014-04-16 Laboratorios Silanes S.A. de C.V. A new metformin glycinate salt for blood glucose control
CN111159142A * 2018-11-07 2020-05-15 Mashang Consumer Finance Co., Ltd. Data processing method and device
CN111159142B * 2018-11-07 2023-07-14 Mashang Consumer Finance Co., Ltd. Data processing method and device

Also Published As

Publication number Publication date
US20050015374A1 (en) 2005-01-20
US7181457B2 (en) 2007-02-20

Similar Documents

Publication Publication Date Title
US7181457B2 (en) System and method for utilizing compression in database caches to facilitate access to database information
US6658549B2 (en) Method and system allowing a single entity to manage memory comprising compressed and uncompressed data
US6192432B1 (en) Caching uncompressed data on a compressed drive
US7058783B2 (en) Method and mechanism for on-line data compression and in-place updates
US6857047B2 (en) Memory compression for computer systems
US5812817A (en) Compression architecture for system memory application
JP4831785B2 (en) Adaptive session compression management method, compression manager, and session management system
JP3399520B2 (en) Virtual uncompressed cache in compressed main memory
US6360300B1 (en) System and method for storing compressed and uncompressed data on a hard disk drive
US7895242B2 (en) Compressed storage management
US7058763B2 (en) File system for caching web proxies
US6349375B1 (en) Compression of data in read only storage and embedded systems
US9727479B1 (en) Compressing portions of a buffer cache using an LRU queue
US20070088920A1 (en) Managing data for memory, a data store, and a storage device
US5544349A (en) Method and system for improving the performance of memory in constrained virtual memory environments by reducing paging activity
US20020178176A1 File prefetch control method for computer system
US10963377B2 (en) Compressed pages having data and compression metadata
CN107423425B (en) Method for quickly storing and inquiring data in K/V format
US7526615B2 (en) Compressed victim cache
US6654856B2 (en) System and method for managing storage space of a cache
EP2168060A1 (en) System and/or method for reducing disk space usage and improving input/output performance of computer systems
US7469320B2 (en) Adaptive replacement cache
US6654867B2 (en) Method and system to pre-fetch compressed memory blocks using pointers
US20230021108A1 (en) File storage
JP3171160B2 (en) Compressed file server method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
122 Ep: PCT application non-entry in the European phase