WO1995001600A1 - Predictive disk cache system - Google Patents
- Publication number: WO1995001600A1
- Authority: WIPO (PCT)
Classifications
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F2212/311—Providing disk cache in a specific location of a storage system; in host system
- G06F2212/6024—History based prefetching
Abstract
A predictive disk cache system for use with a host computer tracks read and write requests in operation and builds sequence tables (23, 25), which are copied to non-volatile memory such as a disk drive (11) of the cache system. In one embodiment, the predictive cache system implements a cache (21) in system RAM (random access memory) (17), updates sequence tables in RAM, and copies the tables to the disk drive at low priority. On start-up, the predictive disk cache system loads ahead of any application or secondary operating system to maximize disk activity during a run period. After loading, the system accesses the sequence tables previously built and stored, and follows sequences written therein in loading files from the disk drive.
Description
PREDICTIVE DISK CACHE SYSTEM
Field of Invention
The present invention is in the area of memory caching for digital mass storage systems, and pertains in particular to a system for optimizing computer disk-intensive operations. The invention uses a predictive cache to improve both actual and perceived performance.
Background of the Invention
A disk cache provides a place to store, in dynamic random access memory (DRAM) or static random access memory (SRAM), digital information on its way to or from a digital storage unit. DRAM and SRAM may be collectively referred to as random access memory (RAM). The disk cache serves the memory requirements of a computer's central processing unit (CPU). Disk caching software provides a temporary and fast means of transferring information for computer software applications. In order for the computer to process information, both the software application and the data used by the application must reside in the computer's RAM. The data to be manipulated and altered by the software application typically originates on media inside a digital storage device such as a hard disk drive, CD-ROM, floppy-optical or floppy drive.
The RAM is the first location that disk caching information is sent to on its way to the central processing unit (CPU). All RAM is volatile, which means that all data, including cached data, is lost when the computer is turned off or rebooted. Disk caching software typically starts anew when the computer is restarted, and any previously established disk caches are emptied.
In conventional computer systems the typical mass digital storage device is a hard disk drive, and the descriptions of caching systems in this specification therefore refer to the operation of hard disk drives. Other types of storage drives may be used, however, such as the CD-ROM mentioned above.
The peripheral disk drive can be the single most expensive component in a computer system. The basic design of these drives consists of multiple layers of fast-revolving magnetic disks with multiple magnetic read and write heads floating on a cushion of air a few millionths of an inch above each disk surface. The critical engineering parameters of disk drive construction include stepper-motor actuation of the moving heads, an atmospherically controlled environment, and protection from shock due to external forces. Each magnetically coated disk carries matching concentric tracks, forming cylinders across the disks, with interleaved sector association. This low-level formatting optimizes the clustering of fields for individual application programs, which increases access speeds by the read and write heads.
The primary purpose of a disk cache is to provide the central processing unit even faster access to data stored on the peripheral device by storing it temporarily in RAM, either in the main computer system RAM or in auxiliary disk-controller RAM. Performance in information processing without a RAM cache is directly limited by the average access speeds of modern hard disk drives. A problem is that the average access time for DRAM can be as much as 200,000 times shorter than that of a hard disk drive. Due to the natural physical limitations of all the moving parts within a hard drive, the performance gap between RAM and hard drive technology will likely widen in the future. Also, throughout the development of computer processing, CPU clock speeds have increased along with increasing data transfer rates corresponding to design improvements in bus architecture.
These system improvements demand faster hard drive operation. One logical cost effective solution is to invent better disk caching systems.
Typically, a disk cache intercepts requests for data issued by the host computer's operating system, and first checks in cache RAM to see if the requested file is already there. If it is not, the hard drive is accessed and a copy of the requested file is also placed in a predetermined cache RAM location, either in system RAM or in a dedicated cache RAM on a disk controller device. In the case of a software cache, a resident program in the computer's main memory (RAM) manages the information. Typically, a hardware disk cache uses a separate small processor and separate RAM to control data flow. Each approach has advantages and disadvantages depending on intended use and hardware configuration. Hardware caches are almost never configurable, and current software caches are very limited in flexibility of caching routines. The measure of a successful disk-caching system in either case is determined by software design and memory utilization.
Current user software applications typically request data in a non-linear fashion, which increases cache memory requirements, and the trend in memory requirements of most programs is upward. Moreover, operating systems such as UNIX, OS/2, and Windows encourage software development that places a heavy drain on memory resources. Disk cache design is best served by a balanced algorithm that takes into consideration the size of the cache and what to have residing in it at any point in time.
Typically, reading from a hard drive consumes about 90% of a disk cache's function while writing to a hard drive takes up about 10% of its duties. An example of a state-of-the-art disk cache is a four-set-associative read-ahead algorithm combined with a defer-write or elevator-write algorithm. In such a system, the cache compares read requests against four fully associative mini-caches, each assigned to an area of the disk. If a requested file is not there, the cache copies in the requested file along with predetermined adjacent sectors (or whole tracks, if the cache is big enough) until the mini-cache fills up. Upon filling up, it then dumps the sector that was least recently used (LRU) in order to read ahead the next sector or sectors it doesn't already have in cache memory. The idea here, assuming an unfragmented disk, is that adjacent sectors of data on the hard drive will be needed soon by the application program, or that an already copied sector will soon be needed again. The elevator-write algorithm defers writing back to disk until disk activity slows, and writes back according to the location of the write heads, much as an elevator discharges passengers in floor order. This saves time in overall head movement.
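The conventional set-associative read-ahead scheme described above can be sketched in a few lines of Python. This is an illustrative model only — the class names, the capacity figures, and the simple modulo mapping of sectors onto mini-caches are assumptions of the sketch, not details taken from any actual product:

```python
from collections import OrderedDict

class MiniCache:
    """One fully associative mini-cache with least-recently-used (LRU) eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # sector -> data, ordered oldest-first

    def get(self, sector):
        if sector in self.entries:
            self.entries.move_to_end(sector)  # mark as most recently used
            return self.entries[sector]
        return None

    def put(self, sector, data):
        if sector in self.entries:
            self.entries.move_to_end(sector)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # dump the LRU sector
        self.entries[sector] = data

class SetAssociativeReadAheadCache:
    """Four mini-caches with read-ahead of adjacent sectors on a miss."""
    def __init__(self, sets=4, capacity_per_set=4, read_ahead=2, disk=None):
        self.sets = [MiniCache(capacity_per_set) for _ in range(sets)]
        self.read_ahead = read_ahead
        self.disk = disk if disk is not None else {}  # sector -> data

    def _set_for(self, sector):
        # Illustrative mapping; a real cache would map disk *areas* to sets.
        return self.sets[sector % len(self.sets)]

    def read(self, sector):
        cache = self._set_for(sector)
        data = cache.get(sector)
        if data is not None:
            return data, True                 # cache hit
        data = self.disk[sector]              # miss: go to the drive
        cache.put(sector, data)
        for ahead in range(1, self.read_ahead + 1):
            nxt = sector + ahead              # copy adjacent sectors too
            if nxt in self.disk:
                self._set_for(nxt).put(nxt, self.disk[nxt])
        return data, False
```

A read of sector 0 misses and pulls in sectors 1 and 2 as well, so an immediately following read of sector 1 hits — the behavior the paragraph above attributes to conventional read-ahead caches.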
The overall performance of a disk caching system is measured by benchmarks called cache hit ratio (the percentage of requests found in the disk cache), hit speed and miss speed. A good disk caching program can decrease wait times in practically all cases, which in turn increases performance. A 90% cache hit ratio makes the disk appear to operate roughly ten times faster than it would without the cache.
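The roughly ten-times figure follows from simple weighted-average arithmetic. The sketch below assumes illustrative timings (a 1 µs RAM hit and a 10 ms disk miss — not figures from the text); the exact numbers matter little because the miss time dominates:

```python
def effective_access_time(hit_ratio, hit_time_us, miss_time_us):
    """Average access time seen by the CPU for a given cache hit ratio."""
    return hit_ratio * hit_time_us + (1.0 - hit_ratio) * miss_time_us

ram_us = 1.0          # assumed RAM hit time, ~1 microsecond
disk_us = 10_000.0    # assumed disk miss time, ~10 milliseconds

t = effective_access_time(0.90, ram_us, disk_us)   # 0.9*1 + 0.1*10000 = 1000.9 us
speedup = disk_us / t                              # disk appears ~10x faster
```

With a 90% hit ratio only one request in ten pays the full disk penalty, so the average access time drops to about a tenth of the raw disk time, matching the apparent ten-fold speedup described above.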
Both bus and CPU designs currently surpass the speeds attained by disk drives and other peripheral storage devices, and will soon get even faster. What is needed is a cost-effective system to decrease apparent hard drive access time and increase transfer rates to speeds that coincide with advancements in overall computer performance designs. Such a disk cache system needs to be predictive and persistent rather than volatile, and should adapt to individual users' hardware/software configurations automatically. A predictive cache system would control the caching of files and determine precisely when to collect the data (so that the cache doesn't slow the CPU down) and what data to collect to increase the chances that it is something the loading or running application program will need.
Summary of Invention
In a preferred embodiment a predictive cache system for optimizing read/write operations from and to a non-volatile mass storage device connected on a host computer's bus comprises control means for operating the predictive disk cache system, RAM cache means in communication with the host computer's bus for temporary storage of files fetched from and to be written to the non-volatile mass storage device, and non-volatile sequence table means for storing sequence histories of the read/write operations from and to the non-volatile mass storage device. The control means is configured to write the sequence histories to the non-volatile sequence table means during operation, and to follow sequences recorded thereon in performing the read/write operations.
In one embodiment the cache is implemented in system RAM and sequence tables are stored on the optimized disk drive. In others, hardware caches may be employed. Also in a preferred embodiment sequence tables are selectively associated with specific operating systems or application programs, and sequence information for loading start-up files is recorded in the sequence tables to facilitate start-up of the applications.
In the preferred embodiment, the predictive cache system can outperform existing disk cache systems in the following areas: 1) operating in conjunction with present system disk caching software to default to a "random access mode" which establishes control routines for the highest possible cache hit rate; 2) maintaining the cache hit ratio even in cluttered, fragmented hard disk environments; 3) reducing the size, and therefore the cost, of the required disk cache, freeing up memory resources; 4) reducing seek time and miss time through established sequence patterns loaded into cache memory at system startup; 5) reducing overall component wear and power requirements by eliminating unnecessary disk drive head movements; 6) maintaining closer performance parity with advancements in CPU and bus system design; and 7) reducing start-up time for applications.
Brief Description of Drawings
Fig. 1 is a block diagram of an embodiment of the present invention.
Fig. 2 is a diagram of steps as performed in an embodiment of the invention.
Fig. 3 is a logic flow diagram of operation according to an embodiment of the present invention.
Description of Preferred Embodiments
The present invention is a predictive disk cache system for reducing the execution time of hard-drive-intensive operations, such as Windows startup, to improve computer system performance. Most applications, such as Windows, typically start up in the same sequence, accessing a large number of disk files with most files being accessed just once. A conventional disk cache system such as SMARTDRV, or caching disk adapters such as those from Mylex, cannot improve performance beyond the gains achieved by a simple read-ahead sequence as described above in the "Background" section. Currently available disk caching systems have no predictive abilities to determine what data is required next by a particular software application.
In an embodiment of the present invention, requested data is tracked for patterns such as: which sector is most likely to be read after reading this one? And, if known, which is read after that one, and so on? The Predictive Cache System uses adaptive control routines to increase the probability of a "cache hit". Tracking sequence data is accumulated at run time, stored with updates in a sequence table on a storage device, and read into the Predictive Cache System memory on starting the computer. This ability, unlike that of conventional disk caches, which start anew each time the system is turned off, significantly speeds up application loading procedures. Even during short periods of 100% disk-accessing activity, predominantly on starting complex applications, the Predictive Cache System will improve performance by sequentially reading the disk sectors to be loaded, in reordered, optimized sequence. The present embodiment of the invention has the ability to read large recognized blocks of needed data files before they are required.
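The successor-tracking idea can be illustrated with a minimal sketch. The table below records, for each sector, how often each other sector was read immediately after it, and predicts the most frequent successor. The patent's actual table layout and adaptive-replacement rules are richer than this, so treat the names and structure here as assumptions:

```python
class SequenceTable:
    """Per-sector successor tracking: which sector is most likely read next?"""
    def __init__(self):
        self.successors = {}   # sector -> {next_sector: observation count}
        self.last = None       # sector read immediately before the current one

    def record_read(self, sector):
        """Note that `sector` was read right after the previous sector."""
        if self.last is not None:
            counts = self.successors.setdefault(self.last, {})
            counts[sector] = counts.get(sector, 0) + 1
        self.last = sector

    def predict_next(self, sector):
        """Return the most frequently observed successor, or None if unknown."""
        counts = self.successors.get(sector)
        if not counts:
            return None
        return max(counts, key=counts.get)
```

Replaying a repeated startup trace trains the table, after which `predict_next` follows the recorded chain — the basis for reading blocks of data before they are requested. In the patent's system such a table would be persisted to the drive and reloaded at boot.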
The sequence tables stored contain, for example, the following information for each allocation unit on the disk drive to be accessed.
* Next sector to be read, Y/N
* How quickly will the next sector be needed? This information is used to optimize the sequence of disk accesses, helping the Predictive Cache System to determine whether to read this sector immediately or later.
* Status information to control replacement of this information (for adaptive replacement), and associated information. As mentioned above, most complex programs require large numbers of startup files only once; therefore the Predictive Cache System can dump them from cache RAM just after the operating system loads them, providing more room for other purposes.
Each allocation unit has a corresponding entry in the sequence table in a permanent file, preferably maintained on the drive accessed by the caching system. The tables are updated within the definition of custom control routines. These updates are rewritten to the table on the drive at very low priority so they don't impact system performance.
Fig. 1 is a block diagram illustrating an embodiment of the present invention. In this embodiment, the operating control routines 19 for the Predictive Cache System reside in system RAM 17, in communication with system bus 15. System CPU 13 also communicates on the bus, and a peripheral storage device 11, in this case a hard disk drive, resides on the bus as well. Storage device 11 is the object of the Predictive Disk Cache system in this embodiment.
When the Predictive Cache System intercepts a request for data files from CPU 13 the Predictive Cache System checks its cache 21 first for the requested files. If the files are not in Cache, it updates its sequence table 23 in RAM and immediately forwards the request to the storage device 11. Sequence table 23 in RAM is a temporary copy of a sequence table 25 written to drive 11 as described below.
The requested files are located and provided to CPU 13 for processing according to the algorithms of the application program. Whenever device 11 is not busy servicing required read operations, the Predictive Cache System updates sequence table 25 on the drive. The sequence table is updated first in cache RAM and only later on the storage device; the table is identical at both locations 23 and 25, apart from differences due to the timed delay of multiple updates.
In this example, the diagram shows that sequence table 25 comprises lists of startup files 27, in needed order, for a particular application, and sequence information 29 for a particular sector n on device 11.
In either case, the Predictive Cache System can use past requests to predict future needs in each allocation unit. In this embodiment each sequence table is particular to a specific application, and at startup of any application the proper sequence table is accessed and updated.
Fig. 2 is a flow diagram illustrating steps in disk caching according to an embodiment of the Predictive Cache System. The flow diagram assumes the existing computer system already has a software and/or hardware cache installed. In this embodiment of the invention, the Predictive Cache System works in conjunction with the existing disk cache software. At installation, a configuration utility (step 12) is provided for a user to set algorithm priorities and to specify the associated system-wide caches and hardware configuration. The user can return at any time to the configuration program to change hardware or software addresses as well as to customize overall cache algorithms, such as cache size, application shared-memory allocation ratios, partitioning of the set associations, LAN network nodal assignments and priorities, and system-wide defaults. Options include a number of suggested control routines and their related intentions, each suited for a particular system use. Along with them, an associated chart gives the user a past-performance cache hit ratio for each of the previously tried sequence table routines. Set-up also establishes batch files inside command routines at boot-up, as well as establishing the sub-directories needed by the Predictive Cache System. An optional utility benchmark program displays current performance data related to cache hits, hit speeds and miss speeds.
At step 14, the computer is powered up with the Predictive Cache System already configured into the system. Once the operating system is loaded, the Predictive Cache System is loaded next, at step 16, so it can manipulate existing disk caching software batch routines. This assures optimization by not allocating valuable system memory to two or three caches (three in the case of both an existing hardware and software cache), and also saves redundant "hit and miss" times.
At step 18, the Predictive Cache System scans for startup execution programs in the loading batch file (such as those typically found in the DOS system's AUTOEXEC.BAT) of the operating system. As described above, a sequence table located on the drive is available, having been prepared according to previous operations. This file is now read for the order of operations to proceed. Startup files according to the sequence table are then loaded into cache memory to be compared to disk reading requests.
At step 20 the accessing priorities are set from the sequence table in cache memory, and programs are then loaded to system RAM accordingly. The predetermined startup program files can be loaded quickly, during disk null times, using an elevator-read routine established previously at step 16. During the usual periods of program loading that are 100% disk bound, the sequences established previously in the Predictive Cache System's sequence tables further facilitate orderly access to requested files according to their relative locations on the hard disk.
Once the startup files for a particular application program are loaded to system RAM, most conventional programs won't need them again; therefore, the Predictive Cache System dumps them from cache RAM at step 24. This gives the system in this embodiment all the advantages of predictive caching on startup as well as the advantage of an empty cache ready to access new sequence blocks. In this embodiment, after the initial start-up files are dumped from cache RAM, a new configuration of cache RAM allocations is assigned to individual portions of the disk drive. This best serves both cache RAM memory utilization and the predictive control routines.
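The preload-then-dump behavior for startup files can be sketched as follows. The function name and the use of a plain dictionary to stand in for cache RAM are illustrative assumptions:

```python
def serve_startup(sequence, disk):
    """Preload startup sectors in the recorded order, then serve each request
    from cache and dump the entry immediately after first use. Most startup
    files are read only once, so freeing the space makes room for new
    sequence blocks."""
    cache = {s: disk[s] for s in sequence}    # preloaded at disk-idle time
    def read(sector):
        if sector in cache:
            return cache.pop(sector)          # hit, then dump (one-shot)
        return disk[sector]                   # miss: straight to the drive
    return read
```

The one-shot eviction is what distinguishes this from ordinary LRU caching: the system deliberately discards data it expects never to need again, rather than waiting for it to age out.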
At step 26 the Predictive Cache System proceeds as follows:
* Immediately reads the sector requested (high priority; the system is waiting for this data)
* Using an adaptive replacement algorithm, marks the sector as the next sector to be read in the table entry for the sector read immediately before it.
* Follows the sector numbers and schedules a read-ahead of sectors that are likely to be read next. This read-ahead data is stored to the cache.
* Sets a default time on previously read-ahead files, dumping from memory on time-out any that aren't used soon enough. This may happen even though the sequence data is stored in the table, which guarantees quick access next time for this or other future established sequences.
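The time-out on unused read-ahead data (the last step above) can be sketched with a logical clock. The class name, the tick-based clock, and the `ttl` value are illustrative assumptions of this sketch:

```python
class PrefetchBuffer:
    """Read-ahead data with a time-out: prefetched sectors not taken within
    `ttl` ticks are dumped from cache RAM."""
    def __init__(self, ttl=100):
        self.ttl = ttl
        self.buffer = {}   # sector -> (data, expiry_tick)
        self.tick = 0

    def advance(self, ticks=1):
        """Advance the clock and dump read-ahead entries that timed out."""
        self.tick += ticks
        self.buffer = {s: (d, exp) for s, (d, exp) in self.buffer.items()
                       if exp > self.tick}

    def prefetch(self, sector, data):
        self.buffer[sector] = (data, self.tick + self.ttl)

    def take(self, sector):
        """Consume a prefetched sector, or return None if absent/expired."""
        entry = self.buffer.pop(sector, None)
        return entry[0] if entry else None
```

As the text notes, dumping an expired entry costs little: the successor information survives in the sequence table, so the sector can be prefetched again quickly the next time its sequence is followed.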
When and if the predictive disk cache approaches fill-up, or the application software needs a bigger allocation of system memory (in the case where the control routines for the Predictive Cache System reside in system RAM rather than in a hardware RAM), a flag at step 28 alerts the predictive disk caching system to default to an adaptive replacement of the least recently used (LRU) sequence set of data files. During null periods of disk
activity, or after a user-selectable time-out period, the system writes back all pending program requests, within parameters of minimum head movement, including updates to the sequence tables, at step 30. In the present embodiment, the system can default at any time after step 24 to the previously established caching software (step 32) if a predetermined randomness prevails in overall system disk seeking.
The flow diagram of Fig. 2 represents one embodiment of the invention. In other embodiments the order of steps may be different.
Fig. 3 is a logic flow diagram illustrating operations in an embodiment of the Predictive Cache System. A request is received at function 41 and checked at decision 43 to determine whether it is a "read" or "write" request. If the request is for a "read" file, the system checks the cache for the file at decision 45. This cache can be a combination of the original system cache (for random-access requests) and the predictive cache, or a customized predictive cache only; either may be selected in configuration. If the cache doesn't hold the requested file, the file is read immediately from the hard drive at function 47, at high priority. The requested file is noted in a predictive algorithm at function 48 and the cache sequence table is updated in RAM at function 49.
If a random access is recorded, the system defaults to an existing disk cache routine at function 51. If at decision 53 the disk is not busy, the new sequence is written to the hard drive at function 57 during null periods of disk drive activity. If the disk is busy, control loops back and continues to check for busy until a window opens.
If the file already resides in the disk cache at decision 45, then it is immediately read to the operating system at function 59 and the predictive cache algorithm
is updated at function 61. The system requests a read-ahead at decision 63 at a medium priority level of disk operating activity. This is an operation of the Predictive Cache System according to the present embodiment. If, after a pre-selected number of time frames, the program isn't loaded to the cache, it defaults and the system stops trying at function 73. When a priority window opens, the decision at 63 is yes, and the system checks the fullness of the disk cache at decision 65. As an optional feature, the Predictive Cache System can vary the size and number of files to dump with the size of the request from the sequence tables. If the cache is filling, the adaptive algorithm described above can select "blocks" of sequence files, or random blocks based on the least recently used block, at function 67. If there is room in the cache, the file or sequence of files is read to the cache at function 71.
If at decision 43 the request is a "write" request, the invention first checks the hard disk activity at decision 81 before writing to the disk at function 87. Data associated with write requests is written on an elevator-deferred basis in the write algorithm and is saved to cache RAM at function 83 until hard disk activity is low. The system continues to check for disk activity, and copies to the drive when a window opens. In an alternative embodiment, the priority for writing can be raised to guarantee system integrity in the event of a power failure or system lock-up.
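The elevator-deferred write behavior can be sketched as a buffer that holds writes in cache RAM and, when disk activity is low, flushes them in one sweep ordered by sector position relative to the heads. The single-sweep (scan) ordering and the class name are assumptions of this sketch:

```python
class ElevatorWriteBuffer:
    """Defer writes in cache RAM and flush them sorted by sector when disk
    activity is low, minimizing head movement (elevator/scan order)."""
    def __init__(self):
        self.pending = {}   # sector -> data; the latest write to a sector wins

    def write(self, sector, data):
        self.pending[sector] = data   # saved to cache RAM, deferred

    def flush(self, head_position=0):
        """Return (sector, data) pairs in one sweep: first the sectors at or
        ahead of the head in ascending order, then wrap to the rest."""
        ahead = sorted(s for s in self.pending if s >= head_position)
        behind = sorted(s for s in self.pending if s < head_position)
        order = [(s, self.pending[s]) for s in ahead + behind]
        self.pending.clear()
        return order
```

Because repeated writes to the same sector coalesce in the buffer, only the final version reaches the disk, and the heads visit each pending sector once per sweep rather than seeking back and forth in request order.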
The Predictive Cache System can have features incorporated into existing hardware as well as software. On hardware devices, dedicated cache memory comes with disk drive controllers as well as with sophisticated disk drives. These hardware devices typically take advantage of advanced bus structures such as Extended Industry Standard Architecture (EISA), Video Electronics Standard
Association (VESA) Local Bus and Peripheral Component Interconnect (PCI) bus. They also give the CPU full utilization of system RAM for best overall performance. Another embodiment incorporates a predictive cache in an erasable programmable read-only memory (EPROM) device to be added on a bus structure of a computing device to perform a specific caching operation every time the system is turned on. The EPROM can also contain the optimizing device files needed by the start-up applications. In yet another embodiment a Predictive Disk Cache system is provided to be used on a local area network (LAN). The adaptive sequence tables to be used reflect the given size and typical program application use. The LAN system also incorporates a unique "last-sector-read" for each node on the multi-user system to identify that particular user's sequence tables for future use. In still another embodiment the system can control predictive "writes" to the hard drive as a system-wide back-up feature and/or can be used for repetitive write operations. A predictive disk cache system according to the invention may also be incorporated in an operating system for a computer.
It will be apparent to one skilled in the art that there are many changes that might be made without departing from the spirit and scope of the invention. The Predictive Cache System can be configured on existing hardware or can be incorporated into new optimizing storage devices and/or device controllers. It provides for a fast access to application execution files from start up.
Claims
1. A predictive cache system for optimizing read/write operations from and to a non-volatile mass storage device connected on a host computer's bus, comprising: control means for operating said predictive disk cache system;
RAM cache means in communication with said host computer's bus for temporary storage of files fetched from and to be written to said non-volatile mass storage device; and non-volatile sequence table means for storing sequence histories of said read/write operations from and to said non-volatile mass storage device; said control means configured to write said sequence histories to said non-volatile sequence table means during operation, and to follow sequences recorded thereon in performing said read/write operations.
2. A predictive cache system as in claim 1 wherein said RAM cache means is configured as a portion of system RAM on start-up and reboot, and said sequence table means is configured as a portion of storage area on said non-volatile mass storage device.
3. A predictive cache system as in claim 1 wherein said host computer is a general-purpose computer, and said non-volatile mass storage device is a hard disk drive dedicated to said general-purpose computer.
4. A predictive cache system as in claim 1 wherein said control means is configured to compose and write said non-volatile sequence table means to be selectively associated with specific application programs.
5. A predictive cache system as in claim 4 wherein said non-volatile sequence table means associated with a specific application program comprises sequences to load start-up files for said specific application program in order and other sequences associated with specific sectors on said peripheral storage device within the storage region whereon files associated with said specific application program are stored.
6. A predictive cache system as in claim 1 wherein control routines for said predictive cache system and RAM for the cache are implemented as a hardware unit expansion board for connection to said host computer bus.
7. A predictive disk cache system as in claim 1 wherein said control means comprises control routines for implementing said predictive disk cache system encoded in a programmable read-only memory device to be connected to said host computer's bus.
8. A predictive disk cache system as in claim 1 wherein said control means comprises control routines for implementing said predictive disk cache system imbedded in the operating system of said host computer.
9. A non-volatile mass storage unit connectable on a host computer's I/O bus, comprising: physical storage media; read and write heads positionable to read from and write to said physical storage media;
RAM including a RAM cache for temporary storage of files fetched from and to be written to said mass storage unit; and communication means connected to said read and write heads, to said RAM, and connectable to a host computer's I/O bus, for providing a digital communication path; said RAM further comprising control routines accessible by a host computer for managing read and write operations from and to said mass storage unit through said RAM cache, recording sequence histories of said read and write operations from and to said mass storage unit as sequence tables on said physical storage media during operation, and following sequences recorded thereon in performing said read/write operations.
10. A general-purpose computer system comprising: control means including microprocessor-based CPU means for managing operations of said general-purpose computer system; system RAM for storage of data during computer operations, said system RAM comprising a RAM cache; a non-volatile mass storage device for storage of application routines and data to be used by said general- purpose computer; and system bus means connected to said CPU means, to said system RAM, and to said non-volatile mass storage device for providing digital communication; said control means configured to read from and write to said non-volatile mass storage device through said RAM cache by following sequence data in a sequence table means recorded on said peripheral storage device, and further configured to update said sequence table means according to sequences of operations performed.
11. A general-purpose computer as in claim 10 wherein said non-volatile mass storage device is a hard disk drive connected on said system bus means.
12. A method for caching data transfers between a general purpose computer having a microprocessor-based CPU and a connected non-volatile mass storage device, comprising steps of: reading a sequence table stored on said non-volatile mass storage device; loading files from said non-volatile mass storage device according to a sequence recorded in said sequence table; and storing said files temporarily in a RAM cache accessible by said microprocessor-based CPU until requested by said microprocessor-based CPU.
13. The method of claim 12 comprising additional steps of: tracking read and write requests made by said microprocessor-based CPU; and updating the sequence table according to tracking data accumulated in the step of tracking read and write requests.
14. The method of claim 12 wherein sequence tables are written specifically for existing application routines.
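Claims 12 and 13 describe the core method: load files from the storage device in the order a recorded sequence table predicts, hold them in a RAM cache until the CPU requests them, and update the table from the read/write requests actually observed. The sketch below illustrates that idea in miniature. It is not the patent's implementation; every name in it (the `PredictiveDiskCache` class, the dict standing in for the disk, the LRU capacity) is this example's own assumption.

```python
from collections import OrderedDict, defaultdict

class PredictiveDiskCache:
    """Illustrative sketch: a RAM cache whose prefetches follow a
    'sequence table' of previously observed access order. Names are
    this sketch's own, not taken from the patent."""

    def __init__(self, backing_store, capacity=4):
        self.store = backing_store          # dict standing in for the disk
        self.capacity = capacity
        self.cache = OrderedDict()          # RAM cache with LRU eviction
        self.sequence = defaultdict(list)   # sequence table: file -> observed successors
        self.last_read = None

    def read(self, name):
        # Track request order and update the sequence table (claim 13).
        if self.last_read is not None and name not in self.sequence[self.last_read]:
            self.sequence[self.last_read].append(name)
        self.last_read = name

        if name in self.cache:
            self.cache.move_to_end(name)    # cache hit: refresh LRU position
            data = self.cache[name]
        else:
            data = self.store[name]         # miss: fetch from the backing store
            self._insert(name, data)

        # Prefetch the files the table predicts will be requested next (claim 12).
        for successor in self.sequence.get(name, []):
            if successor not in self.cache and successor in self.store:
                self._insert(successor, self.store[successor])
        return data

    def _insert(self, name, data):
        self.cache[name] = data
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

disk = {"boot.cfg": b"cfg", "app.exe": b"exe", "data.dat": b"dat"}
pdc = PredictiveDiskCache(disk)
pdc.read("boot.cfg"); pdc.read("app.exe")  # first run: learn app.exe follows boot.cfg
pdc.cache.clear(); pdc.last_read = None    # simulate a restart, table retained
pdc.read("boot.cfg")                       # second run: app.exe is prefetched
assert "app.exe" in pdc.cache
```

On the first pass the table merely records that `app.exe` followed `boot.cfg`; on the next pass over the same workload that entry drives the prefetch, so the request for `app.exe` is served from RAM. A full implementation would also serialize the sequence table to the storage media itself, as claim 9 requires, so the learned order survives power-off.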
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US8672293A | 1993-07-02 | 1993-07-02 | |
US08/086,722 | 1993-07-02 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1995001600A1 (en) | 1995-01-12 |
Family
ID=22200461
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US1994/007882 WO1995001600A1 (en) | 1993-07-02 | 1994-07-01 | Predictive disk cache system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO1995001600A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5668814A (en) * | 1995-03-20 | 1997-09-16 | Raychem Corporation | Dual DDS data multiplexer |
US6282204B1 (en) | 1997-12-19 | 2001-08-28 | Terayon Communication Systems, Inc. | ISDN plus voice multiplexer system |
WO2001075581A1 (en) * | 2000-03-31 | 2001-10-11 | Intel Corporation | Using an access log for disk drive transactions |
US6779058B2 (en) | 2001-07-13 | 2004-08-17 | International Business Machines Corporation | Method, system, and program for transferring data between storage devices |
EP1345113A3 (en) * | 2002-03-13 | 2008-02-06 | Hitachi, Ltd. | Management server |
EP1424628A3 (en) * | 2002-11-26 | 2008-08-27 | Microsoft Corporation | Improved reliability of diskless network-bootable computers using non-volatile memory cache |
EP3037961A1 (en) * | 2009-04-20 | 2016-06-29 | Intel Corporation | Booting an operating system of a system using a read ahead technique |
TWI588824B (en) * | 2015-12-11 | 2017-06-21 | 捷鼎國際股份有限公司 | Accelerated computer system and method for writing data into discrete pages |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4334289A (en) * | 1980-02-25 | 1982-06-08 | Honeywell Information Systems Inc. | Apparatus for recording the order of usage of locations in memory |
US4882642A (en) * | 1987-07-02 | 1989-11-21 | International Business Machines Corporation | Sequentially processing data in a cached data storage system |
US4980823A (en) * | 1987-06-22 | 1990-12-25 | International Business Machines Corporation | Sequential prefetching with deconfirmation |
US5093777A (en) * | 1989-06-12 | 1992-03-03 | Bull Hn Information Systems Inc. | Method and apparatus for predicting address of a subsequent cache request upon analyzing address patterns stored in separate miss stack |
US5146578A (en) * | 1989-05-01 | 1992-09-08 | Zenith Data Systems Corporation | Method of varying the amount of data prefetched to a cache memory in dependence on the history of data requests |
US5235697A (en) * | 1990-06-29 | 1993-08-10 | Digital Equipment | Set prediction cache memory system using bits of the main memory address |
US5257370A (en) * | 1989-08-29 | 1993-10-26 | Microsoft Corporation | Method and system for optimizing data caching in a disk-based computer system |
US5283884A (en) * | 1991-12-30 | 1994-02-01 | International Business Machines Corporation | CKD channel with predictive track table |
US5285527A (en) * | 1991-12-11 | 1994-02-08 | Northern Telecom Limited | Predictive historical cache memory |
US5287487A (en) * | 1990-08-31 | 1994-02-15 | Sun Microsystems, Inc. | Predictive caching method and apparatus for generating a predicted address for a frame buffer |
US5289581A (en) * | 1990-06-29 | 1994-02-22 | Leo Berenguel | Disk driver with lookahead cache |
- 1994-07-01: WO PCT/US1994/007882, published as WO1995001600A1 (en), status: active Application Filing
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4334289A (en) * | 1980-02-25 | 1982-06-08 | Honeywell Information Systems Inc. | Apparatus for recording the order of usage of locations in memory |
US4980823A (en) * | 1987-06-22 | 1990-12-25 | International Business Machines Corporation | Sequential prefetching with deconfirmation |
US4882642A (en) * | 1987-07-02 | 1989-11-21 | International Business Machines Corporation | Sequentially processing data in a cached data storage system |
US5146578A (en) * | 1989-05-01 | 1992-09-08 | Zenith Data Systems Corporation | Method of varying the amount of data prefetched to a cache memory in dependence on the history of data requests |
US5093777A (en) * | 1989-06-12 | 1992-03-03 | Bull Hn Information Systems Inc. | Method and apparatus for predicting address of a subsequent cache request upon analyzing address patterns stored in separate miss stack |
US5257370A (en) * | 1989-08-29 | 1993-10-26 | Microsoft Corporation | Method and system for optimizing data caching in a disk-based computer system |
US5235697A (en) * | 1990-06-29 | 1993-08-10 | Digital Equipment | Set prediction cache memory system using bits of the main memory address |
US5289581A (en) * | 1990-06-29 | 1994-02-22 | Leo Berenguel | Disk driver with lookahead cache |
US5287487A (en) * | 1990-08-31 | 1994-02-15 | Sun Microsystems, Inc. | Predictive caching method and apparatus for generating a predicted address for a frame buffer |
US5285527A (en) * | 1991-12-11 | 1994-02-08 | Northern Telecom Limited | Predictive historical cache memory |
US5283884A (en) * | 1991-12-30 | 1994-02-01 | International Business Machines Corporation | CKD channel with predictive track table |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5668814A (en) * | 1995-03-20 | 1997-09-16 | Raychem Corporation | Dual DDS data multiplexer |
US5978390A (en) * | 1995-03-20 | 1999-11-02 | Raychem Corporation | Dual DDS data multiplexer |
US6282204B1 (en) | 1997-12-19 | 2001-08-28 | Terayon Communication Systems, Inc. | ISDN plus voice multiplexer system |
WO2001075581A1 (en) * | 2000-03-31 | 2001-10-11 | Intel Corporation | Using an access log for disk drive transactions |
US6684294B1 (en) * | 2000-03-31 | 2004-01-27 | Intel Corporation | Using an access log for disk drive transactions |
US6779058B2 (en) | 2001-07-13 | 2004-08-17 | International Business Machines Corporation | Method, system, and program for transferring data between storage devices |
EP1345113A3 (en) * | 2002-03-13 | 2008-02-06 | Hitachi, Ltd. | Management server |
EP1424628A3 (en) * | 2002-11-26 | 2008-08-27 | Microsoft Corporation | Improved reliability of diskless network-bootable computers using non-volatile memory cache |
US7454653B2 (en) | 2002-11-26 | 2008-11-18 | Microsoft Corporation | Reliability of diskless network-bootable computers using non-volatile memory cache |
EP3037961A1 (en) * | 2009-04-20 | 2016-06-29 | Intel Corporation | Booting an operating system of a system using a read ahead technique |
EP3037960A1 (en) * | 2009-04-20 | 2016-06-29 | Intel Corporation | Booting an operating system of a system using a read ahead technique |
US10073703B2 (en) | 2009-04-20 | 2018-09-11 | Intel Corporation | Booting an operating system of a system using a read ahead technique |
TWI588824B (en) * | 2015-12-11 | 2017-06-21 | 捷鼎國際股份有限公司 | Accelerated computer system and method for writing data into discrete pages |
Similar Documents
Publication | Title |
---|---|
US6948033B2 | Control method of the cache hierarchy |
US4875155A | Peripheral subsystem having read/write cache with record access |
US4779189A | Peripheral subsystem initialization method and apparatus |
US6988165B2 | System and method for intelligent write management of disk pages in cache checkpoint operations |
EP0077453B1 | Storage subsystems with arrangements for limiting data occupancy in caches thereof |
JP3409859B2 | Control method of control device |
EP0848321B1 | Method of data migration |
US4571674A | Peripheral storage system having multiple data transfer rates |
EP0130349B1 | A method for the replacement of blocks of information and its use in a data processing system |
EP0071719B1 | Data processing apparatus including a paging storage subsystem |
US6857047B2 | Memory compression for computer systems |
US8285924B1 | Cache control system |
US7437515B1 | Data structure for write pending |
US20030079087A1 | Cache memory control unit and method |
US7085907B2 | Dynamic reconfiguration of memory in a multi-cluster storage control unit |
Cohen et al. | Storage hierarchies |
US5694570A | Method and system of buffering data written to direct access storage devices in data processing systems |
US5293618A | Method for controlling access to a shared file and apparatus therefor |
US20050251625A1 | Method and system for data processing with recovery capability |
JPH05303528A | Write-back disk cache device |
WO1995001600A1 | Predictive disk cache system |
Menon et al. | The IBM 3990 disk cache |
JP4189342B2 | Storage apparatus, storage controller, and write-back cache control method |
EP0156179A2 | Method for protecting volatile primary store in a staged storage system |
US7536518B2 | Scalable disc array unit, and management method and management program for a scalable disc array unit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A1; Designated state(s): CN JP |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE |
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
122 | Ep: pct application non-entry in european phase |