WO2001046792A1 - Buffer management system for managing the transfer of data into and out of a buffer in a disc drive - Google Patents

Buffer management system for managing the transfer of data into and out of a buffer in a disc drive

Info

Publication number
WO2001046792A1
WO2001046792A1 PCT/US2000/029503
Authority
WO
WIPO (PCT)
Prior art keywords
buffer
disc
data
sector
segment
Prior art date
Application number
PCT/US2000/029503
Other languages
French (fr)
Inventor
Steven Scott Williams
Brett Alan Cook
Gregory Paul Moller
Jonathan Williams Haines
Original Assignee
Seagate Technology Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seagate Technology Llc filed Critical Seagate Technology Llc
Priority to GB0214398A priority Critical patent/GB2373612B/en
Priority to JP2001547242A priority patent/JP2003518313A/en
Priority to KR1020027008037A priority patent/KR20020064357A/en
Priority to DE10085321T priority patent/DE10085321T1/en
Publication of WO2001046792A1 publication Critical patent/WO2001046792A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 Improving I/O performance
    • G06F3/0613 Improving I/O performance in relation to throughput
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0674 Disk device

Definitions

  • This application relates generally to disc drive caching techniques and more particularly to a buffer management system for controlling the transfer of data between a host computer and a disc of a disc drive.
  • In a disc drive, data is stored on one or more discs coated with a magnetizable medium.
  • Data is written to the discs by an array of transducers, typically referred to as read/write heads, mounted to a radial actuator for movement of the heads relative to the discs.
  • The information is stored on a plurality of concentric circular tracks on the discs until such time as the data is read from the discs by the read/write heads.
  • Each of the concentric tracks is generally divided into a plurality of separately addressable data sectors.
  • the heads are used to transfer data between a desired track and an external environment, which includes, among many components, a host computer.
  • the head senses the data previously written on the disc track and transfers the information to the external environment.
  • During a write operation, data is written onto the disc track. Once the data is written to the disc, each sector holds a block of data, which is the smallest quantity that can be written to the disc during a single write operation.
  • Adjacent blocks are typically written to the disc during a single write operation referred to as a command.
  • Critical to both of these operations - reading and writing - is the accurate locating of the head over the center of the desired track.
  • the heads are positioned with respect to the disc surfaces by an actuator voice coil motor.
  • the voice coil motor is responsible for pivoting the actuator body about a pivot shaft, thus moving the heads across the disc surfaces.
  • the actuator thus allows the heads to move back and forth in an accurate fashion between an inner radius and an outer radius of a disc.
  • the actuator arm is driven by a control signal fed to the voice coil motor at the rear end of the actuator arm.
  • a servo control system is used to sense the position of the actuator and control the movement of the head above the disc using servo signals read from a disc surface in the disc drive.
  • the servo control system relies on servo information stored on the disc.
  • the signals from this information generally indicate the present position of the head with respect to the disc, i.e., the current track position.
  • the servo control system uses the sensed information to maintain head position or determine how to optimally move the head to a new position centered above a desired track.
  • the servo control system then delivers a control signal to the voice control motor to position the head over a desired new track or to maintain the position over the desired current track.
  • the transfer of files between the host computer and the disc is controlled in a multi-level setting characterized by a bi-level transfer scheme.
  • Track sectors are selected to contain the data sectors into which the file is divided. More specifically, and in a microscopic sense, cells along a track are magnetized to correspond to the bit structure of the file for the purposes of subsequent reading.
  • A disc drive typically includes a buffer to implement this bi-level transfer scheme. The purpose of the buffer is to accept the sectors of data during their transfer between the host computer and the disc and then transfer the data to the proper component - either the host computer or the disc.
  • the system microprocessor contains programming designed to locate the sectors on the track where the data is to be read from or written to.
  • the microprocessor is also programmed to control the transfer of the file at the sector, or macroscopic, level.
  • the transfer of data to and from the buffer is accomplished in character-by-character fashion.
  • the conventional method of accomplishing a transfer in a character-by-character environment is through the use of state machines, such as a buffer controller, a host interface controller, and a disc interface controller.
  • the transfer of files between the host and the discs is carried out under the overall control of the system microprocessor and a more precise control of the state machine controllers.
  • the host computer prompts the system microprocessor with generalized information which allows the microprocessor to define the sectors on the track to which the file is to be written.
  • the microprocessor then initiates the transfer by implementing operations which result in transfer at the sector level.
  • Such operations include commanding the servo control system to position the head over the selected track which is to receive the file and identifying a segment of the buffer which is to be used in the transfer.
  • The microprocessor commands the host interface controller to initiate the transfer of the data to the buffer segment.
  • The host interface controller, working in conjunction with the buffer controller and the host computer, provides precise control over the buffer in the transfer of characters of the file into sector blocks of the buffer segment.
  • The disc interface controller controls the transfer of the blocks of data from the buffer segment to the appropriate sectors on a disc pursuant to commands issued by the system microprocessor once the read/write heads are positioned over the track that is to receive the file.
  • A read/write circuit connected to the read/write heads then magnetizes the cells along the data tracks.
  • counters and interrupts are used in conjunction with the buffers to "watch over" the transfer of data to the buffer.
  • The counters, which are typically located within the host and disc interface controllers, count the number of transfers that occur between both the host and the buffer and the disc and the buffer. Based upon this count, an interrupt generator supplies an interrupt to the microprocessor immediately after a selected number of transfers to the buffer have occurred.
  • Another use of counters and interrupts is to determine whether the buffer is full when a transfer is to take place.
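The counter-and-interrupt scheme described above can be sketched behaviorally as follows. This is an illustrative model, not the patent's implementation; the type and function names are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model: an interface controller counts sector transfers
 * into the buffer and raises an interrupt to the microprocessor after
 * a selected number of transfers have occurred. */
typedef struct {
    uint32_t transfer_count;   /* transfers completed so far            */
    uint32_t interrupt_after;  /* selected number of transfers          */
    bool     interrupt_raised; /* latched until the microprocessor acks */
} transfer_counter;

void counter_init(transfer_counter *c, uint32_t interrupt_after)
{
    c->transfer_count = 0;
    c->interrupt_after = interrupt_after;
    c->interrupt_raised = false;
}

/* Called once per sector transferred between the host (or disc) and the
 * buffer; returns true while the interrupt is pending. */
bool counter_record_transfer(transfer_counter *c)
{
    c->transfer_count++;
    if (c->transfer_count >= c->interrupt_after) {
        c->interrupt_raised = true;
        c->transfer_count = 0; /* restart the count for the next batch */
    }
    return c->interrupt_raised;
}

void counter_ack(transfer_counter *c) { c->interrupt_raised = false; }
```

The latch models the fact that the interrupt stays asserted until the microprocessor services it, independently of further transfers.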
  • The most significant drawback of disc drives is a relatively long delay between the time that a read or write operation is initiated and the time that it is mechanically completed. This delay includes a seek latency, the time during which the read/write heads are moved to a desired track; a rotational latency, the time during which the disc rotates until a desired track sector is under the read/write head; and an additional delay corresponding to the time during which data blocks are read from or written to the disc.
  • a well-known means for reducing the number of disc accesses associated with almost any software application involves the use of a single common cache memory.
  • the cache memory typically a volatile random access memory (VRAM) device, stores blocks of data that are read from, or blocks that are to be written onto, the disc.
  • the cache is first checked to see if the requested data block is already available, thus avoiding the typical physical disc read. If the data block is not available, then the system instructs the read/write heads to perform a disc read, from which the data is transferred to the cache.
  • First, in order to read directly from the write cache, the system must keep track of which data blocks are being held in the cache instead of on the disc. Second, the write cache gives the read/write heads time to catch up with the other components of the computer. Once the cache is filled, the system microprocessor initiates write operations to transfer the data blocks in the cache to the disc drive. As further described in U.S. Patent 5,765,193, there are several methods for managing the "orderly" transfer of data from the write cache to the disc.
  • One common technique for managing the write cache is to supplement the disc write operation with instructions to search the write cache for adjacent data blocks. The existence of at least two adjacent data blocks defines a cluster, which the system will transfer in whole pursuant to the disc write operation.
  • a second known technique involves keeping a "least recently used” (LRU) queue, which comprises a list of data blocks in its order of use.
  • This technique uses a virtual block number, a number assigned to the data block in relation to its intended storage location on the disc, to rank the data blocks in relation to the order of most recent use by the system. Based on a theory that data blocks more recently used are more likely to be used again prior to less recently used data blocks, the system transfers the latter to the disc while retaining the former in the write cache for quick access by the central processing unit.
  • a third, and more outdated, technique is to simply remove blocks from the cache on a first-in-first-out basis.
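The LRU technique described above can be illustrated with a minimal sketch: cached blocks, identified by virtual block number, are ranked by a logical clock, and the least recently used block is the next candidate for transfer to the disc. This is a generic illustration, not code from the patent; the slot count, names, and array-based layout are assumptions:

```c
#include <stdint.h>

#define CACHE_SLOTS 8

typedef struct {
    uint32_t virtual_block; /* block number relative to its disc location */
    uint64_t last_used;     /* logical timestamp of most recent use       */
    int      occupied;
} cache_slot;

typedef struct {
    cache_slot slots[CACHE_SLOTS];
    uint64_t clock;
} lru_cache;

/* Record a use of a block (insert it, or refresh its timestamp).
 * For simplicity this sketch drops the block if the cache is full;
 * a real cache would evict the victim first. */
void lru_touch(lru_cache *c, uint32_t vbn)
{
    c->clock++;
    int free_slot = -1;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (c->slots[i].occupied && c->slots[i].virtual_block == vbn) {
            c->slots[i].last_used = c->clock;
            return;
        }
        if (!c->slots[i].occupied)
            free_slot = i;
    }
    if (free_slot >= 0)
        c->slots[free_slot] = (cache_slot){ vbn, c->clock, 1 };
}

/* Pick the least recently used block as the next one to write to disc;
 * returns -1 if the cache is empty. */
int lru_victim(const lru_cache *c)
{
    int victim = -1;
    uint64_t oldest = UINT64_MAX;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (c->slots[i].occupied && c->slots[i].last_used < oldest) {
            oldest = c->slots[i].last_used;
            victim = i;
        }
    }
    return victim < 0 ? -1 : (int)c->slots[victim].virtual_block;
}
```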
  • the present invention is a means for rearranging buffer sectors so that buffer data space can be used more efficiently.
  • the vectored buffer management system in accordance with the present invention, hereinafter called "VBM,” provides a means for rearranging the buffer space so that transfer of data into and out of the buffer can be done in an arbitrary or random fashion.
  • VBM allows data to be written to and from the buffer sectors in a non-sequential fashion.
  • VBM contains three components: an algorithmic component, a hardware component, and a software component.
  • The algorithmic component defines a model, referred to as the VBM table, of the data buffer as a logically organized singly linked list of sectors.
  • Each entry in the VBM table represents a single sector of the buffer.
  • Each sector of the buffer stores a data block having a logical block address (LBA) representing a sector on a disc in the disc drive.
  • the buffer sector where the hardware component is currently positioned is referred to as the index sector.
  • The value stored for each index sector is a "next" sector value, a physical buffer sector number denoting the "next" sector for the hardware component to reference upon conclusion of the transfer of data to or from the current index sector.
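As a rough model of the VBM table described above, each table entry can hold a "next" sector number, so that a buffer segment is simply a chain of entries that eventually points back to its starting sector. A minimal sketch, with the table size and names assumed for illustration:

```c
#include <stdint.h>

#define NUM_SECTORS 16

/* One entry per physical buffer sector: next[i] is the sector the
 * hardware component references after finishing index sector i. */
typedef struct {
    uint16_t next[NUM_SECTORS];
} vbm_table;

/* Link a run of sectors into a singly linked chain and close it into
 * a circular segment by pointing the last sector back at the first. */
void vbm_link_segment(vbm_table *t, const uint16_t *sectors, int count)
{
    for (int i = 0; i < count - 1; i++)
        t->next[sectors[i]] = sectors[i + 1];
    t->next[sectors[count - 1]] = sectors[0]; /* wrap */
}
```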
  • the hardware component is responsible for the physical transfer of data into and out of the buffer. During a disk write operation, the hardware component uses the "VBM" table as the basis for the management of the data transfer.
  • The hardware component includes a host address pointer (HAP) and a disc address pointer (DAP), which are coupled to a host next pointer (HNP) and a disc next pointer (DNP), respectively.
  • The function of the next pointers is to read the next sector value from the index sector in order to direct the address pointers to the next sector.
  • the hardware also provides an automated traversal function independent of the task that is managing the buffer allocation such that the function can still traverse a large number of table entries, even when the CPU is performing another task.
  • the software component maximizes the advantage of write caching by ensuring that the maximum possible number of buffer sectors are available to new host write commands at any given time, and by minimizing the number of disk operations needed to complete the requests from the host. More particularly, the software provides the mechanism for creating and maintaining the VBM table that is used by hardware to direct data transfers into and out of the data buffer. The software component is further responsible for administering control of the hardware component. By creating a list of free buffer sectors, the software component also provides for the ability of VBM to separate cached data from free space, merge buffer segments of the buffer table into single disc operations when beneficial, reuse any buffer sectors that hold redundant data, and return buffer sectors to the free list in any order.
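The free-list behavior described for the software component can be sketched as a singly linked list threaded through the same per-sector "next" entries: segments are carved from the free list for new write commands, and sectors can be returned to it in any order. A hedged illustration under assumed names and sizes, not the patent's code:

```c
#include <stdint.h>

#define NUM_SECTORS 16
#define NIL 0xFFFF

typedef struct {
    uint16_t next[NUM_SECTORS]; /* per-sector "next" links (the VBM table) */
    uint16_t free_head;         /* first sector of the free list           */
    int      free_count;
} sector_pool;

/* Initially every buffer sector is free, chained in order. */
void pool_init(sector_pool *p)
{
    for (uint16_t i = 0; i < NUM_SECTORS; i++)
        p->next[i] = (uint16_t)(i + 1);
    p->next[NUM_SECTORS - 1] = NIL;
    p->free_head = 0;
    p->free_count = NUM_SECTORS;
}

/* Take one sector from the free list; returns NIL if none remain. */
uint16_t pool_alloc(sector_pool *p)
{
    uint16_t s = p->free_head;
    if (s == NIL)
        return NIL;
    p->free_head = p->next[s];
    p->free_count--;
    return s;
}

/* Return a sector to the free list; order of return does not matter,
 * which lets redundant or committed sectors be reclaimed at any time. */
void pool_free(sector_pool *p, uint16_t s)
{
    p->next[s] = p->free_head;
    p->free_head = s;
    p->free_count++;
}
```

Because freed sectors are pushed onto the head of the list, separating cached data from free space reduces to bookkeeping over these links rather than moving any data.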
  • FIG. 1 is a plan view of a disc drive incorporating a preferred embodiment of the present invention showing the primary internal components.
  • FIG. 2 is a functional block diagram generally showing the main functional components used to control the disc drive of FIG. 1.
  • FIG. 3 is a plan view of the disc generally showing the main components on the surface of the disc.
  • FIG. 4 is a functional block diagram of the buffer management system in accordance with a preferred embodiment of the present invention.
  • FIG. 5 is a functional block diagram of the buffer management system more particularly focusing on the application and control modules of FIG. 4.
  • FIG. 6 is a schematic representation of the components of the application module of FIG. 4.
  • FIG. 7 is a flow diagram illustrating the operation of the control module in response to a write instruction by a host computer.
  • FIG. 8 is a flow diagram illustrating the arrangement of sectors of the free_list into one or more segments.
  • FIG. 9 is a flow diagram illustrating the mending process.
  • FIG. 10 is a flow diagram illustrating the operation of the control module when given an instruction to commit data from a buffer to a disc.
  • FIG. 11 is a flow diagram illustrating the merging process.
  • FIG. 12 is a flow diagram generally illustrating the operation of the present invention.
  • FIG. 13 is a schematic representation of the traverser component of the application module of FIG. 4.

Detailed Description
  • A disc drive 100 constructed in accordance with a preferred embodiment of the present invention is shown in FIG. 1.
  • the disc drive 100 includes a base 102 to which various components of the disc drive 100 are mounted.
  • a top cover 104 shown partially cut away, cooperates with the base 102 to form an internal, sealed environment for the disc drive 100 in a conventional manner.
  • the components include a spindle motor 106 which rotates one or more discs 108 at a constant high speed. Information is written to and read from tracks 160 (FIG. 3) on the discs 108 through the use of an actuator assembly 110, which rotates about a bearing shaft assembly 112 positioned adjacent to the discs 108.
  • the actuator assembly 110 includes a plurality of actuator arms 114 which extend towards the discs 108, with one or more flexures 116 extending from each of the actuator arms 114.
  • Mounted at the distal end of each of the flexures 116 is a head 118 which includes an air bearing slider enabling the head 118 to fly in close proximity above the corresponding surface of the associated disc 108.
  • The spindle motor 106 is typically de-energized when the disc drive 100 is not in use for extended periods of time.
  • the heads 118 are moved over park zones 120 near the inner diameter of the discs 108 when the drive motor is de-energized.
  • the heads 118 are secured over the park zones 120 through the use of an actuator latch arrangement, which prevents inadvertent rotation of the actuator assembly 110 when the heads 118 are parked.
  • the radial position of the heads 118 is controlled through the use of a voice coil motor (VCM) 124, which typically includes a coil 126 attached to the actuator assembly 110, as well as one or more permanent magnets 128 which establish a magnetic field in which the coil 126 is immersed.
  • the controlled application of current to the coil 126 causes magnetic interaction between the permanent magnets 128 and the coil 126 so that the coil 126 moves in accordance with the well-known Lorentz relationship.
  • the actuator assembly 110 pivots about the bearing shaft assembly 112 and the heads 118 are caused to move across the surfaces of the discs 108.
  • A flex assembly 130 provides the requisite electrical connection paths for the actuator assembly 110 while allowing pivotal movement of the actuator assembly 110 during operation.
  • The flex assembly includes a printed circuit board 132 to which head wires (not shown) are connected, the head wires being routed along the actuator arms 114 and the flexures 116 to the heads 118.
  • The printed circuit board 132 typically includes circuitry for controlling the write currents applied to the heads 118 during a write operation and for amplifying read signals generated by the heads 118 during a read operation.
  • The flex assembly terminates at a flex bracket 134 for communication through the base deck 102 to a disc drive printed circuit board (not shown) mounted to the bottom side of the disc drive 100.
  • Referring now to FIG. 2, shown therein is a functional block diagram of the disc drive 100 of FIG. 1, generally showing the main functional circuits which are resident on the disc drive printed circuit board and used to control the operation of the disc drive 100.
  • The disc drive 100 is shown in FIG. 2 to be operably connected to a host computer 140 in which the disc drive 100 is mounted in a conventional manner.
  • Control communication paths are provided between the host computer 140 and a disc drive microprocessor 142, the microprocessor 142 generally providing top level communication and control for the disc drive 100 in conjunction with programming for the microprocessor 142 stored in microprocessor memory (MEM) 143.
  • The MEM 143 can include random access memory (RAM), read only memory (ROM), and other sources of resident memory for the microprocessor 142.
  • The discs 108 are rotated at a constant high speed by a spindle control circuit 148.
  • The radial position of the heads 118 is controlled through the application of current to the coil 126 of the actuator assembly 110; a servo control circuit 150 provides such control.
  • Data is transferred between the host computer 140 and the disc drive 100 by way of a disc drive interface 144, which includes a buffer 145 to facilitate high speed data transfer between the host computer 140 and the disc drive 100.
  • Data to be written to the disc drive 100 are thus passed from the host computer 140 to the buffer 145 and then to a read/write channel 146, which encodes and serializes the data and provides the requisite write current signals to the heads 118.
  • To retrieve data that has been previously stored, read signals are generated by the heads 118 and provided to the read/write channel 146.
  • The interface 144 performs read signal decoding, error detection, and error correction operations.
  • The interface 144 then outputs the retrieved data to the buffer 145 for subsequent transfer to the host computer 140.
  • Such operations of the disc drive 100 are well known in the art and are discussed, for example, in U.S. Pat. No. 5,276,662 issued Jan. 4, 1994, to Shaver et al.
  • Referring now to FIG. 3, shown therein is a plan view of the disc 108, generally showing the main components on the surface of the disc 108.
  • the discs 108 are circumferentially divided into a plurality of concentric circular tracks 160.
  • the number of tracks 160 per disc 108 will vary with each particular manufactured disc 108.
  • An index (INDEX) 162 marking one complete revolution of each track 160 is typically indicated by a mark that extends along the radius of the disc 108.
  • the disc 108 is radially divided into a plurality of servo segments 164.
  • the servo segments 164 begin at the center 166 of the disc 108 and terminate at the outer edge 168 of the disc 108.
  • the number of servo segments 164 per disc 108 varies with each particular manufactured disc 108.
  • Each track 160 is composed of spaced servo segments 164 with data sectors between the servo segments 164.
  • FIG. 12 illustrates, in operations 399 through 417, a preferred embodiment of the overall operation 400 of the buffer management system for the buffer 145, in accordance with the present invention.
  • a buffer table 206 (FIG. 4) is maintained by representing the buffer 145 as divided into a plurality of equally sized buffer sectors 208 (FIG. 4).
  • The buffer sectors 208 are arranged into circularly linked lists, referred to as buffer segments 210 (FIG. 4).
  • a particular buffer segment 210 is arbitrarily selected by the control module 202 to store the data file.
  • Data which is actually transferred from the host 140 (FIG. 2) is stored in the buffer 145 for an indefinite period of time.
  • the data file is represented as stored in the buffer segment until, in operation 412, the data is removed from the buffer 145 and relocated to the disc 108, thus effectuating the transfer from the host 140 to the disc 108.
  • A determination is then made as to whether either "look-up" - the transfer or the removal - was administered prematurely. If not, then the process is finished. If the look-up was premature and was a transfer, operation 416 re-initiates the transfer in operation 408; if the look-up was a removal and replacement, operation 416 re-initiates the removal in operation 412.
  • the buffer management system 200 manages the transfer of data between the host computer 140 and the disc 108 through the implementation of three main components: a control module 202, an application module 204, and a buffer table 206.
  • the buffer table 206 is divided into n equal buffer sectors 208, each having a capacity to store 512 bytes. Each buffer sector 208 is further incorporated into a circularly linked list, or buffer segment 210.
  • Table 1 can be broken down into 2 independent buffer segments 210, as shown in Tables 2 and 3:
  • each buffer sector 208 contains two instructions important to the operation of the buffer management system 200: an offset value 214 ("offset”) and a next sector instruction value 212 (“next sector instruction”). Both instructions are crucial to the operation of the buffer management system 200 in that, taken as a whole, the offset 214 and the next sector instruction 212 define the circular aspect of the buffer segment 210.
  • The values in the left column are referred to as the offset values 214, and the values in the right column are referred to as the next sector instruction values 212.
  • When the application module 204 is positioned at the current buffer sector 208 "4," the application module 204 reads the next sector instruction 212 and moves to the next buffer sector 208 "7." Once positioned at current buffer sector 208 "7," the application module 204 reads the next sector instruction 212 and moves to the next buffer sector 208 "5." This procedure is repeated until the application module 204 is positioned back at the buffer sector 208 having offset 214 value "4," thus enclosing the circularly linked segment 210. According to the preferred embodiment, the maintenance of the buffer table 206 and the operation of the application module 204 on the buffer table 206 are both administered by the control module 202. Greater detail of the application module 204 and the control module 202 is discussed below.
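The walk described above (sector 4 to 7 to 5, then back to 4) amounts to following next sector instructions until the starting offset reappears. A minimal traversal sketch, with the table size and names assumed for illustration:

```c
#include <stdint.h>

#define NUM_SECTORS 16

/* Count the sectors in the circular segment containing `start`,
 * visiting each one the way the address/next pointer pair would:
 * process the current sector, then follow its next sector instruction,
 * stopping when the starting offset comes around again. */
int segment_length(const uint16_t next[NUM_SECTORS], uint16_t start)
{
    int count = 0;
    uint16_t cur = start;
    do {
        count++;               /* transfer data for sector `cur` here */
        cur = next[cur];       /* next sector instruction             */
    } while (cur != start && count <= NUM_SECTORS);
    return count;
}
```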
  • the application module 204 is the hardware component of the buffer management system 200.
  • the application module 204 is divided into two main sub-modules: a host address pointer (HAP) 220 coupled to a host next pointer (HNP) 222 and a disc address pointer (DAP) 224 coupled to a disc next pointer (DNP) 226.
  • the application module 204 walks through the buffer table 206 in segment-by-segment fashion.
  • the host next pointer 222 and the disc next pointer 226 effectively control the positioning of the next buffer sector 208 position of the host address pointer 220 and the disc address pointer 224, respectively.
  • the operation of the application module 204 is controlled through the control module 202.
  • the host address pointer 220 is responsible for committing the data blocks to the buffer 145. In the preferred embodiment, committing data to the buffer 145 is accomplished by reference to the buffer table 206. In its operation, the host next pointer 222 communicates to the host address pointer 220 the value of the "next" sector 208 of the particular buffer segment 210 upon which the host address pointer 220 is operating. The host next pointer 222 receives the offset value 214 of the "next" sector 208 in the buffer segment 210 from the next sector instruction 212. The host address pointer 220 walks through the particular buffer segment 210 in response to the next sector instruction 212 read by the host next pointer 222.
  • After the host address pointer 220 commits data to the current buffer sector 208, it immediately jumps to the next buffer sector 208 as communicated by the host next pointer 222. Ultimately, the next sector instruction value 212 of the last buffer sector 208 will equal the offset value 214 of the initial buffer sector 208 referenced by the host address pointer 220 in the buffer segment 210. This is the point at which the circular segment wraps. The host address pointer 220 continues to walk through the segment 210, wrapping as needed, until all the data for the current command has been transferred.
  • When data is to be transferred from the buffer 145 to the disc 108, the application module 204 generally operates in the same fashion as when data is transferred from the host 140 to the buffer 145.
  • The disc address pointer 224 is responsible for committing the data from the buffer 145 to the disc 108. In the preferred embodiment, committing data to the disc 108 is accomplished by reference to the buffer table 206. In its operation, the disc next pointer 226 communicates to the disc address pointer 224 the position of the "next" buffer sector 208 of the particular buffer segment 210 upon which the disc address pointer 224 is operating.
  • the disc next pointer 226 receives the offset value 214 of the "next" buffer sector 208 in the buffer segment 210 from the next sector instruction 212.
  • the disc address pointer 224 walks through the particular buffer segment 210 in response to the next sector instruction 212 read by the disc next pointer 226. After the disc address pointer 224 commits data from the buffer sector 208 to the disc 108, it immediately jumps to the next buffer sector 208 as dictated by the disc next pointer 226.
  • the next sector instruction value 212 of the last buffer sector 208 will equal the offset value 214 of the initial buffer sector 208 referenced by the disc address pointer 224 in the buffer segment 210. This is the point at which the circular segment wraps.
  • The disc address pointer 224 continues to walk through the segment 210, wrapping as needed, until all the data for the current command has been transferred.
  • Under control of the disc address pointer 224, data is transferred from the buffer 145 in sector-by-sector fashion. Once the data has been extracted from the particular buffer sector 208, it is transferred to the disc 108 according to the particular servo segment 164 and track 160 upon which the file is to be located.
  • Referring to FIG. 6, a schematic representation of the application module 204 is shown operating on the buffer 145.
  • The disc current address counter 244 and the disc current page register 246 form the disc address pointer 224.
  • The host current address counter 252 and the host current page register 250 form the host address pointer 220.
  • the disc next page register 240 and the host next page register 242 serve as the disc next pointer 226 and the host next pointer 222, respectively.
  • the remap base address register 248 sets the disc remap address 247 and the host remap address 249 at the base of the VBM table 206 in the buffer 145. Accordingly, in a preferred embodiment, the output of the remap base address register 248 uses either the disc remap address 247 or the host remap address 249 to address the buffer 145, depending upon which acknowledgment control line (241 or 243) is enabled. If the disc interface controller requests data to be read from the buffer 145, then the disc remap acknowledgment line 241 is enabled and the disc remap address 247 is used to access the buffer 145.
  • if the host interface controller requests data to be written to the buffer 145, then the host remap acknowledgment line 243 is enabled and the host remap address 249 is used to address the buffer 145.
  • the disc next page register 240 loads the disc current page register 246 with the next sector instruction value 212 of the next sector 208 that is to be read from the buffer 145. Data is read from the current sector 208 so long as the disc data address line 245 originating from the output of the disc current address counter 244 is enabled.
  • the output of the disc current address counter 244 is tied to a sector comparator 256.
  • the value of the disc current address counter 244 is input B to the sector comparator 256.
  • Input A of the comparator 256 is a constant value of the buffer sector 208 size, typically 512 bytes.
  • a signal is transmitted from the output of the comparator 256 to the disc current page register 246 requesting a switch to the next buffer sector 208, which has a value loaded in the disc current page register 246 by the disc next page register 240.
  • the next buffer sector 208 value is defined by the next sector instruction value 212 loaded into the disc next page register 240 from the buffer 145.
  • the host next page register 242 loads the host current page register 250 with the next sector instruction value 212 of the next sector 208 that is to be written to the buffer 145. Data is written to the current sector 208 so long as the host data address line 251 originating from the output of the host current address counter 252 is enabled.
  • the output of the host current address counter 252 is tied to a sector comparator 258.
  • the value of the host current address counter 252 is input A to the sector comparator 258.
  • Input B of the comparator 258 is a constant value of the buffer sector 208 size, typically 512 bytes.
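The counter-and-comparator behavior described for both the disc and host sides can be modeled in software roughly as follows. This is an illustrative sketch of the hardware behavior under assumed names, not circuitry from the patent.

```python
SECTOR_SIZE = 512  # constant input to the sector comparators

def stream_addresses(next_sector, start_page, nbytes):
    """Yield (page, offset) buffer addresses one byte at a time,
    switching to the next buffer sector when the address counter
    reaches the sector size, as the comparator signals a switch."""
    page, counter = start_page, 0
    for _ in range(nbytes):
        yield page, counter
        counter += 1
        if counter == SECTOR_SIZE:    # comparator output fires
            page = next_sector[page]  # next page register reloads
            counter = 0

# a 3-sector circular segment 3 -> 9 -> 1 -> back to 3
addrs = list(stream_addresses({3: 9, 9: 1, 1: 3}, 3, 1028))
```

After 512 bytes the sketch hops from page 3 to page 9, exactly as the comparator requests the page switch while the address counter resets.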
  • the traversal component 260 includes a traverse state machine 262, a traverse down counter 264, and a next traverse address register 266.
  • the traverse state machine 262 is loaded with the value 214 of the offset base address.
  • the offset value 214 is latched into the next traverse address register 266 as the next sector 208 position for the traverse component 260.
  • the traversal component 260 traverses the application module 204 over a buffer segment 210 once the traverse state machine 262 initializes the traverse down counter 264.
  • the traversal component 260 receives the traverse address from the next traverse address register 266 and continually executes a traversal of the application module 204 until the traverse down counter 264 completes counting.
  • the count for the traverse down counter 264 is loaded by the microprocessor 142, thereby allowing the application module 204 to iteratively hop a particular buffer segment 210 a predetermined number of times.
  • the next sector instruction value 212 dictates the next sector 208 to traverse and is communicated to the traversal component 260 through the next traverse address register 266.
  • the next sector instruction value 212 is the only control parameter controlling the subsequent sector 208 of traversal until the traverse down counter 264 completes counting, at which time, the traversal component 260 is initialized to await the next value 214 of the offset base address.
  • maintenance of the buffer table 206 and operation of the application module 204 are both administered by the control module 202.
  • the control module 202 is the software component of the buffer management system 200.
  • the control module 202 maximizes the effectiveness of write caching by ensuring that the maximum possible number of sectors 208 of the buffer 145 are available to the host 140 for write commands at any given time.
  • the control module 202 provides a mechanism for maintaining the buffer table 206 that is used by the application module 204 to direct data transfers into and out of the data buffer 145, via the host 220 and disc 224 address pointers.
  • a flow diagram generally described in operations 299 through 317, illustrates the operation of the control module 202 when employed to transfer data from the host 140 to the buffer 145.
  • the control module 202 organizes the sectors 208 of the buffer 145 into a free_list in order to maximize the possible number of sectors 208 available to the host 140 for write commands at any given time.
  • the free_list is initialized to be a circularly linked list of all the free sectors 208 in the write buffer 145.
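Such an initialization can be sketched as follows, assuming an n-sector buffer whose next sector instructions chain every free sector into one circular list (the function name and list encoding are illustrative, not the patent's firmware):

```python
def init_free_list(n):
    """Build the initial free_list: each buffer sector's next sector
    instruction points to the following sector, and the last sector
    wraps back to the first, forming one circularly linked segment."""
    return [(i + 1) % n for i in range(n)]

free_list = init_free_list(8)  # sector 7 wraps back to sector 0
```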
  • Table 4 is an illustration of the free list, as initialized:
  • once the control module 202 completes organization of the sectors 208 into the free_list, the physical transfer of the data from the host 140 to the buffer 145 is initiated. It is during this process that the control module 202 divides the buffer table 206 into one or more buffer segments 210. In operation 308, the control module 202 arbitrarily selects a buffer sector 208 - in the preferred embodiment, referred to as the first index sector 208 - in the free_list as "free_space_start," which is the buffer sector 208 that the control module 202 selects as the beginning sector 208 for data transfer to the buffer 145.
  • the control module 202 controls the application module 204 such that the host address pointer 220 is positioned to the first index sector 208 labeled free_space_start.
  • the control module 202 receives the incoming data to be cached.
  • the control module 202 directs the host address pointer 220 to transfer the first block of the file to the sector identified as free_space_start.
  • the control module 202 determines whether transfer of the file is complete. If all data blocks have been transferred, the control module jumps to operation 316.
  • the control module 202 positions the host address pointer 220 to the next "index" sector 208 (each current position of the host address pointer 220 is preferably referred to as the "index" sector) pursuant to the next sector instruction 212 of the previous index sector 208.
  • the host address pointer 220 transfers the data block as the host next pointer 222 references the next index sector 208.
  • once the control module 202 determines that the transfer of the file is complete, it assigns the label "free_space_start" to the sector 208 in the free_list that appears directly after the last buffer sector 208 in the current buffer segment 210.
  • the determination of the size of the buffer segment 210 and where in the buffer table 206 the division of the free_list into buffer segments 210 occurs is described in more detail by FIG. 8 and Table 5.
  • a flow diagram generally described in operations 329 through 339, illustrates the arrangement of the sectors 208 of the free_list into one or more segments 210.
  • the control module 202 effectively arranges the free_list into one or more circularly linked list segments 210 once a command of length "L" buffer sectors 208 is sent by the host 140 to the buffer management system 200.
  • operation 330 which is the same as operation 310 in FIG.
  • the control module 202 directs the host address pointer 220 to transfer the first block of the file to the first index sector 208, which, in the preferred embodiment, is labeled "free_space_start."
  • Operation 332 locates the last buffer sector 208 of data for this particular command, i.e., the last sector 208 of that particular buffer segment 210 by traversing with the traversal component 260 the singly linked free_list starting at free_space_start and going "L - 1" sectors down the buffer table 206.
  • Operation 334 designates the value of the next sector instruction 212 of the buffer sector 208 corresponding to "L - 1" steps down the table as "free_space_start." Since this buffer sector 208 now incorporates the next sector instruction 212 "free_space_start," the circularly linked list is closed to illustrate that the particular buffer segment 210 is an independent list from the free_list. Operation 336 traverses the remaining buffer sectors 208 in the free_list in order to illustrate the remaining circularly linked list. In operation 338, designation of the next sector instruction 212 of the last sector 208 in the free_list as "free_space_start" encloses the remaining sectors 208 of the buffer table 206 in an independent circularly linked list.
  • the end result is an illustration of how the free_list is divided into one or more buffer segments 210 once the host 140 transfers data to the buffer 145.
  • sectors 0 through 2 represent the buffer segment 210 which the control module 202 is currently instructing the host address pointer 220 to fill with data
  • sectors 3 through "n - 1" represent the remaining buffer sectors 208 in the free_list.
  • sector "3" is the next "free_space_start” position upon which the control module 202 will direct the host address pointer 220 the next time that the host 140 implements a transfer of data.
  • the free_list of the buffer table 206 is divided, in this example, into two separate buffer segments 210, one of length 3 and one of length "n - 3.” Similar manipulations can be done in order to either further subdivide various buffer segments 210 or to merge segments 210 back into single loops.
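The division described in operations 332 through 338 can be sketched as follows, assuming a command of "L" sectors with L smaller than the free_list; the function name and list encoding are illustrative assumptions.

```python
def carve_segment(next_sector, free_space_start, L):
    """Split a circular free_list into an L-sector command segment and
    a remainder segment, each closed into its own circular list.
    Assumes L is smaller than the number of sectors in the free_list."""
    # traverse L - 1 sectors down from free_space_start (operation 332)
    last = free_space_start
    for _ in range(L - 1):
        last = next_sector[last]
    new_free_space_start = next_sector[last]
    # close the command segment back onto its first sector (operation 334)
    next_sector[last] = free_space_start
    # walk the remaining free sectors and close them too (operations 336-338)
    tail = new_free_space_start
    while next_sector[tail] != free_space_start:
        tail = next_sector[tail]
    next_sector[tail] = new_free_space_start
    return new_free_space_start

nxt = [1, 2, 3, 4, 5, 6, 7, 0]  # circular free_list of 8 sectors
fss = carve_segment(nxt, 0, 3)  # segment 0:1:2 and remainder 3..7
```

After the call, sectors 0 through 2 form one circular segment of length 3 and sectors 3 through 7 form the remaining circular free_list of length n - 3, matching the two-segment division in the text.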
  • in FIG. 9, a flow diagram is shown illustrating a mending process programmed in the control module 202 that is initiated any time a write command is issued.
  • although the mending process of FIG. 9 is actually a separate process from that defined in FIG. 8, the two processes are concurrent with each other, and the mending process ultimately concludes with operation 300.
  • the mending process is initiated, in operation 320, when the host 140 sends a new write command that is received by the control module 202.
  • in operation 322, once a new write command is received by the control module 202, the control module 202 first checks to determine whether any of the logical block addresses (LBA's) of the data from the new command overlap any of the LBA's of data cached into the buffer sectors 208 from previous commands. If an overlap is not found, then operation 300 is initiated and the data is transferred to the buffer 145 in accordance with the flow diagram in FIG. 8.
  • if an overlap is found, the control module, in operation 324, takes the buffer sectors 208 corresponding to the old write command and adds them to the end of the free_list. This is accomplished by replacing the next sector instruction value 212 of the buffer sector 208 located at the end of the free_list with the old write sectors 208. The data from these buffer sectors 208 is considered redundant.
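The splice described above, appending stale sectors to the end of the free_list, can be sketched as rewriting two next sector instructions. The endpoint arguments below are assumptions for illustration, since the patent tracks them through the buffer table.

```python
def mend_segment(next_sector, free_tail, free_head, old_head, old_tail):
    """Append a stale (redundant) circular segment to the end of the
    circular free_list by rewriting two next sector instructions."""
    next_sector[free_tail] = old_head  # free_list tail -> old write sectors
    next_sector[old_tail] = free_head  # old segment now closes the circle
    return next_sector

# free_list 0 -> 1 -> 0 and a stale segment 2 -> 3 -> 2
nxt = {0: 1, 1: 0, 2: 3, 3: 2}
mend_segment(nxt, 1, 0, 2, 3)  # free_list becomes 0 -> 1 -> 2 -> 3 -> 0
```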
  • a flow diagram generally described in operations 349 through 359, illustrates the operation of the control module 202 when employed to transfer data from the buffer 145 to the disc 108.
  • the buffer sector 208 storing the first data block of the file to be transferred is designated the "index" sector 208.
  • the index sector 208 is added to the end of the free_list.
  • the control module 202 directs the disc address pointer 224 to transfer the data block in the index sector 208 to the disc 108.
  • the control module 202 determines whether all the data blocks within the file to be transferred have been committed to disc. If not, then, in operation 358, the control module 202 positions the disc address pointer 224 on the next "index" sector 208, as instructed by the next sector instruction 212 of the previous index sector 208. After positioning the disc address pointer 224 on the next index sector 208, the control module 202 jumps to operation 352 and repeats the procedure from there. Once the disc address pointer 224 has completely transferred the particular buffer segment 210 storing the file, as determined in operation 356, the task is finished and the control module 202 initializes the disc address pointer 224 to await instructions for subsequent transfers.
  • the process of adding buffer sectors 208 that are to be written to the disc 108 to the end of the free_list is greatly advantageous if the control module 202 receives a new write command that is sequential in LBA space, and thus immediately follows, on the disc 108, the buffer sectors 208 being committed to the disc 108.
  • This advantage is realized in that no manipulation of the buffer table 206 is necessary for the new write command because the control module 202 will write the new data to the disc 108 during the same operation as it is committing the old data to the disc 108.
  • Table 7 illustrates this process:
  • the data relating to the new write command would have a first index sector 208 of "3," since sector 3 corresponds to the free_space_start. If the new write command is larger (more sectors) than the segment 210 - in this case sectors 3, 4, 5, and 6 - the control module 202 uses the free_list as a circular loop. Since the buffer sectors 208 which had stored the data being committed to the disc 108 have already been added to the end of the free_list, such sectors 208 become part of the circular free_list.
  • the control module 202 directs the disc address pointer 224 to follow the host address pointer 220 (initiated at the first index sector 208), which, in turn, is following the disc address pointer 224 as data from buffer sectors 208 are committed to the disc 108.
  • the control module 202 implements a real time circular loop using the disc 224 and host address 220 pointers.
  • One more advantage of the buffer management system 200 is the ability of the control module 202 to merge two or more cached commands into one buffer segment 210. It is advantageous to merge one buffer segment 210 into another because such a process allows the data to be written from the buffer 145 to the disc 108 in one operation. If two or more buffer segments 210 contain data files that are to be written onto adjacent servo segments 164 on the disc 108, the control module 202 will "merge" such buffer segments 210 of the buffer table 206 into one buffer segment 210.
  • a flow diagram generally described in operations 379 through 385, illustrates the merging process of the control module 202.
  • in operation 380, cached writes contained in buffer segments 210 are compared to determine whether the data files are to be committed to the disc 108 on adjacent, or sequential, servo segments 164.
  • in operation 382, if such data is not to be committed to adjacent servo segments 164, then operation 381 checks to see if all buffer segments 210 in the table 206 have been compared to each other. If not, then the comparison routine of operation 380 is repeated. If all buffer segments 210 have been compared, the process has finished without merging any buffer segments 210.
  • Table 8 illustrates the buffer table 206 prior to the merge.
  • Table 8 contains 3 separate buffer segments 210: segment 0:1, segment 4:5, and segment 2:3:6.
  • the data blocks contained in segment 0:1 are to be placed on the disc 108 in adjacent servo segments 164. Therefore, since segments 0:1 and 4:5 are sequential on the disc 108, the control module 202 will merge the two buffer segments 210, resulting in Table 9.
  • Table 9 only contains two buffer segments 210: segment 0:1:4:5, and segment 2:3:6.
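The merge from Table 8 to Table 9 amounts to rewriting two next sector instructions; a minimal sketch under assumed names:

```python
def merge_segments(next_sector, a_first, a_last, b_first, b_last):
    """Merge two circular buffer segments destined for adjacent servo
    segments into one aggregate circular segment."""
    next_sector[a_last] = b_first  # end of first segment -> start of second
    next_sector[b_last] = a_first  # close the aggregate loop
    return next_sector

# Table 8: segments 0:1, 4:5, and 2:3:6
nxt = {0: 1, 1: 0, 4: 5, 5: 4, 2: 3, 3: 6, 6: 2}
merge_segments(nxt, 0, 1, 4, 5)  # Table 9: segment 0:1:4:5
```

The untouched segment 2:3:6 is unaffected, and the aggregate segment 0:1:4:5 can now be committed to the disc in one write process.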
  • the present invention may be viewed as a buffer management system (such as 200) for managing a transfer of data from a host computer (such as 140) to a recordable disc (such as 108) in a disc drive (such as 100).
  • the disc drive (such as 100) operates on the host computer (such as 140).
  • the recordable disc (such as 108) is radially divided into one or more equal servo segments (such as 164) and circumferentially divided into one or more rotational tracks (such as 160).
  • the tracks (such as 160) are divided by the servo segments (such as 164).
  • the disc drive (such as 100) is of a type wherein data transfers between the host computer (such as 140) and the disc (such as 108) are effectuated by transferring a selected number of data blocks constituting a file from either the host computer (such as 140) or the disc (such as 108) to a buffer (such as 145) for temporary storage of the file. After temporary storage, the file is transferred to either the host computer (such as 140) or the disc (such as 108).
  • management of the buffer is accomplished by the buffer management system (such as 200), which includes a buffer table (such as 206), an application module (such as 204), and a control module (such as 202).
  • the buffer management system (such as 200) includes a buffer table (such as 206), which is divided into one or more equally sized buffer sectors (such as 208). Each one of the buffer sectors (such as 208) is a sector in a buffer segment (such as 210).
  • the buffer segment (such as 210) is defined as a circular linked list within the buffer table (such as 206).
  • the buffer management system (such as 200) also includes an application module (such as 204) to walk through each particular buffer segment (such as 210).
  • the buffer table (such as 206) is divided into at least one buffer segment (such as 210). However, the buffer table (such as 206) may only be divided into as many buffer segments (such as 210) as the buffer table (such as 206) has buffer sectors (such as 208).
  • the buffer management system (such as 200) further includes a control module (such as 202), which creates and maintains the buffer table (such as 206).
  • the control module (such as 202) is coupled with the application module (such as 204) in order to administer the walking through the buffer table (such as 206) by the application module (such as 204).
  • the application module (such as 204) walks through the buffer table (such as 206) in response to a next sector instruction (such as 212) in an index sector (such as 208) upon which the application module (such as 204) is currently positioned.
  • the control module includes a look-up routine (such as in operation 414) that re-initiates the application module (such as 204) to perform a subsequent look-up of the data.
  • the application module (such as 204) preferably includes a host address pointer (such as 220) for committing the data to the buffer (such as 145).
  • the host address pointer (such as 220) is preferably coupled to a host next pointer (such as 222), which communicates the next sector instruction (such as 212) of the index sector (such as 208) to the host address pointer (such as 220).
  • the next sector instruction (such as 212) controls positioning of the host address pointer (such as 220) within the particular buffer segment (such as 210).
  • the application module also preferably includes a disc address pointer (such as 224) for committing the data from the buffer (such as 145) to the disc (such as 108).
  • the disc address pointer (such as 224) is preferably coupled to a disc next pointer (such as 226), which communicates the next sector instruction (such as 212) of the index sector (such as 208) to the disc address pointer (such as 224).
  • the next sector instruction (such as 212) controls the positioning of the disc address pointer (such as 224) within the particular buffer segment (such as 210).
  • the buffer table (such as 206) includes a free list of sectors readily available to the system (such as 200) for temporary storage.
  • when an existing data block is to be written to the disc (such as 108) from an index sector (such as 208), the control module (such as 202) mends the index sector (such as 208) into the free list of sectors so that the index sector (such as 208) will become available to the system (such as 200) as it is committed to the disc (such as 108).
  • the control module (such as 202) arbitrarily selects the particular buffer segment (such as 210) from the free list and arbitrarily positions the host address pointer (such as 220) over a first index sector (such as 208) of the particular buffer segment (such as 210) when a write instruction is issued from the host (such as 140).
  • the free list preferably includes any buffer segments (such as 210) that contained data either written to the disc (such as 108) or considered redundant.
  • the control module (such as 202) also includes a merging routine (such as in operations 380 through 384).
  • the merging routine (such as in operations 380 through 384) combines one or more buffer segments (such as 210) that are to be written onto adjacent servo segments (such as 164).
  • the merging routine (such as in operations 380 through 384) transforms the plurality of buffer segments (such as 210) into one aggregate buffer segment (such as 210) that can be committed to the disc (such as 108) in one write process.
  • the present invention may also be viewed as a method (such as in operation 400) for managing a transfer of data from a host computer (such as 140) to a recordable disc (such as 108) in a disc drive (such as 100).
  • the method (such as in operation 400) includes maintaining (such as in operation 402) a buffer table (such as 206), which is divided into one or more equal-sized buffer sectors (such as 208), and arranging (such as in operation 404) each of the buffer sectors (such as 208) into at least one buffer segment (such as 210).
  • the buffer table (such as 206), which is a singly linked list, is divided into one or more buffer segments (such as 210), which are circularly linked lists.
  • the method (such as in operation 400) further includes selecting (such as in operation 406) a particular buffer segment (such as 210) to temporarily store the file to be transferred from the host computer (such as 140) to the buffer (such as 145), as well as transferring (such as in operation 408) the file from the host computer (such as 140) to the buffer (such as 145) by walking through the particular buffer segment (such as 210) in response to a next sector instruction (such as 212) contained in an index sector (such as 208).
  • the index sector is the buffer sector (such as 208) from which the data is being transferred.
  • the method (such as in operation 400) further includes storing (such as in operation 410) the file in the particular buffer segment (such as 210) for a temporary period of time and, eventually, removing (such as in operation 412) the file from the buffer (such as 145) and relocating the file to the recordable disc (such as 108) by walking through the particular buffer segment (such as 210) in response to the next sector instruction (such as 212) contained in the index sector (such as 208).
  • the method (such as in operation 400) in maintaining the buffer table (such as in operation 402) includes the step of creating (such as in operation 300) a free list of sectors that are readily available to accept a new data block.
  • the method (such as in operation 400) in the selecting step (such as in operation 406) preferably includes arbitrarily selecting, once the command to write the file to the buffer (such as 145) is sent by the host computer (such as 140), a first index sector (such as 208) as a beginning to the particular buffer segment (such as 210) that will be used in the storing the data (such as operation 410).
  • the method (such as in operation 400) in the arranging step (such as operation 404) preferably includes generating a buffer segment (such as 210) from the free list of sectors by beginning at the arbitrarily selected first index sector (such as 208) and traversing one fewer buffer sector than the selected number of data blocks in the file.
  • the method (such as in operation 400) accepts (such as in operation 302) an incoming data file to be cached and places the incoming data file into the buffer segment (such as 210) at the index sector (such as 208).
  • the method may arrange (such as in operation 404) the buffer table (such as 206) by organizing the sectors (such as 208) into one or more buffer segments (such as 210).
  • the buffer table (such as 206) may only be divided into a maximum number of buffer segments (such as 210) equal to the number of sectors (such as 208) in the buffer table (such as 206).
  • the method (such as in operation 400) may also mend (such as in operations 354-358) each sector (such as 208) that is to be committed to the disc (such as 108) into the free list of sectors so that each sector (such as 208) is available to the system as it is committed to the disc (such as 108).
  • the method (such as in operation 400) compares the data block to be stored in the buffer (such as 145) to each existing data block in all of the buffer segments (such as 210). If the comparison (such as in operation 322) reveals a redundancy between the existing data block and the data block to be stored in the buffer (such as 145), the buffer segment (such as 210), which stored the existing data block, is mended (such as in operations 354-358) to the free list.
  • the method (such as in operations 354-358) also preferably merges (such as in operation 384) together one or more buffer segments (such as 210) containing data files that are to be written to adjacent servo segments (such as 164), thus allowing the write operation to be executed only one time for those segments (such as 164).
  • the method (such as in operation 400) includes transferring (such as in operation 408) data to the buffer (such as 145) by walking through the particular buffer segment (such as 210) with a host address pointer (such as 220) coupled to a host next pointer (such as 222).
  • the host next pointer (such as 222) communicates the next sector instruction (such as 212) to the host address pointer (such as 220).
  • the next sector instruction (such as 212) controls positioning of the host address pointer (such as 220) within the particular buffer segment (such as 210).
  • the method (such as in operation 400) preferably includes removing and relocating (such as in operation 412) data from the buffer by walking through the particular buffer segment (such as 210) with a disc address pointer (such as 224) coupled to a disc next pointer (such as 226).
  • the disc next pointer (such as 226) communicates the next sector instruction (such as 212) to the disc address pointer (such as 224).
  • the next sector instruction (such as 212) controls positioning of the disc address pointer (such as 224) within the particular buffer segment (such as 210).
  • either operation may be re-initiated (such as in operations 414, 416) according to the method (such as in operation 400).
  • VBM may be useful to other types of data storage devices, such as tape drives, optical drives, and networks, that implement data transfer using a buffer.
  • control module may be designed using integrated or logic circuits, as opposed to software, or in combination with software.
  • application module may combine the address pointers and the next pointers into one single component.
  • the maintenance of the buffer table may be construed in various other ways that do not take away from the invention.
  • VBM may be employed to a file allocation table.
  • VBM table itself could be built as a doubly linked list (with both next and previous pointers), and hardware could be constructed to assist the software in the maintenance of such a table.
  • the entire system could also be set up to work with collections of data larger or smaller than the standard 512 bytes.

Abstract

A method and system for managing the transfer of data into and out of a buffer in a disc drive is disclosed. The operation of the vectored buffer management (VBM) system is administered by a software-based control module. The control module creates and maintains a buffer table by dividing the buffer table into buffer sectors. The buffer sectors are further arranged into buffer segments, which are circularly linked lists into which the buffer table is partitioned. The control module effectuates the transfer by implementing an application module across the buffer table. Once the application module is arbitrarily positioned at an index sector by the control module, data transfer is accomplished when the application module walks through a particular buffer segment identified by the index sector. Specific control of the application module across a buffer segment is administered through host and disc address pointers, which are coupled to host and disc next pointers, respectively.

Description

BUFFER MANAGEMENT SYSTEM FOR MANAGING THE TRANSFER OF DATA INTO AND OUT OF A BUFFER IN A DISC DRIVE
Field of the Invention
This application relates generally to disc drive caching techniques and more particularly to a buffer management system for controlling the transfer of data between a host computer and a disc of a disc drive.
Background of the Invention
In a disc drive, data is stored on one or more discs coated with a magnetizable medium. Data is written to the discs by an array of transducers, typically referred to as read/write heads, mounted to a radial actuator for movement of the heads relative to the discs. The information is stored on a plurality of concentric circular tracks on the discs until such time that the data is read from the discs by the read/write heads. Each of the concentric tracks is generally divided into a plurality of separately addressable data sectors. The heads are used to transfer data between a desired track and an external environment, which includes, among many components, a host computer. During a read operation the head senses the data previously written on the disc track and transfers the information to the external environment. During a write operation, data is written onto the disc track. Once the data is written to the disc, each sector holds a block of data, which is the absolute smallest quantity that can be written to the disc during a single write operation.
Adjacent blocks, commonly referred to as chunks, or clusters, are typically written to the disc during a single write operation referred to as a command. Critical to both of these operations - reading and writing - is the accurate locating of the head over the center of the desired track. Typically, the heads are positioned with respect to the disc surfaces by an actuator voice coil motor. The voice coil motor is responsible for pivoting the actuator body about a pivot shaft, thus moving the heads across the disc surfaces. The actuator thus allows the heads to move back and forth in an accurate fashion between an inner radius and an outer radius of a disc. The actuator arm is driven by a control signal fed to the voice coil motor at the rear end of the actuator arm. A servo control system is used to sense the position of the actuator and control the movement of the head above the disc using servo signals read from a disc surface in the disc drive. The servo control system relies on servo information stored on the disc. The signals from this information generally indicate the present position of the head with respect to the disc, i.e., the current track position. The servo control system uses the sensed information to maintain head position or determine how to optimally move the head to a new position centered above a desired track. The servo control system then delivers a control signal to the voice control motor to position the head over a desired new track or to maintain the position over the desired current track.
The transfer of files between the host computer and the disc is controlled in a multi-level setting characterized by a bi-level transfer scheme. At a macroscopic level, track sectors are selected to contain the data sectors into which the file is divided. More specifically, and in a microscopic sense, cells along a track are magnetized to correspond to the bit structure of the file for the purposes of subsequent reading. A disc drive typically includes a buffer to implement this bi-level transfer scheme. The purpose of the buffer is to accept the sectors of data during their transfer between the host computer and the disc and then transfer the data to the proper component - either the host computer or the disc.
Typically, the system microprocessor contains programming designed to locate the sectors on the track where the data is to be read from or written to. The microprocessor is also programmed to control the transfer of the file at the sector, or macroscopic, level. The transfer of data to and from the buffer is accomplished in character-by-character fashion. The conventional method of accomplishing a transfer in a character-by-character environment is through the use of state machines, such as a buffer controller, a host interface controller, and a disc interface controller. As described in U.S. Patent 5,276,662, the transfer of files between the host and the discs is carried out under the overall control of the system microprocessor and a more precise control of the state machine controllers. For example, when a file is to be written to a disc, the host computer prompts the system microprocessor with generalized information which allows the microprocessor to define the sectors on the track to which the file is to be written. The microprocessor then initiates the transfer by implementing operations which result in transfer at the sector level. 
Such operations include commanding the servo control system to position the head over the selected track which is to receive the file and identifying a segment of the buffer which is to be used in the transfer. The microprocessor commands the host interface controller to initiate the transfer of the data to the buffer segment. Conventionally, the host interface controller, working in conjunction with the buffer controller and the host computer, provides precise control over the buffer in the transfer of characters of the file into sector blocks of the buffer segment. The disc interface controller controls the transfer of the blocks of data from the buffer segment to the appropriate sectors on a disc pursuant to commands issued by the system microprocessor once the read/write heads are positioned over the track that is to receive the file. A read/write circuit connected to the read/write heads then magnetizes the cells along the data tracks.
Conventionally, counters and interrupts are used in conjunction with the buffers to "watch over" the transfer of data to the buffer. The counters, which are typically located within the host and disc interface controllers, count the number of transfers that occur between both the host and the buffer and the disc and the buffer. Based upon this count, an interrupt generator supplies an interrupt to the microprocessor immediately after a selected number of transfers to the buffer have occurred. The counters and interrupts are thus used to determine whether the buffer is full when a transfer is to take place. The most significant drawback of disc drives is a relatively long delay between the time that a read or write operation is initiated and the time that it is mechanically completed. This delay includes a seek latency, the time during which the read/write heads are moved to a desired track; a rotational latency, the time during which the disc rotates until a desired track sector is under the read/write head; and an additional delay corresponding to the time during which data blocks are read from or written onto the disc surface. Due to the aforementioned delays, the speed with which a disc drive operates is typically slower than that of the other components of the computer system. That being the case, it is advantageous, for total system performance, for the host system to transfer the data it wishes to write to a buffer, then allow the drive to commit that data to a disc at a later time that is optimal for the drive to perform the (mechanical) task of putting the data on the media. This concept, usually called write caching, is common to data storage devices.
A well-known means for reducing the number of disc accesses associated with almost any software application involves the use of a single common cache memory. The cache memory, typically a volatile random access memory (VRAM) device, stores blocks of data that are read from, or blocks that are to be written onto, the disc. Once the system issues a disc read operation, the cache is first checked to see if the requested data block is already available, thus avoiding the typical physical disc read. If the data block is not available, then the system instructs the read/write heads to perform a disc read, from which the data is transferred to the cache.
As described in U.S. Patent 5,765,193, it is common for requests of data from a disc to be specific as to related data, such as data files. The data files will be written to and stored on the disc in adjacent sectors of the track. Accordingly, a principle of locality of reference is commonly known among computer programmers: when data is stored or to be stored at one location, it is highly probable that data stored or to be stored at physically adjacent locations will be accessed either simultaneously or soon thereafter. To exploit this principle, data which is to be written to a disc is commonly transferred to a write cache prior to recording on the disc. This procedure is advantageous in two respects. First, it allows the host to quickly access the data without having to perform an actual read operation on the disc. In order to read directly from the write cache, the system must keep track of which data blocks are being held in the cache instead of the disc. Second, it gives the read/write heads time to catch up with the other components of the computer. Once the cache is filled, the system microprocessor initiates write operations to transfer the data blocks in the cache to the disc drive. As further described in U.S. Patent 5,765,193, there are several methods for managing the "orderly" transfer of data from the write cache to the disc. One common technique for managing the write cache is to supplement the disc write operation with instructions to search the write cache for adjacent data blocks. The existence of at least two adjacent data blocks defines a cluster, which the system will transfer in whole pursuant to the disc write operation. By reducing the number of disc accesses, this technique decreases the overall number of time-consuming seek operations. A second known technique involves keeping a "least recently used" (LRU) queue, which comprises a list of data blocks in order of use.
This technique uses a virtual block number, a number assigned to the data block in relation to its intended storage location on the disc, to rank the data blocks in relation to the order of most recent use by the system. Based on a theory that data blocks more recently used are more likely to be used again prior to less recently used data blocks, the system transfers the latter to the disc while retaining the former in the write cache for quick access by the central processing unit. A third, and more outdated, technique is to simply remove blocks from the cache on a first-in-first-out basis.
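The LRU write-cache management described above can be illustrated with a short sketch. This is a hypothetical Python illustration, not part of any drive firmware; the class name, method names, and capacity are invented for the example. Blocks are keyed by virtual block number, and the least recently used block is the first candidate for transfer to the disc.

```python
from collections import OrderedDict

# Hypothetical sketch of an LRU write-cache queue: blocks are keyed by
# virtual block number (VBN); the least recently used block is flushed
# to the disc first when the cache is full.
class WriteCacheLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # VBN -> data, oldest first

    def access(self, vbn, data):
        """Write or re-use a block, marking it most recently used."""
        if vbn in self.blocks:
            self.blocks.move_to_end(vbn)  # most recently used
        self.blocks[vbn] = data
        if len(self.blocks) > self.capacity:
            # Transfer the least recently used block to the disc.
            return self.blocks.popitem(last=False)
        return None

cache = WriteCacheLRU(capacity=2)
cache.access(10, b"a")
cache.access(11, b"b")
evicted = cache.access(12, b"c")
print(evicted)  # -> (10, b'a'): the least recently used block is flushed
```

A first-in-first-out policy, by contrast, would ignore `move_to_end` and always evict the oldest insertion, which is the third technique noted above.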
While all three techniques relating to the transfer of data from the buffer to the disc provide an effective means of implementing the storage and retrieval of data files, problems have arisen in conjunction with data transfer to a disc from buffers of limited data space. Particularly, if the data from the buffer is not transferred to the disc in first-in-first-out fashion, then the buffer sectors from which the data was transferred cannot be re-used until the buffer sectors lower in order than the emptied sectors have, in fact, been emptied. For example, in a 100-sector buffer, if sectors 45 through 50 were written to the disc, then that particular buffer space cannot be reused by the host until buffer sectors 1 through 44 have been emptied. When dealing with disc drives of limited amounts of buffer data space, this "sequential" buffer accessing method is a significant disadvantage to the host computer with respect to write caching.
Summary of the Invention
Against this backdrop the present invention has been developed. The present invention is a means for rearranging buffer sectors so that buffer data space can be used more efficiently. In particular, the vectored buffer management system, in accordance with the present invention, hereinafter called "VBM," provides a means for rearranging the buffer space so that transfer of data into and out of the buffer can be done in an arbitrary or random fashion. In essence, VBM allows data to be written to and from the buffer sectors in a non-sequential fashion. VBM contains three components: an algorithmic component, a hardware component, and a software component. The algorithmic component defines a model, referred to as the VBM table, of the data buffer as a logically organized singly linked list of sectors. Each entry in the VBM table represents a single sector of the buffer. Each sector of the buffer stores a data block having a logical block address (LBA) representing a sector on a disc in the disc drive. The buffer sector where the hardware component is currently positioned is referred to as the index sector. Each index sector stores a "next" sector value, a physical buffer sector number denoting the "next" sector for the hardware component to reference upon conclusion of the transfer of data to or from the current index sector. The hardware component is responsible for the physical transfer of data into and out of the buffer. During a disc write operation, the hardware component uses the VBM table as the basis for the management of the data transfer. This is accomplished by setting a host address pointer (HAP), which is responsible for committing the data to the buffer, and a disc address pointer (DAP), which is responsible for transferring the data from the buffer to the disc. The initial value of these pointers is arbitrarily selected by the software component.
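The singly linked VBM table described above can be modeled in a few lines. This is a hypothetical Python sketch for illustration only (the function name and sample table are invented): each table entry holds the physical number of the "next" buffer sector, so following entries from any index sector walks out the chain of sectors.

```python
# Hypothetical sketch of the VBM table: a list in which vbm[i] gives the
# "next" buffer sector to reference after sector i is transferred.
def walk_segment(vbm, start, count):
    """Follow 'next' sector values from the index sector 'start'."""
    order = []
    sector = start
    for _ in range(count):
        order.append(sector)
        sector = vbm[sector]  # next-sector value stored in the table
    return order

# A six-sector buffer whose table links 0 -> 2 -> 4 -> 1 -> 3 -> 5 -> 0.
vbm = [2, 3, 4, 5, 1, 0]
print(walk_segment(vbm, 0, 6))  # -> [0, 2, 4, 1, 3, 5]
```

Note that the last entry points back to the starting sector, which is the circular-segment property developed in the detailed description below.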
The HAP and the DAP both are coupled to a host next pointer (HNP) and a disc next pointer (DNP), respectively. The functions of the next pointers are to read the next sector value from the index sector in order to direct the address pointers to the next sector. The hardware also provides an automated traversal function independent of the task that is managing the buffer allocation such that the function can still traverse a large number of table entries, even when the CPU is performing another task.
The software component maximizes the advantage of write caching by ensuring that the maximum possible number of buffer sectors is available to new host write commands at any given time, and by minimizing the number of disc operations needed to complete the requests from the host. More particularly, the software provides the mechanism for creating and maintaining the VBM table that is used by the hardware component to direct data transfers into and out of the data buffer. The software component is further responsible for administering control of the hardware component. By creating a list of free buffer sectors, the software component also provides for the ability of VBM to separate cached data from free space, merge buffer segments of the buffer table into single disc operations when beneficial, reuse any buffer sectors that hold redundant data, and return buffer sectors to the free list in any order.
These and various other features, as well as advantages which characterize the present invention, will be apparent from a reading of the following detailed description and a review of the associated drawings.
Brief Description of the Drawings
FIG. 1 is a plan view of a disc drive incorporating a preferred embodiment of the present invention showing the primary internal components.
FIG. 2 is a functional block diagram generally showing the main functional components used to control the disc drive of FIG. 1.
FIG. 3 is a plan view of the disc generally showing the main components on the surface of the disc.
FIG. 4 is a functional block diagram of the buffer management system in accordance with a preferred embodiment of the present invention.
FIG. 5 is a functional block diagram of the buffer management system more particularly focusing on the application and control modules of FIG. 4.
FIG. 6 is a schematic representation of the components of the application module of FIG. 4.
FIG. 7 is a flow diagram illustrating the operation of the control module in response to a write instruction by a host computer.
FIG. 8 is a flow diagram illustrating the arrangement of sectors of the free_list into one or more segments.
FIG. 9 is a flow diagram illustrating the mending process.
FIG. 10 is a flow diagram illustrating the operation of the control module when given an instruction to commit data from a buffer to a disc.
FIG. 11 is a flow diagram illustrating the merging process.
FIG. 12 is a flow diagram generally illustrating the operation of the present invention.
FIG. 13 is a schematic representation of the traversal component of the application module of FIG. 4.
Detailed Description
A disc drive 100 constructed in accordance with a preferred embodiment of the present invention is shown in FIG. 1. The disc drive 100 includes a base 102 to which various components of the disc drive 100 are mounted. A top cover 104, shown partially cut away, cooperates with the base 102 to form an internal, sealed environment for the disc drive 100 in a conventional manner. The components include a spindle motor 106 which rotates one or more discs 108 at a constant high speed. Information is written to and read from tracks 160 (FIG. 3) on the discs 108 through the use of an actuator assembly 110, which rotates about a bearing shaft assembly 112 positioned adjacent to the discs 108. The actuator assembly 110 includes a plurality of actuator arms 114 which extend towards the discs 108, with one or more flexures 116 extending from each of the actuator arms 114. Mounted at the distal end of each of the flexures 116 is a head 118 which includes an air bearing slider enabling the head 118 to fly in close proximity above the corresponding surface of the associated disc 108.
The spindle motor 106 is typically de-energized when the disc drive 100 is not in use for extended periods of time. The heads 118 are moved over park zones 120 near the inner diameter of the discs 108 when the drive motor is de-energized. The heads 118 are secured over the park zones 120 through the use of an actuator latch arrangement, which prevents inadvertent rotation of the actuator assembly 110 when the heads 118 are parked.
The radial position of the heads 118 is controlled through the use of a voice coil motor (VCM) 124, which typically includes a coil 126 attached to the actuator assembly 110, as well as one or more permanent magnets 128 which establish a magnetic field in which the coil 126 is immersed. The controlled application of current to the coil 126 causes magnetic interaction between the permanent magnets 128 and the coil 126 so that the coil 126 moves in accordance with the well-known Lorentz relationship. As the coil 126 moves, the actuator assembly 110 pivots about the bearing shaft assembly 112 and the heads 118 are caused to move across the surfaces of the discs 108.
A flex assembly 130 provides the requisite electrical connection paths for the actuator assembly 110 while allowing pivotal movement of the actuator assembly 110 during operation. The flex assembly includes a printed circuit board 132 to which head wires (not shown) are connected, the head wires being routed along the actuator arms 114 and the flexures 116 to the heads 118. The printed circuit board 132 typically includes circuitry for controlling the write currents applied to the heads 118 during a write operation and for amplifying read signals generated by the heads 118 during a read operation. The flex assembly terminates at a flex bracket 134 for communication through the base deck 102 to a disc drive printed circuit board (not shown) mounted to the bottom side of the disc drive 100.
Referring now to FIG. 2, shown therein is a functional block diagram of the disc drive 100 of FIG. 1 generally showing the main functional circuits which are resident on the disc drive printed circuit board and used to control the operation of the disc drive 100. The disc drive 100 is shown in FIG. 2 to be operably connected to a host computer 140 in which the disc drive 100 is mounted in a conventional manner. Control communication paths are provided between the host computer 140 and a disc drive microprocessor 142, the microprocessor 142 generally providing top level communication and control for the disc drive 100 in conjunction with programming for the microprocessor 142 stored in microprocessor memory (MEM) 143. The MEM 143 can include random access memory (RAM), read only memory (ROM), and other sources of resident memory for the microprocessor 142. The discs 108 are rotated at a constant high speed by a spindle control circuit 148. The radial position of the heads 118 is controlled through the application of current to a coil in the actuator assembly 110. A servo control circuit 150 provides such control. Data is transferred between the host computer 140 and the disc drive 100 by way of a disc drive interface 144, which includes a buffer 145 to facilitate high speed data transfer between the host computer 140 and the disc drive 100. Data to be written to the disc drive 100 are thus passed from the host computer 140 to the buffer 145 and then to a read/write channel 146, which encodes and serializes the data and provides the requisite write current signals to the heads 118. To retrieve data that has been previously stored by the disc drive 100, read signals are generated by the heads 118 and provided to the read/write channel 146. The interface 144 performs read signal decoding, error detection, and error correction operations. The interface 144 then outputs the retrieved data to the buffer 145 for subsequent transfer to the host computer 140.
Such operations of the disc drive 100 are well known in the art and are discussed, for example, in U.S. Pat. No. 5,276,662 issued Jan. 4, 1994, to Shaver et al.
Referring now to FIG. 3, shown therein is a plan view of the disc 108, generally showing the main components on the surface of the disc 108. The discs 108 are circumferentially divided into a plurality of concentric circular tracks 160. The number of tracks 160 per disc 108 will vary with each particular manufactured disc 108. The start of each revolution of a track 160 (INDEX) 162 is typically indicated by a mark that extends along the radius of the disc 108. The disc 108 is radially divided into a plurality of servo segments 164. The servo segments 164 begin at the center 166 of the disc 108 and terminate at the outer edge 168 of the disc 108. As with the number of tracks 160 per disc 108, the number of servo segments 164 per disc 108 varies with each particular manufactured disc 108. Each track 160 is composed of spaced servo segments 164 with data sectors between the servo segments 164.
In a general sense, FIG. 12 illustrates, in operations 399 through 417, a preferred embodiment of the overall operation 400 of the buffer management system for the buffer 145, in accordance with the present invention. In operation 402, a buffer table 206 (FIG. 4) is maintained by representing the buffer 145 as divided into a plurality of equally sized buffer sectors 208 (FIG. 4). In operation 404, the buffer sectors 208 are arranged into circularly linked lists, referred to as buffer segments 210 (FIG. 4). In operation 406, a particular buffer segment 210 is arbitrarily selected by the control module 202 to store the data file. In operation 408, data, which is actually transferred from a host 140 (FIG. 4) to the buffer 145 as a file, is represented as being transferred into a buffer segment 210. In operation 410, the data file is stored in the buffer 145 for an indefinite period of time. The data file is represented as stored in the buffer segment until, in operation 412, the data is removed from the buffer 145 and relocated to the disc 108, thus effectuating the transfer from the host 140 to the disc 108. In operation 414, a determination is made as to whether either "look-up" (the transfer or the removal) was administered prematurely. If not, then the process is finished. However, if a look-up was premature, then, if the look-up was a transfer, operation 416 re-initiates the transfer in operation 408. If the look-up was a removal and replacement, then operation 416 re-initiates the removal in operation 412.
Referring now to FIG. 4, a functional block diagram of a buffer management system 200 for managing the transfer of data to and from the buffer 145 is shown. The buffer management system 200 manages the transfer of data between the host computer 140 and the disc 108 through the implementation of three main components: a control module 202, an application module 204, and a buffer table 206. The buffer table 206 is divided into n equal buffer sectors 208, each having a capacity to store 512 bytes. Each buffer sector 208 is further incorporated into a circularly linked list, or buffer segment 210. The buffer table 206 can be divided into as many as n buffer segments 210 or can be made of only one buffer segment 210 having n buffer sectors 208. Table 1, below, depicts a buffer table 206 containing n = 9 buffer sectors 208 and two circularly linked buffer segments 210.
Table 1
For simplicity, Table 1 can be broken down into 2 independent buffer segments 210, as shown in Tables 2 and 3:
Table 3
With respect to the buffer table 206, each buffer sector 208 contains two instructions important to the operation of the buffer management system 200: an offset value 214 ("offset") and a next sector instruction value 212 ("next sector instruction"). Both instructions are crucial to the operation of the buffer management system 200 in that, taken as a whole, the offset 214 and the next sector instruction 212 define the circular aspect of the buffer segment 210. For example, referring back to Table 3, the values in the left column are referred to as the offset values 214 and the values in the right column are referred to as the next sector instruction values 212. When the application module 204 is positioned at the current buffer sector 208 "4," the application module 204 reads the next sector instruction 212 and moves to the next buffer sector 208 "7." Once positioned at current buffer sector 208 "7," the application module 204 reads the next sector instruction 212 and moves to the next buffer sector 208 "5." This procedure is repeated until the application module 204 is positioned back to the buffer sector 208 having offset 214 value "4," thus enclosing the circularly linked segment 210. According to the preferred embodiment, the maintenance of the buffer table 206 and the operation of the application module 204 on the buffer table 206 are both administered by the control module 202. Greater detail of the application module 204 and the control module 202 is discussed below.
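The circular walk described above, using the example offsets "4," "7," and "5," can be sketched as follows. This is a hypothetical Python illustration (the function name and dictionary representation are invented); each offset maps to its next sector instruction value, and traversal stops when the segment wraps back to the starting offset.

```python
# Hypothetical sketch of a circularly linked buffer segment: each offset
# value maps to its next sector instruction value, as in Table 3.
segment = {4: 7, 7: 5, 5: 4}

def traverse_until_wrap(next_sector, start):
    """Visit sectors until the segment wraps back to the start offset."""
    visited = [start]
    current = next_sector[start]
    while current != start:
        visited.append(current)
        current = next_sector[current]
    return visited

print(traverse_until_wrap(segment, 4))  # -> [4, 7, 5]
```

The final entry's next sector instruction equals the starting offset, which is what encloses the circularly linked segment.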
Referring now to FIG. 5, a more specific functional block diagram of the system 200 of FIG. 4 is shown. In particular, functional blocks of the components of the application module 204 are depicted. The application module 204 is the hardware component of the buffer management system 200. The application module 204 is divided into two main sub-modules: a host address pointer (HAP) 220 coupled to a host next pointer (HNP) 222 and a disc address pointer (DAP) 224 coupled to a disc next pointer (DNP) 226. The application module 204 walks through the buffer table 206 in segment-by-segment fashion. The host next pointer 222 and the disc next pointer 226 effectively control the positioning of the next buffer sector 208 position of the host address pointer 220 and the disc address pointer 224, respectively. As noted above, the operation of the application module 204 is controlled through the control module 202.
The host address pointer 220 is responsible for committing the data blocks to the buffer 145. In the preferred embodiment, committing data to the buffer 145 is accomplished by reference to the buffer table 206. In its operation, the host next pointer 222 communicates to the host address pointer 220 the value of the "next" sector 208 of the particular buffer segment 210 upon which the host address pointer 220 is operating. The host next pointer 222 receives the offset value 214 of the "next" sector 208 in the buffer segment 210 from the next sector instruction 212. The host address pointer 220 walks through the particular buffer segment 210 in response to the next sector instruction 212 read by the host next pointer 222. After the host address pointer 220 commits data to the current buffer sector 208, it immediately jumps to the next buffer sector 208 as communicated by the host next pointer 222. Ultimately, the next sector instruction value 212 of the last buffer sector 208 will equal the offset value 214 of the initial buffer sector 208 referenced by the host address pointer 220 in the buffer segment 210. This is the point at which the circular segment wraps. The host address pointer 220 continues to walk through the segment 210, wrapping as needed, until all the data for the current command has been transferred. When data is to be transferred from the buffer 145 to the disc 108, the application module 204 generally operates in the same fashion as when data is transferred from the host 140 to the buffer 145. The disc address pointer 224 is responsible for committing the data from the buffer 145 to the disc 108. In the preferred embodiment, committing data to the disc 108 is accomplished by reference to the buffer table 206. In its operation, the disc next pointer 226 communicates to the disc address pointer 224 the position of the "next" buffer sector 208 of the particular buffer segment 210 upon which the disc address pointer 224 is operating.
The disc next pointer 226 receives the offset value 214 of the "next" buffer sector 208 in the buffer segment 210 from the next sector instruction 212. The disc address pointer 224 walks through the particular buffer segment 210 in response to the next sector instruction 212 read by the disc next pointer 226. After the disc address pointer 224 commits data from the buffer sector 208 to the disc 108, it immediately jumps to the next buffer sector 208 as dictated by the disc next pointer 226. Ultimately, the next sector instruction value 212 of the last buffer sector 208 will equal the offset value 214 of the initial buffer sector 208 referenced by the disc address pointer 224 in the buffer segment 210. This is the point at which the circular segment wraps. The disc address pointer 224 continues to walk through the segment 210, wrapping as needed, until all the data for the current command has been transferred.
With respect to the operation of disc address pointer 224, data is transferred from the buffer 145 in sector-by-sector fashion. Once the data has been extracted from the particular buffer sector 208, it is transferred to the disc 108 according to the particular servo segment 164 and track 160 upon which the file is to be located. Referring now to FIG. 6, a schematic representation of the application module 204 is shown operating on the buffer 145. The disc current address counter 244 and the disc current page register 246 form the disc address pointer 224. Similarly, the host current address counter 252 and the host current page register 250 form the host address pointer 220. The disc next page register 240 and the host next page register 242 serve as the disc next pointer 226 and the host next pointer 222, respectively. The remap base address register 248 sets the disc remap address 247 and the host remap address 249 at the base of the VBM table 206 in the buffer 145. Accordingly, in a preferred embodiment, the output of the remap base address register 248 uses either the disc remap address 247 or the host remap address 249 to address the buffer 145, depending upon which acknowledgment control line (241 or 243) is enabled. If the disc interface controller requests data to be read from the buffer 145, then the disc remap acknowledgment line 241 is enabled and the disc remap address 247 is used to access the buffer 145. If the host interface controller requests data to be written to the buffer 145, then the host remap acknowledgment line 243 is enabled and the host remap address 249 is used to address the buffer 145. The disc next page register 240 loads the disc current page register 246 with the next sector instruction value 212 of the next sector 208 that is to be read from the buffer 145. Data is read from the current sector 208 so long as the disc data address line 245 originating from the output of the disc current address counter 244 is enabled.
The output of the disc current address counter 244 is tied to a sector comparator 256. The value of the disc current address counter 244 is input B to the sector comparator 256. Input A of the comparator 256 is a constant value of the buffer sector 208 size, typically 512 bytes. When the value of the disc current address counter 244 equals input A, a signal is transmitted from the output of the comparator 256 to the disc current page register 246 requesting a switch to the next buffer sector 208, which has a value loaded in the disc current page register 246 by the disc next page register 240. The next buffer sector 208 value is defined by the next sector instruction value 212 loaded into the disc next page register 240 from the buffer 145.
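The counter-and-comparator behavior just described can be sketched in software. This is a hypothetical Python model, not the disclosed hardware (the function name and sample table are invented): a byte counter is compared against the sector size, and on a match the current page is reloaded from the next-sector value in the table.

```python
SECTOR_SIZE = 512  # bytes per buffer sector, the comparator's constant input

# Hypothetical model of the address logic: count bytes within the current
# sector; when the counter reaches the sector size (comparator match),
# reload the current page from the table's next-sector value.
def transfer_bytes(vbm, start_sector, nbytes):
    """Return the sequence of buffer sectors (pages) referenced."""
    pages = [start_sector]
    current, counter = start_sector, 0
    for _ in range(nbytes):
        counter += 1
        if counter == SECTOR_SIZE:   # comparator output asserted
            current = vbm[current]   # page register loaded from next page
            counter = 0
            pages.append(current)
    return pages

vbm = [2, 0, 1]                      # circular segment 0 -> 2 -> 1 -> 0
print(transfer_bytes(vbm, 0, 1024))  # -> [0, 2, 1]
```

In the model, each 512-byte boundary triggers one page switch, mirroring the comparator signal delivered to the current page register.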
The host next page register 242 loads the host current page register 250 with the next sector instruction value 212 of the next sector 208 that is to be written to the buffer 145. Data is written to the current sector 208 so long as the host data address line 251 originating from the output of the host current address counter 252 is enabled. The output of the host current address counter 252 is tied to a sector comparator 258. The value of the host current address counter 252 is input A to the sector comparator 258. Input B of the comparator 258 is a constant value of the buffer sector 208 size, typically 512 bytes. When the value of the host current address counter 252 equals input B, a signal is transmitted from the output of the comparator 258 to the host current page register 250 requesting a switch to the next buffer sector 208, which has a value loaded in the host current page register 250 by the host next page register 242. The next buffer sector value is the next sector instruction value 212 loaded into the host next page register 242 from the buffer 145. Referring now to FIG. 13, a schematic representation of the traversal component of the application module 204 is shown. The traversal component 260 includes a traverse state machine 262, a traverse down counter 264, and a next traverse address register 266. The traverse state machine 262 is loaded with the value 214 of the offset base address. The offset value 214 is latched into the next traverse address register 266 as the next sector 208 position for the traversal component 260. The traversal component 260 traverses the application module 204 over a buffer segment 210 once the traverse state machine 262 initializes the traverse down counter 264. The traversal component 260 receives the traverse address from the next traverse address register 266 and continually executes a traversal of the application module 204 until the traverse down counter 264 completes counting.
The count for the traverse down counter 264 is loaded by the microprocessor 142, thereby allowing the application module 204 to iteratively hop a particular buffer segment 210 a predetermined number of times. The next sector instruction value 212 dictates the next sector 208 to traverse and is communicated to the traversal component 260 through the next traverse address register 266. After the value 214 of the offset base address is initially transmitted to the traverse state machine 262, the next sector instruction value 212 is the only control parameter controlling the subsequent sector 208 of traversal until the traverse down counter 264 completes counting, at which time the traversal component 260 is initialized to await the next value 214 of the offset base address. As noted above, maintenance of the buffer table 206 and operation of the application module 204 are both administered by the control module 202. The control module 202 is the software component of the buffer management system 200. The control module 202 maximizes the benefit of write caching by ensuring that the maximum possible number of sectors 208 of the buffer 145 are available to the host 140 for write commands at any given time. In short, the control module 202 provides a mechanism for maintaining the buffer table 206 that is used by the application module 204 to direct data transfers into and out of the data buffer 145, via the host address pointer 220 and the disc address pointer 224.
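The down-counter traversal described above amounts to following a fixed number of table entries without CPU involvement. The sketch below is a hypothetical Python illustration (function name and sample table invented): the hop count plays the role of the traverse down counter, and each hop reads the next-sector value as the next traverse address register would.

```python
# Hypothetical sketch of the traversal component: hop a preloaded number
# of table entries, starting from the offset base address.
def traverse(vbm, base_address, hop_count):
    """Follow 'hop_count' next-sector values from 'base_address'."""
    address = base_address
    for _ in range(hop_count):   # traverse down counter decrements
        address = vbm[address]   # next traverse address register reload
    return address

vbm = [3, 0, 1, 2]               # circular segment 0 -> 3 -> 2 -> 1 -> 0
print(traverse(vbm, 0, 3))       # -> 1
```

Because only the table entries drive each hop, the traversal can proceed while the microprocessor attends to other tasks, which is the point of the automated traversal function noted in the summary.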
Referring to FIG. 7, a flow diagram, generally described in operations 299 through 317, illustrates the operation of the control module 202 when employed to transfer data from the host 140 to the buffer 145. In operation 300, the control module 202 organizes the sectors 208 of the buffer 145 into a free_list in order to maximize the number of sectors 208 available to the host 140 for write commands at any given time. The free_list is initialized to be a circularly linked segment 210 of all the free sectors 208 in the write buffer 145. Table 4 is an illustration of the free_list, as initialized:
Table 4 (not reproduced): the initialized free_list, in which each buffer sector's next sector instruction points to the following sector and the last sector points back to the first.
Once the control module 202 completes organization of the sectors 208 into the free_list, the physical transfer of the data from the host 140 to the buffer 145 is initiated. It is during this process that the control module 202 divides the buffer table 206 into one or more buffer segments 210. In operation 308, the control module 202 arbitrarily selects a buffer sector 208 in the free_list - referred to in the preferred embodiment as the first index sector 208 - as "free_space_start," which is the buffer sector 208 that the control module 202 selects as the beginning sector 208 for data transfer to the buffer 145. The control module 202 controls the application module 204 such that the host address pointer 220 is positioned at the first index sector 208 labeled free_space_start. In operation 302, the control module 202 receives the incoming data to be cached. In operation 310, the control module 202 directs the host address pointer 220 to transfer the first block of the file to the sector identified as free_space_start. In operation 312, the control module 202 determines whether transfer of the file is complete. If all data blocks have been transferred, the control module jumps to operation 316. However, if the transfer is not complete, then, in operation 314, the control module 202 positions the host address pointer 220 at the next "index" sector 208 (each current position of the host address pointer 220 is preferably referred to as the "index" sector) pursuant to the next sector instruction 212 of the previous index sector 208. The host address pointer 220 transfers the data block as the host next pointer 222 references the next index sector 208. In operation 316, once the control module 202 determines that the transfer of the file is complete, it assigns the label "free_space_start" to the sector 208 in the free_list that appears directly after the last buffer sector 208 in the current buffer segment 210.
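The initialization of operation 300 and the host-to-buffer walk of operations 310 through 316 can be sketched as follows. This is a minimal hypothetical model, not the patented implementation: the list `nxt` of next sector instruction values, the `placement` map, and the function names are illustrative assumptions:

```python
# Sketch of operation 300: the free_list links every buffer sector into one
# circular singly linked list (as in Table 4).
def init_free_list(n):
    """Return a buffer table of n sectors circularly linked 0 -> 1 -> ... -> n-1 -> 0."""
    return [(i + 1) % n for i in range(n)]

# Sketch of operations 310-316: the host address pointer walks the list,
# storing one data block per sector, guided by each next sector instruction.
def host_write(nxt, free_space_start, blocks):
    """Cache the given blocks starting at free_space_start.

    Returns (placement, new_free_space_start): a map of sector -> block, and
    the sector directly after the last one written (the next free_space_start).
    """
    placement = {}
    index = free_space_start
    for block in blocks:
        placement[index] = block   # host address pointer transfers a block
        index = nxt[index]         # host next pointer references the next index sector
    return placement, index
```

For example, writing three blocks into an eight-sector free_list starting at sector 0 fills sectors 0, 1, and 2 and leaves sector 3 as the new free_space_start.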
The determination of the size of the buffer segment 210, and of where in the buffer table 206 the division of the free_list into buffer segments 210 occurs, is described in more detail in FIG. 8 and Table 5.
Referring to FIG. 8, a flow diagram, generally described in operations 329 through 339, illustrates the arrangement of the sectors 208 of the free_list into one or more segments 210. The control module 202 effectively arranges the free_list into one or more circularly linked list segments 210 once a command of length "L" buffer sectors 208 is sent by the host 140 to the buffer management system 200. In operation 330, which is the same as operation 310 in FIG. 7, the control module 202 directs the host address pointer 220 to transfer the first block of the file to the first index sector 208, which, in the preferred embodiment, is labeled "free_space_start." Operation 332 locates the last buffer sector 208 of data for this particular command, i.e., the last sector 208 of that particular buffer segment 210, by traversing with the traversal component 260 the singly linked free_list starting at free_space_start and going "L - 1" sectors down the buffer table 206. Operation 334 designates the value of the next sector instruction 212 of the buffer sector 208 corresponding to "L - 1" steps down the table as "free_space_start." Since this buffer sector 208 now incorporates the next sector instruction 212 "free_space_start," the circularly linked list is enclosed, illustrating that the particular buffer segment 210 is a list independent of the free_list. Operation 336 traverses the remaining buffer sectors 208 in the free_list in order to illustrate the remaining circularly linked list. In operation 338, designating the next sector instruction 212 of the last sector 208 in the free_list as "free_space_start" encloses the remaining sectors 208 of the buffer table 206 in an independent circularly linked list. Thus, the end result illustrates how the free_list is divided into one or more buffer segments 210 once the host 140 transfers data to the buffer 145.
Table 5 better illustrates the arrangement of buffer sectors 208 into buffer segments 210, applying a command of L = 3 to the initialized free_list of Table 4.
Table 5 (not reproduced): the free_list of Table 4 after a command of L = 3, divided into a buffer segment of sectors 0 through 2 and a remaining free_list of sectors 3 through "n - 1."
Whereas sectors 0 through 2 represent the buffer segment 210 that the control module 202 is currently instructing the host address pointer 220 to fill with data, sectors 3 through "n - 1" represent the remaining buffer sectors 208 in the free_list. In this example, sector 3 is the next "free_space_start" position, at which the control module 202 will direct the host address pointer 220 the next time the host 140 implements a transfer of data. Hence, the free_list of the buffer table 206 is divided, in this example, into two separate buffer segments 210, one of length 3 and one of length "n - 3." Similar manipulations can be performed either to further subdivide various buffer segments 210 or to merge segments 210 back into single loops.
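The division of operations 330 through 338 can be sketched in the same hypothetical list-of-next-instructions model used above (names and representation are illustrative, not the claimed implementation):

```python
# Sketch of operations 332-338: close off a segment of L sectors starting at
# free_space_start, leaving the remainder as a separate circular free_list.
def split_segment(nxt, free_space_start, L):
    """Split a circular free_list; return (segment_start, new_free_space_start)."""
    # Operation 332: traverse L - 1 sectors to the last sector of the command.
    last = free_space_start
    for _ in range(L - 1):
        last = nxt[last]
    new_free_space_start = nxt[last]
    # Locate the end of the remaining free_list (the sector that currently
    # links back to free_space_start).
    tail = new_free_space_start
    while nxt[tail] != free_space_start:
        tail = nxt[tail]
    nxt[last] = free_space_start        # operation 334: enclose the new segment
    nxt[tail] = new_free_space_start    # operation 338: enclose the remaining free_list
    return free_space_start, new_free_space_start
```

Applying L = 3 to a seven-sector initialized free_list reproduces the Table 5 example: sectors 0 through 2 become an independent circle, and sectors 3 through 6 form the remaining circular free_list with sector 3 as the new free_space_start.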
Referring now to FIG. 9, a flow diagram is shown illustrating a mending process programmed in the control module 202 that is initiated any time a write command is issued. Although the mending process of FIG. 9 is a separate process from that defined in FIG. 8, the two processes are concurrent with each other, and the mending process ultimately concludes with operation 300. The mending process is initiated, in operation 320, when the host 140 sends a new write command that is received by the control module 202. In operation 322, once a new write command is received, the control module 202 first checks whether any of the logical block addresses (LBAs) of the data from the new command overlap any of the LBAs of data cached into the buffer sectors 208 from previous commands. If an overlap is not found, then operation 300 is initiated and the data is transferred to the buffer 145 in accordance with the flow diagram in FIG. 8.
If an overlap of LBAs is found, then the control module, in operation 324, takes the buffer sectors 208 corresponding to the old write command and adds them to the end of the free_list. This is accomplished by replacing the next sector instruction value 212 of the buffer sector 208 located at the end of the free_list with the old write sectors 208. The data from these buffer sectors 208 is considered "redundant" data; thus the buffer sectors 208 are mended to the free_list. Once the two lists are mended, in operation 324, operation 300 is initiated and the data is transferred to the buffer 145 in accordance with the flow diagram in FIG. 8. Table 6 is an illustration of the buffer table 206 as it goes through the mending process depicted in FIG. 9.
Table 6 (not reproduced): the buffer table 206 during the mending process.

In this example, a list of L = 3 has been mended to a list of L = 4 to make a new list of L = 7. An overlap was found between the LBAs of the data blocks contained in buffer sectors 208 0, 1, and 2 and the LBAs of data blocks related to a new write command. Hence, the mending process added the first index sector 208 (sector 0) of the "redundant" buffer segment 210 to the end of the original free_list (sectors 3, 4, 5, and 6) by replacing the next sector instruction 212 of sector 3 with the offset value 214 of sector 0. Sector 3 denotes the end of the free_list, since sector 3 is the buffer sector 208 at which one complete circle of the singly linked list terminates. The control module 202 then administers the write command by positioning the host address pointer 220 at sector 0, which will be the first index sector 208 of the particular buffer segment 210.
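The splice described in this example can be sketched in the same hypothetical model. Note that, because both lists are circular, forming one larger circle implicitly requires the redundant segment's last sector to continue where the free_list's end sector previously pointed; that second pointer update is an assumption made explicit here, and the names are illustrative:

```python
# Sketch of operation 324: mend a redundant circular segment into the
# circular free_list, yielding one larger circular list (as in Table 6).
def mend(nxt, free_list_end, seg_start):
    """Splice the segment starting at seg_start into the free_list directly
    after the free_list's end sector."""
    # Find the last sector of the redundant segment (the one whose next
    # sector instruction links back to seg_start).
    seg_end = seg_start
    while nxt[seg_end] != seg_start:
        seg_end = nxt[seg_end]
    nxt[seg_end] = nxt[free_list_end]   # segment tail continues the free_list circle
    nxt[free_list_end] = seg_start      # free_list end now points at the old segment
```

With the Table 6 starting state (redundant segment 0:1:2 and free_list 3:4:5:6 ending at sector 3), mending yields a single circle of seven sectors: 3 -> 0 -> 1 -> 2 -> 4 -> 5 -> 6 -> 3.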
Referring to FIG. 10, a flow diagram, generally described in operations 349 through 359, illustrates the operation of the control module 202 when employed to transfer data from the buffer 145 to the disc 108. In operation 350, the buffer sector 208 storing the first data block of the file to be transferred is designated the "index" sector 208. In operation 352, the index sector 208 is added to the end of the free_list. In operation 354, the control module 202 directs the disc address pointer 224 to transfer the data block in the index sector 208 to the disc 108. Thus, at the instant the data from a buffer sector 208 is committed to the disc 108, that particular buffer sector 208 is immediately freed for use by subsequent host 140 write commands. In operation 356, the control module 202 determines whether all the data blocks within the file to be transferred have been committed to disc. If not, then, in operation 358, the control module 202 positions the disc address pointer 224 on the next "index" sector 208, as instructed by the next sector instruction 212 of the previous index sector 208. After positioning the disc address pointer 224 on the next index sector 208, the control module 202 jumps to operation 352 and repeats the procedure from there. Once the disc address pointer 224 has completely transferred the particular buffer segment 210 storing the file, as determined in operation 356, the task is finished and the control module 202 initializes the disc address pointer 224 to await instructions for subsequent transfers.
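The commit loop of operations 350 through 358 can be sketched as follows, again in the hypothetical list model (the `disc` list and `placement` map simply stand in for the medium and cached data; all names are illustrative):

```python
# Sketch of operations 350-358: the disc address pointer walks the segment,
# committing each block and freeing its sector the moment it is written.
def commit_segment(nxt, seg_start, length, disc, placement):
    """Write 'length' blocks starting at seg_start to the disc; return the
    sectors freed, in commit order."""
    freed = []
    index = seg_start
    for _ in range(length):
        disc.append(placement[index])   # disc address pointer commits the block
        freed.append(index)             # the sector is immediately free again
        index = nxt[index]              # disc next pointer supplies the next index sector
    return freed
```

For a three-sector segment 0 -> 1 -> 2 holding blocks "a", "b", "c", the sketch commits the blocks in order and frees sectors 0, 1, and 2 as it goes, mirroring how each sector becomes available to new host write commands at the instant its data reaches the disc.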
The process of adding buffer sectors 208 that are to be written to the disc 108 to the end of the free_list is greatly advantageous if a new write command arrives at the control module 202 that is sequential in LBA space with the buffer sectors 208 being committed to the disc 108, i.e., immediately following them on the disc 108. The advantage is that no manipulation of the buffer table 206 is necessary for the new write command, because the control module 202 will write the new data to the disc 108 during the same operation in which it is committing the old data to the disc 108. Table 7 illustrates this process:
Table 7 (not reproduced): the buffer table after the sectors being committed to the disc have been added to the end of the free_list, with sector 3 as free_space_start.
In this example, the data relating to the new write command would have a first index sector 208 of 3, since sector 3 corresponds to free_space_start. If the new write command is larger (more sectors) than the segment 210 - in this case sectors 3, 4, 5, and 6 - the control module 202 uses the free_list as a circular loop. Since the buffer sectors 208 that had stored the data being committed to the disc 108 have already been added to the end of the free_list, such sectors 208 become part of the circular free_list. The control module 202 directs the disc address pointer 224 to follow the host address pointer 220 (initiated at the first index sector 208), which, in turn, is following the disc address pointer 224 as data from buffer sectors 208 are committed to the disc 108. Hence, the control module 202 implements a real-time circular loop using the disc 224 and host 220 address pointers.
Another advantage of the buffer management system 200 is the ability of the control module 202 to merge two or more cached commands into one buffer segment 210. It is advantageous to merge one buffer segment 210 into another because doing so allows the data to be written from the buffer 145 to the disc 108 in one operation. If two or more buffer segments 210 contain data files that are to be written onto adjacent servo segments 164 on the disc 108, the control module 202 will "merge" such buffer segments 210 of the buffer table 206 into one buffer segment 210.
Referring to FIG. 11, a flow diagram, generally described in operations 379 through 385, illustrates the merging process of the control module 202. In operation 380, cached writes contained in buffer segments 210 are compared to determine whether the data files are to be committed to the disc 108 on adjacent, or sequential, servo segments 164. In operation 382, if such data is not to be committed to adjacent servo segments 164, then operation 381 checks to see if all buffer segments 210 in the table 206 have been compared to each other. If not, then the comparison routine of operation 380 is repeated. If all buffer segments 210 have been compared, the process has finished without merging any buffer segments 210.
If the data from the buffer segments 210 are to be committed in sequential positions on the disc 108, then the actual merging process begins in operation 384. In operation 384, the next sector instruction value 212 of the last buffer sector 208 of either of the buffer segments 210 is replaced with the offset value 214 of the first index sector 208 of the other buffer segment 210. Hence, the two segments 210 are merged into one circularly linked list. Once the merge is complete, the process is initiated again at operation 380 until all buffer segments 210 are compared to one another, as determined by operation 381. In essence, this is an ongoing process due to the frequency of cached writes to the buffer 145. Tables 8 and 9 further illustrate the merging process of operation 384:
Table 8 (not reproduced): the buffer table prior to the merge, containing segments 0:1, 4:5, and 2:3:6.
Table 9 (not reproduced): the buffer table after the merge, containing segments 0:1:4:5 and 2:3:6.
In this example, Table 8 illustrates the buffer table 206 prior to the merge. Table 8 contains three separate buffer segments 210: segment 0:1, segment 4:5, and segment 2:3:6. In this illustration, the data blocks contained in segment 0:1 are to be placed on the disc 108 in servo segments 164 adjacent to those of segment 4:5. Therefore, since segments 0:1 and 4:5 are sequential on the disc 108, the control module 202 merges the two buffer segments 210, resulting in Table 9. As shown, Table 9 contains only two buffer segments 210: segment 0:1:4:5 and segment 2:3:6.
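The merge of operation 384 can be sketched in the same hypothetical model. As with mending, producing a single circularly linked list implicitly requires the second segment's last sector to link back to the first segment's start; that second pointer update is an assumption made explicit here, and the names are illustrative:

```python
# Sketch of operation 384: merge two circular buffer segments into one
# circularly linked list (Tables 8 and 9).
def merge_segments(nxt, a_start, b_start):
    """Splice the circular segment at b_start into the one at a_start."""
    # Find each segment's last sector (the one linking back to its start).
    a_end = a_start
    while nxt[a_end] != a_start:
        a_end = nxt[a_end]
    b_end = b_start
    while nxt[b_end] != b_start:
        b_end = nxt[b_end]
    nxt[a_end] = b_start   # last sector of the first segment now points into the second
    nxt[b_end] = a_start   # last sector of the second segment closes the combined circle
```

Starting from the Table 8 state (segments 0:1, 4:5, and 2:3:6), merging segments 0:1 and 4:5 yields the Table 9 state: one circle 0 -> 1 -> 4 -> 5 -> 0, with segment 2:3:6 untouched.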
In summary, the present invention may be viewed as a buffer management system (such as 200) for managing a transfer of data from a host computer (such as 140) to a recordable disc (such as 108) in a disc drive (such as 100). The disc drive (such as 100) operates on the host computer (such as 140). The recordable disc (such as 108) is radially divided into one or more equal servo segments (such as 164) and circumferentially divided into one or more rotational tracks (such as 160). Thus, the tracks (such as 160) are divided by the servo segments (such as 164).
The disc drive (such as 100) is of a type wherein data transfers between the host computer (such as 140) and the disc (such as 108) are effectuated by transferring a selected number of data blocks constituting a file from either the host computer (such as 140) or the disc (such as 108) to a buffer (such as 145) for temporary storage of the file. After temporary storage, the file is transferred to either the host computer (such as 140) or the disc (such as 108). According to the present invention, management of the buffer is accomplished by the buffer management system (such as 200), which includes a buffer table (such as 206), an application module (such as 204), and a control module (such as 202). The buffer management system (such as 200) includes a buffer table (such as 206), which is divided into one or more equally sized buffer sectors (such as 208). Each one of the buffer sectors (such as 208) is a sector in a buffer segment (such as 210). The buffer segment (such as 210) is defined as a circular linked list within the buffer table (such as 206). The buffer management system (such as 200) also includes an application module (such as 204) to walk through each particular buffer segment (such as 210). The buffer table (such as 206) is divided into at least one buffer segment (such as 210). However, the buffer table (such as 206) may only be divided into as many buffer segments (such as 210) as the buffer table (such as 206) has buffer sectors (such as 208).
The buffer management system (such as 200) further includes a control module (such as 202), which creates and maintains the buffer table (such as 206). The control module (such as 202) is coupled with the application module (such as 204) in order to administer the walking through the buffer table (such as 206) by the application module (such as 204). The application module (such as 204) walks through the buffer table (such as 206) in response to a next sector instruction (such as 212) in an index sector (such as 208) upon which the application module (such as 204) is currently positioned. In case the data was committed - either to the disc (such as 108) or the buffer (such as 145) - prematurely, the control module (such as 202) includes a look-up routine (such as in operation 414) that re-initiates the application module (such as 204) to perform a subsequent look-up of the data.
The application module (such as 204) preferably includes a host address pointer (such as 220) for committing the data to the buffer (such as 145). The host address pointer (such as 220) is preferably coupled to a host next pointer (such as 222), which communicates the next sector instruction (such as 212) of the index sector (such as 208) to the host address pointer (such as 220). Hence, the next sector instruction (such as 212) controls positioning of the host address pointer (such as 220) within the particular buffer segment (such as 210).
The application module (such as 204) also preferably includes a disc address pointer (such as 224) for committing the data from the buffer (such as 145) to the disc (such as 108). The disc address pointer (such as 224) is preferably coupled to a disc next pointer (such as 226), which communicates the next sector instruction (such as 212) of the index sector (such as 208) to the disc address pointer (such as 224). Hence, the next sector instruction (such as 212) controls the positioning of the disc address pointer (such as 224) within the particular buffer segment (such as 210). In a preferred embodiment, the buffer table (such as 206) includes a free list of sectors readily available to the system (such as 200) for temporary storage. When an existing data block is to be written to the disc (such as 108) from an index sector (such as 208), the control module (such as 202) mends the index sector (such as 208) into the free list of sectors so that the index sector (such as 208) will become available to the system (such as 200) as it is committed to the disc (such as 108). The control module (such as 202) arbitrarily selects the particular buffer segment (such as 210) from the free list and arbitrarily positions the host address pointer (such as 220) over a first index sector (such as 208) of the particular buffer segment (such as 210) when a write instruction is issued from the host (such as 140). The free list preferably includes any buffer segments (such as 210) that contained data either written to the disc (such as 108) or considered redundant.
The control module (such as 202) also includes a merging routine (such as in operations 380 through 384). The merging routine (such as in operations 380 through 384) combines one or more buffer segments (such as 210) that are to be written onto adjacent servo segments (such as 164). The merging routine (such as in operations 380 through 384) transforms the plurality of buffer segments (such as 210) into one aggregate buffer segment (such as 210) that can be committed to the disc (such as 108) in one write process.
The present invention may also be viewed as a method (such as in operation 400) for managing a transfer of data from a host computer (such as 140) to a recordable disc (such as 108) in a disc drive (such as 100). The method (such as in operation 400) includes maintaining (such as in operation 402) a buffer table (such as 206), which is divided into one or more equal-sized buffer sectors (such as 208), and arranging (such as in operation 404) each of the buffer sectors (such as 208) into at least one buffer segment (such as 210). The buffer table (such as 206), which is a singly linked list, is divided into one or more buffer segments (such as 210), which are circularly linked lists. The method (such as in operation 400) further includes selecting (such as in operation 406) a particular buffer segment (such as 210) to temporarily store the file to be transferred from the host computer (such as 140) to the buffer (such as 145), as well as transferring (such as in operation 408) the file from the host computer (such as 140) to the buffer (such as 145) by walking through the particular buffer segment (such as 210) in response to a next sector instruction (such as 212) contained in an index sector (such as 208). The index sector is the buffer sector (such as 208) from which the data is being transferred.
The method (such as in operation 400) further includes storing (such as in operation 410) the file in the particular buffer segment (such as 210) for a temporary period of time and, eventually, removing (such as in operation 412) the file from the buffer (such as 145) and relocating the file to the recordable disc (such as 108) by walking through the particular buffer segment (such as 210) in response to the next sector instruction (such as 212) contained in the index sector (such as 208). In a preferred embodiment, the method (such as in operation 400) in maintaining the buffer table (such as in operation 402) includes the step of creating (such as in operation 300) a free list of sectors that are readily available to accept a new data block. The method (such as in operation 400) in the selecting step (such as in operation 406) preferably includes arbitrarily selecting, once the command to write the file to the buffer (such as 145) is sent by the host computer (such as 140), a first index sector (such as 208) as a beginning to the particular buffer segment (such as 210) that will be used in storing the data (such as in operation 410). The method (such as in operation 400) in the arranging step (such as in operation 404) preferably includes generating a buffer segment (such as 210) from the free list of sectors by beginning at the arbitrarily selected first index sector (such as 208) and traversing one fewer buffer sector than the selected number of data blocks in the file. Ultimately, in the preferred embodiment, the method (such as in operation 400) accepts (such as in operation 302) an incoming data file to be cached and places the incoming data file into the buffer segment (such as 210) at the index sector (such as 208).
As mentioned, the method (such as in operation 400) may arrange (such as in operation 404) the buffer table (such as 206) by organizing the sectors (such as 208) into one or more buffer segments (such as 210). However, the buffer table (such as 206) may only be divided into a maximum number of buffer segments (such as 210) equal to the number of sectors (such as 208) in the buffer table (such as 206). The method (such as in operation 400) may also mend (such as in operations 354-358) each sector (such as 208) that is to be committed to the disc (such as 108) into the free list of sectors so that each sector (such as 208) is available to the system as it is committed to the disc (such as 108). In a preferred embodiment, the method (such as in operation 400) compares
(such as in operation 322) the data block to be stored in the buffer (such as 145) to each existing data block in all of the buffer segments (such as 210). If the comparison (such as in operation 322) reveals a redundancy between the existing data block and the data block to be stored in the buffer (such as 145), the buffer segment (such as 210), which stored the existing data block, is mended (such as in operations 354-358) to the free list. The method (such as in operations 354-358) also preferably merges (such as in operation 384) together one or more buffer segments (such as 210) containing data files that are to be written to adjacent servo segments (such as 164), thus allowing the write operation to be executed only one time for those segments (such as 164).
In the preferred embodiment, the method (such as in operation 400) includes transferring (such as in operation 408) data to the buffer (such as 145) by walking through the particular buffer segment (such as 210) with a host address pointer (such as 220) coupled to a host next pointer (such as 222). The host next pointer (such as 222) communicates the next sector instruction (such as 212) to the host address pointer (such as 220). In essence, the next sector instruction (such as 212) controls positioning of the host address pointer (such as 220) within the particular buffer segment (such as 210). Further, the method (such as in operation 400) preferably includes removing and relocating (such as in operation 412) data from the buffer by walking through the particular buffer segment (such as 210) with a disc address pointer (such as 224) coupled to a disc next pointer (such as 226). The disc next pointer (such as 226) communicates the next sector instruction (such as 212) to the disc address pointer (such as 224). In essence, the next sector instruction (such as 212) controls positioning of the disc address pointer (such as 224) within the particular buffer segment (such as 210). In case either the data was transferred (such as in operation 408) or removed and relocated (such as in operation 412) prematurely, either operation may be re-initiated (such as in operations 414, 416) according to the method (such as in operation 400).
It will be clear that the present invention is well adapted to attain the ends and advantages mentioned, as well as those inherent therein. While a presently preferred embodiment has been described for purposes of this disclosure, various changes and modifications may be made which are well within the scope of the present invention. For example, VBM may be useful to other types of data storage devices, such as tape drives, optical drives, and networks, that implement data transfer using a buffer. Similarly, the control module may be designed using integrated or logic circuits, as opposed to software, or in combination with software. Further, the application module may combine the address pointers and the next pointers into one single component. Additionally, the maintenance of the buffer table may be construed in various other ways that do not take away from the invention. For example, the techniques of VBM may be employed in a file allocation table. Also, the VBM table itself could be built as a doubly linked list (with both next and previous pointers), and hardware could be constructed to assist the software in the maintenance of such a table. The entire system could also be set up to work with collections of data larger or smaller than the standard 512 bytes. Numerous other changes may be made which will readily suggest themselves to those skilled in the art and which are encompassed in the spirit of the invention disclosed and as defined in the appended claims.

Claims

What is claimed is:
1. A buffer management system for managing a transfer of data from a host computer to a recordable disc in a disc drive, the disc drive operating on the host computer, the recordable disc radially divided into a plurality of equal servo segments and circumferentially divided into a plurality of rotational tracks, wherein the disc drive is of a type that data transfers between the host computer and the disc are effectuated by transferring a file defined as a selected number of data blocks to a buffer for temporary storage, the system comprising: a buffer table divided into a plurality of equal buffer sectors, wherein each one of the plurality of buffer sectors is a sector in a buffer segment defined as one of a plurality of circular linked lists upon which the buffer table is divided; an application module operable to walk through a particular buffer segment, wherein data is transferred as the buffer segment is walked through; and a control module to create and maintain the buffer table, the control module being coupled with the application module such that the control module administers the walk through by the application module in response to a next sector instruction in an index sector upon which the application module is currently positioned.
2. The system of claim 1, wherein the application module further comprises: a host address pointer for committing the data to the buffer, the host address pointer comprising a host next pointer for communicating the next sector instruction of the index sector to the host address pointer, such that the next sector instruction controls positioning of the host address pointer within the particular buffer segment; a disc address pointer for committing the data from the buffer to the disc, the disc address pointer comprising a disc next pointer for communicating the next sector instruction of the index sector to the disc address pointer, such that the next sector instruction controls the positioning of the disc address pointer within the particular buffer segment.
3. The system of claim 2, wherein the buffer table comprises a free list of
sectors readily available to the system for temporary storage, wherein the control module mends the index sector that is to be written to the disc into the free list, such
that the index sector becomes available to the system as it is committed to the disc.
4. The system of claim 3, wherein the control module selects the particular buffer segment from the free list in arbitrary fashion, such that the host address pointer is arbitrarily positioned by the control module over a first index sector of the particular buffer segment.
5. The system of claim 3, wherein the sector is mended into the free list of sectors because the data block of the index sector is redundant when compared to the data block of a new command.
6. The system of claim 2, wherein the control module comprises a look-up routine, wherein the look-up routine re-initiates the application module in case the data was committed prematurely.
7. The system of claim 1, wherein the buffer table comprises at least one buffer segment, wherein the buffer table is divided into a maximum number of buffer segments equal to the number of buffer sectors in the buffer table.
8. The system of claim 1, wherein the control module further comprises a merging routine, wherein the merging routine combines a plurality of buffer segments in the buffer table that are to be written onto a plurality of adjacent servo segments, such that the merging routine transforms the plurality of buffer segments into one aggregate buffer segment that can be committed to the disc in one write process.
9. A method for managing a transfer of data from a host computer to a recordable disc in a disc drive, the disc drive operating on the host computer, the recordable disc radially divided into a plurality of equal servo segments and circumferentially divided into a plurality of rotational tracks, wherein the disc drive is of a type that data transfers between the host computer and the disc are effectuated by transferring a file defined as a selected number of data blocks to a buffer for temporary storage of the file, the method comprising: (a) maintaining a buffer table divided into a plurality of equal-sized buffer sectors;
(b) arranging each one of the plurality of buffer sectors into at least one buffer segment defined as a circular linked list upon which the buffer table is divided;
(c) selecting a particular buffer segment to temporarily store the file to be transferred from the host computer to the buffer;
(d) transferring the file from the host computer to the buffer by walking through the particular buffer segment in response to a next sector instruction contained in an index sector defined as the buffer sector from which the data is being transferred;
(e) storing the file in the particular buffer segment for a temporary period of time; and
(f) removing the file from the buffer and relocating the file to the recordable disc by walking through the particular buffer segment in response to the next sector instruction contained in the index sector.
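As an illustration only (the claims above do not prescribe an implementation, and every name in this sketch is invented), steps (a) through (f) of claim 9 might be modeled as a table of sectors linked by next-sector indices, where segments are carved from a circular free list and walked sector by sector:

```python
# Illustrative sketch of claim 9 (hypothetical names throughout): a buffer
# table whose equal-sized sectors are chained by next-sector indices,
# forming circular linked lists ("buffer segments").
class BufferTable:
    def __init__(self, num_sectors):
        # One data block per sector, plus a next-sector instruction.
        self.data = [None] * num_sectors
        # Initially every sector is on the free list, itself one circular
        # chain: 0 -> 1 -> ... -> n-1 -> 0.
        self.next = [(i + 1) % num_sectors for i in range(num_sectors)]
        self.free_head = 0
        self.free_count = num_sectors

    def allocate_segment(self, num_blocks):
        """Step (b)/(c): carve a circular segment of num_blocks sectors out
        of the free list, beginning at an arbitrary first index sector and
        traversing one fewer sector than the number of data blocks."""
        assert num_blocks <= self.free_count
        first = self.free_head
        last = first
        for _ in range(num_blocks - 1):
            last = self.next[last]
        self.free_head = self.next[last]   # free list resumes after the segment
        self.free_count -= num_blocks
        self.next[last] = first            # close the segment into a circle
        return first

    def write_file(self, first, blocks):
        """Step (d)/(e): walk the segment, storing one block per sector,
        following each index sector's next-sector instruction."""
        sector = first
        for block in blocks:
            self.data[sector] = block
            sector = self.next[sector]
        return sector  # the list is circular, so this is `first` again

    def read_file(self, first, num_blocks):
        """Step (f): walk the same segment to drain the file back out."""
        sector, out = first, []
        for _ in range(num_blocks):
            out.append(self.data[sector])
            sector = self.next[sector]
        return out
```

A three-block file then occupies exactly three sectors, and both the host-side fill and the disc-side drain traverse the same circular chain.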
10. The method according to claim 9, wherein the maintaining step (a) comprises creating a free list of sectors that is readily available to accept a new data block.
11. The method according to claim 10, wherein the selecting step (c) comprises arbitrarily selecting a first index sector as a beginning to the particular buffer segment that will be used in the storing step (e).
12. The method according to claim 11, wherein the arranging step (b) comprises generating a buffer segment from the free list of sectors by beginning at the arbitrarily selected first index sector and traversing one fewer buffer sector than the selected number of data blocks in the file.
13. The method according to claim 12, wherein the transferring step (d) further comprises:
(i) accepting an incoming data file to be cached; and (ii) placing the incoming data file into the buffer segment at the index sector.
14. The method according to claim 9, wherein the arranging step (b) comprises organizing the sectors into a plurality of buffer segments up to a maximum number of buffer segments equal to the number of sectors in the buffer table.
15. The method according to claim 10 further comprising:
(g) mending the index sector that is to be written to the disc into the free list of sectors such that the index sector becomes available to the system as it is committed to the disc.
16. The method according to claim 15 further comprising:
(h) comparing a data block to be stored in the buffer to each existing data block in each of the buffer segments, such that if the comparing step (h) reveals a redundancy between the existing data block and the data block to be stored in the buffer, the buffer segment which stored the existing data block is mended into the free list.
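The redundancy check of step (h) can be illustrated as follows; this is only a sketch under invented names (segment ids, a dict of cached block addresses), since the claim requires only that a segment holding a superseded copy be mended into the free list:

```python
# Hypothetical sketch of claim 16's comparing step (h): before caching a
# new data block, compare its block address against the blocks already
# held in each buffer segment; a segment caching a now-stale copy is
# removed and its id mended into the free list for reuse.
def release_redundant(segments, free_list, new_block_addr):
    """segments: dict mapping segment id -> set of cached block addresses.
    Any segment whose cached block is superseded by a write to
    new_block_addr is deleted and its id appended to free_list."""
    for seg_id in list(segments):          # list() so we can delete safely
        if new_block_addr in segments[seg_id]:
            del segments[seg_id]           # cached copy is redundant
            free_list.append(seg_id)       # its sectors become reusable
    return free_list
```

A write to an address not cached anywhere leaves both the segments and the free list untouched.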
17. The method of claim 9 further comprising:
(g) merging together a particular plurality of buffer segments containing data files that are to be written to a plurality of adjacent servo segments, such that a write operation of the particular plurality of buffer segments can be executed in one operation.
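The merging step (g) amounts to grouping buffer segments by adjacency of their target servo segments; the sketch below uses invented names and a simplified pairing of servo-segment number to buffer-segment id, since the claim does not fix a data layout:

```python
# Hypothetical sketch of claim 17's merging step (g): buffer segments
# whose data targets adjacent servo segments on the disc are coalesced
# into groups, each of which can be committed in one write operation.
def merge_adjacent(pending):
    """pending: list of (servo_segment, buffer_segment_id) pairs.
    Returns lists of buffer segment ids whose servo segments form an
    unbroken run of adjacent positions on the disc."""
    pending = sorted(pending)              # order by servo segment number
    groups, current = [], [pending[0]]
    for entry in pending[1:]:
        if entry[0] == current[-1][0] + 1:  # adjacent on the disc
            current.append(entry)
        else:                               # gap: start a new write group
            groups.append(current)
            current = [entry]
    groups.append(current)
    return [[seg for _, seg in group] for group in groups]
```

Segments bound for servo segments 3, 4, and 5 merge into one aggregate write, while a segment bound for servo segment 9 stays on its own.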
18. The method according to claim 9, wherein the transferring step (d) comprises walking through the particular buffer segment with an address pointer coupled to a next pointer, wherein the next pointer communicates the next sector instruction to the address pointer, such that the next sector instruction controls positioning of the address pointer within the particular buffer segment.
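The pointer mechanics of claim 18 can be sketched minimally (names invented): the address pointer is repositioned purely by the next-sector instruction read from whichever sector it currently indexes, so traversal order follows the linked list rather than physical adjacency:

```python
# Hypothetical sketch of claim 18: an address pointer coupled to a next
# pointer, where each sector's next-sector instruction controls the
# address pointer's positioning within the segment.
def walk(next_table, first, count):
    """Yield `count` sector indices starting at `first`, following each
    index sector's next-sector instruction in next_table."""
    pointer = first
    for _ in range(count):
        yield pointer
        pointer = next_table[pointer]  # next pointer repositions the address pointer

# A four-sector segment scattered across the table: 2 -> 5 -> 0 -> 7 -> 2.
next_table = {2: 5, 5: 0, 0: 7, 7: 2}
order = list(walk(next_table, 2, 4))
```

Walking one step past the segment length wraps back to the first index sector, reflecting the circular list of claim 9.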
19. The method according to claim 9 further comprising:
(g) re-initiating the transferring step (d) and the removing and relocating step (f) in case either one of the steps was performed prematurely.
20. A buffer management system for a data storage device comprising: an application module for transferring data into and out of a buffer in a data storage device; and control means for operating the application module such that the application module walks through a buffer table under administration by the control means.
AMENDED CLAIMS
[received by the International Bureau on 24 May 2001 (24.05.01); original claims 11-20 cancelled; original claim 10 amended; remaining claims unchanged (3 pages)]
1. A buffer management system for managing a transfer of data from a host computer to a recordable disc in a disc drive, the disc drive operating on the host computer, the recordable disc radially divided into a plurality of equal servo segments and circumferentially divided into a plurality of rotational tracks, wherein the disc drive is of a type that data transfers between the host computer and the disc are effectuated by transferring a file defined as a selected number of data blocks to a buffer for temporary storage, the system comprising: a buffer table divided into a plurality of equal buffer sectors, wherein each one of the plurality of buffer sectors is a sector in a buffer segment defined as one of a plurality of circular linked lists upon which the buffer table is divided; and an application module operable to walk through a particular buffer segment, wherein data is transferred as the buffer segment is walked through; a control module to create and maintain the buffer table, the control module being coupled with the application module such that the control module administers the walk through by the application module in response to a next sector instruction in an index sector upon which the application module is currently positioned.
2. The system of claim 1, wherein the application module further comprises: a host address pointer for committing the data to the buffer, the host address pointer comprising a host next pointer for communicating the next sector instruction of the index sector to the host address pointer, such that the next sector instruction controls positioning of the host address pointer within the particular buffer segment; a disc address pointer for committing the data from the buffer to the disc, the disc address pointer comprising a disc next pointer for communicating the next sector instruction of the index sector to the disc address pointer, such that the next sector instruction controls the positioning of the disc address pointer within the particular buffer segment.
3. The system of claim 2, wherein the buffer table comprises a free list of sectors readily available to the system for temporary storage, wherein the control module mends the index sector that is to be written to the disc into the free list, such that the index sector becomes available to the system as it is committed to the disc.
4. The system of claim 3, wherein the control module selects the particular buffer segment from the free list in arbitrary fashion, such that the host address pointer is arbitrarily positioned by the control module over a first index sector of the particular buffer segment.
5. The system of claim 3, wherein the sector is mended into the free list of sectors because the data block of the index sector is redundant when compared to the data block of a new command.
6. The system of claim 2, wherein the control module comprises a look-up routine, wherein the look-up routine re-initiates the application module in case the data was committed prematurely.
7. The system of claim 1, wherein the buffer table comprises at least one buffer segment, wherein the buffer table is divided into a maximum number of buffer segments equal to the number of buffer sectors in the buffer table.
8. The system of claim 1, wherein the control module further comprises a merging routine, wherein the merging routine combines a plurality of buffer segments in the buffer table that are to be written onto a plurality of adjacent servo segments, such that the merging routine transforms the plurality of buffer segments into one aggregate buffer segment that can be committed to the disc in one write process.
9. A method for managing a transfer of data from a host computer to a recordable disc in a disc drive, the disc drive operating on the host computer, the recordable disc radially divided into a plurality of equal servo segments and circumferentially divided into a plurality of rotational tracks, wherein the disc drive is of a type that data transfers between the host computer and the disc are effectuated by transferring a file defined as a selected number of data blocks to a buffer for temporary storage of the file, the method comprising the steps of:
(a) maintaining a buffer table divided into a plurality of equal-sized buffer sectors; (b) arranging each one of the plurality of buffer sectors into at least one buffer segment defined as a circular linked list upon which the buffer table is divided;
(c) selecting a particular buffer segment to temporarily store the file to be transferred from the host computer to the buffer;
PCT/US2000/029503 1999-12-22 2000-10-26 Buffer management system for managing the transfer of data into and out of a buffer in a disc drive WO2001046792A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB0214398A GB2373612B (en) 1999-12-22 2000-10-26 Buffer management system for managing the transfer of data into and out of a buffer in a disc drive
JP2001547242A JP2003518313A (en) 1999-12-22 2000-10-26 Buffer management system for managing the transfer of data to and from the disk drive buffer
KR1020027008037A KR20020064357A (en) 1999-12-22 2000-10-26 Buffer management system for managing the transfer of data into and out of a buffer in a disc drive
DE10085321T DE10085321T1 (en) 1999-12-22 2000-10-26 Buffer management system for managing data transfer to and from a buffer in a disk drive

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17177999P 1999-12-22 1999-12-22
US60/171,779 1999-12-22

Publications (1)

Publication Number Publication Date
WO2001046792A1 true WO2001046792A1 (en) 2001-06-28

Family

ID=22625101

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/029503 WO2001046792A1 (en) 1999-12-22 2000-10-26 Buffer management system for managing the transfer of data into and out of a buffer in a disc drive

Country Status (7)

Country Link
US (2) US6330640B1 (en)
JP (1) JP2003518313A (en)
KR (1) KR20020064357A (en)
CN (1) CN1434936A (en)
DE (1) DE10085321T1 (en)
GB (1) GB2373612B (en)
WO (1) WO2001046792A1 (en)

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6504846B1 (en) * 1999-05-21 2003-01-07 Advanced Micro Devices, Inc. Method and apparatus for reclaiming buffers using a single buffer bit
US6668304B1 (en) * 2000-01-18 2003-12-23 International Business Machines Corporation Transaction support on logical disks
US6636941B1 (en) * 2000-01-18 2003-10-21 International Business Machines Corporation Enhanced stable disk storage
US6941419B2 (en) * 2000-05-24 2005-09-06 Seagate Technology Llc Method and apparatus for disc drive buffer segment management
US6795894B1 (en) * 2000-08-08 2004-09-21 Hewlett-Packard Development Company, L.P. Fast disk cache writing system
TW487177U (en) * 2001-10-31 2002-05-11 Coretronic Corp Anti-dust lid of connecting slot
US6792504B2 (en) 2002-01-25 2004-09-14 Seagate Technology Llc Read on arrival scheme for a disc drive
US20040003172A1 (en) * 2002-07-01 2004-01-01 Hui Su Fast disc write mechanism in hard disc drives
US6961814B1 (en) * 2002-09-30 2005-11-01 Western Digital Technologies, Inc. Disk drive maintaining a cache link attribute for each of a plurality of allocation states
US7159073B2 (en) * 2003-03-27 2007-01-02 Stmicroelectronics, Inc. Data storage and caching architecture
CN101566929B (en) * 2003-08-14 2013-10-16 克姆佩棱特科技公司 Virtual disk drive system and method
US8069192B2 (en) * 2004-03-22 2011-11-29 Microsoft Corporation Computing device with relatively limited storage space and operating / file system thereof
US7647358B2 (en) * 2004-03-22 2010-01-12 Microsoft Corporation Computing device with relatively limited storage space and operating/file system thereof
KR100652386B1 (en) * 2004-11-16 2006-12-06 삼성전자주식회사 Method and apparatus for increasing operation efficiency by sharing application between a host and a hard disc drive
JP2006227923A (en) * 2005-02-17 2006-08-31 Fujitsu Ltd Disk management apparatus and program
JP2006244123A (en) 2005-03-03 2006-09-14 Fujitsu Ltd Data storage system and data storage control device
CN1312550C (en) * 2005-03-28 2007-04-25 北京紫光华宇软件股份有限公司 Method for dumping data to mobile storage medium and extracting data therefrom
TWI287790B (en) * 2005-06-23 2007-10-01 Sunplus Technology Co Ltd Buffer management device, record and reproduction device and management method for buffer memory
US7752059B2 (en) 2005-07-05 2010-07-06 Cardiac Pacemakers, Inc. Optimization of timing for data collection and analysis in advanced patient management system
US11153425B2 (en) * 2005-09-01 2021-10-19 Xtone, Inc. System and method for providing interactive services
US9456068B2 (en) * 2005-09-01 2016-09-27 Xtone, Inc. System and method for connecting a user to business services
US20100166161A1 (en) * 2005-09-01 2010-07-01 Vishal Dhawan System and methods for providing voice messaging services
US9799039B2 (en) * 2005-09-01 2017-10-24 Xtone, Inc. System and method for providing television programming recommendations and for automated tuning and recordation of television programs
US11102342B2 (en) 2005-09-01 2021-08-24 Xtone, Inc. System and method for displaying the history of a user's interaction with a voice application
AU2006287156A1 (en) 2005-09-01 2007-03-08 Vishal Dhawan Voice application network platform
US20100158217A1 (en) * 2005-09-01 2010-06-24 Vishal Dhawan System and method for placing telephone calls using a distributed voice application execution system architecture
US9253301B2 (en) * 2005-09-01 2016-02-02 Xtone Networks, Inc. System and method for announcing and routing incoming telephone calls using a distributed voice application execution system architecture
US8964960B2 (en) * 2005-09-01 2015-02-24 Xtone Networks, Inc. System and method for interacting with a user via a variable volume and variable tone audio prompt
US9426269B2 (en) 2005-09-01 2016-08-23 Xtone Networks, Inc. System and method for performing certain actions based upon a dialed telephone number
US9313307B2 (en) * 2005-09-01 2016-04-12 Xtone Networks, Inc. System and method for verifying the identity of a user by voiceprint analysis
JP4527640B2 (en) * 2005-09-15 2010-08-18 株式会社ソニー・コンピュータエンタテインメント Data reading device
JP3870215B1 (en) * 2005-09-30 2007-01-17 インターナショナル・ビジネス・マシーンズ・コーポレーション Data writing / reading control method for tape recorder
JP4892225B2 (en) * 2005-10-28 2012-03-07 株式会社日立ハイテクノロジーズ Vacuum processing method, vacuum transfer apparatus, and semiconductor processing apparatus
US8200869B2 (en) * 2006-02-07 2012-06-12 Seagate Technology Llc Storage system with alterable background behaviors
US7549021B2 (en) 2006-02-22 2009-06-16 Seagate Technology Llc Enhanced data integrity using parallel volatile and non-volatile transfer buffers
US8205059B2 (en) * 2007-02-16 2012-06-19 Mediatek Inc. Buffer management method and optical disc drive
CN101329661B (en) * 2008-07-31 2010-04-21 北京飞天诚信科技有限公司 System and method for implementing PC / SC interface of HID / SCSI apparatus
US8499120B2 (en) * 2008-10-17 2013-07-30 Seagate Technology Llc User selectable caching management
US9110594B2 (en) 2009-11-04 2015-08-18 Seagate Technology Llc File management system for devices containing solid-state media
US8725931B1 (en) 2010-03-26 2014-05-13 Western Digital Technologies, Inc. System and method for managing the execution of memory commands in a solid-state memory
US8782327B1 (en) 2010-05-11 2014-07-15 Western Digital Technologies, Inc. System and method for managing execution of internal commands and host commands in a solid-state memory
US9026716B2 (en) 2010-05-12 2015-05-05 Western Digital Technologies, Inc. System and method for managing garbage collection in solid-state memory
US9164886B1 (en) 2010-09-21 2015-10-20 Western Digital Technologies, Inc. System and method for multistage processing in a memory storage subsystem
US9021192B1 (en) 2010-09-21 2015-04-28 Western Digital Technologies, Inc. System and method for enhancing processing of memory access requests
US8780475B2 (en) * 2012-08-30 2014-07-15 International Business Machines Corporation Tape drive buffer utilization
US9612956B2 (en) 2013-03-15 2017-04-04 Seagate Technology Llc Multi-tiered caching for data storage management in a device
US10564890B2 (en) 2017-07-07 2020-02-18 Seagate Technology Llc Runt handling data storage system
CN115269455B (en) * 2022-09-30 2022-12-23 湖南兴天电子科技股份有限公司 Disk data read-write control method and device based on FPGA and storage terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0473314A2 (en) * 1990-08-31 1992-03-04 International Business Machines Corporation Peripheral data processing system control apparatus
EP0528273A2 (en) * 1991-08-16 1993-02-24 Fujitsu Limited Buffer memory and method of managing the same
US5276662A (en) * 1992-10-01 1994-01-04 Seagate Technology, Inc. Disc drive with improved data transfer management apparatus

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4428064A (en) * 1981-03-06 1984-01-24 International Business Machines Corporation Controlling buffered peripheral subsystems
US4412285A (en) * 1981-04-01 1983-10-25 Teradata Corporation Multiprocessor intercommunication system and method
JPS621067A (en) * 1985-02-25 1987-01-07 Hitachi Ltd Vector processor
US5034914A (en) * 1986-05-15 1991-07-23 Aquidneck Systems International, Inc. Optical disk data storage method and apparatus with buffered interface
USH513H (en) * 1986-06-30 1988-08-02 The United States Of America As Represented By The Secretary Of The Air Force ECM preprocessor or tracker using multi-processor modules
US5179530A (en) * 1989-11-03 1993-01-12 Zoran Corporation Architecture for integrated concurrent vector signal processor
US5765193A (en) * 1993-07-01 1998-06-09 Digital Equipment Corporation System for controlling a write operation involving data held in a write cache
US5594926A (en) * 1993-07-19 1997-01-14 Efar Microsystems, Inc. Hard disk accelerating system for computer
US5481296A (en) * 1993-08-06 1996-01-02 International Business Machines Corporation Apparatus and method for selectively viewing video information
US5636188A (en) * 1993-11-19 1997-06-03 Sony Corporation Data transmission circuit for disk storage apparatus
US5689653A (en) * 1995-02-06 1997-11-18 Hewlett-Packard Company Vector memory operations
US5721543A (en) * 1995-06-30 1998-02-24 Iterated Systems, Inc. System and method for modeling discrete data sequences
US6378051B1 (en) 1999-06-14 2002-04-23 Maxtor Corporation Interrupt signal prioritized shared buffer memory access system and method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003052760A1 (en) * 2001-12-17 2003-06-26 Thomson Licensing S.A. Reproduction apparatus having a buffer for reducing the mean access time to an information carrier
US7263595B2 (en) 2001-12-17 2007-08-28 Thomson Licensing Reproduction apparatus having a buffer for reducing the mean access time to an information carrier
CN100424779C (en) * 2001-12-17 2008-10-08 汤姆森特许公司 Reproduction apparatus having a buffer for reducing the mean access time to an information carrier
CN1316376C (en) * 2002-12-12 2007-05-16 建兴电子科技股份有限公司 Multistage management structure and method for buffer
EP1710797A1 (en) * 2005-04-04 2006-10-11 NEC Electronics Corporation Data recording/reproducing apparatus and method
KR100787014B1 (en) 2005-04-04 2007-12-18 엔이씨 일렉트로닉스 가부시키가이샤 Data recording/reproducing apparatus and method
US7813617B2 (en) 2005-04-04 2010-10-12 Nec Electronics Corporation Data recording/reproducing apparatus and method
CN102395957A (en) * 2009-04-17 2012-03-28 韩商英得联股份有限公司 Cache and disk management method, and a controller using the method

Also Published As

Publication number Publication date
GB2373612B (en) 2004-05-05
US6757781B2 (en) 2004-06-29
GB2373612A (en) 2002-09-25
US6330640B1 (en) 2001-12-11
JP2003518313A (en) 2003-06-03
KR20020064357A (en) 2002-08-07
GB0214398D0 (en) 2002-07-31
DE10085321T1 (en) 2002-12-05
US20030149815A1 (en) 2003-08-07
CN1434936A (en) 2003-08-06

Similar Documents

Publication Publication Date Title
US6330640B1 (en) Buffer management system for managing the transfer of data into and out of a buffer in a disc drive
US6735678B2 (en) Method and apparatus for disc drive defragmentation
US6925526B2 (en) Method and apparatus for servicing mixed block size data access operations in a disk drive data storage device
US6978345B2 (en) Self-mirroring high performance disk drive
US7783828B1 (en) File system write to storage without specifying location reference
US6862151B2 (en) Method and apparatus for read error recovery
KR20040010517A (en) Disk Controller Configured to Perform Out of Order Execution of Write OperationsS
JPH05216593A (en) Indirect access storage device
US20030145165A1 (en) Interrupting disc write operations to service read commands
JP3823040B2 (en) Data storage device, data processing device, method for optimizing execution order of write request, data processing method, and hard disk drive
US6629200B1 (en) System and method for controlling cache memories, computer system, hard disk drive unit, and hard disk control unit
US6209058B1 (en) Cache management for data transfer control from target disk areas
KR19980029917A (en) How to improve read cache performance on magnetic disk drives
US5537578A (en) Transparent driving partition for processing logical volumes to be recorded onto optical media
US6092145A (en) Disk drive system using sector buffer for storing non-duplicate data in said sector buffer
US6693754B2 (en) Method and apparatus for a disc drive adaptive file system
KR100368964B1 (en) Data writing method, data reading method, disk drive apparatus and disk drive apparatus controller
KR20070060301A (en) Hard disk driver having non-volatile write cache
US6792504B2 (en) Read on arrival scheme for a disc drive
US6725330B1 (en) Adaptable cache for disc drive
US7986480B2 (en) Micro-journaling of data on a storage device
US7725666B2 (en) Micro-journaling of data on a storage device
US6842747B2 (en) Method and apparatus for a disc drive interface
JP3089683B2 (en) Information processing device
JP3080416B2 (en) Method of executing internal operation in controller and controller

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CN DE GB JP KR SG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref document number: 2001 547242

Country of ref document: JP

Kind code of ref document: A

Ref document number: 200214398

Country of ref document: GB

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1020027008037

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 1020027008037

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 008191220

Country of ref document: CN

RET De translation (de og part 6b)

Ref document number: 10085321

Country of ref document: DE

Date of ref document: 20021205

WWE Wipo information: entry into national phase

Ref document number: 10085321

Country of ref document: DE

WWW Wipo information: withdrawn in national office

Ref document number: 1020027008037

Country of ref document: KR