US20080162863A1 - Bucket based memory allocation - Google Patents
- Publication number
- US20080162863A1 (U.S. application Ser. No. 12/002,081)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
Abstract
Managing memory includes subdividing the memory into a first set of blocks corresponding to a first size and a second set of blocks corresponding to a second size that is greater than said first size, in response to a request for an amount of memory that is less than or equal to the first size, providing one of the first set of blocks, and, in response to a request for an amount of memory that is greater than the first size and less than or equal to the second size, providing one of the second set of blocks. Subdividing the memory may also include subdividing the memory into a plurality of sets of blocks, where each particular set contains blocks corresponding to one size that is different from that of blocks not in the particular set. Each set of blocks may correspond to a size that is a multiple of a predetermined value. Managing memory may also include providing a table containing an entry for each set of blocks. The entry for each set of blocks may be a pointer to one of: an unused block and null. Unused blocks of a set may be linked together to form a linked list where the pointer for each entry in the table points to the first block in the list.
Description
- 1. Technical Field
- This application relates to the field of memory management, and more particularly to the field of managing dynamically allocated computer memory.
- 2. Description of Related Art
- Host processor systems may store and retrieve data using a storage device containing a plurality of host interface units (host adapters), disk drives, and disk interface units (disk adapters). Such storage devices are provided, for example, by EMC Corporation of Hopkinton, Mass. and disclosed in U.S. Pat. No. 5,206,939 to Yanai et al., U.S. Pat. No. 5,778,394 to Galtzur et al., U.S. Pat. No. 5,845,147 to Vishlitzky et al., and U.S. Pat. No. 5,857,208 to Ofek. The host systems access the storage device through a plurality of channels provided therewith. Host systems provide data and access control information through the channels of the storage device and the storage device provides data to the host systems also through the channels. The host systems do not address the disk drives of the storage device directly, but rather, access what appears to the host systems as a plurality of logical volumes. The logical volumes may or may not correspond to the actual disk drives.
- The host adapters, disk adapters, and other internal components of the storage device (such as RDF adapters, RA's) may each have their own local processor and operating system. Each request to an internal component of the storage device may be scheduled as a task that is serviced by the operating system of the internal component. In some cases, a task may temporarily need a block of memory for its processing. In those cases, it may be useful for the operating system to be able to dynamically obtain and release blocks of memory for temporary usage by the tasks. Conventionally, the available, unused memory is maintained in a heap, and an operating system services memory requests by returning a portion of the heap memory to the requesting task. Once the requesting task has used the memory, the task calls a routine to return the memory to the heap.
- In some cases, repeatedly requesting and returning memory from and to the heap results in “heap fragmentation”, where the heap memory is separated into a plurality of small portions such that it becomes difficult or impossible to service a request for a relatively large contiguous block of memory. In such cases, it may be necessary to perform a heap compaction to concatenate the plurality of small portions of memory in the heap. However, the need to perform periodic heap compactions adds overhead to the system. In addition, obtaining memory from a heap, especially a fragmented heap, adds overhead, as does the process of returning memory to the heap.
- It is desirable to be able to perform dynamic memory allocation and deallocation in a way that reduces the possibility of a fragmented heap, decreases the instances of the heap becoming fragmented, and generally reduces the overhead associated with dynamically requesting and releasing blocks of memory.
- According to the present invention, managing memory includes subdividing the memory into a first set of blocks corresponding to a first size and a second set of blocks corresponding to a second size that is greater than said first size, in response to a request for an amount of memory that is less than or equal to the first size, providing one of said first set of blocks, and, in response to a request for an amount of memory that is greater than the first size and less than or equal to the second size, providing one of the second set of blocks. Subdividing the memory may also include subdividing the memory into a plurality of sets of blocks, where each particular set contains blocks corresponding to one size that is different from that of blocks not in the particular set. Each set of blocks may correspond to a size that is a multiple of a predetermined value. Managing memory may also include providing a table containing an entry for each set of blocks. The table may be stored in a first memory and the blocks may be stored in a second memory. The first and second memories may have different access rates. The entry for each set of blocks may be a pointer to one of: an unused block and null. Unused blocks of a set may be linked together to form a linked list where the pointer for each entry in the table points to the first block in the list. The predetermined value may be 2^n, where n is an integer. The predetermined value may be eight. The table may include one entry for each multiple of corresponding block sizes. The table may be indexed by subtracting one from the amount of memory requested and then shifting the result right n places. Managing memory may also include, in response to a request for an amount of memory corresponding to a particular set of blocks for which there are no unused blocks, requesting a block of memory from a heap.
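The indexing rule just described (with the predetermined value eight, so n = 3) can be sketched in C. The function name is an illustrative assumption, not a name from the patent:

```c
/* Map a requested byte count to a zero-based table index, assuming
   bucket sizes that are multiples of eight (n = 3). Subtracting one
   and shifting right three places rounds the request up to the next
   multiple of eight: requests of 1..8 bytes map to index 0 (the
   eight-byte buckets), 9..16 bytes to index 1, and so on. */
static unsigned bucket_index(unsigned reqsize) {
    return (reqsize - 1) >> 3;  /* caller must ensure reqsize >= 1 */
}
```

With this rule a one-byte request and an eight-byte request both land on the entry for eight-byte blocks, matching the examples given later in the description.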
Managing memory may also include, in response to there being no block of memory from the heap corresponding to the request, returning all unused blocks of memory to the heap. Managing memory may also include, following returning all unused blocks to the heap, rerequesting a block of memory from the heap. The blocks may contain a section in which user data is stored. Each of the first and second sets of blocks may contain a pointer to a different block of memory in which user data is stored. The different block of memory may have a different access time than memory used for the first and second blocks. Managing memory may also include, in response to a request for an amount of memory corresponding to a particular set of blocks for which there are no unused blocks, requesting a first block of memory from a first heap for the particular one of the set of blocks and requesting a second block of memory from a second heap for a corresponding one of the different blocks of memory in which user data is stored. The first heap may be different from the second heap or the first heap may be the same as the second heap. Managing memory may also include, in response to there being no blocks of memory from both the first heap and the second heap corresponding to the request, returning all unused blocks of memory to the heaps. Managing memory may also include, following returning all unused blocks to the heaps, rerequesting a block of memory from the heaps.
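The fallback sequence described above (request from the heap, free all unused blocks on failure, then rerequest once) might look like the following sketch; the table layout, sizes, and function names are assumptions for illustration, not taken from the patent:

```c
#include <stdlib.h>

/* Illustrative free-list table: phead[i] heads the linked list of
   unused blocks of size (i + 1) * 8 bytes, up to an assumed 10 k
   maximum bucket size. */
#define NBUCKETS (10240 / 8)
struct block { size_t size; struct block *pnext; };
static struct block *phead[NBUCKETS];

/* Hand every unused block on every list back to the heap. */
static void free_all_unused(void) {
    for (int i = 0; i < NBUCKETS; i++)
        while (phead[i] != NULL) {
            struct block *tmp = phead[i];
            phead[i] = tmp->pnext;
            free(tmp);
        }
}

/* Request a block from the heap; on failure, return all unused
   blocks to the heap and rerequest once, as described above. */
static struct block *heap_request(size_t nbytes) {
    struct block *b = malloc(nbytes);
    if (b == NULL) {
        free_all_unused();
        b = malloc(nbytes);  /* the rerequest; NULL here is an error */
    }
    return b;
}
```

Freeing the unused blocks before the second request increases the chance that the heap can satisfy it, which is the rationale given in the detailed description.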
- According further to the present invention, a computer program product includes executable code that subdivides the memory into a first set of blocks corresponding to a first size and a second set of blocks corresponding to a second size that is greater than the first size, executable code that provides one of said first set of blocks in response to a request for an amount of memory that is less than or equal to the first size, and executable code that provides one of the second set of blocks in response to a request for an amount of memory that is greater than the first size and less than or equal to the second size. The executable code that subdivides the memory may subdivide the memory into a plurality of sets of blocks, where each particular set contains blocks corresponding to one size that is different from that of blocks not in the particular set. Each set of blocks may correspond to a size that is a multiple of a predetermined value. The predetermined value may be 2^n, where n is an integer. The predetermined value may be eight. The computer program product may also include executable code that requests a block of memory from a heap in response to a request for an amount of memory corresponding to a particular set of blocks for which there are no unused blocks. The computer program product may also include executable code that returns all unused blocks of memory to the heap in response to there being no block of memory from the heap corresponding to the request. The computer program product may also include executable code that rerequests a block of memory from the heap following returning all unused blocks to the heap. The blocks may contain a section in which user data is stored. Each of the first and second sets of blocks may contain a pointer to a different block of memory in which user data is stored.
The computer program product may further include executable code that requests a first block of memory from a first heap for the particular one of the set of blocks and requests a second block of memory from a second heap for a corresponding one of the different blocks of memory in which user data is stored in response to a request for an amount of memory corresponding to a particular set of blocks for which there are no unused blocks. The first heap may be different from the second heap or may be the same as the second heap. The computer program product may also include executable code that returns all unused blocks of memory to the heaps in response to there being no blocks of memory from both the first heap and the second heap corresponding to the request. The computer program product may also include executable code that rerequests a block of memory from the heaps following returning all unused blocks to the heaps.
- According further to the present invention, a data storage device includes a plurality of disk drives, a plurality of disk adapters coupled to the disk drives and to a common data bus, a plurality of host adapters that communicate with host computers to send and receive data to and from the disk drives via the common bus and at least one remote communications adapter that communicates with other storage devices, wherein at least one of the disk adapters, host adapters, and the at least one remote communications adapter includes an operating system that performs the steps of subdividing the memory into a first set of blocks corresponding to a first size and a second set of blocks corresponding to a second size that is greater than said first size, in response to a request for an amount of memory that is less than or equal to the first size, providing one of said first set of blocks, and, in response to a request for an amount of memory that is greater than the first size and less than or equal to the second size, providing one of said second set of blocks. The blocks may contain a section in which user data is stored. Each of the first and second sets of blocks may contain a pointer to a different block of memory in which user data is stored. In response to a request for an amount of memory corresponding to a particular set of blocks for which there are no unused blocks, the operating system may request a first block of memory from a first heap for the particular one of the set of blocks and request a second block of memory from a second heap for a corresponding one of the different blocks of memory in which user data is stored.
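For the variant in which each block holds a pointer to a separately allocated user-data area, the layout and a dual allocation might be sketched as follows. The names, and the use of plain malloc to stand in for both heaps, are illustrative assumptions:

```c
#include <stdlib.h>

/* Block header that points at user data stored elsewhere, so the
   header and the data area may come from different heaps or from
   memories with different access times. */
struct block_hdr {
    size_t size;             /* capacity of the data area */
    struct block_hdr *pnext; /* next unused header, or NULL */
    void *pdata;             /* pointer to the user-data area */
};

/* Allocate a header from one heap and its data area from another.
   Here both are modeled with malloc; a real system might use two
   distinct allocators or memories. */
static struct block_hdr *alloc_split_block(size_t size) {
    struct block_hdr *h = malloc(sizeof *h);   /* "first heap" */
    if (h == NULL)
        return NULL;
    h->pdata = malloc(size);                   /* "second heap" */
    if (h->pdata == NULL) {
        free(h);
        return NULL;
    }
    h->size = size;
    h->pnext = NULL;
    return h;
}
```

Keeping the header and the data in separate allocations is what allows the two to live in memories with different access rates, as the summary notes.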
- FIG. 1 is a diagram of a storage device used in connection with the system described herein.
- FIG. 2 shows a block of data used in connection with the system described herein.
- FIG. 3 shows a table and a plurality of linked lists used for managing free buckets of memory according to the system described herein.
- FIG. 4 is a flow chart showing steps performed in connection with a request for a block of memory according to the system described herein.
- FIG. 5 is a flow chart illustrating steps performed in connection with returning all free buckets of memory back to a heap according to the system described herein.
- FIG. 6 is a flow chart illustrating steps performed in connection with returning a block of memory to the appropriate bucket according to the system described herein.
- FIG. 7 is a flow chart illustrating steps performed in connection with initializing the buckets according to the system described herein.
- FIG. 8 shows a memory data structure according to another embodiment of the system described herein.
- FIG. 9 is a flowchart showing steps performed in connection with a request for a block of memory according to another embodiment of the system described herein.
- FIG. 10 is a flowchart illustrating steps performed in connection with initializing the buckets according to another embodiment of the system described herein.
- Referring to
FIG. 1, a storage device 30 includes a plurality of host adapters (HA) 32-34, a plurality of disk adapters (DA) 36-38 and a plurality of disk drives 42-44. Each of the disk drives 42-44 is coupled to a corresponding one of the DA's 36-38. The storage device 30 also includes a global memory 46 that may be accessed by the HA's 32-34 and the DA's 36-38. The storage device 30 also includes an RDF adapter (RA) 48 that may also access the global memory 46. The RA 48 may communicate with one or more additional remote storage devices (not shown) and/or one or more other remote devices (not shown) via a datalink 52. The HA's 32-34, the DA's 36-38, the global memory 46 and the RA 48 are coupled to a bus 54 that is provided to facilitate communication therebetween. - Each of the HA's 32-34 may be coupled to one or more host computers (not shown) that access the
storage device 30. The host computers (hosts) read data stored on the disk drives 42-44 and write data to the disk drives 42-44. The global memory 46 contains a cache memory that holds tracks of data from the disk drives 42-44 as well as storage for tables that may be accessed by the HA's 32-34, the DA's 36-38 and the RA 48. - Each of the HA's 32-34, the DA's 36-38, and the
RA 48 may include a local processor and local memory to facilitate performing the functions thereof. For example, the RA 48 may include a local processor and local memory that handles requests made by one or more of the HA's 32-34 and/or the DA's 36-38 to transfer data via the datalink 52. Similarly, any one or more of the HA's 32-34 and/or the DA's 36-38 may receive data transfer requests. Since many such requests may be provided at or nearly at the same time, it is desirable to be able to process the requests concurrently in an orderly fashion. Accordingly, each of the HA's 32-34, the DA's 36-38, and the RA 48 may use an operating system to facilitate the orderly processing of tasks corresponding to the concurrent requests. - One of the services provided by an operating system is to dynamically allocate and release blocks of memory for temporary use by one or more tasks. The memory allocation system disclosed herein may be used by the operating systems of one or more of the HA's 32-34, the DA's 36-38 and the
RA 48 in connection with providing dynamic memory for use by tasks thereof. However, it will be appreciated by one of ordinary skill in the art that the memory allocation system disclosed herein has broad applicability to other operating systems and other types of software that dynamically allocate and release blocks of memory. - Referring to
FIG. 2, a block of memory 70 is shown as including a SIZE field 72, a PNEXT field 74 corresponding to a pointer to a next block of memory, and an area for storing data 76. As discussed in more detail elsewhere herein, the block 70 may have a particular fixed size, such as eight bytes. The SIZE field 72 represents how much data may be stored in the data storage area 76. Thus, for example, if the SIZE field 72 indicates that the block 70 contained thirty-two bytes, then the data storage area 76 would be thirty-two bytes in length. The PNEXT field 74 is used to link together unused blocks of the same size and is discussed in more detail elsewhere herein. - The system disclosed herein contemplates a plurality of blocks of memory of various sizes. For example, there may be a plurality of blocks that have an eight byte data storage area, a plurality having a sixteen byte data storage area, a plurality having a twenty-four byte data storage area, etc. The maximum size of the data storage area could be any amount such as, for example, 10 k. The incremental difference between successive block sizes, and the maximum block size, could be any value and may be set according to the needs of the operating system and corresponding tasks. It is possible to have the incremental difference between successive block sizes be one byte.
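A block with the layout just described could be declared as below; the type and field names are illustrative assumptions, not taken from the patent:

```c
#include <stddef.h>

/* One bucket block: a SIZE field giving the capacity of the data
   area, a PNEXT field linking unused blocks of the same size, and
   the data area itself, which is what a requesting task receives. */
struct block {
    size_t size;          /* e.g. 8, 16, 24, ... bytes of data capacity */
    struct block *pnext;  /* next unused block of the same size, or NULL */
    unsigned char data[]; /* flexible array member: the storage area */
};
```

In this sketch, the OFFSET mentioned later in the description corresponds to `offsetof(struct block, data)`, the number of bytes occupied by the SIZE and PNEXT fields.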
- A task may request a block of a particular size and receive the
block 70 shown in FIG. 2. Note that, in the example herein of having the incremental difference between successive block sizes be eight bytes, a task that requests an amount of storage that is not a multiple of eight bytes would receive the next higher size. For example, a task that requests one byte of memory would receive an eight byte block. A task that requests twelve bytes of memory would receive a sixteen byte block, etc. Note further, however, that, in the example herein, a task that requests an amount of memory that is a multiple of eight bytes would receive a block having a storage area that is that size. For example, a task requesting thirty-two bytes of memory would receive a thirty-two byte block. - Referring to
FIG. 3, a table 82 for memory blocks having the same size is illustrated. The table 82 contains a plurality of list pointers, PHEAD 84, PHEAD 94, PHEAD 104, that represent heads of linked lists of unused blocks of memory having the same size. Thus, for example, if the block sizes are multiples of eight bytes going from eight bytes as a minimum bucket size to 10 k as a maximum bucket size, the table 82 could contain a head pointer PHEAD 84 for a linked list of unused blocks of memory 86-88 containing eight bytes of storage space. The table 82 may also contain a head pointer PHEAD 94 of a linked list of unused blocks 96-98 containing sixteen bytes of storage space and may contain a head pointer PHEAD 104 of a linked list of unused blocks 106-108 containing 10 k bytes of storage space. Of course, the table 82 may also contain head pointers for lists of unused blocks in sizes between sixteen and 10 k bytes. - The linked lists are constructed by having each of the unused blocks use the PNEXT field (discussed above in connection with
FIG. 2) to point to the next unused block of the same size in the list. Thus, for example, the PNEXT field of the block 86 points to the unused block 87 and the PNEXT field of the unused block 87 points to the block 88. The PNEXT field of the block 88 points to the null pointer (as do the PNEXT fields of the buckets 98, 108) to indicate the end of the linked list. Each of the head pointers 84, 94, 104 points to the first unused block in its respective linked list. - Referring to
FIG. 4, a flowchart 110 illustrates steps performed in connection with an application requesting an amount of memory. The flowchart 110 assumes that the buckets are provided in multiples of eight bytes beginning with the smallest size of eight bytes. Processing begins at a first step 112 where an index variable, I, is set to the sum of the requested size of the amount of memory (REQSIZE) and binary 111 (i.e., seven). As discussed in more detail below, the index variable I will be used to index a table such as the table 82 illustrated in FIG. 3. Following the step 112 is a step 114 where the index variable, I, is shifted right three times. The effect of the steps 112, 114 is to round the requested amount up to the next multiple of eight and divide by eight. Thus, for example, if the number of requested bytes is one, then the result of adding seven at the step 112 is eight. Shifting that value right three places at the step 114 results in an index value of one, meaning that the entry corresponding to eight byte blocks (e.g., PHEAD 84 of the table 82) will be accessed. As another example, if the number of requested bytes is eight, then the result of adding seven at the step 112 is fifteen. However, shifting fifteen right three times at the step 114 results in the index variable, I, also being one, thus correctly providing an unused block having eight bytes of storage. - Following the
step 114 is a step 116 where it is determined if the pointer in the table 82 points to null, meaning that there are no unused blocks corresponding to the requested amount of memory. If the pointer at the head of the table 82 does not point to null, meaning that there are available unused blocks corresponding to the requested size, then control transfers from the step 116 to a step 118 where the return value, RETVAL, which is a pointer to the requested block of memory, is set to be equal to PHEAD[I] (the pointer at the head of the linked list of unused blocks indexed in the table 82 by I) plus a value of OFFSET, which is the offset corresponding to the SIZE and PNEXT fields of the block pointed to by PHEAD[I]. The OFFSET value may be added to prevent the task that requests the block of memory from overwriting the SIZE field of the block when the block of memory is used. Other techniques may also be employed, such as having a separate list of used blocks that include the size of each used bucket. - Following the
step 118 is a step 122 where the head of the list is modified to reflect the fact that the first item on the list of unused blocks is now being used by a task, and thus is no longer an unused block. Following the step 122, processing is complete. - If it is determined at the
test step 116 that the head pointer of the linked list of unused blocks points to null, meaning that there are no unused blocks having a size corresponding to the requested amount of memory, then control transfers from the step 116 to a step 124 where memory is requested from the heap. The request at the step 124 may be performed by using a conventional memory heap request routine, such as malloc. Note also that the amount of memory requested at the step 124 may be the index, I, multiplied by eight (shifted left three times) plus the OFFSET, which corresponds to the amount of memory space used by the SIZE field and the PNEXT field. Thus, the requested memory may be converted into a block that, when freed, may be returned to the appropriate linked list of unused blocks rather than being returned to the heap. In an embodiment disclosed herein, a single heap is used for the blocks of memory that correspond to the table 82. - Following the
step 124 is a test step 126 where it is determined if the memory requested at the step 124 was successful. If so, then control transfers to a step 127 where the SIZE field of the block of memory obtained from the heap is set to the index, I, times eight (i.e., I shifted left three times). The value placed in the SIZE field corresponds to the size of the block that is being created and will be used when the task that requested the memory returns the block to the appropriate list of unused blocks. Following the step 127 is a step 128 where the return value, RETVAL, is adjusted in a manner similar to that discussed above in connection with the step 118. Following the step 128, processing is complete. - If it is determined at the
step 126 that the request for a block of memory from the heap at the step 124 was unsuccessful, then control passes from the step 126 to a step 132 where all of the unused blocks of memory from the table 82 are returned to the heap memory. Freeing the memory corresponding to the unused buckets at the step 132 is discussed in more detail hereinafter. - Following the
step 132 is a step 134 where the request for an appropriate block of memory from the heap, similar to the request presented at the step 124, is made. Note, however, that since the step 134 follows the step 132 where all memory corresponding to the unused blocks was returned to the heap, it is more likely that the step 134 will successfully be able to provide the requested block of memory, since freeing all the memory corresponding to unused blocks at the step 132 should increase the amount of heap memory available. Following the step 134 is a test step 136 where it is determined if the memory requested at the step 134 was successful. If so, then control transfers from the step 136 to the step 127, discussed above. Otherwise, control transfers from the step 136 to a step 138 where an error is returned. Returning the error at the step 138 indicates that the memory request by the task cannot be filled either with a block of memory from the table 82 or with memory from the heap. Following the step 138, processing is complete. - Referring to
FIG. 5, a flow chart 140 illustrates steps performed in connection with freeing all the unused blocks at the step 132 of FIG. 4. Processing begins at a first step 142 where an index, I, is set to one. The index I is used to index the table 82 that contains the list head pointers for all of the lists of unused blocks. - Following the
step 142 is a test step 144 where it is determined if PHEAD[I] equals null. The test at the step 144 determines if the head pointer of a linked list of unused blocks of a particular size (the linked list corresponding to the index I) equals null. If so, then all of the free blocks corresponding to the particular size have been returned and control transfers from the step 144 to a step 146 where the index, I, is incremented. Following the step 146 is a test step 148 where it is determined if the index, I, is greater than the number of entries in the table, IMAX. If so, then processing is complete. Otherwise, control transfers from the step 148 back to the step 144 for the next iteration that processes the next entry in the table. - If it is determined at the
test step 144 that the list head pointer PHEAD[I] does not equal null, then control transfers from the step 144 to a step 152 where a temporary variable, TEMP, is set equal to PHEAD[I]. Following the step 152 is a step 154 where the head of the list is adjusted to be equal to the next unused block in the list by setting PHEAD[I] equal to PHEAD[I].NEXT. Thus, for example, if the head of the list PHEAD 84 initially points to the bucket 86, then execution at the step 154 would cause the head of the list PHEAD 84 to point to the next block 87. Following the step 154 is a step 156 where the memory pointed to by TEMP is freed. Freeing the memory at the step 156 is performed in a conventional manner by, for example, calling a heap memory management routine that will free memory. Following the step 156, control transfers back to the test step 144. - Referring to
FIG. 6, a flow chart 160 illustrates steps performed in connection with returning a block (RTBLOCK) back to the table of unused blocks. Processing begins at a first step 162 where an index, I, is set equal to the size of the block being returned, which is found at the RTBLOCK.SIZE field. Following the step 162 is a step 164 where I is shifted right three times to obtain an index for the table 82. Following the step 164 is a step 166 where the NEXT field of the block being returned is set equal to the head of the linked list of blocks having the same size as the block being returned. Following the step 166 is a step 168 where the head of the linked list of blocks of the same size is set to point to the block being returned. Following the step 168, processing is complete. - In an embodiment of the invention described herein, the table 82 of unused blocks may initially be empty, in which case initial memory requests will result in obtaining memory from the heap and then returning the unused blocks to the table 82. Alternatively, it may be possible upon initialization to populate the table 82 with lists of unused blocks, as described below.
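The return routine of FIG. 6 might be sketched as follows, under the same illustrative assumptions as before (a header with SIZE and PNEXT fields ahead of the data area, and a 1-based table of list heads; all names are assumptions, not from the patent):

```c
#include <stdlib.h>
#include <stddef.h>

#define NBUCKETS (10240 / 8)          /* assumed 10 k maximum bucket */
struct block {
    size_t size;                      /* the RTBLOCK.SIZE field */
    struct block *pnext;              /* the NEXT link */
    unsigned char data[];             /* area handed to the task */
};
static struct block *phead[NBUCKETS + 1];  /* 1-based list heads */

/* Return a used block: recover the header from the data pointer,
   index the table from the SIZE field (steps 162, 164), and push
   the block onto the matching free list (steps 166, 168). */
static void bucket_free(void *p) {
    struct block *b =
        (struct block *)((unsigned char *)p - offsetof(struct block, data));
    size_t i = b->size >> 3;          /* size / 8 gives the table index */
    b->pnext = phead[i];
    phead[i] = b;
}
```

Because the SIZE field travels with the block, no search is needed to find the right list; the push is a constant-time operation.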
- Referring to
FIG. 7, a flow chart 170 illustrates steps performed in connection with populating the table 82 with lists of unused blocks. Processing begins at a first step 172 where an index variable, I, is set to one. Following the step 172 is a step 174 where another index variable, N, is set equal to one. The index variable I represents the index in the table 82 of lists of different size blocks. The index variable N represents the number of free blocks of the same size placed on each list. - Following
step 174 is a step 176 where a pointer to a block of memory (labeled “NEWVAL” in the flow chart 170) is created by calling a conventional heap memory allocation routine, such as malloc, to allocate a number of bytes corresponding to the index I times eight plus the extra bytes (OFFSET) introduced by the SIZE field and the PNEXT field. Following the step 176 is a step 177 where the SIZE field of the new block of memory being created is set equal to I times eight. Note that the value I times eight may be obtained by shifting I left three bit positions. - Following the
step 177 is a step 178 where the routine for returning unused blocks of memory is called for the new block just obtained. In an embodiment disclosed herein, the steps performed at the step 178 correspond to the flow chart 160 of FIG. 6. Following the step 178 is a test step 182 where N, the index that counts the number of unused blocks in each linked list of the same size blocks, is tested to determine if N is greater than NMAX, the maximum number of unused blocks provided on each list of the same size blocks. If not, then control transfers from the step 182 to a step 184 where N is incremented. Following the step 184, control transfers back to the step 176 to continue creating new blocks of memory. - If it is determined at the
test step 182 that the value of N is greater than NMAX (i.e., the maximum number of unused blocks provided on each list of blocks of the same size), then control transfers from the step 182 to a step 186 where the index variable I is incremented. Following the step 186 is a test step 188 where it is determined if I is greater than IMAX, the maximum number of entries in the table 82. If so, then processing is complete. Otherwise, control transfers from the step 188 back to the step 174, discussed above. - Referring to
FIG. 8, an alternative embodiment of a memory structure 270 is shown as including a SIZE field 272, a PNEXT field 274 corresponding to a pointer to a next block of memory, and a PDATA field 276′, which points to the data corresponding to the structure 270. The data for the structure 270 is provided by a data storage area 276. The SIZE field 272 and the PNEXT field 274 are like the SIZE field 72 and the PNEXT field 74 of the block 70 of FIG. 2. However, unlike the block 70 of FIG. 2, the structure 270 does not include the data with the structure 270. Instead, the structure 270 has a pointer to the data, PDATA 276′, which points to the data storage area 276. In some embodiments, it may be possible that the structure 270 and the data storage area 276 are provided by different heaps, although it is also possible to use the same heap for both. In addition, it may be possible to have the data storage area 276 be provided in a faster (or slower) memory than the structure 270, thus allowing quicker access of the data storage area 276 (or the structure 270). Of course, the table that contains the pointer to the structure 270 may also be provided in faster or slower memory. - Note also that the
structure 270 and the data storage area 276 may be accessed in parallel by parallel processors that access the structure 270 and the data storage area 276 at the same time. In other words, the structures and the data corresponding to the structures may be stored in separate memories that may be manipulated independently and in parallel. This may be advantageous for multi-processor/parallel processor architectures. Of course, it is also possible to have the structure 270 and the data storage area 276 reside in the same memory and be accessed by one processor, in series. Note that the table 82 of FIG. 3 can be used in connection with the structure 270 of FIG. 8 in the same way that the table 82 is used with the block 70 of FIG. 2.
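A minimal C rendering of the structure 270 may make the split layout concrete. The field names follow FIG. 8; using two separate `malloc` calls to stand in for the heaps H1 and H2 is an assumption for illustration (the patent allows different heaps or memories, or the same one), and the `new_structure` helper is hypothetical.

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the structure 270: unlike the block 70 of FIG. 2, the user
 * data is not inside the structure; PDATA points to a separate data
 * storage area, which may live in a different heap or memory. */
typedef struct bucket_struct {
    size_t size;                 /* SIZE field 272: bytes in the data area */
    struct bucket_struct *pnext; /* PNEXT field 274: next unused structure */
    void *pdata;                 /* PDATA field 276': the data storage area 276 */
} bucket_struct_t;

/* Build one structure with a data area of i*8 bytes. Two allocations:
 * one for the structure, one for the data; both use malloc here, but
 * each could come from its own heap (H1, H2) or its own memory. */
static bucket_struct_t *new_structure(size_t i) {
    bucket_struct_t *nb = malloc(sizeof *nb); /* structure storage */
    void *nd = malloc(i << 3);                /* data area, I times eight bytes */
    if (nb == NULL || nd == NULL) {
        free(nb);
        free(nd);
        return NULL;
    }
    nb->size = i << 3; /* SIZE = I shifted left three bit positions */
    nb->pnext = NULL;
    nb->pdata = nd;    /* the caller reaches the data through PDATA */
    return nb;
}
```

Because the structure and its data are separate allocations, freeing must release the data area through PDATA before releasing the structure itself, as the FIG. 9 discussion below notes for the step 156 substep.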
FIG. 9, a flowchart 310 illustrates steps performed in connection with an application requesting a block of memory using the structure 270 of FIG. 8. The flowchart 310 assumes that the blocks of memory are provided in multiples of eight bytes beginning with the smallest size of eight bytes. Processing begins at a first step 312 where an index variable, I, is set to the sum of the requested amount of memory (REQSIZE) and binary 111 (i.e., seven). As discussed in more detail below, the index variable I will be used to index a table such as the table 82 illustrated in FIG. 3. Following the step 312 is a step 314 where the index variable, I, is shifted right three bit positions. - Following the
step 314 is a step 316 where it is determined if the pointer at location I in the table 82 points to null, meaning that there are no unused structures corresponding to the requested size. If the pointer at the head of the table 82 does not point to null, meaning that there are available unused blocks of memory corresponding to the requested size, then control transfers from the step 316 to a step 318 where the return value, RETVAL, which is a pointer to the requested block of memory, is set equal to PHEAD[I] (the pointer at the head of the linked list of unused structures indexed in the table 82 by I). Note that, for an embodiment disclosed herein, the process that receives the pointer to a bucket (RETVAL) will use RETVAL.PDATA as a pointer to the data. Following the step 318 is a step 322 where the head of the list in the table is modified to reflect the fact that the first item on the list of unused structures is now being used, and thus is no longer an unused structure. Following the step 322, processing is complete. - If it is determined at the
test step 316 that the head pointer of the linked list of unused structures points to null, meaning that there are no unused structures having a size corresponding to the requested block of memory, then control transfers from the step 316 to a step 324 where memory is requested from the heap(s). The request at the step 324 may be performed by using a conventional memory heap request routine, such as malloc. Note also that there are two memory requests at the step 324: a first request from heap H1 for memory for the structure and a second request from heap H2 for memory for the data storage area. As discussed above, the heaps H1 and H2 may correspond to different memories or the same memory. If the same memory is used for H1 and H2, then one heap (i.e., H1 or H2) may be used. In some embodiments, the storage for the structures may be preallocated, and thus only the memory for the data storage area needs to be allocated at the step 324. Also note that, in some embodiments, H1 and H2 could be the same heap. - Following the
step 324 is a test step 326 where it is determined if the memory request at the step 324 was successful. If so, then control passes to a step 327 where the SIZE field of memory for the data storage area is set to the index, I, times eight (i.e., I shifted left three times). The value placed in the SIZE field corresponds to the size of the data storage area associated with the structure that is being created and will be used when the task that requested the memory returns the memory to the table of unused blocks of memory. Following the step 327, processing is complete. - If it is determined at the
step 326 that the request(s) for memory from the heap(s) at the step 324 were unsuccessful, then control passes from the step 326 to a step 332 where blocks of memory used by all the unused structures are returned to the heap memory. Freeing the memory corresponding to the unused structures at the step 332 is discussed above in connection with FIG. 5. Note, however, that, for the embodiment of FIGS. 8 and 9, the step 156 would also include a substep corresponding to freeing TEMP.PDATA prior to freeing TEMP. - Following the
step 332 is a step 334 where the request for appropriate block(s) of memory from the heap(s), similar to the request presented at the step 324, is made. Note, however, that since the step 334 follows the step 332, where all memory corresponding to the unused structures was returned to the heap(s), it is more likely that the step 334 will succeed, since freeing all the memory corresponding to unused structures at the step 332 should increase the amount of heap memory available. Following the step 334 is a test step 336 where it is determined if the memory request at the step 334 was successful. If so, then control transfers from the step 336 to the step 327, discussed above. Otherwise, control transfers from the step 336 to a step 338 where an error is returned. Returning the error at the step 338 indicates that the memory request by the task cannot be filled either with a structure or with memory from the heap(s). Following the step 338, processing is complete.
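The FIG. 9 request path reduces to a little arithmetic, a list pop, and a retry. The sketch below follows the flow chart (steps 312 through 338); `try_heap_alloc` and `release_all_unused` are illustrative stand-ins for the heap request and the step 332 cleanup, and the heap is simulated here by a flag so the retry path is visible — none of these names come from the patent.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Steps 312-314: add binary 111 (seven), then shift right three bit
 * positions. This rounds REQSIZE up to the next multiple of eight and
 * converts it into an index into the table 82. */
static size_t bucket_index(size_t reqsize) {
    return (reqsize + 7) >> 3;
}

/* Steps 316-322, sketched on a PNEXT-linked node: pop the head of the
 * list, or return NULL when the list is empty (the step 316 null test). */
typedef struct node { struct node *pnext; } node_t;

static node_t *pop_head(node_t **phead_i) {
    node_t *retval = *phead_i;    /* step 318: RETVAL = PHEAD[I] */
    if (retval != NULL)
        *phead_i = retval->pnext; /* step 322: head now skips the used item */
    return retval;
}

/* Steps 324-338, with the heap simulated by a flag: on failure, return
 * all unused structures to the heap (step 332) and retry (step 334);
 * NULL from the second attempt corresponds to the step 338 error. */
static int heap_has_room = 0; /* pretend the heap starts exhausted */

static void *try_heap_alloc(size_t bytes) {
    return heap_has_room ? malloc(bytes) : NULL;
}

static void release_all_unused(void) {
    /* In the real step 332 every unused structure is freed, TEMP.PDATA
     * before TEMP itself; here we only pretend room was made. */
    heap_has_room = 1;
}

static void *alloc_from_heap_with_retry(size_t bytes) {
    void *p = try_heap_alloc(bytes); /* step 324 */
    if (p != NULL)
        return p;
    release_all_unused();            /* step 332 */
    return try_heap_alloc(bytes);    /* step 334; NULL here = step 338 error */
}
```

For example, a request of ten bytes yields an index of two, i.e., a sixteen-byte bucket: (10 + 7) >> 3 = 2.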
FIG. 10, a flow chart 370 illustrates steps performed in connection with populating the table 82 with lists of unused structures in connection with the structure 270 of the embodiment of FIG. 8. Processing begins at a first step 372 where an index variable, I, is set to one. Following the step 372 is a step 374 where another index variable, N, is set equal to one. The index variable I represents the index in the table 82 of lists of structures corresponding to the same size data storage area (i.e., each bucket). The index variable N represents the number of free structures placed on each list. - Following
step 374 is a step 376 where two pointers are created by calling a conventional heap memory allocation routine, such as malloc. The first pointer, ND, represents a pointer to a data storage area provided by the heap H1. The second pointer, NB, represents storage for the structure. The amount of memory allocated for ND is the index I times eight (i.e., I shifted left three times). The amount of memory allocated for NB is the amount of memory taken up by a structure, which is constant for an embodiment disclosed herein. Following the step 376 is a step 377 where the SIZE field of the new structure being created is set equal to I times eight. Note that the value I times eight may be obtained by shifting I left three bit positions. Also at the step 377, the pointer NB.PDATA, which is the field of the structure that points to the data storage area, is set equal to ND. - Following the
step 377 is a step 378 where the routine for returning unused memory to the table is called for the structure created by the previous steps. In an embodiment disclosed herein, the steps performed at the step 378 correspond to the flow chart 160 of FIG. 6. Following the step 378 is a test step 382 where N, the index that counts the number of unused structures in each linked list, is tested to determine if N is greater than NMAX, the maximum number of structures provided on each list corresponding to the same size data storage area. If not, then control transfers from the step 382 to a step 384 where N is incremented. Following the step 384, control transfers back to the step 376 to continue creating new structures. - If it is determined at the
test step 382 that the value of N is greater than NMAX (i.e., the maximum number of structures provided for each list), then control transfers from the step 382 to a step 386 where the index variable I is incremented. Following the step 386 is a test step 388 where it is determined if I is greater than IMAX, the maximum number of entries in the table 82. If so, then processing is complete. Otherwise, control transfers from the step 388 back to the step 374, discussed above. - While the invention has been disclosed in connection with various embodiments, modifications thereon will be readily apparent to those skilled in the art. Accordingly, the spirit and scope of the invention is set forth in the following claims.
Claims (20)
1-42. (canceled)
43. A method of managing memory, comprising:
subdividing a first volatile memory into at least a first linked list of blocks corresponding to a first size and at least a second linked list of blocks corresponding to a second size that is greater than said first size, wherein user data stored by each linked list of blocks is provided entirely in the first volatile memory;
providing, in a second volatile memory different from said first memory, a table that includes pointers that point directly to a first block of each of said linked lists, wherein the second volatile memory does not contain any linked lists of blocks provided in the first volatile memory;
in response to a request for an amount of memory that is less than or equal to the first size, providing a pointer to said first block of said first linked list of blocks and modifying said table to point to a remaining portion of said first linked list of blocks that does not include said first block; and
in response to a request for an amount of memory that is greater than the first size and less than or equal to the second size, providing a pointer to said first block of said second linked list of blocks and modifying said table to point to a remaining portion of said second linked list of blocks that does not include said first block of said second linked list of blocks.
44. A method, according to claim 43 , wherein subdividing said first memory further includes subdividing said first memory into a plurality of linked lists of blocks, wherein each particular linked list contains blocks corresponding to one size that is different from that of blocks not in the particular linked list.
45. A method, according to claim 44 , wherein each set of blocks corresponds to a size that is a multiple of a predetermined value.
46. A method, according to claim 45 , wherein the predetermined value is 2^n, where n is an integer.
47. A method, according to claim 46 , wherein the predetermined value is eight.
48. A method, according to claim 46 , wherein the table includes one entry for each multiple of corresponding block sizes.
49. A method, according to claim 43 , wherein the first and second memories have different access rates.
50. A method, according to claim 43 , wherein the entry for each linked list of blocks is a pointer to one of: said first block and null.
51. A method, according to claim 43 , wherein the blocks contain a section in which user data is stored.
52. A method, according to claim 43 , further comprising:
in response to a request for an amount of memory corresponding to a particular linked list of blocks for which there are no unused blocks, requesting a block of memory from a heap.
53. A method, according to claim 52 , further comprising:
in response to there being no block of memory from the heap corresponding to the request, returning all unused blocks of memory to the heap.
54. A method, according to claim 53 , further comprising:
following returning all unused blocks to the heap, rerequesting a block of memory from the heap.
55. A computer program product, contained on a tangible computer-readable medium, comprising:
executable code that, when executed by a computer, subdivides a first volatile memory into at least a first linked list of blocks corresponding to a first size and at least a second linked list of blocks corresponding to a second size that is greater than said first size, wherein user data stored by each linked list of blocks is provided entirely in the first volatile memory;
executable code that, when executed by a computer, provides, in a second volatile memory different from said first memory, a table that includes pointers that point directly to a first block of each of said linked lists, wherein the second volatile memory does not contain any linked lists of blocks provided in the first volatile memory;
executable code that, when executed by a computer, provides a pointer to said first block of said first linked list of blocks and modifies said table to point to a remaining portion of said first linked list of blocks that does not include said first block in response to a request for an amount of memory that is less than or equal to the first size; and
executable code that, when executed by a computer, provides a pointer to said first block of said second linked list of blocks and modifies said table to point to a remaining portion of said second linked list of blocks that does not include said first block of said second linked list of blocks in response to a request for an amount of memory that is greater than the first size and less than or equal to the second size.
56. A computer program product, according to claim 55 , wherein the executable code that subdivides the first memory subdivides the first memory into a plurality of linked lists of blocks, wherein each particular linked list contains blocks corresponding to one size that is different from that of blocks not in the particular linked list.
57. A computer program product, according to claim 56 , wherein each linked list of blocks corresponds to a size that is a multiple of a predetermined value.
58. A computer program product, according to claim 57 , wherein the predetermined value is 2^n, where n is an integer.
59. A computer program product, according to claim 58 , wherein the predetermined value is eight.
60. A computer program product, according to claim 55 , further comprising:
executable code that, when executed by a computer, requests a block of memory from a heap in response to a request for an amount of memory corresponding to a particular set of blocks for which there are no unused blocks.
61. A computer program product, according to claim 59 , further comprising:
executable code that, when executed by a computer, returns all unused blocks of memory to the heap in response to there being no block of memory from the heap corresponding to the request.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/002,081 US20080162863A1 (en) | 2002-04-16 | 2007-12-13 | Bucket based memory allocation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/123,661 US7330956B1 (en) | 2002-04-16 | 2002-04-16 | Bucket based memory allocation |
US12/002,081 US20080162863A1 (en) | 2002-04-16 | 2007-12-13 | Bucket based memory allocation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/123,661 Continuation US7330956B1 (en) | 2002-04-16 | 2002-04-16 | Bucket based memory allocation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080162863A1 true US20080162863A1 (en) | 2008-07-03 |
Family
ID=34651945
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/123,661 Expired - Lifetime US7330956B1 (en) | 2002-04-16 | 2002-04-16 | Bucket based memory allocation |
US12/002,081 Abandoned US20080162863A1 (en) | 2002-04-16 | 2007-12-13 | Bucket based memory allocation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/123,661 Expired - Lifetime US7330956B1 (en) | 2002-04-16 | 2002-04-16 | Bucket based memory allocation |
Country Status (1)
Country | Link |
---|---|
US (2) | US7330956B1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7546588B2 (en) * | 2004-09-09 | 2009-06-09 | International Business Machines Corporation | Self-optimizable code with code path selection and efficient memory allocation |
CN101673246A (en) * | 2009-08-06 | 2010-03-17 | 深圳市融创天下科技发展有限公司 | High-efficient first-in first-out (FIFO) data pool reading and writing method |
US9542227B2 (en) * | 2012-01-30 | 2017-01-10 | Nvidia Corporation | Parallel dynamic memory allocation using a lock-free FIFO |
CN103914356A (en) * | 2014-03-12 | 2014-07-09 | 汉柏科技有限公司 | Memory rewriting location method |
JP6818982B2 (en) * | 2015-06-01 | 2021-01-27 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd | How to store files |
US10235292B2 (en) * | 2016-04-21 | 2019-03-19 | Dell Products L.P. | Method and system for implementing lock free shared memory with single writer and multiple readers |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5469559A (en) * | 1993-07-06 | 1995-11-21 | Dell Usa, L.P. | Method and apparatus for refreshing a selected portion of a dynamic random access memory |
US5784698A (en) * | 1995-12-05 | 1998-07-21 | International Business Machines Corporation | Dynamic memory allocation that enalbes efficient use of buffer pool memory segments |
US6757802B2 (en) * | 2001-04-03 | 2004-06-29 | P-Cube Ltd. | Method for memory heap and buddy system management for service aware networks |
US6779084B2 (en) * | 2002-01-23 | 2004-08-17 | Intel Corporation | Enqueue operations for multi-buffer packets |
US6829769B2 (en) * | 2000-10-04 | 2004-12-07 | Microsoft Corporation | High performance interprocess communication |
US6907508B2 (en) * | 2003-02-26 | 2005-06-14 | Emulex Design & Manufacturing Corporation | Structure and method for managing available memory resources |
US7035989B1 (en) * | 2000-02-16 | 2006-04-25 | Sun Microsystems, Inc. | Adaptive memory allocation |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5206939A (en) | 1990-09-24 | 1993-04-27 | Emc Corporation | System and method for disk mapping and data retrieval |
WO1994002898A1 (en) * | 1992-07-24 | 1994-02-03 | Microsoft Corporation | Computer method and system for allocating and freeing memory |
US5577243A (en) * | 1994-03-31 | 1996-11-19 | Lexmark International, Inc. | Reallocation of returned memory blocks sorted in predetermined sizes and addressed by pointer addresses in a free memory list |
US5623654A (en) * | 1994-08-31 | 1997-04-22 | Texas Instruments Incorporated | Fast fragmentation free memory manager using multiple free block size access table for a free list |
US5845147A (en) | 1996-03-19 | 1998-12-01 | Emc Corporation | Single lock command for an I/O storage system that performs both locking and I/O data operation |
US5784699A (en) * | 1996-05-24 | 1998-07-21 | Oracle Corporation | Dynamic memory allocation in a computer using a bit map index |
US5857208A (en) * | 1996-05-31 | 1999-01-05 | Emc Corporation | Method and apparatus for performing point in time backup operation in a computer system |
US5778394A (en) | 1996-12-23 | 1998-07-07 | Emc Corporation | Space reclamation system and method for use in connection with tape logging system |
US6175900B1 (en) * | 1998-02-09 | 2001-01-16 | Microsoft Corporation | Hierarchical bitmap-based memory manager |
US6490670B1 (en) * | 1998-04-24 | 2002-12-03 | International Business Machines Corporation | Method and apparatus for efficiently allocating objects in object oriented systems |
US6219772B1 (en) * | 1998-08-11 | 2001-04-17 | Autodesk, Inc. | Method for efficient memory allocation of small data blocks |
US6446183B1 (en) * | 2000-02-15 | 2002-09-03 | International Business Machines Corporation | Systems and methods for persistent and robust memory management |
US6539464B1 (en) * | 2000-04-08 | 2003-03-25 | Radoslav Nenkov Getov | Memory allocator for multithread environment |
US6427195B1 (en) * | 2000-06-13 | 2002-07-30 | Hewlett-Packard Company | Thread local cache memory allocator in a multitasking operating system |
US20020161982A1 (en) * | 2001-04-30 | 2002-10-31 | Erik Riedel | System and method for implementing a storage area network system protocol |
- 2002-04-16: US US10/123,661, granted as US7330956B1 (not active, Expired - Lifetime)
- 2007-12-13: US US12/002,081, published as US20080162863A1 (not active, Abandoned)
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8526049B2 (en) | 2006-03-31 | 2013-09-03 | Konica Minolta Laboratory U.S.A., Inc. | Systems and methods for display list management |
US20070236733A1 (en) * | 2006-03-31 | 2007-10-11 | Stuart Guarnieri | Systems and methods for display list management |
US20070229900A1 (en) * | 2006-03-31 | 2007-10-04 | Konica Minolta Systems Laboratory, Inc. | Systems and methods for display list management |
US8069330B2 (en) * | 2006-03-31 | 2011-11-29 | Infovista Sa | Memory management system for reducing memory fragmentation |
US20090244593A1 (en) * | 2008-03-31 | 2009-10-01 | Tim Prebble | Systems and Methods for Parallel Display List Rasterization |
US8782371B2 (en) * | 2008-03-31 | 2014-07-15 | Konica Minolta Laboratory U.S.A., Inc. | Systems and methods for memory management for rasterization |
US8228555B2 (en) | 2008-03-31 | 2012-07-24 | Konica Minolta Laboratory U.S.A., Inc. | Systems and methods for parallel display list rasterization |
US8817032B2 (en) | 2008-08-29 | 2014-08-26 | Konica Minolta Laboratory U.S.A., Inc. | Systems and methods for framebuffer management |
US20100060934A1 (en) * | 2008-09-11 | 2010-03-11 | Darrell Eugene Bellert | Systems and Methods for Optimal Memory Allocation Units |
US8854680B2 (en) | 2008-09-11 | 2014-10-07 | Konica Minolta Laboratory U.S.A., Inc. | Systems and methods for optimal memory allocation units |
US8861014B2 (en) | 2008-09-30 | 2014-10-14 | Konica Minolta Laboratory U.S.A., Inc. | Systems and methods for optimized printer throughput in a multi-core environment |
US20100079809A1 (en) * | 2008-09-30 | 2010-04-01 | Darrell Eugene Bellert | Systems and Methods for Optimized Printer Throughput in a Multi-Core Environment |
US20100205374A1 (en) * | 2009-02-11 | 2010-08-12 | Samsung Electronics Co., Ltd. | Embedded system for managing dynamic memory and methods of dynamic memory management |
CN101799786A (en) * | 2009-02-11 | 2010-08-11 | 三星电子株式会社 | Embedded system for managing dynamic memory and methods of dynamic memory management |
US8954753B2 (en) | 2009-07-23 | 2015-02-10 | International Business Machines Corporation | Encrypting data in volatile memory |
US8281154B2 (en) | 2009-07-23 | 2012-10-02 | International Business Machines Corporation | Encrypting data in volatile memory |
US20110022853A1 (en) * | 2009-07-23 | 2011-01-27 | International Business Machines Corporation | Encrypting data in volatile memory |
US9417881B2 (en) * | 2012-01-30 | 2016-08-16 | Nvidia Corporation | Parallel dynamic memory allocation using a lock-free pop-only FIFO |
US20130198479A1 (en) * | 2012-01-30 | 2013-08-01 | Stephen Jones | Parallel dynamic memory allocation using a lock-free pop-only fifo |
CN103838633A (en) * | 2012-11-20 | 2014-06-04 | 国际商业机器公司 | Out-of-memory avoidance in dynamic virtual machine memory adjustment |
US9298611B2 (en) | 2012-11-20 | 2016-03-29 | International Business Machines Corporation | Out-of memory avoidance in dynamic virtual machine memory adjustment |
US9311236B2 (en) * | 2012-11-20 | 2016-04-12 | International Business Machines Corporation | Out-of-memory avoidance in dynamic virtual machine memory adjustment |
US20140143516A1 (en) * | 2012-11-20 | 2014-05-22 | International Business Machines Corporation | Out-of-memory avoidance in dynamic virtual machine memory adjustment |
US11182089B2 (en) * | 2019-07-01 | 2021-11-23 | International Business Machines.Corporation | Adapting memory block pool sizes using hybrid controllers |
Also Published As
Publication number | Publication date |
---|---|
US7330956B1 (en) | 2008-02-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20080162863A1 (en) | Bucket based memory allocation | |
US6425051B1 (en) | Method, system, program, and data structures for enabling a controller accessing a storage device to handle requests to data in a first data format when the storage device includes data in a second data format | |
US6490666B1 (en) | Buffering data in a hierarchical data storage environment | |
CA2436517C (en) | Method and apparatus for data processing | |
US5664160A (en) | Computer program product for simulating a contiguous addressable data space | |
US6792518B2 (en) | Data storage system having mata bit maps for indicating whether data blocks are invalid in snapshot copies | |
US6889288B2 (en) | Reducing data copy operations for writing data from a network to storage of a cached data storage system by organizing cache blocks as linked lists of data fragments | |
US5742793A (en) | Method and apparatus for dynamic memory management by association of free memory blocks using a binary tree organized in an address and size dependent manner | |
US6938134B2 (en) | System for storing block allocation information on multiple snapshots | |
US6275830B1 (en) | Compile time variable size paging of constant pools | |
US7716448B2 (en) | Page oriented memory management | |
US7587566B2 (en) | Realtime memory management via locking realtime threads and related data structures | |
US7415653B1 (en) | Method and apparatus for vectored block-level checksum for file system data integrity | |
US7225314B1 (en) | Automatic conversion of all-zero data storage blocks into file holes | |
US6804761B1 (en) | Memory allocation system and method | |
US11314689B2 (en) | Method, apparatus, and computer program product for indexing a file | |
US20100030994A1 (en) | Methods, systems, and computer readable media for memory allocation and deallocation | |
US20030217075A1 (en) | Method for reserving pages of database | |
US6219772B1 (en) | Method for efficient memory allocation of small data blocks | |
US20120151170A1 (en) | System and method of squeezing memory slabs empty | |
US5963982A (en) | Defragmentation of stored data without pointer indirection | |
US7392361B2 (en) | Generic reallocation function for heap reconstitution in a multi-processor shared memory environment | |
US7512135B2 (en) | Method for transferring data among a logical layer, physical layer, and storage device | |
CN111459884B (en) | Data processing method and device, computer equipment and storage medium | |
US6782444B1 (en) | Digital data storage subsystem including directory for efficiently providing formatting information for stored records |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MCCLURE, STEVEN T.;CHALMER, STEVEN R.;NIVER, BRETT D.;REEL/FRAME:020296/0879 Effective date: 20020412 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |