US6675281B1 - Distributed mapping scheme for mass storage system - Google Patents

Distributed mapping scheme for mass storage system

Info

Publication number
US6675281B1
US6675281B1 (application US10/054,560, US5456002A)
Authority
US
United States
Prior art keywords
data
page
physical address
storage system
mass storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US10/054,560
Inventor
Yaw Oh
Jen Chieh Tuan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TM Tech Inc
Original Assignee
iCreate Technologies Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iCreate Technologies Corp filed Critical iCreate Technologies Corp
Priority to US10/054,560 priority Critical patent/US6675281B1/en
Assigned to ICREATE TECHNOLOGIES CORPORATION reassignment ICREATE TECHNOLOGIES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TUAN, JEN CHIEH, OH, YAW
Application granted granted Critical
Publication of US6675281B1 publication Critical patent/US6675281B1/en
Assigned to TM TECHNOLOGY INC. reassignment TM TECHNOLOGY INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ICREATE TECHNOLOGIES CORPORATION
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C16/00Erasable programmable read-only memories
    • G11C16/02Erasable programmable read-only memories electrically programmable
    • G11C16/06Auxiliary circuits, e.g. for writing into memory
    • G11C16/10Programming or data input circuits
    • G11C16/102External programming circuits, e.g. EPROM programmers; In-circuit programming or reprogramming; EPROM emulators

Abstract

In accordance with the objectives of the invention a new method is provided for the updating and erasing of flash memory data. The new method effects and improves the write, the update and the read operations of the flash memory cell.

Description

BACKGROUND OF THE INVENTION
(1) Field of the Invention
The invention relates to the fabrication of integrated circuit devices, and more particularly, to a method of rapidly updating the content of an arbitrarily selected flash memory cell and evenly spreading out the erasing of the blocks of data.
(2) Description of the Prior Art
Some of the well known characteristics of flash memory design and application can be highlighted as the following:
durability
a small form factor
short access time, and
low power consumption.
For comparative purposes, the following characteristics as they apply to hard disk can be identified:
large storage capacity
low cost, and
no limit on erase capability.
Flash memory is presently being used for mass storage. The characteristics indicated above highlight a number of obvious advantages of flash memory when compared with hard disk data storage, such as durability, a small form factor, short access time and relatively low power consumption. Among the disadvantages of flash memory must be cited its low storage capacity (presently available up to 1 Gigabyte, which must be compared with 30 Gigabytes for a hard disk), while flash memory is also more costly than disk storage devices. One notable difference between flash memory and disk storage is that flash memory is limited in its erasing capability, a drawback that does not exist for disk storage devices.
The problems with flash memory data are:
the flash memory must be erased before it can be written
a long erase time is required
the number of times that the erase operation can be performed is limited, typically between 100,000 and 1,000,000 times.
An effective system design is required that addresses the above stated issues of flash memory implementation. For this and other reasons, a mass storage system that uses flash memory must have the following design features:
dynamic mapping of logical and physical addresses must be provided, and
a wear-level algorithm must be part of the design.
The system must be provided with a dynamic mapping table of logical and physical addresses so that the data does not need to be frequently erased. In addition, an algorithm must be provided that assures even wear of the memory cells, such that the entire memory does not become inoperative due to a portion of the memory reaching the end of its life.
The conventional organization of a flash memory is highlighted in FIG. 3 (and in the sketch after this list), which shows that:
a flash memory is divided into blocks or cylinders, a block forms a unit of erasure
a block of a flash memory consists of pages or sectors, typically up to 16 pages, a page forms a unit of read/write, and
each of the pages of the flash memory contains 16 bytes of header information and 512 bytes of data.
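For illustration only, the geometry just listed can be written down as C structures. This is a minimal sketch: the type and field names (flash_page_t, flash_block_t, spare, data) are invented here and are not taken from the patent, while the sizes (up to 16 pages per block, 512 data bytes and 16 header bytes per page) follow the description.

#include <stdint.h>

#define PAGES_PER_BLOCK  16   /* a block (or cylinder) holds up to 16 pages  */
#define PAGE_DATA_BYTES  512  /* a page carries 512 bytes of data            */
#define PAGE_SPARE_BYTES 16   /* plus 16 bytes of header (spare) information */

/* One page: the unit of read/write. */
typedef struct {
    uint8_t spare[PAGE_SPARE_BYTES]; /* header information, later used for mapping */
    uint8_t data[PAGE_DATA_BYTES];   /* user data                                  */
} flash_page_t;

/* One block: the unit of erasure. */
typedef struct {
    flash_page_t page[PAGES_PER_BLOCK];
} flash_block_t;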
One of the conventional methods that is applied to update data in flash memory follows the steps of:
copy data from a block to a buffer
update the data in the buffer with new data
erase data in the original block, and
copy the updated data from the buffer back to the block.
The conventional method of erasing one block of data at a time presents problems for a conventional flash memory. If, for instance, only one byte of data needs to be changed in the flash memory, the data that is contained in the block (that comprises this byte) must be copied to a buffer, the buffer must then be updated, the old data must then be erased in the original block, and finally the new data must be copied from the buffer back to the block. Since erasing a block of data is a relatively time-consuming operation, most systems will not approach data update operations in this manner.
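To make the cost of this read-modify-erase-write cycle concrete, here is a minimal sketch. The driver primitives read_block(), erase_block() and program_block() are hypothetical names, not part of the patent; the point is only that changing a single byte forces a full block copy, a block erase and a full block re-program.

#include <stddef.h>
#include <stdint.h>

#define BLOCK_BYTES (16 * 512)

/* Hypothetical low-level primitives supplied by a flash driver. */
extern void read_block(int block_no, uint8_t *buf, size_t len);
extern void erase_block(int block_no);                  /* slow, and wears the block */
extern void program_block(int block_no, const uint8_t *buf, size_t len);

/* Conventional in-place update: even a one-byte change costs a full block copy,
 * a block erase and a full block re-program. */
void update_byte_in_place(int block_no, size_t offset, uint8_t new_value)
{
    static uint8_t buffer[BLOCK_BYTES];

    read_block(block_no, buffer, sizeof buffer);     /* 1. copy the block to a buffer    */
    buffer[offset] = new_value;                      /* 2. update the data in the buffer */
    erase_block(block_no);                           /* 3. erase the original block      */
    program_block(block_no, buffer, sizeof buffer);  /* 4. copy the buffer back          */
}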
Another conventional method that is applied to update data in flash memory follows the steps of:
copy data from a block to a buffer
update the data in the buffer with new data
find an empty block, this empty block is identified by a physical address
copy the updated data from the buffer to the empty block, and
record or update the physical address of the located empty block in a corresponding mapping table.
The advantages of this latter method are:
elimination of frequent erasing of data
saving of the long erase time, and
evening out of the wear level of the data blocks.
The need however remains for an additional mapping table. Such a mapping table can be created as follows:
Logical address Physical address
0 0
1 1
2 1000
3 3
4 1001
The above highlighted mapping table provides and maintains a one-to-one correspondence between logical data addresses and the physical addresses where the data resides on a data storage medium.
The conventional methods that are applied for the storage of a mapping table comprise one of the following:
stored in SRAM in the controller
stored in Content Addressable Memory (CAM) in the controller, and
stored in flash memory external to the controller.
The problem that is encountered with the first two of the above highlighted methods is that, if power is lost to the flash memory, all the data that is contained in the mapping table is lost. In addition, storing the mapping table in a specific place in a flash memory encounters the conventional problem of flash memory wear or endurance. The invention provides a method whereby the mapping table can be stored in any location of the flash memory.
U.S. Pat. No. 6,230,233 (Lofgren et al.) discloses a mass storage system made of flash EEPROM cells. The relative usage of memory banks is monitored, and physical addresses are periodically changed to even-out flash cell wear.
U.S. Pat. No. 6,000,006 (Bruce et al.) describes a flash memory-based mass storage system. The physical address map is maintained in RAM. Wear leveling is performed.
U.S. Pat. No. 5,742,934 (Shinohara) teaches a flash memory-based hard disk. An address conversion table that depends on logical and physical sector numbers is used to extend the memory life.
U.S. Pat. No. 5,740,396 (Mason) describes a solid state disk comprising a flash memory. An address conversion method is used to convert the logical address to the physical address.
SUMMARY OF THE INVENTION
A main objective of the invention is to improve the procedure that is used to update data contained in the flash memory cell.
Another objective of the invention is to spread the erasing of data from a flash memory cell over the sequence of blocks that is being erased.
In accordance with the objectives of the invention a new algorithm is provided for the updating and erasing of flash memory data. The new algorithm effects and improves the write, the update and the read operations of the flash memory cell.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a flow diagram of a conventional scheme for updating data in a flash memory cell.
FIG. 2 shows a flow diagram of the new Distributed Mapping Scheme of the invention.
FIG. 3 highlights the conventional organization of a flash memory.
FIG. 4 shows how a one-to-one correspondence between logical and physical addresses is maintained.
FIGS. 5 through 9 show the content of a block of data at various stages of update.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
As the density of flash memory increases and its price decreases, flash memory has become the favorite storage device for portable systems.
The current method of updating flash memory will be described first in order to highlight its shortcomings. One drawback with using flash memory devices is that it is difficult to change the content of an arbitrary memory cell.
To change just one single bit of the memory requires the following steps:
1. copy the block of data containing the bit to be changed to a buffer; this block of data contains at least 8,000 (8K) bytes of memory data
2. update the data that is now contained in the buffer
3. erase the data contained in the original block of memory
4. copy the content of the buffer back to the location of the original block.
In addition to this cumbersome procedure that is required to update data in a flash memory cell, another drawback of the flash memory is that there is a limit to the number of times that a block of data can be erased. The data block will be damaged if this block is erased a large number of times. As a consequence of these drawbacks, the update and erasing system must be designed such that erasures are not concentrated in any particular area of the flash memory but are spread throughout the whole memory.
This patent is designed to remedy the above highlighted drawbacks of the flash memory by defining a procedure which can update the data quickly and spread the erasing of the blocks evenly over the whole memory.
The flash memory is currently being treated as a hard disk in which the memory area is partitioned into blocks for mass storage. The flash memory controller, which interfaces with a host software function, is the principal control mechanism for reading, writing and updating data in the flash memory cell. Data from the host is written to the flash memory by making reference to a mapping table of logical and physical addresses; the host controls and updates the mapping table, which can be stored either internally in the flash memory or externally to it. The host views the flash memory as one continuous area of data that is partitioned into blocks of (flash memory) data. The flash memory data are defined by the host as continuous logical addresses, and these logical addresses are part of the mapping table. At the same time, each block of the physical memory is defined by a physical address. Because some of the blocks of data in the flash memory may have been damaged, or because the frequency of erasing data must be spread evenly over the flash memory, a given logical address may have a different physical address at different times. In short: the host maintains the logical addresses for a block of data, each logical address corresponds with a physical flash memory address, and the flash memory is addressed only via the logical addresses that are maintained by the host.
The method that is used to identify unusable data in the flash memory cell is not germane to the invention and is typically a method of testing the flash memory cell. The test results indicate the physical location in the flash memory that is found to be defective; this physical location is referred to by a physical address that is maintained by a host software function. The logical addresses of the host software function are sequential addresses ranging from, for instance, 1 through 1,000. Within each of these logical addresses a record of information (relating to the memory of the flash memory cell) is maintained by the host software function; the most important of these records is the physical address in the flash memory cell to which the logical address points. By searching the logical addresses and the records maintained with them, the host can find a physical address or location of the flash memory cell and can locate the data in this physical location.
For example, and continuing the explanation of conventional updating of a flash memory, consider a system with a logical address range from 1-1000 and a physical flash memory address range from 1-1024, where physical locations (addresses) 3 and 7 are deemed unusable; the mapping table is then as follows. It is thereby assumed that initially logical and physical addresses have a one-to-one correspondence, a correspondence that is valid as long as no data update to the flash memory cell has been performed.
Logical address Physical address
1 1
2 2
3 1001
4 4
5 5
6 6
7 1002
8 8
At the time that the host system needs to retrieve memory data for logical location 3, using the table, the host system will retrieve this data from the physical location 1001 of the flash memory.
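As an illustration only, the remapping example above can be played out in a few lines of C. The address ranges and the spare blocks 1001 and 1002 follow the example; the plain array representation, the function names and the use of main() are assumptions made for this sketch, since the patent leaves the table representation to the host.

#include <stdio.h>

#define LOGICAL_BLOCKS 1000

/* mapping[i] holds the physical block currently backing logical block i.
 * Entries start out equal to the logical number; unusable physical blocks
 * 3 and 7 have been remapped to the spare blocks 1001 and 1002. */
static int mapping[LOGICAL_BLOCKS + 1];

static int logical_to_physical(int logical)
{
    return mapping[logical];
}

int main(void)
{
    for (int i = 1; i <= LOGICAL_BLOCKS; i++)
        mapping[i] = i;                 /* initial one-to-one correspondence */
    mapping[3] = 1001;                  /* physical location 3 is unusable   */
    mapping[7] = 1002;                  /* physical location 7 is unusable   */

    /* Retrieving logical location 3 therefore reads physical location 1001. */
    printf("logical 3 -> physical %d\n", logical_to_physical(3));
    return 0;
}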
In order to have an evenly distributed erasing frequency for the data blocks of the flash memory, the current procedure for updating the data in the flash memory cell is as follows (see also FIG. 1 and the sketch after these steps):
1. copy the data from the flash memory data block to a buffer; the data in the flash memory that needs to be updated is pointed to by the logical address that is maintained by the host in the logical mapping table; testing or other intervention has indicated to the host the logical address of the data in the flash memory that must be updated; the host finds the physical address for this data by doing a look-up in the logical-to-physical mapping table and reading out the physical address that corresponds with the logical address of the data to be replaced; FIG. 1, step 10
2. update the data in the buffer; the host has maintained the address of the buffer into which the data to be updated has been copied; the update data has been provided to the host and is entered by the host into the buffer, overwriting and thereby replacing the unusable data; FIG. 1, step 12
3. look, by doing a table look-up in the logical mapping table of the host, for an empty block in the flash memory; FIG. 1, step 14
4. the host writes the data from the buffer to the empty block in the flash memory; FIG. 1, step 16, and
5. the host updates the physical address of the previously empty block in the logical address or mapping table of the host; FIG. 1, step 18.
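A compact sketch of steps 10 through 18 follows. It assumes the host keeps the logical-to-physical table as a plain array; the helper names read_physical(), write_physical() and find_empty_block() are invented for the illustration and stand in for whatever table look-up and flash access the host actually provides.

#include <stddef.h>
#include <stdint.h>

#define BLOCK_BYTES (16 * 512)

extern int  mapping[];                   /* logical-to-physical table kept by the host */
extern void read_physical(int pba, uint8_t *buf);
extern void write_physical(int pba, const uint8_t *buf);
extern int  find_empty_block(void);      /* table look-up for an unused physical block */

/* Conventional update through the host-maintained mapping table (FIG. 1). */
void update_logical_block(int logical, size_t offset, uint8_t new_value)
{
    static uint8_t buffer[BLOCK_BYTES];

    int old_pba = mapping[logical];      /* look up the current physical address       */
    read_physical(old_pba, buffer);      /* step 10: copy the block to a buffer        */
    buffer[offset] = new_value;          /* step 12: update the data in the buffer     */
    int new_pba = find_empty_block();    /* step 14: locate an empty block             */
    write_physical(new_pba, buffer);     /* step 16: write the buffer to the new block */
    mapping[logical] = new_pba;          /* step 18: record the new physical address   */
}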
In the above cited example, if the data in the logical address 1 needs to be updated, and an empty block is available at physical address 1003, the mapping table is as follows:
Logical address Physical address
1 1003
2 2
3 1001
4 4
5 5
6 6
7 1002
8 8
If the data in logical address 1 needs to be updated at frequent intervals, it will be written to a different flash memory location each time an update takes place. In this way, the flash memory will not be damaged by frequent updating of one flash memory block.
The deficiencies of the present procedure, which have been highlighted above, will now be indicated. The problem with the mapping table is that the host system must maintain it, from which it follows that:
1. if the mapping table is stored in SRAM or DRAM, losing the power source means losing the table data that is stored
2. if the table is stored in a particular location in the flash memory, since the table will be frequently updated, the life of the flash memory will be shortened.
This patent will eliminate the above highlighted deficiencies of the conventional method by using the special feature of the flash memory to distribute the mapping table throughout the memory, such that the lifetime of the flash memory will not be shortened. From the above it is to be concluded that the mapping table is not stored in a particular location but is distributed all over the flash memory.
In the flash memory, each block of data is partitioned into multiple pages and each page contains two parts, a data part and a spare part. The data part is used for storing data from the host while the spare part is for the user to define. The mapping table for this patent uses the spare part.
The algorithm for this patent can be illustrated in the following examples, making use of the three main operations of the flash memory cell, that is a write, an update and a read operation of the flash memory cell.
1) Write Operation.
At the start of operation, the flash memory cell does not contain any data, so the system logical addresses will be the same as the physical addresses. At the time that the host writes data to for instance logical address 2, the system will place the data at the data portion of physical address 2, the spare portion of the physical address will be empty.
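A minimal sketch of this initial write, under the assumption of a page layout with a data portion and a spare portion and an invented accessor page_at(); since logical and physical addresses still coincide, the data for logical address 2 simply lands in the data portion of physical page 2 and the spare portion is left erased.

#include <stdint.h>
#include <string.h>

typedef struct {
    uint16_t spare;       /* spare portion: left erased (all ones) by an initial write */
    uint8_t  data[512];   /* data portion                                              */
} page_t;

extern page_t *page_at(uint16_t pba);   /* hypothetical access to a page by its address */

/* Initial write: logical address la still equals the physical address,
 * so the data is placed in the data portion of physical page la. */
void initial_write(uint16_t la, const uint8_t *data, size_t len)
{
    memcpy(page_at(la)->data, data, len);
    /* the spare portion of the page is deliberately not written */
}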
2) Update Operation.
At the time that it is found that, for reasons stated above, the data in the flash memory cell must be updated, the algorithm of the invention will, under host software control:
copy the original (old) data to a buffer; this old data is extracted from the data portion of a page in flash memory that needs to be updated, the spare portion of this page is empty at this time (since it is assumed that this is a first-time update); step 20, FIG. 2
update the data that is contained in the buffer with the new data that is to replace the old data; step 22, FIG. 2
find a new, empty block to place the data from the updated buffer; the new, empty block is located by the host by doing a table search of the spare portion of the pages of the flash memory, identifying a data page that is empty; this table search provides the physical address of a page in flash memory that contains no data; the logical address is not affected by this table search, which implies that the original logical address that related to an original physical data page in flash memory will retain this original relationship; the one-to-one correspondence between the logical and physical address will however no longer hold after a first update of that physical address has taken place; step 24, FIG. 2
write the data to the data portion of the new, empty block; step 26, FIG. 2
if the spare portion of the unusable data is empty, step 28, FIG. 2, enter the physical address of the new block in the spare portion of the original unusable page in flash memory; step 30, FIG. 2.; the host support function is then available for additional updates, function 34, FIG. 2
if the spare portion of the unusable data is not empty, step 28, FIG. 2, create a pointer to the updated data in the spare portion of the preceding usable data; step 32, FIG. 2; the host support function is then available for additional updates, function 36, FIG. 2.
After the block has been updated a number of times, each new address will have been appended after the previous address in the spare portion of the page, so checking the last address entry will point to the latest updated data.
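The update operation of FIG. 2 can be sketched as follows. The page layout (a data portion plus a spare portion holding at most one forward pointer), the helper names page_at() and find_empty_page(), and the use of 0xFFFF as the "empty" spare value are assumptions made for the sketch; following the claim language, the new physical address is always entered in the spare portion of the last page of the chain, which merges the two branches of steps 28 through 32 into a single case.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_DATA_BYTES  512
#define SPARE_EMPTY      0xFFFF           /* an erased spare portion reads back as all ones */

typedef struct {
    uint16_t next_pba;                    /* spare portion: pointer to the replacing page   */
    uint8_t  data[PAGE_DATA_BYTES];       /* data portion                                   */
} page_t;

extern page_t  *page_at(uint16_t pba);    /* hypothetical access to a page by its address   */
extern uint16_t find_empty_page(void);    /* hypothetical table search for an empty page    */

/* Update data that currently lives in the chain starting at physical page 'pba' (FIG. 2). */
void distributed_update(uint16_t pba, size_t offset, const uint8_t *new_data, size_t len)
{
    /* Follow the chain of pointers to the page whose spare portion is still empty. */
    while (page_at(pba)->next_pba != SPARE_EMPTY)
        pba = page_at(pba)->next_pba;

    uint8_t buffer[PAGE_DATA_BYTES];
    memcpy(buffer, page_at(pba)->data, sizeof buffer);     /* step 20: copy the old data  */
    memcpy(buffer + offset, new_data, len);                /* step 22: update the buffer  */

    uint16_t new_pba = find_empty_page();                  /* step 24: find an empty page */
    memcpy(page_at(new_pba)->data, buffer, sizeof buffer); /* step 26: write the new data */

    page_at(pba)->next_pba = new_pba;     /* steps 28-32: record the pointer in the spare */
}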
As an example, the system needs to update the data in the logical address 2. The system has located an empty block at the physical address 7 and has written the updated data into this address. The following table illustrates the operation.
Before the update:
Physical Address Data portion Spare portion
2 Old data Empty
7 Empty Empty
After first update
Physical Address Data portion Spare portion
2 Old data Pointer to block 7
7 New data Empty
After a second update, this update to for instance physical block 9 in flash memory:
Physical Address Data portion Spare portion
2 Old data Pointer to block 7
7 Old data Pointer to block 9
9 New data Empty
From the above two update examples it is clear that the host develops a string of pointers, with the last pointer pointing to the last page into which new data has been entered.
3) Read Operation.
When the host system retrieves the data, it will check whether the spare portion of the data is empty. For this procedure (see also the sketch after this list):
the data that needs to be retrieved is identified by the (original) physical address of these data in flash memory
the host does a table search and locates this original physical address
from which the logical address for this data block can be identified
for this logical address, the host finds the data page which has no entry in the spare part of the data page, identifying the (last) page that has not been updated and that therefore contains usable data.
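A sketch of this read path, reusing the hypothetical page layout and page_at() accessor from the update sketch above: the host follows the pointers recorded in the spare portions until it reaches a page whose spare portion is still erased (all ones), and that page holds the usable data.

#include <stdint.h>

#define SPARE_EMPTY 0xFFFF                /* an erased spare portion reads back as all ones */

typedef struct {
    uint16_t next_pba;                    /* spare portion: pointer to the replacing page   */
    uint8_t  data[512];                   /* data portion                                   */
} page_t;

extern page_t *page_at(uint16_t pba);     /* hypothetical access to a page by its address   */

/* Read the usable data for a logical address whose original physical page is 'pba'. */
const uint8_t *distributed_read(uint16_t pba)
{
    while (page_at(pba)->next_pba != SPARE_EMPTY)   /* spare not empty: page was updated,   */
        pba = page_at(pba)->next_pba;               /* follow the pointer to the newer copy */
    return page_at(pba)->data;                      /* empty spare: this page is usable     */
}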
The algorithm of the invention can be summarized as follows:
prior to any data updates of the physical data in flash memory, a one-to-one correspondence may be assumed between logical and physical addresses of the pages in flash memory
at the time that an update is required:
the unusable data is copied to a buffer
the buffer is updated with new data
a new, empty block is located
the new data is written to the data portion of the new block, and
the physical address of the new block is entered in the spare portion of the unusable page in flash memory
at the time of data retrieval from flash memory, the host finds the data page which has no entry in the spare part of the data page, identifying the (last) page that has not been updated and that therefore contains usable data.
The above highlighted method is further illustrated using FIGS. 4 through 9.
Referring now specifically to FIG. 4, there is shown how a one-to-one correspondence between logical and physical addresses is maintained. Each of the blocks of data, starting with block 1 and from there incrementally proceeding to additional blocks of data controlled by the distributive mapping table that is shown in overview in FIG. 4, is referred to by 16 Logical addresses (LA), each logical address containing a first or spare part and a second or data part, as highlighted in FIG. 4. The distributive mapping table of the invention will be stored in the header section of the flash memory. In the scheme that is shown in FIG. 4, the physical address will be the same as the logical address unless otherwise noted in the table.
The block of data that is shown in FIG. 5 contains, as a first record in the block, an index that points to the most up-to-date physical address for this block. The first spare and data records of the block form page 0 of this block. For the example that is shown in FIG. 5 the index INDX therefore points to the location of PBA4, since the balance of the spare part of page 2 is empty (FFFF entries, or all ones). In the block data that is shown in FIG. 5, PBAX refers to the physical block address replacing this block of data, while INDX is the index that indicates which page in the block has the most up-to-date physical address. In the example shown, the most up-to-date physical address is stored in the spare or header record of page 2.
To summarize the distributive mapping table that is shown in FIG. 5:
the first four bytes of the header record can be used to store the physical address of the updated data for the block
the first two bytes of page 0 will be used to indicate which page contains the most up-to-date address
as an example, the INDX shown indicates that the third page will be read and that the physical address PBA4 will be obtained. The new address for this block will be found in location PBA4.
FIG. 6 shows an example of an initial write operation; immediately after the initialization, all bytes will contain “1” values, while data will be written as “data 0”, “data 1”, etc.
FIG. 7 shows the updating of “data 1”, the “1001” record in the header record of “data 0” is the location of the block having the new “data 1”. The new “data 1” is placed in the first empty physical block at “1001”, the INDX is still at page 0.
FIG. 8 shows that block 1001 contains the updated data.
FIG. 9 shows updating data 1 again; the new physical address is now in block 1002, which is recorded in page 1. The INDX is therefore defined at this time as 1111 1111 1111 1110, whereby the page number is defined by the first “1” bit from the right of the string. Hence “FFFE” indicates that the page number is “1”.
Finally, a read operation that reads data 1 comprises (see also the sketch after these steps):
reading page 0 of block 0
as indicated by INDX (of the previous example), read page 1
determine PBA of the most recent data 1, and
read data 1 in that physical address.
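The header-based layout of FIGS. 5 through 9 can be sketched in the same spirit. The struct below, the field names pba and indx, and the fall-back to the block's own address when the 4-byte header field is still erased are assumptions made for the illustration; the INDX decoding (the position of the first "1" bit from the right names the page whose header holds the most recent physical address, so FFFF means page 0 and FFFE means page 1) follows the description above.

#include <stdint.h>

#define ADDR_EMPTY 0xFFFFFFFFu            /* an erased 4-byte header field reads as all ones */

/* Page of FIGS. 5-9: the first four header bytes can hold the physical address (PBA)
 * of the block that replaces this one; page 0 additionally stores the 2-byte INDX. */
typedef struct {
    uint32_t pba;                         /* replacement physical block address, or erased   */
    uint16_t indx;                        /* only meaningful in page 0                       */
    uint8_t  data[512];
} page_t;

typedef struct {
    page_t page[16];
} block_t;

/* INDX keeps one bit per page; the position of the first '1' bit from the right
 * names the page whose header holds the most up-to-date address
 * (0xFFFF -> page 0, 0xFFFE -> page 1, and so on). */
int indx_to_page(uint16_t indx)
{
    int page = 0;
    while (page < 16 && ((indx >> page) & 1u) == 0)
        page++;
    return page;
}

/* Resolve the current physical block address for the data kept in 'blk'. */
uint32_t current_pba(const block_t *blk, uint32_t own_pba)
{
    int page = indx_to_page(blk->page[0].indx);   /* read page 0, decode INDX           */
    if (page >= 16)
        return own_pba;                           /* fully worn index: not expected     */
    uint32_t pba = blk->page[page].pba;           /* that page's header holds the PBA   */
    return (pba == ADDR_EMPTY) ? own_pba : pba;   /* never updated: data is still here  */
}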
The invention provides for:
1. using a new algorithm for mapping logical addresses to physical addresses
2. using a mapping table that is located throughout the flash memory
3. improving the performance of the mass storage system, and
4. increasing the life of the mass storage system.
Although the invention has been described and illustrated with reference to specific illustrative embodiments thereof, it is not intended that the invention be limited to those illustrative embodiments. Those skilled in the art will recognize that variations and modifications can be made without departing from the spirit of the invention. It is therefore intended to include within the invention all such variations and modifications which fall within the scope of the appended claims and equivalents thereof.

Claims (9)

What is claimed is:
1. A distributed mapping scheme for addressing a mass storage system, said mass storage system comprising data pages being addressed by physical addresses, said data pages comprising a data part and a spare part, comprising the steps of:
(a) providing a mapping table, said mapping table being controlled by a host software function, said mapping table having been initiated, said initiation having created logical addresses of said mass storage system, said logical addresses providing a one-to-one correspondence between said logical addresses and physical addresses of said mass storage system, one physical address pointing to a data page of said mass storage system, spare portions of said data page not being empty pointing to a next data page belonging to a chain of data pages associated with said physical address, a first data page of said chain of data pages comprising an empty spare portion being a data page containing usable data; and
(b) performing a data update operation of said data part of a data page, comprising the steps of:
(i) storing a physical address of said data page;
(ii) identifying the logical address corresponding with said physical address of said data page;
(iii) copying the data part of said data page to a buffer;
(iv) updating the data in the buffer with new data;
(v) locating a new data page of said mass storage system to place the data from the updated buffer, providing a physical address being associated with said new data page;
(vi) writing the new data from the buffer to the data portion of said new data page, leaving the spare part of said new data page empty; and
(vii) entering the physical address of said new data page in the spare portion of a last data page of said chain of data pages associated with said physical address.
2. The distributed mapping scheme of claim 1, with additional steps of reading retrievable data of said data page from said mass storage system by identifying said page of said mass storage system having an empty spare part, said additional steps comprising the steps of:
providing the physical address at said page of said mass storage system; and
locating, for said physical address, a data page that has no entry in the spare part of a data page by searching through the chain of data pages associated with said physical address, identifying the page that has not been updated, said page comprising said retrievable data of the physical address of said page of said mass storage system.
3. A method of implementing a distributed mapping scheme for addressing a mass storage system, comprising the steps of:
(a) providing a host software function, said host software function providing a mapping table, said mapping table being controlled by said host software function;
initiating said mapping table, creating logical addresses of said mass storage system, said logical addresses providing a one-to-one correspondence between said logical addresses and physical addresses of one data page of said mass storage system, said one data page comprising a data part and a spare part, one physical address pointing to a data page of said mass storage system, spare portions of said data page not being empty pointing to a next data page belonging to a chain of data pages associated with said physical address, a first data page of said chain of data pages comprising an empty spare portion being a data page containing usable data;
(b) performing a data update operation of said data part of a data page, comprising the steps of:
(i) storing a physical address of said data page;
(ii) identifying the logical address corresponding with said physical address of said data page;
(iii) copying the data part of said data page to a buffer;
(iv) updating the data in the buffer with new data;
(v) locating a new data page of said mass storage system to place the data from the updated buffer, providing a physical address being associated with said new data page;
(vi) writing the new data from the buffer to the data portion of said new data page, leaving the spare part of said new data page empty; and
(vii) entering the physical address of said new data page in the spare portion of a last data page of said chain of data pages associated with said physical address.
4. The method of claim 3, with additional steps of reading retrievable data of said data page from said mass storage system by identifying said page of said mass storage system having an empty spare part, said additional steps comprising the steps of:
providing the physical address at said page of said mass storage system; and
locating, for said physical address, a data page that has no entry in the spare part of a data page by searching through the chain of data pages associated with said physical address, identifying the page that has not been updated, said page comprising said retrievable data of the physical address of said page of said mass storage system.
5. A flash memory for use as a mass storage system, comprising the functions of:
(a) providing a host software function, said host software function providing a mapping table, said mapping table being controlled by said host software function;
initiating said mapping table, creating logical addresses of said mass storage system, said logical addresses providing a one-to-one correspondence between said logical addresses and physical addresses of one data page of said mass storage system, said one data page comprising a data segment and a spare segment, one physical address pointing to a data page of said mass storage system, spare portions of said data page not being empty pointing to a next data page belonging to a chain of data pages associated with said physical address, a first data page of said chain of data pages comprising an empty spare portion being a data page containing usable data;
(b) performing a data update operation of said data segment of a data page, comprising the steps of:
(i) storing a physical address of said data page;
(ii) identifying the logical address corresponding with said physical address of said data page;
(iii) copying the data segment of said data page to a buffer;
(iv) updating the data in the buffer with new data;
(v) locating a new data page of said mass storage system to place the data from the updated buffer, providing a physical address being associated with said new data page;
(vi) writing the new data from the buffer to the data portion of said new data page, leaving the spare segment of said new data page empty; and
(vii) entering the physical address of said new data page in the spare portion of a last data page of said chain of data pages associated with said physical address.
6. A flash memory of claim 5, with additional functions of reading retrievable data of said data page from said mass storage system by identifying said page of said mass storage system having an empty spare segment, said additional functions comprising the steps of:
providing the physical address at said page of said mass storage system;
locating, for said physical address, a data page that has no entry in the spare segment of a data page by searching through the chain of data pages associated with said physical address, identifying the page that has not been updated, said page comprising said retrievable data of the physical address of said page of said mass storage system.
7. A flash memory for the use as a mass storage system, comprising:
(1) a plurality of blocks for storing data in said flash memory;
(2) each of said blocks comprising a plurality of pages; each of said pages comprising a header segment and a data segment; and
(3) wherein an address mapping table for said flash memory is distributed among said header segments, said address mapping table being supported by a data update operation of said data segment of a data page, said address mapping table comprising:
(i) data pages being addressed by physical addresses;
(ii) said data pages comprising a data segment and a spare segment;
(iii) being controlled by a host software function;
(iv) having been initiated, having created logical addresses of said mass storage system;
(v) said logical addresses providing a one-to-one correspondence between said logical addresses and physical addresses of said mass storage system;
(vi) one physical address pointing to a data page of said mass storage system;
(vii) spare portions of said data page not being empty pointing to a next data page belonging to a chain of data pages associated with said physical address; and
(viii) a first data page of said chain of data pages comprising an empty spare portion being a data page containing usable data.
8. The flash memory of claim 7, said data update operation of said data segment of a data page comprising the steps of:
storing a physical address of said data page;
identifying the logical address corresponding with said physical address of said data page;
copying the data segment of said data page to a buffer;
updating the data in the buffer with new data;
locating a new data page of said mass storage system to place the data from the updated buffer, providing a physical address being associated with said new data page;
writing the new data from the buffer to the data portion of said new data page, leaving the spare segment of said new data page empty; and
entering the physical address of said new data page in the spare portion of a last data page of said chain of data pages associated with said physical address.
9. The flash memory of claim 7, with additional functions of reading retrievable data of said data page from said mass storage system by identifying said page of said mass storage system having an empty spare segment, said additional functions comprising the steps of:
providing the physical address at said page of said mass storage system; and
locating, for said physical address, a data page that has no entry in the spare segment of a data page by searching through the chain of data pages associated with said physical address, identifying the page that has not been updated, said page comprising said retrievable data of the physical address of said page of said mass storage system.
US10/054,560 2002-01-22 2002-01-22 Distributed mapping scheme for mass storage system Expired - Lifetime US6675281B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/054,560 US6675281B1 (en) 2002-01-22 2002-01-22 Distributed mapping scheme for mass storage system

Publications (1)

Publication Number Publication Date
US6675281B1 true US6675281B1 (en) 2004-01-06

Family

ID=29731544

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/054,560 Expired - Lifetime US6675281B1 (en) 2002-01-22 2002-01-22 Distributed mapping scheme for mass storage system

Country Status (1)

Country Link
US (1) US6675281B1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4577274A (en) * 1983-07-11 1986-03-18 At&T Bell Laboratories Demand paging scheme for a multi-ATB shared memory processing system
US6230233B1 (en) 1991-09-13 2001-05-08 Sandisk Corporation Wear leveling techniques for flash EEPROM systems
US5734816A (en) * 1993-03-11 1998-03-31 International Business Machines Corporation Nonvolatile memory with flash erase capability
US5740396A (en) 1995-02-16 1998-04-14 Mitsubishi Denki Kabushiki Kaisha Solid state disk device having a flash memory accessed by utilizing an address conversion table to convert sector address information to a physical block number
US5742934A (en) 1995-09-13 1998-04-21 Mitsubishi Denki Kabushiki Kaisha Flash solid state disk card with selective use of an address conversion table depending on logical and physical sector numbers
US6000006A (en) 1997-08-25 1999-12-07 Bit Microsystems, Inc. Unified re-map and cache-index table with dual write-counters for wear-leveling of non-volatile flash RAM mass storage

Cited By (112)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040083335A1 (en) * 2002-10-28 2004-04-29 Gonzalez Carlos J. Automated wear leveling in non-volatile storage systems
US7552272B2 (en) 2002-10-28 2009-06-23 Sandisk Corporation Automated wear leveling in non-volatile storage systems
US7120729B2 (en) * 2002-10-28 2006-10-10 Sandisk Corporation Automated wear leveling in non-volatile storage systems
US7913097B2 (en) 2003-04-22 2011-03-22 Seiko Epson Corporation Fiscal data recorder programmed to write only non-blank values to memory
US20040255141A1 (en) * 2003-04-22 2004-12-16 Hodder Leonard B. Fiscal data recorder
US20090182640A1 (en) * 2003-04-22 2009-07-16 Seiko Epson Corporation Fiscal Data Recorder
US7523320B2 (en) * 2003-04-22 2009-04-21 Seiko Epson Corporation Fiscal data recorder with protection circuit and tamper-proof seal
EP1562121A3 (en) * 2004-02-03 2007-12-05 Samsung Electronics Co., Ltd. Data management apparatus and method used for flash memory
EP1562121A2 (en) 2004-02-03 2005-08-10 Samsung Electronics Co., Ltd. Data management apparatus and method used for flash memory
US20050169058A1 (en) * 2004-02-03 2005-08-04 Samsung Electronics Co., Ltd. Data management apparatus and method used for flash memory
US20130094340A1 (en) * 2004-06-03 2013-04-18 Akonia Holographs, LLC Data protection system
US20080222492A1 (en) * 2004-06-03 2008-09-11 Inphase Technologies, Inc. Data protection system
US20060047889A1 (en) * 2004-08-31 2006-03-02 Junko Sasaki Memory device and controlling method for nonvolatile memory
US7647470B2 (en) * 2004-08-31 2010-01-12 Sony Corporation Memory device and controlling method for elongating the life of nonvolatile memory
US20060059297A1 (en) * 2004-09-15 2006-03-16 Kenichi Nakanishi Memory control apparatus, memory control method and program
US8060684B2 (en) * 2004-09-15 2011-11-15 Sony Corporation Memory control apparatus, memory control method and program
US8122185B2 (en) 2006-05-08 2012-02-21 Siliconsystems, Inc. Systems and methods for measuring the useful life of solid-state storage devices
US8312207B2 (en) 2006-05-08 2012-11-13 Siliconsystems, Inc. Systems and methods for measuring the useful life of solid-state storage devices
US20070260811A1 (en) * 2006-05-08 2007-11-08 Merry David E Jr Systems and methods for measuring the useful life of solid-state storage devices
US20100122200A1 (en) * 2006-05-08 2010-05-13 Siliconsystems, Inc. Systems and methods for measuring the useful life of solid-state storage devices
US7653778B2 (en) 2006-05-08 2010-01-26 Siliconsystems, Inc. Systems and methods for measuring the useful life of solid-state storage devices
US8549236B2 (en) 2006-12-15 2013-10-01 Siliconsystems, Inc. Storage subsystem with multiple non-volatile memory arrays to protect against data losses
US20080147962A1 (en) * 2006-12-15 2008-06-19 Diggs Mark S Storage subsystem with multiple non-volatile memory arrays to protect against data losses
US20100017542A1 (en) * 2007-02-07 2010-01-21 Siliconsystems, Inc. Storage subsystem with configurable buffer
US7596643B2 (en) 2007-02-07 2009-09-29 Siliconsystems, Inc. Storage subsystem with configurable buffer
US20080189452A1 (en) * 2007-02-07 2008-08-07 Merry David E Storage subsystem with configurable buffer
US8151020B2 (en) 2007-02-07 2012-04-03 Siliconsystems, Inc. Storage subsystem with configurable buffer
US20080195828A1 (en) * 2007-02-13 2008-08-14 Samsung Electronics Co., Ltd. Methods of writing data in a non-volatile memory device to place data in an in-place arrangement
US20080228992A1 (en) * 2007-03-01 2008-09-18 Douglas Dumitru System, method and apparatus for accelerating fast block devices
US10248359B2 (en) 2007-03-01 2019-04-02 Douglas Dumitru System, method and apparatus for accelerating fast block devices
US8380944B2 (en) 2007-03-01 2013-02-19 Douglas Dumitru Fast block device and methodology
WO2008106686A1 (en) * 2007-03-01 2008-09-04 Douglas Dumitru Fast block device and methodology
US20080215834A1 (en) * 2007-03-01 2008-09-04 Douglas Dumitru Fast block device and methodology
US8799563B2 (en) 2007-10-22 2014-08-05 Densbits Technologies Ltd. Methods for adaptively programming flash memory devices and flash memory systems incorporating same
US9104550B2 (en) 2007-12-05 2015-08-11 Densbits Technologies Ltd. Physical levels deterioration based determination of thresholds useful for converting cell physical levels into cell logical values in an array of digital memory cells
US8751726B2 (en) 2007-12-05 2014-06-10 Densbits Technologies Ltd. System and methods employing mock thresholds to generate actual reading thresholds in flash memory devices
US20100146191A1 (en) * 2007-12-05 2010-06-10 Michael Katz System and methods employing mock thresholds to generate actual reading thresholds in flash memory devices
US8843698B2 (en) 2007-12-05 2014-09-23 Densbits Technologies Ltd. Systems and methods for temporarily retiring memory portions
US8782500B2 (en) 2007-12-12 2014-07-15 Densbits Technologies Ltd. Systems and methods for error correction and decoding on multi-level physical media
US8762800B1 (en) 2008-01-31 2014-06-24 Densbits Technologies Ltd. Systems and methods for handling immediate data errors in flash memory
US8078918B2 (en) 2008-02-07 2011-12-13 Siliconsystems, Inc. Solid state storage subsystem that maintains and provides access to data reflective of a failure risk
US20090204852A1 (en) * 2008-02-07 2009-08-13 Siliconsystems, Inc. Solid state storage subsystem that maintains and provides access to data reflective of a failure risk
US7962792B2 (en) 2008-02-11 2011-06-14 Siliconsystems, Inc. Interface for enabling a host computer to retrieve device monitor data from a solid state storage subsystem
US20090204853A1 (en) * 2008-02-11 2009-08-13 Siliconsystems, Inc. Interface for enabling a host computer to retrieve device monitor data from a solid state storage subsystem
US8230435B2 (en) 2008-02-12 2012-07-24 International Business Machines Corporation Authenticating a processing system accessing a resource
US9442762B2 (en) 2008-02-12 2016-09-13 International Business Machines Corporation Authenticating a processing system accessing a resource
US20090204972A1 (en) * 2008-02-12 2009-08-13 International Business Machines Corporation Authenticating a processing system accessing a resource
US8640138B2 (en) 2008-02-12 2014-01-28 International Business Machines Corporation Authenticating a processing system accessing a resource via a resource alias address
US8972472B2 (en) 2008-03-25 2015-03-03 Densbits Technologies Ltd. Apparatus and methods for hardware-efficient unbiased rounding
US8725938B2 (en) * 2008-11-10 2014-05-13 Fusion-Io, Inc. Apparatus, system, and method for testing physical regions in a solid-state storage device
US8275933B2 (en) 2008-11-10 2012-09-25 Fusion-10, Inc Apparatus, system, and method for managing physical regions in a solid-state storage device
US20100122019A1 (en) * 2008-11-10 2010-05-13 David Flynn Apparatus, system, and method for managing physical regions in a solid-state storage device
US20130036262A1 (en) * 2008-11-10 2013-02-07 Fusion-Io, Inc. Apparatus, system, and method for testing physical regions in a solid-state storage device
CN101526922B (en) * 2009-04-03 2011-09-07 深圳市宝捷信科技有限公司 Flash data access method and device thereof
US8850296B2 (en) 2009-04-06 2014-09-30 Densbits Technologies Ltd. Encoding method and system, decoding method and system
US20100253555A1 (en) * 2009-04-06 2010-10-07 Hanan Weingarten Encoding method and system, decoding method and system
US8819385B2 (en) 2009-04-06 2014-08-26 Densbits Technologies Ltd. Device and method for managing a flash memory
US20100293321A1 (en) * 2009-05-12 2010-11-18 Hanan Weingarten Systems and method for flash memory management
US8566510B2 (en) * 2009-05-12 2013-10-22 Densbits Technologies Ltd. Systems and method for flash memory management
US9330767B1 (en) 2009-08-26 2016-05-03 Avago Technologies General Ip (Singapore) Pte. Ltd. Flash memory module and method for programming a page of flash memory cells
US8995197B1 (en) 2009-08-26 2015-03-31 Densbits Technologies Ltd. System and methods for dynamic erase and program control for flash memory device memories
US20110082995A1 (en) * 2009-10-06 2011-04-07 Canon Kabushiki Kaisha Information processing apparatus
US8730729B2 (en) 2009-10-15 2014-05-20 Densbits Technologies Ltd. Systems and methods for averaging error rates in non-volatile devices and storage systems
US8724387B2 (en) 2009-10-22 2014-05-13 Densbits Technologies Ltd. Method, system, and computer readable medium for reading and programming flash memory cells using multiple bias voltages
US20110153919A1 (en) * 2009-12-22 2011-06-23 Erez Sabbag Device, system, and method for reducing program/read disturb in flash arrays
US9037777B2 (en) 2009-12-22 2015-05-19 Densbits Technologies Ltd. Device, system, and method for reducing program/read disturb in flash arrays
US8745317B2 (en) 2010-04-07 2014-06-03 Densbits Technologies Ltd. System and method for storing information in a multi-level cell memory
US8850297B1 (en) 2010-07-01 2014-09-30 Densbits Technologies Ltd. System and method for multi-dimensional encoding and decoding
US8964464B2 (en) 2010-08-24 2015-02-24 Densbits Technologies Ltd. System and method for accelerated sampling
US8756361B1 (en) * 2010-10-01 2014-06-17 Western Digital Technologies, Inc. Disk drive modifying metadata cached in a circular buffer when a write operation is aborted
US8954664B1 (en) 2010-10-01 2015-02-10 Western Digital Technologies, Inc. Writing metadata files on a disk
US9063878B2 (en) 2010-11-03 2015-06-23 Densbits Technologies Ltd. Method, system and computer readable medium for copy back
US8850100B2 (en) 2010-12-07 2014-09-30 Densbits Technologies Ltd. Interleaving codeword portions between multiple planes and/or dies of a flash memory device
US8990665B1 (en) 2011-04-06 2015-03-24 Densbits Technologies Ltd. System, method and computer program product for joint search of a read threshold and soft decoding
US9195592B1 (en) 2011-05-12 2015-11-24 Densbits Technologies Ltd. Advanced management of a non-volatile memory
US9501392B1 (en) 2011-05-12 2016-11-22 Avago Technologies General Ip (Singapore) Pte. Ltd. Management of a non-volatile memory module
US9372792B1 (en) 2011-05-12 2016-06-21 Avago Technologies General Ip (Singapore) Pte. Ltd. Advanced management of a non-volatile memory
US9396106B2 (en) 2011-05-12 2016-07-19 Avago Technologies General Ip (Singapore) Pte. Ltd. Advanced management of a non-volatile memory
US8996790B1 (en) 2011-05-12 2015-03-31 Densbits Technologies Ltd. System and method for flash memory management
US9110785B1 (en) 2011-05-12 2015-08-18 Densbits Technologies Ltd. Ordered merge of data sectors that belong to memory space portions
US8756382B1 (en) 2011-06-30 2014-06-17 Western Digital Technologies, Inc. Method for file based shingled data storage utilizing multiple media types
US8612706B1 (en) 2011-12-21 2013-12-17 Western Digital Technologies, Inc. Metadata recovery in a disk drive
US8996788B2 (en) 2012-02-09 2015-03-31 Densbits Technologies Ltd. Configurable flash interface
US8947941B2 (en) 2012-02-09 2015-02-03 Densbits Technologies Ltd. State responsive operations relating to flash memory cells
US8996793B1 (en) 2012-04-24 2015-03-31 Densbits Technologies Ltd. System, method and computer readable medium for generating soft information
US8838937B1 (en) 2012-05-23 2014-09-16 Densbits Technologies Ltd. Methods, systems and computer readable medium for writing and reading data
US9431118B1 (en) 2012-05-30 2016-08-30 Avago Technologies General Ip (Singapore) Pte. Ltd. System, method and computer program product for processing read threshold information and for reading a flash memory module
US8879325B1 (en) 2012-05-30 2014-11-04 Densbits Technologies Ltd. System, method and computer program product for processing read threshold information and for reading a flash memory module
US20140025921A1 (en) * 2012-07-19 2014-01-23 Jmicron Technology Corp. Memory control method utilizing main memory for address mapping and related memory control circuit
US9921954B1 (en) 2012-08-27 2018-03-20 Avago Technologies General Ip (Singapore) Pte. Ltd. Method and system for split flash memory management between host and storage controller
US9368225B1 (en) 2012-11-21 2016-06-14 Avago Technologies General Ip (Singapore) Pte. Ltd. Determining read thresholds based upon read error direction statistics
US9069659B1 (en) 2013-01-03 2015-06-30 Densbits Technologies Ltd. Read threshold determination using reference read threshold
US9136876B1 (en) 2013-06-13 2015-09-15 Densbits Technologies Ltd. Size limited multi-dimensional decoding
US9413491B1 (en) 2013-10-08 2016-08-09 Avago Technologies General Ip (Singapore) Pte. Ltd. System and method for multiple dimension decoding and encoding a message
US9786388B1 (en) 2013-10-09 2017-10-10 Avago Technologies General Ip (Singapore) Pte. Ltd. Detecting and managing bad columns
US9348694B1 (en) 2013-10-09 2016-05-24 Avago Technologies General Ip (Singapore) Pte. Ltd. Detecting and managing bad columns
US9397706B1 (en) 2013-10-09 2016-07-19 Avago Technologies General Ip (Singapore) Pte. Ltd. System and method for irregular multiple dimension decoding and encoding
US9536612B1 (en) 2014-01-23 2017-01-03 Avago Technologies General Ip (Singapore) Pte. Ltd Digital signaling processing for three dimensional flash memory arrays
US10120792B1 (en) 2014-01-29 2018-11-06 Avago Technologies General Ip (Singapore) Pte. Ltd. Programming an embedded flash storage device
US9542262B1 (en) 2014-05-29 2017-01-10 Avago Technologies General Ip (Singapore) Pte. Ltd. Error correction
US9892033B1 (en) 2014-06-24 2018-02-13 Avago Technologies General Ip (Singapore) Pte. Ltd. Management of memory units
US9407291B1 (en) 2014-07-03 2016-08-02 Avago Technologies General Ip (Singapore) Pte. Ltd. Parallel encoding method and system
US9972393B1 (en) 2014-07-03 2018-05-15 Avago Technologies General Ip (Singapore) Pte. Ltd. Accelerating programming of a flash memory module
US9584159B1 (en) 2014-07-03 2017-02-28 Avago Technologies General Ip (Singapore) Pte. Ltd. Interleaved encoding
US9449702B1 (en) 2014-07-08 2016-09-20 Avago Technologies General Ip (Singapore) Pte. Ltd. Power management
US9524211B1 (en) 2014-11-18 2016-12-20 Avago Technologies General Ip (Singapore) Pte. Ltd. Codeword management
US10305515B1 (en) 2015-02-02 2019-05-28 Avago Technologies International Sales Pte. Limited System and method for encoding using multiple linear feedback shift registers
US10628255B1 (en) 2015-06-11 2020-04-21 Avago Technologies International Sales Pte. Limited Multi-dimensional decoding
US9851921B1 (en) 2015-07-05 2017-12-26 Avago Technologies General Ip (Singapore) Pte. Ltd. Flash memory chip processing
US20190138411A1 (en) * 2015-09-14 2019-05-09 Hewlett Packard Enterprise Development Lp Memory location remapping and wear-levelling
US10802936B2 (en) * 2015-09-14 2020-10-13 Hewlett Packard Enterprise Development Lp Memory location remapping and wear-levelling
US9954558B1 (en) 2016-03-03 2018-04-24 Avago Technologies General Ip (Singapore) Pte. Ltd. Fast decoding of data stored in a flash memory

Similar Documents

Publication Publication Date Title
US6675281B1 (en) Distributed mapping scheme for mass storage system
US8316209B2 (en) Robust index storage for non-volatile memory
US20050015557A1 (en) Nonvolatile memory unit with specific cache
KR100789406B1 (en) Flash memory system and garbage collection method thereof
US7594062B2 (en) Method for changing data of a data block in a flash memory having a mapping area, a data area and an alternative area
US6115785A (en) Direct logical block addressing flash memory mass storage architecture
US6725321B1 (en) Memory system
US8301826B2 (en) Adaptive mode switching of flash memory address mapping based on host usage characteristics
US8312204B2 (en) System and method for wear leveling in a data storage device
US6772274B1 (en) Flash memory system and method implementing LBA to PBA correlation within flash memory array
US8041884B2 (en) Controller for non-volatile memories and methods of operating the memory controller
US8386698B2 (en) Data accessing method for flash memory and storage system and controller using the same
US7680977B2 (en) Page and block management algorithm for NAND flash
US20140122774A1 (en) Method for Managing Data of Solid State Storage with Data Attributes
US20100325351A1 (en) Memory system having persistent garbage collection
US10061704B2 (en) Systems and methods for managing cache of a data storage device
KR20070096429A (en) Fast mounting for a file system on nand flash memory
US8261013B2 (en) Method for even utilization of a plurality of flash memory chips
KR20010037155A (en) Flash file system
US20070005929A1 (en) Method, system, and article of manufacture for sector mapping in a flash device
EP2264602A1 (en) Memory device for managing the recovery of a non volatile memory
KR100745163B1 (en) Method for managing flash memory using dynamic mapping table
US11416151B2 (en) Data storage device with hierarchical mapping information management, and non-volatile memory control method
JP4558054B2 (en) Memory system
CN112559384B (en) Dynamic partitioning method for hybrid solid-state disk based on nonvolatile memory

Legal Events

Date Code Title Description
AS Assignment

Owner name: ICREATE TECHNOLOGIES CORPORATION, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OH, YAW;TUAN, JEN CHIEH;REEL/FRAME:012577/0805;SIGNING DATES FROM 20011214 TO 20020108

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: TM TECHNOLOGY INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ICREATE TECHNOLOGIES CORPORATION;REEL/FRAME:020317/0800

Effective date: 20071218

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES FILED (ORIGINAL EVENT CODE: PMFP); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PMFG); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees
REIN Reinstatement after maintenance fee payment confirmed
FP Lapsed due to failure to pay maintenance fee

Effective date: 20160106

FPAY Fee payment

Year of fee payment: 12

SULP Surcharge for late payment
PRDP Patent reinstated due to the acceptance of a late maintenance fee

Effective date: 20160725

STCF Information on status: patent grant

Free format text: PATENTED CASE