US20120331267A1 - Method for managing a memory apparatus - Google Patents

Method for managing a memory apparatus

Info

Publication number
US20120331267A1
Authority
US
United States
Prior art keywords
page
page address
block
processing unit
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/604,654
Inventor
Tsai-Cheng Lin
Chun-Kun Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Silicon Motion Inc
Original Assignee
Silicon Motion Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Silicon Motion Inc
Priority to US13/604,654
Assigned to SILICON MOTION INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEE, CHUN-KUN; LIN, TSAI-CHENG
Publication of US20120331267A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1009Address translation using page tables, e.g. page table structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7207Details relating to flash memory management management of metadata or control data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7208Multiple device management, e.g. distributing data over multiple flash devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7209Validity control, e.g. using flags, time stamps or sequence numbers

Definitions

  • the present invention relates to flash memory control, and more particularly, to a method for managing a memory apparatus, and to an associated memory apparatus.
  • While a host is accessing a memory apparatus (e.g. a solid state drive, SSD), the host typically sends an access command and at least one corresponding logical address to the memory apparatus.
  • the controller of the memory apparatus receives the logical address and translates the logical address into a physical address by utilizing a logical-to-physical address linking table.
  • the controller accesses at least one physical memory element (or memory component) of the memory apparatus by utilizing the physical address.
  • the memory element can be implemented with one or more flash memory chips (which can be referred to as flash chips for simplicity).
  • the logical-to-physical address linking table can be built in accordance with a memory unit in the memory element.
  • the logical-to-physical address linking table can be built by blocks or by pages.
  • the logical-to-physical address linking table can be referred to as the logical-to-physical block address linking table.
  • the logical-to-physical address linking table can be referred to as the logical-to-physical page address linking table.
  • a logical-to-physical page address linking table comprising linking relationships about pages of a plurality of blocks (or all blocks) in the memory apparatus can be referred to as the global page address linking table.
  • the memory element has X physical blocks, and each physical block has Y physical pages.
  • the associated logical-to-physical block address linking table can be built by reading a logical block address stored in a page of each physical block and recording the relationship between the physical block and the associated logical block.
  • X pages respectively corresponding to the X physical blocks have to be read, where the time required for this is assumed to be x seconds.
  • the associated global page address linking table can be built by reading a logical page address stored in each physical page of all physical blocks and recording the relationship between the physical page and the associated logical page.
  • In order to build the global page address linking table, at least X×Y pages have to be read, requiring x×Y seconds. If a block has 1024 pages, the time required for building the global page address linking table is 1024 times the time required for building the logical-to-physical block address linking table, i.e. 1024×x seconds, which is an unacceptably long processing time. That is, when the global page address linking table is implemented in this way, the overall performance of accessing the memory apparatus is degraded. Therefore, a novel method for efficiently building the logical-to-physical address linking table is required, together with related methods for managing a memory apparatus operated under the novel method.
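  • The cost gap can be summarized as follows; this is a rough estimate under the assumptions stated above (X physical blocks, Y physical pages per block, and x seconds to read X pages):

```latex
\begin{aligned}
T_{\text{block table}} &\approx x &&\text{(one page read per physical block, i.e. } X \text{ reads)}\\
T_{\text{global page table}} &\approx x \cdot Y &&\text{(} X \cdot Y \text{ page reads in total)}\\
T_{\text{global page table}} / T_{\text{block table}} &\approx Y = 1024 &&\text{(for blocks of 1024 pages)}
\end{aligned}
```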
  • a method for managing a memory apparatus is provided, where the memory apparatus comprises at least one non-volatile (NV) memory element, each of which comprises a plurality of blocks.
  • the method comprises: receiving a first access command from a host; analyzing the first access command to obtain a first host address; linking the first host address to a physical block; receiving a second access command from the host; analyzing the second access command to obtain a second host address; and linking the second host address to the physical block, wherein a difference value of the first host address and the second host address is greater than a number of pages of the physical block.
  • a method for managing a memory apparatus is also provided, where the memory apparatus comprises at least one NV memory element, each of which comprises a plurality of blocks.
  • the method comprises: receiving a first access command from a host; analyzing the first access command to obtain a first host address; linking the first host address to at least a page of a first physical block; receiving a second access command from the host; analyzing the second access command to obtain a second host address; and linking the second host address to at least a page of a second physical block that is different from the first physical block, wherein a difference value of the first host address and the second host address is smaller than a number of pages of the physical block.
  • the method and apparatus of the present invention can greatly reduce the time required for building logical-to-physical address linking table(s), such as the time required for building a global page address linking table. Therefore, the present invention provides better performance than the related art.
  • the method and apparatus of the present invention can record usage information while accessing the pages, and can therefore efficiently manage the usage of all blocks according to the usage information. As a result, the arrangement of the spare region and the data region can be optimized.
  • FIG. 1 is a block diagram of a memory apparatus according to a first embodiment of the invention.
  • FIG. 2A illustrates a local page address linking table within a block of one of the NV memory elements shown in FIG. 1 , where the NV memory element of this embodiment is a flash chip.
  • FIG. 2B compares the one-dimensional (1-D) array illustration and the two-dimensional (2-D) array illustration of the local page address linking table shown in FIG. 2A .
  • FIGS. 3A-3F respectively illustrate exemplary versions of a global page address linking table of the memory apparatus shown in FIG. 1 according to an embodiment of the present invention.
  • FIG. 4 illustrates a local page address linking table within a block of the flash chip shown in FIG. 2A according to an embodiment of the present invention.
  • FIGS. 5A-5B respectively illustrate exemplary versions of the global page address linking table of the memory apparatus shown in FIG. 1 according to the embodiment shown in FIG. 4 .
  • FIG. 6 illustrates an arrangement of one of the NV memory elements shown in FIG. 1 according to an embodiment of the present invention, where the NV memory element of this embodiment is a flash chip.
  • FIGS. 7A-7D illustrate physical addresses of the NV memory elements shown in FIG. 1 according to an embodiment of the invention, where the NV memory elements of this embodiment are a plurality of flash chips.
  • FIG. 8 illustrates a data region and a spare region for managing the flash chips shown in FIGS. 7A-7D .
  • FIGS. 9A-9D respectively illustrate exemplary versions of a global page address linking table of the embodiment shown in FIGS. 7A-7D .
  • FIGS. 10A-10F respectively illustrate exemplary versions of a valid page count table of the embodiment shown in FIGS. 7A-7D .
  • FIG. 11 illustrates a valid-page-position table of the flash chips shown in FIGS. 7A-7D according to an embodiment of the present invention.
  • FIG. 1 illustrates a block diagram of a memory apparatus 100 according to a first embodiment of the invention.
  • the memory apparatus 100 comprises a processing unit 110 , a volatile memory 120 , a transmission interface 130 , a plurality of non-volatile (NV) memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N (e.g. flash chips), and a bus 150 .
  • a host can be arranged to access the memory apparatus 100 through the transmission interface 130 after the transmission interface 130 is coupled to the host.
  • the host can represent a personal computer such as a laptop computer or a desktop computer.
  • the processing unit 110 is arranged to manage the memory apparatus 100 according to a program code (not shown in FIG. 1 ) embedded in the processing unit 110 or received from outside the processing unit 110 .
  • the program code can be a hardware code embedded in the processing unit 110 , such as a ROM code.
  • the program code can be a firmware code received from outside the processing unit 110 .
  • the processing unit 110 is utilized for controlling the volatile memory 120 , the transmission interface 130 , the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N, and the bus 150 .
  • the processing unit 110 of this embodiment can be an ARM processor or an ARC processor. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to different variations of this embodiment, the processing unit 110 can be other kinds of processors.
  • the volatile memory 120 is utilized for storing a global page address linking table, data accessed by the host (not shown), and other required information for accessing the memory apparatus 100 .
  • the volatile memory 120 of this embodiment can be a DRAM or an SRAM. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to different variations of this embodiment, the volatile memory 120 can be other kinds of volatile memories.
  • the transmission interface 130 shown in FIG. 1 is utilized for transmitting data and commands between the host and the memory apparatus 100 , where the transmission interface 130 complies with a particular communication standard such as the Serial Advanced Technology Attachment (SATA) standard, the Parallel Advanced Technology Attachment (PATA) standard, or the Universal Serial Bus (USB) standard.
  • the memory apparatus 100 is a solid state drive (SSD) installed within the host, and the particular communication standard can be some communication standard typically utilized for implementing internal communication of the host, such as the SATA standard or the PATA standard.
  • the memory apparatus 100 is an SSD and is positioned outside the host, and the particular communication standard can be some communication standard typically utilized for implementing external communication of the host, such as the USB standard.
  • the memory apparatus 100 can be a portable memory device such as a memory card, and the particular communication standard can be some communication standards typically utilized for implementing an input/output interface of a memory card, such as the Secure Digital (SD) standard or the Compact Flash (CF) standard.
  • the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N are utilized for storing data, where the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N can be, but not limited to, NAND flash chips.
  • the bus 150 is utilized for coupling the processing unit 110 , the volatile memory 120 , the transmission interface 130 , and the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N, and for communication thereof.
  • the processing unit 110 can provide at least one block of the memory apparatus 100 with at least one local page address linking table within the memory apparatus 100 , where the local page address linking table comprises linking relationships between physical page addresses and logical page addresses of a plurality of pages.
  • the processing unit 110 builds the local page address linking table during programming/writing operations of the memory apparatus 100 .
  • the processing unit 110 can further build the global page address linking table mentioned above according to the local page address linking table. For example, the processing unit 110 reads a first linking relationship between a first physical page address and a first logical page address from the at least one local page address linking table, and then records the first linking relationship into the global page address linking table.
  • the processing unit 110 can further read a second linking relationship between a second physical page address and the first logical page address from the at least one local page address linking table, and then record the second linking relationship into the global page address linking table in order to update the global page address linking table.
  • the processing unit 110 provides a plurality of blocks of the memory apparatus 100 with a plurality of local page address linking tables within the memory apparatus 100 , respectively. That is, the aforementioned at least one local page address linking table comprises a plurality of local page address linking tables.
  • the processing unit 110 can further build the global page address linking table mentioned above according to the local page address linking tables. More specifically, the processing unit 110 can read one of the local page address linking tables to update the global page address linking table mentioned above. For example, the first linking relationship of a first physical page is read from a first local page address linking table of the local page address linking tables, and the second linking relationship of a second physical page is read from a second local page address linking table of the local page address linking tables. Implementation details of the local page address linking tables are further described by referring to FIG. 2A .
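  • As a minimal sketch of this bookkeeping (the type names, the table sizes, and the UNLINKED sentinel below are illustrative assumptions rather than the patent's literal data layout), the local and global tables can be modeled as plain arrays, and recording a linking relationship is a single array write:

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_COUNT        4096                        /* assumed number of physical blocks    */
#define DATA_PAGES         127                         /* assumed data pages per block         */
#define TOTAL_LOG_PAGES    (BLOCK_COUNT * DATA_PAGES)  /* logical pages covered by the table   */
#define UNLINKED           0xFFFFFFFFu                 /* marks a logical page with no mapping */

/* Local table of one block: field index = physical page offset, content = logical page address. */
typedef struct {
    uint32_t logical_page[DATA_PAGES];
} local_page_table_t;

/* Global table: field index = logical page address, content = physical page address. */
typedef struct {
    uint32_t physical_page[TOTAL_LOG_PAGES];
} global_page_table_t;

/* Record one linking relationship read from a local table into the global table.
 * A later record for the same logical page overwrites the earlier one, so the
 * updated copy of the data supersedes the stale one. */
static void record_link(global_page_table_t *g, uint32_t logical_page, uint32_t physical_page)
{
    if (logical_page < TOTAL_LOG_PAGES)
        g->physical_page[logical_page] = physical_page;
}

static void init_global_table(global_page_table_t *g)
{
    memset(g->physical_page, 0xFF, sizeof g->physical_page);   /* every entry starts unlinked */
}
```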
  • FIG. 2A illustrates a local page address linking table within a block of the NV memory element 140 _ 0 , where the NV memory element 140 _ 0 of this embodiment is referred to as a flash chip 0 for simplicity.
  • the flash chip 0 comprises a plurality of blocks, such as blocks 0 , 1 , 2 , . . . , M in this embodiment.
  • a block is an erasing unit. In other words, when erasing data is required, the processing unit 110 erases all data stored in the block at a time.
  • a block such as the block 0 shown in FIG. 2A , comprises a plurality of pages.
  • the block 0 of the flash chip 0 comprises 128 pages.
  • the pages are divided into two areas, a data area for storing data and a table area for storing a local page address linking table 0 .
  • the pages in the data area of the block can be referred to as the data pages of the block.
  • the page amount of the data area and the page amount of the table area can be determined as required.
  • pages 0 , 1 , 2 , . . . , and 126 are utilized for storing data and the remaining page of the block is utilized for storing the local page address linking table 0 .
  • the data area may comprise less than 127 pages, and the table area may comprise two or more pages.
  • the total page amount of the block, the page amount of the data area, and the page amount of the table area may vary.
  • a page is a programming/writing unit.
  • the processing unit 110 programs/writes a page of data into a page at a time.
  • the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N shown in FIG. 1 are respectively referred to as the flash chips 0 , 1 , . . . , and N, where each block of the NV memory elements 140 _ 0 , 140 _ 1 , . . . , 140 _N may have a local page address linking table.
  • the local page address linking table 0 of the block 0 of the flash chip 0 is illustrated in FIG. 2A since the functions/operations of each local page address linking table are similar to each other.
  • the local page address linking table 0 is built when all the data pages in the block 0 have been programmed, namely fully programmed. Before the data pages in the block 0 are fully programmed, however, the processing unit 110 temporarily stores a temporary local page address linking table 0 in the volatile memory 120 , and further updates the temporary local page address linking table 0 when any linking relationship between a physical page address and a logical page address in the block 0 is changed.
  • the ranking of a field (entry) of the temporary/non-temporary local page address linking table represents a physical page address, and the content of this field represents an associated logical page address.
  • the illustrative table location (i_P, j_P) corresponding to the (i_P*4+j_P)th field represents a physical page address PPN, which can be described as follows:
  • PPN = PBN*DPC + i_P*4 + j_P;
  • where the notation PBN stands for the physical block number of the physical block under discussion, and the notation DPC stands for the data page count of each block (e.g. 127 in this embodiment).
  • for the 1-D array illustration, the illustrative table location i_P corresponding to the i_Pth field represents a physical page address (PBN*DPC+i_P). That is, the above equation can be re-written as follows:
  • PPN = PBN*DPC + i_P.
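  • As a small illustration of the index arithmetic above (the helper names are hypothetical; DPC is 127 in this embodiment), the 2-D and 1-D field positions map to physical page addresses as follows:

```c
#define DPC 127  /* data page count of each block in this embodiment */

/* 2-D illustration: field (i_p, j_p) of physical block PBN maps to PPN = PBN*DPC + i_p*4 + j_p. */
static unsigned ppn_from_2d(unsigned pbn, unsigned i_p, unsigned j_p)
{
    return pbn * DPC + i_p * 4u + j_p;
}

/* 1-D illustration: the i_p-th field of physical block PBN maps to PPN = PBN*DPC + i_p. */
static unsigned ppn_from_1d(unsigned pbn, unsigned i_p)
{
    return pbn * DPC + i_p;
}
```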
  • a range of the logical page addresses in the local page address linking table 0 is not greater than the number of pages in the block 0 (i.e. 128 in this embodiment). This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, a range of the logical page addresses in a local page address linking table such as the local page address linking table 0 can be greater than the number of pages in a block such as the block 0 .
  • the illustrative table location (0, 0) (i.e. the upper-left location) corresponding to the first field represents the physical page address 0x0000
  • the illustrative table location (0, 1) corresponding to the second field represents the physical page address 0x0001
  • the illustrative table location (0, 2) corresponding to the third field represents the physical page address 0x0002
  • the illustrative table location (0, 3) corresponding to the fourth field represents the physical page address 0x0003
  • the illustrative table location (1, 0) corresponding to the fifth field represents the physical page address 0x0004, and so on.
  • the processing unit 110 programs the data 0 and the logical page address 0x0002 into the page 0 of the block 0 of the flash chip 0 , wherein the data 0 is programmed in a data byte region (labeled “DBR”) of the page 0 , and the logical page address 0x0002 is programmed in a spare byte region (labeled “SBR”) of the page 0 as spare information.
  • the processing unit 110 writes the logical page address 0x0002 into the first field of the temporary local page address linking table 0 (or the illustrative table location (0, 0) thereof in this embodiment, i.e. the illustrative table location of the first column and the first row) to thereby indicate that the logical page address 0x0002 links/maps to the page 0 of the block 0 of the flash chip 0 , whose physical page address is 0x0000.
  • the processing unit 110 programs the data 1 and the logical page address 0x0001 into the page 1 of the block 0 of the flash chip 0 , wherein the data 1 is programmed in a data byte region (labeled “DBR”) of the page 1 , and the logical page address 0x0001 is programmed in a spare byte region (labeled “SBR”) of the page 1 as spare information.
  • the processing unit 110 writes the logical page address 0x0001 into the second field of the temporary local page address linking table 0 (or the illustrative table location (0, 1) thereof in this embodiment, i.e. the illustrative table location of the second column and the first row) to thereby indicate that the logical page address 0x0001 links/maps to page 1 of block 0 of flash chip 0 , whose physical page address is 0x0001.
  • the processing unit 110 programs the data 2 and the logical page address 0x0002 into the page 2 of the block 0 , wherein the data 2 is programmed in a data byte region (labeled “DBR”) of the page 2 , and the logical page address 0x0002 is programmed in a spare byte region (labeled “SBR”) of the page 2 as spare information.
  • the processing unit 110 writes the logical page address 0x0002 into the third field of the temporary local page address linking table 0 (or the illustrative table location (0, 2) thereof in this embodiment, i.e. the illustrative table location of the third column and the first row) to thereby indicate that the logical page address 0x0002 links/maps to the page 2 of the block 0 of the flash chip 0 , whose physical page address is 0x0002.
  • a series of logical page addresses {0x0002, 0x0001, 0x0002, 0x0005, 0x0003, 0x0007, 0x0010, 0x0008, . . . , 0x0000, 0x0009, 0x0004} is written in the temporary local page address linking table 0 .
  • the processing unit 110 copies the temporary local page address linking table 0 to build the local page address linking table 0 .
  • the processing unit 110 programs the local page address linking table 0 into the table area (i.e. the remaining page 127 ) of the block 0 of the flash chip 0 in this embodiment.
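  • The per-write bookkeeping described above can be sketched as follows; this is an illustrative outline under the stated assumptions (128-page blocks with page 127 reserved for the table), and flash_program_page() is a hypothetical stand-in for the actual flash driver call:

```c
#include <stdint.h>

#define PAGES_PER_BLOCK 128
#define DATA_PAGES      127   /* pages 0..126 hold data, page 127 holds the local table */

/* Hypothetical driver hook: programs one page, placing `spare_word` in the spare byte region. */
extern void flash_program_page(unsigned block, unsigned page,
                               const void *data, uint32_t spare_word);

/* Temporary local page address linking table kept in volatile memory (e.g. DRAM or SRAM). */
typedef struct {
    uint32_t logical_page[DATA_PAGES];
    unsigned next_page;                  /* next free data page in the block */
} temp_local_table_t;

/* Program one page of host data and update the temporary local table accordingly. */
static int program_data_page(unsigned block, temp_local_table_t *t,
                             const void *data, uint32_t logical_page)
{
    if (t->next_page >= DATA_PAGES)
        return -1;                                        /* block full: flush the table first */
    unsigned page = t->next_page++;
    flash_program_page(block, page, data, logical_page);  /* logical address goes into the SBR */
    t->logical_page[page] = logical_page;                 /* field `page` now holds the logical address */
    return 0;
}

/* Once pages 0..126 are fully programmed, copy the temporary table into the table area (page 127). */
static void flush_local_table(unsigned block, const temp_local_table_t *t)
{
    flash_program_page(block, DATA_PAGES, t->logical_page, UINT32_MAX);
}
```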
  • the processing unit 110 can program a local page address linking table for a portion of data pages in a block, rather than all data pages of the block.
  • the processing unit 110 can program a first local page address linking table for the first portion of data pages, where the first local page address linking table is positioned next to the first portion of data pages.
  • the processing unit 110 can program a second local page address linking table for the second portion of data pages.
  • the second local page address linking table is positioned next to the second portion of data pages.
  • the second local page address linking table is positioned at the end (e.g. the last page) of the specific block.
  • the second local page address linking table is positioned at the beginning (e.g. the first page) of the block next to the specific block.
  • the second local page address linking table is positioned at another page (or other pages) of the block next to the specific block.
  • FIGS. 3A-3F respectively illustrate exemplary versions of the aforementioned global page address linking table of the memory apparatus 100 according to an embodiment of the present invention.
  • the processing unit 110 reads each of the local page address linking tables respectively corresponding to the blocks of the memory apparatus 100 to build the global page address linking table. For example, within the memory apparatus 100 , if only the blocks 0 and 1 of the flash chip 0 have been fully programmed, and if the local page address linking table 0 in the block 0 and the local page address linking table 1 in the block 1 have been built, the processing unit 110 reads the local page address linking tables 0 and 1 to build the global page address linking table.
  • the ranking of a field of the global page address linking table represents a logical page address, and the content of this field represents an associated physical page address.
  • the illustrative table location (i_L, j_L) corresponding to the (i_L*4+j_L)th field represents a logical page address (i_L*4+j_L).
  • the illustrative table location (0, 0) (i.e. the upper-left location) corresponding to the first field represents the logical page address 0x0000
  • the illustrative table location (0, 1) corresponding to the second field represents the logical page address 0x0001
  • the illustrative table location (0, 2) corresponding to the third field represents the logical page address 0x0002
  • the illustrative table location (0, 3) corresponding to the fourth field represents the logical page address 0x0003
  • the illustrative table location (1, 0) corresponding to the fifth field represents the logical page address 0x0004, and so on.
  • When building the global page address linking table, the processing unit 110 reads the first field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0002, and therefore determines that the logical page address 0x0002 links to the page 0 of the block 0 of the flash chip 0 , whose physical page address is 0x0000. As shown in FIG. 3A , the processing unit 110 writes the physical page address 0x0000 (PHY Page 0x0000) into the third field of the global page address linking table (i.e. the illustrative table location (0, 2) of the 2-D array illustration thereof) to indicate that the logical page address 0x0002 (LOG Page 0x0002) links to the physical page address 0x0000.
  • PHY Page 0x0000 Physical Page 0x0000
  • the processing unit 110 reads the second field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0001, and therefore determines that the logical page address 0x0001 links to the page 1 of the block 0 of the flash chip 0 , whose physical page address is 0x0001. As shown in FIG. 3B , the processing unit 110 writes the physical page address 0x0001 into the second field of the global page address linking table to indicate that the logical page address 0x0001 (LOG Page 0x0001) links to the physical page address 0x0001 (PHY Page 0x0001).
  • the processing unit 110 reads the third field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0002, and therefore determines that the logical page address 0x0002 links to the page 2 of the block 0 of the flash chip 0 , whose physical page address is 0x0002. As shown in FIG. 3C , the processing unit 110 writes (or updates) the physical page address 0x0002 into the third field of the global page address linking table to indicate that the logical page address 0x0002 (LOG Page 0x0002) links to the physical page address 0x0002 (PHY Page 0x0002).
  • the processing unit 110 reads the fourth field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0005, and therefore determines that the logical page address 0x0005 links to the page 3 of the block 0 of the flash chip 0 , whose physical page address is 0x0003. As shown in FIG. 3D , the processing unit 110 writes the physical page address 0x0003 into the sixth field of the global page address linking table to indicate that the logical page address 0x0005 (LOG Page 0x0005) links to the physical page address 0x0003 (PHY Page 0x0003).
  • the processing unit 110 reads the fifth field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0003, and therefore determines that the logical page address 0x0003 links to the page 4 of the block 0 of the flash chip 0 , whose physical page address is 0x0004. As shown in FIG. 3E , the processing unit 110 writes the physical page address 0x0004 into the fourth field of the global page address linking table to indicate that the logical page address 0x0003 (LOG Page 0x0003) links to the physical page address 0x0004 (PHY Page 0x0004). Similar operations for the subsequent linking relationships are not repeated in detail. After reading all fields of the local page address linking table 0 shown in FIG. 2A and filling the corresponding physical page addresses into the associated fields of the global page address linking table, the processing unit 110 builds the global page address linking table as shown in FIG. 3F .
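  • Putting the scan together, a build pass over the blocks might look like the sketch below; read_local_table() is a hypothetical helper that reads the table page of one block, so only one such page is read per block instead of every data page:

```c
#include <stdint.h>
#include <stdbool.h>

#define DATA_PAGES 127
#define UNWRITTEN  0xFFFFFFFFu

/* Hypothetical helper: reads the local page address linking table stored in `block`;
 * returns false if the block holds no local table (e.g. it is erased). */
extern bool read_local_table(unsigned block, uint32_t fields[DATA_PAGES]);

/* Build the global page address linking table by scanning one local table per block.
 * In this sketch the blocks are scanned in order, so a link read later overwrites an
 * earlier link for the same logical page, as in the example above. */
static void build_global_table(uint32_t *global_table, unsigned block_count)
{
    uint32_t fields[DATA_PAGES];

    for (unsigned blk = 0; blk < block_count; blk++) {
        if (!read_local_table(blk, fields))
            continue;                                     /* nothing recorded for this block */
        for (unsigned f = 0; f < DATA_PAGES; f++) {
            if (fields[f] == UNWRITTEN)
                continue;                                 /* field was never programmed */
            uint32_t phy_page = blk * DATA_PAGES + f;     /* PPN = PBN*DPC + field index */
            global_table[fields[f]] = phy_page;
        }
    }
}
```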
  • FIG. 4 illustrates the local page address linking table 1 within the block 1 of the flash chip 0 according to an embodiment of the present invention.
  • After reading all fields of the local page address linking table 0 shown in FIG. 2A and filling the corresponding physical page addresses into the associated fields of the global page address linking table as shown in FIG. 3F , the processing unit 110 further reads the local page address linking table 1 within the block 1 in order to complete the global page address linking table.
  • the local page address linking table 1 is built when all data pages in the block 1 have been programmed. This is for illustrative purposes only, and is not meant to be a limitation of the present invention.
  • a local page address linking table can be built for a block when at least a data page (e.g. one data page or a plurality of data pages) in this block has been programmed.
  • the local page address linking table is built for this block, and more particularly, for at least the data page.
  • the local page address linking table is built for a few data pages such as physical pages 0 and 1 of this block, where the local page address linking table for the physical pages 0 and 1 is built and stored in the subsequent physical page, i.e. the physical page 2 .
  • When building (or updating) the global page address linking table, in a situation where there is no local page address linking table found in the last page of this block, the processing unit 110 tries to find the last programmed page of this block. In this variation, the processing unit 110 searches back, starting from the last page, in order to find the last programmed page of this block. As a result, the processing unit 110 reads all fields of the local page address linking table from the last programmed page of this block and fills the corresponding physical page addresses into the associated fields of the global page address linking table, in order to complete/update the global page address linking table.
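  • The recovery described in this variation might be coded as in the following sketch; flash_page_is_programmed() and flash_read_table_page() are hypothetical driver hooks, and the field layout follows the local table format described earlier:

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGES_PER_BLOCK 128
#define DATA_PAGES      127
#define UNWRITTEN       0xFFFFFFFFu

/* Hypothetical driver hooks. */
extern bool flash_page_is_programmed(unsigned block, unsigned page);
extern void flash_read_table_page(unsigned block, unsigned page, uint32_t fields[DATA_PAGES]);

/* When the last page of `block` holds no local table, search back from the last page
 * for the last programmed page, read the local table stored there, and fill the
 * corresponding fields of the global table. */
static void recover_from_last_programmed_page(uint32_t *global_table, unsigned block)
{
    uint32_t fields[DATA_PAGES];

    for (int page = PAGES_PER_BLOCK - 1; page >= 0; page--) {
        if (!flash_page_is_programmed(block, (unsigned)page))
            continue;                               /* keep searching backwards */
        flash_read_table_page(block, (unsigned)page, fields);
        for (unsigned f = 0; f < DATA_PAGES; f++) {
            if (fields[f] != UNWRITTEN)
                global_table[fields[f]] = (uint32_t)block * DATA_PAGES + f;
        }
        break;                                      /* the first hit is the last programmed page */
    }
}
```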
  • the processing unit 110 reads the first field of the local page address linking table 1 and obtains the logical page address 0x0006, and therefore determines that the logical page address 0x0006 links to the page 0 of the block 1 of the flash chip 0 , whose physical page address is 0x0127 in this embodiment. As shown in FIG. 5A , the processing unit 110 writes the physical page address 0x0127 into the seventh field of the global page address linking table to indicate that the logical page address 0x0006 (LOG Page 0x0006) links to the physical page address 0x0127 (PHY Page 0x0127).
  • the processing unit 110 reads the second field of the local page address linking table 1 shown in FIG. 4 and obtains the logical page address 0x0002, and therefore determines that the logical page address 0x0002 links to the page 1 of the block 1 of the flash chip 0 , whose physical page address is 0x0128. As shown in FIG. 5B , the processing unit 110 writes (or updates) the physical page address 0x0128 into the third field of the global page address linking table to indicate that the logical page address 0x0002 (LOG Page 0x0002) links to the physical page address 0x0128 (PHY Page 0x0128). Similar operations for the subsequent linking relationships are not repeated in detail. After reading all fields of the local page address linking tables 0 and 1 and filling the corresponding physical page addresses into the associated fields of the global page address linking table, the processing unit 110 completes the global page address linking table.
  • the processing unit 110 of this embodiment merely reads a small number of local page address linking tables within (or representing but not within) the blocks that are fully or partially programmed. Therefore, the memory apparatus implemented according to the present invention has better efficiency than those implemented according to the related art.
  • In a situation where all data pages of all data blocks of the NV memory elements 140_0, 140_1, . . . , and 140_N are fully programmed, the processing unit 110 merely reads the local page address linking tables respectively corresponding to the data blocks to build the global page address linking table. If the NV memory elements 140_0, 140_1, . . . , and 140_N have X_D data blocks in total, and each data block has Y_D data pages, the processing unit 110 reads X_D local page address linking tables (whose data amount is typically less than X_D pages in total) to build the global page address linking table, rather than reading X_D×Y_D pages.
  • the time required for building the global page address linking table according to the present invention is similar to the time required for building the global block address linking table.
  • the processing unit 110 of this variation can program/write the temporary local page address linking table to the particular block before shutting down the memory apparatus 100 .
  • the host can read the local page address linking table stored in the particular block, in order to build or update the global page address linking table. This is for illustrative purposes only, and is not meant to be a limitation of the present invention.
  • the processing unit 110 can read the pages programmed in the particular block, and more particularly, the spare byte region of each page programmed in the particular block, in order to build or update the global page address linking table.
  • In a situation where the processing unit 110 reads the pages programmed in the particular block to build or update the global page address linking table, the processing unit 110 has to read less than Y_D pages of data from the particular block. As a result, for completing the global page address linking table, the data amount that the processing unit 110 has to read is less than (X_FP + Y_PP) pages, given that the NV memory elements 140_0, 140_1, . . . , and 140_N have X_FP fully programmed blocks in total and further have a partially programmed block having Y_PP programmed data pages. Therefore, in regard to building the global page address linking table, the memory apparatus implemented according to the present invention still has better efficiency than those implemented according to the related art.
  • the global page address linking table can be built during any start-up process of the memory apparatus 100 or at any time in response to a request from a user.
  • the global page address linking table can be divided into a plurality of partial tables stored in one or more of the NV memory elements (e.g. the partial tables are respectively stored in the NV memory elements 140_0, 140_1, . . . , and 140_N). Each divided partial table can be referred to as a sub-global page address linking table.
  • the processing unit 110 can read at least one sub-global page address linking table and store it in the volatile memory 120 .
  • the processing unit 110 can utilize the sub-global page address linking table stored in the volatile memory 120 to perform the logical-to-physical address translation operations of the aforementioned embodiments.
  • FIG. 6 illustrates an arrangement of the NV memory element 140 _ 0 according to an embodiment of the present invention, where the NV memory element 140 _ 0 of this embodiment is referred to as the flash chip 0 as mentioned above.
  • a page comprises a plurality of sectors, e.g. sectors 0 , 1 , 2 , and 3 .
  • a sector is the minimal read unit, which can be 512 bytes in this embodiment.
  • the processing unit 110 can read one sector or a plurality of sectors during a reading operation.
  • since the physical addresses of this embodiment may fall within a range that is wider than the range [0x0000, 0xFFFF] utilized in some embodiments disclosed above, the physical addresses are illustrated with the decimal numeral system hereinafter for simplicity. This is for illustrative purposes only, and is not meant to be a limitation of the present invention.
  • the physical addresses can be illustrated with the hexadecimal numeral system, where the physical addresses may have more digits than those in some embodiments disclosed above.
  • the physical addresses can be illustrated with another numeral system when needed.
  • the first block of the flash chip 0 is regarded as the first block of the flash chips 0 - 3 , and is addressed as the physical block address 0 , and therefore, can be referred to as PHY BLK 0 , where “PHY BLK” stands for “physical block”.
  • the last block of the flash chip 0 is regarded as the 1024 th block of the flash chips 0 - 3 , and is addressed as the physical block address 1023 , and therefore, can be referred to as PHY BLK 1023 .
  • the first block of the flash chip 1 is regarded as the 1025 th block of the flash chips 0 - 3 , and is addressed as the physical block address 1024 , and therefore, can be referred to as PHY BLK 1024 , and so on.
  • the last block of the flash chip 3 is regarded as the 4096 th block of the flash chips 0 - 3 , and is addressed as the physical block address 4095 , and therefore, can be referred to as PHY BLK 4095 .
  • the blocks of the flash chips 0 - 3 comprise 4 sets of PHY BLKs {0, 1, . . . , 1023}, {1024, 1025, . . . , 2047}, {2048, 2049, . . . , 3071}, and {3072, 3073, . . . , 4095}, i.e. 4096 PHY BLKs in total.
  • the first page of PHY BLK 0 is regarded as the first page of the flash chips 0 - 3 , and is addressed as the physical page address 0 , and therefore, can be referred to as PHY Page 0 .
  • the last page of PHY BLK 0 is regarded as the 128 th page of the flash chips 0 - 3 , and is addressed as the physical page address 127 , and therefore, can be referred to as PHY Page 127 .
  • the first page of PHY BLK 1 is regarded as the 129 th page of the flash chips 0 - 3 , and is addressed as the physical page address 128 , and therefore, can be referred to as PHY Page 128 , and so on.
  • the last page of PHY BLK 4095 is regarded as the 524288 th page of the flash chips 0 - 3 , and is addressed as the physical page address 524287 , and therefore, can be referred to as PHY Page 524287 .
  • the pages of the flash chips 0 - 3 comprise 4096 sets of PHY Pages {0, 1, . . . , 127}, {128, 129, . . . , 255}, . . . , and {524160, 524161, . . . , 524287}, i.e. 524288 PHY Pages in total.
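  • Under this numbering (4 flash chips, 1024 blocks per chip, 128 pages per block), a physical page number can be decomposed into its chip, block, and in-block page by simple division; the helper below is an illustrative sketch of that arithmetic:

```c
#define BLOCKS_PER_CHIP  1024
#define PAGES_PER_BLOCK  128

typedef struct {
    unsigned chip;    /* flash chip 0..3                      */
    unsigned block;   /* global physical block number 0..4095 */
    unsigned page;    /* page offset inside the block 0..127  */
} phy_location_t;

/* Decompose a global physical page number (0..524287) into chip, block, and page. */
static phy_location_t locate(unsigned phy_page)
{
    phy_location_t loc;
    loc.block = phy_page / PAGES_PER_BLOCK;      /* e.g. PHY Page 128 lies in PHY BLK 1                   */
    loc.page  = phy_page % PAGES_PER_BLOCK;
    loc.chip  = loc.block / BLOCKS_PER_CHIP;     /* e.g. PHY BLK 1024 is the first block of flash chip 1  */
    return loc;
}
```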
  • FIG. 8 illustrates a data region and a spare region for managing the flash chips 0 - 3 shown in FIGS. 7A-7D .
  • the flash chips 0 - 3 are logically divided into the data region and the spare region.
  • the data region is utilized for storing data, and may initially comprise PHY BLKs 2 , 3 , . . . , and 4095 .
  • the spare region is utilized for writing new data, where the spare region typically comprises erased blocks, and may initially comprise PHY BLKs 0 and 1 . After a lot of accessing operations, the spare region may logically comprise a different set of physical blocks, and the data region may logically comprise the other physical blocks.
  • the spare region may comprise PHY BLKs 4094 and 4095 , and the data region may comprise PHY BLKs 0 - 4093 .
  • the spare region may comprise PHY BLKs 0 , 1024 , 2048 , and 3096 , i.e. each of the flash chips 0 - 3 comprises at least a block logically belonging to the spare region.
  • the number of blocks of the data region and the number of blocks of the spare region can be determined based upon user/designer requirements.
  • the spare region may comprise 4 PHY BLKs, and the data region may comprise 4092 PHY BLKs.
  • the host sends a command C 0 to the memory apparatus 100 in order to write 4 sectors of data, D S0 -D S3 , at corresponding host addresses 0000008-0000011.
  • the volatile memory 120 temporarily stores data D S0 -D S3 .
  • the processing unit 110 parses the command C 0 to execute the writing/programming operation corresponding to the command C 0 .
  • the processing unit 110 translates the host addresses 0000008-0000011 into associated logical addresses.
  • the processing unit 110 divides the host address 0000008 by the number of sectors of a page, i.e. 4 in this embodiment, and obtains a quotient 2 and a remainder 0 .
  • the quotient 2 means that the logical page address thereof is 2 , and therefore, the logical page indicated by the logical page address 2 can be referred to as LOG Page 2 .
  • the remainder 0 means that the data D S0 should be stored in a first sector of a page.
  • the processing unit 110 further divides the host address 0000008 by the number of sectors of a block, i.e. 512 in this embodiment, and obtains a quotient 0 and a remainder 8 .
  • the quotient 0 means that the logical block address thereof is 0 , and therefore, the logical block indicated by the logical block address 0 can be referred to as LOG BLK 0 , where “LOG BLK” stands for “logical block”.
  • the dividing operations can be performed by truncating a portion of bits of the host address. For example, when dividing the host address 0000008 by 4, the processing unit 110 extracts the last two bits (i.e. two adjacent/continuous bits including the least significant bit (LSB)) from the binary expression of the host address to obtain the remainder 0 , and extracts the other bits from this binary expression to obtain the quotient 2 . In addition, when dividing the host address 0000008 by 512, the processing unit 110 can extract the last nine bits (i.e. nine adjacent/continuous bits including the LSB) from this binary expression to obtain the remainder 8 , and extract the other bits to obtain the quotient 0 .
  • the host address 0000008 substantially comprises the logical page address 2 and the logical block address 0 .
  • the processing unit 110 of a variation of this embodiment can parse the host address 0000008 by bit-shifting, rather than really performing the dividing operations.
  • the processing unit 110 of this embodiment determines that the logical page addresses of the host addresses 0000009, 0000010, and 0000011 are all 2 (i.e. all of the host addresses 0000009, 0000010, and 0000011 inherently belong to LOG Page 2 , or comprise the logical page address 2 ), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000009, 0000010, and 0000011 further inherently belong to LOG BLK 0 , or comprise the logical block address 0 ).
  • the data D S1 , D S2 , and D S3 should be respectively stored in the second, the third, and the fourth sectors of a page.
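  • The host-address parsing walked through above (4 sectors per page, 512 sectors per block) reduces to divisions or, equivalently, bit shifts; the sketch below mirrors that arithmetic, with illustrative names:

```c
#include <stdint.h>

#define SECTORS_PER_PAGE   4
#define SECTORS_PER_BLOCK  512

typedef struct {
    uint32_t log_page;    /* logical page address:   host_addr / 4   */
    uint32_t sector;      /* sector within the page: host_addr % 4   */
    uint32_t log_block;   /* logical block address:  host_addr / 512 */
} host_addr_map_t;

static host_addr_map_t parse_host_address(uint32_t host_addr)
{
    host_addr_map_t m;
    /* Dividing by 4 and by 512 is the same as truncating the low 2 and 9 bits. */
    m.sector    = host_addr & (SECTORS_PER_PAGE - 1);   /* last two bits  -> remainder of /4 */
    m.log_page  = host_addr >> 2;                       /* remaining bits -> quotient of /4  */
    m.log_block = host_addr >> 9;                       /* quotient of /512                  */
    return m;
}

/* Example: host address 0000008 -> LOG Page 2,   sector 0, LOG BLK 0.
 *          host address 0000512 -> LOG Page 128, sector 0, LOG BLK 1. */
```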
  • since PHY BLK 0 is erased and is logically positioned in the spare region initially, the processing unit 110 pops the PHY BLK 0 from the spare region, and writes/programs the data D S0 -D S3 into the first, the second, the third, and the fourth sectors of PHY Page 0 , respectively.
  • the processing unit 110 further records 0 in the third field of the global page address linking table of this embodiment, in order to indicate that LOG Page 2 links to PHY Page 0 .
  • FIGS. 9A-9D respectively illustrate exemplary versions of the global page address linking table of this embodiment. The arrangement of the illustrative table locations of this embodiment is similar to that of FIGS. 3A-3F , and therefore, is not explained in detail for simplicity.
  • the physical page address 0 has been written in the third field, which indicates that LOG Page 2 links to PHY Page 0 .
  • the physical page address 0 can be written in a corresponding field of a temporary local page address linking table thereof for indicating the linking relationship of the logical and physical addresses. Then, the global page address linking table can be updated accordingly.
  • the implementation details of updating the global page address linking table according to the temporary local page address linking table are similar to those of the embodiments mentioned above.
  • the processing unit 110 records usage information during accessing the pages.
  • the usage information comprises a valid page count table for recording valid page counts of the blocks, respectively.
  • the usage information comprises an invalid page count table for recording invalid page counts of the blocks, respectively.
  • since each fully programmed block comprises a predetermined number of pages (e.g. 128 pages in this embodiment), the valid page count and the invalid page count of the same fully programmed block are complementary to each other.
  • the processing unit 110 records 1 in the first field of the valid page count table, in order to indicate that PHY BLK 0 contains 1 valid page (i.e. 1 page of useful data; or in other words, 1 page of valid data).
  • the global page address linking table and the valid page count table can be stored in the volatile memory 120 .
  • the global page address linking table and the valid page count table can be updated easily during accessing the flash chips. This is for illustrative purposes only, and is not meant to be a limitation of the present invention.
  • the global page address linking table and the valid page count table can be loaded from the volatile memory 120 and stored in one or more of the NV memory elements 140_0, 140_1, . . . , and 140_N.
  • the global page address linking table and the valid page count table can be stored in one or more link blocks of the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N. In this way, the global page address linking table and the valid page count table can be preserved while the memory apparatus 100 shuts down.
  • Each of the one or more link blocks is a particular block for preserving system information. When the memory apparatus 100 is turned on the next time, the global page address linking table and the valid page count table can be easily obtained from the link block(s).
  • the host sends a command C 1 to the memory apparatus 100 in order to write 4 sectors of data, D S4 -D S7 , into corresponding host addresses 0000512-0000515.
  • the processing unit 110 determines that the logical page addresses of the host addresses 0000512-0000515 are all 128 (i.e. all of the host addresses 0000512-0000515 belong to LOG Page 128 , or comprise the logical page address 128 ), and the logical block addresses thereof are all 1 (i.e. all of the host addresses 0000512-0000515 further belong to LOG BLK 1 , or comprise the logical block address 1 ).
  • the data D S4 -D S7 should be stored in the first, the second, the third, and the fourth sectors of a page, respectively. Since PHY Page 0 has been programmed, the processing unit 110 writes/programs the data D S4 -D S7 into the first, the second, the third, and the fourth sectors of PHY Page 1 (which is the page subsequent to PHY Page 0 ), respectively.
  • the processing unit 110 further records 1 in the 129 th field of the global page address linking table shown in FIG. 9A , in order to indicate that LOG Page 128 links to PHY Page 1 .
  • the processing unit 110 records 2 in the first field of the valid page count table (i.e. the processing unit 110 updates the first field thereof with 2), in order to indicate that PHY BLK 0 contains 2 valid pages (i.e. 2 pages of valid data). That is, the processing unit 110 increases the valid page count of PHY BLK 0 .
  • This is for illustrative purposes only, and is not meant to be a limitation of the present invention.
  • the processing unit 110 maintains a value of an invalid page count of PHY BLK 0 .
  • although the host addresses 0000512-0000515 and the host addresses 0000008-0000011 belong to different logical blocks (e.g. the host addresses 0000512-0000515 belong to LOG BLK 1 , and the host addresses 0000008-0000011 belong to LOG BLK 0 ), these host addresses all link to the associated pages in the same physical block, and data corresponding to the host addresses 0000512-0000515 and data corresponding to the host addresses 0000008-0000011 are both programmed/written in the same physical block, i.e. PHY BLK 0 in this embodiment.
  • the processing unit 110 can program/write both the data corresponding to the first set of host addresses and the data corresponding to the second set of host addresses in the same physical block (e.g. PHY BLK 0 ).
  • the processing unit 110 can program/write a first portion and a second portion of the data corresponding to the first set of host addresses in different physical blocks, wherein the first portion and the second portion of the data do not overlap.
  • the host then sends a command C 2 to the memory apparatus 100 in order to write 4 sectors of data, D S8 -D S11 , into corresponding host addresses 0000004-0000007.
  • the processing unit 110 determines that the logical page addresses of the host addresses 0000004-0000007 are all 1 (i.e. all of the host addresses 0000004-0000007 belong to LOG Page 1 , or comprise the logical page address 1 ), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000004-0000007 further belong to LOG BLK 0 , or comprise the logical block address 0 ).
  • the data D S8 -D S11 should be stored in the first, the second, the third, and the fourth sectors of a page, respectively. Since PHY Page 1 has been programmed, the processing unit 110 writes/programs the data D S8 -D S11 into the first, the second, the third, and the fourth sectors of PHY page 2 (which is the page subsequent to PHY Page 1 ), respectively.
  • the processing unit 110 further records 2 in the second field of the global page address linking table shown in FIG. 9A , in order to indicate that LOG Page 1 links to PHY Page 2 .
  • the processing unit 110 records 3 in the first field of the valid page count table (i.e. the processing unit 110 updates the first field thereof with 3 ), in order to indicate that PHY BLK 0 contains 3 valid pages (i.e. 3 pages of valid data). That is, the processing unit 110 increases the valid page count of PHY BLK 0 .
  • the processing unit 110 maintains the value of the invalid page count of PHY BLK 0 .
  • FIGS. 10A-10F respectively illustrate exemplary versions of the valid page count table of this embodiment.
  • the ranking of a field of the valid page count table represents a physical block address, and the content of this field represents an associated valid page count.
  • the illustrative table location (i_PBLK, j_PBLK) corresponding to the (i_PBLK*4+j_PBLK)th field represents a physical block address (i_PBLK*4+j_PBLK), where i_PBLK is still the row number (i_PBLK = 0, 1, . . . ).
  • for the 1-D array illustration, the illustrative table location i_PBLK corresponding to the i_PBLKth field represents a physical block address (i_PBLK).
  • the global page address linking table and the valid page count table are updated as shown in FIGS. 9A and 10A , respectively.
  • the host sends a command C 3 to the memory apparatus 100 in order to write/update 4 sectors of data, D S0 ′-D S3 ′, into corresponding host addresses 0000008-0000011.
  • the processing unit 110 determines that the logical page addresses of the host addresses 0000008-0000011 are all 2 (i.e. all of the host addresses 0000008-0000011 belong to LOG Page 2 , or comprise the logical page address 2 ), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000008-0000011 further belong to LOG BLK 0 , or comprise the logical block address 0 ).
  • the data D S0 ′-D S3 ′ should be stored in the first, the second, the third, and the fourth sectors of a page, respectively. Since PHY Page 2 has been programmed, the processing unit 110 writes/programs the data D S0 ′-D S3 ′ into the first, the second, the third, and the fourth sectors of PHY Page 3 (which is the page subsequent to PHY Page 2 ), respectively. The processing unit 110 further records/updates 3 in the third field of the global page address linking table shown in FIG. 9B , in order to indicate that LOG Page 2 links to PHY Page 3 now. In addition, the processing unit 110 still records 3 in the first field of the valid page count table shown in FIG. 10B , in order to indicate that PHY BLK 0 still contains 3 valid pages. That is, the processing unit 110 maintains the value 3 of the valid page count of PHY BLK 0 . This is for illustrative purposes only, and is not meant to be a limitation of the present invention. In the situation where the valid page count table is replaced by the invalid page count table mentioned above, the processing unit 110 increases the invalid page count of PHY BLK 0 .
  • although PHY Pages 0 - 3 have been programmed in PHY BLK 0 , only 3 physical pages, PHY Pages 1 - 3 , contain valid data. Since data of LOG Page 2 has been updated, PHY Page 0 does not contain valid data and can be deemed as an invalid page containing invalid data. As a result of executing the command C 3 , the global page address linking table and the valid page count table are updated as shown in FIGS. 9B and 10B , respectively.
  • the processing unit 110 determines that the logical page addresses of the host addresses 0000008-0000011 are all 2 (i.e. all of the host addresses 0000008-0000011 belong to LOG Page 2 , or comprise the logical page address 2 ), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000008-0000011 further belong to LOG BLK 0 , or comprise the logical block address 0 ).
  • the processing unit 110 writes/programs the data D S0 ′′-D S3 ′′ into the first, the second, the third, and the fourth sectors of PHY page 128 (which is the page subsequent to PHY Page 127 ), respectively.
  • the processing unit 110 further records/updates 128 in the third field of the global page address linking table, in order to indicate that LOG Page 2 links to PHY Page 128 now.
  • PHY Page 3 does not contain valid data and can be deemed as an invalid page containing invalid data.
  • the processing unit 110 records 1 in the second field of the valid page count table to indicate that PHY BLK 1 contains 1 valid page (i.e. 1 page of valid data), and records/updates 99 in the first field of the valid page count table to indicate that PHY BLK 0 contains 99 valid pages (i.e. 99 pages of valid data) now. That is, the processing unit 110 decreases the valid page count of PHY BLK 0 .
  • This is for illustrative purposes only, and is not meant to be a limitation of the present invention.
  • the processing unit 110 increases the invalid page count of PHY BLK 0 .
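  • The bookkeeping walked through in the commands above can be condensed into one update routine: when a logical page is (re)written to a new physical page, the valid page count of the new block is increased, and if the logical page previously linked to some other physical page, the valid page count of the old block is decreased (the names and the UNLINKED sentinel below are illustrative assumptions):

```c
#include <stdint.h>

#define PAGES_PER_BLOCK 128
#define UNLINKED        0xFFFFFFFFu

/* Update the global page address linking table and the valid page count table after
 * `log_page` has been programmed into `new_phy_page`. */
static void update_tables(uint32_t *global_table, uint16_t *valid_page_count,
                          uint32_t log_page, uint32_t new_phy_page)
{
    uint32_t old_phy_page = global_table[log_page];

    if (old_phy_page != UNLINKED)
        valid_page_count[old_phy_page / PAGES_PER_BLOCK]--;   /* the old copy becomes an invalid page */

    global_table[log_page] = new_phy_page;
    valid_page_count[new_phy_page / PAGES_PER_BLOCK]++;       /* the new block gains one valid page   */
}

/* For a rewrite within the same block (e.g. LOG Page 2 moving from PHY Page 0 to PHY Page 3),
 * the decrement and increment cancel out and the block's valid page count stays unchanged,
 * matching the walkthrough above. */
```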
  • the processing unit 110 parses the command C 5 to execute the reading operation.
  • the processing unit 110 translates the host addresses 0000008-0000011 into logical addresses.
  • the processing unit 110 divides the host address 0000008 by the number of sectors of a page, i.e. 4 in this embodiment, and obtains a quotient 2 and a remainder 0 .
  • the quotient 2 means that the logical page address thereof is 2 , where the logical page indicated by the logical page address 2 is LOG Page 2 .
  • the remainder 0 means that the data DS0 should have been stored in the first sector of a page.
  • the processing unit 110 determines that the logical page addresses of the host addresses 0000009, 0000010, and 0000011 are all 2 (i.e. all of the host addresses 0000009, 0000010, and 0000011 belong to LOG Page 2 , or comprise the logical page address 2 ), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000009, 0000010, and 0000011 further belong to LOG BLK 0 , or comprise the logical block address 0 ).
  • the data corresponding to host addresses 0000008-0000011 should have been stored in the first, the second, the third, and the fourth sectors of a page, respectively.
  • the processing unit 110 reads the third field of the global page address linking table and obtains 128 , which indicates that the data corresponding to LOG Page 2 is stored in PHY Page 128 .
  • the processing unit 110 reads PHY Page 128 to obtain data DS0′′-DS3′′, and sends these data to the host.
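  • For illustrative purposes only, the short C program below reproduces the address translation of the reading operation described above: each host (sector) address is divided by the number of sectors per page to obtain the logical page address (quotient) and the sector offset (remainder), and a reduced stand-in for the global page address linking table then yields PHY Page 128 for LOG Page 2. The table and names are hypothetical, not the actual data structures of the embodiments.

```c
#include <stdio.h>

#define SECTORS_PER_PAGE 4u   /* a page holds 4 sectors in this embodiment */

int main(void)
{
    /* Tiny stand-in for the global page address linking table; only the third
     * field matters here: LOG Page 2 currently links to PHY Page 128.        */
    unsigned int global_page_table[8] = { 0 };
    global_page_table[2] = 128;

    for (unsigned int host_addr = 8; host_addr <= 11; host_addr++) {
        unsigned int log_page = host_addr / SECTORS_PER_PAGE;  /* quotient 2    */
        unsigned int sector   = host_addr % SECTORS_PER_PAGE;  /* remainder 0-3 */
        printf("host address %07u -> LOG Page %u, sector %u, data read from PHY Page %u\n",
               host_addr, log_page, sector, global_page_table[log_page]);
    }
    return 0;
}
```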
  • the spare region comprises PHY BLKs 4094 and 4095 , where the valid page count table is illustrated in FIG. 10E .
  • the host sends a command C 6 to the memory apparatus 100 in order to write 4 sectors of data, DS12-DS15.
  • the processing unit 110 pops a physical block from the spare region, such as PHY BLK 4094, for writing data DS12-DS15.
  • the minimal block count should always be greater than zero.
  • the minimal block count should be greater than zero for most of the time, where the minimal block count can temporarily reach zero as long as the operations of the memory apparatus 100 will not be hindered.
  • the processing unit 110 has to erase a physical block in the data region, in order to push this erased physical block into the spare region.
  • the processing unit 110 searches the valid page count table and finds out that PHY BLK 2 has no valid data since the valid page count of PHY BLK 2 is 0. Since PHY BLK 2 has the least valid page count, the processing unit 110 erases PHY BLK 2 and then pushes the erased PHY BLK 2 into the spare region.
  • the spare region comprises PHY BLKs 2 and 4095 now. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, when the valid page count of PHY BLK 2 decreases to zero, the processing unit 110 can erase PHY BLK 2 immediately.
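  • For illustrative purposes only, the following C sketch corresponds to the variation just mentioned, where a physical block is erased and pushed into the spare region as soon as its valid page count decreases to zero; valid_page_count, in_spare_region, and erase_block are hypothetical names, and erase_block is merely a stub for the real erase command.

```c
#include <stdint.h>

#define TOTAL_BLOCKS 4096u

static uint16_t valid_page_count[TOTAL_BLOCKS];
static uint8_t  in_spare_region[TOTAL_BLOCKS];   /* 1 if the block belongs to the spare region */

static void erase_block(uint32_t blk) { (void)blk; /* stub: issue the erase command here */ }

/* Called whenever the valid page count of a block is decreased, e.g. after a
 * logical page stored in that block has been rewritten elsewhere.            */
void on_valid_count_decreased(uint32_t blk)
{
    if (valid_page_count[blk] == 0u && !in_spare_region[blk]) {
        erase_block(blk);            /* e.g. PHY BLK 2 in the example above          */
        in_spare_region[blk] = 1u;   /* push the erased block into the spare region  */
    }
}
```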
  • the host sends a command C 7 to the memory apparatus 100 in order to write 4 sectors of data, DS16-DS19.
  • the processing unit 110 pops a physical block from the spare region, such as PHY BLK 4095, for writing data DS16-DS19.
  • the processing unit 110 has to erase at least a physical block in the data region in order to push the physical block(s) into the spare region.
  • the processing unit 110 of this embodiment searches the valid page count table shown in FIG. 10F and finds out that PHY BLK 0 has 40 pages of valid data and PHY BLK 1 has 50 pages of valid data, where PHY BLKs 0 and 1 have the least valid page counts among the blocks in the data region.
  • the processing unit 110 moves the valid data of PHY BLKs 0 and 1 into PHY BLK 2, and updates the global page address linking table to reflect the movement of the valid data.
  • the processing unit 110 reads the valid data in PHY BLKs 0 and 1 , programs/writes the valid data into PHY BLK 2 , and links the logical page addresses of the valid data to the physical pages programmed/written with the valid data, correspondingly. After moving the valid data, the processing unit 110 erases PHY BLKs 0 and 1 , and pushes the erased PHY BLKs 0 and 1 into the spare region.
  • when it is detected that the block count of the spare region is less than the predetermined value, the processing unit 110 typically searches the valid page count table to find one or more fully programmed blocks having the least valid page count(s), and erases the one or more fully programmed blocks in order to push the one or more blocks into the spare region.
  • the processing unit 110 can search the invalid page count table to find one or more fully programmed blocks having the most invalid page count(s), and erase the one or more fully programmed blocks of this variation in order to push the one or more blocks into the spare region.
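  • For illustrative purposes only, the C sketch below shows one way to select the victim block(s): among fully programmed blocks, choosing the least valid page count is equivalent to choosing the most invalid page count, because the two counts of such a block always sum to the page count of the block. The names used are hypothetical.

```c
#include <stdint.h>

#define TOTAL_BLOCKS    4096u
#define PAGES_PER_BLOCK 128u

static uint16_t valid_page_count[TOTAL_BLOCKS];
static uint8_t  fully_programmed[TOTAL_BLOCKS];  /* 1 once every page of the block is programmed */

/* Returns the fully programmed block with the least valid pages (equivalently,
 * with the most invalid pages), or TOTAL_BLOCKS if no candidate exists.       */
uint32_t pick_victim_block(void)
{
    uint32_t victim = TOTAL_BLOCKS;
    for (uint32_t blk = 0; blk < TOTAL_BLOCKS; blk++) {
        if (!fully_programmed[blk])
            continue;
        /* the invalid page count of blk would be PAGES_PER_BLOCK - valid_page_count[blk] */
        if (victim == TOTAL_BLOCKS ||
            valid_page_count[blk] < valid_page_count[victim])
            victim = blk;
    }
    return victim;
}
```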
  • the processing unit 110 has popped one more physical block from the spare region into the data region, such as PHY BLK 2 , for merging PHY BLKs 0 and 1 .
  • the processing unit 110 can merge the one or more fully programmed blocks having the least valid page count(s) into a partially programmed block as long as there are enough free pages in the partially programmed block, where the free pages represent the pages that have not been programmed since the latest erasure of the block comprising these free pages.
  • the processing unit 110 can merge PHY BLKs 0 and 1 into the partially programmed block, such as PHY BLK 4095, as long as there are enough free pages in the partially programmed block for programming data DS16-DS19 and the valid data of PHY BLKs 0 and 1.
  • the processing unit 110 can merge PHY BLK 0 into the partially programmed block, such as PHY BLK 4095, as long as there are enough free pages in the partially programmed block for programming data DS16-DS19 and the valid data of PHY BLK 0.
  • the processing unit 110 can program/write the data DS16-DS19 into PHY BLK 4095, and can further move the valid data of PHY BLKs 0 and 1 into PHY BLK 4095 as long as there are enough free pages in PHY BLK 4095 for programming data DS16-DS19 and the valid data.
  • the processing unit 110 of this variation updates the global page address linking table to reflect the movement of the valid data.
  • the processing unit 110 erases PHY BLKs 0 and 1 , and pushes the erased PHY BLKs 0 and 1 into the spare region.
  • the processing unit 110 can move valid data of N physical blocks into M physical blocks wherein N and M are positive integers, and N is greater than M. Assume that there are K pages of valid data in total within the N physical blocks, where K is smaller than the number of free pages in total within the M physical blocks.
  • the processing unit 110 can read the K pages of valid data from the N physical blocks, erase the N physical blocks, buffer the K pages of valid data into the volatile memory 120 , and program/write the K pages of valid data into the M physical blocks.
  • the N physical blocks and the M physical blocks may overlap (e.g. the N physical blocks and the M physical blocks both comprise at least a same physical block) or not overlap.
  • the K pages of valid data can be programmed/written into the M physical blocks without waiting for erasing the N physical blocks, and the processing unit 110 can generate (N-M) erased blocks eventually.
  • the processing unit 110 updates the global page address linking table to reflect the movement of the valid data.
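  • For illustrative purposes only, the C sketch below outlines such an N-to-M movement of valid data: each valid page of the N source blocks is buffered in volatile memory, programmed into a free page of the M destination blocks, and relinked, after which the source blocks are erased. The sketch assumes the two sets of blocks do not overlap and that the K valid pages fit into the destinations; all function names are hypothetical stubs rather than the actual flash interface.

```c
#include <stdint.h>

#define PAGES_PER_BLOCK 128u
#define PAGE_SIZE       2048u     /* hypothetical page size, for the sketch only */

/* Stubs standing in for the flash interface and for the validity test; a real
 * controller would consult the valid-page-position table and the flash chips. */
static int  page_is_valid(uint32_t phy_page)                   { (void)phy_page; return 0; }
static void read_page(uint32_t phy_page, uint8_t *buf)         { (void)phy_page; (void)buf; }
static void write_page(uint32_t phy_page, const uint8_t *buf)  { (void)phy_page; (void)buf; }
static void erase_block(uint32_t blk)                          { (void)blk; }
static void relink(uint32_t old_phy_page, uint32_t new_phy_page)
{ (void)old_phy_page; (void)new_phy_page; /* update the global page address linking table */ }

/* Move the valid pages of the n source blocks into free pages of the m
 * destination blocks, buffering each page in volatile memory, then erase the
 * sources.  Net effect: (n - m) additional erased blocks, as described above. */
void merge_blocks(const uint32_t *src, uint32_t n, const uint32_t *dst, uint32_t m)
{
    static uint8_t page_buf[PAGE_SIZE];   /* buffer in the volatile memory 120 */
    uint32_t d = 0, d_off = 0;

    for (uint32_t s = 0; s < n; s++) {
        for (uint32_t off = 0; off < PAGES_PER_BLOCK; off++) {
            uint32_t from = src[s] * PAGES_PER_BLOCK + off;
            if (!page_is_valid(from))
                continue;
            if (d == m)
                return;                   /* should not happen when K <= free pages */
            read_page(from, page_buf);
            write_page(dst[d] * PAGES_PER_BLOCK + d_off, page_buf);
            relink(from, dst[d] * PAGES_PER_BLOCK + d_off);
            if (++d_off == PAGES_PER_BLOCK) { d++; d_off = 0; }
        }
    }
    for (uint32_t s = 0; s < n; s++)
        erase_block(src[s]);              /* erased sources are pushed into the spare region */
}
```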
  • the processing unit 110 can record the invalid page count of each physical block. For example, given that the page count of each physical block is 128 , a specific physical block comprises 128 pages, within which 28 pages are invalid pages containing invalid data and 100 pages are valid pages containing valid data. That is, the invalid page count and the valid page count of the specific physical block are 28 and 100 , respectively.
  • the processing unit 110 can build an invalid page count table of the flash chips 0 - 3 , and erase a particular physical block according to the invalid page count table.
  • when the processing unit 110 has to erase a physical block, the processing unit 110 can select a particular physical block having the most invalid pages according to the invalid page count table, and erase the particular physical block.
  • the processing unit 110 can record one or more positions of the valid data in the particular block. More particularly, the processing unit 110 can build a valid-page-position table for each block in order to indicate the position(s) of one or more valid pages containing valid data within the block.
  • FIG. 11 illustrates a valid-page-position table of the flash chips 0 - 3 according to an embodiment of the present invention.
  • the arrangement of the illustrative table locations of the valid-page-position table is similar to that of FIGS. 10B-10F together with the right half of FIG. 10A, and therefore is not explained in detail for simplicity.
  • each field of the valid-page-position table indicates whether any valid-page-position corresponding to an associated physical block exists.
  • each field of this embodiment comprises 128 bits respectively corresponding to the pages of the associated physical block.
  • each field of the valid-page-position table indicates the valid-page-position(s) corresponding to the associated physical block.
  • Each bit in a specific field indicates whether an associated page in the associated physical block is valid or invalid.
  • the first field of the valid-page-position table shown in FIG. 11 is recorded as “01011100101 . . . 11111”, which indicates the valid-page-position(s) within PHY BLK 0 .
  • the ranking of a specific bit in the specific field of the valid-page-position table shown in FIG. 11 represents a page address offset (or a relative page position) of an associated page within the associated physical block.
  • the least significant bit (LSB) “ 1 ” indicates that the first page of the PHY BLK 0 (i.e. the PHY Page 0 ) is a valid page containing valid data
  • the most significant bit (MSB) "0" indicates that the last page of the PHY BLK 0 (i.e. the PHY Page 127) is an invalid page containing invalid data, where other bits between the LSB and the MSB indicate the valid/invalid state of the other physical pages of the associated physical block, respectively. Similar descriptions are not repeated for the other fields of the valid-page-position table shown in FIG. 11. As a result, the processing unit 110 can move valid data contained in the valid pages quickly according to the valid-page-position table.
  • the LSB in the specific field indicates whether the first page of the associated physical block is a valid page or an invalid page, and the MSB in the specific field indicates whether the last page of the associated physical block is a valid page or an invalid page.
  • the LSB in the specific field indicates whether the last page of the associated physical block is a valid page or an invalid page, and the MSB in the specific field indicates whether the first page of the associated physical block is a valid page or an invalid page. For example, regarding the bits "01011100101 . . . 11111" in the first field mentioned above:
  • the LSB “ 1 ” indicates that the last page of the PHY BLK 0 (i.e. the PHY Page 127 ) is a valid page containing valid data
  • the most significant bit (MSB) “ 0 ” indicates that the first page of the PHY BLK 0 (i.e. the PHY Page 0 ) is an invalid page containing invalid data
  • other bits between LSB and MSB indicate the valid/invalid state of the other physical pages of the associated physical block, respectively.
  • a logical value “ 1 ” of the specific bit indicates that the associated page is a valid page, while a logical value “ 0 ” of the specific bit indicates that the associated page is an invalid page.
  • the logical value “ 0 ” of the specific bit indicates that the associated page is a valid page, while the logical value “ 1 ” of the specific bit indicates that the associated page is an invalid page.
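  • For illustrative purposes only, the C sketch below models one field of the valid-page-position table as 128 bits, following the first convention described above (the LSB corresponds to the first page of the block, and a logical value 1 marks a valid page); the type and function names are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGES_PER_BLOCK 128u
#define WORDS_PER_FIELD (PAGES_PER_BLOCK / 32u)

/* One field of the valid-page-position table: 128 bits, one per page of the
 * associated physical block.                                                 */
typedef struct {
    uint32_t bits[WORDS_PER_FIELD];
} valid_page_field_t;

/* Mark the page at the given offset inside the block as valid or invalid. */
void mark_page(valid_page_field_t *f, uint32_t page_offset, bool valid)
{
    uint32_t word = page_offset / 32u;
    uint32_t bit  = page_offset % 32u;
    if (valid)
        f->bits[word] |=  (1u << bit);
    else
        f->bits[word] &= ~(1u << bit);
}

/* Test whether the page at the given offset is a valid page. */
bool page_is_valid(const valid_page_field_t *f, uint32_t page_offset)
{
    return (f->bits[page_offset / 32u] >> (page_offset % 32u)) & 1u;
}
```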
  • the valid-page-position table can be stored in the volatile memory 120 .
  • the valid-page-position table can be updated easily during accessing the flash chips.
  • the valid-page-position table can be loaded from the volatile memory 120 and stored in one or more of the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N before shutting down the memory apparatus 100 .
  • the valid-page-position table can be stored in one or more link blocks of the NV memory elements 140 _ 0 , 140 _ 1 , . . . , and 140 _N. In this way, the valid-page-position table can be preserved while the memory apparatus 100 shuts down. While turning on the memory apparatus 100 next time, the valid-page-position table can be easily obtained from the link block(s).
  • the valid-page-position table and global page address linking table can be loaded from the volatile memory 120 and stored in the NV memory elements from time to time.
  • the valid-page-position table and global page address linking table can be stored after every predetermined time period (e.g. every 2 seconds) or after every predetermined number of accessing operations (e.g. every 100 write operations).
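  • For illustrative purposes only, a minimal C sketch of such a periodic store policy is given below, flushing the tables whenever either a time threshold or a write-count threshold is reached; the thresholds and names are hypothetical, and flush_tables_to_nv is only a stub for copying the tables from the volatile memory 120 into the NV memory elements.

```c
#include <stdint.h>

/* Hypothetical thresholds, echoing the examples above. */
#define FLUSH_PERIOD_MS   2000u   /* e.g. every 2 seconds            */
#define FLUSH_WRITE_COUNT 100u    /* e.g. every 100 write operations */

static uint32_t writes_since_flush;
static uint32_t ms_since_flush;

/* Stub: copy the valid-page-position table and the global page address linking
 * table from the volatile memory into the link block(s) of the NV memory.     */
static void flush_tables_to_nv(void) { writes_since_flush = 0; ms_since_flush = 0; }

/* Called once per completed write operation. */
void on_write_completed(void)
{
    if (++writes_since_flush >= FLUSH_WRITE_COUNT)
        flush_tables_to_nv();
}

/* Called once per millisecond tick. */
void on_timer_tick_1ms(void)
{
    if (++ms_since_flush >= FLUSH_PERIOD_MS)
        flush_tables_to_nv();
}
```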
  • Suppose that the latest valid-page-position table and global page address linking table are not loaded from the volatile memory 120 and stored in the NV memory elements before the memory apparatus 100 shuts down. Then, the memory apparatus 100 is turned on again.
  • the processing unit 110 can search the blocks that have been accessed after the latest updating of the valid-page-position table and global page address linking table in the NV memory elements.
  • the processing unit 110 searches logical page addresses stored in each page of these blocks to build and update the global page address linking table. After that, the valid-page-position table can be built according to the updated global page address linking table.
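  • For illustrative purposes only, the C sketch below outlines this recovery step: for each block that may have been accessed after the latest table update (supplied by the caller), the logical page address stored with each programmed page is replayed in programming order, so that later pages override earlier links for the same logical page; the valid-page-position table can then be rebuilt from the result. The names and the read_spare_logical_page stub are hypothetical.

```c
#include <stdint.h>

#define PAGES_PER_BLOCK 128u
#define TOTAL_PAGES     524288u
#define NO_LPN          0xFFFFFFFFu   /* returned for a page that was never programmed */

static uint32_t global_page_table[TOTAL_PAGES];   /* logical page -> physical page */

/* Stub: a real controller would read the logical page address that was stored
 * in the spare byte region of the physical page when the page was programmed. */
static uint32_t read_spare_logical_page(uint32_t phy_page)
{
    (void)phy_page;
    return NO_LPN;
}

/* Recovery sketch: replay the pages of the supplied blocks in programming
 * order so that later pages override earlier links for the same logical page.
 * The valid-page-position table can then be rebuilt from the updated
 * global page address linking table.                                          */
void rebuild_global_table(const uint32_t *blocks, uint32_t n_blocks)
{
    for (uint32_t i = 0; i < n_blocks; i++) {
        uint32_t first = blocks[i] * PAGES_PER_BLOCK;
        for (uint32_t off = 0; off < PAGES_PER_BLOCK; off++) {
            uint32_t lpn = read_spare_logical_page(first + off);
            if (lpn != NO_LPN && lpn < TOTAL_PAGES)
                global_page_table[lpn] = first + off;
        }
    }
}
```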
  • the present invention method and apparatus can greatly save the time of building logical-to-physical page address linking table(s), such as the global page address linking table. Therefore, the present invention provides better performance than the related art.
  • the present invention method and apparatus can record the usage information during accessing the pages, and therefore can efficiently manage the usage of all blocks according to the usage information. As a result, the arrangement of the spare region and the data region can be optimized.
  • managing the flash memory on the basis of pages brings many advantages. For example, the speed of random writes is greatly improved, and the write amplification index can be greatly reduced. Without introducing side effects such as those of the related art, managing the flash memory on the basis of pages can be much simpler and more intuitive than managing the flash memory on the basis of blocks, as long as the present invention is applied in a real implementation.

Abstract

A method for managing a memory apparatus including at least one non-volatile (NV) memory element includes: receiving a first access command from a host; analyzing the first access command to obtain a first host address; linking the first host address to a physical block; receiving a second access command from the host; and analyzing the second access command to obtain a second host address. For example, the method may further include: linking the second host address to the physical block, wherein a difference value of the first host address and the second host address is greater than a number of pages of the physical block. In another example, the method may further include: linking the first host address to at least a page of the physical block; and linking the second host address to at least a page of another physical block.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation application of U.S. patent application Ser. No. 12/471,462 filed on May 25, 2009. The U.S. patent application Ser. No. 12/471,462 claims the benefit of U.S. provisional application No. 61/112,173, which was filed on Nov. 6, 2008, and is entitled “METHOD FOR BUILDING A LOGICAL-TO-PHYSICAL ADDRESS LINKING TABLE OF A MEMORY APPARATUS”. The U.S. patent application Ser. No. 12/471,462 further claims the benefit of U.S. provisional application No. 61/140,850, which was filed on Dec. 24, 2008, and entitled “METHOD FOR MANAGING AND ACCESSING A STORAGE APPARATUS”.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to flash memory control, and more particularly, to a method for managing a memory apparatus, and an associated memory apparatus thereof.
  • 2. Description of the Prior Art
  • While a host is accessing a memory apparatus (e.g. a solid state drive, SSD), the host typically sends an accessing command and at least a corresponding logical address to the memory apparatus. The controller of the memory apparatus receives the logical address and transfers the logical address into a physical address by utilizing a logical-to-physical address linking table. Thus, the controller accesses at least one physical memory element (or memory component) of the memory apparatus by utilizing the physical address. For example, the memory element can be implemented with one or more flash memory chips (which can be referred to as flash chips for simplicity).
  • The logical-to-physical address linking table can be built in accordance with a memory unit in the memory element. For example, the logical-to-physical address linking table can be built by blocks or by pages. When the logical-to-physical address linking table is built by blocks, the logical-to-physical address linking table can be referred to as the logical-to-physical block address linking table. When the logical-to-physical address linking table is built by pages, the logical-to-physical address linking table can be referred to as the logical-to-physical page address linking table. In addition, a logical-to-physical page address linking table comprising linking relationships about pages of a plurality of blocks (or all blocks) in the memory apparatus can be referred to as the global page address linking table.
  • Assume that the memory element has X physical blocks, and each physical block has Y physical pages. In a situation where the logical-to-physical address linking table is built by blocks, the associated logical-to-physical block address linking table can be built by reading a logical block address stored in a page of each physical block and recording the relationship between the physical block and the associated logical block. In order to build the logical-to-physical block address linking table, X pages respectively corresponding to the X physical blocks have to be read, where the time required for this is assumed to be x seconds.
  • In a situation where the logical-to-physical address linking table is built by pages, the associated global page address linking table can be built by reading a logical page address stored in each physical page of all physical blocks and recording the relationship between the physical page and the associated logical page. In order to build the global page address linking table, at least X·Y pages have to be read, requiring x·Y seconds. If a block has 1024 pages, the time required for building the global page address linking table is 1024 times the time required for building the logical-to-physical block address linking table, i.e. 1024·x seconds, which is an unacceptably long processing time. That is, when implementing the global page address linking table in this way, the overall performance of accessing the memory apparatus is retarded. Therefore, a novel method is required for efficiently building the logical-to-physical address linking table, and related methods for managing a memory apparatus operated under the novel method are also required.
  • SUMMARY OF THE INVENTION
  • It is therefore an objective of the present invention to provide a method for managing a memory apparatus, and to provide an associated memory apparatus thereof, in order to solve the above-mentioned problem.
  • It is another objective of the present invention to provide a method for managing a memory apparatus, and to provide an associated memory apparatus thereof, in order to optimize the arrangement of a spare region and a data region of the memory apparatus.
  • According to at least one preferred embodiment of the present invention, a method for managing a memory apparatus is disclosed. The memory apparatus comprises at least one non-volatile (NV) memory element, each of which comprises a plurality of blocks. The method comprises: receiving a first access command from a host; analyzing the first access command to obtain a first host address; linking the first host address to a physical block; receiving a second access command from the host; analyzing the second access command to obtain a second host address; and linking the second host address to the physical block, wherein a difference value of the first host address and the second host address is greater than a number of pages of the physical block.
  • According to at least one preferred embodiment of the present invention, a method for managing a memory apparatus is disclosed. The memory apparatus comprises at least one NV memory element, each of which comprises a plurality of blocks. The method comprises: receiving a first access command from a host; analyzing the first access command to obtain a first host address; linking the first host address to at least a page of a first physical block; receiving a second access command from the host; analyzing the second access command to obtain a second host address; and linking the second host address to at least a page of a second physical block that is different from the first physical block, wherein a difference value of the first host address and the second host address is smaller than a number of pages of the physical block.
  • It is an advantage of the present invention that, in contrast to the related art, the present invention method and apparatus can greatly save the time of building logical-to-physical address linking table(s), such as the time of building a global page address linking table. Therefore, the present invention provides better performance than the related art.
  • It is another advantage of the present invention that the present invention method and apparatus can record the usage information during accessing the pages, and therefore can efficiently manage the usage of all blocks according to the usage information. As a result, the arrangement of the spare region and the data region can be optimized.
  • These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a memory apparatus according to a first embodiment of the invention.
  • FIG. 2A illustrates a local page address linking table within a block of one of the NV memory elements shown in FIG. 1, where the NV memory element of this embodiment is a flash chip.
  • FIG. 2B compares the one-dimensional (1-D) array illustration and the two-dimensional (2-D) array illustration of the local page address linking table shown in FIG. 2A.
  • FIGS. 3A-3F respectively illustrate exemplary versions of a global page address linking table of the memory apparatus shown in FIG. 1 according to an embodiment of the present invention.
  • FIG. 4 illustrates a local page address linking table within a block of the flash chip shown in FIG. 2A according to an embodiment of the present invention.
  • FIGS. 5A-5B respectively illustrate exemplary versions of the global page address linking table of the memory apparatus shown in FIG. 1 according to the embodiment shown in FIG. 4.
  • FIG. 6 illustrates an arrangement of one of the NV memory elements shown in FIG. 1 according to an embodiment of the present invention, where the NV memory element of this embodiment is a flash chip.
  • FIGS. 7A-7D illustrate physical addresses of the NV memory elements shown in FIG. 1 according to an embodiment of the invention, where the NV memory elements of this embodiment are a plurality of flash chips.
  • FIG. 8 illustrates a data region and a spare region for managing the flash chips shown in FIGS. 7A-7D.
  • FIGS. 9A-9D respectively illustrate exemplary versions of a global page address linking table of the embodiment shown in FIGS. 7A-7D.
  • FIGS. 10A-10F respectively illustrate exemplary versions of a valid page count table of the embodiment shown in FIGS. 7A-7D.
  • FIG. 11 illustrates a valid-page-position table of the flash chips shown in FIGS. 7A-7D according to an embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Please refer to FIG. 1, which illustrates a block diagram of a memory apparatus 100 according to a first embodiment of the invention. The memory apparatus 100 comprises a processing unit 110, a volatile memory 120, a transmission interface 130, a plurality of non-volatile (NV) memory elements 140_0, 140_1, . . . , and 140_N (e.g. flash chips), and a bus 150. Typically, a host (not shown in FIG. 1) can be arranged to access the memory apparatus 100 through the transmission interface 130 after the transmission interface 130 is coupled to the host. For example, the host can represent a personal computer such as a laptop computer or a desktop computer.
  • The processing unit 110 is arranged to manage the memory apparatus 100 according to a program code (not shown in FIG. 1) embedded in the processing unit 110 or received from outside the processing unit 110. For example, the program code can be a hardware code embedded in the processing unit 110, such as a ROM code. In another example, the program code can be a firmware code received from outside the processing unit 110. More particularly, the processing unit 110 is utilized for controlling the volatile memory 120, the transmission interface 130, the NV memory elements 140_0, 140_1, . . . , and 140_N, and the bus 150. The processing unit 110 of this embodiment can be an ARM processor or an ARC processor. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to different variations of this embodiment, the processing unit 110 can be other kinds of processors.
  • In addition, the volatile memory 120 is utilized for storing a global page address linking table, data accessed by the host (not shown), and other required information for accessing the memory apparatus 100. The volatile memory 120 of this embodiment can be a DRAM or an SRAM. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to different variations of this embodiment, the volatile memory 120 can be other kinds of volatile memories.
  • According to this embodiment, the transmission interface 130 shown in FIG. 1 is utilized for transmitting data and commands between the host and the memory apparatus 100, where the transmission interface 130 complies with a particular communication standard such as the Serial Advanced Technology Attachment (SATA) standard, the Parallel Advanced Technology Attachment (PATA) standard, or the Universal Serial Bus (USB) standard. For example, the memory apparatus 100 is a solid state drive (SSD) installed within the host, and the particular communication standard can be some communication standard typically utilized for implementing internal communication of the host, such as the SATA standard or the PATA standard. In another example, the memory apparatus 100 is an SSD and is positioned outside the host, and the particular communication standard can be some communication standard typically utilized for implementing external communication of the host, such as the USB standard. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to different variations of this embodiment, the memory apparatus 100 can be a portable memory device such as a memory card, and the particular communication standard can be some communication standards typically utilized for implementing an input/output interface of a memory card, such as the Secure Digital (SD) standard or the Compact Flash (CF) standard.
  • In addition, the NV memory elements 140_0, 140_1, . . . , and 140_N are utilized for storing data, where the NV memory elements 140_0, 140_1, . . . , and 140_N can be, but not limited to, NAND flash chips. The bus 150 is utilized for coupling the processing unit 110, the volatile memory 120, the transmission interface 130, and the NV memory elements 140_0, 140_1, . . . , and 140_N, and for communication thereof.
  • According to this embodiment, the processing unit 110 can provide at least one block of the memory apparatus 100 with at least one local page address linking table within the memory apparatus 100, where the local page address linking table comprises linking relationships between physical page addresses and logical page addresses of a plurality of pages. In this embodiment, the processing unit 110 builds the local page address linking table during programming/writing operations of the memory apparatus 100. The processing unit 110 can further build the global page address linking table mentioned above according to the local page address linking table. For example, the processing unit 110 reads a first linking relationship between a first physical page address and a first logical page address from the at least one local page address linking table, and then records the first linking relationship into the global page address linking table. The processing unit 110 can further read a second linking relationship between a second physical page address and the first logical page address from the at least one local page address linking table, and then record the second linking relationship into the global page address linking table in order to update the global page address linking table.
  • More particularly, the processing unit 110 provides a plurality of blocks of the memory apparatus 100 with a plurality of local page address linking tables within the memory apparatus 100, respectively. That is, the aforementioned at least one local page address linking table comprises a plurality of local page address linking tables. The processing unit 110 can further build the global page address linking table mentioned above according to the local page address linking tables. More specifically, the processing unit 110 can read one of the local page address linking tables to update the global page address linking table mentioned above. For example, the first linking relationship of a first physical page is read from a first local page address linking table of the local page address linking tables, and the second linking relationship of a second physical page is read from a second local page address linking table of the local page address linking tables. Implementation details of the local page address linking tables are further described by referring to FIG. 2A.
  • FIG. 2A illustrates a local page address linking table within a block of the NV memory element 140_0, where the NV memory element 140_0 of this embodiment is referred to as a flash chip 0 for simplicity. As shown in FIG. 2A, the flash chip 0 comprises a plurality of blocks, such as blocks 0, 1, 2, . . . , M in this embodiment. Please note that a block is an erasing unit. In other words, when erasing data is required, the processing unit 110 erases all data stored in the block at a time. In addition, a block, such as the block 0 shown in FIG. 2A, comprises a plurality of pages. For example, the block 0 of the flash chip 0 comprises 128 pages. Within a block such as the block 0, the pages are divided into two areas, a data area for storing data and a table area for storing a local page address linking table 0. The pages in the data area of the block can be referred to as the data pages of the block.
  • According to this embodiment, the page amount of the data area and the page amount of the table area can be determined as required. For example, pages 0, 1, 2, . . . , 126 are utilized for storing data and the remaining page of the block is utilized for storing the local page address linking table 0. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the data area may comprise less than 127 pages, and the table area may comprise two or more pages. According to another variation of this embodiment, the total page amount of the block, the page amount of the data area, and the page amount of the table area may vary. Please note that a page is a programming/writing unit. In other words, when programming/writing data is required, the processing unit 110 programs/writes a page of data into a page at a time. According to this embodiment, the NV memory elements 140_0, 140_1, . . . , and 140_N shown in FIG. 1 are respectively referred to as the flash chips 0, 1, . . . , and N, where each block of the NV memory elements 140_0, 140_1, . . . , 140_N may have a local page address linking table. For simplicity, only the local page address linking table 0 of the block 0 of the flash chip 0 is illustrated in FIG. 2A since the functions/operations of each local page address linking table are similar to each other.
  • In this embodiment, the local page address linking table 0 is built when all the data pages in the block 0 have been programmed, namely fully programmed. Before the data pages in the block 0 are fully programmed, however, the processing unit 110 temporarily stores a temporary local page address linking table 0 in the volatile memory 120, and further updates the temporary local page address linking table 0 when any linking relationship between a physical page address and a logical page address in the block 0 is changed.
  • According to this embodiment, the ranking of a field (entry) of the temporary/non-temporary local page address linking table (e.g. the temporary local page address linking table 0 or the local page address linking table 0) represents a physical page address, and the content of this field represents an associated logical page address. For example, suppose that iP and jP are respectively the row number and the column number of the illustrative table location (iP, jP) of the temporary/non-temporary local page address linking table shown in FIG. 2A and iP=0, 1, . . . , etc. and jP=0, 1, . . . , etc. In this two-dimensional (2-D) array illustration of the temporary/non-temporary local page address linking table shown in FIG. 2A, the illustrative table location (iP, jP) corresponding to the (iP*4+jP)th field represents a physical page address PPN, which can be described as follows:

  • PPN=(PBN*DPC+iP*4+jP);
  • where the notation PBN stands for the physical block number of the physical block under discussion (e.g. PBN=0, 1, 2, . . . , etc. for the blocks 0, 1, 2, . . . , etc., respectively), and the notation DPC stands for the data page count of each block (e.g. 127 in this embodiment). This is for illustrative purposes only, and is not meant to be a limitation of the present invention. For better comprehension, the temporary/non-temporary local page address linking table can be illustrated as a single column, as shown in the right half of FIG. 2B, where "PHY Page" stands for "physical page", and "LOG Page" stands for "logical page". Given that iP is still the row number and iP=0, 1, . . . , etc., within the temporary/non-temporary local page address linking table of the block PBN of this one-dimensional (1-D) array illustration shown in the right half of FIG. 2B, the illustrative table location iP corresponding to the iP-th field represents a physical page address (PBN*DPC+iP). That is, for this 1-D array illustration, the above equation can be re-written as follows:

  • PPN=(PBN*DPC+iP).
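  • For illustrative purposes only, the two equations above can be checked with the short C program below, where DPC is 127 as in this embodiment; the function names are hypothetical.

```c
#include <stdio.h>

#define DPC 127u   /* data page count of each block in this embodiment */

/* 2-D illustration: table location (iP, jP) of block PBN. */
static unsigned int ppn_2d(unsigned int pbn, unsigned int ip, unsigned int jp)
{
    return pbn * DPC + ip * 4u + jp;
}

/* 1-D illustration: the iP-th field of block PBN. */
static unsigned int ppn_1d(unsigned int pbn, unsigned int ip)
{
    return pbn * DPC + ip;
}

int main(void)
{
    /* Table location (0, 2) of block 0 is the third field: physical page 2. */
    printf("PPN(PBN=0, iP=0, jP=2) = %u\n", ppn_2d(0, 0, 2));
    /* The first field of block 1 corresponds to physical page 1*127 + 0 = 127. */
    printf("PPN(PBN=1, iP=0)       = %u\n", ppn_1d(1, 0));
    return 0;
}
```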
  • Please note that, in this embodiment, a range of the logical page addresses in the local page address linking table 0 is not greater than the number of pages in the block 0 (i.e. 128 in this embodiment). This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, a range of the logical page addresses in a local page address linking table such as the local page address linking table 0 can be greater than the number of pages in a block such as the block 0.
  • Within the temporary local page address linking table 0 or the local page address linking table 0 shown in FIG. 2A, the illustrative table location (0, 0) (i.e. the upper-left location) corresponding to the first field represents the physical page address 0x0000, the illustrative table location (0, 1) corresponding to the second field represents the physical page address 0x0001, the illustrative table location (0, 2) corresponding to the third field represents the physical page address 0x0002, the illustrative table location (0, 3) corresponding to the fourth field represents the physical page address 0x0003, the illustrative table location (1, 0) corresponding to the fifth field represents the physical page address 0x0004, and so on.
  • According to the embodiment shown in FIG. 2A, when the host sends a command 0 to the processing unit 110 in order to program data 0 at a logical page address 0x0002, the processing unit 110 programs the data 0 and the logical page address 0x0002 into the page 0 of the block 0 of the flash chip 0, wherein the data 0 is programmed in a data byte region (labeled "DBR") of the page 0, and the logical page address 0x0002 is programmed in a spare byte region (labeled "SBR") of the page 0 as spare information. In addition, the processing unit 110 writes the logical page address 0x0002 into the first field of the temporary local page address linking table 0 (or the illustrative table location (0, 0) thereof in this embodiment, i.e. the illustrative table location of the first column and the first row) to thereby indicate that the logical page address 0x0002 links/maps to the page 0 of the block 0 of the flash chip 0, whose physical page address is 0x0000.
  • Similarly, when the host then sends a command 1 to the processing unit 110 in order to program data 1 at a logical page address 0x0001, the processing unit 110 programs the data 1 and the logical page address 0x0001 into the page 1 of the block 0 of the flash chip 0, wherein the data 1 is programmed in a data byte region (labeled "DBR") of the page 1, and the logical page address 0x0001 is programmed in a spare byte region (labeled "SBR") of the page 1 as spare information. In addition, the processing unit 110 writes the logical page address 0x0001 into the second field of the temporary local page address linking table 0 (or the illustrative table location (0, 1) thereof in this embodiment, i.e. the illustrative table location of the second column and the first row) to thereby indicate that the logical page address 0x0001 links/maps to page 1 of block 0 of flash chip 0, whose physical page address is 0x0001. Afterward, when the host sends a command 2 to the processing unit 110 in order to program data 2 at the logical page address 0x0002 again, the processing unit 110 programs the data 2 and the logical page address 0x0002 into the page 2 of the block 0, wherein the data 2 is programmed in a data byte region (labeled "DBR") of the page 2, and the logical page address 0x0002 is programmed in a spare byte region (labeled "SBR") of the page 2 as spare information. In addition, the processing unit 110 writes the logical page address 0x0002 into the third field of the temporary local page address linking table 0 (or the illustrative table location (0, 2) thereof in this embodiment, i.e. the illustrative table location of the third column and the first row) to thereby update that the logical page address 0x0002 links/maps to the page 2 of the block 0 of the flash chip 0, whose physical page address is 0x0002. Similar operations for the subsequent pages are not repeated in detail for simplicity.
  • As a result of the above operations, referring to the upper-right portion of FIG. 2A, a series of logical page addresses {0x0002, 0x0001, 0x0002, 0x0005, 0x0003, 0x0007, 0x0010, 0x0008, . . . , 0x0000, 0x0009, 0x0004} are written in the temporary local page address linking table 0. When all the data pages in the block 0 (i.e. pages 0, 1, 2, . . . , 126 in this embodiment) have been programmed, the processing unit 110 copies the temporary local page address linking table 0 to build the local page address linking table 0. More specifically, the processing unit 110 programs the local page address linking table 0 into the table area (i.e. the remaining page 127) of the block 0 of the flash chip 0 in this embodiment. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the processing unit 110 can program a local page address linking table for a portion of data pages in a block, rather than all data pages of the block.
  • In this variation, after programming a first portion of data pages of a specific block, the processing unit 110 can program a first local page address linking table for the first portion of data pages, where the first local page address linking table is positioned next to the first portion of data pages. After programming a second portion of data pages of the specific block, the processing unit 110 can program a second local page address linking table for the second portion of data pages. For example, the second local page address linking table is positioned next to the second portion of data pages. In another example, the second local page address linking table is positioned at the end (e.g. the last page) of the specific block. In another example, the second local page address linking table is positioned at the beginning (e.g. the first page) of the block next to the specific block. In another example, the second local page address linking table is positioned at another page (or other pages) of the block next to the specific block.
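  • For illustrative purposes only, the C sketch below models how the temporary local page address linking table can be maintained during programming, as in the flow described above: each programmed data page appends its logical page address to the field corresponding to that page, and once all 127 data pages are programmed the table itself is written into the remaining page of the block. The names are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define DATA_PAGES_PER_BLOCK 127u   /* pages 0-126 hold data; page 127 holds the table */
#define UNUSED_FIELD         0xFFFFu

/* Temporary local page address linking table kept in the volatile memory 120:
 * the ranking of a field is the physical page offset inside the open block,
 * and its content is the logical page address programmed into that page.     */
static uint16_t temp_local_table[DATA_PAGES_PER_BLOCK];
static uint32_t next_free_offset;

/* Called when a new block is opened for programming. */
void open_block(void)
{
    memset(temp_local_table, 0xFF, sizeof(temp_local_table));  /* all fields = UNUSED_FIELD */
    next_free_offset = 0;
}

/* Called for every programmed data page: the logical page address is written
 * both into the spare byte region of the page (not shown) and into the field
 * of the temporary table that corresponds to the page.  Returns 1 when the
 * block is full, i.e. when the table itself should be programmed into the
 * remaining page of the block.                                                */
int record_programmed_page(uint16_t logical_page)
{
    if (next_free_offset >= DATA_PAGES_PER_BLOCK)
        return 1;                                   /* block already full */
    temp_local_table[next_free_offset++] = logical_page;
    return next_free_offset == DATA_PAGES_PER_BLOCK;
}
```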
  • FIGS. 3A-3F respectively illustrate exemplary versions of the aforementioned global page address linking table of the memory apparatus 100 according to an embodiment of the present invention. When building the global page address linking table of the memory apparatus 100, the processing unit 110 reads each of the local page address linking tables respectively corresponding to the blocks of the memory apparatus 100 to build the global page address linking table. For example, within the memory apparatus 100, if only the blocks 0 and 1 of the flash chip 0 have been fully programmed, and if the local page address linking table 0 in the block 0 and the local page address linking table 1 in the block 1 have been built, the processing unit 110 reads the local page address linking tables 0 and 1 to build the global page address linking table.
  • According to this embodiment, referring to the left half of FIG. 3A first, the ranking of a field of the global page address linking table represents a logical page address, and the content of this field represents an associated physical page address. For example, given that iL and jL are respectively the row number and the column number of the illustrative table location (iL, jL) of the global page address linking table shown in the left half of FIG. 3A and iL=0, 1, . . . , etc. and jL=0, 1, . . . , etc. in this 2-D array illustration, the illustrative table location (iL, jL) corresponding to the (iL*4+jL)th field represents a logical page address (iL*4+jL). This is for illustrative purposes only, and is not meant to be a limitation of the present invention. For better comprehension, the global page address linking table can be illustrated as a single column, as shown in the right half of FIG. 3A. Given that iL is still the row number and iL=0, 1, . . . , etc., within this 1-D array illustration of the global page address linking table, the illustrative table location iL corresponding to the iL th field represents a logical page address iL.
  • Within the global page address linking table shown in the left half of FIG. 3A, the illustrative table location (0, 0) (i.e. the upper-left location) corresponding to the first field represents the logical page address 0x0000, the illustrative table location (0, 1) corresponding to the second field represents the logical page address 0x0001, the illustrative table location (0, 2) corresponding to the third field represents the logical page address 0x0002, the illustrative table location (0, 3) corresponding to the fourth field represents the logical page address 0x0003, the illustrative table location (1, 0) corresponding to the fifth field represents the logical page address 0x0004, and so on.
  • When building the global page address linking table, the processing unit 110 reads the first field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0002, and therefore determines that the logical page address 0x0002 links to the page 0 of the block 0 of the flash chip 0, whose physical page address is 0x0000. As shown in FIG. 3A, the processing unit 110 writes the physical page address 0x0000 (PHY Page 0x0000) into the third field of the global page address linking table (i.e. the illustrative table location (0, 2) of the 2-D array illustration thereof) to indicate that the logical page address 0x0002 (LOG Page 0x0002) links to the physical page address 0x0000.
  • Next, the processing unit 110 reads the second field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0001, and therefore determines that the logical page address 0x0001 links to the page 1 of the block 0 of the flash chip 0, whose physical page address is 0x0001. As shown in FIG. 3B, the processing unit 110 writes the physical page address 0x0001 into the second field of the global page address linking table to indicate that the logical page address 0x0001 (LOG Page 0x0001) links to the physical page address 0x0001 (PHY Page 0x0001).
  • Then, the processing unit 110 reads the third field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0002, and therefore determines that the logical page address 0x0002 links to the page 2 of the block 0 of the flash chip 0, whose physical page address is 0x0002. As shown in FIG. 3C, the processing unit 110 writes (or updates) the physical page address 0x0002 into the third field of the global page address linking table to indicate that the logical page address 0x0002 (LOG Page 0x0002) links to the physical page address 0x0002 (PHY Page 0x0002).
  • Subsequently, the processing unit 110 reads the fourth field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0005, and therefore determines that the logical page address 0x0005 links to the page 3 of the block 0 of the flash chip 0, whose physical page address is 0x0003. As shown in FIG. 3D, the processing unit 110 writes the physical page address 0x0003 into the sixth field of the global page address linking table to indicate that the logical page address 0x0005 (LOG Page 0x0005) links to the physical page address 0x0003 (PHY Page 0x0003).
  • Afterward, the processing unit 110 reads the fifth field of the local page address linking table 0 shown in FIG. 2A and obtains the logical page address 0x0003, and therefore determines that the logical page address 0x0003 links to the page 4 of the block 0 of the flash chip 0, whose physical page address is 0x0004. As shown in FIG. 3E, the processing unit 110 writes the physical page address 0x0004 into the fourth field of the global page address linking table to indicate that the logical page address 0x0003 (LOG Page 0x0003) links to the physical page address 0x0004 (PHY Page 0x0004). Similar operations for the subsequent linking relationships are not repeated in detail. After reading all fields of the local page address linking table 0 shown in FIG. 2A and filling the corresponding physical page addresses into the associated fields of the global page address linking table, the processing unit 110 builds the global page address linking table as shown in FIG. 3F.
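  • For illustrative purposes only, the C sketch below condenses the table-building walk-through above: replaying the fields of a local page address linking table in order fills the global page address linking table, and a later field for the same logical page simply overrides the earlier link (which is why LOG Page 0x0002 ends up linked to PHY Page 0x0002 rather than 0x0000). The names and sizes are hypothetical.

```c
#include <stdint.h>

#define DATA_PAGES_PER_BLOCK 127u
#define TOTAL_LOGICAL_PAGES  1024u    /* hypothetical size, for the sketch only */
#define UNUSED_FIELD         0xFFFFu

static uint32_t global_page_table[TOTAL_LOGICAL_PAGES];   /* logical -> physical */

/* Replay one local page address linking table: field i of block pbn holds the
 * logical page address programmed into physical page pbn*127 + i, so fields
 * read later override earlier links for the same logical page.                */
void apply_local_table(uint32_t pbn, const uint16_t *local_table, uint32_t used_fields)
{
    for (uint32_t i = 0; i < used_fields; i++) {
        uint16_t lpn = local_table[i];
        if (lpn != UNUSED_FIELD && lpn < TOTAL_LOGICAL_PAGES)
            global_page_table[lpn] = pbn * DATA_PAGES_PER_BLOCK + i;
    }
}
```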
  • FIG. 4 illustrates the local page address linking table 1 within the block 1 of the flash chip 0 according to an embodiment of the present invention. After reading all fields of the local page address linking table 0 shown in FIG. 2A and filling the corresponding physical page addresses into the associated fields of the global page address linking table as shown in FIG. 3F, the processing unit 110 further reads the local page address linking table 1 within the block 1 in order to complete the global page address linking table. Please note that, in this embodiment, the local page address linking table 1 is built when all data pages in the block 1 have been programmed. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, a local page address linking table can be built for a block when at least a data page (e.g. a data page or a plurality of pages) in this block have been programmed. In this variation, the local page address linking table is built for this block, and more particularly, for at least the data page. For example, the local page address linking table is built for a few data pages such as physical pages 0 and 1 of this block, where the local page address linking table for the physical pages 0 and 1 is built and stored in the subsequent physical page, i.e. the physical page 2. When building (or updating) the global page address linking table, in a situation where there is no local page address linking table found in the last page of this block, the processing unit 110 tries to find the last programmed page of this block. In this variation, the processing unit 110 searches back, starting from the last page, in order to find the last programmed page of this block. As a result, the processing unit 110 reads all fields of the local page address linking table from the last programmed page of this block and fills the corresponding physical page addresses into the associated fields of the global page address linking table, in order to complete/update the global page address linking table.
  • According to the embodiment shown in FIG. 4, the processing unit 110 reads the first field of the local page address linking table 1 and obtains the logical page address 0x0006, and therefore determines that the logical page address 0x0006 links to the page 0 of the block 1 of the flash chip 0, whose physical page address is 0x0127 in this embodiment. As shown in FIG. 5A, the processing unit 110 writes the physical page address 0x0127 into the seventh field of the global page address linking table to indicate that the logical page address 0x0006 (LOG Page 0x0006) links to the physical page address 0x0127 (PHY Page 0x0127).
  • Next, the processing unit 110 reads the second field of the local page address linking table 1 shown in FIG. 4 and obtains the logical page address 0x0002, and therefore determines that the logical page address 0x0002 links to the page 1 of the block 1 of the flash chip 0, whose physical page address is 0x0128. As shown in FIG. 5B, the processing unit 110 writes (or updates) the physical page address 0x0128 into the third field of the global page address linking table to indicate that the logical page address 0x0002 (LOG Page 0x0002) links to the physical page address 0x0128 (PHY Page 0x0128). Similar operations for the subsequent linking relationships are not repeated in detail. After reading all fields of the local page address linking tables 0 and 1 and filling the corresponding physical page addresses into the associated fields of the global page address linking table, the processing unit 110 completes the global page address linking table.
  • Instead of reading all pages (or memory units) of the NV memory elements 140_0, 140_1, . . . , and 140_N to build the global page address linking table, the processing unit 110 of this embodiment merely reads a small number of local page address linking tables within (or representing, but not within) the blocks that are fully or partially programmed. Therefore, memory apparatuses implemented according to the present invention surely have better efficiency than those implemented according to the related art.
  • According to a variation of this embodiment, in a situation where all data pages of all data blocks of the NV memory elements 140_0, 140_1, . . . , and 140_N are fully programmed, the processing unit 110 merely reads the local page address linking tables respectively corresponding to the data blocks to build the global page address linking table. If the NV memory elements 140_0, 140_1, . . . , and 140_N have XD data blocks in total, and each data block has YD data pages, the processing unit 110 reads XD local page address linking tables (whose data amount is typically less than XD pages in total) to build the global page address linking table, rather than reading XD·YD pages. In other words, the time required for building the global page address linking table according to the present invention is similar to the time required for building the global block address linking table.
  • According to another variation of this embodiment, in a situation where a particular block is not fully programmed (i.e. the particular block is partially programmed), at one time there is no local page address linking table within the particular block. In the volatile memory 120, however, there is a temporary local page address linking table of the particular block. The processing unit 110 of this variation can program/write the temporary local page address linking table to the particular block before shutting down the memory apparatus 100. For example, after the memory apparatus 100 begins a start-up process, the host can read the local page address linking table stored in the particular block, in order to build or update the global page address linking table. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. In another example, after the memory apparatus 100 begins a start-up process, the processing unit 110 can read the pages programmed in the particular block, and more particularly, the spare byte region of each page programmed in the particular block, in order to build or update the global page address linking table.
  • In a situation where the processing unit 110 reads the pages programmed in the particular block to build or update the global page address linking table, the processing unit 110 has to read less than YD pages of data from the particular block. As a result, for completing the global page address linking table, the data amount that the processing unit 110 has to read is less than (XFP+YPP) pages, given that the NV memory elements 140_0, 140_1, . . . , and 140_N have XFP fully programmed blocks in total and further have a partially programmed block having YPP programmed data pages. Therefore, in regard to building the global page address linking table, memory apparatuses implemented according to the present invention still have better efficiency than those implemented according to the related art.
  • According to different variations of the embodiments mentioned above, the global page address linking table can be built during any start-up process of the memory apparatus 100 or at any time in response to a request from a user.
  • According to different variations of the embodiments mentioned above, the global page address linking table can be divided into a plurality of partial tables stored in one or more of the NV memory elements (e.g. the partial tables are respectively stored in the NV memory elements 140_0, 140_1, . . . , and 140_N). Each divided partial table can be referred as a sub-global page address linking table. The processing unit 110 can read and store at least one sub-global page address linking table (e.g. a sub-global page address linking table, some sub-global page address linking tables, or all the sub-global page address linking tables) of the global page address linking table into the volatile memory 120, depending on the size of the global page address linking table and the size of the volatile memory 120 or depending on some requirements. The processing unit 110 can utilize the sub-global page address linking table stored in the volatile memory 120 to perform the logical-to-physical address transferring operations of the aforementioned embodiments.
  • FIG. 6 illustrates an arrangement of the NV memory element 140_0 according to an embodiment of the present invention, where the NV memory element 140_0 of this embodiment is referred to as the flash chip 0 as mentioned above. As shown in FIG. 6, a page comprises a plurality of sectors, e.g. sectors 0, 1, 2, and 3. A sector is the minimal read unit, which can be 512 bytes in this embodiment. In other words, the processing unit 110 can read one sector or a plurality of sectors during a reading operation.
  • FIGS. 7A-7D illustrate the physical addresses of the flash chips 0, 1, . . . , and N according to an embodiment of the invention, where N=3 and M=1023 in this embodiment. As the physical addresses of this embodiment may fall within a range that is wider than the range [0x0000, 0xFFFF] utilized in some embodiments disclosed above, the physical addresses are illustrated with the decimal numeral system hereinafter for simplicity. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the physical addresses can be illustrated with the hexadecimal numeral system, where the physical addresses may have more digits than those in some embodiments disclosed above. According to another variation of this embodiment, the physical addresses can be illustrated with another numeral system when needed.
  • Regarding the physical block addresses, the first block of the flash chip 0 is regarded as the first block of the flash chips 0-3, and is addressed as the physical block address 0, and therefore, can be referred to as PHY BLK 0, where “PHY BLK” stands for “physical block”. The last block of the flash chip 0 is regarded as the 1024th block of the flash chips 0-3, and is addressed as the physical block address 1023, and therefore, can be referred to as PHY BLK 1023. The first block of the flash chip 1 is regarded as the 1025th block of the flash chips 0-3, and is addressed as the physical block address 1024, and therefore, can be referred to as PHY BLK 1024, and so on. The last block of the flash chip 3 is regarded as the 4096th block of the flash chips 0-3, and is addressed as the physical block address 4095, and therefore, can be referred to as PHY BLK 4095. In this embodiment, the blocks of the flash chips 0-3 comprise 4 sets of PHY BLKs {0, 1, . . . , 1023}, {1024, 1025, . . . , 2047}, {2048, 2049, . . . , 3071}, and {3072, 3073, . . . , 4095}, i.e. 4096 PHY BLKs in total.
  • Regarding the physical page addresses, the first page of PHY BLK 0 is regarded as the first page of the flash chips 0-3, and is addressed as the physical page address 0, and therefore, can be referred to as PHY Page 0. The last page of PHY BLK 0 is regarded as the 128th page of the flash chips 0-3, and is addressed as the physical page address 127, and therefore, can be referred to as PHY Page 127. The first page of PHY BLK 1 is regarded as the 129th page of the flash chips 0-3, and is addressed as the physical page address 128, and therefore, can be referred to as PHY Page 128, and so on. The last page of PHY BLK 4095 is regarded as the 524288th page of the flash chips 0-3, and is addressed as the physical page address 524287, and therefore, can be referred to as PHY Page 524287. In this embodiment, the pages of the flash chips 0-3 comprise 4096 sets of PHY Pages {0, 1, . . . , 127}, {128, 129, . . . , 255}, . . . , and {524160, 524161, . . . , 524287}, i.e. 524288 PHY Pages in total.
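  • As a rough illustration of the address arithmetic described above, the following C sketch computes the global physical block address and the global physical page address from a chip index, a block index within the chip, and a page index within the block, assuming the geometry of this embodiment (1024 blocks per chip, 128 pages per block); the helper names phy_blk and phy_page are illustrative only.

#include <stdio.h>

#define BLOCKS_PER_CHIP 1024u   /* M + 1 = 1024 blocks in each flash chip */
#define PAGES_PER_BLOCK  128u   /* each physical block holds 128 pages    */

/* Global physical block address of block 'blk' inside flash chip 'chip'. */
static unsigned int phy_blk(unsigned int chip, unsigned int blk)
{
    return chip * BLOCKS_PER_CHIP + blk;
}

/* Global physical page address of page 'page' inside physical block 'pblk'. */
static unsigned int phy_page(unsigned int pblk, unsigned int page)
{
    return pblk * PAGES_PER_BLOCK + page;
}

int main(void)
{
    /* Last block of flash chip 3 is PHY BLK 4095; its last page is PHY Page 524287. */
    unsigned int pblk = phy_blk(3, 1023);
    printf("PHY BLK %u, PHY Page %u\n", pblk, phy_page(pblk, 127));
    return 0;
}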
  • FIG. 8 illustrates a data region and a spare region for managing the flash chips 0-3 shown in FIGS. 7A-7D. As shown in FIG. 8, the flash chips 0-3 are logically divided into the data region and the spare region. The data region is utilized for storing data, and may initially comprise PHY BLKs 2, 3, . . . , and 4095. The spare region is utilized for writing new data, where the spare region typically comprises erased blocks, and may initially comprise PHY BLKs 0 and 1. After a lot of accessing operations, the spare region may logically comprise a different set of physical blocks, and the data region may logically comprise the other physical blocks. For example, after a lot of accessing operations, the spare region may comprise PHY BLKs 4094 and 4095, and the data region may comprise PHY BLKs 0-4093. In another embodiment, the spare region may comprise PHY BLKs 0, 1024, 2048, and 3072, i.e. each of the flash chips 0-3 comprises at least a block logically belonging to the spare region. Please note that the number of blocks of the data region and the number of blocks of the spare region can be determined based upon user/designer requirements. For example, the spare region may comprise 4 PHY BLKs, and the data region may comprise 4092 PHY BLKs.
  • During writing/programming operations, the host sends a command C0 to the memory apparatus 100 in order to write 4 sectors of data, DS0-DS3, at corresponding host addresses 0000008-0000011. The volatile memory 120 temporarily stores data DS0-DS3. The processing unit 110 parses the command C0 to execute the writing/programming operation corresponding to the command C0. The processing unit 110 transfers the host addresses 0000008-0000011 into associated logical addresses. The processing unit 110 divides the host address 0000008 by the number of sectors of a page, i.e. 4 in this embodiment, and obtains a quotient 2 and a remainder 0. The quotient 2 means that the logical page address thereof is 2, and therefore, the logical page indicated by the logical page address 2 can be referred to as LOG Page 2. In addition, the remainder 0 means that the data DS0 should be stored in a first sector of a page. The processing unit 110 further divides the host address 0000008 by the number of sectors of a block, i.e. 512 in this embodiment, and obtains a quotient 0 and a remainder 8. The quotient 0 means that the logical block address thereof is 0, and therefore, the logical block indicated by the logical block address 0 can be referred to as LOG BLK 0, where “LOG BLK” stands for “logical block”.
  • In practice, when the host address is expressed with the binary numeral system, the dividing operations can be performed by truncating a portion of bits of the host address. For example, when dividing the host address 0000008 by 4, the processing unit 110 extracts the last two bits (i.e. two adjacent/continuous bits including the least significant bit (LSB)) from the binary expression of the host address to obtain the remainder 0, and extracts the other bits from this binary expression to obtain the quotient 2. In addition, when dividing the host address 0000008 by 512, the processing unit 110 can extract the last nine bits (i.e. nine adjacent/continuous bits including the LSB) from the binary expression of the host address to obtain the remainder 8, and extract the other bits from this binary expression to obtain the quotient 0. Therefore, in this embodiment, the host address 0000008 substantially comprises the logical page address 2 and the logical block address 0. Please note that, as the host address 0000008 inherently belongs to LOG Page 2 and inherently belongs to LOG BLK 0, the processing unit 110 of a variation of this embodiment can parse the host address 0000008 by bit-shifting, rather than really performing the dividing operations.
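  • The following C sketch illustrates the bit-shifting variant described above, assuming 4 sectors per page and 512 sectors per block as in this embodiment; parse_host_address is an illustrative helper, not a function named in the disclosure.

#include <stdio.h>

#define SECTORS_PER_PAGE_SHIFT  2u  /* 4 sectors per page: dividing by 4 is a shift by 2       */
#define SECTORS_PER_BLOCK_SHIFT 9u  /* 512 sectors per block: dividing by 512 is a shift by 9  */

/* Decompose a host (sector) address by bit-shifting instead of division. */
static void parse_host_address(unsigned int host_addr)
{
    unsigned int log_page   = host_addr >> SECTORS_PER_PAGE_SHIFT;                /* quotient  */
    unsigned int sector_off = host_addr & ((1u << SECTORS_PER_PAGE_SHIFT) - 1u);  /* remainder */
    unsigned int log_blk    = host_addr >> SECTORS_PER_BLOCK_SHIFT;
    printf("host %u -> LOG Page %u, sector %u of the page, LOG BLK %u\n",
           host_addr, log_page, sector_off, log_blk);
}

int main(void)
{
    parse_host_address(8);    /* LOG Page 2, first sector of the page, LOG BLK 0   */
    parse_host_address(512);  /* LOG Page 128, first sector of the page, LOG BLK 1 */
    return 0;
}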
  • Similarly, the processing unit 110 of this embodiment determines that the logical page addresses of the host addresses 0000009, 0000010, and 0000011 are all 2 (i.e. all of the host addresses 0000009, 0000010, and 0000011 inherently belong to LOG Page 2, or comprise the logical page address 2), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000009, 0000010, and 0000011 further inherently belong to LOG BLK 0, or comprise the logical block address 0). In addition, the data DS1, DS2, and DS3 should be respectively stored in the second, the third, and the fourth sectors of a page.
  • In this embodiment, since PHY BLK 0 is erased and is logically positioned in the spare region initially, the processing unit 110 pops PHY BLK 0 from the spare region, and writes/programs the data DS0-DS3 into the first, the second, the third, and the fourth sectors of PHY Page 0, respectively. The processing unit 110 further records 0 in the third field of the global page address linking table of this embodiment, in order to indicate that LOG Page 2 links to PHY Page 0. FIGS. 9A-9D respectively illustrate exemplary versions of the global page address linking table of this embodiment. The arrangement of the illustrative table locations of this embodiment is similar to that of FIGS. 3A-3F, and therefore, is not explained in detail for simplicity. Referring to the global page address linking table shown in FIG. 9A, the physical page address 0 has been written in the third field, which indicates that LOG Page 2 links to PHY Page 0. Alternatively, the physical page address 0 can be written in a corresponding field of a temporary local page address linking table thereof for indicating the linking relationship of the logical and physical addresses. Then, the global page address linking table can be updated accordingly. The implementation details of updating the global page address linking table according to the temporary local page address linking table are similar to those of the embodiments mentioned above. For simplicity, the following embodiments only illustrate that the global page address linking table is updated for reflecting a new logical-to-physical page address linking relationship; however, those skilled in the art will appreciate that the temporary local page address linking table can be updated for reflecting the new logical-to-physical page address linking relationship when obtaining the teachings disclosed in the embodiments of the invention. Therefore, related descriptions are omitted.
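  • A minimal sketch of how such a record could be kept is given below, assuming the global page address linking table is held in volatile memory as a flat array indexed by logical page address, with an all-ones value marking an unlinked entry; the names g_page_link and link_page are illustrative.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TOTAL_PAGES 524288u
#define UNLINKED    0xFFFFFFFFu  /* marks a logical page that is not linked yet */

/* Global page address linking table: one entry per logical page address, holding
   the physical page address that the logical page currently links to. */
static uint32_t g_page_link[TOTAL_PAGES];

static void link_page(uint32_t log_page, uint32_t phy_page)
{
    g_page_link[log_page] = phy_page;  /* e.g. the third field (index 2) records 0 */
}

int main(void)
{
    memset(g_page_link, 0xFF, sizeof(g_page_link));  /* no logical page is linked yet          */
    link_page(2, 0);                                 /* LOG Page 2 -> PHY Page 0 (command C0)  */
    printf("LOG Page 2 links to PHY Page %u\n", g_page_link[2]);
    return 0;
}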
  • In addition, the processing unit 110 records usage information during accessing the pages. For example, the usage information comprises a valid page count table for recording valid page counts of the blocks, respectively. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the usage information comprises an invalid page count table for recording invalid page counts of the blocks, respectively. In practice, as each fully programmed block comprises a predetermined number of pages (e.g. 128 pages in this embodiment), the valid page count and the invalid page count of the same fully programmed block are complementary to each other.
  • According to this embodiment, the processing unit 110 records 1 in the first field of the valid page count table, in order to indicate that PHY BLK 0 contains 1 valid page (i.e. 1 page of useful data; or in other words, 1 page of valid data). Please note that the global page address linking table and the valid page count table can be stored in the volatile memory 120. In this way, the global page address linking table and the valid page count table can be updated easily during accessing the flash chips. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the global page address linking table and the valid page count table can be loaded from the volatile memory 120 and stored in one or more of the NV memory elements 140_0, 140_1, . . . , and 140_N before shutting down the memory apparatus 100. More particularly, the global page address linking table and the valid page count table can be stored in one or more link blocks of the NV memory elements 140_0, 140_1, . . . , and 140_N. In this way, the global page address linking table and the valid page count table can be preserved while the memory apparatus 100 shuts down. Each of the one or more link blocks is a particular block for preserving system information. While turning on the memory apparatus 100 next time, the global page address linking table and the valid page count table can be easily obtained from the link block(s).
  • Next, the host sends a command C1 to the memory apparatus 100 in order to write 4 sectors of data, DS4-DS7, into corresponding host addresses 0000512-0000515. Similarly, the processing unit 110 determines that the logical page addresses of the host addresses 0000512-0000515 are all 128 (i.e. all of the host addresses 0000512-0000515 belong to LOG Page 128, or comprise the logical page address 128), and the logical block addresses thereof are all 1 (i.e. all of the host addresses 0000512-0000515 further belong to LOG BLK 1, or comprise the logical block address 1). In addition, the data DS4-DS7 should be stored in the first, the second, the third, and the fourth sectors of a page, respectively. Since PHY Page 0 has been programmed, the processing unit 110 writes/programs the data DS4-DS7 into the first, the second, the third, and the fourth sectors of PHY Page 1 (which is the page subsequent to PHY Page 0), respectively. The processing unit 110 further records 1 in the 129th field of the global page address linking table shown in FIG. 9A, in order to indicate that LOG Page 128 links to PHY Page 1. In addition, the processing unit 110 records 2 in the first field of the valid page count table (i.e. the processing unit 110 updates the first field thereof with 2), in order to indicate that PHY BLK 0 contains 2 valid pages (i.e. 2 pages of valid data). That is, the processing unit 110 increases the valid page count of PHY BLK 0. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. In a situation where the valid page count table is replaced by the invalid page count table mentioned above, the processing unit 110 maintains a value of an invalid page count of PHY BLK 0.
  • Please note that the host addresses 0000512-0000515 and the host addresses 0000008-0000011 belong to different logical blocks (e.g. the host addresses 0000512-0000515 belong to LOG BLK 1, and the host addresses 0000008-0000011 belong to LOG BLK 0); however, these host addresses all link to the associated pages in the same physical block, and data corresponding to the host addresses 0000512-0000515 and data corresponding to the host addresses 0000008-0000011 are both programmed/written in the same physical block, i.e. PHY BLK 0 in this embodiment.
  • In the above situation, when a first set of host addresses (e.g. the host addresses 0000512-0000515) belong to a first logical block (e.g. LOG BLK 1) and a second set of host addresses (e.g. the host addresses 0000008-0000011) belong to a second logical block (e.g. LOG BLK 0), the processing unit 110 can program/write both the data corresponding to the first set of host addresses and the data corresponding to the second set of host addresses in the same physical block (e.g. PHY BLK 0). This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, when a first set of host addresses belong to a first logical block, the processing unit 110 can program/write a first portion and a second portion of the data corresponding to the first set of host addresses in different physical blocks, wherein the first portion and the second portion of the data do not overlap.
  • In this embodiment, the host then sends a command C2 to the memory apparatus 100 in order to write 4 sectors of data, DS8-DS11, into corresponding host addresses 0000004-0000007. Similarly, the processing unit 110 determines that the logical page addresses of the host addresses 0000004-0000007 are all 1 (i.e. all of the host addresses 0000004-0000007 belong to LOG Page 1, or comprise the logical page address 1), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000004-0000007 further belong to LOG BLK 0, or comprise the logical block address 0). In addition, the data DS8-DS11 should be stored in the first, the second, the third, and the fourth sectors of a page, respectively. Since PHY Page 1 has been programmed, the processing unit 110 writes/programs the data DS8-DS11 into the first, the second, the third, and the fourth sectors of PHY Page 2 (which is the page subsequent to PHY Page 1), respectively. The processing unit 110 further records 2 in the second field of the global page address linking table shown in FIG. 9A, in order to indicate that LOG Page 1 links to PHY Page 2. In addition, the processing unit 110 records 3 in the first field of the valid page count table (i.e. the processing unit 110 updates the first field thereof with 3), in order to indicate that PHY BLK 0 contains 3 valid pages (i.e. 3 pages of valid data). That is, the processing unit 110 increases the valid page count of PHY BLK 0. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. In the situation where the valid page count table is replaced by the invalid page count table mentioned above, the processing unit 110 maintains the value of the invalid page count of PHY BLK 0.
  • FIGS. 10A-10F respectively illustrate exemplary versions of the valid page count table of this embodiment. Referring to the left half of FIG. 10A first, the ranking of a field of the valid page count table represents a physical block address, and the content of this field represents an associated valid page count. For example, given that iPBLK and jPBLK are respectively the row number and the column number of the illustrative table location (iPBLK, jPBLK) of the valid page count table and iPBLK=0, 1, . . . , etc. and jPBLK=0, 1, . . . , etc. in this embodiment, the illustrative table location (iPBLK, jPBLK) corresponding to the (iPBLK*4+jPBLK)th field represents a physical block address (iPBLK*4+jPBLK). This is for illustrative purposes only, and is not meant to be a limitation of the present invention. For better comprehension, the valid page count table can be illustrated as a single column, as shown in the right half of FIG. 10A. Given that iPBLK is still the row number and iPBLK=0, 1, . . . , etc., within this 1-D array illustration of the valid page count table, the illustrative table location iPBLK corresponding to the iPBLKth field represents a physical block address (iPBLK). As a result of executing command C2 in this embodiment, the global page address linking table and the valid page count table are updated as shown in FIGS. 9A and 10A, respectively.
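  • The following C sketch models the valid page count table as a one-dimensional array, as in the right half of FIG. 10A, and shows how the two-dimensional illustration (iPBLK, jPBLK) maps onto it; the helper field_2d is illustrative and assumes 4 columns per row, as drawn in the figures.

#include <stdint.h>
#include <stdio.h>

#define TOTAL_BLOCKS 4096u

/* Valid page count table: the i-th field holds the number of valid pages
   currently stored in PHY BLK i. */
static uint16_t g_valid_pages[TOTAL_BLOCKS];

/* The 2-D illustration (iPBLK, jPBLK) maps onto the 1-D field iPBLK * 4 + jPBLK. */
static uint16_t *field_2d(unsigned int i, unsigned int j)
{
    return &g_valid_pages[i * 4u + j];
}

int main(void)
{
    g_valid_pages[0] = 3;  /* PHY BLK 0 holds 3 valid pages after command C2 */
    printf("field (0,0) = %u valid pages\n", (unsigned int)*field_2d(0, 0));
    return 0;
}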
  • Subsequently, the host sends a command C3 to the memory apparatus 100 in order to write/update 4 sectors of data, DS0′-DS3′, into corresponding host addresses 0000008-0000011. Similarly, the processing unit 110 determines that the logical page addresses of the host addresses 0000008-0000011 are all 2 (i.e. all of the host addresses 0000008-0000011 belong to LOG Page 2, or comprise the logical page address 2), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000008-0000011 further belong to LOG BLK 0, or comprise the logical block address 0). In addition, the data DS0′-DS3′ should be stored in the first, the second, the third, and the fourth sectors of a page, respectively. Since PHY Page 2 has been programmed, the processing unit 110 writes/programs the data DS0′-DS3′ into the first, the second, the third, and the fourth sectors of PHY Page 3 (which is the page subsequent to PHY Page 2), respectively. The processing unit 110 further records/updates 3 in the third field of the global page address linking table shown in FIG. 9B, in order to indicate that LOG Page 2 links to PHY Page 3 now. In addition, the processing unit 110 still records 3 in the first field of the valid page count table shown in FIG. 10B, in order to indicate that PHY BLK 0 still contains 3 valid pages. That is, the processing unit 110 maintains the value 3 of the valid page count of PHY BLK 0. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. In the situation where the valid page count table is replaced by the invalid page count table mentioned above, the processing unit 110 increases the invalid page count of PHY BLK 0.
  • Although 4 pages, PHY Pages 0-3, have been programmed in PHY BLK 0, only 3 physical pages, PHY Pages 1-3, contain valid data. Since data of LOG Page 2 has been updated, PHY Page 0 does not contain valid data and can be deemed as an invalid page containing invalid data. As a result of executing command C3, the global page address linking table and the valid page count table are updated as shown in FIGS. 9B and 10B, respectively.
  • In this embodiment, referring to FIGS. 9C and 10C, assume that after several writing/programming operations are further performed, all pages of the PHY BLK 0 have been programmed, and the valid page count of the PHY BLK 0 is 100. The host sends a command C4 to the memory apparatus 100 in order to write/update 4 sectors of data, DS0″-DS3″, into corresponding host addresses 0000008-0000011. Similarly, the processing unit 110 determines that the logical page addresses of the host addresses 0000008-0000011 are all 2 (i.e. all of the host addresses 0000008-0000011 belong to LOG Page 2, or comprise the logical page address 2), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000008-0000011 further belong to LOG BLK 0, or comprise the logical block address 0). In addition, the data DS0″-DS3″ should be stored in the first, the second, the third, and the fourth sectors of a page, respectively. Since all pages of the PHY BLK 0 have been programmed, the processing unit 110 writes/programs the data DS0″-DS3″ into the first, the second, the third, and the fourth sectors of PHY Page 128 (which is the page subsequent to PHY Page 127), respectively. The processing unit 110 further records/updates 128 in the third field of the global page address linking table shown in FIG. 9D, in order to indicate that LOG Page 2 links to PHY Page 128 now. Here, PHY Page 3 does not contain valid data and can be deemed as an invalid page containing invalid data. In addition, the processing unit 110 records 1 in the second field of the valid page count table to indicate that PHY BLK 1 contains 1 valid page (i.e. 1 page of valid data), and records/updates 99 in the first field of the valid page count table to indicate that PHY BLK 0 contains 99 valid pages (i.e. 99 pages of valid data) now. That is, the processing unit 110 decreases the valid page count of PHY BLK 0. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. In the situation where the valid page count table is replaced by the invalid page count table mentioned above, the processing unit 110 increases the invalid page count of PHY BLK 0.
  • As a result of executing command C4, the global page address linking table and the valid page count table are updated as shown in FIGS. 9D and 10D, respectively.
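  • The handling of commands C0 through C4 can be summarized by the following C sketch, which allocates physical pages sequentially, updates the linking table, and adjusts the valid page counts of the old and new physical blocks on every page write; it is a simplified model under the assumptions of this embodiment (128 pages per block, one open block at a time) and does not reproduce the exact counts of FIGS. 9A-9D and 10A-10D.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGES_PER_BLOCK 128u
#define TOTAL_BLOCKS   4096u
#define TOTAL_PAGES    (TOTAL_BLOCKS * PAGES_PER_BLOCK)
#define UNLINKED       0xFFFFFFFFu

static uint32_t g_page_link[TOTAL_PAGES];     /* logical page -> physical page       */
static uint16_t g_valid_pages[TOTAL_BLOCKS];  /* valid page count of each PHY BLK    */
static uint32_t g_next_phy_page = 0;          /* next free physical page to program  */

/* Program one logical page: take the next free physical page, then update the
   linking table and the valid page counts of the blocks involved. */
static void write_logical_page(uint32_t log_page)
{
    uint32_t old_phy = g_page_link[log_page];
    uint32_t new_phy = g_next_phy_page++;          /* sequential page allocation             */

    if (old_phy != UNLINKED)                       /* the old copy becomes invalid           */
        g_valid_pages[old_phy / PAGES_PER_BLOCK]--;

    g_page_link[log_page] = new_phy;               /* the logical page links to the new page */
    g_valid_pages[new_phy / PAGES_PER_BLOCK]++;    /* the new block gains one valid page     */
}

int main(void)
{
    memset(g_page_link, 0xFF, sizeof(g_page_link));

    for (uint32_t i = 0; i < PAGES_PER_BLOCK; i++) /* fill PHY BLK 0 with 128 page writes */
        write_logical_page(i);

    write_logical_page(2);                         /* like command C4: update LOG Page 2  */
    printf("LOG Page 2 -> PHY Page %u, PHY BLK 0 holds %u valid pages\n",
           (unsigned int)g_page_link[2], (unsigned int)g_valid_pages[0]);
    return 0;
}

With the invalid page count table variation instead, the decrement of the old block would simply become an increment of its invalid page count, consistent with the complementary relationship noted above.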
  • Next, the host sends a command C5 to the memory apparatus 100 in order to read 4 sectors of data corresponding to host addresses 0000008-0000011. The processing unit 110 parses the command C5 to execute the reading operation. The processing unit 110 transfers the host addresses 0000008-0000011 into logical addresses. The processing unit 110 divides the host address 0000008 by the number of sectors of a page, i.e. 4 in this embodiment, and obtains a quotient 2 and a remainder 0. The quotient 2 means that the logical page address thereof is 2, where the logical page indicated by the logical page address 2 is LOG Page 2. In addition, the remainder 0 means that the requested data should have been stored in the first sector of a page. Similarly, the processing unit 110 determines that the logical page addresses of the host addresses 0000009, 0000010, and 0000011 are all 2 (i.e. all of the host addresses 0000009, 0000010, and 0000011 belong to LOG Page 2, or comprise the logical page address 2), and the logical block addresses thereof are all 0 (i.e. all of the host addresses 0000009, 0000010, and 0000011 further belong to LOG BLK 0, or comprise the logical block address 0). In addition, the data corresponding to host addresses 0000008-0000011 should have been stored in the first, the second, the third, and the fourth sectors of a page, respectively. The processing unit 110 reads the third field of the global page address linking table and obtains 128, which indicates that the data corresponding to LOG Page 2 is stored in PHY Page 128. The processing unit 110 reads PHY Page 128 to obtain data DS0″-DS3″, and sends the data to the host.
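  • A sketch of the corresponding read-path lookup is given below, assuming 4 sectors per page and a flat linking table in volatile memory; the function name lookup is illustrative.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTORS_PER_PAGE 4u
#define TOTAL_PAGES      524288u
#define UNLINKED         0xFFFFFFFFu

static uint32_t g_page_link[TOTAL_PAGES];  /* logical page -> physical page */

/* Translate a host sector address into the physical page to read and the sector
   offset inside that page, as done when parsing command C5. */
static int lookup(uint32_t host_addr, uint32_t *phy_page, uint32_t *sector)
{
    uint32_t log_page = host_addr / SECTORS_PER_PAGE;
    if (g_page_link[log_page] == UNLINKED)
        return -1;                         /* nothing has been written at this address */
    *phy_page = g_page_link[log_page];
    *sector   = host_addr % SECTORS_PER_PAGE;
    return 0;
}

int main(void)
{
    uint32_t pp, sec;
    memset(g_page_link, 0xFF, sizeof(g_page_link));
    g_page_link[2] = 128;                  /* LOG Page 2 links to PHY Page 128 */
    if (lookup(8, &pp, &sec) == 0)
        printf("host 8 -> PHY Page %u, sector %u\n", pp, sec);
    return 0;
}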
  • In this embodiment, assume that after a lot of writing/programming operations are further performed, all blocks of the data region (e.g. PHY BLKs 0-4093) have been fully programmed, and the spare region comprises PHY BLKs 4094 and 4095, where the valid page count table is illustrated in FIG. 10E. Then, the host sends a command C6 to the memory apparatus 100 in order to write 4 sectors of data, DS12-DS15. The processing unit 110 pops a physical block from the spare region, such as PHY BLK 4094, for writing data DS12-DS15. In general, it is suggested to maintain a sufficient block count of the spare region. For example, the minimal block count should always be greater than zero. In another example, the minimal block count should be greater than zero for most of the time, where the minimal block count can temporarily reach zero as long as the operations of the memory apparatus 100 are not hindered.
  • Assuming that maintaining a sufficient block count of the spare region is required in this embodiment, in a situation where the block count of the spare region is (or will be) less than a predetermined value (e.g. the predetermined value is 2), the processing unit 110 has to erase a physical block in the data region, in order to push this erased physical block into the spare region. The processing unit 110 searches the valid page count table and finds out that PHY BLK 2 has no valid data since the valid page count of PHY BLK 2 is 0. Since PHY BLK 2 has the least valid page count, the processing unit 110 erases PHY BLK 2 and then pushes the erased PHY BLK 2 into the spare region. Thus, the spare region comprises PHY BLKs 2 and 4095 now. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, when the valid page count of PHY BLK 2 decreases to zero, the processing unit 110 can erase PHY BLK 2 immediately.
  • According to this embodiment, assume that after several writing/programming operations are further performed, all pages of the PHY BLK 4094 have been programmed, where the valid page count table is illustrated in FIG. 10F. Then, the host sends a command C7 to the memory apparatus 100 in order to write 4 sectors of data, DS16-DS19. The processing unit 110 pops a physical block from the spare region, such as PHY BLK 4095, for writing data DS16-DS19.
  • Similarly, when it is detected that the block count of the spare region is (or will be) less than the predetermined value, the processing unit 110 has to erase at least a physical block in the data region in order to push the physical block(s) into the spare region. The processing unit 110 of this embodiment searches the valid page count table shown in FIG. 10F and finds out that PHY BLK 0 has 40 pages of valid data and PHY BLK 1 has 50 pages of valid data, where PHY BLKs 0 and 1 have the least valid page counts among others. In this embodiment, the processing unit 110 moves the valid data of PHY BLKs 0 and 1 into PHY BLK 2, and updates the global page address linking table to reflect the movement of the valid data. In other words, the processing unit 110 reads the valid data in PHY BLKs 0 and 1, programs/writes the valid data into PHY BLK 2, and links the logical page addresses of the valid data to the physical pages programmed/written with the valid data, correspondingly. After moving the valid data, the processing unit 110 erases PHY BLKs 0 and 1, and pushes the erased PHY BLKs 0 and 1 into the spare region.
  • In this embodiment, when it is detected that the block count of the spare region is less than the predetermined value, the processing unit 110 typically searches the valid page count table to find one or more fully programmed blocks having the least valid page count(s), and erases the one or more fully programmed blocks in order to push the one or more blocks into the spare region. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, in a situation where the valid page count table is replaced by the invalid page count table mentioned above, the processing unit 110 can search the invalid page count table to find one or more fully programmed blocks having the most invalid page count(s), and erase the one or more fully programmed blocks of this variation in order to push the one or more blocks into the spare region.
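  • One way to express this selection policy is sketched below in C, assuming a valid page count table and a flag per block marking whether the block is fully programmed; with an invalid page count table the same loop would simply search for the largest count instead. The names pick_victim and g_fully_programmed are illustrative.

#include <stdint.h>
#include <stdio.h>

#define TOTAL_BLOCKS 4096u

static uint16_t g_valid_pages[TOTAL_BLOCKS];       /* valid page count table           */
static uint8_t  g_fully_programmed[TOTAL_BLOCKS];  /* 1 if the block has no free page  */

/* Pick the fully programmed block with the least valid page count; with an invalid
   page count table, the same search would look for the largest count instead. */
static int pick_victim(void)
{
    int best = -1;
    for (unsigned int b = 0; b < TOTAL_BLOCKS; b++) {
        if (!g_fully_programmed[b])
            continue;
        if (best < 0 || g_valid_pages[b] < g_valid_pages[best])
            best = (int)b;
    }
    return best;  /* -1 when no fully programmed block exists */
}

int main(void)
{
    g_fully_programmed[0] = 1; g_valid_pages[0] = 40;   /* illustrative counts */
    g_fully_programmed[1] = 1; g_valid_pages[1] = 50;
    g_fully_programmed[5] = 1; g_valid_pages[5] = 120;
    printf("victim: PHY BLK %d\n", pick_victim());
    return 0;
}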
  • According to this embodiment, the processing unit 110 has popped one more physical block from the spare region into the data region, such as PHY BLK 2, for merging PHY BLKs 0 and 1. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the processing unit 110 can merge the one or more fully programmed blocks having the least valid page count(s) into a partially programmed block as long as there are enough free pages in the partially programmed block, where the free pages represent the pages that have not been programmed since the latest erasure of the block comprising these free pages. For example, the processing unit 110 can merge PHY BLKs 0 and 1 into the partially programmed block, such as PHY BLK 4095, as long as there are enough free pages in the partially programmed block for programming data DS16-DS19 and the valid data of PHY BLKs 0 and 1. In another example, the processing unit 110 can merge PHY BLK 0 into the partially programmed block, such as PHY BLK 4095, as long as there are enough free pages in the partially programmed block for programming data DS16-DS19 and the valid data of PHY BLK 0.
  • In practice, the processing unit 110 can program/write the data DS16-DS19 into PHY BLK 4095, and can further move the valid data of PHY BLKs 0 and 1 into PHY BLK 4095 as long as there are enough free pages in PHY BLK 4095 for programming data DS16-DS19 and the valid data. Certainly, the processing unit 110 of this variation updates the global page address linking table to reflect the movement of the valid data. Similarly, after moving the valid data, the processing unit 110 erases PHY BLKs 0 and 1, and pushes the erased PHY BLKs 0 and 1 into the spare region.
  • In other variations of this embodiment, the processing unit 110 can move valid data of N physical blocks into M physical blocks, wherein N and M are positive integers, and N is greater than M. Assume that there are K pages of valid data in total within the N physical blocks, where K is smaller than the number of free pages in total within the M physical blocks. The processing unit 110 can read the K pages of valid data from the N physical blocks, erase the N physical blocks, buffer the K pages of valid data into the volatile memory 120, and program/write the K pages of valid data into the M physical blocks. Please note that, in general, the N physical blocks and the M physical blocks may overlap (e.g. the N physical blocks and the M physical blocks both comprise at least a same physical block) or not overlap. In a situation where the N physical blocks and the M physical blocks do not overlap (i.e. none of the N physical blocks belongs to the M physical blocks, and vice versa), the K pages of valid data can be programmed/written into the M physical blocks without waiting for erasing the N physical blocks, and the processing unit 110 can generate (N-M) erased blocks eventually. Certainly, the processing unit 110 updates the global page address linking table to reflect the movement of the valid data.
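  • The following C sketch models such an N-to-M merge in RAM, assuming the N source blocks and the M destination blocks do not overlap; the block layout, the field names, and the omission of the global page address linking table update are simplifications for illustration only.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGES_PER_BLOCK 128u
#define PAGE_SIZE      2048u   /* 4 sectors x 512 bytes */

/* Minimal in-RAM model of a physical block, sufficient for this sketch. */
struct block {
    uint8_t  data[PAGES_PER_BLOCK][PAGE_SIZE];
    uint8_t  valid[PAGES_PER_BLOCK];  /* 1 if the page holds valid data */
    uint32_t programmed;              /* number of programmed pages     */
};

/* Copy the valid pages of the N source blocks into free pages of the M destination
   blocks (sources and destinations assumed not to overlap), then mark each drained
   source as erased; updating the global page address linking table is omitted here. */
static uint32_t merge_blocks(struct block *src, unsigned int n,
                             struct block *dst, unsigned int m)
{
    uint32_t moved = 0;
    unsigned int d = 0;
    for (unsigned int s = 0; s < n; s++) {
        for (unsigned int p = 0; p < PAGES_PER_BLOCK; p++) {
            if (!src[s].valid[p])
                continue;
            while (d < m && dst[d].programmed == PAGES_PER_BLOCK)
                d++;                                /* next destination with a free page */
            if (d == m)
                return moved;                       /* destinations are full             */
            uint32_t q = dst[d].programmed++;
            memcpy(dst[d].data[q], src[s].data[p], PAGE_SIZE);
            dst[d].valid[q] = 1;
            moved++;
        }
        src[s].programmed = 0;                      /* erase the drained source block */
        memset(src[s].valid, 0, sizeof(src[s].valid));
    }
    return moved;
}

int main(void)
{
    static struct block src[2], dst[1];             /* e.g. PHY BLKs 0 and 1 into PHY BLK 2 */
    src[0].programmed = src[1].programmed = PAGES_PER_BLOCK;
    for (unsigned int p = 0; p < 40; p++) src[0].valid[p] = 1;  /* 40 valid pages */
    for (unsigned int p = 0; p < 50; p++) src[1].valid[p] = 1;  /* 50 valid pages */
    printf("moved %u valid pages\n", merge_blocks(src, 2, dst, 1));
    return 0;
}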
  • Please note that, in other variations of this embodiment, the processing unit 110 can record the invalid page count of each physical block. For example, given that the page count of each physical block is 128, a specific physical block comprises 128 pages, within which 28 pages are invalid pages containing invalid data and 100 pages are valid pages containing valid data. That is, the invalid page count and the valid page count of the specific physical block are 28 and 100, respectively. The processing unit 110 can build an invalid page count table of the flash chips 0-3, and erase a particular physical block according to the invalid page count table. In some of the variations, when the processing unit 110 has to erase a physical block, the processing unit 110 can select a particular physical block having the most invalid pages according to the invalid page count table, and erase the particular physical block. In practice, before the particular physical block is erased, the valid data contained therein have to be moved to other blocks. For efficiently moving the valid data, the processing unit 110 can record one or more positions of the valid data in the particular block. More particularly, the processing unit 110 can build a valid-page-position table for each block in order to indicate the position(s) of one or more valid pages containing valid data within the block.
  • FIG. 11 illustrates a valid-page-position table of the flash chips 0-3 according to an embodiment of the present invention. The arrangement of the illustrative table locations of the valid-page-position table is similar to that of FIGS. 10B-10F together with the right half of FIG. 10A, and therefore, is not explained in detail for simplicity. In this embodiment, each field of the valid-page-position table indicates whether any valid-page-position corresponding to an associated physical block exists. For example, each field of this embodiment comprises 128 bits respectively corresponding to the pages of the associated physical block.
  • In particular, each field of the valid-page-position table indicates the valid-page-position(s) corresponding to the associated physical block. Each bit in a specific field indicates whether an associated page in the associated physical block is valid or invalid. For example, the first field of the valid-page-position table shown in FIG. 11 is recorded as “01011100101 . . . 11111”, which indicates the valid-page-position(s) within PHY BLK 0.
  • More specifically, the ranking of a specific bit in the specific field of the valid-page-position table shown in FIG. 11 represents a page address offset (or a relative page position) of an associated page within the associated physical block. For example, regarding the bits “01011100101 . . . 11111” in the first field of the valid-page-position table shown in FIG. 11, the least significant bit (LSB) “1” indicates that the first page of the PHY BLK 0 (i.e. the PHY Page 0) is a valid page containing valid data, and the most significant bit (MSB) “0” indicates that the last page of the PHY BLK 0 (i.e. the PHY Page 127) is an invalid page containing invalid data, where other bits between LSB and MSB indicate the valid/invalid state of the other physical pages of the associated physical block, respectively. Similar descriptions are not repeated for the other fields of the valid-page-position table shown in FIG. 11. As a result, the processing unit 110 can move valid data contained in the valid pages quickly according to the valid-page-position table.
  • In this embodiment, the LSB in the specific field indicates whether the first page of the associated physical block is a valid page or an invalid page, and the MSB in the specific field indicates whether the last page of the associated physical block is a valid page or an invalid page. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the LSB in the specific field indicates whether the last page of the associated physical block is a valid page or an invalid page, and the MSB in the specific field indicates whether the first page of the associated physical block is a valid page or an invalid page. For example, regarding the bits “01011100101 . . . 11111” in the first field, the LSB “1” indicates that the last page of the PHY BLK 0 (i.e. the PHY Page 127) is a valid page containing valid data, and the most significant bit (MSB) “0” indicates that the first page of the PHY BLK 0 (i.e. the PHY Page 0) is an invalid page containing invalid data, where other bits between LSB and MSB indicate the valid/invalid state of the other physical pages of the associated physical block, respectively.
  • In this embodiment, a logical value “1” of the specific bit indicates that the associated page is a valid page, while a logical value “0” of the specific bit indicates that the associated page is an invalid page. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the logical value “0” of the specific bit indicates that the associated page is a valid page, while the logical value “1” of the specific bit indicates that the associated page is an invalid page.
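  • A compact way to hold such a table is one bit per page, as sketched below in C, assuming the convention of the main embodiment (the least significant bit corresponds to the first page of the block, and a logical value "1" marks a valid page); the storage layout and the helper names are illustrative.

#include <stdint.h>
#include <stdio.h>

#define TOTAL_BLOCKS    4096u
#define PAGES_PER_BLOCK  128u

/* One 128-bit field per physical block: bit p of field b is 1 when page p of
   PHY BLK b is a valid page (LSB = first page, logical value 1 = valid). */
static uint64_t g_valid_pos[TOTAL_BLOCKS][2];

static void set_valid(unsigned int blk, unsigned int page, int valid)
{
    uint64_t mask = 1ull << (page & 63u);
    if (valid)
        g_valid_pos[blk][page >> 6] |= mask;
    else
        g_valid_pos[blk][page >> 6] &= ~mask;
}

static int is_valid(unsigned int blk, unsigned int page)
{
    return (int)((g_valid_pos[blk][page >> 6] >> (page & 63u)) & 1u);
}

int main(void)
{
    set_valid(0, 0, 1);    /* the first page of PHY BLK 0 (PHY Page 0) holds valid data */
    set_valid(0, 127, 0);  /* the last page of PHY BLK 0 (PHY Page 127) is invalid      */
    printf("page 0: %d, page 127: %d\n", is_valid(0, 0), is_valid(0, 127));
    return 0;
}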
  • In addition, the valid-page-position table can be stored in the volatile memory 120. In this way, the valid-page-position table can be updated easily during accessing the flash chips. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to a variation of this embodiment, the valid-page-position table can be loaded from the volatile memory 120 and stored in one or more of the NV memory elements 140_0, 140_1, . . . , and 140_N before shutting down the memory apparatus 100. More particularly, the valid-page-position table can be stored in one or more link blocks of the NV memory elements 140_0, 140_1, . . . , and 140_N. In this way, the valid-page-position table can be preserved while the memory apparatus 100 shuts down. While turning on the memory apparatus 100 next time, the valid-page-position table can be easily obtained from the link block(s).
  • In another embodiment, during accessing the memory apparatus 100, the valid-page-position table and the global page address linking table can be read from the volatile memory 120 and stored in the NV memory elements from time to time. For example, the valid-page-position table and the global page address linking table can be stored every predetermined time period (e.g. 2 seconds) or every predetermined number of accessing operations (e.g. 100 writing operations). When the memory apparatus 100 shuts down abnormally, the latest valid-page-position table and global page address linking table are not read from the volatile memory 120 and stored in the NV memory elements. Then, the memory apparatus 100 is turned on again. For building the valid-page-position table, the processing unit 110 can search the blocks that have been accessed after the latest updating of the valid-page-position table and the global page address linking table in the NV memory elements. The processing unit 110 searches the logical page addresses stored in each page of these blocks to build and update the global page address linking table. After that, the valid-page-position table can be built according to the updated global page address linking table.
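  • The periodic copy described above can be driven by a simple policy such as the one sketched below in C, where the thresholds (2 seconds, 100 writing operations) merely echo the examples given here, and flush_tables_to_nv stands for a hypothetical routine that would write both tables into the NV memory elements.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define CHECKPOINT_PERIOD_SEC 2.0   /* e.g. every 2 seconds             */
#define CHECKPOINT_WRITE_OPS  100u  /* or every 100 writing operations  */

static time_t   g_last_flush;
static uint32_t g_writes_since_flush;

/* Decide whether the valid-page-position table and the global page address linking
   table should be copied from volatile memory into the NV memory elements now. */
static int checkpoint_due(void)
{
    return difftime(time(NULL), g_last_flush) >= CHECKPOINT_PERIOD_SEC ||
           g_writes_since_flush >= CHECKPOINT_WRITE_OPS;
}

static void after_write(void)
{
    g_writes_since_flush++;
    if (checkpoint_due()) {
        /* flush_tables_to_nv();  -- hypothetical routine that writes both tables */
        g_last_flush = time(NULL);
        g_writes_since_flush = 0;
    }
}

int main(void)
{
    g_last_flush = time(NULL);
    for (int i = 0; i < 250; i++)  /* simulate 250 writing operations */
        after_write();
    printf("writes since the last checkpoint: %u\n", (unsigned int)g_writes_since_flush);
    return 0;
}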
  • In contrast to the related art, the method and apparatus of the present invention can greatly reduce the time required to build logical-to-physical page address linking table(s), such as the global page address linking table. Therefore, the present invention provides better performance than the related art.
  • It is another advantage of the present invention that the method and apparatus of the present invention can record the usage information during accessing the pages, and therefore can efficiently manage the usage of all blocks according to the usage information. As a result, the arrangement of the spare region and the data region can be optimized.
  • In addition, managing the flash memory on a page basis brings many advantages. For example, the speed of random writes is greatly improved, and the write amplification index can be greatly reduced. Without introducing side effects such as those of the related art, managing the flash memory on a page basis can be much simpler and more intuitive than managing the flash memory on a block basis when the present invention is applied in a real implementation.
  • Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims (6)

1. A method for managing a memory apparatus, the memory apparatus comprising at least one non-volatile (NV) memory element, each of which comprises a plurality of blocks, the method comprising:
receiving a first access command from a host;
analyzing the first access command to obtain a first host address;
linking the first host address to a physical block;
receiving a second access command from the host;
analyzing the second access command to obtain a second host address; and
linking the second host address to the physical block,
wherein a difference value of the first host address and the second host address is greater than a number of pages of the physical block.
2. The method of claim 1 further comprises:
analyzing the first access command to obtain a first data;
analyzing the second access command to obtain a second data;
storing the first data into the physical block; and
storing the second data into the physical block.
3. The method of claim 1, wherein the first host address is linked to at least a first page of the physical block, and the second host address is linked to at least a second page of the physical block.
4. A method for managing a memory apparatus, the memory apparatus comprising at least one non-volatile (NV) memory element, each of which comprises a plurality of blocks, the method comprising:
receiving a first access command from a host;
analyzing the first access command to obtain a first host address;
linking the first host address to at least a page of a first physical block;
receiving a second access command from the host;
analyzing the second access command to obtain a second host address; and
linking the second host address to at least a page of a second physical block that is different from the first physical block,
wherein a difference value of the first host address and the second host address is smaller than a number of pages of the physical block.
5. The method of claim 4, further comprises:
analyzing the first access command to obtain a first data;
analyzing the second access command to obtain a second data;
storing the first data into the first physical block; and
storing the second data into the second physical block.
6. The method of claim 5, wherein the first host address is linked to at least a first page of the first physical block, and the second host address is linked to at least a second page of the second physical block.
US13/604,654 2008-11-06 2012-09-06 Method for managing a memory apparatus Abandoned US20120331267A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/604,654 US20120331267A1 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11217308P 2008-11-06 2008-11-06
US14085008P 2008-12-24 2008-12-24
US12/471,462 US8285970B2 (en) 2008-11-06 2009-05-25 Method for managing a memory apparatus, and associated memory apparatus thereof
US13/604,654 US20120331267A1 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/471,462 Continuation US8285970B2 (en) 2008-11-06 2009-05-25 Method for managing a memory apparatus, and associated memory apparatus thereof

Publications (1)

Publication Number Publication Date
US20120331267A1 true US20120331267A1 (en) 2012-12-27

Family

ID=42132878

Family Applications (15)

Application Number Title Priority Date Filing Date
US12/471,462 Active 2031-05-02 US8285970B2 (en) 2008-11-06 2009-05-25 Method for managing a memory apparatus, and associated memory apparatus thereof
US12/471,413 Active 2030-07-23 US8219781B2 (en) 2008-11-06 2009-05-25 Method for managing a memory apparatus, and associated memory apparatus thereof
US13/466,147 Active US8799622B2 (en) 2008-11-06 2012-05-08 Method for managing a memory apparatus
US13/466,138 Abandoned US20120221782A1 (en) 2008-11-06 2012-05-08 Method for managing a memory apparatus, and associated memory apparatus thereof
US13/605,977 Active US8473713B2 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus
US13/604,644 Active US9037832B2 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus, and associated memory apparatus thereof
US13/604,654 Abandoned US20120331267A1 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus
US13/604,636 Active US8473712B2 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus, and associated memory apparatus thereof
US14/566,724 Abandoned US20150095562A1 (en) 2008-11-06 2014-12-11 Method for managing a memory apparatus
US15/642,295 Active US10482011B2 (en) 2008-11-06 2017-07-05 Method for managing a memory apparatus
US16/596,703 Active US10795811B2 (en) 2008-11-06 2019-10-08 Method for managing a memory apparatus
US16/888,836 Active US11074176B2 (en) 2008-11-06 2020-05-31 Method for managing a memory apparatus
US17/351,168 Active 2029-06-16 US11520697B2 (en) 2008-11-06 2021-06-17 Method for managing a memory apparatus
US17/975,565 Active US11748258B2 (en) 2008-11-06 2022-10-27 Method for managing a memory apparatus
US18/218,122 Pending US20230350799A1 (en) 2008-11-06 2023-07-05 Method for managing a memory apparatus

Family Applications Before (6)

Application Number Title Priority Date Filing Date
US12/471,462 Active 2031-05-02 US8285970B2 (en) 2008-11-06 2009-05-25 Method for managing a memory apparatus, and associated memory apparatus thereof
US12/471,413 Active 2030-07-23 US8219781B2 (en) 2008-11-06 2009-05-25 Method for managing a memory apparatus, and associated memory apparatus thereof
US13/466,147 Active US8799622B2 (en) 2008-11-06 2012-05-08 Method for managing a memory apparatus
US13/466,138 Abandoned US20120221782A1 (en) 2008-11-06 2012-05-08 Method for managing a memory apparatus, and associated memory apparatus thereof
US13/605,977 Active US8473713B2 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus
US13/604,644 Active US9037832B2 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus, and associated memory apparatus thereof

Family Applications After (8)

Application Number Title Priority Date Filing Date
US13/604,636 Active US8473712B2 (en) 2008-11-06 2012-09-06 Method for managing a memory apparatus, and associated memory apparatus thereof
US14/566,724 Abandoned US20150095562A1 (en) 2008-11-06 2014-12-11 Method for managing a memory apparatus
US15/642,295 Active US10482011B2 (en) 2008-11-06 2017-07-05 Method for managing a memory apparatus
US16/596,703 Active US10795811B2 (en) 2008-11-06 2019-10-08 Method for managing a memory apparatus
US16/888,836 Active US11074176B2 (en) 2008-11-06 2020-05-31 Method for managing a memory apparatus
US17/351,168 Active 2029-06-16 US11520697B2 (en) 2008-11-06 2021-06-17 Method for managing a memory apparatus
US17/975,565 Active US11748258B2 (en) 2008-11-06 2022-10-27 Method for managing a memory apparatus
US18/218,122 Pending US20230350799A1 (en) 2008-11-06 2023-07-05 Method for managing a memory apparatus

Country Status (4)

Country Link
US (15) US8285970B2 (en)
CN (7) CN101739352B (en)
TW (11) TW201621669A (en)
WO (2) WO2010051717A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120042146A1 (en) * 2009-04-27 2012-02-16 Gandhi Kamlesh Device and method for storage, retrieval, relocation, insertion or removal of data in storage units
US20160266793A1 (en) * 2015-03-12 2016-09-15 Kabushiki Kaisha Toshiba Memory system

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9384818B2 (en) 2005-04-21 2016-07-05 Violin Memory Memory power management
US8200887B2 (en) 2007-03-29 2012-06-12 Violin Memory, Inc. Memory management system and method
US9632870B2 (en) 2007-03-29 2017-04-25 Violin Memory, Inc. Memory system with multiple striping of raid groups and method for performing the same
US11010076B2 (en) 2007-03-29 2021-05-18 Violin Systems Llc Memory system with multiple striping of raid groups and method for performing the same
US8843691B2 (en) * 2008-06-25 2014-09-23 Stec, Inc. Prioritized erasure of data blocks in a flash storage device
US8285970B2 (en) * 2008-11-06 2012-10-09 Silicon Motion Inc. Method for managing a memory apparatus, and associated memory apparatus thereof
TWI410976B (en) * 2008-11-18 2013-10-01 Lite On It Corp Reliability test method for solid storage medium
WO2010144587A2 (en) * 2009-06-12 2010-12-16 Violin Memory, Inc. Memory system having persistent garbage collection
US8140712B2 (en) * 2009-07-17 2012-03-20 Sandforce, Inc. System, method, and computer program product for inserting a gap in information sent from a drive to a host device
US8516166B2 (en) 2009-07-20 2013-08-20 Lsi Corporation System, method, and computer program product for reducing a rate of data transfer to at least a portion of memory
US8108737B2 (en) * 2009-10-05 2012-01-31 Sandforce, Inc. System, method, and computer program product for sending failure information from a serial ATA (SATA) solid state drive (SSD) to a host device
US9104546B2 (en) * 2010-05-24 2015-08-11 Silicon Motion Inc. Method for performing block management using dynamic threshold, and associated memory device and controller thereof
US8484420B2 (en) 2010-11-30 2013-07-09 International Business Machines Corporation Global and local counts for efficient memory page pinning in a multiprocessor system
TWI587136B (en) * 2011-05-06 2017-06-11 創惟科技股份有限公司 Flash memory system and managing and collection methods for flash memory with invalid page information thereof
JP5917016B2 (en) * 2011-05-10 2016-05-11 キヤノン株式会社 Information processing apparatus, control method thereof, and control program
US9081663B2 (en) * 2011-11-18 2015-07-14 Stec, Inc. Optimized garbage collection algorithm to improve solid state drive reliability
CN103136215A (en) * 2011-11-24 2013-06-05 腾讯科技(深圳)有限公司 Data read-write method and device of storage system
JP5907739B2 (en) * 2012-01-26 2016-04-26 株式会社日立製作所 Nonvolatile memory device
CN103226517A (en) * 2012-01-31 2013-07-31 上海华虹集成电路有限责任公司 Method for dynamically ordering physical blocks of Nandflash according to quantity of invalid pages
JP6072428B2 (en) * 2012-05-01 2017-02-01 テセラ アドバンスト テクノロジーズ インコーポレーテッド Control device, storage device, and storage control method
US9116792B2 (en) * 2012-05-18 2015-08-25 Silicon Motion, Inc. Data storage device and method for flash block management
US9244833B2 (en) 2012-05-30 2016-01-26 Silicon Motion, Inc. Data-storage device and flash memory control method
TWI448891B (en) * 2012-09-06 2014-08-11 Silicon Motion Inc Data storage device and flash memory control method
US9384125B2 (en) 2012-06-18 2016-07-05 Silicon Motion Inc. Method for accessing flash memory having pages used for data backup and associated memory device
TWI492052B (en) * 2012-06-18 2015-07-11 Silicon Motion Inc Method for accessing flash memory and associated memory device
JP6030485B2 (en) * 2013-03-21 2016-11-24 日立オートモティブシステムズ株式会社 Electronic control unit
JP2015001909A (en) * 2013-06-17 2015-01-05 富士通株式会社 Information processor, control circuit, control program, and control method
CN103559141A (en) * 2013-11-01 2014-02-05 北京昆腾微电子有限公司 Management method and device for nonvolatile memory (NVM)
US9547510B2 (en) * 2013-12-10 2017-01-17 Vmware, Inc. Tracking guest memory characteristics for memory scheduling
US9529609B2 (en) 2013-12-10 2016-12-27 Vmware, Inc. Tracking guest memory characteristics for memory scheduling
US9747961B2 (en) * 2014-09-03 2017-08-29 Micron Technology, Inc. Division operations in memory
US9542118B1 (en) * 2014-09-09 2017-01-10 Radian Memory Systems, Inc. Expositive flash memory control
KR20160075165A (en) * 2014-12-19 2016-06-29 에스케이하이닉스 주식회사 Memory system and operating method of memory system
US20160179399A1 (en) * 2014-12-23 2016-06-23 Sandisk Technologies Inc. System and Method for Selecting Blocks for Garbage Collection Based on Block Health
KR102391678B1 (en) 2015-01-22 2022-04-29 삼성전자주식회사 Storage device and sustained status accelerating method thereof
TWI545433B (en) * 2015-03-04 2016-08-11 慧榮科技股份有限公司 Methods for maintaining a storage mapping table and apparatuses using the same
TWI574274B (en) * 2015-05-07 2017-03-11 慧榮科技股份有限公司 Methods for accessing data in a circular block mode and apparatuses using the same
KR20170060206A (en) * 2015-11-23 2017-06-01 에스케이하이닉스 주식회사 Memory system and operating method of memory system
US10101925B2 (en) * 2015-12-23 2018-10-16 Toshiba Memory Corporation Data invalidation acceleration through approximation of valid data counts
TWI570559B (en) * 2015-12-28 2017-02-11 點序科技股份有限公司 Flash memory and accessing method thereof
US10620879B2 (en) * 2017-05-17 2020-04-14 Macronix International Co., Ltd. Write-while-read access method for a memory device
US10970226B2 (en) * 2017-10-06 2021-04-06 Silicon Motion, Inc. Method for performing access management in a memory device, associated memory device and controller thereof, and associated electronic device
TWI659304B (en) * 2017-10-20 2019-05-11 慧榮科技股份有限公司 Method for accessing flash memory module and associated flash memory controller and electronic device
CN108153681A (en) * 2017-11-29 2018-06-12 深圳忆联信息系统有限公司 A kind of large capacity solid-state hard disc mapping table compression method
TWI686698B (en) * 2018-05-24 2020-03-01 大陸商深圳大心電子科技有限公司 Logical-to-physical table updating method and storage controller
US10936199B2 (en) * 2018-07-17 2021-03-02 Silicon Motion, Inc. Flash controllers, methods, and corresponding storage devices capable of rapidly/fast generating or updating contents of valid page count table
KR20200030245A (en) 2018-09-12 2020-03-20 에스케이하이닉스 주식회사 Apparatus and method for managing valid data in memory system
US20200151119A1 (en) * 2018-11-08 2020-05-14 Silicon Motion, Inc. Method and apparatus for performing access control between host device and memory device
US11061598B2 (en) * 2019-03-25 2021-07-13 Western Digital Technologies, Inc. Optimized handling of multiple copies in storage management

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6000006A (en) * 1997-08-25 1999-12-07 Bit Microsystems, Inc. Unified re-map and cache-index table with dual write-counters for wear-leveling of non-volatile flash RAM mass storage
US6226202B1 (en) * 1996-07-19 2001-05-01 Tokyo Electron Device Limited Flash memory card including CIS information
US6243789B1 (en) * 1995-12-26 2001-06-05 Intel Corporation Method and apparatus for executing a program stored in nonvolatile memory

Family Cites Families (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5937425A (en) * 1997-10-16 1999-08-10 M-Systems Flash Disk Pioneers Ltd. Flash file system optimized for page-mode flash technologies
KR100319598B1 (en) * 1998-03-18 2002-04-06 김영환 Flash memory array access method and device
US7934074B2 (en) * 1999-08-04 2011-04-26 Super Talent Electronics Flash module with plane-interleaved sequential writes to restricted-write flash chips
US7318117B2 (en) * 2004-02-26 2008-01-08 Super Talent Electronics, Inc. Managing flash memory including recycling obsolete sectors
US6449689B1 (en) * 1999-08-31 2002-09-10 International Business Machines Corporation System and method for efficiently storing compressed data on a hard disk drive
US6938144B2 (en) * 2001-03-22 2005-08-30 Matsushita Electric Industrial Co., Ltd. Address conversion unit for memory device
JP2003203007A (en) * 2002-01-07 2003-07-18 Nec Corp Nonvolatile area control method for memory of mobile phone
US20060143365A1 (en) * 2002-06-19 2006-06-29 Tokyo Electron Device Limited Memory device, memory managing method and program
CN100421181C (en) 2002-07-30 2008-09-24 希旺科技股份有限公司 Rewritable non-volatile memory system and method
KR101122511B1 (en) * 2002-10-28 2012-03-15 쌘디스크 코포레이션 Automated wear leveling in non-volatile storage systems
JP2004296014A (en) * 2003-03-27 2004-10-21 Mitsubishi Electric Corp Method for leveling erase frequency of nonvolatile memory
CN1311366C (en) * 2003-05-22 2007-04-18 群联电子股份有限公司 Parallel double-track using method for quick flashing storage
CN1277182C (en) * 2003-09-04 2006-09-27 台达电子工业股份有限公司 Programmable logic controller with auxiliary processing unit
JP3912355B2 (en) * 2003-10-14 2007-05-09 ソニー株式会社 Data management device, data management method, nonvolatile memory, storage device having nonvolatile memory, and data processing system
JP2005122529A (en) 2003-10-17 2005-05-12 Matsushita Electric Ind Co Ltd Semiconductor memory device
KR20060134011A (en) * 2003-12-30 2006-12-27 쌘디스크 코포레이션 Non-volatile memory and method with memory planes alignment
KR100526188B1 (en) * 2003-12-30 2005-11-04 삼성전자주식회사 Method for address mapping and managing mapping information, and flash memory thereof
US7631138B2 (en) * 2003-12-30 2009-12-08 Sandisk Corporation Adaptive mode switching of flash memory address mapping based on host usage characteristics
TW200523946A (en) 2004-01-13 2005-07-16 Ali Corp Method for accessing a nonvolatile memory
JP4701618B2 (en) * 2004-02-23 2011-06-15 ソニー株式会社 Information processing apparatus, information processing method, and computer program
US7680977B2 (en) * 2004-02-26 2010-03-16 Super Talent Electronics, Inc. Page and block management algorithm for NAND flash
EP2977906A1 (en) * 2004-04-28 2016-01-27 Panasonic Corporation Nonvolatile storage device and data write method
CN100437517C (en) * 2004-04-28 2008-11-26 松下电器产业株式会社 Nonvolatile storage device and data write method
CN101194238B (en) * 2005-06-24 2010-05-19 松下电器产业株式会社 Memory controller, nonvolatile storage device, nonvolatile storage system, and data writing method
ATE493707T1 (en) * 2005-08-03 2011-01-15 Sandisk Corp NON-VOLATILE MEMORY WITH BLOCK MANAGEMENT
US8595434B2 (en) * 2005-08-25 2013-11-26 Silicon Image, Inc. Smart scalable storage switch architecture
CN100573476C (en) 2005-09-25 2009-12-23 深圳市朗科科技股份有限公司 Flash memory medium data management method
US20070083697A1 (en) * 2005-10-07 2007-04-12 Microsoft Corporation Flash memory management
CN100520734C (en) * 2005-11-18 2009-07-29 凌阳科技股份有限公司 Control apparatus and method of flash memory
JP2007199846A (en) 2006-01-24 2007-08-09 Toshiba Corp Memory control device and memory control method
US8756399B2 (en) * 2006-01-25 2014-06-17 Seagate Technology Llc Mutable association of a set of logical block addresses to a band of physical storage blocks
US20070300130A1 (en) * 2006-05-17 2007-12-27 Sandisk Corporation Method of Error Correction Coding for Multiple-Sector Pages in Flash Memory Devices
JP2007310823A (en) 2006-05-22 2007-11-29 Matsushita Electric Ind Co Ltd Memory card, memory card processing method, control program and integrated circuit
CN100583293C (en) 2006-08-09 2010-01-20 安国国际科技股份有限公司 Memory device and its reading and writing method
WO2008042592A2 (en) * 2006-09-29 2008-04-10 Sandisk Corporation Phased garbage collection
US7695274B2 (en) * 2006-11-07 2010-04-13 Caruso Ii Augustine Lighter with built-in clip
JP2008146253A (en) * 2006-12-07 2008-06-26 Sony Corp Storage device, computer system, and data processing method for storage device
US7958331B2 (en) 2006-12-13 2011-06-07 Seagate Technology Llc Storage device with opportunistic address space
WO2008082999A2 (en) * 2006-12-26 2008-07-10 Sandisk Corporation Configuration of host lba interface with flash memory
TWM317043U (en) * 2006-12-27 2007-08-11 Genesys Logic Inc Cache device for the flash memory address translation layer
US7953954B2 (en) * 2007-01-26 2011-05-31 Micron Technology, Inc. Flash storage partial page caching
JP2008210057A (en) * 2007-02-23 2008-09-11 Hitachi Ltd Storage system and management method thereof
US9009440B2 (en) * 2007-11-12 2015-04-14 Lsi Corporation Adjustment of data storage capacity provided by a storage system
US8122179B2 (en) * 2007-12-14 2012-02-21 Silicon Motion, Inc. Memory apparatus and method of evenly using the blocks of a flash memory
US7949851B2 (en) * 2007-12-28 2011-05-24 Spansion Llc Translation management of logical block addresses and physical block addresses
US7941692B2 (en) 2007-12-31 2011-05-10 Intel Corporation NAND power fail recovery
US8417893B2 (en) * 2008-02-04 2013-04-09 Apple Inc. Memory mapping techniques
WO2009107506A1 (en) * 2008-02-29 2009-09-03 Kabushiki Kaisha Toshiba Memory system
CN101251788A (en) * 2008-03-07 2008-08-27 威盛电子股份有限公司 Storage unit management method and system
US8069299B2 (en) * 2008-06-30 2011-11-29 Intel Corporation Banded indirection for nonvolatile memory devices
US8285970B2 (en) * 2008-11-06 2012-10-09 Silicon Motion Inc. Method for managing a memory apparatus, and associated memory apparatus thereof
TWI450271B (en) * 2009-09-02 2014-08-21 Silicon Motion Inc Method for managing a plurality of blocks of a flash memory, and associated memory device and controller thereof
KR102233400B1 (en) * 2017-05-29 2021-03-26 에스케이하이닉스 주식회사 Data storage device and operating method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6243789B1 (en) * 1995-12-26 2001-06-05 Intel Corporation Method and apparatus for executing a program stored in nonvolatile memory
US6226202B1 (en) * 1996-07-19 2001-05-01 Tokyo Electron Device Limited Flash memory card including CIS information
US6000006A (en) * 1997-08-25 1999-12-07 Bit Microsystems, Inc. Unified re-map and cache-index table with dual write-counters for wear-leveling of non-volatile flash RAM mass storage

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120042146A1 (en) * 2009-04-27 2012-02-16 Gandhi Kamlesh Device and method for storage, retrieval, relocation, insertion or removal of data in storage units
US20160266793A1 (en) * 2015-03-12 2016-09-15 Kabushiki Kaisha Toshiba Memory system
US10223001B2 (en) * 2015-03-12 2019-03-05 Toshiba Memory Corporation Memory system

Also Published As

Publication number Publication date
TW201337558A (en) 2013-09-16
CN111414315A (en) 2020-07-14
TW201019109A (en) 2010-05-16
TWI657338B (en) 2019-04-21
US8285970B2 (en) 2012-10-09
TW201527972A (en) 2015-07-16
CN105975399A (en) 2016-09-28
WO2010051717A1 (en) 2010-05-14
US9037832B2 (en) 2015-05-19
WO2010051718A1 (en) 2010-05-14
TW201805814A (en) 2018-02-16
CN110806985B (en) 2023-11-21
TWI829251B (en) 2024-01-11
TWI775122B (en) 2022-08-21
US20230048550A1 (en) 2023-02-16
TW202042067A (en) 2020-11-16
US20120331263A1 (en) 2012-12-27
TW201621669A (en) 2016-06-16
CN103455432B (en) 2016-09-28
US20100115188A1 (en) 2010-05-06
US20100115189A1 (en) 2010-05-06
US20120331216A1 (en) 2012-12-27
TWI409632B (en) 2013-09-21
CN101739352A (en) 2010-06-16
CN111414315B (en) 2023-11-21
TW202242657A (en) 2022-11-01
US20210311870A1 (en) 2021-10-07
TWI494760B (en) 2015-08-01
TW201019107A (en) 2010-05-16
US20170300409A1 (en) 2017-10-19
CN110457231A (en) 2019-11-15
US8219781B2 (en) 2012-07-10
US8799622B2 (en) 2014-08-05
TWI703439B (en) 2020-09-01
CN101739351A (en) 2010-06-16
TW201346547A (en) 2013-11-16
US10482011B2 (en) 2019-11-19
CN105975399B (en) 2020-04-10
US8473712B2 (en) 2013-06-25
CN110806985A (en) 2020-02-18
US11748258B2 (en) 2023-09-05
CN110457231B (en) 2023-10-24
US20120331215A1 (en) 2012-12-27
CN103455432A (en) 2013-12-18
TW201714092A (en) 2017-04-16
US20200042437A1 (en) 2020-02-06
CN101739352B (en) 2013-09-18
US20200293442A1 (en) 2020-09-17
TWI536164B (en) 2016-06-01
TW201923589A (en) 2019-06-16
US20230350799A1 (en) 2023-11-02
US20150095562A1 (en) 2015-04-02
TWI459195B (en) 2014-11-01
US10795811B2 (en) 2020-10-06
US11074176B2 (en) 2021-07-27
US20120221829A1 (en) 2012-08-30
US8473713B2 (en) 2013-06-25
US11520697B2 (en) 2022-12-06
US20120221782A1 (en) 2012-08-30

Similar Documents

Publication Publication Date Title
US11074176B2 (en) Method for managing a memory apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: SILICON MOTION INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, TSAI-CHENG;LEE, CHUN-KUN;REEL/FRAME:028903/0847

Effective date: 20090521

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION