US9342310B2 - MFENCE and LFENCE micro-architectural implementation method and system - Google Patents


Info

Publication number
US9342310B2
Authority
US
United States
Prior art keywords
instruction
mfence
processor
lfence
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US13/838,229
Other versions
US20130205117A1 (en)
Inventor
Salvador Palanca
Stephen A. Fischer
Subramaniam Maiyuran
Shekoufeh Qawami
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US13/838,229 priority Critical patent/US9342310B2/en
Priority to US13/942,660 priority patent/US8959314B2/en
Publication of US20130205117A1 publication Critical patent/US20130205117A1/en
Application granted
Publication of US9342310B2 publication Critical patent/US9342310B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30145 Instruction analysis, e.g. decoding, instruction word fields
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/3004 Arrangements for executing specific machine instructions to perform operations on memory
    • G06F9/30043 LOAD or STORE instructions; Clear instruction
    • G06F9/30047 Prefetch instructions; cache control instructions
    • G06F9/30076 Arrangements for executing specific machine instructions to perform miscellaneous control operations, e.g. NOP
    • G06F9/30087 Synchronisation or serialisation instructions
    • G06F9/30098 Register arrangements
    • G06F9/3012 Organisation of register space, e.g. banked or distributed register file
    • G06F9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3802 Instruction prefetching
    • G06F9/3808 Instruction prefetching for instruction reuse, e.g. trace cache, branch target cache
    • G06F9/3812 Instruction prefetching with instruction modification, e.g. store into instruction stream
    • G06F9/3824 Operand accessing
    • G06F9/3834 Maintaining memory consistency
    • G06F9/3836 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3854 Instruction completion, e.g. retiring, committing or graduating
    • G06F9/3855
    • G06F9/3856 Reordering of instructions, e.g. using queues or age tags
    • G06F9/3857
    • G06F9/3858 Result writeback, i.e. updating the architectural state or memory
    • G06F9/3867 Concurrent instruction execution using instruction pipelines
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45583 Memory management, e.g. access or allocation
    • G06F2009/45591 Monitoring or debugging support

Definitions

  • the present invention relates in general to computer architecture and in particular to a method and system of organizing memory access.
  • Video, graphics, communications and multimedia applications require high throughput processing power. As consumers increasingly demand these applications, microprocessors have been tailored to accelerate multimedia and communications applications.
  • Media extensions, such as the Intel MMX™ technology, introduced an architecture and instructions to enhance the performance of advanced media and communications applications, while preserving compatibility with existing software and operating systems.
  • the new instructions operated in parallel on multiple data elements packed into 64-bit quantities.
  • the instructions accelerated the performance of applications with computationally intensive algorithms that performed localized, reoccurring operations on small native data.
  • These multimedia applications included: motion video, combined graphics with video, image processing, audio synthesis, speech synthesis and compression, telephony, video conferencing, and two and three-dimensional graphics applications.
  • the load fencing process and system receives a load fencing instruction that separates memory load instructions into older loads and newer loads.
  • a load buffer within the memory ordering unit is allocated to the instruction.
  • the load instructions newer than the load fencing instruction are stalled.
  • the older load instructions are gradually retired. When all older loads from the memory subsystem are retired, the load fencing instruction is dispatched.
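The allocation/stall/retire sequence in the preceding bullets can be sketched as a small software model. All structure and function names here are illustrative, not taken from the patent:

```python
# Minimal sketch of the load-fencing sequence described above:
# older loads retire one by one, and the fence dispatches only
# when every load older than it has been retired.

class LoadBuffer:
    def __init__(self):
        self.entries = []  # [op, is_fence, retired], in program order

    def allocate(self, op, is_fence=False):
        self.entries.append([op, is_fence, False])

def lfence_step(buf):
    """Advance the model by one step: retire the oldest unretired
    load, or dispatch the fence once all older loads are retired."""
    for entry in buf.entries:
        op, is_fence, retired = entry
        if is_fence:
            return "dispatch_lfence"   # all older loads already retired
        if not retired:
            entry[2] = True            # retire the oldest senior load
            return "retired_" + op
    return "idle"
```

Stepping this model retires the older loads in order before the fence itself is allowed to dispatch; loads newer than the fence (stalled in the real hardware) are omitted from the sketch.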
  • FIG. 1 illustrates instruction flow through microprocessor architecture
  • FIG. 2 flowcharts an embodiment of the load fencing (LFENCE) process with senior loads retiring from the L1 cache controller
  • FIG. 3 flowcharts an embodiment of the memory fencing (MFENCE) process with senior loads retiring from the L1 cache controller
  • FIG. 4 flowcharts an embodiment of the load fencing (LFENCE) process with senior loads retiring from the memory ordering unit
  • FIG. 5 flowcharts an embodiment of the memory fencing (MFENCE) process with senior loads retiring from the memory-ordering unit.
  • MFENCE: memory fence
  • LFENCE: memory load fence
  • MFENCE guarantees that every memory access that precedes it, in program order, is globally visible prior to any memory instruction that follows it, in program order. Memory accesses include loads, stores, and other fence and serializing instructions. MFENCE is therefore strongly ordered with respect to other memory instructions, regardless of their memory type.
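As an illustrative model (not from the patent), the global-visibility guarantee can be expressed as a check over a program-order trace, where each access carries the time at which it became globally visible:

```python
# Illustrative model: each entry in `trace` is
# (operation, time_globally_visible), listed in program order.
# The MFENCE rule requires every access preceding the fence to be
# globally visible before any access that follows it.
def mfence_ordering_ok(trace):
    fence_floor = 0.0   # latest visibility time of accesses before the last fence
    max_seen = 0.0
    for op, visible_at in trace:
        if op == "MFENCE":
            fence_floor = max_seen
        elif visible_at < fence_floor:
            return False   # became visible before an older, pre-fence access
        else:
            max_seen = max(max_seen, visible_at)
    return True
```

A trace in which a post-fence load becomes visible before a pre-fence store fails the check, matching the strong ordering described above.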
  • in earlier processors (for example, Pentium II™ and Celeron™ processors), fencing was implemented with a “store_address_fence” micro-operation.
  • the micro-operation serializes prior and subsequent micro-operations.
  • the micro-operation dispatches “at-retirement,” and it executes only once all older operations have fully completed; i.e., all L1 cache controller buffers are empty.
  • MFENCE is also dispatched “at-retirement”; however, MFENCE provides slightly better performance than the existing “store_address_fence,” since it is allowed to execute once all prior instructions have been globally observed, not necessarily completed.
  • the LFENCE instruction can be contrasted to SFENCE.
  • SFENCE also dispatches “at-retirement,” and it executes once all older stores, in program order, have been globally observed; however, it does not fence loads.
  • LFENCE guarantees that every load that precedes it, in program order, is globally visible prior to any load that follows it, in program order. It prevents speculative loads from passing the LFENCE instruction.
  • LFENCE is also ordered with respect to other LFENCE instructions, MFENCE instructions, and serializing instructions, such as CPUID. It is not ordered with respect to stores or the SFENCE instruction.
  • the behavior of LFENCE is independent of its memory type.
  • in FIG. 1, an example microprocessor memory and bus subsystem is depicted with the flow of memory loads and stores.
  • FIG. 1 shows two cache levels in the microprocessor: an on-chip (“L1”) cache being the cache level closest to the processor, and second level (“L2”) cache being the cache level farthest from the processor.
  • An instruction fetch unit 102 fetches macroinstructions for an instructions decoder unit 104 .
  • the decoder unit 104 decodes the macroinstructions into a stream of microinstructions, which are forwarded to a reservation station 106 , and a reorder buffer and register file 108 .
  • as an instruction enters the memory subsystem, it is allocated into the load buffer 112 or store buffer 114 , depending on whether it is a read or a write memory macroinstruction, respectively. In the unit of the memory subsystem where such buffers reside, the instruction goes through memory ordering checks by the memory ordering unit 110 . If no memory dependencies exist, the instruction is dispatched to the next unit in the memory subsystem after undergoing the physical address translation. At the L1 cache controller 120 , it is determined whether there is an L1 cache hit or miss. In the case of a miss, the instruction is allocated into a set of buffers, from where it is dispatched to the bus sub-system 140 of the microprocessor.
  • the instruction is sent to read buffers, 122 , or in the case of a cacheable store miss, the instruction is sent to write buffers 130 .
  • the write buffers may be either weakly ordered write combining buffers 132 or non-write combining buffers 134 .
  • the read or write micro-operation is allocated into an out-of-order queue 144 . If the micro-operation is cacheable, the L2 cache 146 is checked for a hit/miss. If a miss, the instruction is sent through an in-order queue 142 to the frontside bus 150 to retrieve or update the desired data from main memory.
  • the MFENCE and LFENCE flow through the microprocessor is slightly different from that of a memory load or store.
  • MFENCE and LFENCE never check the L1 cache 124 , 126 or the L2 cache 146 and never allocate a buffer in the L1 cache controller 120 . Consequently, neither instruction ever reaches the bus controller 140 . They are last allocated in a hardware structure in the memory-ordering unit 110 ; i.e., store and load buffers 114 , 112 for MFENCE and LFENCE, respectively.
  • LFENCE is dispatched on the memory ordering unit 110 load port, and MFENCE is dispatched on the memory ordering unit 110 store port. Their data fields are always ignored by the memory subsystem.
  • the PREFETCH macroinstruction could potentially execute after the cache line flush macroinstruction. In this case, the final location of the data would be in the cache hierarchy, with the intent of the cache line flush having been nullified.
  • the SFENCE macroinstruction serializes stores with respect to itself; but it allows senior loads, such as the PREFETCH macroinstruction, to be executed out-of-order.
  • the cache line flush macroinstruction could potentially execute out of order with respect to the older MASKMOVQ. This behavior would nullify the effect of the PREFETCH macroinstruction. Both MASKMOVQ instructions would update main memory.
  • a cache line flush could also potentially execute out of order with respect to the PREFETCH macroinstruction. In this case, the original intent of the cache line flush macroinstruction is never achieved, and the final location of the line is the local cache.
  • MFENCE is the only one of the three fencing macroinstructions (i.e., MFENCE, LFENCE, and SFENCE) that serializes all memory instructions, including a cache line flush.
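The contrast between the three fences can be sketched as a small classification table. This is an illustrative model of the behavior described above, not the patent's hardware logic:

```python
# Illustrative classification: SFENCE orders only stores, LFENCE only
# loads, and MFENCE orders all memory accesses, including the cache
# line flush and PREFETCH macroinstructions discussed above.
FENCED_BY = {
    "SFENCE": {"store"},
    "LFENCE": {"load"},
    "MFENCE": {"store", "load", "clflush", "prefetch"},
}

def may_reorder_across(fence, op_class):
    """True if an access of the given class may execute out of
    order with respect to the fence."""
    return op_class not in FENCED_BY[fence]
```

Under this model a PREFETCH (a senior load) may pass an SFENCE, reproducing the hazard described in the bullets above, while nothing passes an MFENCE.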
  • MFENCE and LFENCE macroinstructions based on the behavior of senior loads.
  • the latter can either retire from the L1 cache controller unit 120 or from the memory-ordering unit 110 , depending on the hardware implementation chosen. In either case, “senior loads” are retired from the memory subsystem of the microprocessor prior to execution.
  • in FIG. 2, a flowchart depicts a load fence (LFENCE) embodiment where senior loads retire from the L1 cache controller unit 120 .
  • in such an embodiment, senior loads cannot be retired unless they are dispatched from the memory ordering unit 110 and accepted by the L1 cache controller 120 . This is the case where there is no L1 cache controller 120 blocking condition.
  • the senior load is retired from the memory subsystem upon a L1 cache hit; alternatively in the case of a L1 cache miss, the senior load is retired upon allocation of the incoming senior load in a read buffer 122 in the L1 cache controller 120 .
  • the instruction fetch unit 102 fetches an LFENCE macroinstruction, block 202 .
  • the instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operation, block 204 .
  • an entry is allocated into the reservation station 106 .
  • a load buffer 112 is allocated in the memory ordering unit 110 , block 208 .
  • the load dispatches that follow (in program order) the LFENCE instruction are stalled, block 210 .
  • the process moves to block 212 , when the LFENCE is ready to dispatch.
  • At-retirement loads are not dispatched from the memory ordering unit 110 until all older loads have been retired from the memory subsystem, as determined by decision block 214 . Therefore, with this hardware embodiment for senior loads, “at-retirement” loads dispatch from the memory-ordering unit 110 in program order with respect to other loads, block 218 . Flow continues to decision block 220 .
  • in decision block 220 , it is determined whether all read buffers 122 , in the L1 cache controller 120 , are globally observed. If not all read buffers 122 are globally observed, the L1 cache controller 120 blocks or aborts the LFENCE instruction in block 222 , and then flow returns to block 210 .
  • the LFENCE does not execute out of order with respect to older loads, because the LFENCE instruction is dispatched “at-retirement” from the memory-ordering unit 110 on the load port. Thus, all older loads in program order have been retired from the memory subsystem of the microprocessor.
  • a new control bit is added to each entry in the load buffers 112 in the memory-ordering unit 110 . It is set when a given entry is allocated to service a LFENCE operation; otherwise, it is cleared.
  • the tail pointer points to the next entry to be deallocated from the load buffer 112 , which is the oldest load in the machine. This implies that all older loads have been completed and deallocated.
  • the corresponding dispatch is stalled if any load buffer 112 entry between the tail pointer and the L1 cache controller 120 dispatch entry has the control bit set.
  • the control bit being set indicates that there is an LFENCE operation between the oldest load in the machine and the load for which a dispatch was attempted. The latter load cannot be dispatched out of order with respect to the LFENCE, and it is consequently stalled until retirement of the LFENCE.
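The control-bit check described in the preceding bullets can be sketched as follows. The load-buffer layout is hypothetical; the buffer is treated as a circular queue:

```python
# Sketch of the control-bit stall check described above. The load
# buffer is modeled as a circular queue of entries, each carrying
# the per-entry control bit ("is_lfence"). Names are illustrative.
def must_stall(load_buffer, tail, dispatch_idx):
    """Stall the dispatch at dispatch_idx if an entry with the
    control bit set lies between the tail pointer (the oldest load
    in the machine) and the entry being dispatched."""
    i = tail
    while i != dispatch_idx:
        if load_buffer[i]["is_lfence"]:
            return True   # an LFENCE sits between the oldest load and this one
        i = (i + 1) % len(load_buffer)
    return False
```

A load whose walk from the tail pointer crosses an entry with the bit set is stalled until the LFENCE retires; a load with no intervening fence dispatches freely.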
  • the retirement of the LFENCE occurs when the tail pointer passes the LFENCE instruction.
  • a memory fence can be thought of as a more restrictive embodiment of the load fence in which an LFENCE dispatches an “all blocking” micro-operation from the store port.
  • the MFENCE instruction is allocated in the store buffers 114 , instead of load buffers 112 . It has the disadvantage of serializing both loads and stores. This can be thought of as mapping the LFENCE micro-operation to the MFENCE micro-operation.
  • in FIG. 3, a flowchart depicts a memory fence (MFENCE) embodiment where senior loads and stores retire from the L1 cache controller unit 120 .
  • senior instructions cannot be deallocated from the store buffer in the memory unit unless they are dispatched from the memory-ordering unit 110 , and accepted by the L1 cache controller 120 . This is the case where there is no L1 cache controller 120 blocking condition.
  • the senior instructions are retired from the memory subsystem upon a L1 cache hit; alternatively in the case of a L1 cache miss, the senior instructions are retired upon allocation of the incoming senior instructions in a read buffer 122 in the L1 cache controller 120 .
  • the instruction fetch unit 102 fetches an MFENCE macroinstruction, block 302 .
  • the instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operation, block 304 .
  • an entry is allocated into the reservation station 106 .
  • a store buffer 114 is allocated in the memory ordering unit 110 , block 308 .
  • the store dispatches that follow (in program order) the MFENCE instruction are stalled, block 310 .
  • the process moves to block 312 , when the MFENCE is ready to dispatch.
  • Decision block 314 determines whether all older memory access instructions have been retired from the memory subsystem before “at-retirement” instructions are dispatched from the memory ordering unit 110 . Therefore, with this hardware embodiment for senior instructions, “at-retirement” instructions dispatch from the memory-ordering unit 110 in program order with respect to other instructions, block 318 . Flow continues to decision block 320 .
  • in decision block 320 , it is determined whether any outstanding read buffers 122 or write buffers 130 , in the L1 cache controller 120 , are globally observed. If not all the buffers 122 , 130 are globally observed, flow moves to decision block 322 . In decision block 322 , it is determined whether any write combining buffers 132 in the L1 cache controller 120 are in the eviction process. If write combining buffers 132 are in the eviction process, the L1 cache controller 120 blocks or aborts the MFENCE instruction in block 326 , and then flow returns to block 310 . If no write combining buffers 132 are in the eviction process, all outstanding write combining buffers 132 are evicted, block 324 , and flow moves to block 326 .
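The decision flow through blocks 320-326 can be condensed into a single dispatch-time decision. The state flags below are an illustrative model, not from the patent:

```python
# Sketch of the L1 cache controller's response to an incoming MFENCE,
# per the flow described above. `state` carries two illustrative
# flags: whether all outstanding buffers are globally observed, and
# whether write combining buffers are already being evicted.
def mfence_l1_action(state):
    if state["all_globally_observed"]:
        return "retire_as_nop"       # accept: treat MFENCE as a NOP
    if state["wc_buffers_evicting"]:
        return "block"               # abort; memory ordering unit redispatches
    return "evict_wc_buffers"        # start evicting outstanding WC buffers
```

The memory ordering unit keeps redispatching a blocked MFENCE until all buffers are globally observed, at which point the controller accepts it as a non-operation.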
  • the L1 cache controller 120 treats the MFENCE instruction as a non-operation (NOP), and the MFENCE is retired from the L1 cache controller 120 .
  • MFENCE is dispatched as an “all blocking” micro-operation from the memory ordering unit 110 on the store port.
  • senior loads retire from the memory-ordering unit 110 .
  • senior loads can be retired upon their first dispatch from the memory-ordering unit 110 , even if the L1 cache controller 120 did not accept the senior load.
  • Such an example includes an L1 cache controller 120 blocking condition.
  • the instruction fetch unit 102 fetches an LFENCE macroinstruction, block 402 .
  • the instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operation, block 404 .
  • an entry is allocated into the reservation station 106 .
  • a load buffer 112 is allocated in the memory ordering unit 110 , block 408 .
  • the load dispatches that follow (in program order) the LFENCE instruction are stalled, block 410 .
  • the process moves to block 412 , when the LFENCE is ready to dispatch.
  • “At-retirement” loads are not dispatched from the memory ordering unit 110 until all older loads have been retired from the memory subsystem, and the load buffer tail pointer points to the LFENCE instruction, as determined by decision block 414 . Therefore, with this hardware embodiment for senior loads, “at-retirement” loads dispatch from the memory-ordering unit 110 in program order with respect to other loads, block 418 . Flow continues to decision block 420 .
  • in decision block 420 , it is determined whether all read buffers 122 , in the L1 cache controller 120 , are globally observed. If not all read buffers 122 are globally observed, the L1 cache controller 120 blocks or aborts the LFENCE instruction in block 422 , and then flow returns to block 410 .
  • the LFENCE does not execute out of order with respect to older loads, because the LFENCE instruction is not dispatched from the memory-ordering unit until two conditions are met.
  • the first condition is that the corresponding load buffer entry is pointed to by the reorder buffer retirement pointer.
  • the second condition is that the corresponding load buffer entry is also pointed to by the load buffer tail pointer.
  • the retirement pointer indicates all older instructions have been retired, and the tail pointer points to the next entry to be deallocated from the load buffer.
  • the tail pointer can also be thought of as pointing to the oldest load in the machine.
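The two dispatch conditions above reduce to a simple predicate. This is an illustrative sketch, not the patent's circuitry:

```python
# Per the two conditions described above: the LFENCE's load-buffer
# entry must be pointed to both by the reorder-buffer retirement
# pointer (all older instructions retired) and by the load-buffer
# tail pointer (it is the oldest load in the machine).
def lfence_ready_to_dispatch(entry_idx, retirement_ptr, tail_ptr):
    return entry_idx == retirement_ptr and entry_idx == tail_ptr
```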
  • LFENCE uses the same implementation as for the case described earlier with senior loads retiring from the L1 cache controller.
  • a control bit is added for each load buffer entry. Prior to a load dispatch, the value of this control bit is checked for each entry between the one pointed to by the tail pointer and the one for which a memory dispatch is being attempted.
  • an MFENCE instruction can be implemented where senior loads retire from the memory-ordering unit 110 .
  • an MFENCE does not execute out of order with respect to older memory instructions, nor do any younger memory instructions execute out of order with respect to the MFENCE.
  • an additional micro-operation is required to implement the MFENCE.
  • the MFENCE could be implemented as a set of two micro-operations on the store port. Those two micro-operations are “store_data” (the data is ignored) and “store_address_mfence”. In the current embodiment, three micro-operations are needed to implement MFENCE and support senior loads retiring from the memory-ordering unit.
  • micro-operations are: an “LFENCE” micro-operation, a “Store-data” micro-operation, and a “Store_address_MFENCE” micro-operation.
  • the first micro-operation can be the same as the LFENCE embodiment described to support senior loads retiring from the memory-ordering unit 110 .
  • the last two micro-operations are the same as those used to implement MFENCE and support senior loads retiring from the L1 cache controller 120 .
  • the micro-operations are “all blocking” micro-operations dispatched from the memory ordering unit on the store port.
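The micro-operation decompositions described above can be summarized in a hypothetical decode sketch; the function and string names are illustrative:

```python
# Hypothetical decode sketch: two micro-operations suffice when senior
# loads retire from the L1 cache controller; a third ("lfence") is
# added when they retire from the memory-ordering unit, as described
# in the bullets above.
def decode_mfence(senior_loads_retire_from):
    if senior_loads_retire_from == "l1_cache_controller":
        return ["store_data", "store_address_mfence"]
    if senior_loads_retire_from == "memory_ordering_unit":
        return ["lfence", "store_data", "store_address_mfence"]
    raise ValueError("unknown senior-load retirement point")
```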
  • the instruction fetch unit 102 fetches an MFENCE macroinstruction, block 502 .
  • the instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operations, block 504 .
  • an entry is allocated into the reservation station 106 .
  • a load buffer 112 and store buffer 114 entries are allocated in the memory ordering unit 110 , block 508 .
  • the load dispatches that follow (in program order) the LFENCE instruction are stalled and then the MFENCE micro-operation is performed, block 510 .
  • the process moves to block 512 , when the LFENCE stalls the dispatch of the MFENCE micro-operation.
  • the LFENCE is ready to dispatch.
  • the “at-retirement” loads are dispatched from the memory ordering unit 110 when all older loads have been retired from the memory subsystem and the load buffer 112 tail pointer points to the LFENCE instruction, as determined by decision block 516 . Therefore, with this hardware embodiment for senior loads, “at-retirement” loads dispatch from the L1 cache controller on the load port, block 520 . Flow continues to decision block 522 .
  • in decision block 522 , it is determined whether any outstanding read buffers 122 , in the L1 cache controller 120 , are globally observed. If not all the read buffers 122 are globally observed, flow moves to block 524 . At block 524 , the L1 cache controller 120 blocks or aborts the LFENCE instruction.
  • the L1 cache controller 120 treats the LFENCE instruction as a non-operation (NOP), and the LFENCE is retired from the L1 cache controller 120 .
  • the process moves to block 530 , when the MFENCE is ready to dispatch.
  • Decision block 532 determines whether all older instructions have been retired from the memory subsystem before “at-retirement” instructions are dispatched from the memory ordering unit 110 . Therefore, with this hardware embodiment for senior memory instructions, “at-retirement” instructions dispatch from the memory-ordering unit 110 in program order with respect to other instructions, block 536 . Flow continues to decision block 538 .
  • in decision block 538 , it is determined whether any outstanding read buffers 122 or write buffers 130 , in the L1 cache controller 120 , are globally observed. If not all the buffers 122 , 130 are globally observed, flow moves to decision block 540 .
  • in decision block 540 , it is determined whether any write combining buffers 132 in the L1 cache controller 120 are in the eviction process. If write combining buffers 132 are in the eviction process, the L1 cache controller 120 blocks or aborts the MFENCE instruction in block 544 , and then flow returns to block 528 . If no write combining buffers 132 are in the eviction process, all outstanding write combining buffers 132 are evicted, block 542 , and flow moves to block 544 .
  • the L1 cache controller 120 treats the MFENCE instruction as a non-operation (NOP), and the MFENCE is retired from the L1 cache controller 120 .
  • NOP non-operation
  • LFENCE is always dispatched from the memory-ordering unit 110 to the rest of the memory subsystem once it is guaranteed to be the oldest load in the machine.
  • the LFENCE instruction Upon its dispatch from the memory-ordering unit 110 , the LFENCE instruction is blocked by the L1 cache controller 120 if there are read buffers 122 not yet globally observed.
  • the memory ordering unit 110 keeps redispatching the LFENCE until all read buffers 122 in the L1 cache controller 120 are globally observed.
  • the L1 cache controller 120 accepts the incoming LFENCE, it is retired from the memory subsystem, and it is treated as a non-operation. Consequently, the instruction is never allocated a buffer, nor are any cache hit/miss checks performed.
  • MFENCE Upon its dispatch from the memory-ordering unit 110 , MFENCE is blocked by the L1 cache controller 120 if there are any outstanding operations in the L1 cache controller 120 not yet globally observed. If blocked, the MFENCE instruction evicts any outstanding write combining buffers 132 . Once the L1 cache controller 120 accepts the incoming MFENCE instruction, it is treated as a non-operation and is retired from the memory subsystem. Note that the L1 cache controller 120 accepts the incoming MFENCE instruction only when all L1 cache controller buffers are globally observed. Just like LFENCE, MFENCE is never allocated a buffer, nor are any cache hit/miss checks performed.
  • two non-user visible mode bits can be added to enable/disable the MFENCE and LFENCE macroinstructions. If disabled, the L1 cache controller unit 120 can treat the incoming MFENCE and LFENCE micro-operations as a non-operation, and it does not check for global observation of older instructions. Thus, MFENCE and LFENCE are not blocked if their outstanding buffers in the L1 cache controller 120 not yet globally observed.
  • LFENCE the hardware implementation of LFENCE can be mapped to that of MFENCE.
  • the corresponding MFENCE micro-operations can be used for both macroinstructions. This embodiment would still satisfy the architectural requirements of LFENCE, since the MFENCE behavior is more restrictive.

Abstract

A system and method for fencing memory accesses. Memory loads can be fenced, or all memory accesses can be fenced. The system receives a fencing instruction that separates memory access instructions into older accesses and newer accesses. A buffer within the memory ordering unit is allocated to the instruction. The access instructions newer than the fencing instruction are stalled. The older access instructions are gradually retired. When all older memory accesses are retired, the fencing instruction is dispatched from the buffer.

Description

CROSS REFERENCE TO OTHER APPLICATIONS
The present application is a continuation of U.S. patent application Ser. No. 13/440,096, filed on Apr. 5, 2012, entitled, “MFENCE AND LFENCE MICRO-ARCHITECTURAL IMPLEMENTATION METHOD AND SYSTEM,” now pending, which is a continuation of U.S. patent application Ser. No. 10/654,573, filed on Sep. 2, 2003, entitled, “METHOD AND SYSTEM FOR ACCESSING MEMORY IN PARALLEL COMPUTING USING LOAD FENCING INSTRUCTIONS”, now U.S. Pat. No. 8,171,261, which is a continuation of U.S. patent application Ser. No. 10/194,531, filed Jul. 12, 2002, entitled “MFENCE AND LFENCE MICRO-ARCHITECTURAL IMPLEMENTATION METHOD AND SYSTEM”, now U.S. Pat. No. 6,651,151, which is a continuation of U.S. patent application Ser. No. 09/475,363, filed Dec. 30, 1999, entitled “MFENCE AND LFENCE MICRO-ARCHITECTURAL IMPLEMENTATION METHOD AND SYSTEM”, now U.S. Pat. No. 6,678,810. The U.S. patent application Ser. Nos. 13/440,096, 10/654,573, 10/194,531, and 09/475,363 are hereby incorporated herein by this reference.
BACKGROUND
1. Field of the Invention
The present invention relates in general to computer architecture and in particular to a method and system of organizing memory access.
2. Description of the Related Art
Video, graphics, communications and multimedia applications require high throughput processing power. As consumers increasingly demand these applications, microprocessors have been tailored to accelerate multimedia and communications applications.
Media extensions, such as the Intel MMX™ technology, introduced an architecture and instructions to enhance the performance of advanced media and communications applications, while preserving compatibility with existing software and operating systems. The new instructions operated in parallel on multiple data elements packed into 64-bit quantities. The instructions accelerated the performance of applications with computationally intensive algorithms that performed localized, reoccurring operations on small native data. These multimedia applications included: motion video, combined graphics with video, image processing, audio synthesis, speech synthesis and compression, telephony, video conferencing, and two and three-dimensional graphics applications.
Although parallel operations on data can accelerate overall system throughput, a problem occurs when memory is shared and communicated among processors. For example, suppose a processor performs data decompression of a video image. If a memory load or store occurs from an external agent or another processor while the data image is not complete, the external agent would receive incomplete or corrupt image data. Moreover, the situation becomes particularly acute, as many multimedia applications now require communications and data exchange between many external agents, such as external graphics processors.
Thus, what is needed is a method and system that allow computer architecture to perform computations in parallel, yet guarantee the integrity of a memory access or store.
SUMMARY
The load fencing process and system receives a load fencing instruction that separates memory load instructions into older loads and newer loads. A load buffer within the memory ordering unit is allocated to the instruction. The load instructions newer than the load fencing instruction are stalled. The older load instructions are gradually retired. When all older loads from the memory subsystem are retired, the load fencing instruction is dispatched.
BRIEF DESCRIPTION OF THE DRAWINGS
The inventions claimed herein will be described in detail with reference to the drawings, in which like reference characters identify corresponding elements throughout, and wherein:
FIG. 1 illustrates instruction flow through microprocessor architecture;
FIG. 2 flowcharts an embodiment of the load fencing (LFENCE) process with senior loads retiring from the L1 cache controller;
FIG. 3 flowcharts an embodiment of the memory fencing (MFENCE) process with senior loads retiring from the L1 cache controller;
FIG. 4 flowcharts an embodiment of the load fencing (LFENCE) process with senior loads retiring from the memory ordering unit; and
FIG. 5 flowcharts an embodiment of the memory fencing (MFENCE) process with senior loads retiring from the memory-ordering unit.
DETAILED DESCRIPTION
It is possible to order the execution of memory access in computer architecture. The method and system of implementing this memory “fencing” will be discussed in the terms of two memory fence instructions—a memory fence (“MFENCE”) and a memory load fence (“LFENCE”). These instructions complement the use of SFENCE, an existing Intel MMX2™ instruction. Neither instruction has an associated address or data operand.
MFENCE guarantees that every memory access that precedes it, in program order, is globally visible prior to any memory instruction that follows it, in program order. Memory accesses include loads, stores, and other fence and serializing instructions. MFENCE is therefore strongly ordered with respect to other memory instructions, regardless of their memory type.
In the Intel family of P6 microprocessors (for example, Pentium II™, and Celeron™ processors), a micro-operation, “store_address_fence,” serializes prior and subsequent micro-operations. The micro-operation dispatches “at-retirement,” and it executes only once all older operations have fully completed; i.e., all L1 cache controller buffers are empty. Similarly, MFENCE is also dispatched “at-retirement”; however, MFENCE provides slightly better performance than the existing “store_address_fence,” since it is allowed to execute once all prior instructions have been globally observed, not necessarily completed.
The LFENCE instruction can be contrasted to SFENCE. SFENCE also dispatches “at-retirement,” and it executes once all older stores, in program order, have been globally observed; however, it does not fence loads. LFENCE guarantees that every load that precedes it, in program order, is globally visible prior to any load that follows it, in program order. It prevents speculative loads from passing the LFENCE instruction. LFENCE is also ordered with respect to other LFENCE instructions, MFENCE instructions, and serializing instructions, such as CPUID. It is not ordered with respect to stores or the SFENCE instruction. Like with MFENCE, the behavior of LFENCE is independent of its memory type.
In FIG. 1, an example microprocessor memory and bus subsystem is depicted with the flow of memory loads and stores. FIG. 1 shows two cache levels in the microprocessor: an on-chip ("L1") cache, the cache level closest to the processor, and a second-level ("L2") cache, the cache level farthest from the processor. An instruction fetch unit 102 fetches macroinstructions for an instruction decoder unit 104. The decoder unit 104 decodes the macroinstructions into a stream of microinstructions, which are forwarded to a reservation station 106 and a reorder buffer and register file 108. As an instruction enters the memory subsystem, it is allocated in the load buffer 112 or store buffer 114, depending on whether it is a read or a write memory macroinstruction, respectively. In the unit of the memory subsystem where such buffers reside, the instruction goes through memory ordering checks by the memory ordering unit 110. If no memory dependencies exist, the instruction is dispatched to the next unit in the memory subsystem after undergoing physical address translation. At the L1 cache controller 120, it is determined whether there is an L1 cache hit or miss. In the case of a miss, the instruction is allocated into a set of buffers, from where it is dispatched to the bus subsystem 140 of the microprocessor. In the case of a cacheable load miss, the instruction is sent to the read buffers 122; in the case of a cacheable store miss, the instruction is sent to the write buffers 130. The write buffers may be either weakly ordered write combining buffers 132 or non-write combining buffers 134. In the bus controller unit 140, the read or write micro-operation is allocated into an out-of-order queue 144. If the micro-operation is cacheable, the L2 cache 146 is checked for a hit/miss. On a miss, the instruction is sent through an in-order queue 142 to the frontside bus 150 to retrieve or update the desired data from main memory.
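The buffer-routing decision described above can be sketched as a small simulation. This is a hypothetical illustration (the function and buffer names are invented for this sketch, not taken from the patent), showing only which buffers a memory micro-operation is allocated to on its way through the subsystem:

```python
# Hypothetical sketch of the FIG. 1 routing: reads use the load buffer and
# writes the store buffer in the memory ordering unit; on an L1 miss, the
# L1 cache controller allocates a read buffer or a (write combining /
# non-write combining) write buffer, and the op reaches the bus controller.

def route(op, l1_hit, write_combining=False):
    """Return the list of buffers a memory micro-op passes through."""
    path = ["load_buffer" if op == "load" else "store_buffer"]
    if not l1_hit:
        if op == "load":
            path.append("read_buffer")            # cacheable load miss
        else:
            path.append("wc_write_buffer" if write_combining
                        else "non_wc_write_buffer")  # cacheable store miss
        path.append("bus_out_of_order_queue")     # bus controller allocation
    return path
```

On an L1 hit the operation never needs an L1 cache controller buffer, which is why the path is just the memory ordering unit buffer in that case.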
As can be seen in FIG. 1, the MFENCE and LFENCE flow through the microprocessor is slightly different from that of a memory load or store. MFENCE and LFENCE never check the L1 cache 124, 126 or the L2 cache 146 and never allocate a buffer in the L1 cache controller 120. Consequently, neither instruction ever reaches the bus controller 140. They are last allocated in a hardware structure in the memory-ordering unit 110; i.e., store and load buffers 114, 112 for MFENCE and LFENCE, respectively.
LFENCE is dispatched on the memory ordering unit 110 load port, and MFENCE is dispatched on the memory ordering unit 110 store port. Their data fields are always ignored by the memory subsystem.
The memory ordering constraints of the MFENCE and LFENCE macroinstructions are seen below in Tables 1 and 2 and are compared with SFENCE.
TABLE 1
Memory ordering of instructions with respect
to later MFENCE and LFENCE macroinstructions

                    Later access
Earlier access      MFENCE   LFENCE   SFENCE
Non-senior load     N        N        Y*
Senior load         N        N        Y*
Store               N        Y*       N
CLFLUSH             N        Y*       Y*
MFENCE              N        N        N
LFENCE              N        N        Y*
SFENCE              N        Y*       N

Note:
N = cannot pass, Y = can pass.
* = Dependent on hardware implementation, these ordering constraints can be more restrictive, while still adhering to the architectural definition of the macroinstruction.
TABLE 2
Memory ordering of instructions with respect to
earlier MFENCE and LFENCE macroinstructions

                              Later access
Earlier   Non-senior   Senior
access    load         load     Store   CLFLUSH   MFENCE   LFENCE   SFENCE
MFENCE    N            N        N       N         N        N        N
LFENCE    N            N        Y*      Y*        N        N        Y*
SFENCE    Y*           Y*       N       Y*        N        Y*       N

Note:
N = cannot pass, Y = can pass.
* = Dependent on hardware implementation, these ordering constraints can be more restrictive, while still adhering to the architectural definition of the macroinstruction.
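The two tables above can be transcribed directly into a lookup structure, which makes the ordering constraints easy to check mechanically. This is only a transcription of Tables 1 and 2 (the names and the `can_pass` helper are invented for this sketch); "Y*" entries are implementation-dependent upper bounds and may be more restrictive in a given design:

```python
# Table 1: earlier access (row) vs. later fence (column) -- may the later
# fence pass the earlier access?
LATER_FENCE = {
    "non_senior_load": {"MFENCE": "N", "LFENCE": "N",  "SFENCE": "Y*"},
    "senior_load":     {"MFENCE": "N", "LFENCE": "N",  "SFENCE": "Y*"},
    "store":           {"MFENCE": "N", "LFENCE": "Y*", "SFENCE": "N"},
    "CLFLUSH":         {"MFENCE": "N", "LFENCE": "Y*", "SFENCE": "Y*"},
    "MFENCE":          {"MFENCE": "N", "LFENCE": "N",  "SFENCE": "N"},
    "LFENCE":          {"MFENCE": "N", "LFENCE": "N",  "SFENCE": "Y*"},
    "SFENCE":          {"MFENCE": "N", "LFENCE": "Y*", "SFENCE": "N"},
}

# Table 2: earlier fence (row) vs. later access (column) -- may the later
# access pass the earlier fence?
LATER_ACCESS = {
    "MFENCE": {"non_senior_load": "N",  "senior_load": "N",  "store": "N",
               "CLFLUSH": "N",  "MFENCE": "N", "LFENCE": "N",  "SFENCE": "N"},
    "LFENCE": {"non_senior_load": "N",  "senior_load": "N",  "store": "Y*",
               "CLFLUSH": "Y*", "MFENCE": "N", "LFENCE": "N",  "SFENCE": "Y*"},
    "SFENCE": {"non_senior_load": "Y*", "senior_load": "Y*", "store": "N",
               "CLFLUSH": "Y*", "MFENCE": "N", "LFENCE": "Y*", "SFENCE": "N"},
}

def can_pass(table, earlier, later):
    """True if the later item may execute ahead of the earlier one."""
    return table[earlier][later].startswith("Y")
```

For example, `can_pass(LATER_ACCESS, "MFENCE", "CLFLUSH")` is false, which is exactly the property the CLFLUSH example below relies on.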
When using fencing instructions other than MFENCE, such as LFENCE or SFENCE, strong ordering with respect to a cache line flush (“CLFLUSH”) macroinstruction cannot be guaranteed. The former two instructions only serialize loads (LFENCE) or stores (SFENCE), respectively, but not both.
Take for example the code below. Masked stores write to address [x]. All instructions except MFENCE target cache line at address [x]:
PREFETCH [x]
MASKMOVQ data1, mask1
MFENCE
CLFLUSH [x]
MFENCE
MASKMOVQ data2, mask2
In the example code above, the intent of the programmer is to prefetch line [x] into the L1 cache. Then, write data1 (assuming mask1 = all 1's) to line [x], flush the line out to main memory, and write data2 (assuming mask2 = all 1's) to line [x] in main memory (line [x] is no longer in the cache hierarchy).
However, if the SFENCE macroinstruction were used in place of MFENCE, the PREFETCH macroinstruction could potentially execute after the cache line flush macroinstruction. In this case, the final location of the data would be in the cache hierarchy, with the intent of the cache line flush having been nullified. The SFENCE macroinstruction serializes stores with respect to itself; but it allows senior loads, such as the PREFETCH macroinstruction, to be executed out-of-order.
Alternatively, if the LFENCE macroinstruction were used in place of MFENCE, the cache line flush macroinstruction could potentially execute out of order with respect to the older MASKMOVQ. This behavior would nullify the effect of the PREFETCH macroinstruction. Both MASKMOVQ instructions would update main memory. Dependent on the hardware implementation chosen for LFENCE, a cache line flush could also potentially execute out of order with respect to the PREFETCH macroinstruction. In this case, the original intent of the cache line flush macroinstruction is never achieved, and the final location of the line is the local cache.
MFENCE is the only one of the three fencing macroinstructions (i.e., MFENCE, LFENCE, and SFENCE) that serializes all memory instructions, including a cache line flush. Using MFENCE, strong ordering is achieved, as shown in the above example code.
There are two alternative hardware embodiments for the MFENCE and LFENCE macroinstructions based on the behavior of senior loads. The latter can either retire from the L1 cache controller unit 120 or from the memory-ordering unit 110, depending on the hardware implementation chosen. In either case, “senior loads” are retired from the memory subsystem of the microprocessor prior to execution.
Turning to FIG. 2, a flowchart depicts a load fence (LFENCE) embodiment where senior loads retire from the L1 cache controller unit 120. In such an embodiment, senior loads cannot be retired unless they are dispatched from the memory ordering unit 110, and accepted by the L1 cache controller 120. This is the case where there is no L1 cache controller 120 blocking condition. The senior load is retired from the memory subsystem upon a L1 cache hit; alternatively in the case of a L1 cache miss, the senior load is retired upon allocation of the incoming senior load in a read buffer 122 in the L1 cache controller 120.
Initially, the instruction fetch unit 102 fetches an LFENCE macroinstruction, block 202. The instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operation, block 204. In block 206, an entry is allocated into the reservation station 106. A load buffer 112 is allocated in the memory ordering unit 110, block 208. The load dispatches that follow (in program order) the LFENCE instruction are stalled, block 210. The process moves to block 212, when the LFENCE is ready to dispatch.
If not all older loads in program order are retired from the memory subsystem, as determined by decision block 214, the LFENCE is dispatched and older loads are retired in block 216, then the flow returns to block 210.
“At-retirement” loads are not dispatched from the memory ordering unit 110 until all older loads have been retired from the memory subsystem, as determined by decision block 214. Therefore, with this hardware embodiment for senior loads, “at-retirement” loads dispatch from the memory-ordering unit 110 in program order with respect to other loads, block 218. Flow continues to decision block 220.
In decision block 220, it is determined whether all read buffers 122, in the L1 cache controller 120, are globally observed. If not all read buffers 122 are globally observed, the L1 cache controller 120 blocks or aborts the LFENCE instruction in block 222, and then flow returns to block 210.
If all read buffers 122 are globally observed, as determined by block 220, flow ends in block 224, when the LFENCE is deallocated from the load buffer 112 in the memory ordering unit 110. The L1 cache controller 120 treats the LFENCE instruction as a non-operation (NOP), and the LFENCE is retired from the L1 cache controller 120.
It is worth noting that the LFENCE does not execute out of order with respect to older loads, because the LFENCE instruction is dispatched “at-retirement” from the memory-ordering unit 110 on the load port. Thus, all older loads in program order have been retired from the memory subsystem of the microprocessor.
Similarly, newer loads do not execute out of order with respect to an LFENCE. A new control bit is added to each entry in the load buffers 112 in the memory-ordering unit 110. It is set when a given entry is allocated to service an LFENCE operation; otherwise, it is cleared. The tail pointer points to the next entry to be deallocated from the load buffer 112, which is the oldest load in the machine. This implies that all older loads have been completed and deallocated. The corresponding dispatch is stalled if any load buffer 112 entry between the tail pointer and the L1 cache controller 120 dispatch entry has the control bit set. The control bit being set indicates that there is an LFENCE operation between the oldest load in the machine and the load for which a dispatch was attempted. The latter load cannot be dispatched out of order with respect to the LFENCE, and it is consequently stalled until retirement of the LFENCE. The retirement of the LFENCE occurs when the tail pointer passes the LFENCE instruction.
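The control-bit check just described can be sketched as follows. This is a simplified model (the buffer is a plain list with the tail at a lower index than the dispatching entry, and all names are invented), not the actual circular load-buffer hardware:

```python
# Sketch of the load-buffer control-bit stall: a load may not dispatch if
# any entry between the tail pointer (oldest load) and the dispatching
# entry has its LFENCE control bit set.

def may_dispatch(load_buffer, tail, entry):
    """load_buffer: list of dicts with an 'is_lfence' control bit.
    Scans entries from the tail up to (not including) 'entry'."""
    for i in range(tail, entry):
        if load_buffer[i]["is_lfence"]:
            return False  # an LFENCE sits between the oldest load and this load
    return True

# Three-entry buffer: a plain load, an LFENCE, then another load.
buf = [{"is_lfence": False}, {"is_lfence": True}, {"is_lfence": False}]
```

Here the load in entry 2 is stalled (the LFENCE in entry 1 is between it and the tail), while the load in entry 1's shadow dispatches freely once the tail passes the LFENCE.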
A memory fence (MFENCE) can be thought of as a more restrictive embodiment of the load fence in which an LFENCE dispatches an “all blocking” micro-operation from the store port. In such an embodiment, shown in FIG. 3, the MFENCE instruction is allocated in the store buffers 114, instead of load buffers 112. It has the disadvantage of serializing both loads and stores. This can be thought of as mapping the LFENCE micro-operation to the MFENCE micro-operation.
In FIG. 3, a flowchart depicts a memory fence (MFENCE) embodiment where senior loads and stores retire from the L1 cache controller unit 120. In such an embodiment, senior instructions cannot be deallocated from the store buffer in the memory unit unless they are dispatched from the memory-ordering unit 110, and accepted by the L1 cache controller 120. This is the case where there is no L1 cache controller 120 blocking condition. The senior instructions are retired from the memory subsystem upon a L1 cache hit; alternatively in the case of a L1 cache miss, the senior instructions are retired upon allocation of the incoming senior instructions in a read buffer 122 in the L1 cache controller 120.
Initially, the instruction fetch unit 102 fetches an MFENCE macroinstruction, block 302. The instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operation, block 304. In block 306, an entry is allocated into the reservation station 106. A store buffer 114 is allocated in the memory ordering unit 110, block 308. The store dispatches that follow (in program order) the MFENCE instruction are stalled, block 310. The process moves to block 312, when the MFENCE is ready to dispatch.
If not all older memory access instructions in program order are retired from the memory subsystem, as determined by decision block 314, the MFENCE is dispatched and older instructions are retired in block 316, then the flow returns to block 310.
Decision block 314 determines whether all older memory access instructions have been retired from the memory subsystem before “at-retirement” instructions are dispatched from the memory ordering unit 110. Therefore, with this hardware embodiment for senior instructions, “at-retirement” instructions dispatch from the memory-ordering unit 110 in program order with respect to other instructions, block 318. Flow continues to decision block 320.
In decision block 320, it is determined whether any outstanding read buffers 122 or write buffers 130, in the L1 cache controller 120, are globally observed. If not all the buffers 122, 130 are globally observed, flow moves to block 322. In decision block 322, it is determined whether any write combining buffers 132 in the L1 cache controller 120 are in the eviction process. If write combining buffers 132 are in the eviction process, the L1 cache controller 120 blocks or aborts the MFENCE instruction in block 326, and then flow returns to block 310. If there are no write combining buffers 132 in the eviction process, all outstanding write combining buffers 132 are evicted, block 324, and flow moves to block 326.
Returning to decision block 320, if all outstanding read buffers 122 or write buffers 130 are already globally observed, flow ends in block 328, when the MFENCE is deallocated from the store buffer 114 in the memory ordering unit 110. The L1 cache controller 120 treats the MFENCE instruction as a non-operation (NOP), and the MFENCE is retired from the L1 cache controller 120.
To ensure the MFENCE instruction does not execute out of order with respect to earlier memory instructions, and later memory instructions do not execute out of order with respect to MFENCE, MFENCE is dispatched as an “all blocking” micro-operation from the memory ordering unit 110 on the store port.
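The accept/block decision of FIG. 3 can be summarized in a short sketch. This is an illustration of the flow described above under invented names, not a description of the actual L1 cache controller logic:

```python
# Sketch of the L1 cache controller's MFENCE decision: accept (and retire
# as a NOP) only when every outstanding read and write buffer is globally
# observed; otherwise block, starting eviction of any write combining
# buffers that are not already being evicted.

def mfence_dispatch(read_buffers, write_buffers, wc_buffers):
    """Buffers are dicts with 'globally_observed' / 'evicting' flags.
    Returns ('accept',) or ('block', number_of_evictions_started)."""
    if all(b["globally_observed"] for b in read_buffers + write_buffers):
        return ("accept",)  # treated as a NOP; no buffer is ever allocated
    started = 0
    for wc in wc_buffers:
        if not wc["evicting"]:
            wc["evicting"] = True  # evict outstanding WC buffers
            started += 1
    return ("block", started)  # the memory ordering unit will redispatch
```

The memory ordering unit keeps redispatching the MFENCE until this function returns `('accept',)`, mirroring the block 310/326 loop in the flowchart.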
In an alternate hardware embodiment, senior loads retire from the memory-ordering unit 110. In this embodiment, depicted in FIG. 4, senior loads can be retired upon their first dispatch from the memory-ordering unit 110, even if the L1 cache controller 120 did not accept the senior load. Such an example includes an L1 cache controller 120 blocking condition. In this implementation, it is possible for a senior load to be retired from the memory subsystem of the microprocessor, and an entry in the load buffer 112 can still remain allocated with this senior load for subsequent re-dispatch to the L1 cache controller 120. It is therefore possible for a younger “at-retirement” load (i.e., an uncachable load) to execute out of order with respect to an older senior load.
The instruction fetch unit 102 fetches an LFENCE macroinstruction, block 402. The instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operation, block 404. In block 406, an entry is allocated into the reservation station 106. A load buffer 112 is allocated in the memory ordering unit 110, block 408. The load dispatches that follow (in program order) the LFENCE instruction are stalled, block 410. The process moves to block 412, when the LFENCE is ready to dispatch.
If not all older loads in program order are retired from the memory subsystem, and the load buffer 112 tail pointer is pointing to the LFENCE instruction, as determined by decision block 414, the LFENCE is dispatched and older loads are retired in block 416, then the flow returns to block 410.
“At-retirement” loads are not dispatched from the memory ordering unit 110 until all older loads have been retired from the memory subsystem, and the load buffer tail pointer points to the LFENCE instruction, as determined by decision block 414. Therefore, with this hardware embodiment for senior loads, “at-retirement” loads dispatch from the memory-ordering unit 110 in program order with respect to other loads, block 418. Flow continues to decision block 420.
In decision block 420, it is determined whether all read buffers 122, in the L1 cache controller 120, are globally observed. If not all read buffers 122 are globally observed, the L1 cache controller 120 blocks or aborts the LFENCE instruction in block 422, and then flow returns to block 410.
If all read buffers 122 are globally observed, as determined by block 420, flow ends in block 424, when the LFENCE is deallocated from the load buffer 112 in the memory ordering unit 110. The L1 cache controller 120 treats the LFENCE instruction as a non-operation (NOP), and the LFENCE is retired from the L1 cache controller 120.
It is worth noting that the LFENCE does not execute out of order with respect to older loads, because the LFENCE instruction is not dispatched from the memory-ordering unit until two conditions are met. The first condition is that the corresponding load buffer entry is pointed to by the reorder buffer retirement pointer. The second condition is that the corresponding load buffer entry is also pointed to by the load buffer tail pointer. The retirement pointer indicates all older instructions have been retired, and the tail pointer points to the next entry to be deallocated from the load buffer. The tail pointer can also be thought of as pointing to the oldest load in the machine.
Furthermore, newer loads do not execute out of order with respect to an LFENCE instruction. This is because LFENCE uses the same implementation as for the case described earlier with senior loads retiring from the L1 cache controller. A control bit is added for each load buffer entry. Prior to a load dispatch, the value of this control bit is checked for each entry between the one pointed to by the tail pointer and the one for which a memory dispatch is being attempted.
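The two dispatch conditions for this embodiment reduce to a simple predicate. This sketch (names invented) only restates the conditions from the text:

```python
# Sketch of the FIG. 4 dispatch gate: the LFENCE's load-buffer entry must be
# pointed to by BOTH the reorder-buffer retirement pointer (all older
# instructions retired) and the load-buffer tail pointer (it is the oldest
# load in the machine) before it may dispatch.

def lfence_may_dispatch(lfence_entry, retirement_ptr, tail_ptr):
    return lfence_entry == retirement_ptr and lfence_entry == tail_ptr
```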
Similarly, an MFENCE instruction can be implemented where senior loads retire from the memory-ordering unit 110. In this embodiment, an MFENCE does not execute out of order with respect to older memory instructions, nor do any younger memory instructions execute out of order with respect to the MFENCE. In such an embodiment, an additional micro-operation is required to implement the MFENCE. In the embodiment described earlier for supporting MFENCE with senior loads retiring from the L1 cache controller, the MFENCE could be implemented as a set of two micro-operations on the store port: "store_data" (the data is ignored) and "store_address_mfence". In the current embodiment, three micro-operations are needed to implement MFENCE and support senior loads retiring from the memory-ordering unit: an "lfence" micro-operation, a "store_data" micro-operation, and a "store_address_mfence" micro-operation. The first micro-operation can be the same as the LFENCE embodiment described to support senior loads retiring from the memory-ordering unit 110. The last two micro-operations are the same as those used to implement MFENCE and support senior loads retiring from the L1 cache controller 120. The micro-operations are "all blocking" micro-operations dispatched from the memory ordering unit 110 on the store port.
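The two decompositions can be written as a lookup. The micro-operation names follow the text; the function itself is an illustrative sketch, not the decoder's actual output format:

```python
# Sketch of the two MFENCE decompositions described above, selected by
# where senior loads retire in the chosen hardware implementation.

def mfence_uops(senior_loads_retire_from):
    if senior_loads_retire_from == "l1_cache_controller":
        # Two micro-operations on the store port.
        return ["store_data", "store_address_mfence"]
    if senior_loads_retire_from == "memory_ordering_unit":
        # An extra LFENCE micro-operation precedes the store-port pair.
        return ["lfence", "store_data", "store_address_mfence"]
    raise ValueError(senior_loads_retire_from)
```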
As shown in FIG. 5, the instruction fetch unit 102 fetches an MFENCE macroinstruction, block 502. The instruction is decoded by the instruction decoder unit 104 into its constituent microinstruction operations, block 504. In block 506, an entry is allocated into the reservation station 106. A load buffer 112 and store buffer 114 entries are allocated in the memory ordering unit 110, block 508. The load dispatches that follow (in program order) the LFENCE instruction are stalled and then the MFENCE micro-operation is performed, block 510. The process moves to block 512, when the LFENCE stalls the dispatch of the MFENCE micro-operation. In block 514, the LFENCE is ready to dispatch.
If not all older loads in program order are retired from the memory subsystem, and the load buffer 112 tail pointer points to the LFENCE instruction, as determined by decision block 516, the LFENCE is dispatched and older loads are retired in block 518, then the flow returns to block 510.
Conversely, the “at-retirement” loads are dispatched from the memory ordering unit 110 when all older loads have been retired from the memory subsystem and the load buffer 112 tail pointer points to the LFENCE instruction, as determined by decision block 516. Therefore, with this hardware embodiment for senior loads, “at-retirement” loads dispatch from the L1 cache controller on the load port, block 520. Flow continues to decision block 522.
In decision block 522, it is determined whether any outstanding read buffers 122, in the L1 cache controller 120, are globally observed. If not all the read buffers 122, are globally observed, flow moves to block 524. At block 524, the L1 cache controller the L1 cache controller 120 blocks or aborts the LFENCE instruction.
If all the read buffers 122, are globally observed, flow moves to block 526.
At block 526, the L1 cache controller 120 treats the LFENCE instruction as a non-operation (NOP), and the LFENCE is retired from the L1 cache controller 120. Flow continues at block 528.
All instruction dispatches following the MFENCE, in program order, are stalled, block 528.
The process moves to block 530, when the MFENCE is ready to dispatch.
If not all older memory access instructions in program order are retired from the memory subsystem, as determined by decision block 532, the MFENCE is dispatched and older memory access instructions are retired in block 534, then the flow returns to block 528.
Decision block 532 determines whether all older instructions have been retired from the memory subsystem before “at-retirement” instructions are dispatched from the memory ordering unit 110. Therefore, with this hardware embodiment for senior memory instructions, “at-retirement” instructions dispatch from the memory-ordering unit 110 in program order with respect to other instructions, block 536. Flow continues to decision block 538.
In decision block 538, it is determined whether any outstanding read buffers 122 or write buffers 130, in the L1 cache controller 120, are globally observed. If not all the buffers 122, 130 are globally observed, flow moves to block 540.
At decision block 540, it is determined whether any write combining buffers 132 in the L1 cache controller 120 are in the eviction process. If write combining buffers 132 are in the eviction process, the L1 cache controller 120 blocks or aborts the MFENCE instruction in block 544, and then flow returns to block 528. If there are no write combining buffers 132 in the eviction process, all outstanding write combining buffers 132 are evicted, block 542, and flow moves to block 544.
Returning to decision block 538, if all outstanding read buffers 122 or write buffers 130 are already globally observed, flow ends in block 546, when the MFENCE is deallocated from the store buffer 114 in the memory ordering unit 110. The L1 cache controller 120 treats the MFENCE instruction as a non-operation (NOP), and the MFENCE is retired from the L1 cache controller 120.
Regardless of the implementation, LFENCE is always dispatched from the memory-ordering unit 110 to the rest of the memory subsystem once it is guaranteed to be the oldest load in the machine.
Upon its dispatch from the memory-ordering unit 110, the LFENCE instruction is blocked by the L1 cache controller 120 if there are read buffers 122 not yet globally observed. The memory ordering unit 110 keeps redispatching the LFENCE until all read buffers 122 in the L1 cache controller 120 are globally observed. Once the L1 cache controller 120 accepts the incoming LFENCE, it is retired from the memory subsystem, and it is treated as a non-operation. Consequently, the instruction is never allocated a buffer, nor are any cache hit/miss checks performed.
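The load-ordering guarantee described above, that no newer load becomes globally visible before the older loads preceding the fence, has a portable software analogue in C11. The sketch below is illustrative only: it uses `atomic_thread_fence(memory_order_acquire)` to express the same load/load ordering intent, and the `ready`/`payload` names are hypothetical.

```c
#include <stdatomic.h>

/* Illustrative C11 analogue of LFENCE's load/load ordering: loads after
 * the fence must not be satisfied before loads issued before it. */
static _Atomic int ready = 0;
static int payload = 0;

int consume(void)
{
    /* Older load: observe the flag first. */
    if (atomic_load_explicit(&ready, memory_order_relaxed)) {
        /* Fence: preceding loads are ordered before the ones below. */
        atomic_thread_fence(memory_order_acquire);
        return payload; /* newer load, ordered after the fence */
    }
    return -1;
}
```

On x86 an acquire fence typically needs no explicit instruction, precisely because loads are already not reordered with older loads; LFENCE makes that ordering architecturally explicit.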
Upon its dispatch from the memory-ordering unit 110, MFENCE is blocked by the L1 cache controller 120 if there are any outstanding operations in the L1 cache controller 120 not yet globally observed. If blocked, the MFENCE instruction evicts any outstanding write combining buffers 132. Once the L1 cache controller 120 accepts the incoming MFENCE instruction, it is treated as a non-operation and is retired from the memory subsystem. Note that the L1 cache controller 120 accepts the incoming MFENCE instruction only when all L1 cache controller buffers are globally observed. Just like LFENCE, MFENCE is never allocated a buffer, nor are any cache hit/miss checks performed.
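The full-fence behavior is what software needs in the classic store-then-load case, where a younger load must not be globally observed before an older store. A hedged, portable C11 sketch (hypothetical `flag_a`/`flag_b` names; `atomic_thread_fence(memory_order_seq_cst)` is the construct compilers commonly lower to MFENCE on x86):

```c
#include <stdatomic.h>

/* Illustrative Dekker-style fragment: a full fence between a store and
 * a later load. Without it, x86's store buffer could let the load below
 * be globally observed before the store above. */
static _Atomic int flag_a = 0, flag_b = 0;

int try_enter_a(void)
{
    atomic_store_explicit(&flag_a, 1, memory_order_relaxed); /* older store */
    atomic_thread_fence(memory_order_seq_cst);               /* full fence  */
    return atomic_load_explicit(&flag_b, memory_order_relaxed) == 0;
}
```

A symmetric `try_enter_b` on another thread would store `flag_b` and load `flag_a`; the fences guarantee both threads cannot simultaneously read 0 and enter.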
For testability and debug purposes, two non-user-visible mode bits can be added to enable or disable the MFENCE and LFENCE macroinstructions. If disabled, the L1 cache controller unit 120 treats the incoming MFENCE and LFENCE micro-operations as non-operations and does not check for global observation of older instructions. Thus, MFENCE and LFENCE are not blocked even if outstanding buffers in the L1 cache controller 120 are not yet globally observed.
In alternate embodiments, the hardware implementation of LFENCE can be mapped to that of MFENCE. The corresponding MFENCE micro-operations can be used for both macroinstructions. This embodiment would still satisfy the architectural requirements of LFENCE, since the MFENCE behavior is more restrictive.
The previous description of the embodiments is provided to enable any person skilled in the art to make or use the system and method. It is well understood by those in the art that the preceding embodiments may be implemented using hardware, firmware, or instructions encoded on a computer-readable medium. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without the use of inventive faculty. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (19)

What is claimed is:
1. A processor comprising:
an instruction prefetch unit to prefetch a cache line of data responsive to a PREFETCH instruction and to store the cache line in a data cache;
an instruction fetch unit to fetch a memory load fence (LFENCE) instruction that does not use a mask field thereof, a memory fence (MFENCE) instruction that does not use a mask field thereof, and a cache line flush (CLFLUSH) instruction;
a first memory ordering portion of the processor responsive to the LFENCE instruction to prevent newer memory load instructions occurring after the LFENCE instruction in program order from being globally visible before older memory load instructions occurring before the LFENCE instruction in the program order are globally visible without causing the processor to stall dispatch of a newer memory store instruction occurring after the LFENCE instruction in the program order; and
a second memory ordering portion of the processor responsive to the MFENCE instruction to prevent a CLFLUSH instruction which follows the MFENCE instruction in program order from being globally visible until a PREFETCH instruction preceding the MFENCE instruction in program order has become globally visible.
2. The processor of claim 1, wherein the LFENCE instruction and MFENCE instructions are treated as non-operations (NOPs) by the processor after being dispatched once the older memory load instructions are globally visible, and wherein the older memory load instructions are globally visible but not necessarily completed.
3. The processor of claim 1, wherein the LFENCE instruction and MFENCE instructions comprise macroinstructions.
4. The processor of claim 1, wherein the MFENCE instruction is to cause the processor to ensure that all older memory load instructions and all older memory store instructions, which are each older than the MFENCE instruction in the program order, are globally visible, before all newer memory load instructions and all newer memory store instructions, which are each newer than the MFENCE instruction in the program order, are globally visible.
5. The processor of claim 1, implemented in a computer system also including a graphics processor.
6. The processor of claim 1, wherein the LFENCE instruction does not use a data field thereof.
7. The processor of claim 1, wherein the MFENCE instruction does not use a data field thereof.
8. A processor comprising:
an instruction prefetch unit to prefetch a cache line of data responsive to a PREFETCH instruction and to store the cache line in a data cache; and
a decoder to decode instructions including a memory fence (MFENCE) instruction wherein the MFENCE instruction does not use a mask field thereof and a cache line flush (CLFLUSH) instruction, the MFENCE instruction to cause the processor to ensure that all older memory load instructions and all older memory store instructions, which are each older than the MFENCE instruction in program order, are globally visible, before all newer memory load instructions and all newer memory store instructions, which are each newer than the MFENCE instruction in the program order, are globally visible, the MFENCE instruction further to prevent a CLFLUSH instruction which follows the MFENCE instruction in program order from becoming globally visible until a PREFETCH instruction preceding the MFENCE instruction in program order has become globally visible.
9. The processor of claim 8, wherein the MFENCE instruction does not use a mask field thereof, and wherein the older memory load instructions are globally visible but not necessarily completed.
10. The processor of claim 8, wherein the MFENCE instruction is treated as a non-operation (NOP) by the processor after being dispatched after all of the older memory load instructions and memory store instructions are globally visible.
11. The processor of claim 8, wherein the MFENCE instruction is also to cause the processor to ensure that an older CLFLUSH instruction that is older than the MFENCE instruction in the program order is globally visible before a newer CLFLUSH instruction that is newer than the MFENCE instruction in the program order is globally visible.
12. The processor of claim 8, wherein the MFENCE instruction comprises a macroinstruction.
13. The processor of claim 8, implemented in a computer system also including a graphics processor.
14. The processor of claim 8, wherein the MFENCE instruction does not use a data field thereof.
15. A processor comprising:
instruction prefetch circuitry to prefetch a cache line of data responsive to a PREFETCH instruction and to store the cache line in a data cache;
an instruction fetch unit to fetch a load fence (LFENCE) instruction that does not use a mask field thereof and a memory fence (MFENCE) instruction that does not use a mask field thereof; and
a decoder to decode the LFENCE instruction and to decode the MFENCE instruction,
wherein a portion of the processor is responsive to the LFENCE instruction to prevent newer memory load instructions occurring after the LFENCE instruction in program order from being globally visible before older memory load instructions occurring before the LFENCE instruction in the program order are globally visible, without causing the processor to stall dispatch of a newer memory store instruction occurring after the LFENCE instruction in the program order, and
the portion of the processor responsive to the MFENCE instruction is to ensure that all older memory load instructions and all older memory store instructions, which are each older than the MFENCE instruction in the program order, are globally visible, before all newer memory load instructions and all newer memory store instructions, which are each newer than the MFENCE instruction in the program order, are globally visible, the portion of the processor responsive to the MFENCE instruction further to prevent a CLFLUSH instruction which follows the MFENCE instruction in program order from becoming globally visible until a PREFETCH instruction preceding the MFENCE instruction in program order has become globally visible.
16. The processor of claim 15, wherein the LFENCE instruction and the MFENCE instruction are each treated as a non-operation (NOP) after being dispatched.
17. The processor of claim 15, wherein the MFENCE instruction is to guarantee strong ordering with respect to a cache line flush instruction but the LFENCE instruction is not to guarantee strong ordering with respect to the cache line flush instruction.
18. The processor of claim 15, wherein the LFENCE instruction does not use a data field thereof.
19. The processor of claim 15, wherein the MFENCE instruction does not use a data field thereof.
US13/838,229 1999-12-30 2013-03-15 MFENCE and LFENCE micro-architectural implementation method and system Expired - Fee Related US9342310B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/838,229 US9342310B2 (en) 1999-12-30 2013-03-15 MFENCE and LFENCE micro-architectural implementation method and system
US13/942,660 US8959314B2 (en) 1999-12-30 2013-07-15 MFENCE and LFENCE micro-architectural implementation method and system

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US09/475,363 US6678810B1 (en) 1999-12-30 1999-12-30 MFENCE and LFENCE micro-architectural implementation method and system
US10/194,531 US6651151B2 (en) 1999-12-30 2002-07-12 MFENCE and LFENCE micro-architectural implementation method and system
US10/654,573 US8171261B2 (en) 1999-12-30 2003-09-02 Method and system for accessing memory in parallel computing using load fencing instructions
US13/440,096 US9383998B2 (en) 1999-12-30 2012-04-05 MFENCE and LFENCE micro-architectural implementation method and system
US13/838,229 US9342310B2 (en) 1999-12-30 2013-03-15 MFENCE and LFENCE micro-architectural implementation method and system

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US13/440,096 Continuation US9383998B2 (en) 1999-12-30 2012-04-05 MFENCE and LFENCE micro-architectural implementation method and system
US13/619,919 Continuation US9612835B2 (en) 1999-12-30 2012-09-14 MFENCE and LFENCE micro-architectural implementation method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/942,660 Continuation US8959314B2 (en) 1999-12-30 2013-07-15 MFENCE and LFENCE micro-architectural implementation method and system

Publications (2)

Publication Number Publication Date
US20130205117A1 US20130205117A1 (en) 2013-08-08
US9342310B2 true US9342310B2 (en) 2016-05-17

Family

ID=23887250

Family Applications (9)

Application Number Title Priority Date Filing Date
US09/475,363 Expired - Lifetime US6678810B1 (en) 1999-12-30 1999-12-30 MFENCE and LFENCE micro-architectural implementation method and system
US10/194,531 Expired - Lifetime US6651151B2 (en) 1999-12-30 2002-07-12 MFENCE and LFENCE micro-architectural implementation method and system
US10/654,573 Expired - Fee Related US8171261B2 (en) 1999-12-30 2003-09-02 Method and system for accessing memory in parallel computing using load fencing instructions
US13/440,096 Expired - Fee Related US9383998B2 (en) 1999-12-30 2012-04-05 MFENCE and LFENCE micro-architectural implementation method and system
US13/619,832 Expired - Fee Related US9098268B2 (en) 1999-12-30 2012-09-14 MFENCE and LFENCE micro-architectural implementation method and system
US13/619,919 Expired - Fee Related US9612835B2 (en) 1999-12-30 2012-09-14 MFENCE and LFENCE micro-architectural implementation method and system
US13/838,229 Expired - Fee Related US9342310B2 (en) 1999-12-30 2013-03-15 MFENCE and LFENCE micro-architectural implementation method and system
US13/942,660 Expired - Fee Related US8959314B2 (en) 1999-12-30 2013-07-15 MFENCE and LFENCE micro-architectural implementation method and system
US15/477,177 Abandoned US20170206088A1 (en) 1999-12-30 2017-04-03 Mfence and lfence micro-architectural implementation method and system

Family Applications Before (6)

Application Number Title Priority Date Filing Date
US09/475,363 Expired - Lifetime US6678810B1 (en) 1999-12-30 1999-12-30 MFENCE and LFENCE micro-architectural implementation method and system
US10/194,531 Expired - Lifetime US6651151B2 (en) 1999-12-30 2002-07-12 MFENCE and LFENCE micro-architectural implementation method and system
US10/654,573 Expired - Fee Related US8171261B2 (en) 1999-12-30 2003-09-02 Method and system for accessing memory in parallel computing using load fencing instructions
US13/440,096 Expired - Fee Related US9383998B2 (en) 1999-12-30 2012-04-05 MFENCE and LFENCE micro-architectural implementation method and system
US13/619,832 Expired - Fee Related US9098268B2 (en) 1999-12-30 2012-09-14 MFENCE and LFENCE micro-architectural implementation method and system
US13/619,919 Expired - Fee Related US9612835B2 (en) 1999-12-30 2012-09-14 MFENCE and LFENCE micro-architectural implementation method and system

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/942,660 Expired - Fee Related US8959314B2 (en) 1999-12-30 2013-07-15 MFENCE and LFENCE micro-architectural implementation method and system
US15/477,177 Abandoned US20170206088A1 (en) 1999-12-30 2017-04-03 Mfence and lfence micro-architectural implementation method and system

Country Status (2)

Country Link
US (9) US6678810B1 (en)
TW (1) TW493123B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11392380B2 (en) * 2019-12-28 2022-07-19 Intel Corporation Apparatuses, methods, and systems to precisely monitor memory store accesses
US11782718B2 (en) * 2013-07-15 2023-10-10 Texas Instruments Incorporated Implied fence on stream open

Families Citing this family (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6678810B1 (en) 1999-12-30 2004-01-13 Intel Corporation MFENCE and LFENCE micro-architectural implementation method and system
US6862679B2 (en) * 2001-02-14 2005-03-01 Intel Corporation Synchronization of load operations using load fence instruction in pre-serialization/post-serialization mode
US7155588B1 (en) * 2002-08-12 2006-12-26 Cisco Technology, Inc. Memory fence with background lock release
US7080209B2 (en) * 2002-12-24 2006-07-18 Intel Corporation Method and apparatus for processing a load-lock instruction using a relaxed lock protocol
US7552317B2 (en) * 2004-05-04 2009-06-23 Sun Microsystems, Inc. Methods and systems for grouping instructions using memory barrier instructions
US7603544B2 (en) * 2004-12-23 2009-10-13 Intel Corporation Dynamic allocation of a buffer across multiple clients in multi-threaded processor without performing a complete flush of data associated with allocation
US7765366B2 (en) * 2005-06-23 2010-07-27 Intel Corporation Memory micro-tiling
US8332598B2 (en) * 2005-06-23 2012-12-11 Intel Corporation Memory micro-tiling request reordering
US7587521B2 (en) * 2005-06-23 2009-09-08 Intel Corporation Mechanism for assembling memory access requests while speculatively returning data
US8253751B2 (en) 2005-06-30 2012-08-28 Intel Corporation Memory controller interface for micro-tiled memory access
US7558941B2 (en) * 2005-06-30 2009-07-07 Intel Corporation Automatic detection of micro-tile enabled memory
US8289324B1 (en) 2007-12-17 2012-10-16 Nvidia Corporation System, method, and computer program product for spatial hierarchy traversal
US8502819B1 (en) 2007-12-17 2013-08-06 Nvidia Corporation System and method for performing ray tracing node traversal in image rendering
WO2009119021A1 (en) * 2008-03-28 2009-10-01 パナソニック株式会社 Instruction execution control method, instruction format, and processor
US8086826B2 (en) * 2009-03-24 2011-12-27 International Business Machines Corporation Dependency tracking for enabling successive processor instructions to issue
US8555036B1 (en) 2010-05-17 2013-10-08 Nvidia Corporation System and method for performing predicated selection of an output register
US8564589B1 (en) 2010-05-17 2013-10-22 Nvidia Corporation System and method for accelerated ray-box intersection testing
US9052890B2 (en) 2010-09-25 2015-06-09 Intel Corporation Execute at commit state update instructions, apparatus, methods, and systems
WO2013101213A1 (en) * 2011-12-30 2013-07-04 Intel Corporation Method and apparatus for cutting senior store latency using store prefetching
US20150317158A1 (en) * 2014-04-03 2015-11-05 Applied Micro Circuits Corporation Implementation of load acquire/store release instructions using load/store operation with dmb operation
US10489158B2 (en) 2014-09-26 2019-11-26 Intel Corporation Processors, methods, systems, and instructions to selectively fence only persistent storage of given data relative to subsequent stores
US10089112B2 (en) 2014-12-14 2018-10-02 Via Alliance Semiconductor Co., Ltd Mechanism to preclude load replays dependent on fuse array access in an out-of-order processor
US10083038B2 (en) 2014-12-14 2018-09-25 Via Alliance Semiconductor Co., Ltd Mechanism to preclude load replays dependent on page walks in an out-of-order processor
US10120689B2 (en) 2014-12-14 2018-11-06 Via Alliance Semiconductor Co., Ltd Mechanism to preclude load replays dependent on off-die control element access in an out-of-order processor
US10108421B2 (en) 2014-12-14 2018-10-23 Via Alliance Semiconductor Co., Ltd Mechanism to preclude shared ram-dependent load replays in an out-of-order processor
WO2016097790A1 (en) 2014-12-14 2016-06-23 Via Alliance Semiconductor Co., Ltd. Apparatus and method to preclude non-core cache-dependent load replays in out-of-order processor
KR101820221B1 (en) 2014-12-14 2018-02-28 비아 얼라이언스 세미컨덕터 씨오., 엘티디. Programmable load replay precluding mechanism
US10108420B2 (en) 2014-12-14 2018-10-23 Via Alliance Semiconductor Co., Ltd Mechanism to preclude load replays dependent on long load cycles in an out-of-order processor
EP3049956B1 (en) 2014-12-14 2018-10-10 VIA Alliance Semiconductor Co., Ltd. Mechanism to preclude i/o-dependent load replays in out-of-order processor
US10114646B2 (en) 2014-12-14 2018-10-30 Via Alliance Semiconductor Co., Ltd Programmable load replay precluding mechanism
US9703359B2 (en) 2014-12-14 2017-07-11 Via Alliance Semiconductor Co., Ltd. Power saving mechanism to reduce load replays in out-of-order processor
US10146539B2 (en) 2014-12-14 2018-12-04 Via Alliance Semiconductor Co., Ltd. Load replay precluding mechanism
US10088881B2 (en) 2014-12-14 2018-10-02 Via Alliance Semiconductor Co., Ltd Mechanism to preclude I/O-dependent load replays in an out-of-order processor
KR101819314B1 (en) 2014-12-14 2018-01-16 비아 얼라이언스 세미컨덕터 씨오., 엘티디. Mechanism to preclude load replays dependent on off-die control element access in an out-of-order processor
US10175984B2 (en) 2014-12-14 2019-01-08 Via Alliance Semiconductor Co., Ltd Apparatus and method to preclude non-core cache-dependent load replays in an out-of-order processor
EP3055769B1 (en) 2014-12-14 2018-10-31 VIA Alliance Semiconductor Co., Ltd. Mechanism to preclude load replays dependent on page walks in out-of-order processor
US10146540B2 (en) 2014-12-14 2018-12-04 Via Alliance Semiconductor Co., Ltd Apparatus and method to preclude load replays dependent on write combining memory space access in an out-of-order processor
US10127046B2 (en) 2014-12-14 2018-11-13 Via Alliance Semiconductor Co., Ltd. Mechanism to preclude uncacheable-dependent load replays in out-of-order processor
JP6286067B2 (en) * 2014-12-14 2018-02-28 ヴィア アライアンス セミコンダクター カンパニー リミテッド Mechanism to exclude load replays that depend on long load cycles in out-of-order processors
WO2016097814A1 (en) 2014-12-14 2016-06-23 Via Alliance Semiconductor Co., Ltd. Mechanism to preclude shared ram-dependent load replays in out-of-order processor
KR101819315B1 (en) 2014-12-14 2018-01-16 비아 얼라이언스 세미컨덕터 씨오., 엘티디. Apparatus and method to preclude load replays dependent on write combining memory space access in an out-of-order processor
US10146546B2 (en) 2014-12-14 2018-12-04 Via Alliance Semiconductor Co., Ltd Load replay precluding mechanism
WO2016097815A1 (en) 2014-12-14 2016-06-23 Via Alliance Semiconductor Co., Ltd. Apparatus and method to preclude x86 special bus cycle load replays in out-of-order processor
US9804845B2 (en) 2014-12-14 2017-10-31 Via Alliance Semiconductor Co., Ltd. Apparatus and method to preclude X86 special bus cycle load replays in an out-of-order processor
WO2016097811A1 (en) 2014-12-14 2016-06-23 Via Alliance Semiconductor Co., Ltd. Mechanism to preclude load replays dependent on fuse array access in out-of-order processor
KR101819316B1 (en) 2014-12-14 2018-01-16 비아 얼라이언스 세미컨덕터 씨오., 엘티디. Mechanism to preclude uncacheable­dependent load replays in out­of­order processor
JP6286065B2 (en) 2014-12-14 2018-02-28 ヴィア アライアンス セミコンダクター カンパニー リミテッド Apparatus and method for excluding load replay depending on write-coupled memory area access of out-of-order processor
US10228944B2 (en) 2014-12-14 2019-03-12 Via Alliance Semiconductor Co., Ltd. Apparatus and method for programmable load replay preclusion
US9971686B2 (en) * 2015-02-23 2018-05-15 Intel Corporation Vector cache line write back processors, methods, systems, and instructions
US10095637B2 (en) * 2016-09-15 2018-10-09 Advanced Micro Devices, Inc. Speculative retirement of post-lock instructions
US11080202B2 (en) * 2017-09-30 2021-08-03 Intel Corporation Lazy increment for high frequency counters
US11175916B2 (en) * 2017-12-19 2021-11-16 Advanced Micro Devices, Inc. System and method for a lightweight fencing operation
US10866805B2 (en) * 2018-01-03 2020-12-15 Arm Limited Speculation barrier instruction
JP7064134B2 (en) * 2018-05-11 2022-05-10 富士通株式会社 Arithmetic processing device and control method of arithmetic processing device
US10713057B2 (en) * 2018-08-23 2020-07-14 International Business Machines Corporation Mechanism to stop completions using stop codes in an instruction completion table
US11334485B2 (en) * 2018-12-14 2022-05-17 Eta Scale Ab System and method for dynamic enforcement of store atomicity
US10754782B1 (en) * 2019-03-30 2020-08-25 Intel Corporation Apparatuses, methods, and systems to accelerate store processing
US11720360B2 (en) * 2020-09-11 2023-08-08 Apple Inc. DSB operation with excluded region

Citations (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4775955A (en) 1985-10-30 1988-10-04 International Business Machines Corporation Cache coherence mechanism based on locking
US5193167A (en) * 1990-06-29 1993-03-09 Digital Equipment Corporation Ensuring data integrity by locked-load and conditional-store operations in a multiprocessor system
US5265233A (en) 1991-05-17 1993-11-23 Sun Microsystems, Inc. Method and apparatus for providing total and partial store ordering for a memory in multi-processor system
WO1997006608A1 (en) 1995-08-04 1997-02-20 Motorola Inc. Commonly coupled high frequency transmitting/receiving switching module
US5636374A (en) 1994-01-04 1997-06-03 Intel Corporation Method and apparatus for performing operations based upon the addresses of microinstructions
US5675724A (en) 1991-05-03 1997-10-07 Storage Technology Corporation Knowledge based resource management
US5694574A (en) 1994-01-04 1997-12-02 Intel Corporation Method and apparatus for performing load operations in a computer system
US5694553A (en) 1994-01-04 1997-12-02 Intel Corporation Method and apparatus for determining the dispatch readiness of buffered load operations in a processor
US5724536A (en) 1994-01-04 1998-03-03 Intel Corporation Method and apparatus for blocking execution of and storing load operations during their execution
US5751996A (en) 1994-09-30 1998-05-12 Intel Corporation Method and apparatus for processing memory-type information within a microprocessor
US5778245A (en) 1994-03-01 1998-07-07 Intel Corporation Method and apparatus for dynamic allocation of multiple buffers in a processor
US5790398A (en) * 1994-01-25 1998-08-04 Fujitsu Limited Data transmission control method and apparatus
US5802575A (en) 1995-02-16 1998-09-01 Sun Microsystems, Inc. Hit bit for indicating whether load buffer entries will hit a cache when they reach buffer head
US5802757A (en) 1997-04-30 1998-09-08 Smith & Wesson Corp. Firearm with releasably retained sight assembly
US5826109A (en) 1994-01-04 1998-10-20 Intel Corporation Method and apparatus for performing multiple load operations to the same memory location in a computer system
US5857210A (en) * 1997-06-26 1999-01-05 Sun Microsystems, Inc. Bounded-pause time garbage collection system and method including read and write barriers associated with an instance of a partially relocated object
US5860126A (en) 1996-12-17 1999-01-12 Intel Corporation Controlling shared memory access ordering in a multi-processing system using an acquire/release consistency model
US5898854A (en) 1994-01-04 1999-04-27 Intel Corporation Apparatus for indicating an oldest non-retired load operation in an array
US5903740A (en) 1996-07-24 1999-05-11 Advanced Micro Devices, Inc. Apparatus and method for retiring instructions in excess of the number of accessible write ports
US6006325A (en) * 1996-12-19 1999-12-21 Institute For The Development Of Emerging Architectures, L.L.C. Method and apparatus for instruction and data serialization in a computer processor
US6038646A (en) 1998-01-23 2000-03-14 Sun Microsystems, Inc. Method and apparatus for enforcing ordered execution of reads and writes across a memory interface
US6047334A (en) * 1997-06-17 2000-04-04 Intel Corporation System for delaying dequeue of commands received prior to fence command until commands received before fence command are ordered for execution in a fixed sequence
US6073210A (en) 1998-03-31 2000-06-06 Intel Corporation Synchronization of weakly ordered write combining operations using a fencing mechanism
US6088772A (en) * 1997-06-13 2000-07-11 Intel Corporation Method and apparatus for improving system performance when reordering commands
US6088771A (en) 1997-10-24 2000-07-11 Digital Equipment Corporation Mechanism for reducing latency of memory barrier operations on a multiprocessor system
US6148083A (en) 1996-08-23 2000-11-14 Hewlett-Packard Company Application certification for an international cryptography framework
US6148394A (en) 1998-02-10 2000-11-14 International Business Machines Corporation Apparatus and method for tracking out of order load instructions to avoid data coherency violations in a processor
US6216215B1 (en) 1998-04-02 2001-04-10 Intel Corporation Method and apparatus for senior loads
US6223258B1 (en) 1998-03-31 2001-04-24 Intel Corporation Method and apparatus for implementing non-temporal loads
US6233657B1 (en) 1996-03-26 2001-05-15 Advanced Micro Devices, Inc. Apparatus and method for performing speculative stores
US6266767B1 (en) 1999-04-22 2001-07-24 International Business Machines Corporation Apparatus and method for facilitating out-of-order execution of load instructions
US6286095B1 (en) 1994-04-28 2001-09-04 Hewlett-Packard Company Computer apparatus having special instructions to force ordered load and store operations
US6356270B2 (en) 1998-03-31 2002-03-12 Intel Corporation Efficient utilization of write-combining buffers
US6546462B1 (en) 1999-12-30 2003-04-08 Intel Corporation CLFLUSH micro-architectural implementation method and system
US6636950B1 (en) 1998-12-17 2003-10-21 Massachusetts Institute Of Technology Computer architecture for shared memory access
US6651151B2 (en) 1999-12-30 2003-11-18 Intel Corporation MFENCE and LFENCE micro-architectural implementation method and system
US6708269B1 (en) 1999-12-30 2004-03-16 Intel Corporation Method and apparatus for multi-mode fencing in a microprocessor system
US6754751B1 (en) 2001-03-30 2004-06-22 Intel Corporation Method and apparatus for handling ordered transactions
US6862679B2 (en) 2001-02-14 2005-03-01 Intel Corporation Synchronization of load operations using load fence instruction in pre-serialization/post-serialization mode

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064650B (en) 1995-08-31 2016-02-24 英特尔公司 Control the device of the bit correction of shift grouped data

Patent Citations (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4775955A (en) 1985-10-30 1988-10-04 International Business Machines Corporation Cache coherence mechanism based on locking
US5193167A (en) * 1990-06-29 1993-03-09 Digital Equipment Corporation Ensuring data integrity by locked-load and conditional-store operations in a multiprocessor system
US5675724A (en) 1991-05-03 1997-10-07 Storage Technology Corporation Knowledge based resource management
US5265233A (en) 1991-05-17 1993-11-23 Sun Microsystems, Inc. Method and apparatus for providing total and partial store ordering for a memory in multi-processor system
US5826109A (en) 1994-01-04 1998-10-20 Intel Corporation Method and apparatus for performing multiple load operations to the same memory location in a computer system
US5636374A (en) 1994-01-04 1997-06-03 Intel Corporation Method and apparatus for performing operations based upon the addresses of microinstructions
US5694574A (en) 1994-01-04 1997-12-02 Intel Corporation Method and apparatus for performing load operations in a computer system
US5694553A (en) 1994-01-04 1997-12-02 Intel Corporation Method and apparatus for determining the dispatch readiness of buffered load operations in a processor
US5724536A (en) 1994-01-04 1998-03-03 Intel Corporation Method and apparatus for blocking execution of and storing load operations during their execution
US5898854A (en) 1994-01-04 1999-04-27 Intel Corporation Apparatus for indicating an oldest non-retired load operation in an array
US5881262A (en) 1994-01-04 1999-03-09 Intel Corporation Method and apparatus for blocking execution of and storing load operations during their execution
US5790398A (en) * 1994-01-25 1998-08-04 Fujitsu Limited Data transmission control method and apparatus
US5778245A (en) 1994-03-01 1998-07-07 Intel Corporation Method and apparatus for dynamic allocation of multiple buffers in a processor
US6286095B1 (en) 1994-04-28 2001-09-04 Hewlett-Packard Company Computer apparatus having special instructions to force ordered load and store operations
US5751996A (en) 1994-09-30 1998-05-12 Intel Corporation Method and apparatus for processing memory-type information within a microprocessor
US5802575A (en) 1995-02-16 1998-09-01 Sun Microsystems, Inc. Hit bit for indicating whether load buffer entries will hit a cache when they reach buffer head
WO1997006608A1 (en) 1995-08-04 1997-02-20 Motorola Inc. Commonly coupled high frequency transmitting/receiving switching module
US6233657B1 (en) 1996-03-26 2001-05-15 Advanced Micro Devices, Inc. Apparatus and method for performing speculative stores
US5903740A (en) 1996-07-24 1999-05-11 Advanced Micro Devices, Inc. Apparatus and method for retiring instructions in excess of the number of accessible write ports
US6189089B1 (en) 1996-07-24 2001-02-13 Advanced Micro Devices, Inc. Apparatus and method for retiring instructions in excess of the number of accessible write ports
US6148083A (en) 1996-08-23 2000-11-14 Hewlett-Packard Company Application certification for an international cryptography framework
US5860126A (en) 1996-12-17 1999-01-12 Intel Corporation Controlling shared memory access ordering in a multi-processing system using an acquire/release consistency model
US6006325A (en) * 1996-12-19 1999-12-21 Institute For The Development Of Emerging Architectures, L.L.C. Method and apparatus for instruction and data serialization in a computer processor
US5802757A (en) 1997-04-30 1998-09-08 Smith & Wesson Corp. Firearm with releasably retained sight assembly
US6088772A (en) * 1997-06-13 2000-07-11 Intel Corporation Method and apparatus for improving system performance when reordering commands
US6047334A (en) * 1997-06-17 2000-04-04 Intel Corporation System for delaying dequeue of commands received prior to fence command until commands received before fence command are ordered for execution in a fixed sequence
US5857210A (en) * 1997-06-26 1999-01-05 Sun Microsystems, Inc. Bounded-pause time garbage collection system and method including read and write barriers associated with an instance of a partially relocated object
US6088771A (en) 1997-10-24 2000-07-11 Digital Equipment Corporation Mechanism for reducing latency of memory barrier operations on a multiprocessor system
US6038646A (en) 1998-01-23 2000-03-14 Sun Microsystems, Inc. Method and apparatus for enforcing ordered execution of reads and writes across a memory interface
US6148394A (en) 1998-02-10 2000-11-14 International Business Machines Corporation Apparatus and method for tracking out of order load instructions to avoid data coherency violations in a processor
US6223258B1 (en) 1998-03-31 2001-04-24 Intel Corporation Method and apparatus for implementing non-temporal loads
US6073210A (en) 1998-03-31 2000-06-06 Intel Corporation Synchronization of weakly ordered write combining operations using a fencing mechanism
US6356270B2 (en) 1998-03-31 2002-03-12 Intel Corporation Efficient utilization of write-combining buffers
US6216215B1 (en) 1998-04-02 2001-04-10 Intel Corporation Method and apparatus for senior loads
US6636950B1 (en) 1998-12-17 2003-10-21 Massachusetts Institute Of Technology Computer architecture for shared memory access
US6266767B1 (en) 1999-04-22 2001-07-24 International Business Machines Corporation Apparatus and method for facilitating out-of-order execution of load instructions
US6546462B1 (en) 1999-12-30 2003-04-08 Intel Corporation CLFLUSH micro-architectural implementation method and system
US6651151B2 (en) 1999-12-30 2003-11-18 Intel Corporation MFENCE and LFENCE micro-architectural implementation method and system
US6678810B1 (en) 1999-12-30 2004-01-13 Intel Corporation MFENCE and LFENCE micro-architectural implementation method and system
US6708269B1 (en) 1999-12-30 2004-03-16 Intel Corporation Method and apparatus for multi-mode fencing in a microprocessor system
US8959314B2 (en) 1999-12-30 2015-02-17 Intel Corporation MFENCE and LFENCE micro-architectural implementation method and system
US6862679B2 (en) 2001-02-14 2005-03-01 Intel Corporation Synchronization of load operations using load fence instruction in pre-serialization/post-serialization mode
US6754751B1 (en) 2001-03-30 2004-06-22 Intel Corporation Method and apparatus for handling ordered transactions

Non-Patent Citations (52)

* Cited by examiner, † Cited by third party
Title
Advanced Micro Devices, Inc., "AMD-3D Technology Manual", (Feb. 1998), pp. i-x, 1-58.
Alpha 21264 Microprocessor Hardware Reference Manual, Compaq Computer Corporation, Order No. EC-RJRZA-TE, (Jul. 1999).
Barad, Haim, et al., "Intel's Multimedia Architecture Extension", Nineteenth Convention of Electrical and Electronics Engineers in Israel, (1996), pp. 145-151.
Bernstein, et al., "Solutions and Debugging for Data Consistency in Multiprocessors with Noncoherent Caches", vol. 23, No. 1, Feb. 1995, 21 pages.
Control Data Corporation, "Control Data 6400/6500/6600 Computer Systems Reference Manual", Publication No. 60100000, (1967), 159 pages.
Convex Computer Corporation, "C4/XA Architecture Overview", Convex Technical Marketing, (Feb. 1994), 279 pages.
Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Apr. 20, 2006, 12 Pages.
Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Jul. 25, 2007, 11 Pages.
Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Jun. 24, 2008, 14 Pages.
Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Jun. 30, 2011, 23 Pages.
Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Jun. 7, 2010, 21 Pages.
Final Office Action received for U.S. Appl. No. 13/440,096, mailed on Apr. 22, 2013, 27 Pages.
Final Office Action received for U.S. Appl. No. 13/440,096, mailed on Jun. 12, 2015, 9 Pages.
Final Office Action received for U.S. Appl. No. 13/619,832, mailed on Jun. 13, 2014, 35 Pages.
Final Office Action received for U.S. Appl. No. 13/619,919, mailed on Jun. 12, 2015, 19 Pages.
Final Office Action received for U.S. Appl. No. 13/619,919, mailed on Nov. 25, 2013, 23 Pages.
Goodman, "Cache Consistency and Sequential Consistency", (1989), pp. 1-4.
Intel Corporation, "Pentium Processor User's Manual", vol. 3: Architecture and Programming Manual (1993), Ch. 1, 3-4, 6, 8 & 18.
Intel Corporation, "i860 Microprocessor Family Programmer's Reference Manual", (1992), Ch. 1, 3, 8 & 12.
Intel Corporation, "Intel 80386 Programmer's Reference Manual", (1986), 421 pages.
Kohn, L., et al., "The Visual Instruction Set (VIS) in UltraSPARC", SPARC Technology Business-Sun Microsystems, Inc., (1995), pp. 462-469.
Lawrence Livermore Laboratory, "S-1 Uniprocessor Architecture", Apr. 21, 1983, 386 pages.
Lawrence Livermore Laboratory, "vol. I: Architecture-The 1979 Annual Report-The S-1 Project", (1979), 443 pages.
Lawrence Livermore Laboratory, "vol. II: Hardware-The 1979 Annual Report-The S-1 Project", (1979), 366 pages.
Motorola, Inc., "MC88110 Second Generation RISC Microprocessor User's Manual", MC8110UM/AD, (1991), 619 pages.
Non-Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Dec. 21, 2010, 29 Pages.
Non-Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Dec. 29, 2005, 11 Pages.
Non-Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Feb. 9, 2007, 11 Pages.
Non-Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Jan. 23, 2008, 12 Pages.
Non-Final Office Action received for U.S. Appl. No. 10/654,573, mailed on May 11, 2009, 14 Pages.
Non-Final Office Action received for U.S. Appl. No. 10/654,573, mailed on Nov. 25, 2009, 30 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/440,096, mailed on Jun. 5, 2012, 13 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/440,096, mailed on Mar. 25, 2014, 16 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/440,096, mailed on Nov. 4, 2014, 25 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/619,832, mailed on Mar. 28, 2013, 10 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/619,832, mailed on Oct. 1, 2013, 23 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/619,919, mailed on Mar. 27, 2013, 10 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/619,919, mailed on Oct. 1, 2014, 20 Pages.
Non-Final Office Action received for U.S. Appl. No. 13/942,660, mailed on Sep. 30, 2013, 26 Pages.
Notice of Allowance received for U.S. Appl. No. 10/654,573, mailed on Dec. 27, 2011, 13 Pages.
Notice of Allowance received for U.S. Appl. No. 13/619,832, mailed on Mar. 30, 2015, 8 Pages.
Notice of Allowance received for U.S. Appl. No. 13/942,660, mailed on Oct. 7, 2014, 9 Pages.
Philips Electronics, "TriMedia TM1000 Preliminary Data Book", (1997), 496 pages.
Samsung Electronics, "21164 Alpha Microprocessor Data Sheet", (1997), 121 pages.
Shipnes, J., "Graphics Processing with the 88110 RISC Microprocessor", IEEE, (1992), pp. 169-174.
Sun Microsystems, Inc., "VIS Visual Instruction Set User's Manual", Part #805-1394-01, (Jul. 1997), pp. i-xii, pp. 1-136.
Sun Microsystems, Inc., "Visual Instruction Set (VIS) User's Guide", Version 1.1, (Mar. 1997), pp. i-xii, pp. 1-127.
Texas Instruments, "TMS320C80 (MVP) Master Processor User's Guide", (1995), 595 pages.
Texas Instruments, "TMS320C80 (MVP) Parallel Processor User's Guide", (1995), 705 pages.
Texas Instruments, "TMS320C2X User's Guide", (1993), pp. 3:2-3:11; 3:28-3:34; 4:1-4:22; 4:41; 4:103; 4:119-4:120; 4:122; 4:150-4:151.
Thakkar, Shreekant, "The Internet Streaming SIMD Extensions", Intel Technology Journal, Retrieved on Nov. 11, 1999, Legal Information © 1999, pp. 1-8, from http://support.intel.com/technology/iti/q21999/articles/art-1a.htm.
Weaver et al. (The SPARC Architecture Manual: Version 9, Published 1994, pp. 1-369). *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11782718B2 (en) * 2013-07-15 2023-10-10 Texas Instruments Incorporated Implied fence on stream open
US11392380B2 (en) * 2019-12-28 2022-07-19 Intel Corporation Apparatuses, methods, and systems to precisely monitor memory store accesses
US11915000B2 (en) 2019-12-28 2024-02-27 Intel Corporation Apparatuses, methods, and systems to precisely monitor memory store accesses

Also Published As

Publication number Publication date
US9612835B2 (en) 2017-04-04
US6651151B2 (en) 2003-11-18
US20130205117A1 (en) 2013-08-08
US8171261B2 (en) 2012-05-01
US20130067200A1 (en) 2013-03-14
TW493123B (en) 2002-07-01
US9383998B2 (en) 2016-07-05
US20130073834A1 (en) 2013-03-21
US9098268B2 (en) 2015-08-04
US20120191951A1 (en) 2012-07-26
US8959314B2 (en) 2015-02-17
US20130305018A1 (en) 2013-11-14
US20170206088A1 (en) 2017-07-20
US6678810B1 (en) 2004-01-13
US20030084259A1 (en) 2003-05-01
US20040044883A1 (en) 2004-03-04

Similar Documents

Publication Publication Date Title
US9342310B2 (en) MFENCE and LFENCE micro-architectural implementation method and system
EP0783735B1 (en) Method and apparatus for processing memory-type information within a microprocessor
EP2674856B1 (en) Zero cycle load instruction
JP5255614B2 (en) Transaction-based shared data operations in a multiprocessor environment
US6691220B1 (en) Multiprocessor speculation mechanism via a barrier speculation flag
US6021485A (en) Forwarding store instruction result to load instruction with reduced stall or flushing by effective/real data address bytes matching
US8301849B2 (en) Transactional memory in out-of-order processors with XABORT having immediate argument
US6625660B1 (en) Multiprocessor speculation mechanism for efficiently managing multiple barrier operations
US6542984B1 (en) Scheduler capable of issuing and reissuing dependency chains
US6035393A (en) Stalling predicted prefetch to memory location identified as uncacheable using dummy stall instruction until branch speculation resolution
US6609192B1 (en) System and method for asynchronously overlapping storage barrier operations with old and new storage operations
US7689813B2 (en) Method and apparatus for enforcing membar instruction semantics in an execute-ahead processor
US6606702B1 (en) Multiprocessor speculation mechanism with imprecise recycling of storage operations
US7584346B1 (en) Method and apparatus for supporting different modes of multi-threaded speculative execution
US20090164758A1 (en) System and Method for Performing Locked Operations
US6981128B2 (en) Atomic quad word storage in a simultaneous multithreaded system
US6915395B1 (en) Active address content addressable memory
US7634641B2 (en) Method and apparatus for using multiple threads to speculatively execute instructions
US20080189531A1 (en) Preventing register data flow hazards in an SST processor
US6473850B1 (en) System and method for handling instructions occurring after an ISYNC instruction

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20200517