WO2004075044A2 - Method and apparatus for selective monitoring of store instructions during speculative thread execution - Google Patents

Method and apparatus for selective monitoring of store instructions during speculative thread execution

Info

Publication number
WO2004075044A2
Authority
WO
WIPO (PCT)
Prior art keywords
store
monitored
transactional execution
instruction
cache line
Prior art date
Application number
PCT/US2004/003027
Other languages
French (fr)
Other versions
WO2004075044A3 (en)
Inventor
Marc Tremblay
Quinn A. Jacobson
Shailender Chaudhry
Original Assignee
Sun Microsystems Inc.
Priority date
Filing date
Publication date
Application filed by Sun Microsystems Inc. filed Critical Sun Microsystems Inc.
Publication of WO2004075044A2 publication Critical patent/WO2004075044A2/en
Publication of WO2004075044A3 publication Critical patent/WO2004075044A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/3004Arrangements for executing specific machine instructions to perform operations on memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30076Arrangements for executing specific machine instructions to perform miscellaneous control operations, e.g. NOP
    • G06F9/30087Synchronisation or serialisation instructions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3824Operand accessing
    • G06F9/3834Maintaining memory consistency
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3854Instruction completion, e.g. retiring, committing or graduating
    • G06F9/3858Result writeback, i.e. updating the architectural state or memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/466Transaction processing
    • G06F9/467Transactional memory

Abstract

One embodiment of the present invention provides a system that selectively monitors store instructions to support transactional execution of a process, wherein changes made during the transactional execution are not committed to the architectural state of a processor until the transactional execution successfully completes. Upon encountering a store instruction during transactional execution of a block of instructions, the system determines whether the store instruction is a monitored store instruction or an unmonitored store instruction. If the store instruction is a monitored store instruction, the system performs the store operation, and store-marks a cache line associated with the store instruction to facilitate subsequent detection of an interfering data access to the cache line from another process. If the store instruction is an unmonitored store instruction, the system performs the store operation without store-marking the cache line.

Description

SELECTIVELY MONITORING STORES TO SUPPORT TRANSACTIONAL PROGRAM EXECUTION
Inventors: Marc Tremblay, Quinn A. Jacobson and Shailender Chaudhry
BACKGROUND
Field of the Invention
[0001] The present invention relates to techniques for improving the performance of computer systems. More specifically, the present invention relates to a method and an apparatus for selectively monitoring store instructions to support transactional program execution.
Related Art
[0002] Computer system designers are presently developing mechanisms to support multi-threading within the latest generation of Chip-Multiprocessors (CMPs) as well as more traditional Shared Memory Multiprocessors (SMPs). With proper hardware support, multi-threading can dramatically increase the performance of numerous applications. However, as microprocessor performance continues to increase, the time spent synchronizing between threads (processes) is becoming a large fraction of overall execution time. In fact, as multi-threaded applications begin to use even more threads, this synchronization overhead becomes the dominant factor in limiting application performance.
[0003] From a programmer's perspective, synchronization is generally accomplished through the use of locks. A lock is typically acquired before a thread enters a critical section of code, and is released after the thread exits the critical section. If another thread wants to enter the same critical section, it must acquire the same lock. If it is unable to acquire the lock, because a preceding thread has grabbed the lock, the thread must wait until the preceding thread releases the lock. (Note that a lock can be implemented in a number of ways, such as through atomic operations or semaphores.)
[0004] Unfortunately, the process of acquiring a lock and the process of releasing a lock are very time-consuming in modern microprocessors. They involve atomic operations, which typically flush the load buffer and store buffer, and can consequently require hundreds, if not thousands, of processor cycles to complete.
[0005] Moreover, as multi-threaded applications use more threads, more locks are required. For example, if multiple threads need to access a shared data structure, it is impractical for performance reasons to use a single lock for the entire data structure. Instead, it is preferable to use multiple fine-grained locks to lock small portions of the data structure. This allows multiple threads to operate on different portions of the data structure in parallel. However, it also requires a single thread to acquire and release multiple locks in order to access different portions of the data structure.
[0006] In some cases, locks are used when they are not required. For example, many applications make use of "thread-safe" library routines that use locks to ensure that they are "thread-safe" for multi-threaded applications. Unfortunately, the overhead involved in acquiring and releasing these locks is still incurred, even when the thread-safe library routines are called by a single-threaded application.
[0007] Applications typically use locks to ensure mutual exclusion within critical sections of code. However, in many cases threads will not interfere with each other, even if they are allowed to execute a critical section simultaneously. In these cases, mutual exclusion is used to prevent the unlikely case in which threads actually interfere with each other. Consequently, in these cases, the overhead involved in acquiring and releasing locks is largely wasted.
[0008] Hence, what is needed is a method and an apparatus that reduces the overhead involved in manipulating locks when accessing critical sections of code.
[0009] One technique to reduce the overhead involved in manipulating locks is to "transactionally" execute a critical section, wherein changes made during the transactional execution are not committed to the architectural state of the processor until the transactional execution successfully completes. This technique is described in related U.S. Patent Application No. 10/439,911, entitled, "Method and Apparatus for Avoiding Locks by Speculatively Executing Critical Sections," by inventors Shailender Chaudhry, Marc Tremblay and Quinn A. Jacobson, filed on 16 May 2003 (Attorney Docket No. SUN-P9322-MEG).
[0010] During transactional execution, load and store operations are modified so that they mark cache lines that are accessed during the transactional execution. This allows the computer system to determine if an interfering data access occurs during the transactional execution, in which case the transactional execution fails, and results of the transactional execution are not committed to the architectural state of the processor.
[0011] Unfortunately, problems can arise while marking cache lines. If a large number of lines are marked, false failures are likely to occur when accesses that appear to interfere with each other do not actually touch the same data items in a cache line. Furthermore, the marked cache lines cannot be easily moved out of cache until the transactional execution completes, which also causes performance problems.
[0012] Also, since store operations need to be buffered during transactional execution, transactional execution will sometimes be limited by the number of available store buffers on the processor.
[0013] Hence, what is needed is a method and an apparatus that reduces the number of cache lines that need to be marked during transactional program execution.
SUMMARY
[0014] One embodiment of the present invention provides a system that selectively monitors load instructions to support transactional execution of a process, wherein changes made during the transactional execution are not committed to the architectural state of a processor until the transactional execution successfully completes. Upon encountering a load instruction during transactional execution of a block of instructions, the system determines whether the load instruction is a monitored load instruction or an unmonitored load instruction. If the load instruction is a monitored load instruction, the system performs the load operation, and load-marks a cache line associated with the load instruction to facilitate subsequent detection of an interfering data access to the cache line from another process. If the load instruction is an unmonitored load instruction, the system performs the load operation without load-marking the cache line.
[0015] In a variation on this embodiment, prior to executing the program, the system generates instructions for the program. During this process, the system determines whether load operations that take place during transactional execution need to be monitored. The system then generates monitored load instructions for load operations that need to be monitored, and generates unmonitored load instructions for load operations that do not need to be monitored.
[0016] In a variation on this embodiment, the system determines whether a load operation needs to be monitored by determining whether the load operation is directed to a heap, wherein loads from the heap need to be monitored and loads from outside the heap do not need to be monitored.
[0017] In a variation on this embodiment, the system determines whether a load operation needs to be monitored by examining a data structure associated with the load operation to determine whether the data structure is a "protected" data structure for which loads need to be monitored, or an "unprotected" data structure for which loads do not need to be monitored.
[0018] In a variation on this embodiment, the system determines whether a load operation needs to be monitored by allowing a programmer to determine if the load operation needs to be monitored.
[0019] In a variation on this embodiment, the system determines whether a load operation needs to be monitored by examining an op code of the load instruction.
[0020] In a variation on this embodiment, the system determines whether a load operation needs to be monitored by examining an address associated with the load instruction to determine whether the address falls within a range of addresses for which loads are monitored. Examining the address can involve comparing the address with one or more boundary registers. It can also involve examining a Translation Lookaside Buffer (TLB) entry associated with the address.
[0021] In a variation on this embodiment, if an interfering data access from another process is encountered during transactional execution of the block of instructions, the system discards changes made during the transactional execution and attempts to re-execute the block of instructions.
[0022] In a variation on this embodiment, if transactional execution of the block of instructions completes without encountering an interfering data access from another process, the system commits changes made during the transactional execution to the architectural state of the processor, and resumes normal non-transactional execution of the program past the block of instructions.
[0023] In a variation on this embodiment, an interfering data access can include: a store by another process to a cache line that has been load-marked by the process; and a load or a store by another process to a cache line that has been store-marked by the process.
[0024] In a variation on this embodiment, the cache line is load-marked in level 1 (L1) cache.
[0025] One embodiment of the present invention provides a system that selectively monitors store instructions to support transactional execution of a process, wherein changes made during the transactional execution are not committed to the architectural state of a processor until the transactional execution successfully completes. Upon encountering a store instruction during transactional execution of a block of instructions, the system determines whether the store instruction is a monitored store instruction or an unmonitored store instruction. If the store instruction is a monitored store instruction, the system performs the store operation, and store-marks a cache line associated with the store instruction to facilitate subsequent detection of an interfering data access to the cache line from another process. If the store instruction is an unmonitored store instruction, the system performs the store operation without store-marking the cache line.
[0026] In a variation on this embodiment, prior to executing the program, the system generates instructions for the program. During this process, the system determines whether store operations that take place during transactional execution need to be monitored. The system then generates monitored store instructions for store operations that need to be monitored, and generates unmonitored store instructions for store operations that do not need to be monitored.
[0027] In a variation on this embodiment, the system determines whether a store operation needs to be monitored by determining whether the store operation is directed to a heap, wherein stores from the heap need to be monitored and stores from outside the heap do not need to be monitored.
[0028] In a variation on this embodiment, the system determines whether a store operation needs to be monitored by examining a data structure associated with the store operation to determine whether the data structure is a "protected" data structure for which stores need to be monitored, or an "unprotected" data structure for which stores do not need to be monitored.
[0029] In a variation on this embodiment, the system determines whether a store operation needs to be monitored by allowing a programmer to determine if the store operation needs to be monitored.
[0030] In a variation on this embodiment, the system determines whether the store instruction is a monitored store instruction by examining an op code of the store instruction.
[0031] In a variation on this embodiment, the system determines whether the store instruction is a monitored store instruction by examining an address associated with the store instruction to determine whether the address falls within a range of addresses for which stores are monitored.
[0032] In a variation on this embodiment, the cache line is store-marked in the cache level closest to the processor where cache lines are coherent.
[0033] In a variation on this embodiment, a store-marked cache line can indicate that: loads from other processes to the cache line should be monitored; stores from other processes to the cache line should be monitored; and stores to the cache line should be buffered until the transactional execution completes.
BRIEF DESCRIPTION OF THE FIGURES
[0034] FIG. 1 illustrates a computer system in accordance with an embodiment of the present invention.
[0035] FIG. 2 illustrates how a critical section is executed in accordance with an embodiment of the present invention.
[0036] FIG. 3 presents a flow chart illustrating the transactional execution process in accordance with an embodiment of the present invention.
[0037] FIG. 4 presents a flow chart illustrating a start transactional execution (STE) operation in accordance with an embodiment of the present invention.
[0038] FIG. 5 presents a flow chart illustrating how load-marking is performed during transactional execution in accordance with an embodiment of the present invention.
[0039] FIG. 6 presents a flow chart illustrating how store-marking is performed during transactional execution in accordance with an embodiment of the present invention.
[0040] FIG. 7 presents a flow chart illustrating how a commit operation is performed in accordance with an embodiment of the present invention.
[0041] FIG. 8 presents a flow chart illustrating how changes are discarded after transactional execution completes unsuccessfully in accordance with an embodiment of the present invention.
[0042] FIG. 9A presents a flow chart illustrating how monitored and unmonitored load instructions are generated in accordance with an embodiment of the present invention.
[0043] FIG. 9B presents a flow chart illustrating how monitored and unmonitored load instructions are executed in accordance with an embodiment of the present invention.
[0044] FIG. 10A presents a flow chart illustrating how monitored and unmonitored store instructions are generated in accordance with an embodiment of the present invention.
[0045] FIG. 10B presents a flow chart illustrating how monitored and unmonitored store instructions are executed in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
[0046] The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
[0047] The data structures and code described in this detailed description are typically stored on a computer readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. This includes, but is not limited to, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs) and DVDs (digital versatile discs or digital video discs), and computer instruction signals embodied in a transmission medium (with or without a carrier wave upon which the signals are modulated). For example, the transmission medium may include a communications network, such as the Internet.
Computer System
[0048] FIG. 1 illustrates a computer system 100 in accordance with an embodiment of the present invention. Computer system 100 can generally include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a personal organizer, a device controller, and a computational engine within an appliance. As is illustrated in FIG. 1, computer system 100 includes processors 101 and 102 and level 2 (L2) cache 120, which is coupled to main memory (not shown). Processor 102 is similar in structure to processor 101, so only processor 101 is described below.
[0049] Processor 101 has two register files 103 and 104, one of which is an "active register file" and the other of which is a backup "shadow register file." In one embodiment of the present invention, processor 101 provides a flash copy operation that instantly copies all of the values from register file 103 into register file 104. This facilitates a rapid register checkpointing operation to support transactional execution.
[0050] Processor 101 also includes one or more functional units, such as adder 107 and multiplier 108. These functional units are used in performing computational operations involving operands retrieved from register files 103 or 104. As in a conventional processor, load and store operations pass through load buffer 111 and store buffer 112.
[0051] Processor 101 additionally includes a level one (L1) data cache 115, which stores data items that are likely to be used by processor 101. Note that each line in L1 data cache 115 includes a "load-marking bit," which indicates that a data value from the line has been loaded during transactional execution. This load-marking bit is used to determine whether any interfering memory references take place during transactional execution as is described below with reference to FIGs. 3-8. Processor 101 also includes an L1 instruction cache (not shown).
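For concreteness, the per-line marking bits described in this paragraph and in paragraph [0054] below can be pictured as extra metadata fields on each cache line. The following C++ fragment is a minimal sketch of how a software model of this hardware might represent them; the struct and field names are illustrative and do not come from the patent.

    // Illustrative software model of the marking bits described above.
    // An L1 line carries a load-marking bit; an L2 line carries a
    // store-marking bit plus a lock flag used during commit (FIG. 7).
    #include <cstdint>

    struct L1CacheLine {
        uint64_t tag = 0;
        bool valid = false;
        bool load_mark = false;   // set by a monitored load during transactional execution
    };

    struct L2CacheLine {
        uint64_t tag = 0;
        bool valid = false;
        bool store_mark = false;  // set by a monitored store during transactional execution
        bool locked = false;      // treated as locked while commit drains the store buffer
    };
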
[0052] Note that load-marking does not necessarily have to take place in L1 data cache 115. In general, load-marking can take place at any level of cache, such as L2 cache 120. However, for performance reasons, the load-marking takes place at the cache level that is as close to the processor as possible, which in this case is L1 data cache 115. Otherwise, loads would have to go to L2 cache 120 even on an L1 hit.
[0053] L2 cache 120 operates in concert with L1 data cache 115 (and a corresponding L1 instruction cache) in processor 101, and with L1 data cache 117 (and a corresponding L1 instruction cache) in processor 102. Note that L2 cache 120 is associated with a coherency mechanism 122, such as the reverse directory structure described in U.S. Patent Application No. 10/186,118, entitled, "Method and Apparatus for Facilitating Speculative Loads in a Multiprocessor System," filed on June 26, 2002, by inventors Shailender Chaudhry and Marc Tremblay (Publication No. US-2002-0199066-A1). This coherency mechanism 122 maintains "copyback information" 121 for each cache line. This copyback information 121 facilitates sending a cache line from L2 cache 120 to a requesting processor in cases where the current version of the cache line must first be retrieved from another processor.
[0054] Each line in L2 cache 120 includes a "store-marking bit," which indicates that a data value has been stored to the line during transactional execution. This store-marking bit is used to determine whether any interfering memory references take place during transactional execution as is described below with reference to FIGs. 3-8. Note that store-marking does not necessarily have to take place in L2 cache 120.
[0055] Ideally, the store-marking takes place in the cache level closest to the processor where cache lines are coherent. For write-through L1 data caches, writes are automatically propagated to L2 cache 120. However, if an L1 data cache is a write-back cache, we perform store-marking in the L1 data cache. (Note that the cache coherence protocol ensures that any other processor that subsequently modifies the same cache line will retrieve the cache line from the L1 cache, and will hence become aware of the store-mark.)
Executing a Critical Section
[0056] FIG. 2 illustrates how a critical section is executed in accordance with an embodiment of the present invention. As is illustrated in the left-hand side of FIG. 2, a process that executes a critical section typically acquires a lock associated with the critical section before entering the critical section. If the lock has been acquired by another process, the process may have to wait until the other process releases the lock. Upon leaving the critical section, the process releases the lock. (Note that the terms "thread" and "process" are used interchangeably throughout this specification.)
[0057] A lock can be associated with a shared data structure. For example, before accessing a shared data structure, a process can acquire a lock on the shared data structure. The process can then execute a critical section of code that accesses the shared data structure. After the process is finished accessing the shared data structure, the process releases the lock.
[0058] In contrast, in the present invention, the process does not acquire a lock, but instead executes a start transactional execution (STE) instruction before entering the critical section. If the critical section is successfully completed without interference from other processes, the process performs a commit operation, to commit changes made during transactional execution. This sequence of events is described in more detail below with reference to FIGs. 3-8.
[0059] Note that in one embodiment of the present invention a compiler replaces lock-acquiring instructions with STE instructions, and also replaces corresponding lock-releasing instructions with commit instructions. (Note that there may not be a one-to-one correspondence between replaced instructions. For example, a single lock acquisition operation comprised of multiple instructions may be replaced by a single STE instruction.) The above discussion presumes that the processor's instruction set has been augmented to include an STE instruction and a commit instruction. These instructions are described in more detail below with reference to FIGs. 3-9.
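The substitution described in paragraph [0059] can be pictured with a short C++ sketch. The functions begin_transactional_execution() and commit_transaction() are hypothetical stand-ins for the STE and commit instructions, stubbed out here so the example compiles; they are not part of any real instruction set or library, and a real STE instruction would additionally name a failure target (see FIG. 8).

    #include <mutex>

    std::mutex counter_lock;   // illustrative lock protecting the shared data
    long shared_counter = 0;   // illustrative shared data

    // Hypothetical stand-ins for the STE and commit instructions. The stubs
    // simply report failure, so this sketch always falls back to the lock.
    static bool begin_transactional_execution() { return false; }
    static void commit_transaction() {}

    // Conventional version: the critical section is bracketed by lock operations.
    void increment_with_lock() {
        counter_lock.lock();     // lock acquisition -> replaced by an STE instruction
        shared_counter++;        // critical section body is unchanged
        counter_lock.unlock();   // lock release -> replaced by a commit instruction
    }

    // Transformed version: the lock operations are replaced as in paragraph [0059].
    void increment_transactionally() {
        if (begin_transactional_execution()) {
            shared_counter++;        // executed transactionally; stores are buffered
            commit_transaction();    // atomically commit if no interference occurred
        } else {
            increment_with_lock();   // fall back to the lock if the attempt fails
        }
    }
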
Transactional Execution Process
[0060] FIG. 3 presents a flow chart illustrating how transactional execution takes place in accordance with an embodiment of the present invention. A process first executes an STE instruction prior to entering a critical section of code (step 302). Next, the system transactionally executes code within the critical section, without committing results of the transactional execution (step 304).
[0061] During this transactional execution, the system continually monitors data references made by other processes, and determines if an interfering data access (or other type of failure) takes place during transactional execution. If not, the system atomically commits all changes made during transactional execution (step 308) and then resumes normal non-transactional execution of the program past the critical section (step 310).
[0062] On the other hand, if an interfering data access is detected, the system discards changes made during the transactional execution (step 312), and attempts to re-execute the critical section (step 314).
[0063] In one embodiment of the present invention, the system attempts to transactionally re-execute the critical section zero, one, two or more times. If these attempts are not successful, the system reverts back to the conventional technique of acquiring a lock on the critical section before entering the critical section, and then releasing the lock after leaving the critical section.
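The retry policy in paragraph [0063] amounts to a bounded loop around the transactional attempt, followed by the conventional lock path. The sketch below is one way to express it in C++; try_execute_transactionally() is a hypothetical helper (stubbed to always fail so the code compiles and runs), and the attempt limit is an assumption rather than a value taken from the patent.

    #include <mutex>

    // Hypothetical helper standing in for hardware-supported transactional
    // execution of the critical section; the stub always reports failure.
    static bool try_execute_transactionally(void (*critical_section)()) {
        (void)critical_section;
        return false;
    }

    // Attempt transactional execution a bounded number of times (paragraph
    // [0063]), then revert to acquiring the lock.
    void run_critical_section(void (*critical_section)(), std::mutex& lock,
                              int max_attempts = 2) {
        for (int attempt = 0; attempt < max_attempts; ++attempt) {
            if (try_execute_transactionally(critical_section)) {
                return;   // committed without encountering an interfering access
            }
            // Interference detected: changes were already discarded; retry.
        }
        lock.lock();      // conventional fallback path
        critical_section();
        lock.unlock();
    }
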
[0064] Note that an interfering data access can include a store by another process to a cache line that has been load-marked by the process. It can also include a load or a store by another process to a cache line that has been store-marked by the process.
[0065] Also note that circuitry to detect interfering data accesses can be easily implemented by making minor modifications to conventional cache coherence circuitry. This conventional cache coherence circuitry presently generates signals indicating whether a given cache line has been accessed by another processor. Hence, these signals can be used to determine whether an interfering data access has taken place.
Starting Transactional Execution
[0066] FIG. 4 presents a flow chart illustrating a start transactional execution (STE) operation in accordance with an embodiment of the present invention. This flow chart illustrates what takes place during step 302 of the flow chart in FIG. 3. The system starts by checkpointing the register file (step 402). This can involve performing a flash copy operation from register file 103 to register file 104 (see FIG. 1). In addition to checkpointing register values, this flash copy can also checkpoint various state registers associated with the currently executing process. In general, the flash copy operation checkpoints enough state to be able to restart the corresponding thread.
[0067] At the same time the register file is checkpointed, the STE operation also causes store buffer 112 to become "gated" (step 404). This allows existing entries in the store buffer to propagate to the memory sub-system, but prevents new store buffer entries generated during transactional execution from doing so.
[0068] The system then starts transactional execution (step 406), which involves load-marking and store-marking cache lines, if necessary, as well as monitoring data references in order to detect interfering references.
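Taken together, steps 402-406 can be summarized in a few lines of simulator-style C++. This is a minimal sketch under the assumption that checkpointing only the general-purpose registers is enough for illustration; as paragraph [0066] notes, the real flash copy also checkpoints state registers. All struct and field names are illustrative.

    #include <array>
    #include <cstdint>

    // Simplified per-processor state; names are illustrative, not from the patent.
    struct ProcessorState {
        std::array<uint64_t, 32> active_regs{};   // plays the role of register file 103
        std::array<uint64_t, 32> shadow_regs{};   // plays the role of register file 104
        bool store_buffer_gated = false;
        bool in_transaction = false;
    };

    // Start transactional execution (STE), following FIG. 4.
    void start_transactional_execution(ProcessorState& p) {
        p.shadow_regs = p.active_regs;   // step 402: flash-copy register checkpoint
        p.store_buffer_gated = true;     // step 404: gate the store buffer so new entries
                                         // do not reach the memory sub-system
        p.in_transaction = true;         // step 406: begin load/store-marking and
                                         // monitoring for interfering references
    }
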
Load-Marking Process
[0069] FIG. 5 presents a flow chart illustrating how load-marking is performed during transactional execution in accordance with an embodiment of the present invention. During transactional execution of a critical section, the system performs a load operation. In performing this load operation, if the load operation has been identified as a load operation that needs to be load-marked, the system first attempts to load a data item from L1 data cache 115 (step 502). If the load causes a cache hit, the system "load-marks" the corresponding cache line in L1 data cache 115 (step 506). This involves setting the load-marking bit for the cache line. Otherwise, if the load causes a cache miss, the system retrieves the cache line from further levels of the memory hierarchy (step 508), and proceeds to step 506 to load-mark the cache line in L1 data cache 115.
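The hit and miss paths of FIG. 5 differ from an ordinary load only in setting the load-marking bit. A minimal C++ sketch follows, modeling the L1 cache as a map keyed by line address; fetch_from_lower_levels() is a stub standing in for the rest of the memory hierarchy, and every name here is an assumption made for illustration.

    #include <cstdint>
    #include <unordered_map>

    struct L1Line { uint64_t data = 0; bool load_mark = false; };
    using L1Cache = std::unordered_map<uint64_t, L1Line>;

    // Stub for the miss path (step 508); a real system would query L2 and beyond.
    static uint64_t fetch_from_lower_levels(uint64_t /*line_addr*/) { return 0; }

    // Monitored load: performs the load and sets the load-marking bit (steps 502-508).
    uint64_t monitored_load(L1Cache& l1, uint64_t line_addr) {
        auto it = l1.find(line_addr);
        if (it == l1.end()) {                                        // cache miss
            it = l1.emplace(line_addr,
                            L1Line{fetch_from_lower_levels(line_addr), false}).first;
        }
        it->second.load_mark = true;                                 // step 506
        return it->second.data;
    }

    // Unmonitored load: same data path, but the line is never load-marked.
    uint64_t unmonitored_load(L1Cache& l1, uint64_t line_addr) {
        auto it = l1.find(line_addr);
        if (it == l1.end()) {
            it = l1.emplace(line_addr,
                            L1Line{fetch_from_lower_levels(line_addr), false}).first;
        }
        return it->second.data;
    }
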
Store-Marking Process
[0070] FIG. 6 presents a flow chart illustrating how store-marking is performed during transactional execution in accordance with an embodiment of the present invention. During transactional execution of a critical section, the system performs a store operation. If this store operation has been identified as a store operation that needs to be store-marked, the system first prefetches a corresponding cache line for exclusive use (step 602). Note that this prefetch operation will do nothing if the line is already located in cache and is already in an exclusive use state.
[0071] Since in this example L1 data cache 115 is a write-through cache, the store operation propagates through L1 data cache 115 to L2 cache 120. The system then attempts to lock the cache line corresponding to the store operation in L2 cache 120 (step 604). If the corresponding line is in L2 cache 120 (cache hit), the system "store-marks" the corresponding cache line in L2 cache 120 (step 610). This involves setting the store-marking bit for the cache line. Otherwise, if the corresponding line is not in L2 cache 120 (cache miss), the system retrieves the cache line from further levels of the memory hierarchy (step 608) and then proceeds to step 610 to store-mark the cache line in L2 cache 120.
[0072] Next, after the cache line is store-marked in step 610, the system enters the store data into an entry of the store buffer 112 (step 612). Note that this store data will remain in store buffer 112 until a subsequent commit operation takes place, or until changes made during the transactional execution are discarded.
[0073] Note that a cache line that is store-marked by a given thread can still be read by other threads, although this may cause the given thread to fail while the other threads continue.
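Steps 602-612 can be condensed into a short C++ sketch. It models the L2 cache as a map of lines and the store buffer as a vector; prefetch_exclusive() and fetch_line_into_l2() are stubs for the coherence traffic and the miss path, and all names are illustrative assumptions rather than patent terminology.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct L2Line { bool store_mark = false; bool locked = false; };
    using L2Cache = std::unordered_map<uint64_t, L2Line>;

    struct StoreBufferEntry { uint64_t line_addr; uint64_t value; };

    // Stubs: a real system would issue coherence requests here.
    static void prefetch_exclusive(uint64_t /*line_addr*/) {}                  // step 602
    static void fetch_line_into_l2(L2Cache& l2, uint64_t line_addr) {          // step 608
        l2.emplace(line_addr, L2Line{});
    }

    // Monitored store: store-marks the L2 line and buffers the data (FIG. 6).
    void monitored_store(L2Cache& l2, std::vector<StoreBufferEntry>& store_buffer,
                         uint64_t line_addr, uint64_t value) {
        prefetch_exclusive(line_addr);                 // step 602: get exclusive access
        if (l2.find(line_addr) == l2.end()) {
            fetch_line_into_l2(l2, line_addr);         // step 608: miss path
        }
        l2[line_addr].store_mark = true;               // step 610: set the store-marking bit
        store_buffer.push_back({line_addr, value});    // step 612: hold the data in the
                                                       // gated store buffer until commit
    }
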
Commit Operation
[0074] FIG. 7 presents a flow chart illustrating how a commit operation is performed after transactional execution completes successfully in accordance with an embodiment of the present invention. This flow chart illustrates what takes place during step 308 of the flow chart in FIG. 3.
[0075] The system starts by treating store-marked cache lines as though they are locked (step 702). This means other processes that request a store-marked line must wait until the line is no longer locked before they can access the line. This is similar to how lines are locked in conventional caches.
[0076] Next, the system clears load-marks from L1 data cache 115 (step 704).
[0077] The system then commits entries from store buffer 112 for stores that are identified as needing to be marked, which were generated during the transactional execution, into the memory hierarchy (step 706). As each entry is committed, a corresponding line in L2 cache 120 is unlocked.
[0078] The system also commits register file changes (step 708). For example, this can involve functionally performing a flash copy between register file 103 and register file 104 in the system illustrated in FIG. 1.
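Steps 702-708 can be sketched in C++ using the same illustrative structures as the earlier fragments. The clearing of the store-marking bits at commit is an assumption made to keep the example self-contained, and write_to_memory_hierarchy() is a stub for the actual memory system.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct L1Line { bool load_mark = false; };
    struct L2Line { bool store_mark = false; bool locked = false; };
    struct StoreBufferEntry { uint64_t line_addr; uint64_t value; };

    // Stub for the real memory hierarchy.
    static void write_to_memory_hierarchy(uint64_t /*line_addr*/, uint64_t /*value*/) {}

    void commit_transactional_state(std::unordered_map<uint64_t, L1Line>& l1,
                                    std::unordered_map<uint64_t, L2Line>& l2,
                                    std::vector<StoreBufferEntry>& store_buffer) {
        for (auto& line : l2) {                           // step 702: store-marked lines
            if (line.second.store_mark) line.second.locked = true;   // behave as locked
        }
        for (auto& line : l1) {                           // step 704: clear load-marks
            line.second.load_mark = false;
        }
        for (const auto& buffered : store_buffer) {       // step 706: drain gated stores
            write_to_memory_hierarchy(buffered.line_addr, buffered.value);
            l2[buffered.line_addr].locked = false;        // unlock as each entry commits
            l2[buffered.line_addr].store_mark = false;    // assumed: mark cleared here
        }
        store_buffer.clear();
        // Step 708: register changes become architectural; in the FIG. 1 system this
        // corresponds to a flash copy between register files 103 and 104.
    }
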
Discarding Changes
[0079] FIG. 8 presents a flow chart illustrating how changes are discarded after transactional execution completes unsuccessfully in accordance with an embodiment of the present invention. This flow chart illustrates what takes place during step 312 of the flow chart in FIG. 3. The system first discards register file changes made during the transactional execution (step 802). This can involve either clearing or simply ignoring register file changes made during transactional execution. This is easy to accomplish because the old register values were checkpointed prior to commencing transactional execution. The system also clears load-marks from cache lines in L1 data cache 115 (step 804), and drains store buffer entries generated during transactional execution without committing them to the memory hierarchy (step 806). At the same time, the system unmarks corresponding L2 cache lines. Finally, in one embodiment of the present invention, the system branches to a target location specified by the STE instruction (step 808). The code at this target location attempts to re-execute the critical section as is described above with reference to step 314 of FIG. 3.
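The abort path of FIG. 8 is the mirror image of the commit sketch above: the register checkpoint is restored, the marks are cleared, and the gated store buffer is drained without writing anything to memory. The fragment below is a minimal sketch using the same illustrative structures.

    #include <array>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Registers { std::array<uint64_t, 32> active{}, shadow{}; };
    struct L1Line { bool load_mark = false; };
    struct L2Line { bool store_mark = false; };
    struct StoreBufferEntry { uint64_t line_addr; uint64_t value; };

    void discard_transactional_state(Registers& regs,
                                     std::unordered_map<uint64_t, L1Line>& l1,
                                     std::unordered_map<uint64_t, L2Line>& l2,
                                     std::vector<StoreBufferEntry>& store_buffer) {
        regs.active = regs.shadow;                 // step 802: revert to the checkpoint
        for (auto& line : l1) {
            line.second.load_mark = false;         // step 804: clear L1 load-marks
        }
        for (const auto& buffered : store_buffer) {
            l2[buffered.line_addr].store_mark = false;   // step 806: unmark the L2 lines
        }
        store_buffer.clear();                      // drain without touching memory
        // Step 808: control then branches to the failure target named by the STE
        // instruction, which typically retries the critical section (step 314).
    }
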
Monitored Load Instructions
[0080] FIG. 9A presents a flow chart illustrating how monitored and unmonitored load instructions are generated in accordance with an embodiment of the present invention. This process takes place when a program is being generated to support transactional execution. For example, in one embodiment of the present invention, a compiler or virtual machine automatically generates native code to support transactional execution. In another embodiment, a programmer manually generates code to support transactional execution.
[0081] The system first determines whether a given load operation within a block of instructions to be transactionally executed needs to be monitored (step 902). In one embodiment of the present invention, the system determines whether a load operation needs to be monitored by determining whether the load operation is directed to a heap. Note that a heap contains data that can potentially be accessed by other processes. Hence, loads from the heap need to be monitored to detect interference. In contrast, loads from outside the heap (for example, from the local stack) are not directed to data that is shared by other processes, and hence do not need to be monitored to detect interference.
[0082] One embodiment of the present invention determines whether a load operation needs to be monitored at the programming-language level, by examining a data structure associated with the load operation to determine whether the data structure is a "protected" data structure for which loads need to be monitored, or an "unprotected" data structure for which loads do not need to be monitored.
[0083] In yet another embodiment, the system allows a programmer to determine whether a load operation needs to be monitored.
[0084] If the system determines that a given load operation needs to be monitored, the system generates a "monitored load" instruction (step 904). Otherwise, the system generates an "unmonitored load" instruction (step 906).
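Paragraphs [0081]-[0084] describe a decision made at code-generation time. The C++ sketch below shows one plausible shape for it, under the assumption that the code generator knows whether the referenced data lives on the heap and whether it belongs to a programmer-declared "protected" data structure; the enum, struct and helper names are illustrative.

    #include <string>

    enum class Allocation { Heap, Stack };

    // What the code generator is assumed to know about one load site.
    struct LoadSite {
        Allocation region;            // where the referenced data lives
        bool protected_structure;     // declared "protected" by the programmer or runtime
    };

    // Decide which instruction to emit (steps 902-906 of FIG. 9A).
    std::string select_load_instruction(const LoadSite& site) {
        // Heap data may be shared with other processes, so loads from it are
        // monitored; stack-local data is private and needs no monitoring.
        bool monitor = (site.region == Allocation::Heap) || site.protected_structure;
        return monitor ? "monitored_load"      // step 904
                       : "unmonitored_load";   // step 906
    }
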
[0085] There are a number of different ways to differentiate a monitored load instruction from an unmonitored load instruction. (1) The system can use the op code to differentiate a monitored load instruction from an unmonitored load instruction. (2) Alternatively, the system can use the address of the load instruction to differentiate between the two types of instructions. For example, loads directed to a certain range of addresses can be monitored load instructions, whereas loads directed to other addresses can be unmonitored load instructions.
[0086] Also note that an unmonitored load instruction can either indicate that no other process can possibly interfere with the load operation, or it can indicate that interference is possible, but it is not a reason to fail. (Note that in some situations, interfering accesses to shared data can be tolerated.)
[0087] FIG. 9B presents a flow chart illustrating how monitored and unmonitored load instructions are executed in accordance with an embodiment of the present invention. The system first determines whether the load instruction is a monitored load instruction or an unmonitored load instruction (step 910). This can be accomplished by looking at the op code of the load instruction, or alternatively, looking at the address for the load instruction. Note that the address can be examined by comparing the address against boundary registers, or possibly examining a translation lookaside buffer (TLB) entry for the address to determine if the address falls within a monitored range of addresses.
[0088] If the load instruction is a monitored load instruction, the system performs the corresponding load operation and load-marks the associated cache line (step 914). Otherwise, if the load instruction is an unmonitored load instruction, the system performs the load operation without load-marking the cache line (step 916).
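The address-based variant of step 910 reduces to a range comparison against a pair of boundary registers (or an equivalent TLB attribute). The sketch below shows the boundary-register form; the register names and the half-open range convention are assumptions made for illustration.

    #include <cstdint>

    // Illustrative pair of boundary registers delimiting the monitored region.
    struct BoundaryRegisters {
        uint64_t monitored_base = 0;
        uint64_t monitored_limit = 0;   // one past the last monitored address (assumed)
    };

    // Address-based check for step 910 of FIG. 9B: the load is treated as
    // monitored exactly when its effective address falls inside the range.
    bool is_monitored_address(uint64_t effective_address, const BoundaryRegisters& b) {
        return effective_address >= b.monitored_base &&
               effective_address <  b.monitored_limit;
    }

    // The alternative described in paragraph [0087] dedicates a separate op code
    // to the monitored form, so the decision is made at instruction decode instead.
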
[0089] In another embodiment of the present invention, instead of detecting interfering data accesses from other processes, the system does not allow a load operation from the current process to cause other processes to fail. This can be accomplished by propagating additional information during the coherency transactions associated with the load operation to ensure that the load operation does not cause another process to fail.
Monitored Store Instructions
[0090] FIG. 10A presents a flow chart illustrating how monitored and unmonitored store instructions are generated in accordance with an embodiment of the present invention. As was described above for load operations, this process can take place when a compiler or virtual machine automatically generates native code to support transactional execution, or when a programmer manually generates code to support transactional execution.
[0091] The system first determines whether a store operation within a block of instructions to be transactionally executed needs to be monitored (step 1002). This determination can be made based on the same factors as for load instructions.
[0092] If the system determines that a store operation needs to be monitored, the system generates a "monitored store" instruction (step 1004). Otherwise, the system generates an "unmonitored store" instruction (step 1006).
[0093] Note that monitored store instructions can be differentiated from unmonitored store instructions in the same way that monitored load instructions can be differentiated from unmonitored load instructions; for example, the system can use different op codes or different address ranges.
[0094] FIG. 10B presents a flow chart illustrating how monitored and unmonitored store instructions are executed in accordance with an embodiment of the present invention. The system first determines whether the store instruction is a monitored store instruction or an unmonitored store instruction (step 1010). This can be accomplished by looking at the op code for the store instruction, or alternatively, looking at the address for the store instruction. If the store instruction is a monitored store instruction, the system performs the corresponding store operation and store-marks the associated cache line (step 1014). Otherwise, if the store instruction is an unmonitored store instruction, the system performs the store operation without store-marking the cache line (step 1016).
[0095] Note that a store-marked cache line can indicate one or more of the following: (1) loads from other processes to the cache line should be monitored; (2) stores from other processes to the cache line should be monitored; or (3) stores to the cache line should be buffered until the transactional execution completes.
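The three indications in paragraph [0095] are independent, so one natural encoding is a small set of flag bits attached to a store-marked line. The C++ fragment below is a minimal sketch of that encoding; the flag names and the helper are assumptions, not patent terminology.

    #include <cstdint>

    // Illustrative encoding of the three indications of paragraph [0095].
    enum StoreMarkFlags : uint8_t {
        MONITOR_REMOTE_LOADS  = 1 << 0,   // loads from other processes are monitored
        MONITOR_REMOTE_STORES = 1 << 1,   // stores from other processes are monitored
        BUFFER_LOCAL_STORES   = 1 << 2    // local stores stay buffered until commit
    };

    // A remote access conflicts with the transaction when the corresponding
    // monitoring bit is set on the store-marked line.
    inline bool remote_access_interferes(uint8_t mark, bool remote_access_is_store) {
        return remote_access_is_store ? (mark & MONITOR_REMOTE_STORES) != 0
                                      : (mark & MONITOR_REMOTE_LOADS)  != 0;
    }
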
[0096] In another embodiment of the present invention, instead of detecting interfering data accesses from other processes, the system does not allow a store operation from the current process to cause another process to fail. This can be accomplished by propagating additional information during coherency transactions associated with the store operation to ensure that the store operation does not cause another process to fail.
[0097] The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art.
Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.

Claims

What Is Claimed Is:
1. A method for selectively monitoring store instructions to support transactional execution of a process, comprising: encountering a store instruction during transactional execution of a block of instructions in a program, wherein changes made during the transactional execution are not committed to the architectural state of a processor until the transactional execution successfully completes; determining whether the store instruction is a monitored store instruction or an unmonitored store instruction; if the store instruction is a monitored store instruction, performing a corresponding store operation, and store-marking a cache line associated with the store instruction to facilitate subsequent detection of an interfering data access to the cache line from another process; and if the store instruction is an unmonitored store instruction, performing the corresponding store operation without store-marking the cache line.
2. The method of claim 1, wherein prior to executing the program, the method further comprises generating the instructions for the program, wherein generating the instructions involves: determining whether store operations that take place during transactional execution need to be monitored; generating monitored store instructions for store operations that need to be monitored; and generating unmonitored store instructions for store operations that do not need to be monitored.
3. The method of claim 2, wherein determining whether a store operation needs to be monitored can involve examining a data structure associated with the store operation to determine whether the data structure is a "protected" data structure for which stores need to be monitored, or an "unprotected" data structure for which stores do not need to be monitored.
4. The method of claim 2, wherein determining whether a store operation needs to be monitored can involve determining whether the store operation is directed to a heap, wherein stores from the heap need to be monitored and stores from outside the heap do not need to be monitored.
5. The method of claim 2, wherein determining whether a store operation needs to be monitored can involve allowing a programmer to determine if the store operation needs to be monitored.
6. The method of claim 1, wherein determining whether the store instruction is a monitored store instruction involves examining an op code of the store instruction.
7. The method of claim 1, wherein determining whether the store instruction is a monitored store instruction involves examining an address associated with the store instruction to determine whether the address falls within a range of addresses for which stores are monitored.
8. The method of claim 7, wherein examining the address involves comparing the address with one or more boundary registers.
9. The method of claim 7, wherein examining the address involves examining a Translation Lookaside Buffer (TLB) entry associated with the address.
10. The method of claim 1, wherein if an interfering data access from another process is encountered during transactional execution of the block of instructions, the method further comprises: discarding changes made during the transactional execution; and attempting to re-execute the block of instructions.
11. The method of claim 1, wherein if transactional execution of the block of instructions completes without encountering an interfering data access from another process, the method further comprises: committing changes made during the transactional execution to the architectural state of the processor; and resuming normal non-transactional execution of the program past the block of instructions.
12. The method of claim 1, wherein an interfering data access can include: a store by another process to a cache line that has been load-marked by the process; and a load or a store by another process to a cache line that has been store-marked by the process.
13. The method of claim 1, wherein the cache line is store-marked in the cache level closest to the processor where cache lines are coherent.
14. The method of claim 1, wherein a store-marked cache line indicates at least one of the following: loads from other processes to the cache line should be monitored; stores from other processes to the cache line should be monitored; and stores to the cache line should be buffered until the transactional execution completes.
15. An apparatus that selectively monitors store instructions to support transactional execution of a process, comprising: an execution mechanism within a processor; wherein the execution mechanism is configured to support transactional execution of a block of instructions in a program, wherein changes made during the transactional execution are not committed to the architectural state of a processor until the transactional execution successfully completes; wherein upon encountering a store instruction during transactional execution, the execution mechanism is configured to, determine whether the store instruction is a monitored store instruction or an unmonitored store instruction, if the store instruction is a monitored store instruction, to perform a corresponding store operation, and to store-mark a cache line associated with the store instruction to facilitate subsequent detection of an interfering data access to the cache line from another process; and if the store instruction is an unmonitored store instruction, to perform the corresponding store operation without store-marking the cache line.
16. The apparatus of claim 15, further comprising an instruction generation mechanism configured to: determine whether store operations that take place during transactional execution need to be monitored; generate monitored store instructions for store operations that need to be monitored; and to generate unmonitored store instructions for store operations that do not need to be monitored.
17. The apparatus of claim 16, wherein the instruction generation mechanism is configured to determine whether a store operation needs to be monitored by examining a data structure associated with the store operation to determine whether the data structure is a "protected" data structure for which stores need to be monitored, or an "unprotected" data structure for which stores do not need to be monitored.
18. The apparatus of claim 16, wherein the instruction generation mechanism is configured to determine whether a store operation needs to be monitored by determining whether the store operation is directed to a heap, wherein stores from the heap need to be monitored and stores from outside the heap do not need to be monitored.
19. The apparatus of claim 16, wherein the instruction generation mechanism is configured to determine whether a store operation needs to be monitored by allowing a programmer to determine if the store operation needs to be monitored.
20. The apparatus of claim 15, wherein the execution mechanism is configured to determine whether a store operation needs to be monitored by examining an op code of the store instruction.
21. The apparatus of claim 15, wherein the execution mechanism is configured to determine whether a store operation needs to be monitored by examining an address associated with the store instruction to determine whether the address falls within a range of addresses for which stores are monitored.
22. The apparatus of claim 21, wherein the execution mechanism is configured to examine the address by comparing the address with one or more boundary registers.
23. The apparatus of claim 21, wherein the execution mechanism is configured to examine the address by examining a Translation Lookaside Buffer (TLB) entry associated with the address.
24. The apparatus of claim 15, wherein if an interfering data access from another process is encountered during transactional execution of the block of instructions, the execution mechanism is configured to: discard changes made during the transactional execution; and to attempt to re-execute the block of instructions.
25. The apparatus of claim 15, wherein if transactional execution of the block of instructions completes without encountering an interfering data access from another process, the execution mechanism is configured to: commit changes made during the transactional execution to the architectural state of the processor; and to resume normal non-transactional execution of the program past the block of instructions.
26. The apparatus of claim 15, wherein an interfering data access can include: a store by another process to a cache line that has been load-marked by the process; and a load or a store by another process to a cache line that has been store-marked by the process.
27. The apparatus of claim 15, wherein the cache line is store-marked in the cache level closest to the processor where cache lines are coherent.
28. The apparatus of claim 15, wherein a store-marked cache line indicates at least one of the following: loads from other processes to the cache line should be monitored; stores from other processes to the cache line should be monitored; and stores to the cache line should be buffered until the transactional execution completes.
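Claims 26 through 28 characterize when an access from another process interferes with a marked cache line: any remote access conflicts with a store-marked line, while only a remote store conflicts with a load-marked line. The C sketch below expresses that test; the marked_line structure and the request encoding are assumptions made for illustration.

#include <stdbool.h>
#include <stdio.h>

typedef enum { REQ_LOAD, REQ_STORE } req_type;

typedef struct {
    bool load_mark;   /* the transaction has read this line      */
    bool store_mark;  /* a monitored store has written this line */
} marked_line;

/* Returns true when a request from another process interferes. */
static bool interferes(const marked_line *l, req_type r)
{
    if (l->store_mark)                  /* any remote access conflicts     */
        return true;
    if (l->load_mark && r == REQ_STORE) /* remote store to a loaded line   */
        return true;
    return false;
}

int main(void)
{
    marked_line loaded  = { .load_mark = true,  .store_mark = false };
    marked_line written = { .load_mark = false, .store_mark = true  };

    printf("remote load  to load-marked line : %d\n", interferes(&loaded,  REQ_LOAD));
    printf("remote store to load-marked line : %d\n", interferes(&loaded,  REQ_STORE));
    printf("remote load  to store-marked line: %d\n", interferes(&written, REQ_LOAD));
    return 0;
}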
29. A computer system that selectively monitors store instructions to support transactional execution of a process, comprising: a processor; a memory; an execution mechanism within the processor; wherein the execution mechanism is configured to support transactional execution of a block of instructions in a program, wherein changes made during the transactional execution are not committed to the architectural state of the processor until the transactional execution successfully completes; wherein upon encountering a store instruction during transactional execution, the execution mechanism is configured to, determine whether the store instruction is a monitored store instruction or an unmonitored store instruction, if the store instruction is a monitored store instruction, to perform a corresponding store operation, and to store-mark a cache line associated with the store instruction to facilitate subsequent detection of an interfering data access to the cache line from another process; and if the store instruction is an unmonitored store instruction, to perform the corresponding store operation without store-marking the cache line.
PCT/US2004/003027 2003-02-13 2004-02-03 Method and apparatus for selective monitoring of store instructions during speculative thread execution WO2004075044A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US44712803P 2003-02-13 2003-02-13
US60/447,128 2003-02-13
US10/637,167 US7269693B2 (en) 2003-02-13 2003-08-08 Selectively monitoring stores to support transactional program execution
US10/637,167 2003-08-08

Publications (2)

Publication Number Publication Date
WO2004075044A2 true WO2004075044A2 (en) 2004-09-02
WO2004075044A3 WO2004075044A3 (en) 2006-09-21

Family

ID=32912252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/003027 WO2004075044A2 (en) 2003-02-13 2004-02-03 Method and apparatus for selective monitoring of store instructions during speculative thread execution

Country Status (2)

Country Link
US (2) US7269693B2 (en)
WO (1) WO2004075044A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006071969A1 (en) * 2004-12-29 2006-07-06 Intel Corporation Transaction based shared data operations in a multiprocessor environment

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7376800B1 (en) 2004-09-14 2008-05-20 Azul Systems, Inc. Speculative multiaddress atomicity
US7552302B1 (en) 2004-09-14 2009-06-23 Azul Systems, Inc. Ordering operation
US8180971B2 (en) 2005-12-09 2012-05-15 University Of Rochester System and method for hardware acceleration of a software transactional memory
US8117605B2 (en) * 2005-12-19 2012-02-14 Oracle America, Inc. Method and apparatus for improving transactional memory interactions by tracking object visibility
US20070186056A1 (en) * 2006-02-07 2007-08-09 Bratin Saha Hardware acceleration for a software transactional memory system
US20070300238A1 (en) * 2006-06-21 2007-12-27 Leonidas Kontothanassis Adapting software programs to operate in software transactional memory environments
US20080005504A1 (en) * 2006-06-30 2008-01-03 Jesse Barnes Global overflow method for virtualized transactional memory
US9798590B2 (en) * 2006-09-07 2017-10-24 Intel Corporation Post-retire scheme for tracking tentative accesses during transactional execution
US8719807B2 (en) * 2006-12-28 2014-05-06 Intel Corporation Handling precompiled binaries in a hardware accelerated software transactional memory system
US7730265B1 (en) * 2007-03-06 2010-06-01 Oracle America, Inc. Starvation-avoiding unbounded transactional memory
US8095741B2 (en) * 2007-05-14 2012-01-10 International Business Machines Corporation Transactional memory computing system with support for chained transactions
US8117403B2 (en) * 2007-05-14 2012-02-14 International Business Machines Corporation Transactional memory system which employs thread assists using address history tables
US9009452B2 (en) 2007-05-14 2015-04-14 International Business Machines Corporation Computing system with transactional memory using millicode assists
US8095750B2 (en) * 2007-05-14 2012-01-10 International Business Machines Corporation Transactional memory system with fast processing of common conflicts
US20080320282A1 (en) * 2007-06-22 2008-12-25 Morris Robert P Method And Systems For Providing Transaction Support For Executable Program Components
US7966459B2 (en) * 2007-12-31 2011-06-21 Oracle America, Inc. System and method for supporting phased transactional memory modes
US7904668B2 (en) * 2007-12-31 2011-03-08 Oracle America, Inc. Optimistic semi-static transactional memory implementations
US9928072B1 (en) 2008-05-02 2018-03-27 Azul Systems, Inc. Detecting and recording atomic execution
US8127057B2 (en) * 2009-08-13 2012-02-28 Advanced Micro Devices, Inc. Multi-level buffering of transactional data
US8566524B2 (en) * 2009-08-31 2013-10-22 International Business Machines Corporation Transactional memory system with efficient cache support
US8782434B1 (en) 2010-07-15 2014-07-15 The Research Foundation For The State University Of New York System and method for validating program execution at run-time
US9619301B2 (en) * 2011-04-06 2017-04-11 Telefonaktiebolaget L M Ericsson (Publ) Multi-core memory model and speculative mode processor management
US9361115B2 (en) 2012-06-15 2016-06-07 International Business Machines Corporation Saving/restoring selected registers in transactional processing
US9740549B2 (en) 2012-06-15 2017-08-22 International Business Machines Corporation Facilitating transaction completion subsequent to repeated aborts of the transaction
US9384004B2 (en) 2012-06-15 2016-07-05 International Business Machines Corporation Randomized testing within transactional execution
US10437602B2 (en) 2012-06-15 2019-10-08 International Business Machines Corporation Program interruption filtering in transactional execution
US8880959B2 (en) 2012-06-15 2014-11-04 International Business Machines Corporation Transaction diagnostic block
US9448796B2 (en) 2012-06-15 2016-09-20 International Business Machines Corporation Restricted instructions in transactional execution
US8682877B2 (en) 2012-06-15 2014-03-25 International Business Machines Corporation Constrained transaction execution
US8688661B2 (en) 2012-06-15 2014-04-01 International Business Machines Corporation Transactional processing
US8966324B2 (en) 2012-06-15 2015-02-24 International Business Machines Corporation Transactional execution branch indications
US9772854B2 (en) 2012-06-15 2017-09-26 International Business Machines Corporation Selectively controlling instruction execution in transactional processing
US9336046B2 (en) 2012-06-15 2016-05-10 International Business Machines Corporation Transaction abort processing
US9348642B2 (en) 2012-06-15 2016-05-24 International Business Machines Corporation Transaction begin/end instructions
US9442737B2 (en) 2012-06-15 2016-09-13 International Business Machines Corporation Restricting processing within a processor to facilitate transaction completion
US9436477B2 (en) 2012-06-15 2016-09-06 International Business Machines Corporation Transaction abort instruction
US9367323B2 (en) 2012-06-15 2016-06-14 International Business Machines Corporation Processor assist facility
US9317460B2 (en) 2012-06-15 2016-04-19 International Business Machines Corporation Program event recording within a transactional environment
US20130339680A1 (en) 2012-06-15 2013-12-19 International Business Machines Corporation Nontransactional store instruction
US9122873B2 (en) 2012-09-14 2015-09-01 The Research Foundation For The State University Of New York Continuous run-time validation of program execution: a practical approach
US9069782B2 (en) 2012-10-01 2015-06-30 The Research Foundation For The State University Of New York System and method for security and privacy aware virtual machine checkpointing
US9977683B2 (en) * 2012-12-14 2018-05-22 Facebook, Inc. De-coupling user interface software object input from output
US9547594B2 (en) * 2013-03-15 2017-01-17 Intel Corporation Instructions to mark beginning and end of non transactional code region requiring write back to persistent storage
US9858151B1 (en) * 2016-10-03 2018-01-02 International Business Machines Corporation Replaying processing of a restarted application

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001093028A2 (en) * 2000-05-31 2001-12-06 Sun Microsystems, Inc. Marking memory elements based upon access information during speculative execution
WO2003054693A1 (en) * 2001-12-12 2003-07-03 Telefonaktiebolaget L M Ericsson (Publ) Collision handling apparatus and method

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5428761A (en) 1992-03-12 1995-06-27 Digital Equipment Corporation System for achieving atomic non-sequential multi-word operations in shared memory
JP2500101B2 (en) 1992-12-18 1996-05-29 インターナショナル・ビジネス・マシーンズ・コーポレイション How to update the value of a shared variable
US5727203A (en) * 1995-03-31 1998-03-10 Sun Microsystems, Inc. Methods and apparatus for managing a database in a distributed object operating environment using persistent and transient cache
GB2302966A (en) 1995-06-30 1997-02-05 Ibm Transaction processing with a reduced-kernel operating system
US5701432A (en) 1995-10-13 1997-12-23 Sun Microsystems, Inc. Multi-threaded processing system having a cache that is commonly accessible to each thread
US6021480A (en) 1996-06-05 2000-02-01 Compaq Computer Corporation Aligning a memory read request with a cache line boundary when the request is for data beginning at a location in the middle of the cache line
US5758051A (en) * 1996-07-30 1998-05-26 International Business Machines Corporation Method and apparatus for reordering memory operations in a processor
JP3488347B2 (en) * 1996-08-29 2004-01-19 株式会社日立製作所 Automatic address distribution system and address distribution server
US5854928A (en) * 1996-10-10 1998-12-29 Hewlett-Packard Company Use of run-time code generation to create speculation recovery code in a computer system
US5974438A (en) 1996-12-31 1999-10-26 Compaq Computer Corporation Scoreboard for cached multi-thread processes
US5918005A (en) * 1997-03-25 1999-06-29 International Business Machines Corporation Apparatus region-based detection of interference among reordered memory operations in a processor
US5941983A (en) * 1997-06-24 1999-08-24 Hewlett-Packard Company Out-of-order execution using encoded dependencies between instructions in queues to determine stall values that control issurance of instructions from the queues
US6148300A (en) 1998-06-19 2000-11-14 Sun Microsystems, Inc. Hybrid queue and backoff computer resource lock featuring different spin speeds corresponding to multiple-states
US6185577B1 (en) 1998-06-23 2001-02-06 Oracle Corporation Method and apparatus for incremental undo
US6360220B1 (en) * 1998-08-04 2002-03-19 Microsoft Corporation Lock-free methods and systems for accessing and storing information in an indexed computer data structure having modifiable entries
US6301705B1 (en) * 1998-10-01 2001-10-09 Institute For The Development Of Emerging Architectures, L.L.C. System and method for deferring exceptions generated during speculative execution
US6665708B1 (en) * 1999-11-12 2003-12-16 Telefonaktiebolaget Lm Ericsson (Publ) Coarse grained determination of data dependence between parallel executed jobs in an information processing system
US6895527B1 (en) * 2000-09-30 2005-05-17 Intel Corporation Error recovery for speculative memory accesses
US6460124B1 (en) 2000-10-20 2002-10-01 Wisconsin Alumni Research Foundation Method of using delays to speed processing of inferred critical program portions
US7149878B1 (en) * 2000-10-30 2006-12-12 Mips Technologies, Inc. Changing instruction set architecture mode by comparison of current instruction execution address with boundary address register values
US6463511B2 (en) 2000-12-29 2002-10-08 Intel Corporation System and method for high performance execution of locked memory instructions in a system with distributed memory and a restrictive memory model
JP3729087B2 (en) * 2001-05-23 2005-12-21 日本電気株式会社 Multiprocessor system, data-dependent speculative execution control device and method thereof
US6681311B2 (en) * 2001-07-18 2004-01-20 Ip-First, Llc Translation lookaside buffer that caches memory type information
US6918012B2 (en) 2001-08-28 2005-07-12 Hewlett-Packard Development Company, L.P. Streamlined cache coherency protocol system and method for a multiple processor single chip device
US20030066056A1 (en) 2001-09-28 2003-04-03 Petersen Paul M. Method and apparatus for accessing thread-privatized global storage objects
US7120762B2 (en) * 2001-10-19 2006-10-10 Wisconsin Alumni Research Foundation Concurrent execution of critical sections by eliding ownership of locks
US6941449B2 (en) * 2002-03-04 2005-09-06 Hewlett-Packard Development Company, L.P. Method and apparatus for performing critical tasks using speculative operations
US6785789B1 (en) 2002-05-10 2004-08-31 Veritas Operating Corporation Method and apparatus for creating a virtual data copy
US7269717B2 (en) 2003-02-13 2007-09-11 Sun Microsystems, Inc. Method for reducing lock manipulation overhead during access to critical code sections
US7089374B2 (en) 2003-02-13 2006-08-08 Sun Microsystems, Inc. Selectively unmarking load-marked cache lines during transactional program execution
US6862664B2 (en) 2003-02-13 2005-03-01 Sun Microsystems, Inc. Method and apparatus for avoiding locks by speculatively executing critical sections

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001093028A2 (en) * 2000-05-31 2001-12-06 Sun Microsystems, Inc. Marking memory elements based upon access information during speculative execution
WO2003054693A1 (en) * 2001-12-12 2003-07-03 Telefonaktiebolaget L M Ericsson (Publ) Collision handling apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
STEFFAN J G ET AL: "The potential for using thread-level data speculation to facilitate automatic parallelization" HIGH-PERFORMANCE COMPUTER ARCHITECTURE, 1998. PROCEEDINGS., 1998 FOURTH INTERNATIONAL SYMPOSIUM ON LAS VEGAS, NV, USA 1-4 FEB. 1998, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 1 February 1998 (1998-02-01), pages 2-13, XP010266833 ISBN: 0-8186-8323-6 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006071969A1 (en) * 2004-12-29 2006-07-06 Intel Corporation Transaction based shared data operations in a multiprocessor environment
GB2437211A (en) * 2004-12-29 2007-10-17 Intel Corp Transaction based shared data operation in a multiprocessor environment
JP2008525923A (en) * 2004-12-29 2008-07-17 インテル・コーポレーション Transaction-based shared data operations in a multiprocessor environment
GB2437211B (en) * 2004-12-29 2008-11-19 Intel Corp Transaction based shared data operation in a multiprocessor environment
JP2011044161A (en) * 2004-12-29 2011-03-03 Intel Corp Transaction based shared data operations in multiprocessor environment
US7984248B2 (en) 2004-12-29 2011-07-19 Intel Corporation Transaction based shared data operations in a multiprocessor environment
JP4764430B2 (en) * 2004-12-29 2011-09-07 インテル・コーポレーション Transaction-based shared data operations in a multiprocessor environment
US8176266B2 (en) 2004-12-29 2012-05-08 Intel Corporation Transaction based shared data operations in a multiprocessor environment
US8458412B2 (en) 2004-12-29 2013-06-04 Intel Corporation Transaction based shared data operations in a multiprocessor environment
DE112005003874B3 (en) * 2004-12-29 2021-04-01 Intel Corporation Transaction-based processing operation with shared data in a multiprocessor environment

Also Published As

Publication number Publication date
US7269693B2 (en) 2007-09-11
US20040187115A1 (en) 2004-09-23
US7818510B2 (en) 2010-10-19
US20070271445A1 (en) 2007-11-22
WO2004075044A3 (en) 2006-09-21

Similar Documents

Publication Publication Date Title
US7818510B2 (en) Selectively monitoring stores to support transactional program execution
US7904664B2 (en) Selectively monitoring loads to support transactional program execution
US6938130B2 (en) Method and apparatus for delaying interfering accesses from other threads during transactional program execution
US7389383B2 (en) Selectively unmarking load-marked cache lines during transactional program execution
US7206903B1 (en) Method and apparatus for releasing memory locations during transactional execution
US6862664B2 (en) Method and apparatus for avoiding locks by speculatively executing critical sections
US7398355B1 (en) Avoiding locks by transactionally executing critical sections
US7500086B2 (en) Start transactional execution (STE) instruction to support transactional program execution
US7930695B2 (en) Method and apparatus for synchronizing threads on a processor that supports transactional memory
US20080126883A1 (en) Method and apparatus for reporting failure conditions during transactional execution
EP1913473A1 (en) Avoiding locks by transactionally executing critical sections
US20040163082A1 (en) Commit instruction to support transactional program execution
US7216202B1 (en) Method and apparatus for supporting one or more servers on a single semiconductor chip
US7418577B2 (en) Fail instruction to support transactional program execution
US8065670B2 (en) Method and apparatus for enabling optimistic program execution

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
WWE Wipo information: entry into national phase

Ref document number: 1640/DELNP/2006

Country of ref document: IN