US20040073905A1 - Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit - Google Patents

Info

Publication number
US20040073905A1
Authority
US
United States
Prior art keywords
event
execution
quiesce
program instructions
change
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/680,375
Inventor
Joel Emer
Rebecca Stamm
Bruce Edwards
Matthew Reilly
Craig Zilles
Tryggve Fossum
Christopher Joerg
James Hicks
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US10/680,375
Publication of US20040073905A1
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. (change of name; see document for details). Assignors: COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P.
Status: Abandoned

Classifications

    • G06F 9/30087 Arrangements for executing machine instructions: synchronisation or serialisation instructions
    • G06F 9/3009 Arrangements for executing machine instructions: thread control instructions
    • G06F 9/3851 Concurrent instruction execution: instruction issuing from multiple instruction streams, e.g. multistreaming
    • G06F 9/485 Multiprogramming arrangements: task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/526 Multiprogramming arrangements: mutual exclusion algorithms
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • a “thread” is a stream of instructions being executed by a processor.
  • Software that is multithreaded has multiple threads of control that cooperate to perform a task.
  • a simultaneous multithreaded (SMT) central processor unit provides, on a single CPU, the capability of executing instructions from multiple threads simultaneously.
  • TPU: thread processing unit.
  • a 4-way issue CPU might have two functional units executing instructions from one thread, while the other two functional units are executing instructions from an unrelated thread. This is accomplished by providing enough registers and other process-specific resources on the CPU to support as many threads as can run simultaneously, and then choosing among the threads to determine which specific instructions will be executed.
  • the threads may be related, where they are cooperatively doing work, or they may be entirely unrelated.
  • FIG. 1 compares sample execution sequences for superscalar, multithreading, and simultaneous multithreading architectures.
  • Each row represents the issue slots for a single execution cycle: a filled box indicates that the processor found an instruction to execute in that issue slot on that cycle.
  • An empty box denotes an unused slot.
  • the unused slots can be characterized as horizontal or vertical waste. Horizontal waste occurs when some, but not all, of the issue slots in a cycle can be used. It typically occurs because of poor instruction-level parallelism. Vertical waste occurs when a cycle goes completely unused. This can be caused by a long latency instruction, such as a memory access, that inhibits further instruction issue.
  • Sequence (a) 2 corresponds to a conventional superscalar. As in all superscalars, it is executing a single program, or thread, from which it attempts to find multiple instructions to issue each cycle. When it cannot, the issue slots go unused, and both horizontal 3 A and vertical waste 3 B are incurred.
  • Sequence (b) 4 corresponds to a multithreaded architecture.
  • Multithreaded processors contain hardware state, i.e., a program counter and registers, for several threads. On any given cycle, a processor executes instructions from just one of the threads. On the next cycle, it switches to a different thread context and executes instructions from the new thread.
  • the primary advantage of multithreaded processors is that they better tolerate long-latency operations, effectively eliminating vertical waste. However, they cannot remove horizontal waste. Consequently, as instruction issue width continues to increase, multithreaded architectures will ultimately suffer the same fate as superscalars: they will be limited by the instruction-level parallelism in a single thread.
  • Sequence (c) 6 corresponds to a simultaneous multithreaded architecture and shows how each cycle in an SMT processor selects instructions for execution from all threads. It exploits instruction-level parallelism by selecting instructions from any thread that can potentially issue. The processor then dynamically schedules machine resources among the instructions, providing the greatest chance for the highest hardware utilization. If one thread has high instruction-level parallelism, that parallelism can be satisfied; if multiple threads each have low instruction-level parallelism, they can be executed simultaneously to compensate. In this way, SMT can recover issue slots lost to both horizontal and vertical waste.
  • Simultaneous multithreading is advantageous because it allows the CPU to get better throughput. Resources which would lie idle due to limited parallelism in one thread can be utilized by other threads.
  • a software program can be compiled, or decomposed, into multiple threads, with the purpose of achieving improved performance through parallel execution of those threads.
  • the threads may be executed on different processors in a multiprocessor, or they may be executed on different thread processing units within an SMT CPU.
  • An integral part of many locking protocols is a busy wait loop, often referred to as a “spin lock.”
  • In a spin lock, a process loops, looking at a particular memory location, i.e., the lock, and waiting for it to change to a specific value before proceeding. Once the value has changed, the process is then free to attempt to obtain the lock via an atomic update of the location.
  • One multithreaded computer uses fine-grained multithreading, which is different from SMT, and addresses the synchronization problem with a hardware retry which traps the thread after some number of failures and deschedules it. This is described in “Exploiting Heterogeneous Parallelism on a Multithreaded Multiprocessor,” 1992, which can be found at www.tera.com/www/archives/library/psdocs.html.
  • U.S. Pat. No. 5,524,247 is a software patent on scheduling to avoid locks. It does not involve hardware and it is not related to SMT architecture.
  • the present invention resolves the problem of spin-lock in an SMT architecture by halting, or “quiescing,” spin-locking threads while they are waiting for some event, i.e., the availability of a lock.
  • a method for halting execution of a program's instructions while the program is waiting for one or more events to occur in a simultaneous multithreaded processor or multiprocessor environment includes arming an event monitor associated with the program by identifying one or more events to be monitored. Each thread preferably has its own event monitor.
  • An event may be, for example, a modification to some identified memory location or group of locations, such as a change of access state or a change of value stored in the location.
  • a change of access state may be, for example, from shared to exclusive. Such an event is typically caused by another program.
  • a change of value can be observed by monitoring a memory bus.
  • In one embodiment, a write to the identified memory location, rather than observing a change in actual value stored therein, is sufficient to recognize the event.
  • the expiration of a timer is another example of an event that may be monitored.
  • the arming of an event monitor is performed by executing an arm instruction which identifies the memory location to be monitored.
  • the physical address of the memory location is recorded in a working register associated with the program, and an indicator such as a flag is set to a first state which enables the event monitor to monitor for the event.
  • the indicator is set to a second state if a change to the memory location whose address is recorded in the working register is observed by the event monitor.
  • a lock value is loaded from the identified memory location by the same arm instruction.
  • the method further includes requesting, by executing a quiesce instruction after executing the arm instruction, that the program be halted until the event is observed by the event monitor. There is no requirement that the program be halted. However, if execution of the program is halted, the event monitor monitors for the event. Subsequent to observation of the event by the event monitor, but not necessarily immediately after, execution of the program is resumed.
  • While the program is halted, program instructions are fetched into an instruction buffer but are not allowed to propagate into the instruction pipeline.
  • the instruction buffer could be managed in various ways. For example, the percentage or absolute number of the thread's instructions allowed into the buffer could be limited, or different instruction buffers could be allocated to different threads.
  • A timer associated with the program is set to time a predetermined time interval, and started. If program execution has not been resumed for other reasons, for example if the monitored event has not yet occurred, program execution is resumed upon expiration of the timer. Preferably, the timer is stopped if execution of the program is resumed due to observation of the event by the event monitor.
  • Halting execution of, or quiescing, the program results in a reduction of power consumption, and allows other executing programs to utilize available resources.
  • FIG. 1 is a schematic diagram comparing sample execution sequences for three different architectures.
  • FIG. 2 is a block diagram of a preferred embodiment of the present invention.
  • FIG. 3 is a schematic diagram illustrating the operation of a fetch thread chooser of a preferred embodiment of the present invention.
  • FIG. 4 is a schematic diagram illustrating the operation of a map thread chooser of a preferred embodiment of the present invention.
  • FIG. 5 is a flowchart demonstrating the execution of the arm instruction of a preferred embodiment of the present invention.
  • FIG. 6 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor watches the status of an identified block of memory.
  • FIG. 7 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor watches memory address and control lines.
  • FIG. 8 is a flowchart demonstrating the execution of the quiesce instruction of a preferred embodiment of the present invention.
  • FIG. 9 is a flowchart demonstrating that a quiesce instruction does not need to follow an arm instruction.
  • the present invention allows an SMT processor to execute more useful instructions per processor cycle, than would an SMT processor without a quiescent state, resulting in improved overall processor performance.
  • LDL_ARM loads a sign-extended longword from memory to a register and arms the event monitor.
  • LDQ_ARM loads a quadword from memory to a register and arms the event monitor.
  • QUIESCE is a conditional instruction, i.e., a request to quiesce, or halt, execution of the thread executing the QUIESCE.
  • FIG. 2 is a block diagram of a preferred embodiment of the present invention.
  • An SMT CPU 100 can execute several threads simultaneously. While a TPU is somewhat abstract, there are definite physical components which belong to each TPU.
  • The CPU 100 comprises multiple TPUs, of which two, TPU #1 101A and TPU #N 101N, are shown. Only the details of TPU #1 101A are shown, for demonstrative purposes.
  • Bus 135 connects the various TPUs to the memory 137 .
  • Each TPU has an event monitor 109 to monitor for events identified in an event identification register 103 .
  • this event identification register 103 is a watch_physical_address register which holds the address of a lock 139 located in memory 137 , as indicated by dashed arrow 141 .
  • the lock address is loaded into the event identification register 103 upon execution of an arm instruction 113 .
  • an “armed” watch_flag indication 105 is set to a state to indicate that the event monitor 109 is now armed, and the event monitor 109 begins monitoring for the identified event.
  • Upon execution of a quiesce request instruction 117, quiesce logic or execution scheduler 110 starts a quiesce timer 107, and if the armed indication 105 is set, i.e., the event monitor 109 is armed, the quiesce logic 110 sets the TPU's state 111 to quiesce mode, that is, the TPU 101A is quiesced.
  • Upon observation of the event by the event monitor 109, for example, a change to the lock 139 referenced in the event identification register 103, the event monitor clears the armed indication 105 and notifies the quiesce logic 110 which sets the TPU's state 111 to a non-quiesce mode, that is, the TPU resumes execution. Note that even if the TPU had not quiesced, observation of an identified event by the event monitor 109 will clear the armed indicator 105. Note also, that although not shown, expiration of the timer could be an identified event for which an event monitor could be armed.
  • FIG. 3 is a schematic diagram which illustrates the operation of a fetch thread chooser or selection circuit of a preferred embodiment of the present invention.
  • Each TPU or thread has a corresponding program counter (PC) 305 , which indicates, for the TPU, the next instruction to be fetched from an instruction cache 311 .
  • a fetch thread multiplexor 303 selects from the PCs 305 and passes the selected PC on to the instruction cache 311 .
  • the fetch thread multiplexor 303 selects a thread PC based on control signals 302 generated by a fetch thread chooser 301 . Based on the quiesce states 309 of each thread, as well as various other control signals 307 , the fetch thread chooser 301 selects a thread.
  • the fetch thread chooser 301 selects only PCs associated with non-quiescent threads.
  • the fetch thread chooser 301 may select a PC corresponding to a quiescent thread based on availability of unused instruction buffer space allocated to that thread.
  • FIG. 4 is a schematic diagram illustrating a preferred operation of a map thread chooser or selection circuit, similar to that of the fetch thread chooser described above.
  • each TPU or thread has an instruction buffer 355 .
  • a single buffer may hold instructions for all executing threads, where certain portions of the buffer are allocated for certain threads, or alternatively, where the thread instructions are interspersed within the buffer and identified with their threads.
  • a map thread multiplexor 353 selects from the buffers 355 and passes instructions from the selected buffer on to a mapper 361 , which maps a “virtual” register named in an instruction to a physical register on the CPU.
  • the map thread multiplexor 353 selects from a thread based on control signals 352 generated by a map thread chooser 351 . Based on the quiesce states 309 of each thread, as well as various other control signals 357 which may comprise some or all of the same control signals 307 of FIG. 3, the map thread chooser 351 selects a thread.
  • the map thread chooser 351 selects only instructions from threads which are in a non-quiescent state.
  • Line 1: LDQ_ARM R1, (R5)
  • In this example, the virtual address of the lock 139 is held in register R5.
  • At line 1, LDQ_ARM computes the lock's physical address from the contents of register R5, records that physical address in the event identification register 103, i.e., the watch_physical_address register, and loads the lock value from the physical address in memory into register R1.
  • At this time the hardware also sets the armed, or watch_flag, indication 105, and the event monitor 109 monitors for a change to the memory location recorded in watch_physical_address. If any such change is observed, watch_flag is cleared.
  • FIG. 5 is a flowchart 10 illustrating the operation of the LDx_ARM instruction.
  • a preferred format of the instruction is “LDx_ARM Ra, (Rb).”
  • the virtual address of the memory location, i.e., the lock 139 (FIG. 2), to be monitored by the event monitor 109 is in register Rb, where Rb is some designated register.
  • the value stored in the lock is read and loaded into the register designated by Ra.
  • the lock value is fetched from memory, sign-extended for LDL_ARM, and written to register Ra (step 12 ). If the LDx_ARM instruction encounters an exception (Step 14 ), it is treated just as a normal load instruction (Step 15 ).
  • When a LDx_ARM instruction is executed without faulting, the processor records the target physical address in a per-processor watch_physical_address register (step 16) and sets the per-processor watch_flag (step 18).
  • The program, at line 2, then tests the value of the lock to see if it is available. If so, the program branches away to GetLock where it will attempt to get the lock. If not, the program continues to line 3.
  • At line 3, the QUIESCE instruction is executed. If watch_flag is still set, the TPU ceases executing instructions from the program, i.e., it quiesces. If not, execution continues immediately at line 4.
  • One way of recognizing that a memory location has changed value in a multiprocessor is to observe that another TPU has changed the access state of the memory location, for example, from “shared” to “exclusive,” while in the same CPU, hardware monitors the addresses on the write bus.
  • each cache block is in some state.
  • one state might be “read-shared,” i.e., SHARED, wherein many processors have read access to any memory location in the block.
  • Another state might be “writable,” i.e., EXCLUSIVE, wherein only one processor has access to the block, and that is read and write access.
  • a quiescing process which is watching a memory location in the block wakes up, possibly before the actual write, but not before it can read the data.
  • the quiescent processor wakes up, that is, it resumes executing its thread, by watching the state of a block, rather than the particular location.
  • FIG. 6 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor watches the status of an identified block of memory.
  • Here, a multiprocessor system is shown, comprising multiple CPUs 201A-201M. Though not necessary, for exemplary purposes, each CPU is an SMT CPU.
  • Each CPU 201A-201M has several TPUs (101A-101P shown). Each TPU has a respective event monitor 109A-109P.
  • the memory system 137 has a master memory status buffer 401 , which indicates a status such as SHARED or EXCLUSIVE for each memory block.
  • each CPU has a CPU-wide memory status buffer 403 , which contains copies of the information in the master memory status buffer 401 pertinent to that CPU.
  • a message is preferably sent out to those CPUs which need to know about the change, preferably over an inter-CPU messaging bus 405 .
  • The event monitor 109A-109P of this embodiment monitors the inter-CPU messaging bus 405.
  • When the status of an identified block, or of a block containing an identified memory location or lock, changes to EXCLUSIVE, for example, the corresponding event monitor is triggered to reset the watch_flag indicator 105 and notify the quiesce logic 110 that the event has occurred.
  • FIG. 7 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor 109 watches memory address lines 135A and control lines 135B.
  • Comparator 180 compares the watch_physical_address 103 with the address on the memory bus 135 address lines 135A.
  • The comparator 180 is only enabled for write operations, for example, when WRITE is asserted on a read/write control line 135B.
  • The output 181 of the comparator 180 indicates whether a write to the identified location, i.e., the monitored event, has occurred.
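  • In C-like terms, the comparator logic amounts to the following sketch (the structure and names are illustrative, not the patent's):
        #include <stdint.h>
        #include <stdbool.h>

        /* One memory-bus cycle as seen by the event monitor 109. */
        struct bus_cycle {
            uintptr_t address;    /* address lines 135A     */
            bool      is_write;   /* WRITE asserted on 135B */
        };

        /* Comparator 180: output 181 is asserted when a write targets the address
         * held in the watch_physical_address register 103. */
        bool write_to_watched_location(uintptr_t watch_physical_address, struct bus_cycle cycle)
        {
            return cycle.is_write && cycle.address == watch_physical_address;
        }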
  • FIG. 8 is a flowchart 20 illustrating the operation of the QUIESCE instruction 117 of FIG. 2.
  • the preferred format is simply “QUIESCE” with no parameters.
  • The watch_flag indication 105 is checked in step 22, and if it is clear, nothing is done. Thus, the QUIESCE instruction is really a conditional quiesce, or a request to quiesce.
  • If the watch_flag 105 is set, the implementation-specific quiesce timer 107 is set to time some implementation-specific finite period of time (step 24) and execution of the thread is halted (step 26), i.e., the TPU 101A is quiesced.
  • An exemplary period of time is between 10,000 and 100,000 machine clock cycles. For some quiescing threads, the timer is disabled.
  • The event monitor 109 now monitors the memory location identified in the watch_physical_address register 103.
  • The event monitor may always be monitoring for the identified event, regardless of the state of the watch_flag indicator; it simply would take no action if the watch_flag were not set. If the event is observed (step 28), the watch_flag is cleared and the quiesce logic 110 notified. If the quiesce period ends before the quiesce timer 107 expires, the timer 107 must be stopped (step 34) to prevent it from clearing watch_flag after a future LDx_ARM. Finally, execution of the program is resumed (step 36).
  • If the event is not observed at step 28, the timer is monitored at step 30. If it has not expired, the program remains quiesced and the event monitor continues to monitor for the event at step 28. Note that, although steps 28 and 30 are shown sequentially in the flowchart of FIG. 8, they are performed by hardware and may in fact be performed in parallel. Finally, if the timer expires before the event is observed, the watch_flag is cleared (step 32) and again, program execution is resumed at step 36.
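  • A behavioral sketch of flowchart 20 in C (for illustration only; the structure, field names and timer helpers below are assumptions rather than the patent's terms):
        #include <stdbool.h>

        extern void start_quiesce_timer(void);   /* stand-in for quiesce timer 107 */
        extern void stop_quiesce_timer(void);

        struct tpu_quiesce_state {
            bool watch_flag;       /* armed indication 105          */
            bool quiesced;         /* TPU state 111                 */
            bool event_observed;   /* set by the event monitor 109  */
            bool timer_expired;    /* quiesce timer 107 has elapsed */
        };

        void quiesce_instruction(struct tpu_quiesce_state *tpu)
        {
            if (!tpu->watch_flag)                  /* step 22: not armed, act as a NOP     */
                return;
            start_quiesce_timer();                 /* step 24                              */
            tpu->quiesced = true;                  /* step 26: thread execution halted     */
            while (tpu->quiesced) {                /* steps 28/30 are parallel in hardware */
                if (tpu->event_observed) {         /* step 28: monitored event seen        */
                    tpu->watch_flag = false;
                    stop_quiesce_timer();          /* step 34: avoid a stale timeout       */
                    tpu->quiesced = false;         /* step 36: resume execution            */
                } else if (tpu->timer_expired) {   /* steps 30/32: timeout                 */
                    tpu->watch_flag = false;
                    tpu->quiesced = false;         /* step 36                              */
                }
            }
        }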
  • the quiesce timer 107 is useful and/or necessary for several reasons.
  • the timeout enables the implementation of a backoff algorithm, where a process can deschedule itself after some period of time if it has not obtained the lock.
  • the timer prevents a processor from deadlocking if there is a coding error.
  • For example, the code updating the memory location 139 takes an access violation so that the lock is never unlocked. The quiesce timeout allows the waiting processor to wake up and discover the problem with checking code.
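  • For example, software can combine the quiesce timeout with a backoff policy along the following lines (a C sketch; the intrinsics, deschedule_self() and the retry limit are illustrative assumptions):
        #include <stdint.h>
        #include <stdbool.h>

        extern uint64_t __ldq_arm(volatile uint64_t *lock);   /* hypothetical LDQ_ARM intrinsic */
        extern void     __quiesce(void);                      /* hypothetical QUIESCE intrinsic */
        extern bool     try_lock(volatile uint64_t *lock);    /* atomic attempt to take the lock */
        extern void     deschedule_self(void);                /* e.g., yield to the scheduler   */

        void acquire_with_backoff(volatile uint64_t *lock, int max_waits)
        {
            int waits = 0;
            for (;;) {
                if (__ldq_arm(lock) == 0 && try_lock(lock))
                    return;                     /* obtained the lock                           */
                __quiesce();                    /* wakes on a change to the lock or on timeout */
                if (++waits >= max_waits) {     /* backoff: give the processor up for a while  */
                    deschedule_self();
                    waits = 0;
                }
            }
        }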
  • An interrupt causes a processor to end a quiescent period and immediately start executing the interrupt servicing routine (ISR).
  • That ISR may return to the QUIESCE instruction only if watch_flag is guaranteed to be clear. If it is not, the ISR must return to the instruction after the QUIESCE, since the value of watch_physical_address may have been changed by a LDx_ARM executed while servicing the interrupt.
  • the ISR does not have to be started immediately after the QUIESCE.
  • the hardware may choose to delay execution of the ISR until some later point in the instruction stream.
  • Register R5 contains the address of a lock.
  • The program is spin-locking on the lock until the lock holds the value 0.
  • Register R0 is loaded with the value of the lock by the LDQ_L instruction.
  • the flowchart 50 of FIG. 9 demonstrates this concept.
  • A LDx_ARM 52 may be followed by a conditional branch 54.
  • On one path, a QUIESCE 58 is executed, whereas on the taken path 60 no matching QUIESCE is executed.
  • the thread may fail to QUIESCE for a variety of reasons.
  • If a memory reference is executed between the LDx_ARM and the QUIESCE, the TPU may always fail to quiesce on some implementations. Otherwise, a direct-mapped translation buffer could thrash, or the memory reference could change the contents of the cache upon which the implementation might depend.
  • Some instructions, such as floating-point instructions, executed between the LDx_ARM and the QUIESCE, may cause a TPU to always fail to quiesce on some implementations due to, for example, an Illegal Instruction Trap.
  • the TPU may fail to quiesce because an instruction with an unused function code is unpredictable.
  • the watch_flag and watch_physical_address register are loaded simultaneously with the reading of the value of the lock. If the lock value becomes unlocked before the QUIESCE is executed, watch_flag is cleared because the watched location has been modified, preventing the TPU from quiescing needlessly. Of course, if the watch_flag is not cleared due to the change in the lock, the quiescent timer will eventually time out and end the quiescent period.
  • Because watch_flag and watch_physical_address are implicitly written by LDx_ARM and implicitly read by QUIESCE, any speculative execution of those instructions must preserve the read-order and write-order of watch_flag and watch_physical_address, as intended in the original program.
  • These instructions are assigned codes such that a program utilizing them is still functional even when executed on older machines.
  • Opcodes for the arm and quiesce instructions are chosen such that they are memory format instructions, and appear as NOPs to earlier architectures.
  • AMASK is an instruction which returns a value indicative of resources on a CPU, i.e., the CPU's architecture.
  • Code which depends on the register value loaded by the LDx_ARM must execute an ordinary load before the LDx_ARM, to accomplish the load operation on the older machines.
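  • A hedged C sketch of such a compatibility check, using a hypothetical __amask() intrinsic for the AMASK instruction and an assumed (not real) feature bit for the quiesce extension:
        #include <stdint.h>
        #include <stdbool.h>

        extern uint64_t __amask(uint64_t mask);   /* AMASK: clears the bits of implemented features */

        #define QUIESCE_FEATURE_BIT (1ull << 12)  /* illustrative bit position only */

        /* True when the CPU implements the arm/quiesce extension: AMASK clears
         * the corresponding bit, so a zero result means the feature is present. */
        bool quiesce_supported(void)
        {
            return (__amask(QUIESCE_FEATURE_BIT) & QUIESCE_FEATURE_BIT) == 0;
        }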
  • In an alternative embodiment, a QUIESCE instruction starts a timer and unconditionally quiesces, the timeout being the event upon which the event monitor wakes up the quiescing TPU. There is no arm instruction. This was found not to obtain satisfactory speedups in execution.
  • Another embodiment also has no explicit LDx_ARM instruction.
  • the QUIESCE instruction performs a load. If a QUIESCE is executed when the watch_flag is clear, it loads watch_physical_address, sets watch_flag and does not quiesce the processor. Thus, it acts as an LDx_ARM instruction.
  • the load data can be tested by subsequent instructions to find out if the lock is held.
  • For a “second” quiesce it is unclear what that load means or when it is loaded. It is preferable to load the lock value at the end of the QUIESCE period, to see what it has changed to, but this is very difficult to implement.
  • the advantage of this embodiment is that it requires just one instruction. However, it is more difficult to understand and implement. For example, as discussed above, there are two flavors of the instruction, a “first” and a “second.” Furthermore, it is not clear how meaningful data would be returned to the second QUIESCE. Finally, specifying what can or cannot happen between QUIESCE instructions may be unmanageable.
  • LDQ_ARM is a load and QUIESCE is a store, of sorts.
  • A sample code sequence would appear as follows:
        LDQ_ARM R0, (R5)        ; this is a load
        BEQ R0, getlock
        QUIESCE R0, (R31)       ; this is a “store”
    getlock:
  • the LDx_ARM functionality is overloaded on the LDx_L instruction. Whenever a LDx_L is executed, the watch_physical_address and the watch_flag are set, in addition to the lock_flag and the lock_physical_address.
  • the lock_flag and the lock_physical_address could be used both for LDx_L/STx_C functionality and for ARM/QUIESCE functionality. In this case, QUIESCE would watch for the clearing of the lock_flag.
  • the same LDx_L would not be used both as the partner of a QUIESCE and the partner of a STx_C.
  • LDx_ARM functionality could be specified using the low address bit of the LDx_L to specify ARM. If only the lock registers are used, no differentiation in the LDx_L instruction is needed.
  • LDx_L: load lock instruction
  • STx_C: store conditional instruction
  • the QUIESCE instruction loads a value, and the processor quiesces based on that value.
  • a quiesce instruction formatted as “QUIESCE Ra, (Rb)” loads register Ra with the value stored in the memory address in register Rb.
  • The thread quiesces if the value in Ra is non-zero, and the instruction is effectively a NOP if the value in Ra is zero.
  • the QUIESCE instruction also loads the watch-flag and the watch_physical_address.
  • This embodiment uses QUIESCE as follows:
  • The QUIESCE instruction in the above code sequence translates the virtual address in register R5 and reads the lock value from that physical address. It then compares that lock value with the contents of register R0, which was previously loaded by a standard load instruction (the LDQ instruction here) preceding the QUIESCE. If the two values are equal, the QUIESCE succeeds and the thread quiesces. If they are not equal, the QUIESCE has the effect of a NOP and does not quiesce the thread.
  • While the processor is asleep, i.e., quiescing, the hardware watches the physical address as calculated when the QUIESCE executed. This is analogous to the watch_physical_address register as defined in other instructions, but is entirely private to the hardware, that is, it is not visible to the software at all. The quiesce period ends if some write access occurs to that physical address.
  • However, this approach presents a very complicated instruction, unlike any other, requiring a load from memory, a read from a register, and a compare, all in the one instruction.
  • Such an instruction is difficult to implement, as it introduces a datapath completely unlike anything existing in the current Alpha architecture.
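  • A rough behavioral sketch of this last variant in C (purely illustrative; the helper names are assumptions and the patent gives no code for it):
        #include <stdint.h>

        extern uintptr_t translate(uintptr_t virtual_addr);          /* address translation stand-in */
        extern uint64_t  load_quadword(uintptr_t physical_addr);     /* memory read stand-in         */
        extern void      halt_until_write(uintptr_t physical_addr);  /* hardware-private watch       */

        /* "QUIESCE Ra,(Rb)" in the load-and-compare form: reload the lock, compare it
         * with the previously loaded value in Ra, and quiesce only while it is unchanged;
         * otherwise the instruction has the effect of a NOP. */
        void quiesce_compare(uint64_t previously_loaded, uintptr_t lock_virtual_addr)
        {
            uintptr_t pa = translate(lock_virtual_addr);
            if (load_quadword(pa) == previously_loaded)
                halt_until_write(pa);   /* quiesce period ends on a write to this physical address */
        }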

Abstract

Execution of a program's instructions in a simultaneous multithreaded processor is halted while the program is waiting for one or more events to occur by first arming an event monitor upon an arm instruction, that is, identifying to the event monitor one or more events to be monitored, such as a modification to a value or state of an identified memory location or group of locations, and setting a watch flag to enable the event monitor. Upon execution of a quiesce request instruction, the program quiesces if the watch flag is set, and a timer is started. Upon observation by the event monitor of an identified event, or upon expiration of the timer, the watch flag is cleared and execution of the program resumes.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 10/293,975 filed Nov. 11, 2002, which is a continuation of U.S. application Ser. No. 09/411,194 filed Oct. 1, 1999, now U.S. Pat. No. 6,493,741 issued Dec. 10, 2002. The entire teachings of the foregoing applications are incorporated herein by reference.[0001]
  • BACKGROUND OF THE INVENTION
  • A “thread” is a stream of instructions being executed by a processor. Software that is multithreaded has multiple threads of control that cooperate to perform a task. [0002]
  • A simultaneous multithreaded (SMT) central processor unit (CPU) provides, on a single CPU, the capability of executing instructions from multiple threads simultaneously. [0003]
  • On a simultaneously-multithreaded processor, the hardware provides facilities for executing multiple threads as if each thread were executing on its own CPU. This abstract thread processor is called a thread processing unit, or TPU. To the outside world, a TPU has all the capabilities of a conventional CPU. It holds a full process context while a process or thread is executing on that TPU. The term “processor” is used herein to refer either to a TPU or a conventional CPU. [0004]
  • For example, a 4-way issue CPU might have two functional units executing instructions from one thread, while the other two functional units are executing instructions from an unrelated thread. This is accomplished by providing enough registers and other process-specific resources on the CPU to support as many threads as can run simultaneously, and then choosing among the threads to determine which specific instructions will be executed. The threads may be related, where they are cooperatively doing work, or they may be entirely unrelated. [0005]
  • FIG. 1 compares sample execution sequences for superscalar, multithreading, and simultaneous multithreading architectures. Each row represents the issue slots for a single execution cycle: a filled box indicates that the processor found an instruction to execute in that issue slot on that cycle. An empty box denotes an unused slot. The unused slots can be characterized as horizontal or vertical waste. Horizontal waste occurs when some, but not all, of the issue slots in a cycle can be used. It typically occurs because of poor instruction-level parallelism. Vertical waste occurs when a cycle goes completely unused. This can be caused by a long latency instruction, such as a memory access, that inhibits further instruction issue. [0006]
  • Sequence (a) 2 corresponds to a conventional superscalar. As in all superscalars, it is executing a single program, or thread, from which it attempts to find multiple instructions to issue each cycle. When it cannot, the issue slots go unused, and both horizontal 3A and vertical waste 3B are incurred. [0007]
  • Sequence (b) 4 corresponds to a multithreaded architecture. Multithreaded processors contain hardware state, i.e., a program counter and registers, for several threads. On any given cycle, a processor executes instructions from just one of the threads. On the next cycle, it switches to a different thread context and executes instructions from the new thread. The primary advantage of multithreaded processors is that they better tolerate long-latency operations, effectively eliminating vertical waste. However, they cannot remove horizontal waste. Consequently, as instruction issue width continues to increase, multithreaded architectures will ultimately suffer the same fate as superscalars: they will be limited by the instruction-level parallelism in a single thread. [0008]
  • Sequence (c) 6 corresponds to a simultaneous multithreaded architecture and shows how each cycle in an SMT processor selects instructions for execution from all threads. It exploits instruction-level parallelism by selecting instructions from any thread that can potentially issue. The processor then dynamically schedules machine resources among the instructions, providing the greatest chance for the highest hardware utilization. If one thread has high instruction-level parallelism, that parallelism can be satisfied; if multiple threads each have low instruction-level parallelism, they can be executed simultaneously to compensate. In this way, SMT can recover issue slots lost to both horizontal and vertical waste. [0009]
  • Simultaneous multithreading is advantageous because it allows the CPU to get better throughput. Resources which would lie idle due to limited parallelism in one thread can be utilized by other threads. [0010]
  • A software program can be compiled, or decomposed, into multiple threads, with the purpose of achieving improved performance through parallel execution of those threads. The threads may be executed on different processors in a multiprocessor, or they may be executed on different thread processing units within an SMT CPU. [0011]
  • When programs are multithreaded in this way, locking protocols are used to control access to shared data. Assigning a special memory location, called a lock, to a section of data, controls access to that section of data. A thread can only update the data when it owns the lock. [0012]
  • An integral part of many locking protocols is a busy wait loop, often referred to as a “spin lock.” In a spin lock, a process loops, looking at a particular memory location, i.e., the lock, and waiting for it to change to a specific value before proceeding. Once the value has changed, the process is then free to attempt to obtain the lock via an atomic update of the location. [0013]
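  • By way of illustration only, a conventional busy-wait acquire of this kind can be sketched in C with the C11 atomics library; the function and variable names here are illustrative and not part of the patent text:
        #include <stdatomic.h>

        /* Spin until the lock word reads 0 (free), then attempt the atomic
         * update that claims it; on failure, resume spinning. */
        void spin_acquire(atomic_int *lock)
        {
            for (;;) {
                while (atomic_load(lock) != 0)
                    ;                                   /* busy wait: consumes issue slots */
                int expected = 0;
                if (atomic_compare_exchange_weak(lock, &expected, 1))
                    return;                             /* atomic update succeeded */
            }
        }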
  • SUMMARY OF THE INVENTION
  • In a conventional multiprocessor, the CPU resources and memory bandwidth consumed by a task in a spin lock are not simultaneously shared with any other tasks. Thus, while the task is spinning there is no resource contention within the CPU, and no reason not to let the task spin. Various studies have shown that approximately 15% of processor time is spent in spin loops. [0014]
  • In a simultaneous multithreaded CPU, however, the resources consumed by the spinning task are being denied to the other threads that are or could be doing useful work. In fact, Applicants have found that under these circumstances, the simultaneously multithreaded CPU provided no performance increase to the decomposed application, and can actually degrade the performance of the application. [0015]
  • One multithreaded computer uses fine-grained multithreading, which is different from SMT, and addresses the synchronization problem with a hardware retry which traps the thread after some number of failures and deschedules it. This is described in “Exploiting Heterogeneous Parallelism on a Multithreaded Multiprocessor,” 1992, which can be found at www.tera.com/www/archives/library/psdocs.html. [0016]
  • Patent application Ser. No. 08/775,553 by Emer et al, “A Multi-threaded Processor and Method That Selects Threads Based On An Attribute,” (name amended in February, 1999), filed Dec. 31, 1996, assigned to a common assignee as the present invention and incorporated by reference herein in its entirety, describes an SMT architecture. [0017]
  • Many papers have been published about Simultaneous Multithreading. For a fairly complete list, see www.cs.washington.edu/research/smt/. The University of Washington has done much work on efficient synchronization on SMT. See, for example, “Supporting Fine-Grained Synchronization on a Simultaneous Multithreading Processor,” 1995, available at www.cs.washington.edu/research/smt/papers/hpca.ps. A longer version of the paper, UCSD CSE Technical Report #CS98-587, is available at www.cs.washington.edu/research/smt/papers/smt.synch.ps. [0018]
  • These papers propose a synchronization “lock-box” mechanism which has the primary goal of providing faster synchronization between threads. The lock itself is memory-based, but once the lock is obtained by a thread on a particular CPU, the lock-box passes the lock among the threads on that CPU, if they require it. If a thread fails to acquire a lock and must wait for it to become available, the thread's instructions are flushed from the pipeline to prevent that thread consuming resources on the CPU. The mechanisms of synchronizing and possibly flushing the instructions are combined into one “Acquire” instruction, and all actions required by that instruction are carried out strictly in hardware. [0019]
  • U.S. Pat. No. 5,524,247 is a software patent on scheduling to avoid locks. It does not involve hardware and it is not related to SMT architecture. [0020]
  • The present invention resolves the problem of spin-lock in an SMT architecture by halting, or “quiescing,” spin-locking threads while they are waiting for some event, i.e., the availability of a lock. [0021]
  • In accordance with an embodiment of the present invention, a method for halting execution of a program's instructions while the program is waiting for one or more events to occur in a simultaneous multithreaded processor or multiprocessor environment includes arming an event monitor associated with the program by identifying one or more events to be monitored. Each thread preferably has its own event monitor. [0022]
  • An event may be, for example, a modification to some identified memory location or group of locations, such as a change of access state or a change of value stored in the location. A change of access state may be, for example, from shared to exclusive. Such an event is typically caused by another program. [0023]
  • A change of value can be observed by monitoring a memory bus. In one embodiment, a write to the identified memory location, rather than observing a change in actual value stored therein, is sufficient to recognize the event. [0024]
  • The expiration of a timer is another example of an event that may be monitored. [0025]
  • Preferably, the arming of an event monitor is performed by executing an arm instruction which identifies the memory location to be monitored. The physical address of the memory location is recorded in a working register associated with the program, and an indicator such as a flag is set to a first state which enables the event monitor to monitor for the event. The indicator is set to a second state if a change to the memory location whose address is recorded in the working register is observed by the event monitor. Preferably, a lock value is loaded from the identified memory location by the same arm instruction. [0026]
  • The method further includes requesting, by executing a quiesce instruction after executing the arm instruction, that the program be halted until the event is observed by the event monitor. There is no requirement that the program be halted. However, if execution of the program is halted, the event monitor monitors for the event. Subsequent to observation of the event by the event monitor, but not necessarily immediately after, execution of the program is resumed. [0027]
  • Preferably, it is the responsibility of the program to check whether the event has occurred when the program resumes execution from the quiescent state. Thus, to ease implementation, hardware is permitted to release a thread from its quiescent state occasionally even if the event has not occurred. [0028]
  • After requesting that the program be halted, i.e., the indicator has been set to the first state, its execution is halted, if at all, only if the event has not yet occurred since the arming. If the indicator is set to the second state, the program is not halted in response to the request to halt. [0029]
  • Preferably, upon halting execution of the program, program instructions subsequent to the quiesce instruction are flushed from the instruction pipeline. [0030]
  • To allow for a quick restart when execution of the quiesced program or thread resumes, while the program is halted, program instructions are fetched into an instruction buffer but are not allowed to propagate into the instruction pipeline. The instruction buffer could be managed in various ways. For example, the percentage or absolute number of the thread's instructions allowed into the buffer could be limited, or different instruction buffers could be allocated to different threads. [0031]
  • Preferably, upon halting execution of the program, a timer associated with the program is set to time a predetermined time interval, and started. If program execution has not been resumed for other reasons, for example if the monitored event has not yet occurred, program execution is resumed upon expiration of the timer. Preferably, the timer is stopped if execution of the program is resumed due to observation of the event by the event monitor. [0032]
  • Halting execution of, or quiescing, the program results in a reduction of power consumption, and allows other executing programs to utilize available resources.[0033]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. [0034]
  • FIG. 1 is a schematic diagram comparing sample execution sequences for three different architectures. [0035]
  • FIG. 2 is a block diagram of a preferred embodiment of the present invention. [0036]
  • FIG. 3 is a schematic diagram illustrating the operation of a fetch thread chooser of a preferred embodiment of the present invention. [0037]
  • FIG. 4 is a schematic diagram illustrating the operation of a map thread chooser of a preferred embodiment of the present invention. [0038]
  • FIG. 5 is a flowchart demonstrating the execution of the arm instruction of a preferred embodiment of the present invention. [0039]
  • FIG. 6 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor watches the status of an identified block of memory. [0040]
  • FIG. 7 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor watches memory address and control lines. [0041]
  • FIG. 8 is a flowchart demonstrating the execution of the quiesce instruction of a preferred embodiment of the present invention. [0042]
  • FIG. 9 is a flowchart demonstrating that a quiesce instruction does not need to follow an arm instruction.[0043]
  • DETAILED DESCRIPTION OF THE INVENTION
  • A description of preferred embodiments of the invention follows. [0044]
  • In a simultaneous multithreaded CPU, the resources consumed by a spinning task are being denied to the other threads that are doing useful work. Thus it is desirable to prevent the task in the spin lock from consuming resources when there is no chance that it will find the lock value it is looking for. Applicants refer to the action of pausing execution of a thread until the condition it is waiting for might be satisfied as “quiescing” the thread. In a simultaneous multithreaded machine, the act of quiescing means that no instructions are executed from the quiesced “thread processing unit” or TPU. The other TPUs continue normally. [0045]
  • The present invention allows an SMT processor to execute more useful instructions per processor cycle, than would an SMT processor without a quiescent state, resulting in improved overall processor performance. [0046]
  • Applicants' simulation results show that, using the quiescent state of the present invention, decomposed programs are executed from 1.1 to 2.5 times faster than the equivalent single-threaded program. Other runs done without a quiescent state showed no speedup at all, or even degraded in performance. [0047]
  • In one embodiment, two variations of the arm instruction are implemented. LDL_ARM loads a sign-extended longword from memory to a register and arms the event monitor. LDQ_ARM loads a quadword from memory to a register and arms the event monitor. These are herein referred to collectively as LDx_ARM. [0048]
  • QUIESCE is a conditional instruction, i.e., a request to quiesce, or halt, execution of the thread executing the QUIESCE. [0049]
  • These instructions, to be used in sequence, LDx_ARM followed by QUIESCE, allow a processor to declare that it has no work to do until some other processor writes a specified location in memory space. [0050]
  • FIG. 2 is a block diagram of a preferred embodiment of the present invention. An SMT CPU 100 can execute several threads simultaneously. While a TPU is somewhat abstract, there are definite physical components which belong to each TPU. Here, the CPU 100 comprises multiple TPUs, of which two, TPU #1 101A and TPU #N 101N, are shown. Only the details of TPU #1 101A are shown, for demonstrative purposes. Bus 135 connects the various TPUs to the memory 137. [0051]
  • Each TPU has an event monitor 109 to monitor for events identified in an event identification register 103. In a preferred embodiment, this event identification register 103 is a watch_physical_address register which holds the address of a lock 139 located in memory 137, as indicated by dashed arrow 141. The lock address is loaded into the event identification register 103 upon execution of an arm instruction 113. At the same time, an “armed” watch_flag indication 105 is set to a state to indicate that the event monitor 109 is now armed, and the event monitor 109 begins monitoring for the identified event. [0052]
  • Upon execution of a quiesce request instruction 117, quiesce logic or execution scheduler 110 starts a quiesce timer 107, and if the armed indication 105 is set, i.e., the event monitor 109 is armed, the quiesce logic 110 sets the TPU's state 111 to quiesce mode, that is, the TPU 101A is quiesced. [0053]
  • Upon observation of the event by the event monitor 109, for example, a change to the lock 139 referenced in the event identification register 103, the event monitor clears the armed indication 105 and notifies the quiesce logic 110 which sets the TPU's state 111 to a non-quiesce mode, that is, the TPU resumes execution. Note that even if the TPU had not quiesced, observation of an identified event by the event monitor 109 will clear the armed indicator 105. Note also, that although not shown, expiration of the timer could be an identified event for which an event monitor could be armed. [0054]
  • FIG. 3 is a schematic diagram which illustrates the operation of a fetch thread chooser or selection circuit of a preferred embodiment of the present invention. Each TPU or thread has a corresponding program counter (PC) 305, which indicates, for the TPU, the next instruction to be fetched from an instruction cache 311. A fetch thread multiplexor 303 selects from the PCs 305 and passes the selected PC on to the instruction cache 311. [0055]
  • The fetch thread multiplexor 303 selects a thread PC based on control signals 302 generated by a fetch thread chooser 301. Based on the quiesce states 309 of each thread, as well as various other control signals 307, the fetch thread chooser 301 selects a thread. [0056]
  • In one embodiment, the fetch thread chooser 301 selects only PCs associated with non-quiescent threads. Alternatively, the fetch thread chooser 301 may select a PC corresponding to a quiescent thread based on availability of unused instruction buffer space allocated to that thread. [0057]
  • FIG. 4 is a schematic diagram illustrating a preferred operation of a map thread chooser or selection circuit, similar to that of the fetch thread chooser described above. As shown, each TPU or thread has an instruction buffer 355. In practice, a single buffer may hold instructions for all executing threads, where certain portions of the buffer are allocated for certain threads, or alternatively, where the thread instructions are interspersed within the buffer and identified with their threads. [0058]
  • A map thread multiplexor 353 selects from the buffers 355 and passes instructions from the selected buffer on to a mapper 361, which maps a “virtual” register named in an instruction to a physical register on the CPU. The map thread multiplexor 353 selects from a thread based on control signals 352 generated by a map thread chooser 351. Based on the quiesce states 309 of each thread, as well as various other control signals 357 which may comprise some or all of the same control signals 307 of FIG. 3, the map thread chooser 351 selects a thread. [0059]
  • The map thread chooser 351 selects only instructions from threads which are in a non-quiescent state. [0060]
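  • As an illustrative sketch of such a selection policy in C (the round-robin choice and the machine width are assumptions; the patent does not mandate a particular policy):
        #include <stdbool.h>

        #define NUM_TPUS 4   /* illustrative number of thread processing units */

        /* Choose the next thread whose instructions may advance at this stage,
         * skipping any TPU whose quiesce state 309 is set. Returns -1 when every
         * thread is quiescent, i.e., no instruction is chosen this cycle. */
        int choose_thread(const bool quiesced[NUM_TPUS], int previous_choice)
        {
            for (int offset = 1; offset <= NUM_TPUS; offset++) {
                int candidate = (previous_choice + offset) % NUM_TPUS;
                if (!quiesced[candidate])
                    return candidate;
            }
            return -1;
        }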
  • Here is an example code sequence: [0061]
  • Line [0062] 1: LDQ_ARM R1, (R5)
  • Line [0063] 2: <branch to GetLock if lock available)>
  • Line [0064] 3: QUIESCE
  • Line [0065] 4: GetLock:
  • In this example, the virtual address of the [0066] lock 139 is held in register R5. At line 1, LDQ_ARM computes the lock's physical address from the contents of register R5, records that physical address in the event identification register 103, i.e., the watch_physical_address register, and loads the lock value from the physical address in memory into register R1. At this time the hardware also sets the armed, or watch_flag, indication 105, and the event monitor 109 monitors for a change to the memory location recorded in watch_physical_address. If any such change is observed, watch_flag is cleared.
  • FIG. 5 is a flowchart 10 illustrating the operation of the LDx_ARM instruction. A preferred format of the instruction is “LDx_ARM Ra, (Rb).” The virtual address of the memory location, i.e., the lock 139 (FIG. 2), to be monitored by the event monitor 109 is in register Rb, where Rb is some designated register. The value stored in the lock is read and loaded into the register designated by Ra.
  • The lock value is fetched from memory, sign-extended for LDL_ARM, and written to register Ra (step 12). If the LDx_ARM instruction encounters an exception (step 14), it is treated just as a normal load instruction (step 15).
  • When a LDx_ARM instruction is executed without faulting, the processor records the target physical address in a per-processor watch_physical_address register (step 16) and sets the per-processor watch_flag (step 18).
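  • The following C fragment is a behavioral sketch only (not the hardware itself; the structure, helper callbacks and names are illustrative assumptions) of the per-TPU state updates that FIG. 5 describes for a non-faulting LDx_ARM.
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t watch_physical_address;   /* event identification register 103 */
        bool     watch_flag;               /* armed indication 105 */
    } tpu_state;

    /* Model of "LDx_ARM Ra, (Rb)": translate the virtual address in Rb, load
     * the lock value (returned for Ra), then record the physical address and arm. */
    uint64_t ldx_arm(tpu_state *tpu, uint64_t rb_virtual,
                     uint64_t (*translate)(uint64_t),
                     uint64_t (*read_mem)(uint64_t))
    {
        uint64_t pa = translate(rb_virtual);  /* an exception is treated like a normal load */
        uint64_t value = read_mem(pa);        /* step 12: load lock value */
        tpu->watch_physical_address = pa;     /* step 16: record target physical address */
        tpu->watch_flag = true;               /* step 18: set watch_flag */
        return value;                         /* written to Ra */
    }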
  • Executing a LDx_ARM on one TPU does not affect any architecturally visible state on another TPU; in particular, it cannot clear another TPU's watch_flag, which would cause the quiescing processor to come out of a quiescent state. Without this restriction, two processors executing LDQ_ARM/QUIESCE sequences could be continually re-arming each other. [0070]
  • Referring again to the above example code sequence, the program, at line 2, then tests the value of the lock to see if it is available. If so, the program branches away to GetLock where it will attempt to get the lock. If not, the program continues to line 3.
  • At line 3, the QUIESCE instruction is executed. If watch_flag is still set, the TPU ceases executing instructions from the program, i.e., it quiesces. If not, execution continues immediately at line 4.
  • If the program does quiesce, it stays in this quiescent state until the watch_flag 105 is cleared. This happens when some change occurs to the memory location recorded in watch_physical_address 103, but can also happen at the end of an implementation-specific timeout period, or for other implementation-specific reasons. Waking up is therefore no guarantee to the program that the event identified for monitoring actually occurred.
  • One way of recognizing that a memory location has changed value in a multiprocessor is to observe that another TPU has changed the access state of the memory location, for example, from “shared” to “exclusive”; within the same CPU, hardware monitors the addresses on the write bus. [0074]
  • In a preferred embodiment, monitoring the identified memory location is simplified by using the existing cache-coherence protocol. That is, each cache block is in some state. For example, one state might be “read-shared,” i.e., SHARED, wherein many processors have read access to any memory location in the block. Another state might be “writable,” i.e., EXCLUSIVE, wherein only one processor has access to the block, and that is read and write access. Here, when some process or thread acquires the block in the writable state, a quiescing process which is watching a memory location in the block wakes up, possibly before the actual write, but not before it can read the data. Thus, the quiescent processor wakes up, that is, it resumes executing its thread, by watching the state of a block, rather than the particular location. [0075]
  • FIG. 6 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor watches the status of an identified block of memory. Here, a multiprocessor system is shown, comprising multiple CPUs 201A-201M. Though not necessary, for exemplary purposes, each CPU is an SMT CPU.
  • Each CPU 201A-201M has several TPUs (101A-101P shown). Each TPU has a respective event monitor 109A-109P.
  • The memory system 137 has a master memory status buffer 401, which indicates a status such as SHARED or EXCLUSIVE for each memory block. In addition, each CPU has a CPU-wide memory status buffer 403, which contains copies of the information in the master memory status buffer 401 pertinent to that CPU. When the status of a block of memory changes, a message is preferably sent out to those CPUs which need to know about the change, over an inter-CPU messaging bus 405. The event monitor 109A-109P of this embodiment monitors the inter-CPU messaging bus 405. When the status of an identified block, or of a block containing an identified memory location or lock, changes to EXCLUSIVE, for example, the corresponding event monitor is triggered to reset the watch_flag indicator 105 and notify the quiesce logic 110 that the event has occurred.
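  • In sketch form only (the block size, message handling and names below are assumptions, not definitions from this embodiment), an event monitor driven by the inter-CPU messaging bus 405 might react to a block-status message roughly as follows:
    #include <stdbool.h>
    #include <stdint.h>

    #define BLOCK_SHIFT  6            /* assume 64-byte coherence blocks */
    #define TPUS_PER_CPU 4

    typedef struct {
        uint64_t watch_physical_address;
        bool     watch_flag;
        bool     quiesced;
    } tpu_state;

    /* Called when a status-change message (e.g. a block going EXCLUSIVE
     * elsewhere) is observed on the inter-CPU messaging bus. */
    void on_status_change(tpu_state tpus[TPUS_PER_CPU], uint64_t block_address)
    {
        for (int i = 0; i < TPUS_PER_CPU; i++) {
            bool same_block = (tpus[i].watch_physical_address >> BLOCK_SHIFT) ==
                              (block_address >> BLOCK_SHIFT);
            if (tpus[i].watch_flag && same_block) {
                tpus[i].watch_flag = false;   /* event observed: clear watch_flag */
                tpus[i].quiesced   = false;   /* notify quiesce logic: wake the TPU */
            }
        }
    }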
  • Alternatively, hardware could watch address/data signals on the memory bus. [0079]
  • FIG. 7 is a schematic diagram illustrating an embodiment of the present invention in which the event monitor 109 watches memory address lines 135A and control lines 135B. Comparator 180 compares the watch_physical_address 103 with the address on the memory bus 135 address lines 135A. The comparator 180 is only enabled for write operations, for example, when WRITE is asserted on a read/write control line 135B. The output 181 of the comparator 180 indicates whether a write to the identified location, i.e., the monitored event, has occurred.
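  • In C terms the comparator of FIG. 7 reduces to a single predicate; the following is a sketch of the logic, not a circuit description, and the function name is illustrative.
    #include <stdbool.h>
    #include <stdint.h>

    /* Output 181: true only when a write to the watched location is on the bus. */
    bool write_to_watched_location(uint64_t bus_address, bool write_asserted,
                                   uint64_t watch_physical_address)
    {
        return write_asserted && (bus_address == watch_physical_address);
    }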
  • FIG. 8 is a flowchart 20 illustrating the operation of the QUIESCE instruction 117 of FIG. 2. The preferred format is simply “QUIESCE” with no parameters. The watch_flag indication 105 is checked in step 22, and if it is clear, nothing is done. Thus, the QUIESCE instruction is really a conditional quiesce, or a request to quiesce.
  • If, on the other hand, the watch_flag 105 is set, the implementation-specific quiesce timer 107 is set to time some implementation-specific finite period of time (step 24) and execution of the thread is halted (step 26), i.e., the TPU 101A is quiesced. An exemplary period of time is between 10,000 and 100,000 machine clock cycles. For some quiescing threads, the timer is disabled.
  • The event monitor 109 now monitors the memory location identified in the watch_physical_address register 103. Alternatively, the event monitor may always be monitoring for the identified event, regardless of the state of the watch_flag indicator; it simply would take no action if the watch_flag were not set. If the event is observed (step 28), the watch_flag is cleared and the quiesce logic 110 is notified. If the quiesce period ends before the quiesce timer 107 expires, the timer 107 must be stopped to prevent it from clearing watch_flag after a future LDx_ARM (step 34). Finally, execution of the program is resumed (step 36).
  • On the other hand, if the event is not observed at step 28, the timer is monitored at step 30. If it has not expired, the program remains quiesced and the event monitor continues to monitor for the event at step 28. Note that, although steps 28 and 30 are shown sequentially in the flowchart of FIG. 8, they are performed by hardware and may in fact be performed in parallel. Finally, if the timer expires before the event is observed, the watch_flag is cleared (step 32) and again, program execution is resumed at step 36.
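  • The following C fragment models the FIG. 8 flow for a single TPU. It is an illustrative simplification only; the cycle-by-cycle loop, the timer representation and the field names are assumptions made for the sketch.
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool     watch_flag;
        bool     event_observed;     /* set asynchronously by the event monitor */
        uint64_t quiesce_timer;      /* remaining cycles; 0 means expired */
    } tpu_state;

    /* Model of the QUIESCE instruction: returns when execution resumes. */
    void quiesce(tpu_state *tpu, uint64_t timeout_cycles)
    {
        if (!tpu->watch_flag)
            return;                          /* step 22: flag clear, do nothing */

        tpu->quiesce_timer = timeout_cycles; /* step 24: start the quiesce timer */
        /* step 26: the thread is halted; this loop stands in for quiescent time */
        while (true) {
            if (tpu->event_observed) {       /* step 28: event observed */
                tpu->watch_flag = false;
                tpu->quiesce_timer = 0;      /* step 34: stop the timer */
                break;
            }
            if (tpu->quiesce_timer == 0) {   /* steps 30/32: timer expired */
                tpu->watch_flag = false;
                break;
            }
            tpu->quiesce_timer--;            /* in hardware, these checks run in parallel */
        }
        /* step 36: execution resumes at the instruction after the QUIESCE */
    }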
  • The quiesce timer 107 is useful and/or necessary for several reasons. First, the timeout enables the implementation of a backoff algorithm, where a process can deschedule itself after some period of time if it has not obtained the lock. Second, the timer prevents a processor from deadlocking if there is a coding error. Third, suppose the code updating the memory location 139 takes an access violation so that the lock is never unlocked. The quiesce timeout allows the waiting processor to wake up and discover the problem by executing checking code.
  • If a longer quiesce period is desired than that provided by a given hardware implementation, software can implement the longer period by looping and quiescing repeatedly. [0086]
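  • As a rough pseudo-C illustration only, where lock_available, arm_and_quiesce_once and deschedule_self are hypothetical helpers supplied by the caller and not defined by this description, a longer wait with backoff could be layered on top of the hardware timeout like this:
    #include <stdbool.h>

    /* Wait longer than one hardware quiesce period by looping; the caller
     * supplies the primitives:
     *   lock_available        - re-reads the lock value
     *   arm_and_quiesce_once  - LDx_ARM + QUIESCE; returns on any wake-up
     *   deschedule_self       - backoff: let the OS run another thread */
    void wait_for_lock(bool (*lock_available)(void),
                       void (*arm_and_quiesce_once)(void),
                       void (*deschedule_self)(void),
                       int max_rounds)
    {
        int rounds = 0;
        while (!lock_available()) {
            arm_and_quiesce_once();          /* each round may end on a timeout */
            if (++rounds >= max_rounds) {
                deschedule_self();           /* back off after waiting too long */
                rounds = 0;
            }
        }
    }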
  • After the quiescent period, execution resumes at the instruction following the QUIESCE, or, if the QUIESCE was terminated because watch_flag was cleared by an interrupt, execution may resume at an interrupt servicing routine. [0087]
  • In a preferred implementation, if an interrupt causes a processor to end a quiescent period and immediately start executing the interrupt servicing routine (ISR), that ISR may return to the QUIESCE instruction only if watch_flag is guaranteed to be clear. If it is not, the ISR must return to the instruction after the QUIESCE, since the value of watch_physical_address may have been changed by a LDx_ARM executed while servicing the interrupt. [0088]
  • In at least one embodiment, if an interrupt occurs during a quiescent period, the ISR does not have to be started immediately after the QUIESCE. The hardware may choose to delay execution of the ISR until some later point in the instruction stream. [0089]
  • A more detailed example code sequence using the quiesce operation follows. In this program, register R5 contains the address of a lock. The program is spin-locking on the lock until the lock holds the value 0. Register R0 is loaded with the value of the lock by the LDQ_L instruction.
    GetLock:
    LDQ_L R0, (R5)             ;load the lock value
    BNEQ R0, HandleBusyLock    ;if not available, quiesce
    <modify R0>
    STQ_C R0, (R5)             ;store new lock value if lock_flag still set
    BEQ R0, GetLock            ;if store conditional failed, try again
    <critical section>         ;we have the lock, now do the real work
    <clear lock>               ;done
    RET
    HandleBusyLock:
    LDA R2, 0x400(R31)         ;set bit 10, SMT bit in AMASK
    AMASK R2, R2               ;test whether SMT processor
    BEQ R2, CheckLock          ;if no SMT, skip quiesce
    LDQ_ARM R0, (R5)           ;load the lock value at address R5 into R0
                               ;put lock address into watch_physical_address
                               ;set watch_flag
    BEQ R0, GetLock            ;if lock available, try to get it
    QUIESCE                    ;if watch_flag set, go quiet
    CheckLock:
    LDQ R0, (R5)               ;load lock value again
    BEQ R0, GetLock            ;if available, try for it again
    <check for spinning on lock too long>
    BR HandleBusyLock          ;loop again
  • In this code sequence, testing the lock just after the LDQ_ARM instruction is crucial to performance in the case where the lock is available; otherwise the code would quiesce needlessly. Having execution after the QUIESCE fall through into the CheckLock section allows the lock to be checked again, in case the quiescent state ended for some other reason than a change in the lock value, such as a timeout or interrupt. Note, however, that for a lock which is highly contended, the “BEQ R0, GetLock” lines will mispredict when the lock is finally given up, assuming that the program quiesced multiple times before getting a chance at the lock. This mispredict will slow down the attempt to get the lock.
  • Note also that if the LDQ_ARM is executed and QUIESCE is not executed, because a branch is taken to get the lock, the watch_flag will still be set. It will continue to be set until it is cleared by one of the conditions given for clearing watch_flag. This should have no actual effect since the processor is not quiesced at the time. The fact that a processor's watch_flag is set when the event monitor is not actually watching for anything is harmless. The next LDx_ARM which executes will load a new watch_physical_address and set watch_flag whether or not it is already set. Thus, LDx_ARM and QUIESCE instructions need not be paired. [0092]
  • The flowchart 50 of FIG. 9 demonstrates this concept. In particular, a LDx_ARM 52 may be followed by a conditional branch 54. On the fall-through path 56, a QUIESCE 58 is executed, whereas on the taken path 60 no matching QUIESCE is executed.
  • In some embodiments, the thread may fail to quiesce for a variety of reasons. [0094]
  • For example, on some implementations the TPU may always fail to quiesce if any other memory access is executed on the given TPU between the LDx_ARM and the QUIESCE; the intervening memory reference could thrash a direct-mapped translation buffer, or could change the contents of the cache upon which the implementation might depend. [0095]
  • Some instructions, such as floating-point instructions, executed between the LDx_ARM and the QUIESCE, may cause a TPU to always fail to quiesce on some implementations due to, for example, an Illegal Instruction Trap. [0096]
  • Similarly, if an instruction with an unused function code is executed between the LDx_ARM and the QUIESCE, on some implementations the TPU may fail to quiesce, because the behavior of an instruction with an unused function code is unpredictable. [0097]
  • The watch_flag and watch_physical_address register are loaded simultaneously with the reading of the value of the lock. If the lock value becomes unlocked before the QUIESCE is executed, watch_flag is cleared because the watched location has been modified, preventing the TPU from quiescing needlessly. Of course, if the watch_flag is not cleared due to the change in the lock, the quiesce timer will eventually time out and end the quiescent period. [0098]
  • Since watch_flag and watch_physical_address are implicitly written by LDx_ARM and implicitly read by QUIESCE, any speculative execution of those instructions must preserve the read-order and write-order of watch_flag and watch_physical_address, as intended in the original program. [0099]
  • For example, in the code sequence below, if the first branch is incorrectly predicted taken, the second LDx_ARM must not be allowed to affect the behavior of the first QUIESCE by changing watch_physical_address. [0100]
  • LDQ_ARM R1, (R5)
  • BEQ R1, test
  • QUIESCE
  • test:
  • LDQ_ARM R1, (R5)
  • BEQ R1, xxx
  • QUIESCE
  • When a TPU enters the quiescent state or mode, all instructions subsequent to the QUIESCE are flushed from the pipeline, the quiesce timer is started, and the QUIESCE instruction is retired. This is analogous to what happens on a branch mispredict. Instruction fetch restarts at the instruction after the QUIESCE instruction. [0108]
  • For quick restart, instructions from the quiesced thread are fetched and allowed to propagate into the pipeline up to the mapper. When execution restarts, the thread chooser that selects instructions from the buffer to be mapped and executed can immediately select from the previously-quiesced thread, without incurring the delay of fetching instructions from the instruction cache. Since instructions from the quiesced thread are not mapped, that thread does not consume valuable Inum space (Inums serve to identify “in-flight” instructions) or physical registers. Also, since instructions subsequent to the QUIESCE are no longer in the issue queue, the TPU does not consume execution resources after it quiesces. [0109]
  • In-order execution of LDx_ARM and QUIESCE is ensured through the defined dependency on watch_flag—LDx_ARM sets it and QUIESCE uses it as a condition on its operation. [0110]
  • By having the LDx_ARM load the lock value so that code can test the lock before executing the QUIESCE, the possibility of a race between the lock just becoming available and the machine quiescing is eliminated. [0111]
  • In a preferred embodiment, these instructions are assigned codes such that a program utilizing them is still functional even when executed on older machines. [0112]
  • Preferably, opcodes for the arm and quiesce instructions are chosen such that they are memory format instructions, and appear as NOPs to earlier architectures. [0113]
  • By meeting these criteria, programs could be written using LDx_ARM/QUIESCE instructions without using AMASK to condition the code based on the processor type. AMASK is an instruction which returns a value indicative of resources on a CPU, i.e., the CPU's architecture. Without using the AMASK instruction, code which depends on the register value loaded by the LDx_ARM must execute an ordinary load before the LDx_ARM, to accomplish the load operation in the older machines. [0114]
  • Alternative Embodiments to the LDx_ARM/QUIESCE Approach
  • As the preferred LDx_ARM/QUIESCE embodiment was being developed, a number of alternative embodiments were also considered, as discussed below. [0115]
  • 1. Timer-Based [0116]
  • In this embodiment, a QUIESCE instruction starts a timer and unconditionally quiesces, the timeout being the event upon which the event monitor wakes up the quiescing TPU. There is no arm instruction. This was found not to obtain satisfactory speedups in execution. [0117]
  • 2. Unified QUIESCE Instruction: QUIESCE Ra, (Rb) [0118]
  • This embodiment also has no explicit LDx_ARM instruction. The QUIESCE instruction performs a load. If a QUIESCE is executed when the watch_flag is clear, it loads watch_physical_address, sets watch_flag and does not quiesce the processor. Thus, it acts as an LDx_ARM instruction. [0119]
  • If a QUIESCE is executed when the watch_flag is set, it does quiesce the processor. The processor stays quiesced until its watch_flag is cleared by a store to watch_physical_address. [0120]
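  • A behavioral C sketch of this unified alternative follows; the names and helper callbacks are illustrative assumptions, and the value returned by the “second” flavor is deliberately left as a placeholder, mirroring the open question discussed next.
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t watch_physical_address;
        bool     watch_flag;
    } tpu_state;

    /* Model of the unified "QUIESCE Ra, (Rb)" alternative. Returns the value
     * written to Ra on the arming ("first") flavor; translation, load and wait
     * primitives are supplied by the caller. */
    uint64_t unified_quiesce(tpu_state *tpu, uint64_t rb_virtual,
                             uint64_t (*translate)(uint64_t),
                             uint64_t (*read_mem)(uint64_t),
                             void (*wait_until_flag_clear)(tpu_state *))
    {
        if (!tpu->watch_flag) {
            /* "First" QUIESCE: acts as an arm-and-load, no quiescing. */
            uint64_t pa = translate(rb_virtual);
            tpu->watch_physical_address = pa;
            tpu->watch_flag = true;
            return read_mem(pa);              /* load data for the lock test */
        }
        /* "Second" QUIESCE: quiesce until a store to the watched address
         * clears watch_flag; what is loaded into Ra here is unclear. */
        wait_until_flag_clear(tpu);
        return 0;                             /* placeholder; see discussion */
    }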
  • For the “first” QUIESCE, the load data can be tested by subsequent instructions to find out if the lock is held. For a “second” quiesce, it is unclear what that load means or when it is loaded. It is preferable to load the lock value at the end of the QUIESCE period, to see what it has changed to, but this is very difficult to implement. [0121]
  • The advantage of this embodiment is that it requires just one instruction. However, it is more difficult to understand and implement. For example, as discussed above, there are two flavors of the instruction, a “first” and a “second.” Furthermore, it is not clear how meaningful data would be returned to the second QUIESCE. Finally, specifying what can or cannot happen between QUIESCE instructions may be unmanageable. [0122]
  • 3. Use of Architectural Registers to Enforce LDx_ARM/QUIESCE Dependency. [0123]
  • In this alternative embodiment, LDQ_ARM is a load and QUIESCE is a store, of sorts. A sample code sequence would appear as follows: [0124]
    LDQ_ARM R0, (R5)    ; this is a load
    BEQ R0, getlock
    QUIESCE R0, (R31)   ; this is a “store”
    getlock:
  • Since the QUIESCE reads the value in register R0, the already-existing hardware in an out-of-order implementation will naturally keep the QUIESCE in-order with the LDQ_ARM, upon which it is dependent. The watch_physical_address and watch_flag registers are used as in the originally preferred embodiment discussed previously.
  • 4. Add LDx_ARM Functionality to LDx_L [0126]
  • In this alternate embodiment, the LDx_ARM functionality is overloaded on the LDx_L instruction. Whenever a LDx_L is executed, the watch_physical_address and the watch_flag are set, in addition to the lock_flag and the lock_physical_address. [0127]
  • Alternatively, instead of having the watch_flag and watch_physical_address registers at all, the lock_flag and the lock_physical_address could be used both for LDx_L/STx_C functionality and for ARM/QUIESCE functionality. In this case, QUIESCE would watch for the clearing of the lock_flag. The same LDx_L would not be used both as the partner of a QUIESCE and the partner of a STx_C. If the watch register and indicator are used, LDx_ARM functionality could be specified using the low address bit of the LDx_L to specify ARM. If only the lock registers are used, no differentiation in the LDx_L instruction is needed. [0128]
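  • As a rough C sketch of the overloaded behavior (the structure, helper callback and field names are assumptions; only the described register updates are taken from the text), every locked load would also arm the event monitor:
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t lock_physical_address;    /* existing LDx_L/STx_C state */
        bool     lock_flag;
        uint64_t watch_physical_address;   /* ARM/QUIESCE state, if retained */
        bool     watch_flag;
    } tpu_state;

    /* Overloaded LDx_L: a locked load that also arms the event monitor. */
    uint64_t ldx_l_with_arm(tpu_state *tpu, uint64_t physical_address,
                            uint64_t (*read_mem)(uint64_t))
    {
        tpu->lock_physical_address  = physical_address;
        tpu->lock_flag              = true;
        tpu->watch_physical_address = physical_address;  /* added ARM behavior */
        tpu->watch_flag             = true;
        return read_mem(physical_address);
    }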
  • The LDx_L (“load lock”) and STx_C (“store conditional”) instructions are described in pages 4-9 through 4-14 of “Alpha Architecture Handbook,” [0129] Version 4, Compaq Computer Corporation, 1998, which is incorporated by reference herein in its entirety.
  • These approaches have the advantage that only one new instruction, QUIESCE, is needed. In addition, a code corresponding to a no-operation (NOP) instruction for earlier architectures could be more easily selected for QUIESCE than for an LDx_ARM instruction, providing backward-compatibility. Finally, LDx_L and LDx_ARM already share a lot of functionality, so implementation is relatively straightforward. [0130]
  • However, this does overload the LDx_L instruction, making code using the instruction more difficult to understand and verify. Furthermore, implementations would be restricted by requiring two functionalities. For example, LDx_L would not be able to request write privileges for a block, since it might be used in conjunction with a QUIESCE rather than a STx_C. [0131]
  • 5. Define QUIESCE to be a Load and Test. [0132]
  • In this alternate embodiment, the QUIESCE instruction loads a value, and the processor quiesces based on that value. A quiesce instruction formatted as “QUIESCE Ra, (Rb)” loads register Ra with the value stored at the memory address held in register Rb. The thread quiesces if the value in Ra is non-zero, and the instruction is effectively a NOP if the value in Ra is zero. The QUIESCE instruction also loads the watch_flag and the watch_physical_address. [0133]
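  • In sketch form (C, with illustrative names and caller-supplied primitives, not a definition of the instruction), the load-and-test flavor behaves roughly as follows:
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t watch_physical_address;
        bool     watch_flag;
    } tpu_state;

    /* Model of "QUIESCE Ra, (Rb)" as a load-and-test: load the value, arm the
     * watch, and quiesce only if the loaded value is non-zero. */
    uint64_t quiesce_load_and_test(tpu_state *tpu, uint64_t rb_virtual,
                                   uint64_t (*translate)(uint64_t),
                                   uint64_t (*read_mem)(uint64_t),
                                   void (*wait_for_event)(tpu_state *))
    {
        uint64_t pa = translate(rb_virtual);
        uint64_t value = read_mem(pa);       /* value written to Ra */
        tpu->watch_physical_address = pa;
        tpu->watch_flag = true;
        if (value != 0)
            wait_for_event(tpu);             /* lock held: quiesce */
        /* value == 0: effectively a NOP, execution continues */
        return value;
    }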
  • Thus, the advantages of this approach are that LDx_ARM instructions are not needed, and therefore coding restrictions are not needed, and that only one instruction is needed to accomplish the functionality. [0134]
  • Unfortunately, it is too restrictive to have just one flavor of test, so different types of QUIESCE must be defined, just as there are many types of branches. In addition, this is a different type of instruction, requiring hardware to operate on load data, that is, data loaded from memory. [0135]
  • 6. Define QUIESCE to be a Read of Memory and Compare With a Register. [0136]
  • This embodiment uses QUIESCE as follows: [0137]
  • LDQ R0, (R5)
  • BEQ R0, getlock
  • QUIESCE R0, (R5)
  • getlock:
  • In this embodiment, the QUIESCE instruction in the above code sequence translates the virtual address in register R5 and reads the lock value from that physical address. It then compares that lock value with the contents of register R0, which was previously loaded by a standard load instruction (the LDQ instruction here) preceding the QUIESCE. If the two values are equal, the QUIESCE succeeds and the thread quiesces. If they are not equal, the QUIESCE has the effect of a NOP and does not quiesce the thread.
  • While the processor is asleep, i.e., quiescing, the hardware watches the physical address as calculated when the QUIESCE executed. This is analogous to the watch_physical_address register as defined in other instructions, but is entirely private to the hardware, that is, it is not visible to the software at all. The quiesce period ends if some write access occurs to that physical address. [0143]
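  • A behavioral C sketch of this read-and-compare flavor follows; the names and caller-supplied primitives are assumptions, and the privately held watch address is modeled as a local variable since it is not architecturally visible.
    #include <stdint.h>

    /* Model of "QUIESCE Ra, (Rb)" as a read of memory compared with Ra: the
     * thread quiesces only while the current lock value equals Ra. */
    void quiesce_compare(uint64_t ra_value, uint64_t rb_virtual,
                         uint64_t (*translate)(uint64_t),
                         uint64_t (*read_mem)(uint64_t),
                         void (*wait_for_write_to)(uint64_t physical_address))
    {
        uint64_t pa = translate(rb_virtual);   /* held privately by the hardware */
        if (read_mem(pa) == ra_value)
            wait_for_write_to(pa);             /* sleep until the location is written */
        /* otherwise the instruction has the effect of a NOP */
    }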
  • One advantage of this approach is that a LDx_ARM instruction is not needed, and therefore, coding restrictions are not necessary. Only one instruction is needed to accomplish the functionality. Furthermore, the watch_flag and watch_physical_address registers do not need to be defined as internal processor registers. [0144]
  • On the other hand, this approach presents a very complicated instruction, unlike any other, requiring a load from memory, a read from a register, and a compare all in the one instruction. Such an instruction is difficult to implement, introducing a datapath completely unlike anything existing in the current Alpha architecture. [0145]
  • While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims. [0146]

Claims (19)

What is claimed is:
1. In a digital processor, a method for temporarily halting execution of a given stream of program instructions while a processor is waiting for a subject event to occur, comprising:
in response to waiting, arming an event monitor for monitoring occurrence of events, including identifying at least the subject event; and
halting execution of the given stream of program instructions until occurrence of any one of the identified events is observed by the event monitor, said halting execution including:
monitoring, by the event monitor, for an identified event; and
upon the event monitor observing occurrence of an identified event, resuming execution of the given stream of program instructions.
2. A digital processor system for temporarily halting execution of a given stream of program instructions while a processor is waiting for a subject event to occur, comprising:
an event monitor which in response to processor waiting is armed via identification of the subject event; and
an execution scheduler, responsive to the event monitor, which, upon a request that the given stream of program instructions be halted until the subject event is observed by the event monitor, halts execution of the given stream if the subject event has not yet occurred since the event monitor was armed, and which resumes execution of the given stream upon observation of the subject event by the event monitor.
3. In a digital processing system, a system for temporarily halting execution of a given stream of program instructions while a processor is waiting for a subject event to occur, comprising:
event monitoring means;
arming means responsive to a processor waiting, the arming means arming the event monitoring means by identification of the subject event;
requesting means for requesting that the given stream of program instructions be halted until the subject event is observed by the event monitoring means; and
halting means for halting the given stream of program instructions in response to the requesting means, wherein if execution of the given stream of program instructions is halted, execution of the given stream of program instructions is resumed subsequent to observation of the subject event by the event monitoring means.
4. An electronic circuit for temporarily halting execution of a given stream of program instructions in a digital processing system while a processor is waiting for a subject event to occur, comprising:
an event monitor circuit, for monitoring for the subject event identified in response to processor waiting;
a quiesce logic circuit, which, responsive to the event monitor circuit and to a request to quiesce, temporarily halts execution of the given stream of program instructions, and which, responsive to the event monitor circuit observing occurrence of the subject event, resumes execution of the temporarily halted given stream of program instructions.
5. The method of claim 1 wherein the step of identifying comprises identifying at least one memory location to be monitored by the event monitor, and wherein the subject event includes a modification to any such identified memory location.
6. The method of claim 5 wherein the modification comprises a change of state.
7. The method of claim 6 wherein a change of state includes a change of access state.
8. The method of claim 7 wherein a change of access state is from shared to exclusive.
9. The method of claim 7 wherein a change of access state is observed by monitoring an inter-CPU messaging bus.
10. The method of claim 6 wherein a change of state comprises a change of value.
11. The method of claim 10, wherein a change in value is observed by monitoring a memory bus.
12. The method of claim 10 wherein a change in value is observed as a write to the memory location.
13. The method of claim 1, wherein halting execution of the given stream of program instructions allows other executing program instructions to utilize available resources.
14. The system of claim 2 wherein the subject event is identified by at least one memory location to be monitored, and wherein the subject event comprises a modification to one of the identified memory locations.
15. The system of claim 14 wherein the modification comprises one of a change of state, a change of access state and a change of value.
16. The system of claim 15 wherein a change in value is observed as a write to the memory location.
17. The system of claim 14, wherein the subject event includes a write operation to one of the identified memory locations, as observed by monitoring the address on a memory write bus.
18. The system of claim 2 wherein instructions are executed out of order.
19. The circuit of claim 4 wherein instructions are executed out of order.
US10/680,375 1999-10-01 2003-10-07 Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit Abandoned US20040073905A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/680,375 US20040073905A1 (en) 1999-10-01 2003-10-07 Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/411,194 US6493741B1 (en) 1999-10-01 1999-10-01 Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit
US10/293,975 US6675192B2 (en) 1999-10-01 2002-11-11 Temporary halting of thread execution until monitoring of armed events to memory location identified in working registers
US10/680,375 US20040073905A1 (en) 1999-10-01 2003-10-07 Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/293,975 Continuation US6675192B2 (en) 1999-10-01 2002-11-11 Temporary halting of thread execution until monitoring of armed events to memory location identified in working registers

Publications (1)

Publication Number Publication Date
US20040073905A1 true US20040073905A1 (en) 2004-04-15

Family

ID=23627964

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/411,194 Expired - Fee Related US6493741B1 (en) 1999-10-01 1999-10-01 Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit
US10/293,975 Expired - Lifetime US6675192B2 (en) 1999-10-01 2002-11-11 Temporary halting of thread execution until monitoring of armed events to memory location identified in working registers
US10/680,375 Abandoned US20040073905A1 (en) 1999-10-01 2003-10-07 Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/411,194 Expired - Fee Related US6493741B1 (en) 1999-10-01 1999-10-01 Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit
US10/293,975 Expired - Lifetime US6675192B2 (en) 1999-10-01 2002-11-11 Temporary halting of thread execution until monitoring of armed events to memory location identified in working registers

Country Status (1)

Country Link
US (3) US6493741B1 (en)

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040168039A1 (en) * 2003-02-20 2004-08-26 Gi-Ho Park Simultaneous Multi-Threading Processor circuits and computer program products configured to operate at different performance levels based on a number of operating threads and methods of operating
US20040215933A1 (en) * 2003-04-23 2004-10-28 International Business Machines Corporation Mechanism for effectively handling livelocks in a simultaneous multithreading processor
EP1612661A2 (en) * 2004-06-30 2006-01-04 Intel Corporation Compare-and-exchange operation using sleep-wakeup mechanism
WO2006039162A3 (en) * 2004-10-01 2007-03-15 Advanced Micro Devices Inc Sharing monitored cache lines across multiple cores
US20080133885A1 (en) * 2005-08-29 2008-06-05 Centaurus Data Llc Hierarchical multi-threading processor
US20080244231A1 (en) * 2007-03-30 2008-10-02 Aaron Kunze Method and apparatus for speculative prefetching in a multi-processor/multi-core message-passing machine
US20090055829A1 (en) * 2007-08-24 2009-02-26 Gibson Gary A Method and apparatus for fine grain performance management of computer systems
US20090199197A1 (en) * 2008-02-01 2009-08-06 International Business Machines Corporation Wake-and-Go Mechanism with Dynamic Allocation in Hardware Private Array
US20090199029A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Wake-and-Go Mechanism with Data Monitoring
US20090199184A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Wake-and-Go Mechanism With Software Save of Thread State
US20090199189A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Parallel Lock Spinning Using Wake-and-Go Mechanism
US20090198695A1 (en) * 2008-02-01 2009-08-06 Arimilli Lakshminarayana B Method and Apparatus for Supporting Distributed Computing Within a Multiprocessor System
US20090198962A1 (en) * 2008-02-01 2009-08-06 Levitan David S Data processing system, processor and method of data processing having branch target address cache including address type tag bit
US20090199028A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Wake-and-Go Mechanism with Data Exclusivity
US20090198920A1 (en) * 2008-02-01 2009-08-06 Arimilli Lakshminarayana B Processing Units Within a Multiprocessor System Adapted to Support Memory Locks
US20090198916A1 (en) * 2008-02-01 2009-08-06 Arimilli Lakshminarayana B Method and Apparatus for Supporting Low-Overhead Memory Locks Within a Multiprocessor System
US20100100708A1 (en) * 2007-06-20 2010-04-22 Fujitsu Limited Processing device
US20100268790A1 (en) * 2009-04-16 2010-10-21 International Business Machines Corporation Complex Remote Update Programming Idiom Accelerator
US20100268915A1 (en) * 2009-04-16 2010-10-21 International Business Machines Corporation Remote Update Programming Idiom Accelerator with Allocated Processor Resources
US20100269115A1 (en) * 2009-04-16 2010-10-21 International Business Machines Corporation Managing Threads in a Wake-and-Go Engine
US20100287341A1 (en) * 2008-02-01 2010-11-11 Arimilli Ravi K Wake-and-Go Mechanism with System Address Bus Transaction Master
US20100293340A1 (en) * 2008-02-01 2010-11-18 Arimilli Ravi K Wake-and-Go Mechanism with System Bus Response
US20110173593A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Compiler Providing Idiom to Idiom Accelerator
US20110173419A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Look-Ahead Wake-and-Go Engine With Speculative Execution
US20110173423A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Look-Ahead Hardware Wake-and-Go Mechanism
US20110173630A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Central Repository for Wake-and-Go Mechanism
US20110173417A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Programming Idiom Accelerators
US20110173625A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Wake-and-Go Mechanism with Prioritization of Threads
US20110173631A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Wake-and-Go Mechanism for a Data Processing System
US20110238919A1 (en) * 2010-03-26 2011-09-29 Gary Allen Gibson Control of processor cache memory occupancy
US20110239220A1 (en) * 2010-03-26 2011-09-29 Gary Allen Gibson Fine grain performance resource management of computer systems
US8082315B2 (en) 2009-04-16 2011-12-20 International Business Machines Corporation Programming idiom accelerator for remote update
WO2011160718A1 (en) * 2010-06-24 2011-12-29 International Business Machines Corporation Diagnose instruction for serializing processing
US8214603B2 (en) 2008-02-01 2012-07-03 International Business Machines Corporation Method and apparatus for handling multiple memory requests within a multiprocessor system
US8250396B2 (en) 2008-02-01 2012-08-21 International Business Machines Corporation Hardware wake-and-go mechanism for a data processing system
US8341635B2 (en) 2008-02-01 2012-12-25 International Business Machines Corporation Hardware wake-and-go mechanism with look-ahead polling
US8458387B2 (en) 2010-06-23 2013-06-04 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification to a guest operating system
US8478922B2 (en) 2010-06-23 2013-07-02 International Business Machines Corporation Controlling a rate at which adapter interruption requests are processed
US8505032B2 (en) 2010-06-23 2013-08-06 International Business Machines Corporation Operating system notification of actions to be taken responsive to adapter events
US8504754B2 (en) 2010-06-23 2013-08-06 International Business Machines Corporation Identification of types of sources of adapter interruptions
US8510599B2 (en) 2010-06-23 2013-08-13 International Business Machines Corporation Managing processing associated with hardware events
US8549182B2 (en) 2010-06-23 2013-10-01 International Business Machines Corporation Store/store block instructions for communicating with adapters
US8566480B2 (en) 2010-06-23 2013-10-22 International Business Machines Corporation Load instruction for communicating with adapters
US8572635B2 (en) 2010-06-23 2013-10-29 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification
US20130339627A1 (en) * 2012-06-15 2013-12-19 International Business Machines Corporation Monitoring a value in storage without repeated storage access
US8615645B2 (en) 2010-06-23 2013-12-24 International Business Machines Corporation Controlling the selectively setting of operational parameters for an adapter
US8621112B2 (en) 2010-06-23 2013-12-31 International Business Machines Corporation Discovery by operating system of information relating to adapter functions accessible to the operating system
US8626970B2 (en) 2010-06-23 2014-01-07 International Business Machines Corporation Controlling access by a configuration to an adapter function
US8631222B2 (en) 2010-06-23 2014-01-14 International Business Machines Corporation Translation of input/output addresses to memory addresses
US8639858B2 (en) 2010-06-23 2014-01-28 International Business Machines Corporation Resizing address spaces concurrent to accessing the address spaces
US8650337B2 (en) 2010-06-23 2014-02-11 International Business Machines Corporation Runtime determination of translation formats for adapter functions
US8650335B2 (en) 2010-06-23 2014-02-11 International Business Machines Corporation Measurement facility for adapter functions
US8725992B2 (en) 2008-02-01 2014-05-13 International Business Machines Corporation Programming language exposing idiom calls to a programming idiom accelerator
US8868843B2 (en) 2011-11-30 2014-10-21 Advanced Micro Devices, Inc. Hardware filter for tracking block presence in large caches
US9195623B2 (en) 2010-06-23 2015-11-24 International Business Machines Corporation Multiple address spaces per adapter with address translation
US9213661B2 (en) 2010-06-23 2015-12-15 International Business Machines Corporation Enable/disable adapters of a computing environment
US9342352B2 (en) 2010-06-23 2016-05-17 International Business Machines Corporation Guest access to address spaces of adapter
US10235215B2 (en) 2008-02-01 2019-03-19 International Business Machines Corporation Memory lock mechanism for a multiprocessor system
US10289516B2 (en) 2016-12-29 2019-05-14 Intel Corporation NMONITOR instruction for monitoring a plurality of addresses

Families Citing this family (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0990964A1 (en) * 1998-09-28 2000-04-05 Siemens Aktiengesellschaft Method for operating an automatic device
US6983350B1 (en) * 1999-08-31 2006-01-03 Intel Corporation SDRAM controller for parallel processor architecture
US7308686B1 (en) 1999-12-22 2007-12-11 Ubicom Inc. Software input/output using hard real time threads
US7120783B2 (en) * 1999-12-22 2006-10-10 Ubicom, Inc. System and method for reading and writing a thread state in a multithreaded central processing unit
WO2001046827A1 (en) * 1999-12-22 2001-06-28 Ubicom, Inc. System and method for instruction level multithreading in an embedded processor using zero-time context switching
US6532509B1 (en) 1999-12-22 2003-03-11 Intel Corporation Arbitrating command requests in a parallel multi-threaded processing system
US6694380B1 (en) * 1999-12-27 2004-02-17 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
US6661794B1 (en) * 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
US7480706B1 (en) * 1999-12-30 2009-01-20 Intel Corporation Multi-threaded round-robin receive for fast network port
US6671795B1 (en) * 2000-01-21 2003-12-30 Intel Corporation Method and apparatus for pausing execution in a processor or the like
US7047396B1 (en) 2000-06-22 2006-05-16 Ubicom, Inc. Fixed length memory to memory arithmetic and architecture for a communications embedded processor system
US7010612B1 (en) 2000-06-22 2006-03-07 Ubicom, Inc. Universal serializer/deserializer
US20020124085A1 (en) * 2000-12-28 2002-09-05 Fujitsu Limited Method of simulating operation of logical unit, and computer-readable recording medium retaining program for simulating operation of logical unit
US6988186B2 (en) * 2001-06-28 2006-01-17 International Business Machines Corporation Shared resource queue for simultaneous multithreading processing wherein entries allocated to different threads are capable of being interspersed among each other and a head pointer for one thread is capable of wrapping around its own tail in order to access a free entry
US20030009654A1 (en) * 2001-06-29 2003-01-09 Nalawadi Rajeev K. Computer system having a single processor equipped to serve as multiple logical processors for pre-boot software to execute pre-boot tasks in parallel
US20030074390A1 (en) * 2001-10-12 2003-04-17 Hudson Richard L. Hardware to support non-blocking synchronization
US6901507B2 (en) * 2001-11-19 2005-05-31 Intel Corporation Context scheduling
US20030126379A1 (en) * 2001-12-31 2003-07-03 Shiv Kaushik Instruction sequences for suspending execution of a thread until a specified memory access occurs
US20030126416A1 (en) * 2001-12-31 2003-07-03 Marr Deborah T. Suspending execution of a thread in a multi-threaded processor
US7127561B2 (en) * 2001-12-31 2006-10-24 Intel Corporation Coherency techniques for suspending execution of a thread until a specified memory access occurs
US7363474B2 (en) * 2001-12-31 2008-04-22 Intel Corporation Method and apparatus for suspending execution of a thread until a specified memory access occurs
US6892331B2 (en) * 2002-01-17 2005-05-10 International Business Machines Corporation Method and system for error detection in a managed application environment
US7471688B2 (en) * 2002-06-18 2008-12-30 Intel Corporation Scheduling system for transmission of cells to ATM virtual circuits and DSL ports
US7321369B2 (en) * 2002-08-30 2008-01-22 Intel Corporation Method and apparatus for synchronizing processing of multiple asynchronous client queues on a graphics controller device
US7653906B2 (en) * 2002-10-23 2010-01-26 Intel Corporation Apparatus and method for reducing power consumption on simultaneous multi-threading systems
US7433307B2 (en) * 2002-11-05 2008-10-07 Intel Corporation Flow control in a network environment
US7216346B2 (en) 2002-12-31 2007-05-08 International Business Machines Corporation Method and apparatus for managing thread execution in a multithread application
US7822950B1 (en) 2003-01-22 2010-10-26 Ubicom, Inc. Thread cancellation and recirculation in a computer processor for avoiding pipeline stalls
US7487502B2 (en) * 2003-02-19 2009-02-03 Intel Corporation Programmable event driven yield mechanism which may activate other threads
US7849465B2 (en) * 2003-02-19 2010-12-07 Intel Corporation Programmable event driven yield mechanism which may activate service threads
US7587584B2 (en) * 2003-02-19 2009-09-08 Intel Corporation Mechanism to exploit synchronization overhead to improve multithreaded performance
US8762694B1 (en) 2003-02-19 2014-06-24 Intel Corporation Programmable event-driven yield mechanism
TWI261198B (en) * 2003-02-20 2006-09-01 Samsung Electronics Co Ltd Simultaneous multi-threading processor circuits and computer program products configured to operate at different performance levels based on a number of operating threads and methods of operating
US7076616B2 (en) * 2003-03-24 2006-07-11 Sony Corporation Application pre-launch to reduce user interface latency
US7213135B2 (en) * 2003-04-24 2007-05-01 International Business Machines Corporation Method using a dispatch flush in a simultaneous multithread processor to resolve exception conditions
US7114042B2 (en) * 2003-05-22 2006-09-26 International Business Machines Corporation Method to provide atomic update primitives in an asymmetric heterogeneous multiprocessor environment
US7653912B2 (en) * 2003-05-30 2010-01-26 Steven Frank Virtual processor methods and apparatus with unified event notification and consumer-producer memory operations
GB2441903B (en) * 2003-06-27 2008-04-30 Intel Corp Queued locks using monitor-memory wait
US7213093B2 (en) * 2003-06-27 2007-05-01 Intel Corporation Queued locks using monitor-memory wait
US7870553B2 (en) * 2003-08-28 2011-01-11 Mips Technologies, Inc. Symmetric multiprocessor operating system for execution on non-independent lightweight thread contexts
US7594089B2 (en) * 2003-08-28 2009-09-22 Mips Technologies, Inc. Smart memory based synchronization controller for a multi-threaded multiprocessor SoC
US20050050305A1 (en) * 2003-08-28 2005-03-03 Kissell Kevin D. Integrated mechanism for suspension and deallocation of computational threads of execution in a processor
US7836450B2 (en) * 2003-08-28 2010-11-16 Mips Technologies, Inc. Symmetric multiprocessor operating system for execution on non-independent lightweight thread contexts
US9032404B2 (en) 2003-08-28 2015-05-12 Mips Technologies, Inc. Preemptive multitasking employing software emulation of directed exceptions in a multithreading processor
EP1660998A1 (en) * 2003-08-28 2006-05-31 MIPS Technologies, Inc. Mechanisms for dynamic configuration of virtual processor resources
US7849297B2 (en) 2003-08-28 2010-12-07 Mips Technologies, Inc. Software emulation of directed exceptions in a multithreading processor
US7376954B2 (en) * 2003-08-28 2008-05-20 Mips Technologies, Inc. Mechanisms for assuring quality of service for programs executing on a multithreaded processor
US7418585B2 (en) * 2003-08-28 2008-08-26 Mips Technologies, Inc. Symmetric multiprocessor operating system for execution on non-independent lightweight thread contexts
US7711931B2 (en) * 2003-08-28 2010-05-04 Mips Technologies, Inc. Synchronized storage providing multiple synchronization semantics
US7614056B1 (en) 2003-09-12 2009-11-03 Sun Microsystems, Inc. Processor specific dispatching in a heterogeneous configuration
WO2005060575A2 (en) * 2003-12-10 2005-07-07 X1 Technologies, Inc. Performing operations in response to detecting a computer idle condition
US7310722B2 (en) 2003-12-18 2007-12-18 Nvidia Corporation Across-thread out of order instruction dispatch in a multithreaded graphics processor
US8694976B2 (en) * 2003-12-19 2014-04-08 Intel Corporation Sleep state mechanism for virtual multithreading
US9323571B2 (en) * 2004-02-06 2016-04-26 Intel Corporation Methods for reducing energy consumption of buffered applications using simultaneous multi-threading processor
JP4376692B2 (en) * 2004-04-30 2009-12-02 富士通株式会社 Information processing device, processor, processor control method, information processing device control method, cache memory
US20050283783A1 (en) * 2004-06-22 2005-12-22 Desota Donald R Method for optimizing pipeline use in a multiprocessing system
US7206903B1 (en) * 2004-07-20 2007-04-17 Sun Microsystems, Inc. Method and apparatus for releasing memory locations during transactional execution
US8074030B1 (en) 2004-07-20 2011-12-06 Oracle America, Inc. Using transactional memory with early release to implement non-blocking dynamic-sized data structure
US7703098B1 (en) 2004-07-20 2010-04-20 Sun Microsystems, Inc. Technique to allow a first transaction to wait on condition that affects its working set
JP4487744B2 (en) * 2004-11-29 2010-06-23 富士通株式会社 Multi-thread control device and control method
US7600101B2 (en) * 2005-01-13 2009-10-06 Hewlett-Packard Development Company, L.P. Multithreaded hardware systems and methods
US7490230B2 (en) * 2005-02-04 2009-02-10 Mips Technologies, Inc. Fetch director employing barrel-incrementer-based round-robin apparatus for use in multithreading microprocessor
US7631130B2 (en) * 2005-02-04 2009-12-08 Mips Technologies, Inc Barrel-incrementer-based round-robin apparatus and instruction dispatch scheduler employing same for use in multithreading microprocessor
US7657891B2 (en) * 2005-02-04 2010-02-02 Mips Technologies, Inc. Multithreading microprocessor with optimized thread scheduler for increasing pipeline utilization efficiency
US7853777B2 (en) * 2005-02-04 2010-12-14 Mips Technologies, Inc. Instruction/skid buffers in a multithreading microprocessor that store dispatched instructions to avoid re-fetching flushed instructions
CN102184123B (en) * 2005-03-02 2013-10-16 英特尔公司 multithread processer for additionally support virtual multi-thread and system
US7856636B2 (en) * 2005-05-10 2010-12-21 Hewlett-Packard Development Company, L.P. Systems and methods of sharing processing resources in a multi-threading environment
US20070043916A1 (en) * 2005-08-16 2007-02-22 Aguilar Maximino Jr System and method for light weight task switching when a shared memory condition is signaled
US8766995B2 (en) 2006-04-26 2014-07-01 Qualcomm Incorporated Graphics system with configurable caches
US20070268289A1 (en) * 2006-05-16 2007-11-22 Chun Yu Graphics system with dynamic reposition of depth engine
US8884972B2 (en) 2006-05-25 2014-11-11 Qualcomm Incorporated Graphics processor with arithmetic and elementary function units
US8973094B2 (en) * 2006-05-26 2015-03-03 Intel Corporation Execution of a secured environment initialization instruction on a point-to-point interconnect system
US8869147B2 (en) 2006-05-31 2014-10-21 Qualcomm Incorporated Multi-threaded processor with deferred thread output control
US8644643B2 (en) * 2006-06-14 2014-02-04 Qualcomm Incorporated Convolution filtering in a graphics processor
US8766996B2 (en) * 2006-06-21 2014-07-01 Qualcomm Incorporated Unified virtual addressed register file
US7882381B2 (en) * 2006-06-29 2011-02-01 Intel Corporation Managing wasted active power in processors based on loop iterations and number of instructions executed since last loop
US8020166B2 (en) * 2007-01-25 2011-09-13 Hewlett-Packard Development Company, L.P. Dynamically controlling the number of busy waiters in a synchronization object
US20080222399A1 (en) * 2007-03-05 2008-09-11 International Business Machines Corporation Method for the handling of mode-setting instructions in a multithreaded computing environment
US20080270653A1 (en) * 2007-04-26 2008-10-30 Balle Susanne M Intelligent resource management in multiprocessor computer systems
US8219845B2 (en) * 2007-05-09 2012-07-10 Microsoft Corporation Timer service uses a single timer function to perform timing services for both relative and absolute timers
JP5043560B2 (en) * 2007-08-24 2012-10-10 パナソニック株式会社 Program execution control device
US20090063881A1 (en) * 2007-08-31 2009-03-05 Mips Technologies, Inc. Low-overhead/power-saving processor synchronization mechanism, and applications thereof
US8190624B2 (en) * 2007-11-29 2012-05-29 Microsoft Corporation Data parallel production and consumption
US8407425B2 (en) * 2007-12-28 2013-03-26 Intel Corporation Obscuring memory access patterns in conjunction with deadlock detection or avoidance
US9081687B2 (en) * 2007-12-28 2015-07-14 Intel Corporation Method and apparatus for MONITOR and MWAIT in a distributed cache architecture
US8380907B2 (en) * 2008-02-26 2013-02-19 International Business Machines Corporation Method, system and computer program product for providing filtering of GUEST2 quiesce requests
US8527715B2 (en) * 2008-02-26 2013-09-03 International Business Machines Corporation Providing a shared memory translation facility
US8032716B2 (en) * 2008-02-26 2011-10-04 International Business Machines Corporation System, method and computer program product for providing a new quiesce state
US8140834B2 (en) * 2008-02-26 2012-03-20 International Business Machines Corporation System, method and computer program product for providing a programmable quiesce filtering register
US8458438B2 (en) * 2008-02-26 2013-06-04 International Business Machines Corporation System, method and computer program product for providing quiesce filtering for shared memory
US8239871B2 (en) * 2008-06-24 2012-08-07 International Business Machines Corporation Managing timeout in a multithreaded system by instantiating a timer object having scheduled expiration time and set of timeout handling information
US20110072247A1 (en) * 2009-09-21 2011-03-24 International Business Machines Corporation Fast application programmable timers
GB2517494B (en) * 2013-08-23 2021-02-24 Advanced Risc Mach Ltd Handling time intensive instructions
US10489200B2 (en) * 2013-10-23 2019-11-26 Nvidia Corporation Hierarchical staging areas for scheduling threads for execution
US10025608B2 (en) 2014-11-17 2018-07-17 International Business Machines Corporation Quiesce handling in multithreaded environments
US9678830B2 (en) * 2014-11-17 2017-06-13 International Business Machines Corporation Recovery improvement for quiesced systems
US9898351B2 (en) 2015-12-24 2018-02-20 Intel Corporation Method and apparatus for user-level thread synchronization with a monitor and MWAIT architecture
US10956157B1 (en) 2018-03-06 2021-03-23 Advanced Micro Devices, Inc. Taint protection during speculative execution
WO2020121416A1 (en) * 2018-12-11 2020-06-18 サンケン電気株式会社 Processor and pipeline processing method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4642756A (en) * 1985-03-15 1987-02-10 S & H Computer Systems, Inc. Method and apparatus for scheduling the execution of multiple processing tasks in a computer system
US4710883A (en) * 1985-03-12 1987-12-01 Pitney Bowes Inc. Electronic postage meter having a status monitor
US5524247A (en) * 1992-01-30 1996-06-04 Kabushiki Kaisha Toshiba System for scheduling programming units to a resource based on status variables indicating a lock or lock-wait state thereof
US5546593A (en) * 1992-05-18 1996-08-13 Matsushita Electric Industrial Co., Ltd. Multistream instruction processor able to reduce interlocks by having a wait state for an instruction stream
US5790851A (en) * 1997-04-15 1998-08-04 Oracle Corporation Method of sequencing lock call requests to an O/S to avoid spinlock contention within a multi-processor environment
US5872963A (en) * 1997-02-18 1999-02-16 Silicon Graphics, Inc. Resumption of preempted non-privileged threads with no kernel intervention
US5937187A (en) * 1996-07-01 1999-08-10 Sun Microsystems, Inc. Method and apparatus for execution and preemption control of computer process entities
US5961584A (en) * 1994-12-09 1999-10-05 Telefonaktiebolaget Lm Ericsson System for managing internal execution threads
US6128640A (en) * 1996-10-03 2000-10-03 Sun Microsystems, Inc. Method and apparatus for user-level support for multiple event synchronization
US6256659B1 (en) * 1997-12-09 2001-07-03 Mci Communications Corporation System and method for performing hybrid preemptive and cooperative multi-tasking in a computer system
US6505229B1 (en) * 1998-09-25 2003-01-07 Intelect Communications, Inc. Method for allowing multiple processing threads and tasks to execute on one or more processor units for embedded real-time processor systems

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0588918A (en) * 1991-04-30 1993-04-09 Internatl Business Mach Corp <Ibm> Method of avoiding waste of machine-cycle

Cited By (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7152170B2 (en) * 2003-02-20 2006-12-19 Samsung Electronics Co., Ltd. Simultaneous multi-threading processor circuits and computer program products configured to operate at different performance levels based on a number of operating threads and methods of operating
US20040168039A1 (en) * 2003-02-20 2004-08-26 Gi-Ho Park Simultaneous Multi-Threading Processor circuits and computer program products configured to operate at different performance levels based on a number of operating threads and methods of operating
US20040215933A1 (en) * 2003-04-23 2004-10-28 International Business Machines Corporation Mechanism for effectively handling livelocks in a simultaneous multithreading processor
US7000047B2 (en) * 2003-04-23 2006-02-14 International Business Machines Corporation Mechanism for effectively handling livelocks in a simultaneous multithreading processor
US8607241B2 (en) 2004-06-30 2013-12-10 Intel Corporation Compare and exchange operation using sleep-wakeup mechanism
JP2009151793A (en) * 2004-06-30 2009-07-09 Intel Corp Compare and exchange operation using sleep-wakeup mechanism
EP1612661A3 (en) * 2004-06-30 2007-09-19 Intel Corporation Compare-and-exchange operation using sleep-wakeup mechanism
EP1612661A2 (en) * 2004-06-30 2006-01-04 Intel Corporation Compare-and-exchange operation using sleep-wakeup mechanism
US9733937B2 (en) 2004-06-30 2017-08-15 Intel Corporation Compare and exchange operation using sleep-wakeup mechanism
US7257679B2 (en) 2004-10-01 2007-08-14 Advanced Micro Devices, Inc. Sharing monitored cache lines across multiple cores
WO2006039162A3 (en) * 2004-10-01 2007-03-15 Advanced Micro Devices Inc Sharing monitored cache lines across multiple cores
US20080133885A1 (en) * 2005-08-29 2008-06-05 Centaurus Data Llc Hierarchical multi-threading processor
US8028152B2 (en) * 2005-08-29 2011-09-27 The Invention Science Fund I, Llc Hierarchical multi-threading processor for executing virtual threads in a time-multiplexed fashion
US20080244231A1 (en) * 2007-03-30 2008-10-02 Aaron Kunze Method and apparatus for speculative prefetching in a multi-processor/multi-core message-passing machine
US7937532B2 (en) 2007-03-30 2011-05-03 Intel Corporation Method and apparatus for speculative prefetching in a multi-processor/multi-core message-passing machine
US20100100708A1 (en) * 2007-06-20 2010-04-22 Fujitsu Limited Processing device
EP2192483A1 (en) * 2007-06-20 2010-06-02 Fujitsu Limited Processing device
US8291195B2 (en) 2007-06-20 2012-10-16 Fujitsu Limited Processing device
EP2192483A4 (en) * 2007-06-20 2011-05-04 Fujitsu Ltd Processing device
US20090055829A1 (en) * 2007-08-24 2009-02-26 Gibson Gary A Method and apparatus for fine grain performance management of computer systems
US20130191841A1 (en) * 2007-08-24 2013-07-25 Virtualmetrix, Inc. Method and Apparatus For Fine Grain Performance Management of Computer Systems
US8397236B2 (en) * 2007-08-24 2013-03-12 Virtualmetrix, Inc. Credit based performance management of computer systems
US20090199184A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Wake-and-Go Mechanism With Software Save of Thread State
US8732683B2 (en) 2008-02-01 2014-05-20 International Business Machines Corporation Compiler providing idiom to idiom accelerator
US10235215B2 (en) 2008-02-01 2019-03-19 International Business Machines Corporation Memory lock mechanism for a multiprocessor system
US20090198920A1 (en) * 2008-02-01 2009-08-06 Arimilli Lakshminarayana B Processing Units Within a Multiprocessor System Adapted to Support Memory Locks
US20090199028A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Wake-and-Go Mechanism with Data Exclusivity
US20100287341A1 (en) * 2008-02-01 2010-11-11 Arimilli Ravi K Wake-and-Go Mechanism with System Address Bus Transaction Master
US20100293340A1 (en) * 2008-02-01 2010-11-18 Arimilli Ravi K Wake-and-Go Mechanism with System Bus Response
US20090198962A1 (en) * 2008-02-01 2009-08-06 Levitan David S Data processing system, processor and method of data processing having branch target address cache including address type tag bit
US20090198695A1 (en) * 2008-02-01 2009-08-06 Arimilli Lakshminarayana B Method and Apparatus for Supporting Distributed Computing Within a Multiprocessor System
US20110173593A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Compiler Providing Idiom to Idiom Accelerator
US20110173419A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Look-Ahead Wake-and-Go Engine With Speculative Execution
US20110173423A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Look-Ahead Hardware Wake-and-Go Mechanism
US20110173630A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Central Repository for Wake-and-Go Mechanism
US20110173417A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Programming Idiom Accelerators
US20110173625A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Wake-and-Go Mechanism with Prioritization of Threads
US20110173631A1 (en) * 2008-02-01 2011-07-14 Arimilli Ravi K Wake-and-Go Mechanism for a Data Processing System
US20090199189A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Parallel Lock Spinning Using Wake-and-Go Mechanism
US8880853B2 (en) * 2008-02-01 2014-11-04 International Business Machines Corporation CAM-based wake-and-go snooping engine for waking a thread put to sleep for spinning on a target address lock
US8788795B2 (en) 2008-02-01 2014-07-22 International Business Machines Corporation Programming idiom accelerator to examine pre-fetched instruction streams for multiple processors
US20090198916A1 (en) * 2008-02-01 2009-08-06 Arimilli Lakshminarayana B Method and Apparatus for Supporting Low-Overhead Memory Locks Within a Multiprocessor System
US8725992B2 (en) 2008-02-01 2014-05-13 International Business Machines Corporation Programming language exposing idiom calls to a programming idiom accelerator
US8127080B2 (en) 2008-02-01 2012-02-28 International Business Machines Corporation Wake-and-go mechanism with system address bus transaction master
US8145849B2 (en) 2008-02-01 2012-03-27 International Business Machines Corporation Wake-and-go mechanism with system bus response
US8640142B2 (en) 2008-02-01 2014-01-28 International Business Machines Corporation Wake-and-go mechanism with dynamic allocation in hardware private array
US8171476B2 (en) 2008-02-01 2012-05-01 International Business Machines Corporation Wake-and-go mechanism with prioritization of threads
US8214603B2 (en) 2008-02-01 2012-07-03 International Business Machines Corporation Method and apparatus for handling multiple memory requests within a multiprocessor system
US8225120B2 (en) 2008-02-01 2012-07-17 International Business Machines Corporation Wake-and-go mechanism with data exclusivity
US8640141B2 (en) 2008-02-01 2014-01-28 International Business Machines Corporation Wake-and-go mechanism with hardware private array
US8250396B2 (en) 2008-02-01 2012-08-21 International Business Machines Corporation Hardware wake-and-go mechanism for a data processing system
US20090199029A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Wake-and-Go Mechanism with Data Monitoring
US8312458B2 (en) 2008-02-01 2012-11-13 International Business Machines Corporation Central repository for wake-and-go mechanism
US8316218B2 (en) 2008-02-01 2012-11-20 International Business Machines Corporation Look-ahead wake-and-go engine with speculative execution
US8341635B2 (en) 2008-02-01 2012-12-25 International Business Machines Corporation Hardware wake-and-go mechanism with look-ahead polling
US8386822B2 (en) 2008-02-01 2013-02-26 International Business Machines Corporation Wake-and-go mechanism with data monitoring
US20090199183A1 (en) * 2008-02-01 2009-08-06 Arimilli Ravi K Wake-and-Go Mechanism with Hardware Private Array
US8452947B2 (en) 2008-02-01 2013-05-28 International Business Machines Corporation Hardware wake-and-go mechanism and content addressable memory with instruction pre-fetch look-ahead to detect programming idioms
US8612977B2 (en) 2008-02-01 2013-12-17 International Business Machines Corporation Wake-and-go mechanism with software save of thread state
US20090199197A1 (en) * 2008-02-01 2009-08-06 International Business Machines Corporation Wake-and-Go Mechanism with Dynamic Allocation in Hardware Private Array
US8516484B2 (en) 2008-02-01 2013-08-20 International Business Machines Corporation Wake-and-go mechanism for a data processing system
US20100268915A1 (en) * 2009-04-16 2010-10-21 International Business Machines Corporation Remote Update Programming Idiom Accelerator with Allocated Processor Resources
US8230201B2 (en) * 2009-04-16 2012-07-24 International Business Machines Corporation Migrating sleeping and waking threads between wake-and-go mechanisms in a multiple processor data processing system
US20100268790A1 (en) * 2009-04-16 2010-10-21 International Business Machines Corporation Complex Remote Update Programming Idiom Accelerator
US20100269115A1 (en) * 2009-04-16 2010-10-21 International Business Machines Corporation Managing Threads in a Wake-and-Go Engine
US8886919B2 (en) 2009-04-16 2014-11-11 International Business Machines Corporation Remote update programming idiom accelerator with allocated processor resources
US8082315B2 (en) 2009-04-16 2011-12-20 International Business Machines Corporation Programming idiom accelerator for remote update
US8145723B2 (en) 2009-04-16 2012-03-27 International Business Machines Corporation Complex remote update programming idiom accelerator
US20110238919A1 (en) * 2010-03-26 2011-09-29 Gary Allen Gibson Control of processor cache memory occupancy
US20110239220A1 (en) * 2010-03-26 2011-09-29 Gary Allen Gibson Fine grain performance resource management of computer systems
US8782653B2 (en) 2010-03-26 2014-07-15 Virtualmetrix, Inc. Fine grain performance resource management of computer systems
US8677071B2 (en) 2010-03-26 2014-03-18 Virtualmetrix, Inc. Control of processor cache memory occupancy
US8650337B2 (en) 2010-06-23 2014-02-11 International Business Machines Corporation Runtime determination of translation formats for adapter functions
US8639858B2 (en) 2010-06-23 2014-01-28 International Business Machines Corporation Resizing address spaces concurrent to accessing the address spaces
US8504754B2 (en) 2010-06-23 2013-08-06 International Business Machines Corporation Identification of types of sources of adapter interruptions
US8510599B2 (en) 2010-06-23 2013-08-13 International Business Machines Corporation Managing processing associated with hardware events
US8615645B2 (en) 2010-06-23 2013-12-24 International Business Machines Corporation Controlling the selectively setting of operational parameters for an adapter
US8621112B2 (en) 2010-06-23 2013-12-31 International Business Machines Corporation Discovery by operating system of information relating to adapter functions accessible to the operating system
US8626970B2 (en) 2010-06-23 2014-01-07 International Business Machines Corporation Controlling access by a configuration to an adapter function
US8631222B2 (en) 2010-06-23 2014-01-14 International Business Machines Corporation Translation of input/output addresses to memory addresses
US8635430B2 (en) 2010-06-23 2014-01-21 International Business Machines Corporation Translation of input/output addresses to memory addresses
US8458387B2 (en) 2010-06-23 2013-06-04 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification to a guest operating system
US8468284B2 (en) 2010-06-23 2013-06-18 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification to a guest operating system
US9213661B2 (en) 2010-06-23 2015-12-15 International Business Machines Corporation Enable/disable adapters of a computing environment
US8505032B2 (en) 2010-06-23 2013-08-06 International Business Machines Corporation Operating system notification of actions to be taken responsive to adapter events
US8650335B2 (en) 2010-06-23 2014-02-11 International Business Machines Corporation Measurement facility for adapter functions
US8601497B2 (en) 2010-06-23 2013-12-03 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification
US9626298B2 (en) 2010-06-23 2017-04-18 International Business Machines Corporation Translation of input/output addresses to memory addresses
US9383931B2 (en) 2010-06-23 2016-07-05 International Business Machines Corporation Controlling the selectively setting of operational parameters for an adapter
US8572635B2 (en) 2010-06-23 2013-10-29 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification
US8566480B2 (en) 2010-06-23 2013-10-22 International Business Machines Corporation Load instruction for communicating with adapters
US9342352B2 (en) 2010-06-23 2016-05-17 International Business Machines Corporation Guest access to address spaces of adapter
US8549182B2 (en) 2010-06-23 2013-10-01 International Business Machines Corporation Store/store block instructions for communicating with adapters
US8478922B2 (en) 2010-06-23 2013-07-02 International Business Machines Corporation Controlling a rate at which adapter interruption requests are processed
US9134911B2 (en) 2010-06-23 2015-09-15 International Business Machines Corporation Store peripheral component interconnect (PCI) function controls instruction
US9195623B2 (en) 2010-06-23 2015-11-24 International Business Machines Corporation Multiple address spaces per adapter with address translation
US8607032B2 (en) 2010-06-24 2013-12-10 International Business Machines Corporation Diagnose instruction for serializing processing
US8595469B2 (en) 2010-06-24 2013-11-26 International Business Machines Corporation Diagnose instruction for serializing processing
WO2011160718A1 (en) * 2010-06-24 2011-12-29 International Business Machines Corporation Diagnose instruction for serializing processing
US9632780B2 (en) 2010-06-24 2017-04-25 International Business Machines Corporation Diagnose instruction for serializing processing
US8868843B2 (en) 2011-11-30 2014-10-21 Advanced Micro Devices, Inc. Hardware filter for tracking block presence in large caches
US9218288B2 (en) * 2012-06-15 2015-12-22 International Business Machines Corporation Monitoring a value in storage without repeated storage access
US9274957B2 (en) * 2012-06-15 2016-03-01 International Business Machines Corporation Monitoring a value in storage without repeated storage access
US20130339630A1 (en) * 2012-06-15 2013-12-19 International Business Machines Corporation Monitoring a value in storage without repeated storage access
US20130339627A1 (en) * 2012-06-15 2013-12-19 International Business Machines Corporation Monitoring a value in storage without repeated storage access
US10289516B2 (en) 2016-12-29 2019-05-14 Intel Corporation NMONITOR instruction for monitoring a plurality of addresses
US10394678B2 (en) 2016-12-29 2019-08-27 Intel Corporation Wait and poll instructions for monitoring a plurality of addresses

Also Published As

Publication number Publication date
US6493741B1 (en) 2002-12-10
US20030105944A1 (en) 2003-06-05
US6675192B2 (en) 2004-01-06

Similar Documents

Publication Publication Date Title
US6675192B2 (en) Temporary halting of thread execution until monitoring of armed events to memory location identified in working registers
CN101833475B (en) Method and device for execution of instruction block
EP1839146B1 (en) Mechanism to schedule threads on os-sequestered without operating system intervention
US7584332B2 (en) Computer systems with lightweight multi-threaded architectures
US8341635B2 (en) Hardware wake-and-go mechanism with look-ahead polling
US8640129B2 (en) Hardware multithreading systems and methods
RU2233470C2 (en) Method and device for blocking synchronization signal in multithreaded processor
US8612977B2 (en) Wake-and-go mechanism with software save of thread state
US6785887B2 (en) Technique for using shared resources on a multi-threaded processor
US20090144742A1 (en) Method, system and computer program to optimize deterministic event record and replay
US20110173625A1 (en) Wake-and-Go Mechanism with Prioritization of Threads
US20110173423A1 (en) Look-Ahead Hardware Wake-and-Go Mechanism
JP2009151793A (en) Compare and exchange operation using sleep-wakeup mechanism
JP2006040142A (en) Processor system and thread switching control method
CN107003896B (en) Apparatus with shared transaction processing resources and data processing method
JP2003516570A (en) Method and apparatus for entering and exiting multiple threads in a multithreaded processor
Woo et al. Catnap: A Backoff Scheme for Kernel Spinlocks in Many-Core Systems
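Many of the documents listed above (for example, the sleep-wakeup compare-and-exchange, wake-and-go, and storage-monitoring citations) revolve around the same basic pattern: a thread quiesces until a monitored memory location changes, instead of spinning on it. The following is a minimal user-space sketch of that pattern on Linux using the futex system call; it is an illustrative analogue only, not the mechanism claimed in this application or in any of the cited patents.

#include <limits.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Quiesce the calling thread until *addr no longer holds `expected`.
 * FUTEX_WAIT atomically rechecks the value before sleeping, so a change
 * that races with the call does not cause a lost wakeup. */
static void wait_for_change(atomic_int *addr, int expected) {
    while (atomic_load(addr) == expected) {
        syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
    }
}

/* Publish a new value and wake any threads quiesced on `addr`. */
static void signal_change(atomic_int *addr, int new_value) {
    atomic_store(addr, new_value);
    syscall(SYS_futex, addr, FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
}

The hardware approaches described in the citations (e.g., MONITOR/MWAIT-style instructions or wake-and-go engines) move this wait out of the kernel and into the processor, so the waiting thread's pipeline resources can be released to other threads until the monitored location is written.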

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P.;REEL/FRAME:017096/0511

Effective date: 20021001

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION