US20030065894A1 - Technique for implementing a distributed lock in a processor-based device - Google Patents


Info

Publication number
US20030065894A1
US20030065894A1 (application US09/966,503)
Authority
US
United States
Prior art keywords
lock
token
location
waiter
requester
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/966,503
Other versions
US6694411B2 (en
Inventor
Thomas Bonola
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US09/966,503 priority Critical patent/US6694411B2/en
Assigned to COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P. reassignment COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BONOLA, THOMAS J.
Publication of US20030065894A1 publication Critical patent/US20030065894A1/en
Application granted granted Critical
Publication of US6694411B2 publication Critical patent/US6694411B2/en
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P.
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP reassignment HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Adjusted expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/526Mutual exclusion algorithms

Definitions

  • the present invention relates generally to processor-based devices and, more particularly, to a technique for implementing a lock that controls access to a shared resource accessible by a plurality of requesters in a processor-based device.
  • in many servers, multiple requesters (e.g., software threads, processors, hardware, etc.) may contend for access to shared resources, such as memory.
  • care must be taken in a system that provides for concurrent access to a shared resource to ensure that a requester is accessing valid data.
  • a requester that has control of the resource may be interrupted, thus providing yet further opportunity for another requester to alter the contents of the shared resource. Without some sort of scheme to govern requests for access to a shared resource, data processing errors or unrecoverable faults may occur.
  • multiple requests to a shared resource typically are governed by an arbitration scheme which grants only one requester at a time access to the shared resource.
  • the arbitration scheme typically results in a lock being placed on the critical region of the shared resource such that the other requesters are blocked until the current requester has completed the operation and released the lock.
  • Such arbitration schemes become less effective as the number of requesters increases, as each requester must wait its turn to access the resource. Further, because the acts of acquiring and releasing the lock may result in communications being transmitted to each of the other waiting requesters, consumption of bus bandwidth and latency increase. Thus, these arbitration schemes may not readily scale to execution environments in which a large number of concurrent requests to a shared resource are possible.
  • a lock to a particular shared resource typically is implemented as a memory location in the memory subsystem of the server or other processor-based device.
  • a requester examines the appropriate field in the memory location to determine whether ownership of the lock is available.
  • the memory location for implementing the lock may include a lock bit that is set (i.e., set to a logical “1” state) when the lock is owned and cleared (i.e., set to a logical “0” state) when the lock is available. If the lock is available, the requester sets the lock bit to the owned state and acquires the lock.
  • because a variable in the memory location is altered when the requester acquires the lock, a communication must be sent to all requesters that have access to that memory location and, thus, to any cache memory line that may be affected by the change.
  • each of the waiting requesters repeatedly examines the state of the lock bit to determine whether the lock has been released.
  • when the lock is released, ownership of the lock is acquired by the first waiting requester that happens to reach the lock bit; thus, passing of the ownership of the lock may not be performed in a particularly fair manner between waiting requesters having the same priority.
  • release of the lock involves changing the state of the lock bit, which again results in a communication that is sent to all requesters having access to the memory location.
  • the present invention may be directed to addressing one or more of the problems set forth above.
  • FIG. 1 illustrates a block diagram of an exemplary processor-based device
  • FIG. 2 illustrates a block diagram of another exemplary processor-based device
  • FIG. 3 illustrates an exemplary embodiment of a memory structure for implementing a lock that may be employed in the processor-based devices shown in FIGS. 1 and 2;
  • FIG. 4 illustrates another exemplary embodiment of a memory structure for implementing a lock;
  • FIG. 5 illustrates a flowchart of an exemplary technique for acquiring ownership of a lock that is implemented using the memory structures shown in FIGS. 3 and 4;
  • FIG. 6 illustrates a flowchart of an exemplary technique for releasing ownership of the lock.
  • the processor-based device 10 is a multi-processor device, such as a server, which includes host processors 12 , 14 , 16 , and 18 coupled to a host bus 20 .
  • the processors 12 , 14 , 16 , and 18 may be any of a variety of types of known processors, such as an x86 or PENTIUM® based processor, an ALPHA® processor, a POWERPC® processor, etc.
  • the host bus 20 is coupled to a host bridge 22 which manages communications between the host bus 20 , a memory bus 24 , and an I/O bus 26 .
  • the memory bus 24 connects the processors 12 , 14 , 16 , and 18 to a shared memory resource 28 , which may include one or more cacheable memory devices, such as ROM, RAM, DRAM, SRAM, etc.
  • each host processor 12 , 14 , 16 , and 18 has access to a local cache memory 13 , 15 , 17 , and 19 , respectively.
  • the I/O bus 26 provides for communications to any of a variety of input/output or peripheral devices 30 (e.g., a modem, printer, etc.), which may be shared among the multiple processors 12 , 14 , 16 , and 18 and which also may have access to the shared memory resource 28 .
  • Various other devices not shown also may be in communication with the processors 12 , 14 , 16 , and 18 .
  • Such other devices may include a user interface having buttons, switches, a keyboard, a mouse, and/or a voice recognition system, for example.
  • FIG. 2 illustrates another exemplary embodiment of a processor-based device 32 (e.g., a server) which may implement the lock technique of the present invention.
  • multiple processing systems 34 , 36 , and 38 are connected to a cache-coherent switch module 40 .
  • Each processing system 34 , 36 , and 38 may include multiple processors (e.g., four processors), and each system 34 , 36 , and 38 may be configured substantially similar to the processor-based device 10 illustrated in FIG. 1.
  • the memory structure comprises an array 42 of cacheable memory locations 44 , 46 , 48 , 50 , and 52 in, for example, the host memory 28 of the processor-based device 10 or in any of the host memories in the processing systems 34 , 36 , or 38 in the processor-based device 32 .
  • the size of each of the memory locations 44 , 46 , 48 , 50 , and 52 corresponds to a size of a cache line. In any particular embodiment, the size of the cache line will be dependent on the cache architecture of the processors 12 , 14 , 16 , and 18 in the processor-based device. Thus, for instance, the size of each of the memory locations 44 , 46 , 48 , 50 , and 52 may be one of 32 bytes, 64 bytes, 128 bytes, etc.
  • only the first quadword of each memory location 44 , 46 , 48 , 50 , and 52 contains data used in the lock acquisition and release scheme of the present invention. These quadwords are represented in FIG. 3 as the fields 54 , 56 , 58 , 60 , and 62 . The remainder of the bits in the memory locations 44 , 46 , 48 , 50 , and 52 may be padded with, for instance, “0's,” to fill out the cache line. In other embodiments, more or fewer bits may be used as may be appropriate.
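  • As a sketch of such a layout, the following C struct pads a single quadword of live data out to a cache line. This is an illustrative reconstruction, not code from the patent: the 64-byte line size is an assumption (the patent allows 32, 64, 128 bytes, etc.), and the type and field names are invented.

```c
#include <stdint.h>
#include <stdalign.h>

/* Assumed cache-line size; the patent allows 32, 64, 128 bytes, etc. */
#define CACHE_LINE 64

/* One memory location of the array: a single quadword of live data
 * (corresponding to a field such as 54 or 56) padded with zeros so the
 * location fills exactly one cache line. */
typedef struct {
    alignas(CACHE_LINE) uint64_t field;          /* the first quadword */
    uint8_t pad[CACHE_LINE - sizeof(uint64_t)];  /* zero padding */
} lock_location_t;
```

Because each location occupies its own line, a write to one location's quadword invalidates only that line, which is the property the rest of the scheme relies on.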
  • in FIG. 3, a total of five memory locations 44 , 46 , 48 , 50 , and 52 are shown in the array 42 , although different embodiments may employ a different number of memory locations, as will be explained below.
  • the memory locations include an Acquire location 44 and four Waiter locations 46 , 48 , 50 , and 52 .
  • the Acquire location 44 stores data in the field 54 which is representative of a token that allows a requester to acquire ownership of the lock. The value of the token retrieved by a requester also establishes the order in which the requester will acquire ownership of the lock.
  • each time a requester attempts to acquire the lock, the requester first retrieves a value of a token from the Acquire location 44 and then increments the value of the token stored at the Acquire location 44 .
  • each successive requester attempting to acquire the lock retrieves a token having a value that is sequential to the value of the token retrieved by the immediately preceding requester. Ownership of the lock is passed to a requester based on the value of the requester's token.
  • because the Acquire memory location 44 spans a cache line and contains only one variable, which is stored in the first quadword (i.e., the field 54 ), the amount of data that must be transmitted to the other requesters when the token value is altered is minimal, because only the first quadword can include any change.
  • the Waiter locations 46 , 48 , 50 , and 52 also include data in only the first quadword or field 56 , 58 , 60 , and 62 , respectively.
  • the data in the first quadword indicates whether a requester that has been assigned to the particular Waiter location may acquire ownership of the lock, as will be described in detail below.
  • the number of Waiter locations 46 , 48 , 50 , and 52 in any array 42 corresponds to at least the number of requesters who have access to the shared resource associated with the lock. In the exemplary embodiment, to facilitate assignment of Waiter locations to each requester, the number of Waiter locations is equal to the number of requesters rounded up to the closest power of two.
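  • A helper for that rounding rule might look like the following; the function name is invented for illustration.

```c
/* Round the number of requesters up to the nearest power of two to get
 * the number of Waiter locations to allocate (e.g., 5 requesters -> 8). */
static unsigned next_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```

Keeping the count a power of two is what makes the mask-based index extraction described next possible.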
  • a particular Waiter location 46 , 48 , 50 , or 52 is assigned to a requester based on the value of the token retrieved by that requester from the field 54 in the Acquire memory location 44 .
  • each Waiter location can be identified by an identifier, such as a line number, and the identifier of the assigned Waiter location can be extracted from the value of the retrieved token as described below. For instance, as previously discussed, the contents of the field 54 in the Acquire location 44 are incremented each time a token is retrieved by a requester. However, because the number of bits in the field 54 bears no relationship to the number of Waiter locations, the token value in field 54 does not directly correspond to a line number of a Waiter location.
  • This problem may be overcome by ensuring that the number of Waiter locations corresponds to a power of two.
  • the line number of the Waiter location can be extracted from the retrieved token value by combining an appropriate mask with the contents of the field 54 .
  • with four Waiter locations, the token value always must correspond to one of four different memory locations.
  • only the two lower bits of the field 54 need be used to maintain a correspondence between the token value and the number of Waiter locations.
  • a mask having all “0's” except for the two lowest bits, which are “1's,” can be combined with the contents of the field 54 using a bit-wise AND operation.
  • similarly, for eight Waiter locations, a mask can be combined with the quadword 54 in a manner which extracts the three lowest bits.
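  • The mask extraction reduces to a single bit-wise AND, sketched below; `nslots` must be a power of two, and the names are illustrative rather than from the patent.

```c
#include <stdint.h>

/* Extract the Waiter-location line number from a retrieved token value.
 * With four Waiter locations the mask (nslots - 1) keeps the two lowest
 * bits; with eight, the three lowest bits. */
static uint64_t waiter_index(uint64_t token, uint64_t nslots) {
    return token & (nslots - 1);   /* nslots must be a power of two */
}
```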
  • having been assigned a Waiter memory location 46 , 48 , 50 , or 52 corresponding to the value of the retrieved token, the requester then waits until the contents of the assigned Waiter location indicate that the requester may acquire ownership of the lock.
  • the lock becomes available when the data stored in the field 56 of the Waiter location 46 match the value of the token that was retrieved by the requester assigned to the Waiter location 46 .
  • availability of lock ownership may be indicated in other appropriate manners.
  • because each Waiter location spans a cache line, only the requester assigned to that Waiter location can have a cache line that may be affected by a change in the contents of the Waiter location.
  • thus, when the data in the field 56 is altered to indicate that ownership of the lock is available, only the requester assigned to the Waiter location 46 is informed of the change, greatly reducing the amount of traffic on the bus.
  • further, because each Waiter location contains only one variable (i.e., in the field 56 , 58 , 60 , or 62 ), the amount of data that is transmitted on the bus when informing the requester of the change also is reduced.
  • Ownership of the lock may become available when a previous requester (i.e., the lock owner) releases the lock.
  • the lock owner releases the lock by altering the contents of the next sequential Waiter location (e.g., Waiter location 48 ) to indicate that ownership now may be acquired by the requester waiting at that Waiter location.
  • the line number of the next sequential Waiter location can be determined by incrementing the token value that had been retrieved by the lock owner and then extracting the identifier for the Waiter location from the incremented token value.
  • the lock owner then may alter the contents (e.g., the field 58 ) of the Waiter location (e.g., Waiter location 48 ) which corresponds to this extracted line number to indicate that lock ownership is available.
  • FIG. 4 illustrates another exemplary embodiment of an array 64 having a plurality of memory locations. Similar to the embodiment of the array 42 illustrated in FIG. 3, the array 64 in FIG. 4 includes the Acquire memory location 44 , and the four Waiter memory locations 46 , 48 , 50 , and 52 . As discussed above, each of the memory locations 44 , 46 , 48 , 50 , and 52 has a size that corresponds to a cache line size for the particular application in which the lock is being implemented.
  • further, each of the locations 44 , 46 , 48 , 50 , and 52 includes only one variable, which is stored in a field 54 , 56 , 58 , 60 , and 62 , respectively (e.g., the first quadword of each of the memory locations).
  • the array 64 in FIG. 4 also includes a Release memory location 66 .
  • the Release location 66 also has a size that corresponds to a cache line size and has only one variable which is stored in a field 68 (e.g., the first quadword).
  • the Release location 66 may be used to store a variable related to the release of the lock.
  • the field 68 may hold a value that corresponds to the identifier of the next sequential Waiter location.
  • FIG. 5 illustrates a flow chart of an exemplary routine for acquiring ownership of a lock that is implemented using, for instance, the memory structures shown in either of FIGS. 3 and 4, and which may be concurrently performed by multiple requesters attempting to acquire the lock.
  • when a current requester attempts to acquire ownership of a lock, it first disables all interrupt events (block 70 ).
  • the current requester retrieves a token from the field 54 in the Acquire memory location 44 in the array 42 or 64 and saves the retrieved value of the token (block 72 ).
  • the current requester also increments the value of the token stored in the field 54 of the Acquire location 44 such that the next lock requester retrieves the next sequential value of the token (block 74 ).
  • the acts of retrieving and incrementing the value of the token are performed atomically, such as by executing a fetch-and-add primitive as illustrated by the dashed line around blocks 72 and 74 .
  • the atomic operation ensures that another requester does not interleave read/write cycles with the current requester between the acts of retrieving the token value and incrementing the token value.
  • each requester will be guaranteed to retrieve a different token value and, thus, will be assigned to a different Waiter location 46 , 48 , 50 , or 52 .
  • the current requester extracts an identifier or line number of the assigned Waiter location from the retrieved token value (block 76 ). In the exemplary embodiment, the current requester extracts the identifier by combining an appropriate mask (as previously described) with the retrieved token value. The current requester then examines the contents of its assigned Waiter location to determine whether ownership of the lock is available. In the exemplary embodiment, for instance, ownership of the lock is determined by comparing the contents of the Waiter location (e.g., the first quadword) with the retrieved token value (block 78 ). If the comparison does not result in a match, then the current requester “waits” or “spins” at the assigned Waiter location until a match results.
  • the current requester may simply keep comparing the contents of its assigned Waiter location to the value of its retrieved token until a match results. Alternatively, the current requester may simply wait for a communication informing the current requester that the contents of the assigned Waiter location have been altered. Because each Waiter location includes only one variable that can be altered, the current requester then knows that if the contents of the assigned Waiter location have been changed, then ownership of the lock must be available.
  • when the contents of the assigned Waiter location match the value of the token retrieved and saved by the current requester, the current requester then may acquire the lock and perform lock operations on the protected region of the shared resource (block 80 ). In the exemplary embodiment, the current requester also increments the value of its retrieved token and stores it as a “Next Waiter” value (block 82 ). For instance, the Next Waiter value may be stored in the field 68 of the Release location 66 . In any event, because the “Next Waiter” value is the incremented value of the current requester's token, the “Next Waiter” value also is the same as the value of the token that was retrieved from the Acquire location 44 by the next requester after the current requester. Accordingly, the “Next Waiter” value can be used to release ownership of the lock to the next requester.
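  • The acquisition flow of blocks 72 through 82 can be sketched with C11 atomics as follows. This is a simplified reconstruction, not code from the patent: the slot count, type, and field names are assumptions, each array element would span a padded cache line in the patent's scheme, and interrupt masking (block 70) is omitted.

```c
#include <stdatomic.h>
#include <stdint.h>

#define NSLOTS 4   /* assumed power-of-two number of Waiter locations */

typedef struct {
    atomic_uint_fast64_t acquire;        /* the Acquire field: next token */
    atomic_uint_fast64_t waiter[NSLOTS]; /* one field per Waiter location */
} dlock_t;

static void dlock_init(dlock_t *l) {
    atomic_init(&l->acquire, 0);
    for (int i = 0; i < NSLOTS; i++)
        atomic_init(&l->waiter[i], 0);   /* waiter[0] == 0 admits token 0 */
}

/* Atomically fetch-and-increment the token (blocks 72/74), derive the
 * assigned Waiter location by masking (block 76), and spin until the
 * location's contents match the saved token (block 78).  The returned
 * token is kept by the owner; token + 1 is the "Next Waiter" value used
 * at release time (block 82). */
static uint64_t dlock_acquire(dlock_t *l) {
    uint64_t token = atomic_fetch_add(&l->acquire, 1);
    while (atomic_load(&l->waiter[token & (NSLOTS - 1)]) != token)
        ;  /* each waiter spins on its own location, not on a shared bit */
    return token;
}
```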
  • FIG. 6 illustrates a flowchart of an exemplary routine for releasing ownership of the lock to the next requester.
  • once the current requester has completed the operations protected by the lock (block 84 ), the current requester is ready to release the lock.
  • the current requester first determines the next requester that should receive ownership of the lock. This determination is accomplished by retrieving the “Next Waiter” value that previously was stored in, for instance, the Release location 66 (block 86 ). The identifier or line number corresponding to the next Waiter location can be extracted from the “Next Waiter” value by applying a mask in the manner previously discussed (block 88 ).
  • once the next Waiter location has been determined, the current requester releases the lock by writing its stored “Next Waiter” value to the next Waiter location (i.e., to the field 58 of the Waiter location 48 ) (block 90 ) and restoring its original interrupt state (block 92 ).
  • when the “Next Waiter” value has been written to the next Waiter location, either the next requester is informed that the contents of its assigned Waiter location have been altered and then can acquire the lock, or the next requester will find that ownership is available the next time it compares the contents of its assigned Waiter location to the value of its retrieved token, because a match will result. In any event, release of the lock by the current requester to the next requester has been accomplished, and the next requester now becomes the lock owner.
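  • The release flow of blocks 84 through 92 reduces to one store into the next Waiter location. The sketch below repeats a minimal acquire so the hand-off can be exercised end to end; as before, the layout and names are assumptions, cache-line padding is elided, and interrupt-state handling is omitted.

```c
#include <stdatomic.h>
#include <stdint.h>

#define NSLOTS 4   /* assumed power-of-two number of Waiter locations */

typedef struct {
    atomic_uint_fast64_t acquire;
    atomic_uint_fast64_t waiter[NSLOTS];
} dlock_t;

static void dlock_init(dlock_t *l) {
    atomic_init(&l->acquire, 0);
    for (int i = 0; i < NSLOTS; i++)
        atomic_init(&l->waiter[i], 0);
}

static uint64_t dlock_acquire(dlock_t *l) {
    uint64_t token = atomic_fetch_add(&l->acquire, 1);
    while (atomic_load(&l->waiter[token & (NSLOTS - 1)]) != token)
        ;
    return token;
}

/* Release (blocks 86-90): the "Next Waiter" value is the owner's token
 * plus one; its low bits select the next Waiter location, and storing the
 * value there is what passes ownership.  Only the next requester's cache
 * line changes, so no other waiter sees any bus traffic. */
static void dlock_release(dlock_t *l, uint64_t token) {
    uint64_t next = token + 1;
    atomic_store(&l->waiter[next & (NSLOTS - 1)], next);
}
```

Note that ownership passes in strict token order, which is the fairness property the background section finds lacking in the lock-bit scheme.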
  • the routines illustrated in FIGS. 5 and 6 may be implemented in software code embedded in a processor-based device, may exist as software code stored on a tangible medium such as a hard drive, a floppy disk, a CD ROM, etc., or may be implemented in silicon in the form of an application specific integrated circuit (ASIC), as well as in any other suitable manner.
  • while the embodiments described above involve processor-based devices which have multiple processors, the invention also is applicable to a single-processor device in which multiple entities (e.g., multiple threads, software, hardware) contend for access to a shared resource.


Abstract

A technique for implementing a distributed lock for a shared resource accessible by a plurality of requesters in a processor-based device. The lock is implemented as an array of memory locations, in which the size of each memory location corresponds to a cache line size. Each requester attempting to acquire the lock is assigned a particular memory location at which to wait until lock ownership is available. Acquisition and release of the lock is facilitated by a token-passing scheme.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field Of The Invention [0001]
  • The present invention relates generally to processor-based devices and, more particularly, to a technique for implementing a lock that controls access to a shared resource accessible by a plurality of requesters in a processor-based device. [0002]
  • 2. Background Of The Related Art [0003]
  • This section is intended to introduce the reader to various aspects of art which may be related to various aspects of the present invention which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art. [0004]
  • The use of computers has increased dramatically over the past few decades. In years past, computers were relatively few in number and primarily used as scientific tools. However, with the advent of standardized architectures and operating systems, computers soon became virtually indispensable tools for a wide variety of business applications. The types of computer systems similarly have evolved over time. For example, early scientific computers typically were stand-alone systems designed to carry out relatively specific tasks and required relatively knowledgeable users. [0005]
  • As computer systems evolved into the business arena, mainframe computers emerged. In mainframe systems, users utilized “dumb” terminals to provide input to and to receive output from the mainframe computer while all processing was done centrally by the mainframe computer. As users desired more autonomy in their choice of computing services, personal computers evolved to provide processing capability on each user's desktop. More recently, personal computers have given rise to relatively powerful computers called servers. Servers are typically multi-processor computers that couple numerous personal computers together in a network. In addition, these powerful servers are also finding applications in various other capacities, such as in the communications and Internet industries. [0006]
  • In many servers, multiple requesters (e.g., software threads, processors, hardware, etc.) may contend for access to shared resources, such as memory. Each time a requester accesses memory, it is likely that the contents of a memory location will be altered. Thus, care must be taken in a system that provides for concurrent access to a shared resource to ensure that a requester is accessing valid data. In addition to problems arising from concurrent requests, a requester that has control of the resource may be interrupted, thus providing yet further opportunity for another requester to alter the contents of the shared resource. Without some sort of scheme to govern requests for access to a shared resource, data processing errors or unrecoverable faults may occur. [0007]
  • In many systems, multiple requests to a shared resource are governed by an arbitration scheme which grants only one requester at a time access to a shared resource. The arbitration scheme typically results in a lock being placed on the critical region of the shared resource such that the other requesters are blocked until the current requester has completed the operation and released the lock. Such arbitration schemes become less effective as the number of requesters increases, as each requester must wait its turn to access the resource. Further, because the acts of acquiring and releasing the lock may result in communications being transmitted to each of the other waiting requesters, consumption of bus bandwidth and latency increase. Thus, these arbitration schemes may not readily scale to execution environments in which a large number of concurrent requests to a shared resource are possible. [0008]
  • In many known arbitration schemes, a lock to a particular shared resource typically is implemented as a memory location in the memory subsystem of the server or other processor-based device. To acquire ownership of the lock, a requester examines the appropriate field in the memory location to determine whether ownership of the lock is available. For instance, the memory location for implementing the lock may include a lock bit that is set (i.e., set to a logical “1” state) when the lock is owned and cleared (i.e., set to a logical “0” state) when the lock is available. If the lock is available, the requester sets the lock bit to the owned state and acquires the lock. However, because a variable in the memory location is altered when the requester acquires the lock, a communication must be sent to all requesters who have access to that memory location and, thus, a cache memory line that may be affected by the change. [0009]
  • While the lock is owned, each of the waiting requesters repeatedly examines the state of the lock bit to determine whether the lock has been released. When the lock is released, ownership of the lock is acquired by the first waiting requester that happens to reach the lock bit. Thus, passing of the ownership of the lock may not be performed in a particularly fair manner between waiting requesters having the same priority. Further, release of the lock involves changing the state of the lock bit, which again results in a communication that is sent to all requesters having access to the memory location. [0010]
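  • The conventional single lock-bit scheme described in the two preceding paragraphs can be sketched as a test-and-set spinlock. This is an illustrative reconstruction, not code from the patent; the names are invented, and C11 atomics stand in for whatever primitive a given processor provides.

```c
#include <stdatomic.h>

/* Conventional lock-bit spinlock: one shared flag that every requester
 * polls.  Each change to the flag invalidates the cache line of every
 * waiting requester -- the scalability problem the distributed lock
 * described below is designed to avoid. */
typedef struct {
    atomic_flag owned;   /* the "lock bit": set while the lock is owned */
} bitlock_t;

static void bitlock_init(bitlock_t *l) {
    atomic_flag_clear(&l->owned);   /* cleared: lock available */
}

/* Returns 1 if the lock was acquired, 0 if it was already owned. */
static int bitlock_try(bitlock_t *l) {
    return !atomic_flag_test_and_set_explicit(&l->owned,
                                              memory_order_acquire);
}

static void bitlock_acquire(bitlock_t *l) {
    while (!bitlock_try(l))
        ;  /* every waiter spins on the same shared line here */
}

static void bitlock_release(bitlock_t *l) {
    /* Clearing the bit again broadcasts an invalidation to all waiters,
     * and whichever waiter reaches the bit first wins -- not fair. */
    atomic_flag_clear_explicit(&l->owned, memory_order_release);
}
```

Every iteration of the spin loop performs a test-and-set on the one shared location, so acquisition and release both generate traffic to all requesters caching that line, as the paragraphs above describe.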
  • Thus, known techniques for implementing a lock for a shared resource are not particularly efficient when utilized in a processor-based device in which a large number of requesters have access to the shared resource. The acts of acquiring and releasing the lock generate a great deal of traffic on the bus, thus having a detrimental effect on latency. Further, the act of passing ownership of the lock to another waiting requester is not necessarily implemented in a fair manner, thus creating uncertainty as to when a particular requester may acquire the lock. [0011]
  • Accordingly, it would be desirable to provide a scheme for arbitrating a lock on a shared resource that would minimize the number of communications transmitted on the bus when the lock is acquired and released. Such a scheme would be particularly useful in environments in which a large number of requesters are contending for access to the shared resource. Further, the scheme would facilitate distributing ownership of the lock in a fair manner. [0012]
  • The present invention may be directed to addressing one or more of the problems set forth above.[0013]
  • DESCRIPTION OF THE DRAWINGS
  • The foregoing and other advantages of the invention will become apparent upon reading the following detailed description and upon reference to the drawings in which: [0014]
  • FIG. 1 illustrates a block diagram of an exemplary processor-based device; [0015]
  • FIG. 2 illustrates a block diagram of another exemplary processor-based device; [0016]
  • FIG. 3 illustrates an exemplary embodiment of a memory structure for implementing a lock that may be employed in the processor-based devices shown in FIGS. 1 and 2; [0017]
  • FIG. 4 illustrates another exemplary embodiment of a memory structure for implementing a lock; [0018]
  • FIG. 5 illustrates a flowchart of an exemplary technique for acquiring ownership of a lock that is implemented using the memory structures shown in FIGS. 3 and 4; and [0019]
  • FIG. 6 illustrates a flowchart of an exemplary technique for releasing ownership of the lock.
  • DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
  • One or more specific embodiments of the present invention will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions are made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. [0020]
  • Turning now to the drawings and referring first to FIG. 1, an exemplary processor-based [0021] device 10 is illustrated in which the innovative distributed lock may be utilized. The processor-based device 10 is a multi-processor device, such as a server, which includes host processors 12, 14, 16, and 18 coupled to a host bus 20. The processors 12, 14, 16, and 18 may be any of a variety of types of known processors, such as an x86 or PENTIUM® based processor, an ALPHA® processor, a POWERPC® processor, etc. The host bus 20 is coupled to a host bridge 22 which manages communications between the host bus 20, a memory bus 24, and an I/O bus 26. The memory bus 24 connects the processors 12, 14, 16, and 18 to a shared memory resource 28, which may include one or more cacheable memory devices, such as ROM, RAM, DRAM, SRAM, etc. In addition to the shared memory resource 28, each host processor 12, 14, 16, and 18 has access to a local cache memory 13, 15, 17, and 19, respectively. The I/O bus 26 provides for communications to any of a variety of input/output or peripheral devices 30 (e.g., a modem, printer, etc.), which may be shared among the multiple processors 12, 14, 16, and 18 and which also may have access to the shared memory resource 28.
  • Various other devices not shown also may be in communication with the [0022] processors 12, 14, 16, and 18. Such other devices may include a user interface having buttons, switches, a keyboard, a mouse, and/or a voice recognition system, for example.
  • FIG. 2 illustrates another exemplary embodiment of a processor-based device [0023] 32 (e.g., a server) which may implement the lock technique of the present invention. In this embodiment, multiple processing systems 34, 36, and 38 are connected to a cache-coherent switch module 40. Each processing system 34, 36, and 38 may include multiple processors (e.g., four processors), and each system 34, 36, and 38 may be configured substantially similar to the processor-based device 10 illustrated in FIG. 1.
  • In the embodiments of a processor-based device illustrated in FIGS. 1 and 2, it can be seen that it is possible to have several entities concurrently attempting to access a shared resource. Arbitration schemes which are implemented via the use of locks generally have a detrimental effect on latency. Further, such schemes are quite intrusive on the buses and the host bridge or switch module of the processor-based device because the schemes involve the exchange of many communications between the entities having access to the shared resource. For example, each time a requester attempts to acquire a lock, a message is sent to all other entities having access to the lock. Similarly, each time a lock is released, a message is sent to the other entities. Once the lock is released, the requesters all retransmit their requests in an attempt to gain ownership of the lock, and the distribution of ownership of the lock may not be performed in a fair manner. [0024]
  • Turning now to FIG. 3, a memory structure for implementing a lock that overcomes the disadvantages of known lock implementations is shown. In the exemplary embodiment, the memory structure comprises an [0025] array 42 of cacheable memory locations 44, 46, 48, 50, and 52 in, for example, the host memory 28 of the processor-based device 10 or in any of the host memories in the processing systems 34, 36, or 38 in the processor-based device 32. The size of each of the memory locations 44, 46, 48, 50, and 52 corresponds to a size of a cache line. In any particular embodiment, the size of the cache line will be dependent on the cache architecture of the processors 12, 14, 16, and 18 in the processor-based device. Thus, for instance, the size of each of the memory locations 44, 46, 48, 50, and 52 may be one of 32 bytes, 64 bytes, 128 bytes, etc.
  • In the exemplary embodiment, only the first quadword of each [0026] memory location 44, 46, 48, 50, and 52 contains data used in the lock acquisition and release scheme of the present invention. These quadwords are represented in FIG. 3 as the fields 54, 56, 58, 60, and 62. The remainder of the bits in the memory locations 44, 46, 48, 50, and 52 may be padded with, for instance, “0's,” to fill out the cache line. In other embodiments, more or fewer bits may be used as may be appropriate.
  • In FIG. 3, a total of five [0027] memory locations 44, 46, 48, 50, and 52 are shown in the array 42, although different embodiments may employ a different number of memory locations, as will be explained below. The memory locations include an Acquire location 44 and four Waiter locations 46, 48, 50, and 52. The Acquire location 44 stores data in the field 54 which is representative of a token that allows a requester to acquire ownership of the lock. The value of the token retrieved by a requester also establishes the order in which the requester will acquire ownership of the lock. For instance, each time a requester attempts to acquire the lock, the requester first retrieves a value of a token from the Acquire location 44 and then increments the value of the token stored at the Acquire location 44. Thus, each successive requester attempting to acquire the lock retrieves a token having a value that is sequential to the value of the token retrieved by the immediately preceding requester. Ownership of the lock is passed to a requester based on the value of the requester's token.
  • Because the size of the [0028] Acquire memory location 44 spans a cache line and because the Acquire memory location 44 contains only one variable which is stored in the first quadword (i.e., the field 54), the amount of data that must be transmitted to the other requesters when the token value is altered is minimal because only the first quadword can include any change.
  • The [0029] Waiter locations 46, 48, 50, and 52 also include data in only the first quadword or field 56, 58, 60, and 62, respectively. The data in the first quadword indicates whether a requester that has been assigned to the particular Waiter location may acquire ownership of the lock, as will be described in detail below. The number of Waiter locations 46, 48, 50, and 52 in any array 42 corresponds to at least the number of requesters who have access to the shared resource associated with the lock. In the exemplary embodiment, to facilitate assignment of Waiter locations to each requester, the number of Waiter locations is equal to the number of requesters rounded up to the closest power of two. Thus, in a processor-based device in which three requesters may contend for ownership of the lock, four (i.e., 2²) Waiter locations 46, 48, 50, and 52 are provided in the array 42. Similarly, in a processor-based device having four requesters, four Waiter locations also are provided. Further, in a processor-based device having five to eight requesters, eight (i.e., 2³) Waiter locations are provided in the array 42, and so forth.
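The array layout described above can be sketched in C as follows. The 64-byte line size, the use of C11 atomics, and the names cache_line_slot and lock_array are illustrative assumptions of this sketch, not elements of the disclosure:

```c
#include <stdint.h>
#include <stdatomic.h>

#define CACHE_LINE 64   /* assumed; the patent notes 32, 64, 128 bytes, etc. */
#define NUM_REQUESTERS 3
#define NUM_WAITERS 4   /* three requesters rounded up to the next power of two */

/* One array entry: only the first quadword carries data; the remainder of
 * the cache line is padding (e.g., zero fill, as suggested in [0026]). */
typedef struct {
    _Atomic uint64_t value;
    uint8_t pad[CACHE_LINE - sizeof(_Atomic uint64_t)];
} cache_line_slot;

/* The FIG. 3 array: one Acquire location followed by the Waiter locations. */
typedef struct {
    cache_line_slot acquire;              /* token dispenser (field 54)  */
    cache_line_slot waiter[NUM_WAITERS];  /* spin slots (fields 56-62)   */
} lock_array;
```

Because each slot spans exactly one cache line, a write to any one field invalidates only the single line cached by the requester assigned to that slot.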
  • A [0030] particular Waiter location 46, 48, 50, or 52 is assigned to a requester based on the value of the token retrieved by that requester from the field 54 in the Acquire memory location 44. In the exemplary embodiment, each Waiter location can be identified by an identifier, such as a line number, and the identifier of the assigned Waiter location can be extracted from the value of the retrieved token as described below. For instance, as previously discussed, the contents of the field 54 in the Acquire location 44 are incremented each time a token is retrieved by a requester. However, because the number of bits in the field 54 bear no relationship to the number of Waiter locations, the token value in field 54 does not directly correspond to a line number of a Waiter location.
  • This problem may be overcome by ensuring that the number of Waiter locations corresponds to a power of two. Thus, the line number of the Waiter location can be extracted from the retrieved token value by combining an appropriate mask with the contents of the [0031] field 54. For instance, in an array 42 having four Waiter locations 46, 48, 50, and 52, the token value always must correspond to one of four different memory locations. Thus, only the two lower bits of the field 54 need be used to maintain a correspondence between the token value and the number of Waiter locations. To extract the two lowest bits of the field 54, a mask having all “0's” except for the two lowest bits, which are “1's”, can be combined, using a bit-wise AND operation, with the contents of the field 54. Similarly, in an array 42 having eight Waiter locations, a mask can be combined with the quadword 54 in a manner which extracts the three lowest bits.
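The mask operation described above reduces to a single bit-wise AND with NUM_WAITERS − 1. The function name waiter_index and the four-waiter configuration are assumed for illustration:

```c
#include <stdint.h>

#define NUM_WAITERS 4  /* must be a power of two */

/* Extract the Waiter line number from a token value: for four waiters the
 * mask NUM_WAITERS - 1 is binary 11, keeping only the two lowest bits. */
static inline unsigned waiter_index(uint64_t token) {
    return (unsigned)(token & (NUM_WAITERS - 1u));
}
```

Tokens 0 through 3 map to Waiter lines 0 through 3, and token 4 wraps back to line 0, so successive requesters cycle fairly through the available slots.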
  • Having been assigned a [0032] Waiter memory location 46, 48, 50, or 52 corresponding to the value of the retrieved token, the requester then waits until the contents of the assigned Waiter location indicate that the requester may acquire ownership of the lock. For example, in one embodiment, the lock becomes available when the data stored in the field 56 of the Waiter location 46 match the value of the token that was retrieved by the requester assigned to the Waiter location 46. In other embodiments, availability of lock ownership may be indicated in other appropriate manners. However, it can be seen that by structuring the array 42 such that the number of Waiter locations corresponds to at least the number of requesters that can contend for ownership of the lock, only one requester at a time can be assigned to any particular Waiter location. Further, because each Waiter location spans a cache line, only the requester assigned to that Waiter location can have a cache line that may be affected by a change in the contents of the Waiter location. Thus, when the data in the field 56 is altered to indicate that ownership of the lock is available, only the requester assigned to the Waiter location 46 is informed of the change, thus greatly reducing the amount of traffic on the bus. Still further, because each Waiter location contains only one variable (i.e., in the field 56, 58, 60, or 62) the amount of data that is transmitted on the bus when informing the requester of the change also is reduced.
  • Ownership of the lock may become available when a previous requester (i.e., the lock owner) releases the lock. In an exemplary embodiment, the lock owner releases the lock by altering the contents of the next sequential Waiter location (e.g., Waiter location [0033] 48) to indicate that ownership now may be acquired by the requester waiting at that Waiter location. The line number of the next sequential Waiter location can be determined by incrementing the token value that had been retrieved by the lock owner and then extracting the identifier for the Waiter location from the incremented token value. The lock owner then may alter the contents (e.g., the field 58) of the Waiter location (e.g., Waiter location 48) which corresponds to this extracted line number to indicate that lock ownership is available.
  • By implementing a token scheme in which the values of the token are sequentially incremented, and by passing ownership of the lock to the next sequential Waiter location, arbitration of ownership of the lock is performed in a fair manner. That is, in accordance with such a scheme, a requester is guaranteed to acquire ownership of the lock in the same order in which the requester originally requested the lock. [0034]
  • FIG. 4 illustrates another exemplary embodiment of an [0035] array 64 having a plurality of memory locations. Similar to the embodiment of the array 42 illustrated in FIG. 3, the array 64 in FIG. 4 includes the Acquire memory location 44, and the four Waiter memory locations 46, 48, 50, and 52. As discussed above, each of memory locations 44, 46, 48, 50, and 52 have a size that corresponds to a cache line size for the particular application in which the lock is being implemented. Further, the contents of each of the locations 44, 46, 48, 50, and 52 include only one variable which is stored in a field 54, 56, 58, 60, and 62, respectively (e.g., the first quadword of each of the memory locations).
  • In addition to the [0036] Acquire location 44 and the Waiter locations 46, 48, 50, and 52, the array 64 in FIG. 4 also includes a Release memory location 66. The Release location 66 also has a size that corresponds to a cache line size and has only one variable which is stored in a field 68 (e.g., the first quadword). The Release location 66 may be used to store a variable related to the release of the lock. For example, the field 68 may hold a value that corresponds to the identifier of the next sequential Waiter location. Again, by configuring the Release location 66 to span a cache line, traffic on the host bus is reduced whenever a requester releases ownership of the lock.
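The FIG. 4 layout can be sketched by extending the FIG. 3 structure with one additional cache-line-sized slot for the Release location. As before, the names and the 64-byte line size are assumptions of this sketch:

```c
#include <stdint.h>
#include <stdatomic.h>

#define CACHE_LINE 64   /* assumed line size */
#define NUM_WAITERS 4

typedef struct {
    _Atomic uint64_t value;
    uint8_t pad[CACHE_LINE - sizeof(_Atomic uint64_t)];
} cache_line_slot;

/* FIG. 4: the FIG. 3 array plus a Release location whose first quadword
 * (field 68) holds the "Next Waiter" value between acquire and release. */
typedef struct {
    cache_line_slot acquire;              /* field 54  */
    cache_line_slot waiter[NUM_WAITERS];  /* fields 56-62 */
    cache_line_slot release;              /* field 68  */
} lock_array_v2;
```

Padding the Release location to a full cache line means that storing the "Next Waiter" value at acquire time, and reading it back at release time, again disturbs at most one cached line.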
  • FIG. 5 illustrates a flow chart of an exemplary routine for acquiring ownership of a lock that is implemented using, for instance, the memory structures shown in either of FIGS. 3 and 4, and which may be concurrently performed by multiple requesters attempting to acquire the lock. As illustrated in FIG. 5, when a current requester attempts to acquire ownership of a lock, it first disables all interrupt events (block [0037] 70). The current requester then retrieves a token from the field 54 in the Acquire memory location 44 in the array 42 or 64 and saves the retrieved value of the token (block 72). The current requester also increments the value of the token stored in the field 54 of the Acquire location 44 such that the next lock requester retrieves the next sequential value of the token (block 74). In the exemplary embodiment, the acts of retrieving and incrementing the value of the token are performed atomically, such as by executing a fetch-and-add primitive as illustrated by the dashed line around blocks 72 and 74. The atomic operation ensures that another requester does not interleave read/write cycles with the current requester between the acts of retrieving the token value and incrementing the token value. Thus, each requester will be guaranteed to retrieve a different token value and, thus, will be assigned to a different Waiter location 46, 48, 50, or 52.
  • To determine its assigned Waiter location, the current requester extracts an identifier or line number of the assigned Waiter location from the retrieved token value (block [0038] 76). In the exemplary embodiment, the current requester extracts the identifier by combining an appropriate mask (as previously described) with the retrieved token value. The current requester then examines the contents of its assigned Waiter location to determine whether ownership of the lock is available. In the exemplary embodiment, for instance, ownership of the lock is determined by comparing the contents of the Waiter location (e.g., the first quadword) with the retrieved token value (block 78). If the comparison does not result in a match, then the current requester “waits” or “spins” at the assigned Waiter location until a match results. For example, the current requester may simply keep comparing the contents of its assigned Waiter location to the value of its retrieved token until a match results. Alternatively, the current requester may simply wait for a communication informing the current requester that the contents of the assigned Waiter location have been altered. Because each Waiter location includes only one variable that can be altered, the current requester then knows that if the contents of the assigned Waiter location have been changed, then ownership of the lock must be available.
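The acquisition path of FIG. 5 (blocks 72 through 78) can be sketched with C11 atomics, where atomic_fetch_add stands in for the fetch-and-add primitive. All names, the 64-byte padding, and the initialization scheme (Waiter line 0 primed with token 0, the other lines with a sentinel no early token can match) are assumptions of this sketch; the interrupt disabling of block 70 is omitted:

```c
#include <stdint.h>
#include <stdatomic.h>

#define NUM_WAITERS 4  /* power of two */

typedef struct {
    _Atomic uint64_t value;
    uint8_t pad[64 - sizeof(_Atomic uint64_t)];  /* pad to an assumed 64-byte line */
} cache_line_slot;

typedef struct {
    cache_line_slot acquire;
    cache_line_slot waiter[NUM_WAITERS];
} lock_array;

/* Prime the array: the first token handed out is 0, so Waiter line 0 must
 * already hold 0; the other lines hold a sentinel no pending token matches. */
static void lock_init(lock_array *l) {
    atomic_init(&l->acquire.value, 0);
    atomic_init(&l->waiter[0].value, 0);
    for (unsigned i = 1; i < NUM_WAITERS; i++)
        atomic_init(&l->waiter[i].value, UINT64_MAX);
}

/* Atomically retrieve the token and increment the dispenser (blocks 72/74),
 * then spin at the assigned Waiter location until its contents match the
 * retrieved token (block 78). */
static uint64_t lock_acquire(lock_array *l) {
    uint64_t token = atomic_fetch_add(&l->acquire.value, 1);
    while (atomic_load(&l->waiter[token & (NUM_WAITERS - 1)].value) != token)
        ;  /* each spinner reads only its own cache line */
    return token;
}
```

Because the fetch and the increment occur in one atomic operation, no two requesters can retrieve the same token, so no two requesters can ever spin on the same Waiter line at the same time.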
  • When the contents of the assigned Waiter location match the value of the token retrieved and saved by the current requester, the current requester then may acquire the lock and perform lock operations on the protected region of the shared resource (block [0039] 80). In the exemplary embodiment, the current requester also increments the value of its retrieved token and stores it as a “Next Waiter” value (block 82). For instance, the Next Waiter value may be stored in the field 68 of the Release location 66. In any event, because the “Next Waiter” value is the incremented value of the current requester's token, then the “Next Waiter” value also is the same as the value of the token that was retrieved from the Acquire location 44 by the next requester after the current requester. Accordingly, the “Next Waiter” value can be used to release ownership of the lock to the next requester.
  • Turning now to FIG. 6, it illustrates a flowchart of an exemplary routine for releasing ownership of the lock to the next requester. Once the current requester has completed the operations protected by the lock (block [0040] 84), the current requester is ready to release the lock. In the exemplary embodiment illustrated, to release the lock, the current requester first determines the next requester that should receive ownership of the lock. This determination is accomplished by retrieving the “Next Waiter” value that previously was stored in, for instance, the Release location 66 (block 86). The identifier or line number corresponding to the next Waiter location can be extracted from the “Next Waiter” value by applying a mask in the manner previously discussed (block 88). Once the next Waiter location has been determined, then the current requester releases the lock by writing its stored “Next Waiter” value to the next Waiter location (i.e., to the field 58 of the Waiter location 48) (block 90) and restoring its original interrupt state (block 92).
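Combining the release path of FIG. 6 with the acquisition path of FIG. 5 yields a complete single-process sketch of the lock. For simplicity, the "Next Waiter" value is computed at release time rather than stored in the optional Release location of FIG. 4; all names remain illustrative assumptions:

```c
#include <stdint.h>
#include <stdatomic.h>

#define NUM_WAITERS 4

typedef struct {
    _Atomic uint64_t value;
    uint8_t pad[64 - sizeof(_Atomic uint64_t)];
} cache_line_slot;

typedef struct {
    cache_line_slot acquire;
    cache_line_slot waiter[NUM_WAITERS];
} lock_array;

static void lock_init(lock_array *l) {
    atomic_init(&l->acquire.value, 0);
    atomic_init(&l->waiter[0].value, 0);      /* first token (0) acquires at once */
    for (unsigned i = 1; i < NUM_WAITERS; i++)
        atomic_init(&l->waiter[i].value, UINT64_MAX);  /* sentinel */
}

static uint64_t lock_acquire(lock_array *l) {
    uint64_t token = atomic_fetch_add(&l->acquire.value, 1);   /* blocks 72/74 */
    while (atomic_load(&l->waiter[token & (NUM_WAITERS - 1)].value) != token)
        ;                                                       /* block 78 */
    return token;
}

/* FIG. 6: compute the "Next Waiter" value (block 82), extract the next
 * Waiter line with the mask (block 88), and write the value there (block 90),
 * which the next requester's spin loop observes as a match. */
static void lock_release(lock_array *l, uint64_t token) {
    uint64_t next = token + 1;
    atomic_store(&l->waiter[next & (NUM_WAITERS - 1)].value, next);
}
```

A single requester calling lock_acquire, lock_release, lock_acquire in sequence receives tokens 0 and then 1, illustrating that ownership passes to holders of strictly increasing token values in the order the tokens were dispensed.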
  • When the “Next Waiter” value has been written to the next Waiter location, either the next requester is informed that the contents of its assigned Waiter location have been altered and then can acquire the lock, or the next requester will see that ownership is available the next time it compares the contents of its assigned Waiter location to the value of its retrieved token because a match will result. In any event, release of the lock by the current requester to the next requester has been accomplished, and the next requester now becomes the lock owner. [0041]
  • It should be understood that the lock implementation described above with respect to FIGS. 5 and 6 may be implemented in software code embedded in a processor-based device, may exist as software code stored on a tangible medium such as a hard drive, a floppy disk, a CD ROM, etc., or may be implemented in silicon in the form of an application specific integrated circuit (ASIC), as well as in any other suitable manner. Further, it should be understood that although the acts illustrated in FIGS. 5 and 6 have been described in a particular order, this order may be altered and additional or different acts performed without departing from the scope and spirit of the invention. Still further, while the embodiments described above have included processor-based devices which have multiple processors, it should be understood that the invention also is applicable to a single-processor device in which multiple entities (e.g., multiple threads, software, hardware) contend for access to a shared resource. [0042]
  • Thus, it should be clear that the invention may be susceptible to various modifications and alternative forms, and that specific embodiments have been shown in the drawings and described in detail herein by way of example only. Further, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims. [0043]

Claims (31)

What is claimed is:
1. A method of arbitrating a lock on a shared resource accessible by a plurality of requesters in a processor-based device, the method comprising the acts of:
retrieving, by a first requester of the plurality of requesters, a value of a token from a first memory location, the value of the token indicating a second memory location at which the first requester can acquire ownership of a lock;
incrementing, by the first requester, the value of the token at the first memory location;
retrieving, by a second requester, the incremented value of the token at the first memory location, the incremented value of the token indicating a third memory location at which the second requester can acquire ownership of the lock; and
waiting, by the first requester, at the second memory location until contents of the second memory location indicate that ownership of the lock is available, wherein only the first requester and not the second requester is informed of the contents of the second memory location.
2. The method as recited in claim 1, comprising the acts of:
incrementing, by the second requester, the incremented value of the token at the first memory location; and
waiting, by the second requester, at the third memory location until the contents of the third memory location indicate that ownership of the lock is available.
3. The method as recited in claim 1, wherein the size of each of the first memory location, the second memory location, and the third memory location corresponds to a cache line size.
4. The method as recited in claim 1, wherein the acts of retrieving the token value and incrementing the token value by the first requester are performed atomically.
5. The method as recited in claim 1, wherein the act of waiting, by the first requester, at the second memory location comprises the acts of:
determining whether contents of the second memory location match the value of the token retrieved by the first requester from the first memory location; and
acquiring ownership of the lock if the contents of the second memory location match the value of the token retrieved by the first requester from the first memory location.
6. The method as recited in claim 5, comprising the acts of:
incrementing, by the first requester, the value of the token retrieved by the first requester when the contents of the second memory location match the value of the token, the incremented value of the retrieved token representing a next waiter value.
7. The method as recited in claim 6, comprising the acts of:
writing the next waiter value to the third memory location when the first requester releases the ownership of the lock; and
acquiring, by the second requester, ownership of the lock if the next waiter value matches the incremented value of the token retrieved by the second requester from the first memory location.
8. The method as recited in claim 7, comprising the act of writing the next waiter value to a fourth memory location, wherein the act of writing the next waiter value to the third memory location comprises the act of retrieving the next waiter value from the fourth memory location, and wherein each of the first memory location, the second memory location, the third memory location, and the fourth memory location corresponds to a cache line size.
9. The method as recited in claim 1, wherein the first requester and the second requester comprise a first processor and a second processor, respectively.
10. A method of releasing a lock on a shared resource in a processor-based device, wherein a lock owner has acquired ownership of the lock, and wherein a plurality of waiting requesters are waiting to acquire ownership of the lock, the method comprising the acts of:
assigning a memory location in an array of memory locations to each waiting requester of the plurality of waiting requesters, wherein each waiting requester waits at its respective assigned memory location to acquire ownership of the lock; and
releasing the ownership of the lock from the lock owner to the waiting requester waiting at a first assigned memory location in the array of memory locations, wherein only the waiting requester at the first assigned memory location is informed that ownership of the lock has been released by the lock owner.
11. The method as recited in claim 10, wherein the size of each assigned memory location in the array of memory locations corresponds to a cache line size.
12. The method as recited in claim 10, wherein the act of releasing the ownership of the lock comprises the acts of:
retrieving, by the lock owner, a next waiter value from a release memory location in the array of memory locations, the next waiter value corresponding to the first assigned memory location; and
writing, by the lock owner, the next waiter value to the first assigned memory location in the array of memory locations.
13. The method as recited in claim 12, wherein the size of each of the assigned memory locations and the size of the release memory location corresponds to a cache line size.
14. A memory structure to implement a lock to control access to a shared resource by a plurality of requesters in a processor-based device, the memory structure comprising:
a plurality of memory locations, the plurality of memory locations comprising:
a plurality of waiter locations, the number of the plurality of waiter locations corresponding to at least the number of the plurality of requesters having access to the shared resource, wherein the contents of each waiter location indicates whether ownership of the lock is available; and
a token location to store a token for acquiring ownership of the lock,
wherein each of the plurality of requesters attempting to acquire ownership of the lock retrieves a token from the token location,
wherein the value of the token stored at the token location is altered each time the token is retrieved,
wherein the value of the retrieved token corresponds to a particular waiter location of the plurality of waiter locations, and
wherein only the requester that retrieved the corresponding retrieved token waits at the particular waiter location to acquire ownership of the lock.
15. The memory structure as recited in claim 14, wherein the size of each of the memory locations corresponds to a cache line size.
16. The memory structure as recited in claim 14, wherein the plurality of memory locations comprises a release location, the contents of the release location corresponding to a next waiter location of the plurality of waiter locations at which a requester that retrieved a token corresponding to the next waiter location may acquire ownership of the lock.
17. The memory structure as recited in claim 16, wherein the size of each of the memory locations corresponds to a cache line size.
18. The memory structure as recited in claim 14, wherein the number of the plurality of waiter locations corresponds to the number of the plurality of requesters having access to the shared resource rounded up to the next power of two.
19. The memory structure as recited in claim 14, wherein a requester waiting at a particular waiter location may acquire ownership of the lock when the contents of the particular waiter location correspond to the value of the token retrieved by that requester from the token location.
20. The memory structure as recited in claim 14, wherein the token is retrieved from the token location and the value of the token stored at the token location is altered atomically.
21. A lock to control access to a shared resource by a plurality of requesters in a processor-based device, the lock comprising:
a plurality of memory locations, the size of each of the plurality of memory locations corresponding to a cache line size, wherein the plurality of memory locations comprises:
a plurality of waiter locations, the number of the plurality of waiter locations corresponding to at least the number of the plurality of requesters having access to the shared resource, wherein the contents of each waiter location indicates whether ownership of the lock is available; and
a token location to store a token for assigning a waiter location of the plurality of waiter locations to each requester of the plurality of requesters attempting to acquire ownership of the lock,
wherein each of the plurality of requesters attempting to acquire ownership of the lock determines whether ownership is available by examining the contents of its respective assigned waiter location.
22. The lock as recited in claim 21, wherein the value of the token stored at the token location is altered each time the token is retrieved.
23. The lock as recited in claim 22, wherein the value of the token is retrieved and altered atomically.
24. The lock as recited in claim 21, wherein ownership of the lock is available to a particular requester of the plurality of requesters when the contents of its respective assigned waiter location corresponds to the value of the token retrieved by the particular requester from the token location.
25. The lock as recited in claim 21, wherein the number of the plurality of waiter locations corresponds to the number of the plurality of requesters having access to the shared resource rounded up to the next power of two.
26. A processor-based device, comprising:
a plurality of processors;
a shared resource accessible by the plurality of processors, wherein access to the shared resource by the plurality of processors is based on ownership of a lock; and
a memory accessible by the plurality of processors, the memory comprising:
a plurality of waiter memory locations, wherein the number of the plurality of waiter memory locations corresponds to at least the number of the plurality of processors, and wherein the size of each of the waiter memory locations corresponds to a cache line size, and wherein the contents of each of the waiter memory locations indicates whether ownership of the lock is available; and
a token memory location to store a token for assigning a waiter memory location to each processor of the plurality of processors attempting to acquire ownership of the lock, wherein a particular requester may acquire ownership of the lock when the contents of its assigned waiter memory location indicate that the ownership is available.
27. The device as recited in claim 26, wherein the size of the token memory location corresponds to a cache line size.
28. The device as recited in claim 26, wherein each requester attempting to acquire ownership of the lock retrieves a token from the token memory location, and wherein the value of the token stored at the token memory location is altered each time it is retrieved.
29. The device as recited in claim 28, wherein the token is retrieved and the value of the token is altered atomically.
30. The device as recited in claim 28, wherein the contents of an assigned waiter memory location indicates that ownership of the lock is available when the contents correspond to the value of the token retrieved from the token memory location by the particular processor assigned to that assigned waiter memory location.
31. The device as recited in claim 26, wherein the number of the plurality of waiter memory locations corresponds to the number of the plurality of processors rounded up to the next power of two.
US09/966,503 2001-09-28 2001-09-28 Technique for implementing a distributed lock in a processor-based device Expired - Fee Related US6694411B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/966,503 US6694411B2 (en) 2001-09-28 2001-09-28 Technique for implementing a distributed lock in a processor-based device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/966,503 US6694411B2 (en) 2001-09-28 2001-09-28 Technique for implementing a distributed lock in a processor-based device

Publications (2)

Publication Number Publication Date
US20030065894A1 true US20030065894A1 (en) 2003-04-03
US6694411B2 US6694411B2 (en) 2004-02-17

Family

ID=25511506

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/966,503 Expired - Fee Related US6694411B2 (en) 2001-09-28 2001-09-28 Technique for implementing a distributed lock in a processor-based device

Country Status (1)

Country Link
US (1) US6694411B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150032852A1 (en) * 2002-08-06 2015-01-29 Sheng Tai (Ted) Tsao Method and System for Concurrent Web Based Multi-Task Support
US9928174B1 (en) * 2016-03-16 2018-03-27 Amazon Technologies, Inc. Consistent caching
US11336754B1 (en) * 2002-08-06 2022-05-17 Sheng Tai Tsao Method and system for concurrent web based multitasking support

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6990560B2 (en) * 2003-01-16 2006-01-24 International Business Machines Corporation Task synchronization mechanism and method
US8020166B2 (en) * 2007-01-25 2011-09-13 Hewlett-Packard Development Company, L.P. Dynamically controlling the number of busy waiters in a synchronization object


Also Published As

Publication number Publication date
US6694411B2 (en) 2004-02-17

Similar Documents

Publication Publication Date Title
US7406690B2 (en) Flow lookahead in an ordered semaphore management subsystem
US6330647B1 (en) Memory bandwidth allocation based on access count priority scheme
US6697927B2 (en) Concurrent non-blocking FIFO array
US6775727B2 (en) System and method for controlling bus arbitration during cache memory burst cycles
US6782457B2 (en) Prioritized bus request scheduling mechanism for processing devices
TWI299121B (en)
EP1247170A2 (en) Nestable reader-writer lock for multiprocessor systems
US20080112313A1 (en) Methods And Apparatus For Dynamic Redistribution Of Tokens In A Multi-Processor System
CA2556083A1 (en) Memory allocation
JPH05282165A (en) Communication system
JP2005536791A (en) Dynamic multilevel task management method and apparatus
JP3910573B2 (en) Method, system and computer software for providing contiguous memory addresses
US7000087B2 (en) Programmatically pre-selecting specific physical memory blocks to allocate to an executing application
JP4999925B2 (en) Method and apparatus for performing arbitration
US6738847B1 (en) Method for assigning a multiplicity of interrupt vectors in a symmetric multi-processor computing environment
US20030002440A1 (en) Ordered semaphore management subsystem
US7529844B2 (en) Multiprocessing systems employing hierarchical spin locks
US6694411B2 (en) Technique for implementing a distributed lock in a processor-based device
US6895454B2 (en) Method and apparatus for sharing resources between different queue types
JP5123215B2 (en) Cache locking without interference from normal allocation
US6701429B1 (en) System and method of start-up in efficient way for multi-processor systems based on returned identification information read from pre-determined memory location
US7028116B2 (en) Enhancement of transaction order queue
JP2001229058A (en) Data base server processing method
JP2004062910A (en) Method for realizing semaphore to multi-core processor and controlling access to common resource
US20030140189A1 (en) Method and apparatus for resource sharing in a multi-processor system

Legal Events

Date Code Title Description
AS Assignment

Owner name: COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BONOLA, THOMAS J.;REEL/FRAME:012218/0714

Effective date: 20010928

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:COMPAQ INFORMATION TECHNOLOGIES GROUP, L.P.;REEL/FRAME:017606/0051

Effective date: 20021001

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20160217