US20090259696A1 - Node Synchronization for Multi-Processor Computer Systems - Google Patents
- Publication number
- US20090259696A1 (U.S. application Ser. No. 12/330,413)
- Authority
- US
- United States
- Prior art keywords
- node
- nodes
- memory
- variable
- remote
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
Definitions
- the invention generally relates to multi-processor computer systems and, more particularly, the invention relates to distributed shared-memory computer systems.
- Large-scale shared memory multi-processor computer systems typically have a large number of processing nodes (e.g., with one or more microprocessors and local memory) that cooperate to perform a common task.
- Such systems often use some type of synchronization construct (e.g., barrier variables or spin locks) to ensure that all executing threads maintain certain program invariants.
- such computer systems may have some number of nodes that cooperate to multiply a large matrix.
- Such computer systems typically divide the task into discrete parts that each are executed by one of the nodes. All of the nodes are synchronized (e.g., when using barrier variables), however, so that they concurrently execute their corresponding steps of the task. Accordingly, such computer systems do not permit any of the nodes to begin executing a subsequent step until all of the other nodes have completed their prior corresponding step.
- To maintain synchronization among nodes, many such computer systems use a specialized variable known in the art as a “synchronization variable.” Specifically, each time a node accesses the memory of some other node (referred to as the “home node”) or its own memory (the accessing node thus also is the home node in such case), the home node synchronization variable changes in a predetermined manner (e.g., the synchronization variable may be incremented). Some time thereafter, the home node transmits the changed synchronization variable to requesting system nodes, either automatically or in response to requests from the remote nodes.
- Upon receipt, each remote node determines if the changed synchronization variable satisfies some test condition (e.g., whether the synchronization variable equals a predetermined test variable). If satisfied, all remote nodes can continue to the next step of the task. Conversely, if not satisfied, the remote nodes must wait until they subsequently receive a changed synchronization variable that satisfies the test condition. To receive the changed synchronization variable, however, the remote nodes continue to poll the home node.
- Undesirably, these repeated multidirectional transmissions and corresponding coherence operations can create a network hotspot at the home node because, among other reasons, the request rate typically is much higher than the home node's service rate. Compounding this problem, the total number of repeated transmissions and remote node requests increases as the number of nodes in large-scale shared memory multi-processor computer systems increases. Such repeated transmissions/requests thus can congest data transmission paths, consequently degrading system performance.
- a method and apparatus for controlling access by a set of accessing nodes to memory of a home node determines that each node in the set of nodes has accessed the memory, and forwards a completion message to each node in the set of nodes after it is determined that each node has accessed the memory.
- the completion message has data indicating that each node in the set of nodes has accessed the memory of the home node.
- the method and apparatus determine node access by setting a synchronization variable to an initial value, and updating the synchronization variable each time one of the set of nodes accesses the memory of the home node. After updating the synchronization variable, the method and apparatus determine if it satisfies a relationship with a test variable. The method and apparatus may determine that the relationship is satisfied before forwarding the completion message. The synchronization variable may be considered to satisfy the relationship when both variables have equal values.
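The counting logic just described can be sketched in a few lines. This is a simplified software model for illustration only; the class and method names (`SyncVariable`, `record_access`, `satisfied`) are hypothetical, not from the patent:

```python
class SyncVariable:
    """Simplified model of the home node's synchronization-variable logic."""

    def __init__(self, num_accessing_nodes):
        self.sync_var = 0                     # synchronization variable, initial value
        self.test_var = num_accessing_nodes   # predetermined test variable

    def record_access(self):
        """Update the synchronization variable on each memory access,
        then check its relationship with the test variable."""
        self.sync_var += 1
        return self.satisfied()

    def satisfied(self):
        """Here the relationship is satisfied when both variables are equal."""
        return self.sync_var == self.test_var
```

With three accessing nodes, `record_access()` returns `False` for the first two accesses and `True` for the third, the point at which the completion message would be forwarded.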
- Among other things, the completion message may be broadcast to each accessing node in the set.
- each accessing node in the set of nodes illustratively is synchronized to execute a set of steps of a common process. Each accessing node does not execute a subsequent step in the common process, however, until receipt of the completion message.
- the method and apparatus also may detect that each node in the set of accessing nodes is to access the memory of the home node.
- an apparatus for controlling access by a set of accessing nodes to memory of a home node has control logic (operatively coupled with the memory of the home node) for determining if each node in the set of nodes has accessed the memory, and a message generator for generating a completion message having data indicating that each node in the set of nodes has accessed the memory of the home node.
- the apparatus also has an interface (operatively coupled with the message generator) for forwarding the completion message to each node in the set of nodes after it is determined that each node has accessed the memory.
- Illustrative embodiments of the invention are implemented as a computer program product having a computer usable medium with computer readable program code thereon.
- the computer readable code may be read and utilized by a computer system in accordance with conventional processes.
- FIG. 1 schematically shows nodes of a multi-processor/multi-node computer system that can be configured in accordance with illustrative embodiments of the invention.
- FIG. 2 schematically shows a memory controller configured in accordance with illustrative embodiments of the invention.
- FIG. 3 shows a first process for managing memory access in accordance with illustrative embodiments of the invention.
- FIG. 4 shows a second process for managing memory access in accordance with illustrative embodiments of the invention.
- a multi-node computer system has a memory controller that broadcasts a single completion message after all remote nodes have accessed home node memory. Upon receipt of the completion message, the remote nodes may proceed to the next step in a jointly executed task/process.
- This technique thus eliminates the need for the remote nodes to repeatedly poll the home node while it is servicing the access requests. Accordingly, such a process should minimize data traffic congestion, consequently improving system performance. Details of various embodiments are discussed below.
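The push-based scheme can be contrasted with polling in a small model. The `HomeNode` and `RemoteNode` classes below are illustrative assumptions, sketching how a single completion broadcast replaces repeated remote polling of the home node:

```python
class RemoteNode:
    def __init__(self, name):
        self.name = name
        self.local_barrier = 0          # barrier variable cached locally

    def receive_completion(self, value):
        # Pushed by the home node; the remote spins on its own cache,
        # so no polling traffic reaches the home node in the meantime.
        self.local_barrier = value


class HomeNode:
    def __init__(self, remotes):
        self.remotes = remotes
        self.accessed = set()

    def service_access(self, remote):
        """Record one remote's memory access; broadcast once all have accessed."""
        self.accessed.add(remote.name)
        if len(self.accessed) == len(self.remotes):
            for r in self.remotes:      # single completion broadcast
                r.receive_completion(len(self.remotes))
```

In this sketch, remote nodes see their cached barrier variable change only when the home node pushes the completion value, mirroring the congestion-avoiding behavior described above.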
- FIG. 1 schematically shows three nodes 10 A- 10 C of a multi-processor/multi-node computer system 12 that can be configured in accordance with illustrative embodiments of the invention.
- the nodes 10 A- 10 C respectively are identified as node 10 A, node 10 B, and node 10 C and illustratively have the same general components.
- each node 10 A- 10 C has a plurality of components coordinated by a HUB chip 14 .
- the HUB chip 14 is a gate array chip customized to perform a number of functions, including those discussed below with regard to FIGS. 3 and 4 .
- the HUB chip 14 also may include a microprocessor instead of, or in addition to, the gate arrays.
- the components coupled with the HUB chip 14 include one or more microprocessors 16 for generating data words (among other things), memory 18 for storing data, and an I/O interface 20 for communicating with devices that are external to the computer system 12 .
- the components also include an interconnect 22 to other nodes in the computer system 12 .
- the HUB implements a memory controller 24 that efficiently synchronizes remote node access to the home node memory 18 . Details are discussed below.
- the microprocessors 16 include two 4X-ITANIUM microprocessors (distributed by Intel Corporation of Santa Clara, Calif.) that generate 128 bit words for storage in a plurality of dual in-line memory modules (shown schematically as memory 18 in FIG. 1 ).
- Each DIMM illustratively has eighteen X4-type random access memory chips (e.g., DRAM chips) for storing data generated by the microprocessors 16 , and is connected to one of four 72 bit buses (not shown).
- the HUB chip 14 may transfer 72 bits of data across each bus per clock cycle.
- the buses illustratively operate independently and transmit data in a synchronized manner.
- the microprocessors 16 on the three nodes 10 A- 10 C cooperate to perform a common task. For example, at least one of the microprocessors 16 on each of the nodes 10 A- 10 C may share responsibilities with those on other nodes 10 A- 10 C for multiplying a complex matrix. To that end, certain data to be processed may be located on one of the nodes 10 A- 10 C and thus, must be accessed by the other two nodes 10 A- 10 C to complete their operation. Continuing with the above example, node 10 A may have data that nodes 10 B, 10 C must retrieve and process.
- node 10 A is considered to be the “home node 10 A,” while nodes 10 B, 10 C are considered to be the “remote nodes 10 B, 10 C.” It should be noted, however, that discussion of these three specific nodes 10 A- 10 C is exemplary and thus, not intended to limit all aspects of the invention. Accordingly, this discussion applies to multi-node computer systems 12 having more nodes (e.g., hundreds of nodes) or fewer nodes.
- FIG. 2 shows a memory controller 24 configured to control home node memory access in a manner that minimizes data traffic congestion within the computer system 12 .
- each node may have a memory controller 24 .
- the memory controller 24 has control logic 26 for tracking memory access by the remote nodes 10 B, 10 C, a message generator 28 for producing and managing messages forwarded within the computer system 12 , and a synchronization variable module 30 for controlling synchronization variables.
- Various embodiments may implement barrier variables, spin locks, and other types of synchronization constructs or variables. For simplicity, barrier variables are discussed below as an exemplary implementation. Those in the art should understand, however, that other types of synchronization constructs may be used.
- the discussed synchronization variable module 30 has an initializing module 32 for initializing a barrier variable, a variable processor 34 for controlling the value of the barrier variable, and a comparator 36 for comparing the barrier variable to a test variable.
- FIGS. 3 and 4 discuss the cooperation of these and other components in greater detail.
- the memory controller 24 has a number of other components that are not shown in the figures. Their omission, however, should not be considered to suggest that illustrative embodiments do not use them. Moreover, other functional modules may perform similar functionality to execute various embodiments of the invention. The functional modules discussed in the figures therefore merely are intended to illustrate an embodiment of the invention and thus, not intended to limit all aspects of the invention.
- FIG. 3 shows a first process for managing memory access in accordance with illustrative embodiments of the invention. Unlike the process shown in FIG. 4 , this process uses well known barrier variables to control memory access. Specifically, as known by those skilled in the art, a barrier variable ensures that no node in a group of cooperating nodes 10 A- 10 C advances beyond a specified synchronization point until all processes of a given task have reached that point. In illustrative embodiments, the barrier variable is a 32-bit word.
- some set of nodes 10 A- 10 C are designated to concurrently execute a given process, such as multiplying a complex matrix.
- an application program executing on the computer system 12 may negotiate with the operating system to request specified resources, such as the total number of nodes 10 A- 10 C or microprocessors 16 required to complete the task.
- the application program may request that four microprocessors 16 execute a given task.
- the operating system responsively may designate certain microprocessors 16 on specific nodes 10 A- 10 C to execute the task. All nodes 10 A- 10 C maintain a record of the nodes 10 /microprocessors 16 designated for various tasks. As discussed below, this data enables various steps of the process of FIG. 3 .
- the operating system may designate any number of microprocessors 16 on a given node 10 to a single task. In some embodiments, however, different microprocessors 16 on a single node may be dedicated to separate tasks.
- the memory controllers 24 on a given home node 10 A thus may service disparate requests from multiple microprocessors 16 that are on the same remote node 10 B, 10 C, but executing different processes.
- the process of FIG. 3 begins at step 300 , in which the initializing module 32 at the home node 10 A initializes its barrier variable and a “test variable.” For example, the process may set the barrier variable to a value of zero, and the test variable to a value equaling the total number of remote nodes 10 B, 10 C that require home node memory access.
- the processes of FIGS. 3 and 4 are discussed as having nodes with one microprocessor only. Of course, as noted above, principles of various embodiments apply to systems having nodes with multiple microprocessors 16 that execute separate tasks, or the same task.
- not all designated remote nodes 10 B, 10 C access the memory 18 of the home node 10 A for each step of the process. For example, in one step of a given task, only remote node 10 B may access home node 10 A. In that case, the process sets the test variable for this step to a value of one. In a subsequent step of the same task, however, both remote nodes 10 B, 10 C may require access to the home node 10 A. Accordingly, the test variable for that step may be set to a value of two. In those embodiments, the application program may forward data indicating the total number of remote nodes 10 B, 10 C requiring access during a given step. The memory controller 24 therefore sets the test variable upon receipt of this data.
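As a concrete sketch of the per-step initialization just described (the function name and data layout are illustrative, not from the patent):

```python
def init_step(accessing_nodes):
    """Initialize the barrier variable and set the test variable to the number
    of remote nodes requiring home-node memory access during this step, as
    forwarded by the application program."""
    barrier_variable = 0
    test_variable = len(accessing_nodes)
    return barrier_variable, test_variable

# Example mirroring the text: in one step only remote node 10B accesses the
# home node (test variable = 1); in a subsequent step both 10B and 10C require
# access (test variable = 2).
step_one = init_step(["10B"])
step_two = init_step(["10B", "10C"])
```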
- all of the remote nodes 10 B, 10 C also initialize respective local barrier variables that are stored in their local caches. Rather than repeatedly polling the home node 10 A, however, each of the remote nodes 10 B, 10 C repeatedly poll their local cache having their local barrier variables. Each remote node 10 B and 10 C therefore spins locally on its own cache. As discussed below, the local barrier variables may be updated only upon receipt of a message (from the home node 10 A) requiring an update. Moreover, as also discussed below, the remote nodes 10 B and 10 C spin locally until they receive a barrier variable meeting a prescribed test condition.
- the home node 10 A may retrieve the data (required by the remote nodes 10 B, 10 C) from the DRAM chips for storage in its local update cache. This transfer should facilitate access to that data, while improving system speed. Moreover, the home node 10 A also may generate a record of qualities of the data in its local update cache. Among other things, the record may indicate the rights/permissions that various remote nodes 10 B, 10 C have to the home node cache (e.g., read only, write and read, etc . . . ), and the current state of that cache line. The home node 10 A maintains and updates this record throughout the process.
- step 302 determines if any remote nodes 10 B, 10 C are attempting to access memory 18 on the home node 10 A.
- many nodes 10 may forward request messages requesting access to the home node memory data.
- the control logic 26 stores each received request message in a first in-first out queue (a “FIFO”), thus processing each request message in the order received. If the queue is full and it receives a request message from a given remote node 10 B, 10 C, the home node 10 A may drop that request message and forward a retry message to the given remote node 10 B, 10 C. Upon receipt of the retry message, the given remote node 10 B, 10 C again will forward a request message to the home node 10 A.
- the home node 10 A serially services each request message from the FIFO. To that end, the home node 10 A may forward a copy of the data in its local cache to the remote node 10 B, 10 C currently being serviced. As noted above, that remote node 10 B, 10 C may have the rights to modify that data, and overwrite the data currently stored in the home node cache. Alternatively, that remote node 10 B, 10 C may have read access rights only.
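A bounded FIFO with retry-on-full, as just described, might be modeled like this (a sketch only; the patent does not specify queue depth or message formats, and the names here are invented):

```python
from collections import deque


class RequestQueue:
    """Bounded FIFO of request messages at the home node's control logic."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.fifo = deque()

    def enqueue(self, request):
        if len(self.fifo) >= self.capacity:
            return "retry"              # queue full: sender must resend its request
        self.fifo.append(request)
        return "queued"

    def service_next(self):
        """Serve request messages strictly in arrival order."""
        return self.fifo.popleft() if self.fifo else None
```

A remote node receiving `"retry"` would simply forward its request message again, as the text describes.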
- After servicing an access, the process continues to step 304 , in which the variable processor 34 changes the barrier variable in some prescribed manner.
- In illustrative embodiments, the variable processor 34 increments the barrier variable by one.
- Incrementing the barrier variable by one, however, is but one way of modifying it.
- alternative embodiments may multiply the barrier variable by a given constant, or use it as a variable within some pre-specified function. Some embodiments, however, permit other nodes to access the barrier variable. In those cases, the home node 10 A may perform coherence operations prior to changing the barrier variable.
- the comparator 36 determines at step 306 if the barrier variable satisfies some prescribed relationship with the test variable. To that end, in illustrative embodiments, the comparator 36 determines if the barrier variable is equal to the test variable. Although a simple comparison is discussed, alternative embodiments may further process the barrier and test variables to determine if they satisfy some prespecified relationship.
- If step 306 determines that the barrier variable does not equal the test variable, the process loops back to step 302 to await further access requests.
- Conversely, if step 306 determines that the barrier variable equals the test variable, the process continues to step 308 , which generates and broadcasts/forwards a completion message to each of the remote nodes 10 B, 10 C in the computer system 12 .
- the message generator 28 generates the completion message, and issues the broadcast message through its interface to the interconnect 22 with the other nodes 10 .
- In some embodiments, rather than broadcasting the message, the home node 10 A maintains a record of all remote nodes 10 B, 10 C attempting access. To reduce data traffic, such embodiments forward the completion message only to those remote nodes 10 B, 10 C recorded as attempting to access the home node memory 18 .
- the completion message includes data that, when read by the remote nodes 10 B, 10 C, indicates that all specified remote nodes 10 B, 10 C have completed their access of the home node memory 18 . Accordingly, among other data, the completion message may include the barrier variable incremented to its maximum specified value, and a “put” request that causes receiving remote nodes 10 B, 10 C to overwrite their local barrier variables with the barrier variable in the message. Upon receiving this data, each remote node 10 B, 10 C updates the barrier variable within its local cache; the memory controller 24 therefore is considered to push the barrier variable to the remote nodes 10 B and 10 C. During its next polling cycle, the remote node 10 B, 10 C detects this maximum barrier variable, and thus is free to begin executing the next step in the process. In other words, receipt of the completion message eliminates the barrier preventing the remote node 10 B, 10 C from executing its next step.
- This process therefore issues update messages (i.e., the completion messages) that synchronize multiple nodes 10 A- 10 C while they each perform atomic operations on specified data. Accordingly, network hotspots are minimized because fewer barrier variable requests and broadcasts are transmitted between nodes 10 A- 10 C.
- Although barrier variables are discussed above, other means may be used to implement various embodiments of the invention.
- various embodiments may be implemented by using spin locks.
- FIG. 4 shows one such exemplary process.
- a spinlock ensures atomic access to data or code protected by a lock.
- the process begins at step 400 , in which the home node 10 A determines which remote nodes 10 B, 10 C will have access to its memory 18 . This may be executed in a manner similar to step 300 of FIG. 3 , in which the operating system and application program negotiate resources for a given task.
- the home node 10 A then forwards a lock (a data word of a specified size) to the first remote node 10 B, 10 C that will access its memory 18 (step 402 ).
- the message generator 28 forwards the lock to the next remote node 10 B, 10 C that will access the home node memory 18 .
- the message generator 28 may forward a lock message (to the remote node 10 B, 10 C currently having the lock) requiring that the currently accessing remote node 10 B, 10 C forward the lock to the next remote node 10 B, 10 C.
- the lock message may include the address of the next remote node 10 B, 10 C, as well as the lock itself.
- the home node 10 A may forward the lock to the next remote node 10 B, 10 C, while affirmatively requiring the current remote node 10 B, 10 C to stop processing. In either case, the home node 10 A may maintain a record of the remote node 10 B, 10 C having the lock. Accordingly, upon receipt of access requests from any number of remote nodes 10 B, 10 C, the home node 10 A will only permit access by the remote node 10 B, 10 C recorded as having a lock.
- the home node 10 A determines at step 404 if the next remote node 10 B, 10 C is the last remote node 10 B, 10 C to access its memory 18 . To that end, the home node 10 A may determine if a pointer to a list having the remote nodes 10 B, 10 C has reached a terminal variable. If the next remote node 10 B, 10 C is not the last remote node 10 B, 10 C, then the process loops back to step 402 , in which the home node 10 A provides the lock to the next remote node 10 B, 10 C.
- If the home node 10 A determines at step 404 that the next remote node 10 B, 10 C is the last node, the process continues to step 406 , in which the message generator 28 broadcasts a completion message to the remote nodes 10 B, 10 C in a manner similar to that discussed above. Rather than having a barrier variable, however, the broadcast message simply has prespecified data that, when received by the remote nodes 10 B, 10 C, enables them to begin executing the next step of the common process.
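The FIG. 4 flow can be sketched as a serial token pass followed by one broadcast. This model is a hypothetical simplification (the function name and return values are invented); it captures only the ordering guarantee that the lock confers:

```python
def run_spinlock_round(remote_nodes):
    """Pass the lock serially through the accessing nodes (steps 402/404),
    then broadcast a completion message to all of them (step 406)."""
    access_order = []
    lock_holder = None
    for node in remote_nodes:
        lock_holder = node         # home node records the current lock holder;
        access_order.append(node)  # only that node is permitted memory access
    notified = list(remote_nodes)  # after the last node: one completion broadcast
    return access_order, lock_holder, notified
```

Because the lock travels down the list one node at a time, accesses occur in list order, and only a single broadcast follows the terminal check.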
- the process of FIG. 4 also minimizes network hotspots, thus optimizing computer system performance.
- embodiments of the invention may be implemented at least in part in any conventional computer programming language. For example, some embodiments may be implemented in a procedural programming language (e.g., “C”), or in an object oriented programming language (e.g., “C++”). Other embodiments of the invention may be implemented as preprogrammed hardware elements (e.g., application specific integrated circuits, FPGAs, and digital signal processors), or other related components.
- the disclosed apparatus and methods may be implemented as a computer program product for use with a computer system.
- Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium.
- the medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., WIFI, microwave, infrared or other transmission techniques).
- the series of computer instructions can embody all or part of the functionality previously described herein with respect to the system.
- Such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems.
- such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies.
- such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web).
- some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software.
Description
- This patent application is a continuation application of U.S. patent application Ser. No. 11/113,805, filed Apr. 25, 2005 entitled, “NODE SYNCHRONIZATION FOR MULTI-PROCESSOR COMPUTER SYSTEMS,” and naming Dr. John Carter, Randall S. Passint, Donglai Dai, Zhen Fang, Lixin Zhang and Gregory M. Thorson as inventors, attorney docket number 2839/106, the disclosure of which is incorporated herein, in its entirety, by reference.
- The invention generally relates to multi-processor computer systems and, more particularly, the invention relates to distributed shared-memory computer systems.
- Large-scale shared memory multi-processor computer systems typically have a large number of processing nodes (e.g., with one or more microprocessors and local memory) that cooperate to perform a common task. Such systems often use some type of synchronization construct (e.g., barrier variables or spin locks) to ensure that all executing threads maintain certain program invariants. For example, such computer systems may have some number of nodes that cooperate to multiply a large matrix. To do this in a rapid and efficient manner, such computer systems typically divide the task into discrete parts that each are executed by one of the nodes. All of the nodes are synchronized (e.g., when using barrier variables), however, so that they concurrently execute their corresponding steps of the task. Accordingly, such computer systems do not permit any of the nodes to begin executing a subsequent step until all of the other nodes have completed their prior corresponding step.
- To maintain synchronization among nodes, many such computer systems often use a specialized variable known in the art as a “synchronization variable.” Specifically, each time a node accesses the memory of some other node (referred to as the “home node”) or its own memory (the accessing node thus also is the home node in such case), the home node synchronization variable changes in a predetermined manner (e.g., the synchronization variable may be incremented). Some time thereafter, the home node transmits the changed synchronization variable to requesting system nodes (either automatically or in response to requests from the remote nodes). This transmission may be in response to a request by the remote nodes. Upon receipt, each remote node determines if the incremented synchronization variable satisfies some test condition (e.g., they determine if the synchronization variable equals a predetermined test variable). If satisfied, then all remote nodes can continue to the next step of the task. Conversely, if not satisfied, then the remote nodes must wait until they subsequently receive a changed synchronization variable that satisfies the test condition. To receive the changed synchronization variable, however, the remote nodes continue to poll the home node.
- Undesirably, these repeated multidirectional transmissions and corresponding coherence operations can create a network hotspot at the home node because, among other reasons, the request rate typically is much higher than its service rate. Compounding this problem, the total number of repeated transmissions and remote node requests increases as the number of nodes in large-scale shared memory multi-processor computer systems increases. Such repeated transmissions/requests thus can congest data transmission paths, consequently degrading system performance.
- In accordance with one aspect of the invention, a method and apparatus for controlling access by a set of accessing nodes to memory of a home node (in a multinode computer system) determines that each node in the set of nodes has accessed the memory, and forwards a completion message to each node in the set of nodes after it is determined that each node has accessed the memory. The completion message has data indicating that each node in the set of nodes has accessed the memory of the home node.
- In illustrative embodiments, the method and apparatus determine node access by setting a synchronization variable to an initial value, and updating the synchronization variable each time one of the set of nodes accesses the memory of the home node. After updating the synchronization variable, the method and apparatus determine if it satisfies a relationship with a test variable. The method and apparatus may determine that the relationship is satisfied before forwarding the completion message. The synchronization variable may be considered to satisfy the relationship when both variables have equal values.
- Among other things, the completion message may be broadcasted to each accessing node in the set. In addition, each accessing node in the set of nodes illustratively is synchronized to execute a set of steps of a common process. Each accessing node does not execute a subsequent step in the common process, however, until receipt of the completion message. The method and apparatus also may detect that each node in the set of accessing nodes is to access the memory of the home node.
- In accordance with another aspect of the invention, an apparatus for controlling access by a set of accessing nodes to memory of a home node has control logic (operatively coupled with the memory of the home node) for determining if each node in the set of nodes has accessed the memory, and a message generator for generating a completion message having data indicating that each node in the set of nodes has accessed the memory of the home node. The apparatus also has an interface (operatively coupled with the message generator) for forwarding the completion message to each node in the set of nodes after it is determined that each node has accessed the memory.
- Illustrative embodiments of the invention are implemented as a computer program product having a computer usable medium with computer readable program code thereon. The computer readable code may be read and utilized by a computer system in accordance with conventional processes.
- The foregoing features and advantages of the invention will be appreciated more fully from the following further description thereof with reference to the accompanying drawings wherein:
- FIG. 1 schematically shows nodes of a multi-processor/multi-node computer system that can be configured in accordance with illustrative embodiments of the invention.
- FIG. 2 schematically shows a memory controller configured in accordance with illustrative embodiments of the invention.
- FIG. 3 shows a first process for managing memory access in accordance with illustrative embodiments of the invention.
- FIG. 4 shows a second process for managing memory access in accordance with illustrative embodiments of the invention.
- In illustrative embodiments, a multi-node computer system has a memory controller that broadcasts a single completion message after all remote nodes have accessed home node memory. Upon receipt of the completion message, the remote nodes may proceed to the next step in a jointly executed task/process. This technique thus eliminates the need for the remote nodes to repeatedly poll the home node while it is servicing the access requests. Accordingly, such a process should minimize data traffic congestion, consequently improving system performance. Details of various embodiments are discussed below.
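As a rough C sketch of the single-broadcast technique just described (the names, the node count, and the array standing in for a network send are all illustrative assumptions):

```c
#include <assert.h>  /* for the usage checks below */

#define NUM_REMOTE_NODES 2  /* assumed set of remote nodes */

int completion_sent[NUM_REMOTE_NODES];  /* stand-in for a network send */

/* Send one completion message to every remote node. */
void broadcast_completion(void) {
    for (int node = 0; node < NUM_REMOTE_NODES; node++)
        completion_sent[node] = 1;
}

/* Service each pending access; broadcast exactly once, after the last
 * access completes.  Returns the broadcast count for inspection. */
int service_accesses(int num_accesses_expected) {
    int broadcasts = 0;
    for (int serviced = 1; serviced <= num_accesses_expected; serviced++) {
        /* ...one remote access to home node memory serviced here... */
        if (serviced == num_accesses_expected) {
            broadcast_completion();
            broadcasts++;
        }
    }
    return broadcasts;
}
```

Because the home node pushes this single message, the remote nodes have no reason to poll while their peers' accesses are still being serviced.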
- FIG. 1 schematically shows three nodes 10A-10C of a multi-processor/multi-node computer system 12 that can be configured in accordance with illustrative embodiments of the invention. The nodes 10A-10C respectively are identified as node 10A, node 10B, and node 10C and illustratively have the same general components. Specifically, each node 10A-10C has a plurality of components coordinated by a HUB chip 14. In illustrative embodiments, the HUB chip 14 is a gate array chip customized to perform a number of functions, including those discussed below with regard to FIGS. 3 and 4. The HUB chip 14 also may include a microprocessor instead of, or in addition to, the gate arrays.
- The components coupled with the HUB chip 14 include one or more microprocessors 16 for generating data words (among other things), memory 18 for storing data, and an I/O interface 20 for communicating with devices that are external to the computer system 12. In addition, the components also include an interconnect 22 to other nodes in the computer system 12. In illustrative embodiments, the HUB implements a memory controller 24 that efficiently synchronizes remote node access to the home node memory 18. Details are discussed below. - In one
exemplary system 12, the microprocessors 16 include two 4X-ITANIUM microprocessors (distributed by Intel Corporation of Santa Clara, Calif.) that generate 128-bit words for storage in a plurality of dual in-line memory modules (shown schematically as memory 18 in FIG. 1). Each DIMM illustratively has eighteen X4-type random access memory chips (e.g., DRAM chips) for storing data generated by the microprocessors 16, and is connected to one of four 72-bit buses (not shown). Accordingly, the HUB chip 14 may transfer 72 bits of data across each bus per clock cycle. The buses illustratively operate independently and transmit data in a synchronized manner. - The
microprocessors 16 on the three nodes 10A-10C cooperate to perform a common task. For example, at least one of the microprocessors 16 on each of the nodes 10A-10C may share responsibilities with those on other nodes 10A-10C for multiplying a complex matrix. To that end, certain data to be processed may be located on one of the nodes 10A-10C and thus, must be accessed by the other two nodes 10A-10C to complete their operation. Continuing with the above example, node 10A may have data that nodes 10B and 10C require to perform their portions of the task. In that case, node 10A is considered to be the "home node 10A," while nodes 10B and 10C are considered to be the "remote nodes 10B and 10C." It should be noted that this discussion of specific nodes 10A-10C is exemplary and thus, not intended to limit all aspects of the invention. Accordingly, this discussion applies to multi-node computer systems 12 having more nodes (e.g., hundreds of nodes) or fewer nodes. -
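The discussion above does not specify how a task such as matrix multiplication is divided among the nodes 10A-10C. One common approach, sketched here in C purely as an assumed example (the patent does not prescribe a partitioning scheme, and all identifiers are illustrative), gives each node a contiguous band of matrix rows:

```c
#include <assert.h>  /* for the usage checks below */

/* Assign node `node` (0-based) its band of rows out of `total_rows`,
 * spreading any remainder over the first nodes so the bands differ in
 * size by at most one row.  Purely illustrative. */
void partition_rows(int total_rows, int num_nodes, int node,
                    int *first_row, int *num_rows) {
    int base  = total_rows / num_nodes;
    int extra = total_rows % num_nodes;  /* first `extra` nodes get one more row */
    *first_row = node * base + (node < extra ? node : extra);
    *num_rows  = base + (node < extra ? 1 : 0);
}
```

With 10 rows and 3 nodes, for example, the bands hold 4, 3, and 3 rows respectively.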
FIG. 2 shows a memory controller 24 configured to control home node memory access in a manner that minimizes data traffic congestion within the computer system 12. As noted above, each node may have a memory controller 24. Among other things, the memory controller 24 has control logic 26 for tracking memory access by the remote nodes 10B and 10C, a message generator 28 for producing and managing messages forwarded within the computer system 12, and a synchronization variable module 30 for controlling synchronization variables. Various embodiments may implement barrier variables, spin locks, and other types of synchronization constructs or variables. For simplicity, barrier variables are discussed below as an exemplary implementation. Those in the art should understand, however, that other types of synchronization constructs may be used. - To perform its basic barrier functions, the discussed
synchronization variable module 30 has an initializing module 32 for initializing a barrier variable, a variable processor 34 for controlling the value of the barrier variable, and a comparator 36 for comparing the barrier variable to a test variable. FIGS. 3 and 4 discuss the cooperation of these and other components in greater detail. - In a manner similar to other components of the
computer system 12, it should be noted that the memory controller 24 has a number of other components that are not shown in the figures. Their omission, however, should not be considered to suggest that illustrative embodiments do not use them. Moreover, other functional modules may perform similar functionality to execute various embodiments of the invention. The functional modules discussed in the figures therefore merely are intended to illustrate an embodiment of the invention and thus, not intended to limit all aspects of the invention. -
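The module decomposition of FIG. 2 (initializing module 32, variable processor 34, and comparator 36) can be modeled as a table of operations, as in the following C sketch. The function-pointer representation and every identifier are assumptions for illustration only; an alternative prescribed relationship (e.g., multiplying the barrier variable by a constant, as discussed below) could be substituted by swapping table entries:

```c
#include <assert.h>  /* for the usage checks below */

/* One function pointer per functional module of FIG. 2. */
typedef struct {
    void (*initialize)(int *barrier, int *test, int expected); /* initializing module 32 */
    void (*update)(int *barrier);                              /* variable processor 34  */
    int  (*satisfied)(int barrier, int test);                  /* comparator 36          */
} sync_variable_module;

void init_vars(int *barrier, int *test, int expected) {
    *barrier = 0;
    *test = expected;
}
void increment(int *barrier) { (*barrier)++; }
int  equals(int barrier, int test) { return barrier == test; }

/* A module wired for the simple barrier-variable behavior. */
const sync_variable_module barrier_module = { init_vars, increment, equals };
```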
FIG. 3 shows a first process for managing memory access in accordance with illustrative embodiments of the invention. Unlike the process shown in FIG. 4, this process uses well known barrier variables to control memory access. Specifically, as known by those skilled in the art, a barrier variable ensures that no node in a group of cooperating nodes 10A-10C advances beyond a specified synchronization point until all processes of a given task have reached that point. In illustrative embodiments, the barrier variable is a 32-bit word. - Before the process of
FIG. 3 begins, however, some set of nodes 10A-10C are designated to concurrently execute a given process, such as multiplying a complex matrix. To that end, an application program executing on the computer system 12 may negotiate with the operating system to request specified resources, such as the total number of nodes 10A-10C or microprocessors 16 required to complete the task. For example, the application program may request that four microprocessors 16 execute a given task. The operating system responsively may designate certain microprocessors 16 on specific nodes 10A-10C to execute the task. All nodes 10A-10C maintain a record of the nodes 10/microprocessors 16 designated for various tasks. As discussed below, this data enables various steps of the process of FIG. 3. - For
computer systems 12 having nodes with multiple microprocessors 16, the operating system may designate any number of microprocessors 16 on a given node 10 to a single task. In some embodiments, however, different microprocessors 16 on a single node may be dedicated to separate tasks. The memory controllers 24 on a given home node 10A thus may service disparate requests from multiple microprocessors 16 that are on the same remote node 10B or 10C. - The process of
FIG. 3 begins at step 300, in which the initializing module 32 at the home node 10A initializes its barrier variable and a "test variable." For example, the process may set the barrier variable to a value of zero, and the test variable to a value equaling the total number of remote nodes 10B and 10C that are to access the home node memory 18. It should be noted that, for simplicity, the processes of FIGS. 3 and 4 are discussed as having nodes with one microprocessor only. Of course, as noted above, principles of various embodiments apply to systems having nodes with multiple microprocessors 16 that execute separate tasks, or the same task. - In some embodiments, not all designated
remote nodes 10B and 10C may access the memory 18 of the home node 10A for each step of the process. For example, in one step of a given task, only remote node 10B may access home node 10A. In that case, the process sets the test variable for this step to a value of one. In a subsequent step of the same task, however, both remote nodes 10B and 10C may access the home node 10A. Accordingly, the test variable for that step may be set to a value of two. In those embodiments, the application program may forward data indicating the total number of remote nodes 10B and 10C that are to access the memory 18 during a given step. The memory controller 24 therefore sets the test variable upon receipt of this data. - At substantially the same time, all of the
remote nodes 10B and 10C forward their access requests to the home node 10A. Each of the remote nodes 10B and 10C, however, also holds a local copy of the barrier variable (received from the home node 10A) requiring an update. Moreover, as also discussed below, the remote nodes 10B and 10C need not repeatedly poll the home node 10A for that update. - Also at this point in the process, the
home node 10A may retrieve the data (required by the remote nodes 10B and 10C) and store it in a local update cache. The home node 10A also may generate a record of qualities of the data in its local update cache. Among other things, the record may indicate the rights/permissions that various remote nodes 10B and 10C have to that data. The home node 10A maintains and updates this record throughout the process. - The process thus continues to step 302, which determines if any
remote nodes 10B and 10C are requesting access to the memory 18 on the home node 10A. In illustrative embodiments, many nodes 10 may forward request messages requesting access to the home node memory data. The control logic 26 stores each received request message in a first in-first out queue (a "FIFO"), thus processing each request message in the order received. If the queue is full and it receives a request message from a given remote node 10B, 10C, the home node 10A may drop that request message and forward a retry message to the given remote node 10B, 10C. In response, that remote node 10B, 10C may retransmit its request message to the home node 10A. - Accordingly, as noted above, the
home node 10A serially services each request message from the FIFO. To that end, the home node 10A may forward a copy of the data in its local cache to the requesting remote node 10B, 10C, which accesses the data in accordance with the rights/permissions granted to that remote node 10B, 10C (per the record noted above). - After the
home node 10A processes a given request message, the process continues to step 304, in which the variable processor 34 changes the barrier variable in some prescribed manner. In illustrative embodiments, the variable processor 34 increments the barrier variable by one. Of course, incrementing the barrier variable by one is but one way of modifying the barrier variable. For example, alternative embodiments may multiply the barrier variable by a given constant, or use it as a variable within some pre-specified function. Some embodiments, however, permit other nodes to access the barrier variable. In those cases, the home node 10A may perform coherence operations prior to changing the barrier variable. - The
comparator 36 then determines at step 306 if the barrier variable satisfies some prescribed relationship with the test variable. To that end, in illustrative embodiments, the comparator 36 determines if the barrier variable is equal to the test variable. Although a simple comparison is discussed, alternative embodiments may further process the barrier and test variables to determine if they satisfy some prespecified relationship. - If the barrier variable does not equal the test variable, then the process loops back to step 302, which retrieves the next request message from the queue. Conversely, if the
comparator 36 determines that the barrier variable equals the test variable (at step 306), then all remote nodes 10B and 10C have accessed the home node memory 18. In that case, the process continues to step 308, which generates and broadcasts/forwards a completion message to each of the remote nodes 10B and 10C in the computer system 12. To that end, the message generator 28 generates the completion message, and issues the broadcast message through its interface to the interconnect 22 with the other nodes 10. In some embodiments, rather than broadcasting the message, the home node 10A maintains a record of all remote nodes 10B and 10C designated for the task, and forwards the completion message to those remote nodes 10B and 10C only after they all have accessed the home node memory 18. - The completion message includes data that, when read by the
remote nodes 10B and 10C, indicates that all of the remote nodes 10B and 10C have accessed the home node memory 18. Accordingly, among other data, the completion message may include the barrier variable incremented to its maximum specified value, and a "put" request to cause receiving remote nodes 10B and 10C to update the barrier variable in their local caches. The memory controller 24 therefore is considered to push such barrier variable to the remote nodes 10B and 10C, rather than requiring each remote node 10B, 10C to repeatedly request it. - This process therefore issues update messages (i.e., the completion messages) that synchronize
multiple nodes 10A-10C while they each perform atomic operations on specified data. Accordingly, network hotspots are minimized because fewer barrier variable requests and broadcasts are transmitted between nodes 10A-10C. - Although barrier variables are discussed, other means may be used to implement various embodiments of the invention. For example, rather than using barrier variables, various embodiments may be implemented by using spin locks.
FIG. 4 shows one such exemplary process. - As known by those skilled in the art, a spinlock ensures atomic access to data or code protected by a lock. To that end, the process begins at
step 400, in which the home node 10A determines which remote nodes 10B and 10C are to access its memory 18. This may be executed in a manner similar to step 300 of FIG. 3, in which the operating system and application program negotiate resources for a given task. - The
home node 10A then forwards a lock (a data word of a specified size) to the first remote node 10B, 10C. After that remote node 10B, 10C completes its access, the message generator 28 forwards the lock to the next remote node 10B, 10C that is to access the home node memory 18. Among other ways, the message generator 28 may forward a lock message (to the remote node 10B, 10C currently holding the lock) identifying the next remote node 10B, 10C, or the home node 10A may forward the lock to the next remote node 10B, 10C directly. To that end, the home node 10A may maintain a record of the remote nodes 10B and 10C designated to hold the lock. While one of the remote nodes 10B and 10C holds the lock, the home node 10A will only permit access by the remote node 10B, 10C holding the lock. - The
home node 10A then determines at step 404 if the next remote node 10B, 10C is the last remote node 10B, 10C that is to access its memory 18. To that end, the home node 10A may determine if a pointer to a list having the remote nodes 10B and 10C points to the last remote node 10B, 10C in the list. If the next remote node 10B, 10C is not the last node, the home node 10A provides the lock to that next remote node 10B, 10C, and the process loops back to step 404. - Conversely, if the
home node 10A determines at step 404 that the next remote node 10B, 10C is the last node, then the process continues to step 406, in which the message generator 28 broadcasts a completion message to the remote nodes 10B and 10C, permitting the remote nodes 10B and 10C to continue to subsequent process steps. - Accordingly, in a manner similar to the process discussed with regard to
FIG. 3, the process of FIG. 4 also minimizes network hotspots, thus optimizing computer system performance. - Various embodiments of the invention may be implemented at least in part in any conventional computer programming language. For example, some embodiments may be implemented in a procedural programming language (e.g., "C"), or in an object oriented programming language (e.g., "C++"). Other embodiments of the invention may be implemented as preprogrammed hardware elements (e.g., application specific integrated circuits, FPGAs, and digital signal processors), or other related components.
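Since the preceding paragraph names C as one candidate implementation language, the lock-passing sequence of FIG. 4 can be sketched as follows. The data layout, the return conventions, and all identifiers are illustrative assumptions:

```c
#include <assert.h>  /* for the usage checks below */

#define MAX_NODES 8

/* Home-node bookkeeping for the lock of FIG. 4. */
typedef struct {
    int holders[MAX_NODES];    /* remote nodes due to hold the lock, in order */
    int count;                 /* how many such nodes there are               */
    int current;               /* index of the node now holding the lock      */
    int completion_broadcasts; /* should end up exactly 1                     */
} lock_server;

void lock_server_init(lock_server *s, const int *nodes, int count) {
    for (int i = 0; i < count; i++) s->holders[i] = nodes[i];
    s->count = count;
    s->current = 0;  /* the first node is granted the lock immediately */
    s->completion_broadcasts = 0;
}

/* Called when the current holder finishes its memory access.  Returns
 * the node granted the lock next, or -1 once every node has held it,
 * at which point the completion message is broadcast exactly once. */
int lock_release(lock_server *s) {
    s->current++;
    if (s->current >= s->count) {
        s->completion_broadcasts++;  /* broadcast to all remote nodes */
        return -1;
    }
    return s->holders[s->current];
}
```

As in the barrier-variable process, the remote nodes never poll; the home node hands the lock forward and sends one completion message at the end.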
- As suggested above, the disclosed apparatus and methods (e.g., see the various flow charts described above) may be implemented as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., WIFI, microwave, infrared or other transmission techniques). The series of computer instructions can embody all or part of the functionality previously described herein with respect to the system.
- Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies.
- Among other ways, such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software.
- Although the above discussion discloses various exemplary embodiments of the invention, it should be apparent that those skilled in the art can make various modifications that will achieve some of the advantages of the invention without departing from the true scope of the invention.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/330,413 US20090259696A1 (en) | 2005-04-25 | 2008-12-08 | Node Synchronization for Multi-Processor Computer Systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/113,805 US7464115B2 (en) | 2005-04-25 | 2005-04-25 | Node synchronization for multi-processor computer systems |
US12/330,413 US20090259696A1 (en) | 2005-04-25 | 2008-12-08 | Node Synchronization for Multi-Processor Computer Systems |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/113,805 Continuation US7464115B2 (en) | 2005-04-25 | 2005-04-25 | Node synchronization for multi-processor computer systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090259696A1 true US20090259696A1 (en) | 2009-10-15 |
Also Published As
Publication number | Publication date |
---|---|
US7464115B2 (en) | 2008-12-09 |
US20060242308A1 (en) | 2006-10-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MORGAN STANLEY & CO., INCORPORATED, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:SILICON GRAPHICS, INC.;REEL/FRAME:022137/0162 Effective date: 20090114 |
|
AS | Assignment |
Owner name: SILICON GRAPHICS INTERNATIONAL, CORP., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SILICON GRAPHICS, INC. ET AL.;SGI INTERNATIONAL, INC.;SIGNING DATES FROM 20090508 TO 20120320;REEL/FRAME:027904/0315 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: SILICON GRAPHICS INTERNATIONAL CORP., CALIFORNIA Free format text: MERGER;ASSIGNOR:SGI INTERNATIONAL, INC.;REEL/FRAME:039257/0994 Effective date: 20120808 Owner name: SGI INTERNATIONAL, INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:SILICON GRAPHICS INTERNATIONAL, INC.;REEL/FRAME:039465/0390 Effective date: 20090513 Owner name: SILICON GRAPHICS INTERNATIONAL, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SILICON GRAPHICS, INC.;REEL/FRAME:039465/0001 Effective date: 20090508 |
|
AS | Assignment |
Owner name: SILICON GRAPHICS, INC., CALIFORNIA Free format text: ORDER...AUTHORIZING THE SALE OF ALL OR SUBSTANTIALLY ALL OF THE ASSETS OF THE DEBTORS FREE AND CLEAR OF ALL LIENS, ENCUMBRANCES, AND INTERESTS;ASSIGNOR:MORGAN STANLEY & CO., INCORPORATED;REEL/FRAME:039503/0577 Effective date: 20090430 |
|
AS | Assignment |
Owner name: SILICON GRAPHICS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CARTER, JOHN;PASSINT, RANDAL S;DAI, DONGLAI;AND OTHERS;SIGNING DATES FROM 20050801 TO 20050802;REEL/FRAME:040103/0975 |
|
AS | Assignment |
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SILICON GRAPHICS INTERNATIONAL CORP.;REEL/FRAME:044128/0149 Effective date: 20170501 |