US7546398B2 - System and method for distributing virtual input/output operations across multiple logical partitions - Google Patents


Info

Publication number
US7546398B2
US7546398B2 (application US11/461,461)
Authority
US
United States
Prior art keywords: lpar, request, lpars, devices, vio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11/461,461
Other versions
US20080126579A1 (en)
Inventor
Karyn T. Corneli
Christopher J. DAWSON
Rick A. Hamilton II
Timothy M. Waters
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kyndryl Inc
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US11/461,461 (US7546398B2)
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DAWSON, CHRISTOPHER J.; CORNELI, KARYN T.; HAMILTON, RICK A., II; WATERS, TIMOTHY M.
Priority to CN2007101373822A (CN101118521B)
Priority to JP2007199992A (JP5039947B2)
Publication of US20080126579A1
Priority to US12/478,584 (US8024497B2)
Application granted
Publication of US7546398B2
Assigned to KYNDRYL, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignor: INTERNATIONAL BUSINESS MACHINES CORPORATION
Legal status: Active
Expiration: adjusted

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 — Techniques for rebalancing the load in a distributed system
    • G06F 9/5088 — Techniques for rebalancing the load in a distributed system involving task migration
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 — Error detection; Error correction; Monitoring
    • G06F 11/07 — Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 — Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 — Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2002 — Error detection or correction of the data by redundancy in hardware using active fault-masking, where interconnections or communication control functionality are redundant
    • G06F 11/2005 — Error detection or correction of the data by redundancy in hardware using active fault-masking, where interconnections or communication control functionality are redundant, using redundant communication controllers
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 — Error detection; Error correction; Monitoring
    • G06F 11/30 — Monitoring
    • G06F 11/34 — Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466 — Performance evaluation by tracing or monitoring
    • G06F 11/3485 — Performance evaluation by tracing or monitoring for I/O devices
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 — Error detection; Error correction; Monitoring
    • G06F 11/07 — Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 — Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 — Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2017 — Error detection or correction of the data by redundancy in hardware using active fault-masking, where memory access, memory control or I/O control functionality is redundant
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 — Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/81 — Threshold

Definitions

  • FIG. 6 shows that I/O Device Assignment Component 600 starts when initiated by I/O Management Component 500 (610).
  • I/O Device Assignment Component 600 reads the I/O request (612) and opens I/O Device Mapping List 460 (614).
  • I/O Device Assignment Component 600 consults Autonomic Manager 430 to identify performance metrics of available I/O devices (616).
  • I/O Device Assignment Component 600 assigns the I/O request to the best-performing I/O device of the type needed by the I/O request (618). The assignment of the I/O device may also be influenced by priority preferences stored in I/O Device Mapping List 460.
  • I/O Device Assignment Component 600 saves the assignment to I/O Device Mapping List 460 (620) so that subsequent requests in the session will already be assigned. Using bindings to link a request to a specific I/O device allows the client to encapsulate the assignment in subsequent requests in the session. I/O Device Assignment Component 600 closes I/O Device Mapping List 460 (622), sends the I/O request and assignment back to I/O Management Component 500 (624) and stops (628).
  • In an alternate embodiment, I/O Device Assignment Component 600 does not consult Autonomic Manager 430 or another centralized tracking and tuning program to make I/O device assignments. Instead, it queries each I/O device manager 470 individually, then makes the assignment based on the responses of each I/O device manager 470.
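The selection step above (616-618) reduces to picking the best-performing available device of the requested type, with priority preferences able to override the metric. A minimal sketch in Python; the metric structure and device names are illustrative assumptions, not taken from the patent:

```python
def assign_io_device(request_type, metrics, preferred=None):
    """Sketch of I/O Device Assignment Component 600 (FIG. 6): choose the
    best-performing I/O device of the needed type (618).

    metrics: device -> {"type": ..., "performance": ...}, as might be
    reported by Autonomic Manager 430, or gathered by querying each I/O
    device manager 470 in the alternate embodiment.
    preferred: optional device favored by priority preferences stored in
    I/O Device Mapping List 460.
    """
    # Only devices of the type the request needs are candidates.
    candidates = {d: m for d, m in metrics.items() if m["type"] == request_type}
    if not candidates:
        raise LookupError(f"no I/O device of type {request_type!r} available")
    if preferred in candidates:
        return preferred  # priority preference wins over raw performance
    # Otherwise take the device with the highest reported performance.
    return max(candidates, key=lambda d: candidates[d]["performance"])
```

The saved binding (620) would then let subsequent requests in the session skip this selection entirely.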
  • I/O Failover Component 700 starts when initiated by I/O Management Component 500 (710). I/O Failover Component 700 is initiated whenever an I/O request is assigned to a failed or unavailable I/O device. An I/O device may become unavailable because the I/O device itself failed or the LPAR connected to the I/O device has failed. I/O Failover Component 700 receives the I/O request (712) and opens I/O Device Mapping List 460 (714). I/O Failover Component 700 consults Autonomic Manager 430 to identify performance metrics of available I/O devices (716).
  • I/O Failover Component 700 assigns the I/O request to the best-performing I/O device of the type needed by the I/O request (718). The assignment of the I/O device may also be influenced by priority preferences stored in I/O Device Mapping List 460. I/O Failover Component 700 saves the assignment to I/O Device Mapping List 460 (720) so that subsequent requests in the session will already be assigned. Using bindings to link a request to a specific I/O device allows the client to encapsulate the assignment in subsequent requests in the session.
  • I/O Failover Component 700 determines if any other applications, LPARs or sessions are assigned to the failed device (722) by reviewing bindings stored in I/O Device Mapping List 460. If other assignments to the failed device are identified, I/O Failover Component 700 assigns future I/O requests for the application or LPAR to the best-performing I/O device (724) and saves the assignment to I/O Device Mapping List 460 (726). After reassigning I/O requests, I/O Failover Component 700 closes I/O Device Mapping List 460 (728), sends the I/O request and assignment back to I/O Management Component 500 (730) and stops (732).
  • In an alternate embodiment, I/O Failover Component 700 does not consult Autonomic Manager 430 or another centralized tracking and tuning program to determine I/O device assignments. Instead, it queries each I/O device manager 470 individually and then makes the assignment based on the responses of each I/O device manager 470.
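The distinguishing step of the failover flow (722-726) is that every binding to the failed device is rewritten, not only the one behind the current request. A sketch under assumed metric and device names (not from the patent):

```python
def fail_over(failed_device, bindings, metrics):
    """Sketch of I/O Failover Component 700 (FIG. 7): move every
    application/LPAR bound to a failed or unavailable device onto the
    best-performing surviving device of the same type.

    bindings: client -> device (the bindings in I/O Device Mapping List 460)
    metrics:  device -> {"type": ..., "performance": ..., "available": ...}
    """
    needed = metrics[failed_device]["type"]
    survivors = {d: m for d, m in metrics.items()
                 if m["available"] and m["type"] == needed}
    if not survivors:
        raise LookupError(f"no available device of type {needed!r}")
    best = max(survivors, key=lambda d: survivors[d]["performance"])
    # Steps 722-726: find and reassign every client bound to the failed device.
    moved = [c for c, d in bindings.items() if d == failed_device]
    for client in moved:
        bindings[client] = best
    return best, moved
```

Rebinding all clients at once means no other session has to hit the failed device before being redirected.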

Abstract

The Distributed Virtual I/O Tool replaces dedicated VIO server LPARs by distributing the virtual I/O functions across several application LPARs connected by a high-speed communication channel. The physical I/O devices are distributed across available LPARs. The Distributed Virtual I/O Tool assigns each I/O request to an appropriate I/O device. The Distributed Virtual I/O Tool monitors each I/O request and reassigns I/O devices when performance drops on a specific device or when a device is no longer available.

Description

FIELD OF THE INVENTION
The invention relates generally to electrical computers and digital data processing, and specifically to selecting a path via which the computers will transfer data.
BACKGROUND OF THE INVENTION
The advent of logical partitions (“LPARs”) in UNIX servers enabled mid-range servers to provide a class of service previously provided only by mainframe systems. Mainframe computers traditionally used physical partitioning to construct multiple “system images” using separate discrete building blocks. UNIX servers, using logical partitions, permitted finer granularity and interchangeability of components across system images. In addition, the virtualization of input/output (“I/O”) devices across multiple partitions further enhanced logical partitioning functionality. Virtualization of I/O devices allows multiple logical partitions to share physical resources such as Ethernet adapters, disk adapters and so forth. Therefore, rather than dedicating these virtual I/O adapters to each logical partition, the adaptors are shared between partitions, where each LPAR uses only the I/O adaptors as needed.
Management of virtual I/O adapters requires a dedicated component acting on behalf of all resources. For example, a Virtual I/O server, or “VIO” server, may be created by forming a specialized LPAR dedicated to the task of possessing all shared I/O devices. The VIO server acts as a “virtual device” that fields input-output requests from all other LPARs. All of the shared I/O devices are physically attached to the VIO server. The IBM BladeCenter approaches virtual I/O management differently, using a BladeCenter chassis that allows a virtual I/O to include fibre channel and Ethernet networking interface cards. While the BladeCenter does not rely on a dedicated LPAR to perform the virtualization, a dedicated processor housed in the management blade of the chassis uses a dedicated VIO server to perform the virtualization.
Virtual I/O servers use software to seamlessly redirect input/output to an alternate device if a first device fails. By having access to multiple Ethernet adapters, for instance, the failure of any single physical adapter no longer deprives any given LPAR of Ethernet functionality. Instead, the VIO server provides the desired functionality to its client LPAR from another physical adapter.
The use of a central dedicated VIO server, however, puts all LPARs into a state of extreme dependence upon that single dedicated VIO server. For instance, if any failure mechanism, such as a processor problem or an operating system malfunction, manifests itself on the VIO server, all applications running on LPARs dependent upon that VIO server lose their ability to communicate through the I/O adaptors. In other words, the dedicated VIO server now becomes a single point of failure for all applications and LPARs using I/O adaptors.
One known solution to eliminate the single point-of-failure for a VIO server is to create redundant dedicated VIO LPARs. However, creation of redundant dedicated VIO LPARs unnecessarily consumes resources. For instance, each dedicated VIO LPAR requires processor and memory allocation, as well as disk space and other such resources, which would better be used running applications and performing direct value-added computations for users. Therefore, a need exists for a distributed VIO server that can operate across some or all of the application LPARs so that it is not subject to a single point of failure and that also does not duplicate computer resources.
SUMMARY OF THE INVENTION
The invention meeting the need identified above is the Distributed Virtual I/O Tool. The Distributed Virtual I/O Tool replaces dedicated VIO server LPARs by distributing the virtual I/O functions across several application LPARs connected by a high-speed communication channel. The Distributed Virtual I/O Tool receives an I/O request for an application running on a logical partition of a shared resource with a plurality of logical partitions, wherein I/O devices are physically distributed across the plurality of logical partitions. The Distributed Virtual I/O Tool assigns the I/O request to one of the I/O devices, wherein the I/O request can be assigned to any I/O device of the proper type attached to any logical partition, regardless of which logical partition runs the application receiving the I/O request, and sends the I/O request to the assigned I/O device.
Generally, each application or LPAR maps to a specific I/O device, binding the application or LPAR to the mapped device. If there is not a prior assignment, the Distributed Virtual I/O Tool assigns an I/O device when the I/O request is made. The Distributed Virtual I/O Tool monitors each I/O request and reassigns I/O devices when performance drops on a specific device or the LPAR connected to the device is no longer available. Assignment and reassignment of an I/O device may be based on the recommendation of an autonomic manager tasked with monitoring and managing the performance of the entire computer system. An alternate embodiment of the Distributed Virtual I/O Tool queries each I/O device manager for availability and performance data, and assigns or reassigns I/O devices based on the responses of the I/O device managers. Alternatively, the physical I/O devices may be distributed randomly across available LPARs such that LPARs with specific I/O needs may be given priority for a physical I/O device. In a further embodiment, a LPAR may have a dedicated I/O device, and will not share the Virtual I/O Tool.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will be understood best by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
FIG. 1 is an exemplary computer network;
FIG. 2 is a diagram of an exemplary shared resource with a dedicated VIO LPAR;
FIG. 3 is a diagram of a shared resource with a distributed VIO tool;
FIG. 4 describes programs and files in a memory on a computer;
FIG. 5 is a flowchart of an I/O Management Component;
FIG. 6 is a flowchart of an I/O Device Assignment Component; and
FIG. 7 is a flowchart of an I/O Failover Component.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The principles of the present invention are applicable to a variety of computer hardware and software configurations. The term “computer hardware” or “hardware,” as used herein, refers to any machine or apparatus that is capable of accepting, performing logic operations on, storing, or displaying data, and includes without limitation processors and memory; the term “computer software” or “software” refers to any set of instructions operable to cause computer hardware to perform an operation. A “computer,” as that term is used herein, includes without limitation any useful combination of hardware and software, and a “computer program” or “program” includes without limitation any software operable to cause computer hardware to accept, perform logic operations on, store, or display data. A computer program may be, and often is, composed of a plurality of smaller programming units, including without limitation subroutines, modules, functions, methods, and procedures. Thus, the functions of the present invention may be distributed among a plurality of computers and computer programs. The invention is described best, though, as a single computer program that configures and enables one or more general-purpose computers to implement the novel aspects of the invention. For illustrative purposes, the inventive computer program will be referred to as the “Distributed Virtual I/O Tool.”
Additionally, the Distributed Virtual I/O Tool is described below with reference to an exemplary network of hardware devices, as depicted in FIG. 1. A “network” comprises any number of hardware devices coupled to and in communication with each other through a communications medium, such as the Internet. A “communications medium” includes without limitation any physical, optical, electromagnetic, or other medium through which hardware or software can transmit data. For descriptive purposes, exemplary network 100 has only a limited number of nodes, including workstation computer 105, workstation computer 110, server computer 115, and persistent storage 120. Network connection 125 comprises all hardware, software, and communications media necessary to enable communication between network nodes 105-120. Unless otherwise indicated in context below, all network nodes use publicly available protocols or messaging services to communicate with each other through network connection 125.
A computer with multiple logical partitions, known as a shared resource, is shown in FIG. 2. Shared Resource 200 is an example of the prior art method of providing a VIO server on a dedicated logical partition, or LPAR. Shared Resource 200 has several LPARs connected by Inter-Partition Communication 220, a high-speed communication system linking all the LPARs, such as the POWER HYPERVISOR product from IBM. LPAR_1 211 runs applications on an AIX operating system. LPAR_2 212 runs applications on a LINUX operating system. LPAR_3 213 runs applications on an i5 operating system. LPAR_4 214 has unassigned resources available for increases in demands for computing resources. LPAR_5 215 is the VIO LPAR and physically connects to all the available I/O devices such as Ethernet adaptors, fibre channels and persistent storage media. Each application LPAR (211-214) accesses I/O devices 250 via Inter-Partition Communication 220 and VIO server LPAR 215.
FIG. 3 depicts Improved Shared Resource 300 using a VIO server distributed across several LPARs. The LPARs on Improved Shared Resource 300 are connected by Inter-Partition Communication 320, just as in the prior art of FIG. 2. LPAR_1 311 and LPAR_5 315 run applications on an AIX operating system. LPAR_2 312 runs applications on a LINUX operating system. LPAR_3 313 runs applications on an i5 operating system. LPAR_4 314 has unassigned resources available for increases in demands for computing resources. Distributed VIO Tool 400 runs on any of the LPARs, as part of the overall server management software. LPARs 311, 312 and 315 are physically connected to I/O devices 351, 352 and 353, respectively. Each LPAR (311-315) can access any of I/O devices 350 via Inter-Partition Communication 320 and the direct I/O connections through LPARs 311, 312 and 315. In an embodiment of the invention, LPAR 311, 312 or 315 may have a dedicated I/O device that is not shared by the other LPARs.
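The topology just described amounts to a routing table: each shared device is physically attached to exactly one LPAR, and every other LPAR reaches it over Inter-Partition Communication 320. A minimal sketch, with device names that are purely illustrative:

```python
# Physical attachment of shared I/O devices to LPARs, per FIG. 3.
# Device names are hypothetical; the patent only numbers the devices
# (351, 352, 353) attached to LPARs 311, 312 and 315.
ATTACHMENTS = {
    "LPAR_1": ["ethernet0"],  # I/O devices 351
    "LPAR_2": ["fibre0"],     # I/O devices 352
    "LPAR_5": ["disk0"],      # I/O devices 353
}

def hosting_lpar(device):
    """Return the LPAR to which a shared device is physically attached."""
    for lpar, devices in ATTACHMENTS.items():
        if device in devices:
            return lpar
    raise KeyError(f"no LPAR hosts {device}")

def route(requesting_lpar, device):
    """An I/O request uses the direct physical connection when the
    requesting LPAR owns the device, and otherwise travels over the
    inter-partition communication channel to the owning LPAR."""
    owner = hosting_lpar(device)
    if owner == requesting_lpar:
        return [requesting_lpar]
    return [requesting_lpar, "IPC-320", owner]
```

This is what removes the single point of failure: losing one hosting LPAR removes only its row from the table, while every other LPAR's devices remain reachable.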
Distributed VIO Tool 400 typically is stored in a memory, represented schematically as memory 420 in FIG. 4. The term “memory,” as used herein, includes without limitation any volatile or persistent medium, such as an electrical circuit, magnetic disk, or optical disk, in which a computer can store data or software for any duration. A single memory may encompass and be distributed across a plurality of media. Thus, FIG. 4 is included merely as a descriptive expedient and does not necessarily reflect any particular physical embodiment of memory 420. As depicted in FIG. 4, though, memory 420 may include additional data and programs. Of particular import to Distributed VIO Tool 400, memory 420 may include Autonomic Manager 430, Applications 450, I/O Device Mapping List 460, and I/O Device Managers 470 with which Distributed VIO Tool 400 interacts. Additionally, Distributed VIO Tool 400 has three components: I/O Management Component 500, I/O Device Assignment Component 600 and I/O Failover Component 700.
Autonomic Manager 430 continuously monitors and analyzes the computer system to ensure the system operates smoothly. One major function known in the art for Autonomic Manager 430 is load balancing so that system resources are efficiently used by applications on the server. Applications 450 are the functional programs performing tasks for users on the server. Examples of Applications 450 include such things as databases, Internet sites, accounting software and e-mail service. I/O Device Mapping List 460 is a file that maps various applications and LPARs to specific I/O devices using bindings. I/O Device Mapping List 460 may also include other configuration preferences such as a performance threshold for I/O devices or a preferred priority for assigning certain applications to an I/O device. I/O Device Managers 470 are programs that configure and operate the physical I/O devices.
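A minimal sketch of what I/O Device Mapping List 460 might hold follows. The field layout, device names, and the `lookup_binding` helper are all assumptions for illustration; the patent specifies only that the list maps applications and LPARs to devices via bindings and may carry thresholds and priority preferences.

```python
# Hypothetical in-memory layout of I/O Device Mapping List 460.
device_mapping_list = {
    # bindings: (application, LPAR) -> assigned I/O device
    "bindings": {("app_db", "LPAR_1"): "fc_adapter_0"},
    # optional performance threshold per device (e.g. max latency, ms)
    "thresholds": {"fc_adapter_0": 5.0},
    # preferred assignment order per application
    "priorities": {"app_db": ["fc_adapter_0", "fc_adapter_1"]},
}

def lookup_binding(mapping, app, lpar):
    """Return the device bound to this application/LPAR, or None."""
    return mapping["bindings"].get((app, lpar))
```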
As shown in FIG. 5, I/O Management Component 500 starts whenever an I/O request is made for one of Applications 450 on Shared Resource 300 (510). I/O Management Component 500 receives the I/O request (512) and accesses I/O Device Mapping List 460 (514). I/O Management Component 500 determines if an I/O device has been assigned to the application or LPAR that made or received the I/O request (516). If an I/O device is not assigned, I/O Management Component 500 starts I/O Device Assignment Component 600 (518). If an I/O device is already assigned, or after assigning an I/O device, I/O Management Component 500 determines if the assigned I/O device is available (520). If the assigned I/O device is not available, I/O Management Component 500 starts I/O Failover Component 700 (522). After ensuring that the I/O request is assigned to an available I/O device, I/O Management Component 500 determines whether the assigned I/O device is performing at an acceptable level (524). Performance thresholds may be set in I/O Device Mapping List 460, or may come from another source, such as Autonomic Manager 430. If the I/O device performance is not acceptable, I/O Management Component 500 starts I/O Device Assignment Component 600 (526). Once an I/O request is assigned to an available, acceptable I/O device, I/O Management Component 500 sends the I/O request to the assigned I/O device manager 470 (528) and I/O Management Component 500 stops (530).
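The decision flow of steps 512-530 can be condensed into one function. This is a sketch only: `assign`, `failover`, `is_available`, `is_acceptable` and `send` are hypothetical stand-ins for I/O Device Assignment Component 600, I/O Failover Component 700, the availability and threshold checks, and the device manager 470 interface.

```python
# Sketch of I/O Management Component 500 (FIG. 5, steps 512-530).
def handle_io_request(request, bindings, assign, failover,
                      is_available, is_acceptable, send):
    key = request["key"]
    device = bindings.get(key)            # 514-516: consult Mapping List 460
    if device is None:
        device = assign(request)          # 518: Device Assignment 600
        bindings[key] = device
    if not is_available(device):          # 520: availability check
        device = failover(request)        # 522: Failover Component 700
        bindings[key] = device
    if not is_acceptable(device):         # 524: performance threshold
        device = assign(request)          # 526: reassign
        bindings[key] = device
    send(device, request)                 # 528: forward to device manager
    return device                         # 530: done
```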
FIG. 6 shows that I/O Device Assignment Component 600 starts when initiated by I/O Management Component 500 (610). I/O Device Assignment Component 600 reads the I/O request (612) and opens I/O Device Mapping List 460 (614). I/O Device Assignment Component 600 consults Autonomic Manager 430 to identify performance metrics of available I/O devices (616). I/O Device Assignment Component 600 assigns the I/O request to the best performing I/O device of the type needed by the I/O request (618). The assignment of the I/O device may also be influenced by priority preferences stored in I/O Device Mapping List 460. I/O Device Assignment Component 600 saves the assignment to I/O Device Mapping List 460 (620) so that subsequent requests in the session will already be assigned. Using bindings to link a request to a specific I/O device allows the client to encapsulate the assignment in subsequent requests in the session. I/O Device Assignment Component 600 closes I/O Device Mapping List 460 (622), sends the I/O request and assignment back to I/O Management Component 500 (624) and stops (628).
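The assignment rule of steps 616-618 can be sketched as follows. The metric format (a type plus a score per device) and the `assign_device` name are assumptions; the patent says only that the best-performing device of the needed type is chosen, optionally influenced by stored priority preferences.

```python
# Sketch of the selection rule in I/O Device Assignment Component 600.
def assign_device(request, metrics, priorities=None):
    """metrics: device -> (device type, performance score); higher is better."""
    candidates = [d for d, (dtype, _) in metrics.items()
                  if dtype == request["type"]]
    preferred = (priorities or {}).get(request["app"], [])
    for dev in preferred:                 # 618: priority preference wins
        if dev in candidates:
            return dev
    # 616-618: otherwise take the best-performing device of that type
    return max(candidates, key=lambda d: metrics[d][1])
```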
An alternate embodiment of I/O Device Assignment Component 600 (not shown) does not consult Autonomic Manager 430 or another centralized tracking and tuning program to make I/O device assignments. Instead, the alternate embodiment queries each I/O device manager 470 individually, then makes the assignment based on the responses of each I/O device manager 470.
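That alternate embodiment might look like the sketch below, where each device manager 470 is polled directly. The status dictionary each manager returns (`type`, `available`, `score`) is a hypothetical interface invented for illustration.

```python
# Sketch of the alternate embodiment: poll each I/O device manager 470
# rather than consulting a central Autonomic Manager.
def assign_by_polling(request, device_managers):
    """device_managers: device name -> callable returning a status dict."""
    responses = {dev: query() for dev, query in device_managers.items()}
    candidates = {dev: status for dev, status in responses.items()
                  if status["type"] == request["type"] and status["available"]}
    # Choose the best-scoring available device of the requested type.
    return max(candidates, key=lambda d: candidates[d]["score"])
```

The trade-off against the centralized embodiment is one query per device per assignment rather than a single lookup against aggregated metrics.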
I/O Failover Component 700, shown in FIG. 7, starts when initiated by I/O Management Component 500 (710). I/O Failover Component 700 is initiated whenever an I/O request is assigned to a failed or unavailable I/O device. An I/O device may become unavailable because the I/O device itself failed or the LPAR connected to the I/O device has failed. I/O Failover Component 700 receives the I/O request (712) and opens I/O Device Mapping List 460 (714). I/O Failover Component 700 consults Autonomic Manager 430 to identify performance metrics of available I/O devices (716). I/O Failover Component 700 assigns the I/O request to the best performing I/O device of the type needed by the I/O request (718). The assignment of the I/O device may also be influenced by priority preferences stored in I/O Device Mapping List 460. I/O Failover Component 700 saves the assignment to I/O Device Mapping List 460 (720) so that subsequent requests in the session will already be assigned. Using bindings to link a request to a specific I/O device allows the client to encapsulate the assignment in subsequent requests in the session. I/O Failover Component 700 determines if any other applications, LPARs or sessions are assigned to the failed device (722) by reviewing bindings stored in I/O Device Mapping List 460. If other assignments to the failed device are identified, I/O Failover Component 700 assigns future I/O requests for the application or LPAR to the best performing I/O device (724) and saves the assignment to I/O Device Mapping List 460 (726). After reassigning I/O requests, I/O Failover Component 700 closes I/O Device Mapping List 460 (728), sends the I/O request and assignment back to I/O Management Component 500 (730) and stops (732).
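The rebinding sweep of steps 716-726 can be sketched in one function. `pick_best` stands in for the metric lookup of steps 716-718, and the flat `bindings` dictionary is a simplified stand-in for I/O Device Mapping List 460; both names are assumptions.

```python
# Sketch of I/O Failover Component 700 (FIG. 7, steps 716-726).
def fail_over(request, failed_device, bindings, pick_best):
    replacement = pick_best(request)        # 716-718: best available device
    bindings[request["key"]] = replacement  # 720: save the new binding
    for key, dev in bindings.items():       # 722: find other bindings
        if dev == failed_device:            #      still on the failed device
            bindings[key] = replacement     # 724-726: reassign and save
    return replacement
```

Because the sweep rewrites every binding that points at the failed device, later requests in those sessions never see the failure.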
As with I/O Device Assignment Component 600, an alternate embodiment of I/O Failover Component 700 (not shown) does not consult Autonomic Manager 430 or another centralized tracking and tuning program to determine I/O device assignments. Instead, the alternate embodiment queries each I/O device manager 470 individually and then makes the assignment based on the responses of each I/O device manager 470.
A preferred form of the invention has been shown in the drawings and described above, but variations in the preferred form will be apparent to those skilled in the art. The preceding description is for illustration purposes only, and the invention should not be construed as limited to the specific form shown and described. The scope of the invention should be limited only by the language of the following claims.

Claims (1)

1. A method for accessing a plurality of input/output (I/O) devices physically distributed across a plurality of logical partitions (LPARs) without a single point of failure comprising:
creating a distributed virtual input/output (VIO) server by running a VIO tool on each LPAR of the plurality of LPARs;
wherein each LPAR is operable to access the plurality of I/O devices using inter-partition communication and each LPAR is operable to access an I/O device to which it is directly connected;
wherein each LPAR is mapped to a dedicated I/O device that is not shared with the plurality of LPARs;
wherein each VIO tool performs steps comprising:
receiving an I/O request from an application by a VIO tool;
assigning the I/O request to one of the plurality of I/O devices, wherein the I/O request is assigned to any I/O device of a proper type attached to any LPAR, regardless of which LPAR runs the application;
wherein types of I/O devices comprise Ethernet adaptors, disk adaptors, fibre channels and persistent storage media;
accessing an I/O Device Mapping List, that maps applications and LPARs to the plurality of I/O devices using bindings;
consulting an Autonomic Manager to assign the I/O request to a preferred I/O device, wherein the preferred I/O device is one performing at an acceptable level;
sending the I/O request to the assigned I/O device;
binding the I/O request to the assigned I/O device wherein an assignment of the assigned I/O device is encapsulated in one or more subsequent requests of a client/server session;
saving the assignment to the I/O Device Mapping List;
responsive to a failure of a previously assigned I/O device or to a failure of the LPAR mapped to the I/O device, reassigning one or more subsequent I/O requests in the client/server session to another I/O device; and
saving the reassigned I/O request to the I/O Device Mapping List;
wherein a plurality of input/output (I/O) devices physically distributed across a plurality of logical partitions (LPARs) are accessible without a single point of failure.
US11/461,461 2006-08-01 2006-08-01 System and method for distributing virtual input/output operations across multiple logical partitions Active 2027-03-31 US7546398B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US11/461,461 US7546398B2 (en) 2006-08-01 2006-08-01 System and method for distributing virtual input/output operations across multiple logical partitions
CN2007101373822A CN101118521B (en) 2006-08-01 2007-07-25 System and method for spanning multiple logical sectorization to distributing virtual input-output operation
JP2007199992A JP5039947B2 (en) 2006-08-01 2007-07-31 System and method for distributing virtual input / output operations across multiple logical partitions
US12/478,584 US8024497B2 (en) 2006-08-01 2009-06-04 Distributing virtual input/output operations across multiple logical partitions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/461,461 US7546398B2 (en) 2006-08-01 2006-08-01 System and method for distributing virtual input/output operations across multiple logical partitions

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/478,584 Continuation US8024497B2 (en) 2006-08-01 2009-06-04 Distributing virtual input/output operations across multiple logical partitions

Publications (2)

Publication Number Publication Date
US20080126579A1 US20080126579A1 (en) 2008-05-29
US7546398B2 true US7546398B2 (en) 2009-06-09

Family

ID=39054646

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/461,461 Active 2027-03-31 US7546398B2 (en) 2006-08-01 2006-08-01 System and method for distributing virtual input/output operations across multiple logical partitions
US12/478,584 Active 2027-01-31 US8024497B2 (en) 2006-08-01 2009-06-04 Distributing virtual input/output operations across multiple logical partitions

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/478,584 Active 2027-01-31 US8024497B2 (en) 2006-08-01 2009-06-04 Distributing virtual input/output operations across multiple logical partitions

Country Status (3)

Country Link
US (2) US7546398B2 (en)
JP (1) JP5039947B2 (en)
CN (1) CN101118521B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080162800A1 (en) * 2006-12-13 2008-07-03 Souichi Takashige Computer, Control Method for Virtual Device, and Program Thereof
US20090240849A1 (en) * 2006-08-01 2009-09-24 International Business Machines Corporation System and Method for Distributing Virtual Input/Output Operations Across Multiple Logical Partitions
US20090313391A1 (en) * 2008-06-11 2009-12-17 Hitachi, Ltd. Computer system, device sharing method, and device sharing program
US20100031325A1 (en) * 2006-12-22 2010-02-04 Virtuallogix Sa System for enabling multiple execution environments to share a device
US20100250786A1 (en) * 2006-12-07 2010-09-30 International Business Machines Corporation Migrating Domains from One Physical Data Processing System to Another
US8527699B2 (en) 2011-04-25 2013-09-03 Pivot3, Inc. Method and system for distributed RAID implementation
US8621147B2 (en) 2008-06-06 2013-12-31 Pivot3, Inc. Method and system for distributed RAID implementation
US20140372716A1 (en) * 2013-06-14 2014-12-18 International Business Machines Corporation Parallel mapping of client partition memory to multiple physical adapters
US9086821B2 (en) 2008-06-30 2015-07-21 Pivot3, Inc. Method and system for execution of applications in conjunction with raid

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9317309B2 (en) * 2006-12-28 2016-04-19 Hewlett-Packard Development Company, L.P. Virtualized environment allocation system and method
WO2008147913A2 (en) * 2007-05-22 2008-12-04 Vidsys, Inc. Tracking people and objects using multiple live and recorded surveillance camera video feeds
US8219989B2 (en) 2007-08-02 2012-07-10 International Business Machines Corporation Partition adjunct with non-native device driver for facilitating access to a physical input/output device
US8645974B2 (en) 2007-08-02 2014-02-04 International Business Machines Corporation Multiple partition adjunct instances interfacing multiple logical partitions to a self-virtualizing input/output device
JP4966135B2 (en) * 2007-08-31 2012-07-04 株式会社東芝 Server device, terminal device, communication control method, and communication control program
US8301848B2 (en) * 2008-06-09 2012-10-30 International Business Machines Corporation Virtualizing storage for WPAR clients using node port ID virtualization
US8099522B2 (en) * 2008-06-09 2012-01-17 International Business Machines Corporation Arrangements for I/O control in a virtualized system
US8145871B2 (en) 2008-06-09 2012-03-27 International Business Machines Corporation Dynamic allocation of virtual real memory for applications based on monitored usage
US8365274B2 (en) * 2008-09-11 2013-01-29 International Business Machines Corporation Method for creating multiple virtualized operating system environments
US8055736B2 (en) * 2008-11-03 2011-11-08 International Business Machines Corporation Maintaining storage area network (‘SAN’) access rights during migration of operating systems
JP4743282B2 (en) * 2009-01-26 2011-08-10 横河電機株式会社 Redundant input / output module
US8274881B2 (en) * 2009-05-12 2012-09-25 International Business Machines Corporation Altering access to a fibre channel fabric
US8489797B2 (en) * 2009-09-30 2013-07-16 International Business Machines Corporation Hardware resource arbiter for logical partitions
US8656375B2 (en) 2009-11-02 2014-02-18 International Business Machines Corporation Cross-logical entity accelerators
US8417911B2 (en) 2010-06-23 2013-04-09 International Business Machines Corporation Associating input/output device requests with memory associated with a logical partition
US8635430B2 (en) 2010-06-23 2014-01-21 International Business Machines Corporation Translation of input/output addresses to memory addresses
US8645606B2 (en) 2010-06-23 2014-02-04 International Business Machines Corporation Upbound input/output expansion request and response processing in a PCIe architecture
US9213661B2 (en) 2010-06-23 2015-12-15 International Business Machines Corporation Enable/disable adapters of a computing environment
US8572635B2 (en) 2010-06-23 2013-10-29 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification
US8918573B2 (en) 2010-06-23 2014-12-23 International Business Machines Corporation Input/output (I/O) expansion response processing in a peripheral component interconnect express (PCIe) environment
US8615645B2 (en) 2010-06-23 2013-12-24 International Business Machines Corporation Controlling the selectively setting of operational parameters for an adapter
US8650337B2 (en) 2010-06-23 2014-02-11 International Business Machines Corporation Runtime determination of translation formats for adapter functions
US8671287B2 (en) 2010-06-23 2014-03-11 International Business Machines Corporation Redundant power supply configuration for a data center
US8416834B2 (en) 2010-06-23 2013-04-09 International Business Machines Corporation Spread spectrum wireless communication code for data center environments
US8504754B2 (en) 2010-06-23 2013-08-06 International Business Machines Corporation Identification of types of sources of adapter interruptions
US8645767B2 (en) 2010-06-23 2014-02-04 International Business Machines Corporation Scalable I/O adapter function level error detection, isolation, and reporting
US8566480B2 (en) 2010-06-23 2013-10-22 International Business Machines Corporation Load instruction for communicating with adapters
US8677180B2 (en) 2010-06-23 2014-03-18 International Business Machines Corporation Switch failover control in a multiprocessor computer system
US8650335B2 (en) 2010-06-23 2014-02-11 International Business Machines Corporation Measurement facility for adapter functions
US8626970B2 (en) 2010-06-23 2014-01-07 International Business Machines Corporation Controlling access by a configuration to an adapter function
US8745292B2 (en) 2010-06-23 2014-06-03 International Business Machines Corporation System and method for routing I/O expansion requests and responses in a PCIE architecture
US8615622B2 (en) 2010-06-23 2013-12-24 International Business Machines Corporation Non-standard I/O adapters in a standardized I/O architecture
US8468284B2 (en) 2010-06-23 2013-06-18 International Business Machines Corporation Converting a message signaled interruption into an I/O adapter event notification to a guest operating system
US9342352B2 (en) * 2010-06-23 2016-05-17 International Business Machines Corporation Guest access to address spaces of adapter
US9195623B2 (en) 2010-06-23 2015-11-24 International Business Machines Corporation Multiple address spaces per adapter with address translation
US8621112B2 (en) 2010-06-23 2013-12-31 International Business Machines Corporation Discovery by operating system of information relating to adapter functions accessible to the operating system
US8478922B2 (en) 2010-06-23 2013-07-02 International Business Machines Corporation Controlling a rate at which adapter interruption requests are processed
US8639858B2 (en) 2010-06-23 2014-01-28 International Business Machines Corporation Resizing address spaces concurrent to accessing the address spaces
US8510599B2 (en) 2010-06-23 2013-08-13 International Business Machines Corporation Managing processing associated with hardware events
US8549182B2 (en) 2010-06-23 2013-10-01 International Business Machines Corporation Store/store block instructions for communicating with adapters
US8656228B2 (en) 2010-06-23 2014-02-18 International Business Machines Corporation Memory error isolation and recovery in a multiprocessor computer system
US8683108B2 (en) 2010-06-23 2014-03-25 International Business Machines Corporation Connected input/output hub management
US8505032B2 (en) 2010-06-23 2013-08-06 International Business Machines Corporation Operating system notification of actions to be taken responsive to adapter events
US8607039B2 (en) 2010-08-17 2013-12-10 International Business Machines Corporation Isolation of device namespace to allow duplicate/common names in root volume group workload partitions
US8726274B2 (en) * 2010-09-10 2014-05-13 International Business Machines Corporation Registration and initialization of cluster-aware virtual input/output server nodes
US8495217B2 (en) * 2010-09-30 2013-07-23 International Business Machines Corporation Mechanism for preventing client partition crashes by removing processing resources from the client logical partition when an NPIV server goes down
US9191454B2 (en) * 2011-06-27 2015-11-17 Microsoft Technology Licensing, Llc Host enabled management channel
US8677374B2 (en) 2011-09-14 2014-03-18 International Business Machines Corporation Resource management in a virtualized environment
CN102724057B (en) * 2012-02-23 2017-03-08 北京市计算中心 A kind of distributed levelization autonomous management method towards cloud computing platform
JP5874879B2 (en) * 2012-11-26 2016-03-02 株式会社日立製作所 I / O device control method and virtual computer system
US20150081400A1 (en) * 2013-09-19 2015-03-19 Infosys Limited Watching ARM
JP5954338B2 (en) * 2014-01-14 2016-07-20 横河電機株式会社 Instrumentation system and maintenance method thereof
CN104461951A (en) * 2014-11-19 2015-03-25 浪潮(北京)电子信息产业有限公司 Physical and virtual multipath I/O dynamic management method and system
US9846602B2 (en) * 2016-02-12 2017-12-19 International Business Machines Corporation Migration of a logical partition or virtual machine with inactive input/output hosting server

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5539883A (en) * 1991-10-31 1996-07-23 International Business Machines Corporation Load balancing of network by maintaining in each computer information regarding current load on the computer and load on some other computers in the network
US6279046B1 (en) * 1999-05-19 2001-08-21 International Business Machines Corporation Event-driven communications interface for logically-partitioned computer
US6330615B1 (en) 1998-09-14 2001-12-11 International Business Machines Corporation Method of using address resolution protocol for constructing data frame formats for multiple partitions host network interface communications
US6728832B2 (en) * 1990-02-26 2004-04-27 Hitachi, Ltd. Distribution of I/O requests across multiple disk units
US6738886B1 (en) * 2002-04-12 2004-05-18 Barsa Consulting Group, Llc Method and system for automatically distributing memory in a partitioned system to improve overall performance
US20050044228A1 (en) * 2003-08-21 2005-02-24 International Business Machines Corporation Methods, systems, and media to expand resources available to a logical partition
US6963915B2 (en) * 1998-03-13 2005-11-08 Massachussetts Institute Of Technology Method and apparatus for distributing requests among a plurality of resources
US20060010031A1 (en) 2001-04-25 2006-01-12 Tatsuo Higuchi System and method for computer resource marketing
US7051188B1 (en) * 1999-09-28 2006-05-23 International Business Machines Corporation Dynamically redistributing shareable resources of a computing environment to manage the workload of that environment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0193830A (en) * 1987-10-05 1989-04-12 Nec Corp System for controlling interruption in virtual computer system
US5717919A (en) * 1995-10-02 1998-02-10 Sybase, Inc. Database system with methods for appending data records by partitioning an object into multiple page chains
US6314501B1 (en) * 1998-07-23 2001-11-06 Unisys Corporation Computer system and method for operating multiple operating systems in different partitions of the computer system and for allowing the different partitions to communicate with one another through shared memory
US6807579B1 (en) * 2003-05-12 2004-10-19 International Business Machines Corporation Method, system and program products for assigning an address identifier to a partition of a computing environment
US7506343B2 (en) * 2004-08-19 2009-03-17 International Business Machines Corporation System and method for passing information from one device driver to another
US7546398B2 (en) * 2006-08-01 2009-06-09 International Business Machines Corporation System and method for distributing virtual input/output operations across multiple logical partitions

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6728832B2 (en) * 1990-02-26 2004-04-27 Hitachi, Ltd. Distribution of I/O requests across multiple disk units
US5539883A (en) * 1991-10-31 1996-07-23 International Business Machines Corporation Load balancing of network by maintaining in each computer information regarding current load on the computer and load on some other computers in the network
US6963915B2 (en) * 1998-03-13 2005-11-08 Massachussetts Institute Of Technology Method and apparatus for distributing requests among a plurality of resources
US6330615B1 (en) 1998-09-14 2001-12-11 International Business Machines Corporation Method of using address resolution protocol for constructing data frame formats for multiple partitions host network interface communications
US6279046B1 (en) * 1999-05-19 2001-08-21 International Business Machines Corporation Event-driven communications interface for logically-partitioned computer
US7051188B1 (en) * 1999-09-28 2006-05-23 International Business Machines Corporation Dynamically redistributing shareable resources of a computing environment to manage the workload of that environment
US20060010031A1 (en) 2001-04-25 2006-01-12 Tatsuo Higuchi System and method for computer resource marketing
US6738886B1 (en) * 2002-04-12 2004-05-18 Barsa Consulting Group, Llc Method and system for automatically distributing memory in a partitioned system to improve overall performance
US20050044228A1 (en) * 2003-08-21 2005-02-24 International Business Machines Corporation Methods, systems, and media to expand resources available to a logical partition

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090240849A1 (en) * 2006-08-01 2009-09-24 International Business Machines Corporation System and Method for Distributing Virtual Input/Output Operations Across Multiple Logical Partitions
US8024497B2 (en) * 2006-08-01 2011-09-20 International Business Machines Corporation Distributing virtual input/output operations across multiple logical partitions
US7890665B2 (en) 2006-12-07 2011-02-15 International Business Machines Corporation Migrating domains from one physical data processing system to another
US8239583B2 (en) 2006-12-07 2012-08-07 International Business Machines Corporation Migrating domains from one physical data processing system to another
US20100250786A1 (en) * 2006-12-07 2010-09-30 International Business Machines Corporation Migrating Domains from One Physical Data Processing System to Another
US8291425B2 (en) * 2006-12-13 2012-10-16 Hitachi, Ltd. Computer, control method for virtual device, and program thereof
US20080162800A1 (en) * 2006-12-13 2008-07-03 Souichi Takashige Computer, Control Method for Virtual Device, and Program Thereof
US20100031325A1 (en) * 2006-12-22 2010-02-04 Virtuallogix Sa System for enabling multiple execution environments to share a device
US8996864B2 (en) * 2006-12-22 2015-03-31 Virtuallogix Sa System for enabling multiple execution environments to share a device
US9465560B2 (en) 2008-06-06 2016-10-11 Pivot3, Inc. Method and system for data migration in a distributed RAID implementation
US9535632B2 (en) 2008-06-06 2017-01-03 Pivot3, Inc. Method and system for distributed raid implementation
US9146695B2 (en) 2008-06-06 2015-09-29 Pivot3, Inc. Method and system for distributed RAID implementation
US8621147B2 (en) 2008-06-06 2013-12-31 Pivot3, Inc. Method and system for distributed RAID implementation
US8375148B2 (en) 2008-06-11 2013-02-12 Hitachi, Ltd. Computer system, device sharing method, and device sharing program
US8156253B2 (en) * 2008-06-11 2012-04-10 Hitachi, Ltd. Computer system, device sharing method, and device sharing program
US20090313391A1 (en) * 2008-06-11 2009-12-17 Hitachi, Ltd. Computer system, device sharing method, and device sharing program
US9086821B2 (en) 2008-06-30 2015-07-21 Pivot3, Inc. Method and system for execution of applications in conjunction with raid
US8527699B2 (en) 2011-04-25 2013-09-03 Pivot3, Inc. Method and system for distributed RAID implementation
US20140372716A1 (en) * 2013-06-14 2014-12-18 International Business Machines Corporation Parallel mapping of client partition memory to multiple physical adapters
US9870242B2 (en) 2013-06-14 2018-01-16 International Business Machines Corporation Parallel mapping of client partition memory to multiple physical adapters
US9875125B2 (en) * 2013-06-14 2018-01-23 International Business Machines Corporation Parallel mapping of client partition memory to multiple physical adapters
US10169062B2 (en) 2013-06-14 2019-01-01 International Business Machines Corporation Parallel mapping of client partition memory to multiple physical adapters
US10210007B2 (en) 2013-06-14 2019-02-19 International Business Machines Corporation Parallel mapping of client partition memory to multiple physical adapters

Also Published As

Publication number Publication date
US20080126579A1 (en) 2008-05-29
US8024497B2 (en) 2011-09-20
US20090240849A1 (en) 2009-09-24
CN101118521B (en) 2010-06-02
JP2008041093A (en) 2008-02-21
CN101118521A (en) 2008-02-06
JP5039947B2 (en) 2012-10-03

Similar Documents

Publication Publication Date Title
US7546398B2 (en) System and method for distributing virtual input/output operations across multiple logical partitions
US8595364B2 (en) System and method for automatic storage load balancing in virtual server environments
US10701141B2 (en) Managing software licenses in a disaggregated environment
US11275622B2 (en) Utilizing accelerators to accelerate data analytic workloads in disaggregated systems
US8909698B2 (en) Grid-enabled, service-oriented architecture for enabling high-speed computing applications
US8327372B1 (en) Virtualization and server imaging system for allocation of computer hardware and software
US7530071B2 (en) Facilitating access to input/output resources via an I/O partition shared by multiple consumer partitions
US8140817B2 (en) Dynamic logical partition management for NUMA machines and clusters
US7984251B2 (en) Autonomic storage provisioning to enhance storage virtualization infrastructure availability
US20170293994A1 (en) Dynamically provisioning and scaling graphic processing units for data analytic workloads in a hardware cloud
US20010044817A1 (en) Computer system and a method for controlling a computer system
US10908940B1 (en) Dynamically managed virtual server system
CN104937584A (en) Providing optimized quality of service to prioritized virtual machines and applications based on quality of shared resources
US20080080544A1 (en) Method for dynamically allocating network adapters to communication channels for a multi-partition computer system
US11561824B2 (en) Embedded persistent queue
KR20040075307A (en) System and method for policy quorum grid resource management
US10353585B2 (en) Methods for managing array LUNs in a storage network with a multi-path configuration and devices thereof
US20240036935A1 (en) Lcs sdxi resource ownership system
US20190155635A1 (en) High availability system for providing in-memory based virtualization service
Salapura et al. Availability Considerations for Mission Critical Applications in the Cloud.
KR20230067755A (en) Memory management device for virtual machine

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CORNELI, KARYN T.;DAWSON, CHRISTOPHER J.;HAMILTON, II, RICK A.;AND OTHERS;REEL/FRAME:018037/0752;SIGNING DATES FROM 20060707 TO 20060714

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

REMI Maintenance fee reminder mailed
FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12

AS Assignment

Owner name: KYNDRYL, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:057885/0644

Effective date: 20210930