WO2007036739A2 - A system and method for sharing computer resources - Google Patents

A system and method for sharing computer resources

Info

Publication number
WO2007036739A2
Authority
WO
WIPO (PCT)
Prior art keywords
computer resources
network
sharing
computer
resources according
Prior art date
Application number
PCT/GB2006/003634
Other languages
French (fr)
Other versions
WO2007036739A3 (en)
Inventor
Robert Houghton
Original Assignee
Robert Houghton
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Houghton filed Critical Robert Houghton
Publication of WO2007036739A2
Publication of WO2007036739A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00: Indexing scheme relating to G06F9/00
    • G06F2209/50: Indexing scheme relating to G06F9/50
    • G06F2209/5014: Reservation

Definitions

  • This invention relates to a system and method for sharing computer resources, particularly, though not exclusively, to a system and method for sharing computer processing resources between a number of users.
  • SAN Storage Area Network
  • a control SAN switch which is normally connected to a number of switches which control either individual or multiple racks of servers and other devices, or server clusters.
  • LAN Local Area Network
  • vLAN virtual LAN
  • Internet any other network, including the wider Internet.
  • the overall network configuration is usually static.
  • Such computer resources, as well as the data storage devices, are generally dedicated to the particular user to which they are assigned.
  • the present invention therefore seeks to provide a system and method for sharing computer resources, which overcomes, or at least reduces, the above-mentioned problems of the prior art.
  • the invention provides a system of sharing a computer resource between a plurality of networked user devices, the system comprising at least one shared computer resource available on a network, a plurality of user devices connectable to the network and a time management controller comprising a scheduler coupled to the network for receiving task requests from the user devices requiring access to computer resources at particular times, the scheduler determining whether a particular task request can be met and, if so, storing the particular times and particular computer resources required, a controller coupled to the scheduler, a data storage device, and the network for controlling the shared computer resource, the controller controlling the at least one shared computer resource, if required by a particular user device at a particular time, to image the configuration of the shared computer resource and its network environment as used by a previous user device prior to the particular time and to store the image in the data storage device, the controller further determining whether the particular user device has a previously stored image in the data storage device and, if so, to configure the shared computer resource and the network environment to that previously stored image so that the network environment and the computer resource is available to the particular user device at the particular time in the particular configuration whose image was previously stored.
  • a different computer resource can be used, as long as the associated storage devices can be networked to be presentable to the computing resource.
  • This resource for each session can be at the same data centre, or storage-based replication methods can be used to make the storage devices available at different data centres in real time.
  • At least one shared computer resource preferably comprises a computer having at least one logical drive, which may comprise at least one virtual drive provided by a data storage device at a location remote from the shared computer resource.
  • the plurality of user devices may include at least one device which executes automatic regular operations and/or at least one device which is operated manually by a user.
  • the scheduler preferably includes a computer resource manager for determining which computer resources on the network will be required to perform a particular task request, determining whether those computer resources are available at the requested times, and sending a confirmation to the user device that sent the request if the particular task request can be met.
  • if the controller determines that no previously stored image exists for a particular user device, then the controller configures the network environment and the shared computer resource to a predetermined configuration.
  • the predetermined configuration may be a default configuration, may be determined by the controller depending on data within the task request, or may be obtained by the controller from another location on the network.
  • the image of the network environment and the shared computer device includes information regarding the identity of the user device and/or information regarding logon details of the user device.
  • the invention provides a method for sharing a computer resource between a plurality of networked user devices, the method comprising the steps of receiving a task request from a user device connected to a network requiring access to a shared computer resource on the network at a particular time, determining whether a particular task request can be met and, if so, storing the particular time and particular computer resource required, imaging the configuration of the shared computer resource and its network environment as used by a previous user device prior to the particular time, storing the image, determining whether the particular user device has a previously stored image and, if so, configuring the shared computer resource and the network environment to that previously stored image so that the network environment and the computer resource is available to the particular user device at the particular time in the particular configuration whose image was previously stored.
  • the method further comprises the steps of determining which computer resources on the network will be required to perform a particular task request and determining whether those computer resources are available at the requested time and sending a confirmation to the user device that sent the request if the particular task request can be met.
  • the method further comprises the step of configuring the network environment and the shared computer resource to a predetermined configuration.
  • the predetermined configuration may be a default configuration, or may be determined by the controller depending on data within the task request, or may be obtained by the controller from another location on the network.
  • FIG. 1 shows a schematic block diagram of a system according to one embodiment of the present invention
  • FIG. 2 shows a schematic flow diagram of a process flow of the system of FIG. 1; and FIGS. 3A-C show a more detailed flow chart of the process of FIG. 2.
  • FIG. 1 shows a first embodiment of the present invention for sharing a set of computer resources between users in different timezones.
  • One particular application is the field of information technology education. It is common to conduct classes in classrooms with the teaching hardware in situ. For organisations with multiple educational delivery venues across multiple geographies, the replication of the same type of hardware for access and delivery purposes is both costly and inefficient as each class will run for a maximum 8 hour period in any 24 hour period.
  • the need for hardware in each class can be removed as the teaching hardware can be placed in one central location or dispersed across a small number of locations. Access to the hardware by students is facilitated via standard internet protocol (IP).
  • IP internet protocol
  • the invention allows the equipment to be used in more than one timezone in any given 24 hour period.
  • the computer resource 1 to be shared is a storage device 2, which is controlled by server 3 via switch 4.
  • the storage device 2 (and the server 3 and switch 4) are connected to a SAN (not shown) to which the users are also connected.
  • a European User Device 5 may require the computer resources 1 for a class that may start in Europe at 0900 GMT and finish at 1700 GMT.
  • an American User Device 6 may require the computer resources 1 for an American class starting at 1730 GMT and finishing at 0130 GMT the following day and an Asia-Pacific User Device (not shown) may require the same equipment for an Asia-Pacific class starting at 0200 GMT and finishing at 0830 GMT, after which the next European class may start again at 0900 GMT.
  • the User Devices 5 and 6 send Task Requests 7 and 8, respectively, at any time prior to the required time, to a scheduler 9.
  • the task requests 7, 8 may specify the particular computer resources required, together with their configuration, or, in some implementations, may simply provide a description of the task(s) to be performed and allow the scheduler 9 to determine what resources and configurations are needed.
  • the scheduler 9 determines whether the required equipment is available, and, if so, allocates 10 the required equipment, as well as the data centre and replication, to that task for the required date and time.
  • the scheduler may first provide an indication of available equipment to the user to allow the user to choose which equipment the user would like to use.
  • the scheduler provides confirmations 11, 12 of this to the respective user (as shown in FIG. 1).
  • the scheduler 9 then also saves 16 each task request 13, 14, respectively, to a data storage device 15 together with details of the equipment allocated and other required information.
  • the stored task request, together with the equipment allocation and other information is read from the data storage device 15 by a controller 16.
  • the saved European task request 17 and equipment allocation is provided to the controller prior to the start time of 0900 GMT.
  • the controller 16 then controls the required equipment 1 to configure it to the desired configuration 18 required for the European user 5.
  • this may be an existing configuration that was previously stored 19 in the data storage device, or may be a new configuration obtained 20 from a device on the network or otherwise specified.
  • the required European configuration is obtained by the controller 16, and used to configure the equipment 1 to the European configuration 18.
  • this (re)configuration of the equipment 1 includes reconfiguration 21 of the switch 4 (as well as any other switches that may be required to provide the appropriate connections and zones for the European configuration) and presentation of the storage devices with host environment and data.
  • the required equipment is then rebooted 22 with the required operating system and other software and the new configuration is uploaded 23 to server 3 (and other required servers) with the appropriate European login credentials so that the equipment is ready to use 24 for the European class.
  • the configuration of the equipment for this class at this time is then saved 25 to the control data storage device 15 within the SAN.
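The changeover sequence just described (steps 21 to 25: reconfigure the switch, reboot with the required operating system, upload credentials, then save the configuration) is essentially an ordered pipeline. The sketch below only illustrates that ordering; the step functions and the dict-based stand-in for the equipment are invented for the example, not taken from the patent:

```python
def changeover(resource, config, saved_configs):
    """Apply the per-session reconfiguration steps in order (cf. steps 21-25).
    `resource` is a plain dict standing in for the shared equipment."""
    steps = [
        lambda: resource.update(zones=config["zones"]),      # 21: switch/zoning
        lambda: resource.update(os=config["os"]),            # 22: reboot with required OS
        lambda: resource.update(logins=config["logins"]),    # 23: upload login credentials
        lambda: saved_configs.append(dict(resource)),        # 25: save the configuration
    ]
    for step in steps:
        step()
    return resource
```

Running the pipeline for a hypothetical European session leaves the equipment configured and a snapshot of that configuration in the saved list, ready to be reloaded for the next European class.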
  • the next saved, American task request 26, together with the equipment allocation and other information is read from the data storage device 15 by controller 16.
  • the controller 16 controls the required equipment 1 to configure it to the desired configuration 27 required for the American user 6.
  • the configuration of the equipment for this class is saved 28 to the control data storage device 15.
  • a third, Asia-Pacific configuration 29 is presented to the same equipment in preparation for an Asia-Pacific class starting at 0200 GMT and finishing at 0830 GMT.
  • This configuration is then saved 30 to the control data storage device 15 and the original European configuration 31 as saved 25 at 1700 GMT the previous day is reloaded, enabling the European students to continue their class from the same point where they left it the previous day. Again, the European configuration is stored 32 at the end of their class. Consequently, the teaching hardware has supported 3 classes with full utilisation.
  • FIGS 3A - 3C show a flow diagram of the operation of the system, with the flow moving from block A in FIG. 3A to block A in FIG. 3B, from block B in FIG. 3B to block B in FIG. 3C and from block A in FIG. 3C to block A in FIG. 3B.
  • the process starts when a client logs-in 40 to the scheduler on the system.
  • the scheduler checks 41 that the log-in is correct and, if not, deals with the client 42 either on the basis of a lost or forgotten log-in, or as a new client to generate a new account and get the terms and conditions of business accepted.
  • In step 43, the client, as described above, makes a task request or modifies a previously made task request.
  • the client is, at this time, requested to provide full details of the equipment needed for this task, as well as the dates and times that it is requested for. It will be apparent that, if the task is a recurring one, then the equipment details need only be provided once and the date and times can be entered as recurring.
  • the particular equipment requirements may be stored as a particular client task requirement that the client can access each time a fresh request is made to save time, with only the date and time needing to be entered. In any event, the scheduler ensures that it has sufficient information to determine what equipment configuration is required.
  • If the request is a modification to an existing task request, this is processed in step 44. If it is a new task request, then the scheduler generates a new unique schedule ID for the task and the process moves on to step 45, where the scheduler checks that the request is valid against the contract that the client has. If not, the contract needs to be updated to cover the new request. This occurs through steps 46 and 47, where the contract is, firstly, updated, and then agreement with the client for the updated contract is sought.
  • If the client agrees, the process moves on to step 48, but if not, then the task request is not allowed and the process moves back to step 43 to await a new task request.
  • the process moves on to step 48, where the scheduler checks whether the required equipment is available at the required date and time. If it is not available, then the scheduler determines the dates and times that the required equipment is available and provides 49 those options to the client. The client then decides 50 whether to proceed with an alternative date and/or time. If so, the process moves back to step 44, but if not, then the process moves back to step 43 to await a new task request.
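Steps 48 to 50, checking availability and offering alternative dates and times when the requested slot is taken, might look like the following in outline. The hour-by-hour forward scan, the horizon, and the function name are illustrative assumptions, not taken from the patent:

```python
def find_slot(bookings, start, end, horizon=48):
    """Return (True, slot) if the requested (start, end) slot is free,
    otherwise (False, alternative) found by scanning forward hour by hour,
    or (False, None) if nothing is free within `horizon` hours."""
    def is_free(s, e):
        # Free if the candidate interval overlaps no existing booking.
        return all(not (s < booked_end and booked_start < e)
                   for booked_start, booked_end in bookings)
    if is_free(start, end):
        return True, (start, end)
    length = end - start
    for s in range(start + 1, start + horizon):
        if is_free(s, s + length):
            return False, (s, s + length)
    return False, None
```

With a single existing booking from 0900 to 1700, a request for 1000 to 1200 is refused and the first free two-hour slot starting at 1700 is offered instead.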
  • In step 51, the scheduler processes the required date and time and sets (stores) particular time alarms based on the required configuration and the time necessary to (re)configure the equipment so as to be ready for the requested task.
  • the scheduler checks 52 whether the required configuration will require reconfiguration from that in use prior to the requested task and, if not, sets the alarm time accordingly. If the required configuration will require reconfiguration from that in use prior to the requested task, then an alarm time is set for a time approximately 4 hours before the requested task start time. At this time, as shown in step 53, the controller 16 receives the alarm and the required configuration and checks whether the required configuration is a brand new configuration, or one that had previously been used.
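The alarm set in step 52 simply leads the task start by a margin of roughly four hours when the equipment must be reconfigured, and by a short lead otherwise. A minimal sketch (the ten-minute short lead borrows the figure from step 59; both lead values are illustrative defaults, not fixed by the patent):

```python
from datetime import datetime, timedelta

def alarm_time(task_start, needs_reconfiguration,
               reconfig_lead=timedelta(hours=4),
               short_lead=timedelta(minutes=10)):
    """Time at which the controller should be woken ahead of a task start."""
    return task_start - (reconfig_lead if needs_reconfiguration else short_lead)
```

For a European class starting at 0900 GMT that needs reconfiguration, the controller would be woken at 0500 GMT.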
  • the controller creates 54 new boot discs either from an image of the required configuration, which may be downloaded from elsewhere on the network, or by creating the boot disc from scratch by building up the required hardware and software to be installed. Once the new boot discs have been created, a copy of the required configuration is made 55 and the process moves on to step 59.
  • If, in step 53, it is determined that the configuration is not new, then a determination is made 56 whether the appropriate boot discs are in on-line storage. This determination can be made about 1 hour before the start time. If it is found that the required boot discs are not in on-line storage, then boot and data discs need to be created 57 from the required configuration, which may be mapped through the SAN infrastructure. If the client requires replication and/or datacentre options, then the storage is also replicated. The created discs are then restored 58 from near-line storage to the appropriate on-line storage for use and the process moves on to step 59.
  • If, in step 56, it is determined that the required boot discs are available in on-line storage, then the process moves straight on to step 59, in which, about 10 minutes before the required start time, the controller allocates the required datacentre and hardware for the required configuration.
  • the reconfiguration of the system involves making 60 all the hardware in the system visible in the infrastructure, and then creating vLAN modifications, as required, and modifying DHCP entries where required. This step is likely to take no more than about two minutes. Appropriate zones are then created 61 and applied to the current environment, and the complete infrastructure, as at the current session, is copied 63 and the schedule and session IDs are updated. Each of steps 62 and 63 is also not expected to take more than about two minutes.
  • In step 64, the boot discs from on-line storage are presented to all nodes and the servers are re-booted from the requested restore or reconfigure position. Permissions and log-ins for clients are then set up 65 from the required task request and client access and host log-ins are enabled 66. Once the client has logged-in and is working as required 67, the controller checks 68 whether the client has requested a restore. If so, the controller presents 69 the client with a list of saved restore positions and the client chooses the required one. The controller then shuts down 70 the server and logically disconnects the SAN, as well as stopping any replication, before reverting to step 64 to re-boot the system.
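The restore branch (steps 68 to 70) amounts to: offer the saved restore positions, let the client choose one, tear the session down, and hand the chosen position back to the re-boot step 64. In sketch form, with all names invented for the illustration:

```python
def handle_restore(saved_positions, chosen_index):
    """Steps 68-70: choose a saved restore position, then shut down and
    logically disconnect before re-booting from it (back to step 64)."""
    if not saved_positions:
        return None  # nothing to restore; the session continues as-is
    return {
        "reboot_from": saved_positions[chosen_index],      # step 69: client's choice
        "teardown": ["shutdown_server", "disconnect_san",
                     "stop_replication"],                  # step 70: ordered teardown
    }
```

A client with two saved positions who picks the second gets that position back, together with the teardown actions to run before the re-boot.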
  • the controller checks 71 whether the user has requested to finish the session early, or, if not, whether the scheduled finish time for the session has been reached. If not, the process reverts to step 67, allowing the client to continue working normally. If, however, the end of the session is reached in step 71, the controller logs-out 72 all the client sessions and checks 73 to make sure that this is the end of the client's entire request (the client may, for example, have several concurrent sessions booked in the same request). If the end of the client's booking request has not been reached, then the controller checks 74 with the scheduler whether the client has booked another session to start within a predetermined period of time, for example X days.
  • If not, then all of the data is backed up 75 from the storage devices used by the client to offline or near-line storage and the process moves on to step 76. If the client has booked another session to start within a predetermined period of time, then the process moves straight to step 76, where all the storage devices are logically disconnected from the network and any replication is stopped and then the complete infrastructure, as at the current session, is copied 77 and the schedule and session IDs are updated. The process then moves back to step 51 to process the schedule to set the next alarm.
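The end-of-session branch (steps 73 to 77) only demotes the client's data to off-line or near-line storage when no further session is booked within the predetermined period; the disconnect-and-copy steps always run. A sketch, with the predetermined period expressed as `x_days` and all action names invented for the example:

```python
from datetime import timedelta

def end_of_session_actions(session_end, next_session_start, x_days=7):
    """Return the ordered actions for steps 73-77 of the described flow."""
    no_followup = (next_session_start is None
                   or next_session_start - session_end > timedelta(days=x_days))
    actions = []
    if no_followup:
        actions.append("backup_to_nearline")                 # step 75
    actions += ["disconnect_storage_and_stop_replication",   # step 76
                "copy_infrastructure_and_update_ids"]        # step 77
    return actions
```

A class that resumes the next morning skips the near-line backup, while a client with no further booking has their data moved off the on-line storage first.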
  • If, in step 73, it is determined that the end of the client's booking request has been reached, then the schedule ID is marked as expired and account and usage information is updated 78 before the process reverts back to step 51. It can thus be seen that the above described embodiment of the invention utilises redundant capacity of the system to provide re-configured servers and data storage devices whilst at the same time saving the original configuration on a data storage device within the main network environment. The original configuration can be retrieved at a later designated time and date. The above described embodiment of the invention enables the re-configured servers and data storage devices to be used for other tasks in time which would otherwise be designated as being redundant.
  • the above described embodiment of the invention incorporates a time management system or 'scheduler' that enables users to book allotted times for access to the network infrastructure.
  • the scheduler will highlight equipment availability and indicate to the user the designated equipment resource required to complete the task or allow the user to select equipment resource of their choosing.
  • the management software will log the event and store the data within the control network infrastructure.
  • the management software will unpresent the current image, equipment configuration, zoning parameters, vLAN protocols, login credentials, data and usage details of the equipment allocated to the new task, saving them to a control storage device within the network.
  • the replacement image (which may be one that has been used previously, or a newly created image), including required software and utilities, zoning configuration, vLAN protocols and login credentials, is then presented from the control storage device within the network via a LUN to the allocated equipment.
  • the management software will reboot the servers as part of the process in accepting the LUN and consequently the new configuration. In the case of a SAN, it is the SAN itself which is providing the means to reboot the designated equipment. Once the new image and configuration has been accepted, the servers are ready for use. At the same time, the management software will reconfigure the main network switch together with the relevant local or rack switches in order to accommodate the new zoning configuration.
  • the management software will also control the creation and allocation of images and passwords by 'managing' existing, established and readily available software within the operating system, which may be any of, for example, Microsoft Windows, HP-UX (Unix) or Linux operating systems.
  • the above described embodiment of the invention thus provides increased efficiency of usage on a device by device basis within a networked environment.
  • the described embodiment facilitates the opportunity to maximise device usage with a decrease in overall hardware expenditure over time. The user therefore does not experience any perceptible change to their allocated equipment as long as their own configuration has been saved from their previous session or period of activity.

Abstract

A scheduler (9) enables users (5, 6) to book allotted times for access to computer resources (1) on a network infrastructure. Depending on the nature of the requested task, the scheduler (9) will highlight equipment availability and indicate to the user (5, 6) the designated equipment resource required to complete the task or allow the user to select equipment resource of their choosing. Prior to the appointed time, the scheduler passes on the task request to a controller (16) which configures the computer resources (1) and the network environment to meet the user requirements. At the end of the session, the configuration is imaged and stored to be re-used at the start of the next session so that the user does not experience any perceptible change to their allocated equipment.

Description

A System and Method for Sharing Computer Resources
Field of the Invention
This invention relates to a system and method for sharing computer resources, particularly, though not exclusively, to a system and method for sharing computer processing resources between a number of users.
Background of the Invention In traditional networking environments, it is typical to have a number of servers linked to switches and data storage devices, with a number of users being connected to the network. In general, each user will have his own computer or terminal, which will be configured for the particular needs and requirements of that user. Thus, the user's configuration will provide a particular operating environment, with particular network resources available to that user, whilst another user may have a different operating environment, with a different set of network resources available. The various network resources are made available to particular users by switches within the network environment that are commonly controlled by a network administrator utilising a software switch management package which will also allow the creation and configuration of zones of equipment, access to which is controlled by the relevant switch settings.
One type of network resource commonly made available to users are various data storage facilities, whereby a particular user may have available to him a number of different data storage devices, which may be shown on that user's computer as different drives, with many of the drives being virtual in the sense that they are not physically present in that computer but are remotely located, although the user's computer shows all the available drives in the same way, whether they are physically present in that machine or virtual. A network may have a large number of data storage facilities available, some of which may be co-located in large data storage facilities and some of which may be located in other places. In each case, a data storage facility may include one or more data storage devices, each of which may be divided into a large number of logically separate drives, which may be made available to different users. Thus, there is commonly provided a Storage Area Network (SAN), for example, under the control of a control SAN switch, which is normally connected to a number of switches which control either individual or multiple racks of servers and other devices, or server clusters.
Similarly, other computer resources may be available to a user via a Local Area Network (LAN), a virtual LAN (vLAN), or any other network, including the wider Internet. The overall network configuration is usually static. Such computer resources, as well as the data storage devices, are generally dedicated to the particular user to which they are assigned. Furthermore, within the network, it is common and established practice to have a number of zones which contain fixed pools of equipment which are designed to perform specific tasks, for example e-mail servers and attached data storage devices.
Typically, there can be high rates of redundancy in server and data storage usage within fixed network configurations where usage may only be required for a few hours in any 24 hour period due to the nature and demand of the task that has been assigned to a particular zone within the network. This is especially prevalent, for example, when data back-up routines are implemented on a fixed-time schedule. The data back-up routine may run for 5 hours, but the servers and associated data storage devices will then be redundant for 19 hours. Consequently, the return on investment can be relatively low depending on the rates of server redundancy within the network.
Brief Summary of the Invention
The present invention therefore seeks to provide a system and method for sharing computer resources, which overcomes, or at least reduces, the above-mentioned problems of the prior art.
Accordingly, in a first aspect, the invention provides a system of sharing a computer resource between a plurality of networked user devices, the system comprising at least one shared computer resource available on a network, a plurality of user devices connectable to the network and a time management controller comprising a scheduler coupled to the network for receiving task requests from the user devices requiring access to computer resources at particular times, the scheduler determining whether a particular task request can be met and, if so, storing the particular times and particular computer resources required, a controller coupled to the scheduler, a data storage device, and the network for controlling the shared computer resource, the controller controlling the at least one shared computer resource, if required by a particular user device at a particular time, to image the configuration of the shared computer resource and its network environment as used by a previous user device prior to the particular time and to store the image in the data storage device, the controller further determining whether the particular user device has a previously stored image in the data storage device and, if so, to configure the shared computer resource and the network environment to that previously stored image so that the network environment and the computer resource is available to the particular user device at the particular time in the particular configuration whose image was previously stored.
For each session of use a different computer resource can be used, as long as the associated storage devices can be networked to be presentable to the computing resource. This resource for each session can be at the same data centre, or storage-based replication methods can be used to make the storage devices available at different data centres in real time.
At least one shared computer resource preferably comprises a computer having at least one logical drive, which may comprise at least one virtual drive provided by a data storage device at a location remote from the shared computer resource. The plurality of user devices may include at least one device which executes automatic regular operations and/or at least one device which is operated manually by a user.
In one embodiment, the scheduler preferably includes a computer resource manager for determining which computer resources on the network will be required to perform a particular task request, determining whether those computer resources are available at the requested times, and sending a confirmation to the user device that sent the request if the particular task request can be met.
Preferably, if the controller determines that no previously stored image exists for a particular user device, then the controller configures the network environment and the shared computer resource to a predetermined configuration. The predetermined configuration may be a default configuration, may be determined by the controller depending on data within the task request, or may be obtained by the controller from another location on the network.
Preferably, the image of the network environment and the shared computer device includes information regarding the identity of the user device and/or information regarding logon details of the user device. According to a second aspect, the invention provides a method for sharing a computer resource between a plurality of networked user devices, the method comprising the steps of receiving a task request from a user device connected to a network requiring access to a shared computer resource on the network at a particular time, determining whether a particular task request can be met and, if so, storing the particular time and particular computer resource required, imaging the configuration of the shared computer resource and its network environment as used by a previous user device prior to the particular time, storing the image, determining whether the particular user device has a previously stored image and, if so, configuring the shared computer resource and the network environment to that previously stored image so that the network environment and the computer resource is available to the particular user device at the particular time in the particular configuration whose image was previously stored.
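The method of this second aspect can be sketched in outline. The following Python is a minimal illustration under simplifying assumptions (configurations as plain dicts, times as numbers); every class and method name is invented for the example, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class TimeManagementController:
    """Illustrative sketch of the claimed method: accept task requests,
    image the outgoing user's configuration, restore the incoming user's."""
    bookings: dict = field(default_factory=dict)   # (start, end) -> user
    images: dict = field(default_factory=dict)     # user -> stored configuration image
    current: dict = field(default_factory=dict)    # live resource/network configuration

    def request(self, user, start, end):
        """Determine whether a task request can be met; if so, store the
        particular time and resource booking."""
        for (s, e) in self.bookings:
            if start < e and s < end:   # clashes with an existing booking
                return False
        self.bookings[(start, end)] = user
        return True

    def begin_session(self, user, default_config):
        """Image the previous configuration, then restore this user's stored
        image if one exists, falling back to a predetermined configuration."""
        previous = self.current.get("user")
        if previous is not None:
            self.images[previous] = dict(self.current)  # store the outgoing image
        self.current = self.images.get(user, dict(default_config))
        self.current["user"] = user
        return self.current
```

A user who returns after another user's session gets back exactly the configuration imaged at their previous handover, which is the continuity property the claim describes.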
Preferably, the method further comprises the steps of determining which computer resources on the network will be required to perform a particular task request and determining whether those computer resources are available at the requested time and sending a confirmation to the user device that sent the request if the particular task request can be met.
In one embodiment, if it is determined that no previously stored image exists for a particular user device, then the method further comprises the step of configuring the network environment and the shared computer resource to a predetermined configuration. The predetermined configuration may be a default configuration, may be determined by the controller depending on data within the task request, or may be obtained by the controller from another location on the network.
Brief Description of the Drawings
One embodiment of the invention will now be more fully described, by way of example, with reference to the drawings, of which:
FIG. 1 shows a schematic block diagram of a system according to one embodiment of the present invention;
FIG. 2 shows a schematic flow diagram of a process flow of the system of FIG. 1; and
FIGS. 3A-3C show a more detailed flow chart of the process of FIG. 2.
Detailed Description of the Drawings
FIG. 1 shows a first embodiment of the present invention for sharing a set of computer resources between users in different timezones. One particular application is the field of information technology education. It is common to conduct classes in classrooms with the teaching hardware in situ. For organisations with multiple educational delivery venues across multiple geographies, the replication of the same type of hardware for access and delivery purposes is both costly and inefficient as each class will run for a maximum of 8 hours in any 24 hour period.
By utilising a SAN-based or LAN-based storage presentation embodiment of the present invention, the need for hardware in each class can be removed as the teaching hardware can be placed in one central location or dispersed across a small number of locations. Access to the hardware by students is facilitated via standard internet protocol (IP). The invention allows the equipment to be used in more than one timezone in any given 24 hour period. As can be seen in FIG. 1, the computer resource 1 to be shared is a storage device 2, which is controlled by server 3 via switch 4. The storage device 2 (and the server 3 and switch 4) are connected to a SAN (not shown) to which the users are also connected. In the example shown, a European User Device 5 may require the computer resources 1 for a class that may start in Europe at 0900 GMT and finish at 1700 GMT. Similarly, an American User Device 6 may require the computer resources 1 for an American class starting at 1730 GMT and finishing at 0130 GMT the following day and an Asia-Pacific User Device (not shown) may require the same equipment for an Asia-Pacific class starting at 0200 GMT and finishing at 0830 GMT, after which the next European class may start again at 0900 GMT.
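By way of illustration only, the non-overlapping GMT windows in the example above can be checked programmatically. The following Python sketch is an editorial illustration and not part of the specification; all names and data structures are assumptions.

```python
# Illustrative check that the three class windows from the example above
# never overlap, so a single set of equipment can serve all three.

def overlaps(a, b):
    """Return True if two (start, end) minute windows on a 24 hour clock
    overlap. Windows that wrap past midnight are split into two segments."""
    def segments(w):
        start, end = w
        if start <= end:
            return [(start, end)]
        return [(start, 24 * 60), (0, end)]  # wraps midnight
    return any(s1 < e2 and s2 < e1
               for (s1, e1) in segments(a)
               for (s2, e2) in segments(b))

def minutes(h, m=0):
    return h * 60 + m

# Class windows in GMT from the example: Europe 0900-1700,
# America 1730-0130 (next day), Asia-Pacific 0200-0830.
bookings = {
    "europe":  (minutes(9), minutes(17)),
    "america": (minutes(17, 30), minutes(1, 30)),
    "apac":    (minutes(2), minutes(8, 30)),
}

names = list(bookings)
conflicts = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
             if overlaps(bookings[a], bookings[b])]
print(conflicts)  # empty: no two classes contend for the equipment
```

Because none of the three windows overlap, the same storage device 2 can be presented to each class in turn over a 24 hour period.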
The User Devices 5 and 6 send Task Requests 7 and 8, respectively, at any time prior to the required time, to a scheduler 9. As shown also in FIG. 2, the task requests 7, 8 may specify the particular computer resources required, together with their configuration, or, in some implementations, may simply provide a description of the task(s) to be performed and allow the scheduler 9 to determine what resources and configurations are needed. The scheduler 9 then determines whether the required equipment is available, and, if so, allocates 10 the required equipment, as well as the data centre and replication, to that task for the required date and time. In one embodiment, the scheduler may first provide an indication of available equipment to the user to allow the user to choose which equipment the user would like to use. Once the particular equipment has been allocated for the task, the scheduler provides confirmations 11, 12 of this to the respective user (as shown in FIG. 1). The scheduler 9 then also saves 16 each task request 13, 14, respectively, to a data storage device 15 together with details of the equipment allocated and other required information. Prior to the requested start time of a particular task, the stored task request, together with the equipment allocation and other information is read from the data storage device 15 by a controller 16. Thus, in the example described above, the saved European task request 17 and equipment allocation is provided to the controller prior to the start time of 0900 GMT. The controller 16 then controls the required equipment 1 to configure it to the desired configuration 18 required for the European user 5. As will be explained later, this may be an existing configuration that was previously stored 19 in the data storage device, or may be a new configuration obtained 20 from a device on the network or otherwise specified.
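The scheduler behaviour described above, receiving a task request, checking availability, allocating the equipment and storing the confirmed request for the controller to read later, might be sketched as follows. This is an illustrative assumption, not the claimed implementation; the class and method names are invented for the example.

```python
# Illustrative sketch of the scheduler flow: check equipment availability
# for a requested slot, allocate it, and store the confirmed request.

class Scheduler:
    def __init__(self, equipment):
        self.equipment = set(equipment)
        self.bookings = []  # stored task requests, read later by the controller

    def _available(self, resource, start, end):
        # Available when no stored booking for this resource overlaps the slot.
        return all(not (b["resource"] == resource and
                        b["start"] < end and start < b["end"])
                   for b in self.bookings)

    def request(self, user, resource, start, end):
        """Return a confirmation if the particular task request can be met."""
        if resource not in self.equipment:
            return {"ok": False, "reason": "unknown resource"}
        if not self._available(resource, start, end):
            return {"ok": False, "reason": "not available"}
        booking = {"user": user, "resource": resource,
                   "start": start, "end": end}
        self.bookings.append(booking)  # persisted for the controller
        return {"ok": True, "booking": booking}

s = Scheduler(["storage-1"])
print(s.request("europe", "storage-1", 9, 17)["ok"])    # True
print(s.request("america", "storage-1", 10, 12)["ok"])  # False: slot clash
```

A rejected request could then prompt the user with alternative dates and times, as the flow chart of FIGS. 3A-3C describes.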
In any event, the required European configuration is obtained by the controller 16, and used to configure the equipment 1 to the European configuration 18. As shown in FIG. 2, this (re)configuration of the equipment 1 includes reconfiguration 21 of the switch 4, presentation of the storage devices with host environment and data as well as any other switches that may be required to provide the appropriate connections and zones for the European configuration. The required equipment is then rebooted 22 with the required operating system and other software and the new configuration is uploaded 23 to server 3 (and other required servers) with the appropriate European login credentials so that the equipment is ready to use 24 for the European class. When the European class has finished at 1700 GMT, the configuration of the equipment for this class at this time is then saved 25 to the control data storage device 15 within the SAN. At about this time, the next saved American task request 26, together with the equipment allocation and other information is read from the data storage device 15 by controller 16. The controller 16 then controls the required equipment 1 to configure it to the desired configuration 27 required for the American user 6. When the American class has finished at 0130 GMT, the configuration of the equipment for this class is saved 28 to the control data storage device 15. A third, Asia-Pacific configuration 29 is presented to the same equipment in preparation for an Asia-Pacific class starting at 0200 GMT and finishing at 0830 GMT. This configuration is then saved 30 to the control data storage device 15 and the original European configuration 31 as saved 25 at 1700 GMT the previous day is reloaded, enabling the European students to continue their class from the same point where they left it the previous day. Again, the European configuration is stored 32 at the end of their class.
Consequently, the teaching hardware has supported 3 classes with full utilisation.
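The save and reload cycle described above, in which each class's configuration is imaged at the end of its session and re-presented at the start of its next session, can be sketched as a simple keyed store. The sketch below is an editorial illustration with invented names; the fall-back to a predetermined (default) configuration follows the behaviour described in the summary.

```python
# Illustrative sketch of the controller's image save/reload cycle.

class Controller:
    def __init__(self):
        self.store = {}  # stands in for the control data storage device 15

    def save_image(self, user, config):
        self.store[user] = dict(config)  # image the configuration as used

    def configure_for(self, user, default):
        """Reload the user's previously stored image, or fall back to a
        predetermined (default) configuration if none exists."""
        return dict(self.store.get(user, default))

c = Controller()
default = {"os": "base-build", "vlan": 0}
eu = c.configure_for("europe", default)   # first session: default config
eu["progress"] = "day-1-complete"
c.save_image("europe", eu)                # saved at end of the 1700 GMT class
c.configure_for("america", default)       # equipment re-presented for the US
restored = c.configure_for("europe", default)
print(restored["progress"])               # the European class resumes here
```

Each user thus resumes from the exact state imaged at the end of their previous session, while the same equipment serves other users in between.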
FIGS 3A - 3C show a flow diagram of the operation of the system, with the flow moving from block A in FIG. 3A to block A in FIG. 3B, from block B in FIG. 3B to block B in FIG. 3C and from block A in FIG. 3C to block A in FIG. 3B. The process starts when a client logs-in 40 to the scheduler on the system. The scheduler checks 41 that the log-in is correct and, if not, deals with the client 42 either on the basis of a lost or forgotten log-in, or as a new client to generate a new account and get the terms and conditions of business accepted. If the log-in is correct, then the process moves forward to step 43, in which the client, as described above, makes a task request or modifies a previously made task request. The client is, at this time, requested to provide full details of the equipment needed for this task, as well as the dates and times that it is requested for. It will be apparent that, if the task is a recurring one, then the equipment details need only be provided once and the date and times can be entered as recurring. Alternatively, the particular equipment requirements may be stored as a particular client task requirement that the client can access each time a fresh request is made to save time, with only the date and time needing to be entered. In any event, the scheduler determines whether it has sufficient information to determine what equipment configuration is required. This would include the datacentre, replication, failover, node requirements, software build on nodes, storage requirements, specialist hardware requirements, LAN requirements and any recovery options. If the request is a modification to an existing task request, this is processed in step 44. If it is a new task request, then the scheduler generates a new unique schedule ID for the task and the process moves on to step 45, where the scheduler checks that the request is valid against the contract that the client has. If not, the contract needs to be updated to cover the new request.
This occurs through steps 46 and 47, where the contract is, firstly, updated, and then agreement with the client for the updated contract is sought. If the updated contract is agreed, the process moves on to step 48, but if not, then the task request is not allowed and the process moves back to step 43 to await a new task request. Provided the task request is covered by the contract, the process moves on to step 48, where the scheduler checks whether the required equipment is available at the required date and time. If it is not available, then the scheduler determines the dates and times that the required equipment is available and provides 49 those options to the client. The client then decides 50 whether to proceed with an alternative date and/or time. If so, the process moves back to step 44, but if not, then the process moves back to step 43 to await a new task request.
If the required equipment is available at the required date and time, then the process moves on to step 51 (see FIG. 3B), in which the scheduler processes the required date and time and sets (stores) particular time alarms based on the required configuration and the time necessary to (re)configure the equipment so as to be ready for the requested task. The scheduler checks 52 whether the required configuration will require reconfiguration from that in use prior to the requested task and, if not, sets the alarm time accordingly. If the required configuration will require reconfiguration from that in use prior to the requested task, then an alarm time is set for a time approximately 4 hours before the requested task start time. At this time, as shown in step 53, the controller 16 receives the alarm and the required configuration and checks whether the required configuration is a brand new configuration, or one that had previously been used. If it is brand new, the controller creates 54 new boot discs either from an image of the required configuration, which may be downloaded from elsewhere on the network, or by creating the boot disc from scratch by building up the required hardware and software to be installed. Once the new boot discs have been created, a copy of the required configuration is made 55 and the process moves on to step 59.
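The alarm-setting rule of step 52 might be expressed as follows. The four-hour lead time is taken from the text; the ten-minute figure for the no-reconfiguration case is an assumption borrowed from the allocation timing mentioned later in the flow, and may differ in practice.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the alarm-time rule: a task needing
# reconfiguration gets an alarm roughly four hours before its start
# time; otherwise the alarm can be set much closer to the start.

RECONFIGURE_LEAD = timedelta(hours=4)    # from the text
NO_CHANGE_LEAD = timedelta(minutes=10)   # assumed figure

def alarm_time(start, needs_reconfiguration):
    lead = RECONFIGURE_LEAD if needs_reconfiguration else NO_CHANGE_LEAD
    return start - lead

start = datetime(2006, 9, 29, 9, 0)      # 0900 GMT class start
print(alarm_time(start, True))           # 2006-09-29 05:00:00
print(alarm_time(start, False))          # 2006-09-29 08:50:00
```

The controller 16 would then receive the alarm at the computed time and begin preparing the required configuration.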
If, in step 53, it is determined that the configuration is not new, then a determination is made 56 whether the appropriate boot discs are in on-line storage. This determination can be made about 1 hour before the start time. If it is found that the required boot discs are not in on-line storage, then boot and data discs need to be created 57 from the required configuration, which may be mapped through the SAN infrastructure. If the client requires replication and/or datacentre options, then the storage is also replicated. The created discs are then restored 58 from near-line storage to the appropriate on-line storage for use and the process moves on to step 59. If, in step 56, it is determined that the required boot discs are available in on-line storage, then the process moves straight on to step 59, in which, about 10 minutes before the required start time, the controller allocates the required datacentre and hardware for the required configuration. The reconfiguration of the system involves making 60 all the hardware in the system visible in the infrastructure, and then creating vLAN modifications, as required and modifying DHCP entries where required. This step is likely to take no more than about two minutes. Appropriate zones are then created 61 and applied to the current environment, and the complete infrastructure, as at the current session, is copied 63 and the schedule and session IDs are updated. Each of steps 62 and 63 is also not expected to take more than about two minutes.
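The boot-disc decision of steps 53 to 58, using discs already in on-line storage, restoring them from near-line storage, or creating them afresh, can be sketched as a tiered lookup. All tier names and the disc-naming scheme below are illustrative assumptions, not part of the specification.

```python
# Illustrative sketch of the boot-disc lookup across storage tiers.

def prepare_boot_discs(config_id, online, nearline):
    """Return (source, disc) for a configuration, moving discs between
    hypothetical storage tiers as the flow chart describes."""
    if config_id in online:
        return "online", online[config_id]      # step 56: already on-line
    if config_id in nearline:
        disc = nearline[config_id]
        online[config_id] = disc                # step 58: restore to on-line
        return "restored", disc
    disc = f"boot-{config_id}"                  # step 54: created from scratch
    online[config_id] = disc
    return "created", disc

online, nearline = {}, {"apac": "boot-apac-v1"}
print(prepare_boot_discs("europe", online, nearline)[0])  # created
print(prepare_boot_discs("apac", online, nearline)[0])    # restored
print(prepare_boot_discs("apac", online, nearline)[0])    # online
```

The one-hour and ten-minute lead times in the text reflect that restoring from near-line storage takes longer than presenting discs already held on-line.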
The process then moves on to step 64, as shown in FIG. 3C, where the boot discs from on-line storage are presented to all nodes and the servers are re-booted from the requested restore or reconfigure position. Permissions and log-ins for clients are then set up 65 from the required task request and client access and host log-ins are enabled 66. Once the client has logged-in and is working as required 67, the controller checks 68 whether the client has requested a restore. If so, the controller presents 69 the client with a list of saved restore positions and the client chooses the required one. The controller then shuts down 70 the server and logically disconnects the SAN, as well as stopping any replication, before reverting to step 64 to re-boot the system.
If no restore request is received from the client in step 68, the controller checks 71 whether the user has requested to finish the session early, or, if not, whether the scheduled finish time for the session has been reached. If not, the process reverts to step 67, allowing the client to continue to work normally. If, however, the end of the session is reached in step 71, the controller logs-out 72 all the client sessions and checks 73 to make sure that this is the end of the client's entire request (the client may, for example, have several concurrent sessions booked in the same request). If the end of the client's booking request has not been reached, then the controller checks 74 with the scheduler whether the client has booked another session to start within a predetermined period of time, for example X days. If not, then all of the data is backed up 75 from the storage devices used by the client to offline or near-line storage and the process moves on to step 76. If the client has booked another session to start within a predetermined period of time, then the process moves straight to step 76, where all the storage devices are logically disconnected from the network and any replication is stopped and then the complete infrastructure, as at the current session, is copied 77 and the schedule and session IDs are updated. The process then moves back to step 51 to process the schedule to set the next alarm.
If, in step 73, it is determined that the end of the client's booking request has been reached, then the schedule ID is marked as expired and account and usage information is updated 78 before the process reverts back to step 51. It can thus be seen that the above described embodiment of the invention utilises redundant capacity of the system to provide re-configured servers and data storage devices whilst at the same time saving the original configuration on a data storage device within the main network environment. The original configuration can be retrieved at a later designated time and date. The above described embodiment of the invention enables the re-configured servers and data storage devices to be used for other tasks in time which would otherwise be designated as being redundant.
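The end-of-session decision of steps 73 to 76, in which client data is backed up to off-line or near-line storage only when no further session is booked within a predetermined period, might be sketched as follows. The seven-day window below stands in for the unspecified "X days" and is purely an assumption.

```python
from datetime import date, timedelta

# Illustrative sketch of the backup decision at session end: keep client
# storage on-line if another session is booked soon, otherwise back it up.

def should_back_up(session_end, next_booking, window_days=7):
    """True if client storage should be backed up to near-line storage.
    window_days stands in for the predetermined 'X days' in the text."""
    if next_booking is None:
        return True  # no future booking: always back up
    return next_booking - session_end > timedelta(days=window_days)

end = date(2006, 9, 29)
print(should_back_up(end, date(2006, 10, 2)))   # False: next session is soon
print(should_back_up(end, date(2006, 11, 1)))   # True: outside the window
print(should_back_up(end, None))                # True: booking has expired
```

Keeping recently active configurations on-line avoids the near-line restore delay described for step 58 when the client returns shortly afterwards.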
Furthermore, the above described embodiment of the invention incorporates a time management system or 'scheduler' that enables users to book allotted times for access to the network infrastructure. Depending on the nature of the task, the scheduler will highlight equipment availability and indicate to the user the designated equipment resource required to complete the task or allow the user to select equipment resource of their choosing. Once the details have been accepted by the scheduler, the management software will log the event and store the data within the control network infrastructure. At the designated time, the management software will unpresent the current image, equipment configuration, zoning parameters, vLAN protocols, login credentials, data and usage details of the equipment allocated to the new task to a control storage device within the network. The replacement image (which may be one that has been used previously, or a newly created image) including required software and utilities, zoning configuration, vLAN protocols and login credentials are then presented from the control storage device within the network via a LUN to the allocated equipment. The management software will reboot the servers as part of the process in accepting the LUN and consequently the new configuration. In the case of a SAN, it is the SAN itself which is providing the means to reboot the designated equipment. Once the new image and configuration has been accepted, the servers are ready for use. At the same time, the management software will reconfigure the main network switch together with the relevant local or rack switches in order to accommodate the new zoning configuration. The management software will also control the creation and allocation of images and passwords by 'managing' existing, established and readily available software within the operating system, which may be any of, for example, Microsoft Windows, HP-UX (Unix) or Linux operating systems.
The above described embodiment of the invention thus provides increased efficiency of usage on a device by device basis within a networked environment. The described embodiment makes it possible to maximise device usage while decreasing overall hardware expenditure over time. The user does not experience any perceptible change to their allocated equipment as long as their own configuration has been saved from their previous session or period of activity.
As can be seen from the above, therefore, the present invention provides a system and method for sharing computer resources which goes beyond the normal physical interactions between shared computer resources and networked user devices. More particularly, the system allows a network and the attached computer resources to be fully utilized over a 24 hour period in a manner which goes beyond the normal interaction of the network and computer resources, and instead turns them into a fully stateful virtual memory system that spans the entire system. Thus, any state can be returned to and processed at any time with any resource over any portion of the network.
It will be appreciated that although only one particular embodiment of the invention has been described in detail, various modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention.

Claims

1. A system of sharing a computer resource between a plurality of networked user devices, the system comprising: at least one shared computer resource available on a network; a plurality of user devices connectable to the network; and a time management controller comprising: a scheduler coupled to the network for receiving task requests from the user devices requiring access to computer resources at particular times, the scheduler determining whether a particular task request can be met and, if so, storing the particular times and particular computer resources required; a controller coupled to the scheduler, a data storage device, and the network for controlling the shared computer resource, the controller controlling the at least one shared computer resource, if required by a particular user device at a particular time, to image the configuration of the shared computer resource and its network environment as used by a previous user device prior to the particular time and to store the image in the data storage device, the controller further determining whether the particular user device has a previously stored image in the data storage device and, if so, to configure the shared computer resource and the network environment to that previously stored image so that the network environment and the computer resource is available to the particular user device at the particular time in the particular configuration whose image was previously stored.
2. A system of sharing computer resources according to claim 1, wherein the at least one shared computer resource comprises a computer having at least one logical drive.
3. A system of sharing computer resources according to claim 2, wherein the at least one logical drive comprises at least one virtual drive provided by a data storage device at a location remote from the shared computer resource.
4. A system of sharing computer resources according to any preceding claim, wherein the plurality of user devices includes at least one device which executes automatic regular operations.
5. A system of sharing computer resources according to any preceding claim, wherein the plurality of user devices includes at least one device which is operated manually by a user.
6. A system of sharing computer resources according to any preceding claim, wherein the scheduler includes a computer resource manager for determining which computer resources on the network will be required to perform a particular task request, determining whether those computer resources are available at the requested times and sending a confirmation to the user device that sent the request if the particular task request can be met.
7. A system of sharing computer resources according to any preceding claim, wherein, if the controller determines that no previously stored image exists for a particular user device, then the controller configures the network environment and the shared computer resource to a predetermined configuration.
8. A system of sharing computer resources according to claim 7, wherein the predetermined configuration is a default configuration.
9. A system of sharing computer resources according to claim 7, wherein the predetermined configuration is determined by the controller depending on data within the task request.
10. A system of sharing computer resources according to claim 7, wherein the predetermined configuration is obtained by the controller from another location on the network.
11. A system of sharing computer resources according to any preceding claim, wherein the image of the network environment and the shared computer device includes information regarding the identity of the user device.
12. A system of sharing computer resources according to any preceding claim, wherein the image of the network environment and the shared computer device includes information regarding logon details of the user device.
13. A system of sharing computer resources according to any preceding claim, wherein the image of the network environment and the shared computer device includes information regarding the identity of the user device.
14. A method for sharing a computer resource between a plurality of networked user devices, the method comprising the steps of: receiving a task request from a user device connected to a network requiring access to a shared computer resource on the network at a particular time; determining whether a particular task request can be met and, if so, storing the particular time and particular computer resource required; imaging the configuration of the shared computer resource and its network environment as used by a previous user device prior to the particular time; storing the image; determining whether the particular user device has a previously stored image and, if so, configuring the shared computer resource and the network environment to that previously stored image so that the network environment and the computer resource is available to the particular user device at the particular time in the particular configuration whose image was previously stored.
15. A method for sharing computer resources according to claim 14, further comprising the steps of: determining which computer resources on the network will be required to perform a particular task request; determining whether those computer resources are available at the requested time and sending a confirmation to the user device that sent the request if the particular task request can be met.
16. A method for sharing computer resources according to either claim 14 or claim 15, wherein, if it is determined that no previously stored image exists for a particular user device, then configuring the network environment and the shared computer resource to a predetermined configuration.
17. A method for sharing computer resources according to claim 16, wherein the predetermined configuration is a default configuration.
18. A method for sharing computer resources according to claim 16, wherein the predetermined configuration is determined by the controller depending on data within the task request.
19. A method for sharing computer resources according to claim 16, wherein the predetermined configuration is obtained by the controller from another location on the network.
PCT/GB2006/003634 2005-09-29 2006-09-29 A system and method for sharing computer resources WO2007036739A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0519890.8 2005-09-29
GBGB0519890.8A GB0519890D0 (en) 2005-09-29 2005-09-29 A system and method for sharing computer resources

Publications (2)

Publication Number Publication Date
WO2007036739A2 true WO2007036739A2 (en) 2007-04-05
WO2007036739A3 WO2007036739A3 (en) 2007-07-12

Family

ID=35395011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2006/003634 WO2007036739A2 (en) 2005-09-29 2006-09-29 A system and method for sharing computer resources

Country Status (2)

Country Link
GB (1) GB0519890D0 (en)
WO (1) WO2007036739A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022237255A1 (en) * 2021-05-14 2022-11-17 华为技术有限公司 Management method and system for computing node

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5974547A (en) * 1998-03-20 1999-10-26 3Com Corporation Technique for reliable network booting of an operating system to a client computer
US20020161995A1 (en) * 2001-04-27 2002-10-31 International Business Machines Corporation Method and system for organized booting of a target device in a network environment
US20030126242A1 (en) * 2001-12-28 2003-07-03 Chang Albert H. Network boot system and method using remotely-stored, client-specific boot images created from shared, base snapshot image
US20040049670A1 (en) * 2002-09-10 2004-03-11 Jareva Technologies, Inc. Off-motherboard resources in a computer system
US20040059900A1 (en) * 2002-09-24 2004-03-25 Drake Backman Mechanism for controlling PXE-based boot decisions from a network policy directory
US6751658B1 (en) * 1999-10-18 2004-06-15 Apple Computer, Inc. Providing a reliable operating system for clients of a net-booted environment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5974547A (en) * 1998-03-20 1999-10-26 3Com Corporation Technique for reliable network booting of an operating system to a client computer
US6751658B1 (en) * 1999-10-18 2004-06-15 Apple Computer, Inc. Providing a reliable operating system for clients of a net-booted environment
US20020161995A1 (en) * 2001-04-27 2002-10-31 International Business Machines Corporation Method and system for organized booting of a target device in a network environment
US20030126242A1 (en) * 2001-12-28 2003-07-03 Chang Albert H. Network boot system and method using remotely-stored, client-specific boot images created from shared, base snapshot image
US20040049670A1 (en) * 2002-09-10 2004-03-11 Jareva Technologies, Inc. Off-motherboard resources in a computer system
US20040059900A1 (en) * 2002-09-24 2004-03-25 Drake Backman Mechanism for controlling PXE-based boot decisions from a network policy directory

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022237255A1 (en) * 2021-05-14 2022-11-17 华为技术有限公司 Management method and system for computing node

Also Published As

Publication number Publication date
WO2007036739A3 (en) 2007-07-12
GB0519890D0 (en) 2005-11-09

Similar Documents

Publication Publication Date Title
US6880002B2 (en) Virtualized logical server cloud providing non-deterministic allocation of logical attributes of logical servers to physical resources
EP3338186B1 (en) Optimal storage and workload placement, and high resiliency, in geo-distributed cluster systems
EP3469478B1 (en) Server computer management system for supporting highly available virtual desktops of multiple different tenants
CN102763102B (en) For data environment from service configuration
US8171485B2 (en) Method and system for managing virtual and real machines
US8028193B2 (en) Failover of blade servers in a data center
US8688772B2 (en) Method and apparatus for web based storage on demand
US20050080891A1 (en) Maintenance unit architecture for a scalable internet engine
US20020129128A1 (en) Aggregation of multiple headless computer entities into a single computer entity group
US7805600B2 (en) Computer-implemented systems and methods for managing images
US20090193110A1 (en) Autonomic Storage Provisioning to Enhance Storage Virtualization Infrastructure Availability
US8224941B2 (en) Method, apparatus, and computer product for managing operation
US9471137B2 (en) Managing power savings in a high availability system at a redundant component level of granularity
WO2019222262A1 (en) Apparatuses and methods for zero touch computing node initialization
US8819200B2 (en) Automated cluster node configuration
JP2002278769A (en) Method for automatically installing os and computer network system
US20060271672A1 (en) System and method for loading various operating systems from a remote console
CN107632877A (en) VDI and VOI framework virtual machine emerging systems and startup method
EP1611523B1 (en) Controlling usage of system resources by a network manager
WO2007036739A2 (en) A system and method for sharing computer resources
JP2003050649A (en) Centralized control system, its method and program for performing centralized control
US20240069892A1 (en) Cloud provisioned boot volumes
Syrewicze et al. Using failover cluster manager to manage hyper-v clusters
JP2003208345A5 (en)
Zacker Exam Ref 70-410 Installing and Configuring Windows Server 2012 R2 (MCSA)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06794599

Country of ref document: EP

Kind code of ref document: A2

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS EPO FORM 1205A DATED 09.06.2008.

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 09.06.2008)

122 Ep: pct application non-entry in european phase

Ref document number: 06794599

Country of ref document: EP

Kind code of ref document: A2