US7035919B1 - Method for calculating user weights for thin client sizing tool - Google Patents

Method for calculating user weights for thin client sizing tool

Info

Publication number
US7035919B1
Authority
US
United States
Prior art keywords
user
users
heavy
application
server farm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US09/813,668
Inventor
Sharon Marie Lee
Leonard Eugene Eismann
Kathryn Ann McDonald
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisys Corp
Original Assignee
Unisys Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US09/813,668 (granted as US7035919B1)
Assigned to UNISYS CORPORATION. Assignment of assignors' interest. Assignors: EISMANN, LEONARD EUGENE; LEE, SHARON MARIE; MCDONALD, KATHRYN ANN
Application filed by Unisys Corp
Application granted
Publication of US7035919B1
Assigned to UNISYS HOLDING CORPORATION and UNISYS CORPORATION. Release by secured party. Assignors: CITIBANK, N.A.
Assigned to UNISYS CORPORATION and UNISYS HOLDING CORPORATION. Release by secured party. Assignors: CITIBANK, N.A.
Assigned to DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE. Patent security agreement (priority lien). Assignors: UNISYS CORPORATION
Assigned to DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE. Patent security agreement (junior lien). Assignors: UNISYS CORPORATION
Assigned to GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT. Security agreement. Assignors: UNISYS CORPORATION
Assigned to UNISYS CORPORATION. Release by secured party. Assignors: DEUTSCHE BANK TRUST COMPANY
Assigned to UNISYS CORPORATION. Release by secured party. Assignors: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE
Assigned to WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL TRUSTEE. Patent security agreement. Assignors: UNISYS CORPORATION
Assigned to JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT. Security interest. Assignors: UNISYS CORPORATION
Assigned to UNISYS CORPORATION. Release by secured party. Assignors: WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR TO GENERAL ELECTRIC CAPITAL CORPORATION)
Assigned to UNISYS CORPORATION. Release by secured party. Assignors: WELLS FARGO BANK, NATIONAL ASSOCIATION
Adjusted expiration
Legal status: Expired - Lifetime (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/30: Monitoring
    • G06F11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3438: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment; monitoring of user actions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/535: Tracking the activity of the user
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/30: Monitoring
    • G06F11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3452: Performance evaluation by statistical analysis

Abstract

A Thin Client Sizer, used to configure an optimal Server Farm, requires specific data regarding the category level of utilization, by each User, of each of the Applications available to the Server Farm. A method is presented which, as input to a Solution Configurator, selects and categorizes each User-Type User according to that User's utilization of each Application used in the Server Farm.

Description

FIELD OF THE INVENTION
This disclosure involves a method for calculating user weights for use during the solution generation and configuration process for an enterprise which uses a Thin Client Sizing Tool to propose an optimal configuration of Server Farms.
CROSS-REFERENCES TO RELATED APPLICATIONS
This application is related to the co-pending applications listed hereinbelow, all of which are incorporated herein by reference.
U.S. Ser. No. 09/813,667 now abandoned entitled “THIN CLIENT SIZING TOOL FOR ENTERPRISE SERVER FARM SOLUTION CONFIGURATOR”;
U.S. Ser. No. 09/813,671 entitled “CONFIGURATION INTERVIEW SESSION METHOD FOR THIN CLIENT SIZING TOOL”;
U.S. Ser. No. 09/813,672 entitled “METAFARM SIZER CONFIGURATION OPTIMIZATION METHOD”;
U.S. Ser. No. 09/813,670 entitled “SOLUTION GENERATION METHOD FOR THIN CLIENT SIZING TOOL”;
U.S. Ser. No. 09/813,669 entitled “METHOD FOR CALCULATING MEMORY REQUIREMENTS FOR THIN CLIENT SIZING TOOL”;
U.S. Pat. No. 6,496,948 entitled “METHOD FOR ESTIMATING THE AVAILABILITY OF AN OPERATING SERVER FARM”;
U.S. Pat. No. 6,571,283 entitled “METHOD FOR SERVER FARM CONFIGURATION OPTIMIZATION”;
U.S. Ser. No. 09/705,441 now U.S. Pat. No. 6,859,929 entitled “METHOD FOR SERVER METAFARM CONFIGURATION OPTIMIZATION”.
BACKGROUND OF THE INVENTION
For newly developing enterprises which have multiple users at several different sites, many different types of problem situations are presented to the designer, the proposal-maker, and the configurator of Server Farm facilities.
Part of the solution to these problems is the need to establish and utilize the “user weights” involved, according to the data in the Customer Profile developed in connection with U.S. Ser. No. 09/813,671, now abandoned.
In order for a designer or developer to provide a solution configuration for a customer having many users, or for an enterprise with many client-user terminals, a calculation must be made of the appropriate number and type of servers required, as per the configuration development in U.S. Ser. No. 09/813,672 (now U.S. Pat. No. 6,963,828) and Ser. No. 09/813,670. Part of that design and development work for an appropriate proposal involves calculating the weight (stress) of users relative to a typical benchmark user for the Thin Client Sizing Tool.
Data often neglected and seldom investigated in the prior-art methods of estimating and configuring Server Farms for enterprises was the area of “user weights”, which concerns the types of users who use the various application programs in a system. These users can now be identified as light, medium, heavy, and super-heavy users. As a result, these factors can now be taken into the development of algorithms which will help provide the most appropriate solution for a given enterprise or group of users.
One area of specific development which had often been ignored or unknown in past estimations was that of determining the level of stress, the speed, and the quality and quantity of usage involved by the different types of personnel, who are denoted as “user-types”. By also taking into consideration the different types of applications involved with each user-type, and how they impact overall network operations, this information can then be factored in.
As a result, the presently-described method for calculating user weights in the Thin Client Sizing Tool operation eliminates much of the guesswork and helps develop a more accurate configuration solution.
SUMMARY OF THE INVENTION
The present invention provides a tool for serving an enterprise with an algorithm that determines the particular level of stress involved with different user-types of user personnel, and that correlates the types of applications and servers with which these user-types are involved. This can then be used to estimate an adjusted number of users in a Server Farm that correlates more closely to the average or typical benchmark user for each specific Server.
Thus, by taking into account the number of typical users and the number of different stress levels, such as light, medium, heavy, and super-heavy, involved for the applications used, these stress levels can be used to help determine a proposed solution configuration for a potential Thin Client customer of an enterprise.
The presently-described method provides a means for assessing the stress of processor use, as that use is imposed by the different types of users and the different types of applications they run on each particular server. The algorithm introduces the concept of rating different application attributes and placing them in generic categories to aid in proposing an optimal configuration and sizing of Servers for the enterprise.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1, 2A and 2B show flowcharts illustrating the various steps involved in calculating the user weights for the Thin Client Sizing Tool;
FIG. 3 is an overall environmental drawing showing the basic modules and elements involved in a particular enterprise solution.
GLOSSARY LIST OF RELEVANT ITEMS
  • 1. ADJUSTED USER TOTAL (SERVER FARM): The normalized total number of Users that will be supported by the SERVER FARM. Unadjusted Users are grouped into 4 distinct usage-pattern categories, namely (a) Light, (b) Medium, (c) Heavy, and (d) Super Heavy. Calculations are performed on the number of Users in each grouping to determine the normalized number of Users. These normalized numbers are then summed to establish the ADJUSTED USER TOTAL for the entire SERVER FARM.
  • 2. APPLICATION DELIVERY SOLUTION CONFIGURATOR: This is the Unisys approved and recognized designation of the present method and system as defined by this invention. This is a Windows application that helps one in choosing the best-developed Application Delivery (Thin Client) server solution that will meet a client's requirements. This Solution Configurator guides one through a customer interview session where information is gathered in order to develop a set of solutions that will match the customer's performance requirements but also provide different availability levels suitable to the customer-client.
  • 3. APPLICATION SERVER: This is the intended use or responsibility of one of the designated server farms. This type of server farm would run computer programs or pieces of software designed to perform specific multi-user tasks solely within the Windows Terminal Server systems making up the server farm. APPLICATION SERVERS would not be dependent on other back-end servers for the processing of data.
  • 4. APPLICATION TYPE: This is one of four main interview categories used by the described Thin Client Sizer Tool for collecting customer information and collecting also Application Type documents involving the memory and the disk resources typically required when running an application. Supplying the Application Types that will be running helps to size the Server Farm in order to sufficiently handle the client demand.
  • 5. APPINPUT: GUI-based—This requires limited User input such as an application developed with Microsoft Visual Basic where selections are made from lists or by clicking various options. Text-based—Requires considerable typing by the User such as creating a document in Microsoft Word.
  • 6. APPOUTPUT: Text-based—Indicates the kind of information presented by the application. For example, most Visual Basic or C++ windows and dialog boxes, most uses of productivity apps (Microsoft Office), terminal emulation, etc. Graphic-based—Indicates the kind of information presented by the application. For example, desktop publishing large documents with graphics, Web pages with a lot of picture content (JPEG files), scanned images (TIF files), Microsoft Encarta, etc.
  • 7. APPPROCESSING: Light—Indicates the application executing on the terminal server does little more than present a GUI. For example, a Visual Basic application, the SAP thin client, light use of productivity apps (Microsoft Office), terminal emulation, etc. Heavy—Indicates the application executing on the terminal server uses more processor, memory or disk resource usage. For example, the PeopleSoft Thin Client, Outlook Exchange client, heavy use of productivity apps for complex tasks (desktop publishing, large documents with graphics, extremely large spreadsheets with complex cascading calculations, etc.)
  • 8. AVAILABILITY: This is a measure of the readiness of the system and an application to deliver an expected service to the User with a required performance level. It may be described as a percentage of time that a system and an application are running as distinguished from the system being down for maintenance or repairs.
  • 9. AVAILABILITY GOAL: This is the target service level as defined by the client for the server farm. This data value is input to the tool as a percentage of time that the client expects the systems and applications in the server farm to be accessible by all Users.
  • 10. AVAILABILITY TAB WINDOW (FIGS. 24A, 24B OF U.S. Ser. No. 09/813,667): This shows the Availability Calculator which helps to determine solutions that include future/growth potential requirements with a variety of redundancy levels. This screen is interactive; it takes input for the Adjusted Concurrent number of users, system repair times, and redundancy levels, and returns solution information such as the estimated number of servers, number of peak users, availability, estimated downtime, number of redundant servers, and server farm mean time to failure (MTTF).
  • 11. BACKGROUND PROCESSING: The ability of a user-interactive software application to execute processing steps independent of the input and output actions. Background processing would include, but is not limited to, procedures such as ‘always on’ spell checking in a word processor or ‘always on’ calculations in a spreadsheet.
  • 12. BENCHMARK: This is a test of computer performance, consisting of a test or set of tests used to measure the performance of an individual e-@ction ES Terminal Server. The output from these tests is a value designated as the total number of Users that each e-@ction ES Terminal Server system can reasonably sustain and process.
  • 13. BASE SOLUTIONS TAB WINDOW (FIG. 23 OF U.S. Ser. No. 09/813,667): Reports the minimum server configuration recommendation (i.e., not including additional redundancy or growth considerations) for each of the customer Site's server farms. A base solution includes the minimum number of servers and GB RAM required with regard to the Operating system, # processors and MHz available for each server type supported by Unisys.
  • 14. CITRIX METAFRAME: This is computer software from Citrix Systems, Inc., headquartered in Ft. Lauderdale, Fla. This METAFRAME software is loaded onto each Windows Terminal Server and provides superior enterprise-level management and control functions for the e-@ction Enterprise Servers.
  • 15. CITRIX METAFRAME ADD-ONS: ICA Secure and Load Balancing Services are two optional software packages that can be run simultaneously with CITRIX METAFRAME on a Terminal Server. ICA Secure provides enhanced network security for METAFRAME. Load Balancing Services allow Citrix MetaFrame to distribute application processing to the plurality of computer systems in a server farm.
  • 16. CONCURRENT USERS: This number is an estimate of the maximum number of users simultaneously processing applications on a Server Farm at any given time. This is characteristically a percentage of the total Benchmark users that can be sustained on all of the e-@ction Enterprise Servers in the Server Farm.
  • 17. CONFIGURATION DATABASE TEMPLATE: This is a collection of data on a computer applied during the information collection process and utilized for the assembly of information collected from window screens.
  • 18. CONFIGURATION SESSION: This is the vehicle used by the described Thin Client Sizer Tool to collect the information on a customer's sizing requirements and to generate the best solution to meet those requirements.
  • 19. CONFIGURATION SESSION DATABASE: This is a collection of data on a computer used for providing information to an instance of the Application Delivery Solution Configurator that enables algorithmic steps and calculations to be applied in the development of an optimized configuration for the Server Farm.
  • 20. CONFIGURATOR: See APPLICATION DELIVERY SOLUTION CONFIGURATOR.
  • 21. CUSTOMER DATA TAB WINDOW (FIG. 22 OF U.S. Ser. No. 09/813,667): Reports back to the customer the information that was collected during the interview session and on which the solution generation was based.
  • 22. CUSTOMER PROFILE: This is a collection of data describing the customer's characteristics and attributes and assembled from the customer interview. This data is input to an algorithm which will output a configuration solution for that particular User or customer.
  • 23. DEFAULT AVAILABILITY: The four (4) SERVER FARM initial availability level scenarios as calculated and displayed by the AVAILABILITY CALCULATOR. The availability levels for the Server Farm are calculated based on the following three parameters: (1) the number of adjusted concurrent users, (2) the system repair time, and (3) the REDUNDANCY FACTOR. For the four DEFAULT AVAILABILITY levels, the first parameter is calculated based on the sizing of the SERVER FARM, and the latter two parameters have pre-configured values, as chosen by the Engineering Group, where the second parameter is held constant at 6 hours and the third parameter is varied from 25% to 10% in decrements of 5%.
  • 24. DISK CAPACITY TAB WINDOW (FIG. 27 OF U.S. Ser. No. 09/813,667): Reports on the disk capacity requirements determined by the interview session input and solution generation algorithms for each of the customer Site's Server Farms.
  • 25. DOWNTIME: The downtime or repair time for a single application server is the time interval required to restore the server and system back to normal business operation. At the end of the repair period, the applications running on the repaired server are available to Users. The downtime for a Server Farm is the time interval required to restore the nominal Server Farm performance.
  • 26. e-@CTION ENTERPRISE SERVER (ES): This is the specific name for a plurality of server models marketed and sold by Unisys Corporation. Current models include ES7000, ES5000, and ES2000 systems.
  • 27. ESTIMATOR PROGRAM: This is a program which performs method steps for estimating system parameters such as the availability of an application program to run on any computer or server in the cluster of at least two servers or computers. This type of estimator program was the subject of U.S. Pat. No. 6,334,196 which is incorporated herein by reference. Another estimator program is the subject of this patent application.
  • 28. ETO: This represents engineering technology optimization and involves an organization located at a specific company location that is devoted to optimizing the performance of the Enterprise-class Windows NT Server platforms.
  • 29. FAILOVER: This is a mode of operation in the system which has two or more servers or computers wherein a failure in one of the servers or computers will result in transfer of operations to the other or another one of the still operating servers and computers. Failover time is the period of time required for successful transfer from a failed server to an operative server.
  • 30. INPUT CHARACTERISTICS: These attributes describe how input is provided to the customer's software applications—through textual typing, through GUI-based screen manipulation, or through a combination of both methods.
  • 31. KBPS REQUIREMENTS (SERVER FARM): This is the total data transmission capacity (or bandwidth), measured in kilobits per second (Kbps), which will be needed for all bi-directional communication between the Users' concurrent connections and the SERVER FARM(s).
  • 32. MB (MEGABYTE): A unit of computer memory or disk storage space equal to 1,048,576 bytes.
  • 33. MEAN TIME TO FAILURE (MTTF): This is the average operating time between two failures, which can be estimated as the total operating time divided by the number of failures.
  • 34. MEAN TIME TO REPAIR (MTTR): This is the average “downtime” in case of failure, which can be estimated as the total downtime divided by the number of failures.
  • 35. MEMORY REQUIREMENTS: This is the necessary amount of server memory used by each User's instance of the multi-user software application.
  • 36. NETWORK CAPACITY TAB WINDOW (FIG. 26 OF U.S. Ser. No. 09/813,667): Now called the Network Utilization tab; reports on the estimated network activity measured in Kbps for each of the customer Site's Server Farms.
  • 37. OUTPUT CHARACTERISTICS: These attributes describe how output is derived from the customer's software applications—through the display of visual information as text, as graphics, as animated graphics, or as a combination of one or more methods.
  • 38. OPTIMIZATION CRITERION: This is a function that determines the value of one of the essential system attributes and must be minimized (or maximized) by variation of one or more system parameters that are chosen as OPTIMIZATION PARAMETERS. Each optimization parameter should have a predefined domain that defines the values that the optimization parameter may assume. The OPTIMIZATION CRITERION is a focus of an optimum system design or configuration. The examples of the optimization criteria are system performance, system availability, and cost of ownership.
  • 39. OPTIONAL SOFTWARE TAB WINDOW (FIG. 25 OF U.S. Ser. No. 09/813,667): Reports on the additional features/capabilities entered in the interview session regarding the customer's profile for each of the Site's Server Farms. Optional software requirements include such categories as Client Connection Methods, Enhancements, Environment support, Multimedia capabilities, Display characteristics, Protocol support, and Server Enhancements.
  • 40. PROCESSING CHARACTERISTIC: This attribute describes whether the customer's software application performs extensive BACKGROUND PROCESSING, independent from the processing of application input and output.
  • 41. REDUNDANCY FACTOR (Rf): This is a measure of the additional number of Users that can be added to the nominal number of Users per server without exceeding the maximum number of Users per server (server performance benchmark maximum of Users). It is the difference between maximum and nominal performance, expressed as a percentage of the maximum performance. The Redundancy Factor can be calculated as 100 percent minus a usage factor Uf (see the sketch following this glossary).
  • 42. SERVER CONFIGURATION REPORT: This is a report generated by the Thin Client Sizer Tool that will contain the information on the optimum server configurations as determined by the customer information which was collected during the Configuration Session and the performance benchmarking results.
  • 43. SERVER FARM: This is one of the five main interview categories used by the Thin Client Sizer Tool for collecting customer information. A Server Farm consists of one or more Windows Terminal Servers configured together for unified administration, security, and for communication services. For instance, two Server Farms might be required for certain applications such as the PeopleSoft clients, or one server for a Payroll function, and another server for a Human Resources function.
  • 44. SERVER FARM AVAILABILITY CALCULATOR: This is an estimator program that estimates the availability for the Server Farm.
  • 45. SERVER FARM OVERFLOW: The condition whereby the results of calculations on the number of servers in a SERVER FARM, during the Solution Generation phase, exceed the maximum number of servers recommended for a SERVER FARM as determined by the Engineering Group.
  • 46. SERVER INFORMATION DATABASE: This is a collection of data on a computer for holding benchmark and informational data on a plurality of Unisys Enterprise Server systems. This data is used by the Thin Client Sizing Tool in determining the optimum server farm configuration to meet the customer's sizing requirements.
  • 47. SITE: This is one of the five main interview categories used by the Thin Client Sizer Tool for collecting customer information. A Site is the physical location where the Windows Terminal Servers will be located in particular cities such as, New York, Los Angeles or Chicago, etc. and the number of users at that physical location.
  • 48. SITE/SERVER FARM PAIR: This is a defined combination of a specific Server Farm residing within a particular physical location. As defined during the customer interview, each physical location, or site, can contain one or more Server Farms. When defining the User and Application characteristics of each Server Farm within the site, each individual combination is considered as an independent pair.
  • 49. SIZING DATABASE: This is a collection of data on a computer, output from the THIN CLIENT SERVER FARM AVAILABILITY CALCULATOR, and used for storing the number of e-@ction Enterprise Server unit modules and their availability levels.
  • 50. SOLUTION CONFIGURATOR: See APPLICATION DELIVERY SOLUTION CONFIGURATOR.
  • 51. SOLUTION GENERATION: The act of producing a particular SERVER FARM configuration (i.e. the SOLUTION) that will meet the sizing and availability requirements of a client. This SOLUTION will be comprised of an appropriate number of servers, proper disk space and memory to meet the client requirements.
  • 52. THIN CLIENT SERVER FARM AVAILABILITY CALCULATOR: This is one of the examples of the SERVER FARM AVAILABILITY CALCULATOR. Because Thin Client configurations are intended to make applications available to multiple Users at the same time, this calculator calculates the availability of a specified number of instances of an application (not just a single instance) where each application instance is being run at the server, but all the User input response is taking place at the client terminal. In this scenario, downtime occurs whenever the number of available instances of the application drops below the required specified number of instances.
  • 53. UCON32: This is the unit designated as the Unisys Configurator which is an extensive on-line configuration tool which is used to support all Unisys Corporation system platforms.
  • 54. USAGE FACTOR (Uf): This is the ratio of the nominal number of Users per server to the maximum number of Users per server (server performance benchmark maximum of Users) times 100 percent.
  • 55. USER-TYPE: This is one of the five main interview categories used by the Thin Client Sizer Tool for collecting customer information. A User-Type embodies the usage patterns of a particular group of Users. User usage patterns will have a significant impact on performance. The attribute considered here is the user's typing speed. Some examples of User-Types are order entry clerks, secretaries, developers, and technical writers.
  • 56. USER WEIGHT: This is the estimated average user impact (light, medium, heavy or super heavy) on the Windows Terminal Server, and a value is assigned to each User Type by the sizing tool. Such User attributes as typing speed or application familiarity can all affect this parameter. It is used to approximate the amount of server processor usage that is imposed by the different User Types.
  • 57. WINDOWS TERMINAL SERVER: This is the designation for an e-@ction Enterprise Server that is running one of two operating systems sold and supported by Microsoft Corporation: (1) Windows NT Server 4.0, Terminal Server Edition, or (2) Windows 2000 (Server, Advanced Server, or Datacenter Server) with the optional Terminal Services service enabled in Application Server mode.
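As a small numerical illustration of glossary items 41 and 54 above, the Usage Factor and Redundancy Factor definitions can be written out directly. The following Python sketch is illustrative only; the function names and the sample figures are assumptions chosen here and are not taken from the patent.

    def usage_factor(nominal_users_per_server, max_users_per_server):
        # USAGE FACTOR (Uf), glossary item 54: the nominal number of Users per
        # server as a percentage of the benchmark maximum Users per server.
        return (nominal_users_per_server / max_users_per_server) * 100.0

    def redundancy_factor(nominal_users_per_server, max_users_per_server):
        # REDUNDANCY FACTOR (Rf), glossary item 41: 100 percent minus the usage factor Uf.
        return 100.0 - usage_factor(nominal_users_per_server, max_users_per_server)

    # Illustrative figures only: a server benchmarked at 200 Users, nominally loaded to 150.
    print(usage_factor(150, 200))        # 75.0  (Uf)
    print(redundancy_factor(150, 200))   # 25.0  (Rf)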
DESCRIPTION OF PREFERRED EMBODIMENT
FIG. 3 is an overall environmental drawing of a Thin Client Solution Configurator method which indicates the various elements involved in providing and optimizing Enterprise server solutions for a customer's enterprise.
As seen in FIG. 3, the Application Delivery Solution Configurator program algorithm 60 has a series of input and output connections which include an input from the Customer Client Profile 10, plus inputs from the Server Information Database 20, the Sizing Database 30, the Configuration Database Template 40, and the Configuration Session Database 50. Additionally, the algorithm of the Application Delivery Solution Configurator 60 also provides information for storage in the Sizing Database 30 and the Configuration Session Database 50, after which a final series of information and reports can be provided at the Reports Module 70.
The Application Delivery Solution Configurator program 60 has a number of areas which must be fulfilled and satisfied in order to provide the final report. One of these areas which must be calculated and provided to the Solution Configurator 60 is that of the present invention which involves the method for calculating user weights. The weights are designated as: light, medium, heavy or super-heavy.
FIGS. 1 and 2 illustrate a flowchart showing the various steps involved in calculating the user-weight information, which can then be input to the Application Delivery Solution Configurator 60 at the appropriate steps in order to help generate the final configurator solution.
FIGS. 1 and 2, as shown, will be described hereinbelow as a series of steps designated E1, E2, E3 . . . E16. Additionally, during the description of the flowchart steps involved, there will be given certain numbers and application information in a specific example to better illustrate exactly how the particular algorithm can be effectuated. These numbers are for illustrative purposes only and will vary depending on the customer profile and the type of results desired by a given customer-user or enterprise developer.
Referring to FIGS. 1 and 2, and using a simple example, the customer's profile for a single Server Farm could look like the following:
    • I. Server Farm: Engineering (650 concurrent users)
      • 300 Developers use a terminal emulator from Attachmate
      • 200 Developers use Microsoft Internet Explorer
      • 100 Developers use Microsoft Access 97
      • 50 Developers use IOCooker
    • IA: User Type attributes:
      • Developers—insignificant average typing speed
    • IAA: Application attributes:
      • (i) Attachmate terminal emulators—32-bit application with GUI-based input, text-based output and “Light” background processing.
      • (ii) Internet Explorer—32-bit application with text-based I/O and “Medium” background processing.
      • (iii) Access 97—32-bit application with mostly GUI-based input, text-based output and “Heavy” background processing.
      • (iv) IOCooker—16-bit application with text-based I/O and “Super-Heavy” background processing.
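Before walking through the flowchart, it may help to restate the example profile above as data. The following Python sketch is purely illustrative; the variable and field names are assumptions chosen here and are not part of the patented tool.

    # Example Engineering Server Farm profile (I) restated as data.
    engineering_farm = {
        "user_type": "Developers",
        "typing_45_wpm_or_faster": False,   # "insignificant" average typing speed
        "applications": [
            {"name": "Attachmate terminal emulator", "concurrent_users": 300,
             "is_16bit_or_msdos": False, "input": "GUI", "output": "text",
             "background_processing": "light"},
            {"name": "Internet Explorer", "concurrent_users": 200,
             "is_16bit_or_msdos": False, "input": "text", "output": "text",
             "background_processing": "medium"},
            {"name": "Access 97", "concurrent_users": 100,
             "is_16bit_or_msdos": False, "input": "GUI", "output": "text",
             "background_processing": "heavy"},
            {"name": "IOCooker", "concurrent_users": 50,
             "is_16bit_or_msdos": True, "input": "text", "output": "text",
             "background_processing": "super-heavy"},
        ],
    }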
The sequence begins in FIG. 1 with the User-Types assigned to a particular Server Farm within a Site E1. In the above example, the only “User Type” assigned to the Engineering Server Farm (I) is that of Developers. Each Application used concurrently by the 650 Developers is then considered at step E2, beginning with the Attachmate terminal emulator (i), which is used by 300 Developers.
Step E2a of FIG. 1 then refers to steps E3–E13, which are shown in FIG. 2.
The first decision block E3 of FIG. 2 asks whether the Attachmate terminal emulator is either a 16-bit or MS-DOS application, to which the initial answer here is “NO”. With the “NO” answer, the next decision block, step E4, asks whether the Attachmate terminal emulator's background processing is Heavy, to which the initial answer is “NO”. With the “NO” answer, the next decision block, step E5, asks whether the Attachmate terminal emulator's output is graphic-based or animated, to which the initial answer is “NO”. With the “NO” answer, the next decision block, step E6, asks whether the Attachmate terminal emulator's input is mostly GUI-based, to which the initial answer is “YES”. With the “YES” answer, the next decision block, step E7, asks whether the Attachmate terminal emulator's background processing is light, to which the initial answer is “YES”. With the “YES” answer, the next decision block, step E8, asks whether the Attachmate terminal emulator's output is mostly text-based, to which the initial answer here is “YES”.
At this point, the number of Developer User-Type Users concurrently running the Attachmate terminal emulator application (300) is added to a category called the Light User Total at step E13. Thus, at this point there are 300 “light users” for the Server Farm. The next step, E14 in FIG. 1, then asks if there are “More Applications?” involved, which for this example is answered “YES” (since there are other applications in the Server Farm, namely Internet Explorer, Access 97, and IOCooker), and the flow sequence returns to step E2.
The next application considered for the Developers User Type is the Internet Explorer application (ii) at step E2, FIG. 1. The decision block at step E3 of FIG. 2 asks whether Internet Explorer is either a 16-bit or MS-DOS application, to which the answer here is “NO”. With the “NO” answer, the next decision block, step E4, asks whether Internet Explorer's background processing is Heavy, to which the answer is “NO”. With the “NO” answer, the next decision block, step E5, asks whether Internet Explorer's output is graphic-based or animated, to which the answer is “NO”. With the “NO” answer, the next decision block, step E6, asks whether Internet Explorer's input is mostly GUI-based, to which the answer here is “NO” (since it is text-based).
With the “NO” answer, the next decision block is step E9, which asks whether the Developer's typing speed is 45 wpm or faster, to which the answer here is “NO”. At this point, the number of Developer User-Type Users concurrently running the Internet Explorer application (200 Developers) is added to the “Medium User” Total at step E12. The sequence block (step E14 of FIG. 1) then asks if there are “More Applications?” involved, which, in this example, is answered “YES” (since the Access 97 and IOCooker applications remain), and the flow sequence returns to step E2 on FIG. 1.
At step E2, the next application considered for the Developers User Type is the “Access 97” application (iii). The decision block at step E3, FIG. 2, asks whether Access 97 is either a 16-bit or MS-DOS application, to which the answer here is “NO”, since Access 97 is a 32-bit application. With the “NO” answer, the next decision block, step E4, asks whether Access 97's background processing is Heavy, to which the answer here is “YES”. At this point, the number of Developer User-Type Users concurrently running the Access 97 application (100 Developers) is added at step E11 to the Heavy User Category Total.
At this stage, the sequence block step E14, FIG. 1, again asks if there are “More Applications?” involved, which, in this example, is answered “YES” (since the IOCooker application (iv) is still in play), and the flow returns to step E2 to handle the next application.
The last application considered at step E2 for the Developers User Type is IOCooker (iv). The decision block at step E3, FIG. 2, asks whether IOCooker is either a 16-bit or MS-DOS application, to which the answer is “YES”, since this is a 16-bit application. At this point, the number of Developer User-Type Users concurrently running the IOCooker application (50 Developers) is added to the “Super Heavy User” Category Total at step E10, FIG. 2.
The “More Applications?” question is asked again at sequence block step E14, FIG. 1, and this time it is answered “NO” (since all four applications (i, ii, iii, iv) have now been handled), after which the flow sequence dictates that the step E15 question “More User Types?” is asked and is answered “NO”, since only one User Type was defined and has been handled. The sequence flow then continues by returning the total number of Super Heavy, Heavy, Medium, and Light User Totals for the Server Farm at step E16 to the Solution Generation flow (as indicated as output from step D13 in FIG. 1B of U.S. Ser. No. 09/813,670).
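Putting the decision blocks E3 through E13 together, the classification just walked through can be sketched as a single Python function. This is only a reading of FIGS. 1 and 2 as described above: branches not exercised by the example follow claims 1 and 2 where they specify a category, and are otherwise marked as assumptions; the field and function names match the illustrative profile sketch given earlier.

    def classify_user_weight(app, typing_45_wpm_or_faster):
        # E3: 16-bit or MS-DOS applications go straight to Super Heavy (E10).
        if app["is_16bit_or_msdos"]:
            return "super heavy"
        # E4: Heavy background processing goes to Heavy (E11).
        if app["background_processing"] == "heavy":
            return "heavy"
        # E5: graphic-based or animated output. This "YES" branch is not traversed
        # in the patent's example; treating it as Heavy is an assumption made here.
        if app["output"] in ("graphic", "animated"):
            return "heavy"
        # E6: is the input mostly GUI-based?
        if app["input"] == "GUI":
            # E7: is the background processing light?
            if app["background_processing"] == "light":
                # E8: mostly text-based output goes to Light (E13); otherwise typing
                # speed decides (claim 1, steps (e1)-(e3), and claim 2).
                if app["output"] == "text":
                    return "light"
                return "heavy" if typing_45_wpm_or_faster else "medium"
            # Background processing neither Heavy nor light: assumed Medium here.
            return "medium"
        # E9: input not mostly GUI-based, so typing speed decides: 45 wpm or faster
        # goes to Heavy (claim 1, steps (b3a)-(b3b)); slower goes to Medium (E12).
        return "heavy" if typing_45_wpm_or_faster else "medium"

    # Tallying the example Engineering farm reproduces the totals returned at step E16.
    example_apps = [
        ({"is_16bit_or_msdos": False, "background_processing": "light",
          "output": "text", "input": "GUI"}, 300),      # Attachmate terminal emulator
        ({"is_16bit_or_msdos": False, "background_processing": "medium",
          "output": "text", "input": "text"}, 200),     # Internet Explorer
        ({"is_16bit_or_msdos": False, "background_processing": "heavy",
          "output": "text", "input": "GUI"}, 100),      # Access 97
        ({"is_16bit_or_msdos": True, "background_processing": "super-heavy",
          "output": "text", "input": "text"}, 50),      # IOCooker
    ]
    totals = {}
    for app, users in example_apps:
        category = classify_user_weight(app, typing_45_wpm_or_faster=False)
        totals[category] = totals.get(category, 0) + users
    print(totals)   # {'light': 300, 'medium': 200, 'heavy': 100, 'super heavy': 50}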
For the Engineering Server Farm example, the user weighting algorithm then finalizes the results and returns the following information:
    • 300 Light Users
    • 200 Medium Users
    • 100 Heavy Users
    • 50 Super Heavy Users
      These results can then be input into step D13 of FIG. 1B, of U.S. Ser. No. 09/813,670 and used to calculate the Adjusted Users Total by utilizing this information on User-weights to help complete the Solution Generation.
Described herein has been a method and system for developing the User-weight category data for each application involved in each Server Farm, together with the number of Users involved with each User-weight category. This data can then be supplied to the Thin Client Sizing Tool, at the appropriate data input section of the Application Delivery Solution Configurator, to enable an optimal configuration proposal suitable to a given customer's profile applicable to the customer-enterprise.
The resulting user weights can then be applied to the customer's configuration solution so that the number of users within the Server Farm can be better estimated with regard to the amount of processing they will incur. These weights are applied so that a typical analysis would portray a user weight as a percentage of a typical benchmark user. For example, a user weighted as Super Heavy could be 200% of a benchmark user, a Heavy user could be 100% (i.e., Heavy use IS typical of a benchmark user), a Medium user could be 67%, and a Light user 50%. When systematically applied to the number of users within the example Server Farm, the 650 original users become 484 adjusted users using the following calculation:
    • (# Super Heavy users * 200%)+
    • (# Heavy users * 100%)+
    • (# Medium users * 67%)+
    • (# Light users * 50%)=
    • (50 * 2)+(100 * 1)+(200 * 0.67)+(300 * 0.5)=100+100+134+150=484 adjusted users.
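The same normalization can be written out as a short calculation. The Python sketch below simply encodes the example weights from the paragraph above (200%, 100%, 67%, 50%) and applies them to the Engineering Server Farm category totals; the names are illustrative.

    # Weight of each category relative to a benchmark (Heavy) user, per the example above.
    WEIGHTS = {"super heavy": 2.00, "heavy": 1.00, "medium": 0.67, "light": 0.50}

    def adjusted_user_total(category_totals):
        # Normalize raw concurrent-user counts into benchmark-equivalent users.
        return sum(users * WEIGHTS[category] for category, users in category_totals.items())

    engineering_totals = {"light": 300, "medium": 200, "heavy": 100, "super heavy": 50}
    print(adjusted_user_total(engineering_totals))   # 484.0 adjusted users (from 650 raw users)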
While one preferred embodiment of the invention has been described, other variations and embodiments may be realized which are still encompassed by the following claims.

Claims (2)

1. In a Thin Client Sizing Tool for configuring an Optimal Server Farm for a customer-enterprise, wherein client-customer profile data is developed from a client-customer as input to a Solution Configurator algorithm for developing an optimized solution for a network of one or more Server Farms and associated modules most suitable for said client-customer needs, a method for utilizing said client-customer profile data to specific User-weight categories for each User-Type involved with each specific Application in each said Server Farm, wherein concurrent Users who are processing applications on each Server Farm are categorized as to the type of User and the usage weight of the User such that a User weight value is assigned to each User in each application, said User weight being categorized as Light User, Medium User, Heavy User, said method comprising the steps of:
(a) sorting-out and eliminating those Applications which are 16-bit or MS-DOS;
wherein step (a) includes the step of:
(a1) selecting those Applications whose background processing involves Heavy Users;
(a2) adding the number of Users using each Application as a Heavy User;
(a3) accumulating the total number of User-Type “Heavy” Users utilizing each and every Application and applying a 100% weight factor for each of said Heavy Users for input to said Solution Configuration algorithm;
(b) sorting-out and eliminating those Applications which involve Heavy Users in processing;
wherein step (b) includes the steps of:
(b1) selecting those Applications whose output is NOT graphics-based or animated;
(b2) selecting those Applications whose input is NOT mostly GUI-based;
(b3) selecting those User-Types whose typing speed is slower than 45 words/minute;
wherein step (b3) includes the steps of:
(b3a) selecting those Applications whose input is faster than 45 words/minute;
(b3b) adding the number of such User-Type Users to the Heavy User total; and applying a 100% weight factor to said Heavy User total;
(b3c) accumulating the total number of Users of the Heavy category for each and every one of the Applications used in each Server Farm;
(b4) adding-up the number of User-Type Users utilizing said Applications as a “Medium” User;
(b5) accumulating the total number of Medium Users and applying a weight factor of 67% value for each said Medium User category for input to said Solution Configuration algorithm;
(c) sorting-out and eliminating those Applications which are graphic-based or animated;
(d) selecting those Applications having an input which is mostly GUI-based;
(e) selecting those Applications whose background processing involves Light Users;
wherein step (e) includes the steps of:
(e1) selecting those Applications which are NOT mostly text-based;
(e2) selecting those User-types whose typing speed is faster than 45 words/minute;
(e3) adding the number of User-type Users to the “Heavy” category and utilizing a 100% weight factor value to the number of Heavy User-type Users;
(e4) accumulating the total number of Heavy Users for each and every one of the Applications used in each Server Farm for input to said Solution Configuration algorithm;
(f) selecting those Applications whose output is mostly text-based;
(g) adding up the number of Light Users for each Application type;
(h) accumulating the total number of User-Type Users using Applications involving a “Light” User;
(i) utilizing a weight factor of 50% to establish a weight factor value for said Light users;
(j) inputting said Light User weight factor value to said Solution Configurator algorithm for said total number of Light Users in each server Farm.
2. The method of claim 1 wherein step (e1) includes the steps of:
(e1a) selecting those User-Types whose typing speed is slower than 45 words/minute;
(e1b) adding the number of Medium Users using each Application and assigning a 67% weight factor value to said Medium Users;
(e1c) accumulating the total number of Medium Users for each and every one of the Applications used in each Server Farm;
(e1d) inputting the total number of Medium Users and said 67% weight factor value to said Solution Configuration algorithm.
US09/813,668, filed 2001-03-21 (priority date 2001-03-21), “Method for calculating user weights for thin client sizing tool”, granted as US7035919B1; status: Expired - Lifetime.

Priority Applications (1)

US09/813,668; priority date 2001-03-21; filing date 2001-03-21; Method for calculating user weights for thin client sizing tool

Publications (1)

US7035919B1, published 2006-04-25

Family

ID=36191181

Family Applications (1)

US09/813,668 (Expired - Lifetime; granted as US7035919B1); priority date 2001-03-21; filing date 2001-03-21; Method for calculating user weights for thin client sizing tool

Country Status (1)

US: US7035919B1

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6327622B1 (en) * 1998-09-03 2001-12-04 Sun Microsystems, Inc. Load balancing in a network environment
US6496948B1 (en) * 1999-11-19 2002-12-17 Unisys Corporation Method for estimating the availability of an operating server farm
US6571283B1 (en) * 1999-12-29 2003-05-27 Unisys Corporation Method for server farm configuration optimization
US20020002613A1 (en) * 2000-05-08 2002-01-03 Freeman Thomas D. Method and apparatus for communicating among a network of servers
US6687735B1 (en) * 2000-05-30 2004-02-03 Tranceive Technologies, Inc. Method and apparatus for balancing distributed applications
US6567767B1 (en) * 2000-09-19 2003-05-20 Unisys Corporation Terminal server simulated client performance measurement tool
US6691259B1 (en) * 2000-09-19 2004-02-10 Unisys Corporation Terminal server data file extraction and analysis application
US20020116605A1 (en) * 2000-12-21 2002-08-22 Berg Mitchell T. Method and system for initiating execution of software in response to a state

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE48803E1 (en) 2002-04-26 2021-11-02 Sony Interactive Entertainment America Llc Method for ladder ranking in a game
USRE48802E1 (en) 2002-04-26 2021-11-02 Sony Interactive Entertainment America Llc Method for ladder ranking in a game
USRE48700E1 (en) 2002-04-26 2021-08-24 Sony Interactive Entertainment America Llc Method for ladder ranking in a game
US9762631B2 (en) 2002-05-17 2017-09-12 Sony Interactive Entertainment America Llc Managing participants in an online session
US10659500B2 (en) 2002-05-17 2020-05-19 Sony Interactive Entertainment America Llc Managing participants in an online session
US9729621B2 (en) 2002-07-31 2017-08-08 Sony Interactive Entertainment America Llc Systems and methods for seamless host migration
US20120166651A1 (en) * 2002-07-31 2012-06-28 Mark Lester Jacob Systems and Methods for Seamless Host Migration
US9516068B2 (en) 2002-07-31 2016-12-06 Sony Interactive Entertainment America Llc Seamless host migration based on NAT type
US8972548B2 (en) * 2002-07-31 2015-03-03 Sony Computer Entertainment America Llc Systems and methods for seamless host migration
US20110060429A1 (en) * 2003-01-17 2011-03-10 Fischer John G Method of Displaying Product and Service Performance Data
US20090213124A1 (en) * 2003-01-17 2009-08-27 Fischer John G Method of Displaying Product and Service Performance Data
US7834878B2 (en) 2003-01-17 2010-11-16 Fischer John G Method of displaying product and service performance data
US7483026B2 (en) * 2003-01-17 2009-01-27 Fischer John G Method of displaying product and service performance data
US8554346B2 (en) 2003-01-17 2013-10-08 Shawdon, Lp Method of displaying product and service performance data
US20040172217A1 (en) * 2003-01-17 2004-09-02 Fischer John G. Method of displaying product and service performance data
US7620714B1 (en) * 2003-11-14 2009-11-17 Cisco Technology, Inc. Method and apparatus for measuring the availability of a network element or service
US8180922B2 (en) 2003-11-14 2012-05-15 Cisco Technology, Inc. Load balancing mechanism using resource availability profiles
US20060106938A1 (en) * 2003-11-14 2006-05-18 Cisco Systems, Inc. Load balancing mechanism using resource availability profiles
US7631225B2 (en) 2004-10-01 2009-12-08 Cisco Technology, Inc. Approach for characterizing the dynamic availability behavior of network elements
US20060075275A1 (en) * 2004-10-01 2006-04-06 Dini Cosmin N Approach for characterizing the dynamic availability behavior of network elements
US7974216B2 (en) 2004-11-22 2011-07-05 Cisco Technology, Inc. Approach for determining the real time availability of a group of network elements
US20060165052A1 (en) * 2004-11-22 2006-07-27 Dini Cosmin N Approach for determining the real time availability of a group of network elements
US20090133030A1 (en) * 2004-12-15 2009-05-21 International Business Machines Corporation System for on demand task optimization
US8055941B2 (en) 2004-12-15 2011-11-08 International Business Machines Corporation System for on demand task optimization
US20060143516A1 (en) * 2004-12-15 2006-06-29 International Business Machines Corporation System for on demand task optimization
US7500140B2 (en) * 2004-12-15 2009-03-03 International Business Machines Corporation System for on demand task optimization
US20060168230A1 (en) * 2005-01-27 2006-07-27 Caccavale Frank S Estimating a required number of servers from user classifications
US20060259348A1 (en) * 2005-05-10 2006-11-16 Youbet.Com, Inc. System and Methods of Calculating Growth of Subscribers and Income From Subscribers
US10547670B2 (en) 2007-10-05 2020-01-28 Sony Interactive Entertainment America Llc Systems and methods for seamless host migration
US10063631B2 (en) 2007-10-05 2018-08-28 Sony Interactive Entertainment America Llc Systems and methods for seamless host migration
US11228638B2 (en) 2007-10-05 2022-01-18 Sony Interactive Entertainment LLC Systems and methods for seamless host migration
US20110041659A1 (en) * 2009-08-21 2011-02-24 Dennis Clifford Williams Portable duct board cutting table
US8775125B1 (en) * 2009-09-10 2014-07-08 Jpmorgan Chase Bank, N.A. System and method for improved processing performance
US10365986B2 (en) 2009-09-10 2019-07-30 Jpmorgan Chase Bank, N.A. System and method for improved processing performance
US11036609B2 (en) 2009-09-10 2021-06-15 Jpmorgan Chase Bank, N.A. System and method for improved processing performance
US8712972B2 (en) * 2009-09-22 2014-04-29 Sybase, Inc. Query optimization with awareness of limited resource usage
US20110072008A1 (en) * 2009-09-22 2011-03-24 Sybase, Inc. Query Optimization with Awareness of Limited Resource Usage
US20120041799A1 (en) * 2010-08-13 2012-02-16 Fuji Xerox Co., Ltd. Information processing apparatus and computer readable medium
US8738416B2 (en) * 2010-08-13 2014-05-27 Fuji Xerox Co., Ltd. Information processing apparatus and computer readable medium
US20120047250A1 (en) * 2010-08-23 2012-02-23 Intuit Inc. Scalability breakpoint calculator for a software product
US8732299B2 (en) * 2010-08-23 2014-05-20 Intuit Inc. Scalability breakpoint calculator for a software product
US20120054261A1 (en) * 2010-08-25 2012-03-01 Autodesk, Inc. Dual modeling environment
US9002946B2 (en) * 2010-08-25 2015-04-07 Autodesk, Inc. Dual modeling environment in which commands are executed concurrently and independently on both a light weight version of a proxy module on a client and a precise version of the proxy module on a server
US9049174B2 (en) 2011-08-09 2015-06-02 Mobileframe, Llc Maintaining sessions in a smart thin client server
US9053444B2 (en) 2011-08-09 2015-06-09 Mobileframe, Llc Deploying applications in a smart thin client server
US20130042312A1 (en) * 2011-08-09 2013-02-14 Mobileframe Llc Authentication in a smart thin client server
US10146954B1 (en) 2012-06-11 2018-12-04 Quest Software Inc. System and method for data aggregation and analysis
US9501553B1 (en) * 2013-01-25 2016-11-22 Humana Inc. Organization categorization system and method
US9020945B1 (en) * 2013-01-25 2015-04-28 Humana Inc. User categorization system and method
US10303705B2 (en) 2013-01-25 2019-05-28 Humana Inc. Organization categorization system and method
US10042572B1 (en) * 2013-03-14 2018-08-07 EMC IP Holdings Company LLC Optimal data storage configuration
US10326748B1 (en) 2015-02-25 2019-06-18 Quest Software Inc. Systems and methods for event-based authentication
US10417613B1 (en) 2015-03-17 2019-09-17 Quest Software Inc. Systems and methods of patternizing logged user-initiated events for scheduling functions
US9990506B1 (en) 2015-03-30 2018-06-05 Quest Software Inc. Systems and methods of securing network-accessible peripheral devices
US10140466B1 (en) 2015-04-10 2018-11-27 Quest Software Inc. Systems and methods of secure self-service access to content
US10536352B1 (en) 2015-08-05 2020-01-14 Quest Software Inc. Systems and methods for tuning cross-platform data collection
US10157358B1 (en) 2015-10-05 2018-12-18 Quest Software Inc. Systems and methods for multi-stream performance patternization and interval-based prediction
US10218588B1 (en) 2015-10-05 2019-02-26 Quest Software Inc. Systems and methods for multi-stream performance patternization and optimization of virtual meetings
US10200501B1 (en) * 2015-12-16 2019-02-05 Amazon Technologies, Inc. Program code allocation based on processor features
US11050846B2 (en) * 2015-12-16 2021-06-29 Amazon Technologies, Inc. Program code allocation based on processor features
US20190166228A1 (en) * 2015-12-16 2019-05-30 Amazon Technologies, Inc. Program code allocation based on processor features
US11283900B2 (en) * 2016-02-08 2022-03-22 Microstrategy Incorporated Enterprise performance and capacity testing
US11671505B2 (en) 2016-02-08 2023-06-06 Microstrategy Incorporated Enterprise health score and data migration
US10142391B1 (en) * 2016-03-25 2018-11-27 Quest Software Inc. Systems and methods of diagnosing down-layer performance problems via multi-stream performance patternization
US10765952B2 (en) 2018-09-21 2020-09-08 Sony Interactive Entertainment LLC System-level multiplayer matchmaking
US10695671B2 (en) 2018-09-28 2020-06-30 Sony Interactive Entertainment LLC Establishing and managing multiplayer sessions
US11364437B2 (en) 2018-09-28 2022-06-21 Sony Interactive Entertainment LLC Establishing and managing multiplayer sessions
US11379728B2 (en) * 2019-01-09 2022-07-05 Red Hat, Inc. Modified genetic recombination operator for cloud optimization
US11263111B2 (en) 2019-02-11 2022-03-01 Microstrategy Incorporated Validating software functionality
US11650894B2 (en) 2019-03-20 2023-05-16 Salesforce, Inc. Content-sensitive container scheduling on clusters
US11249874B2 (en) * 2019-03-20 2022-02-15 Salesforce.Com, Inc. Content-sensitive container scheduling on clusters
US11637748B2 (en) 2019-08-28 2023-04-25 Microstrategy Incorporated Self-optimization of computing environments
US11669420B2 (en) 2019-08-30 2023-06-06 Microstrategy Incorporated Monitoring performance of computing systems
US11354216B2 (en) 2019-09-18 2022-06-07 Microstrategy Incorporated Monitoring performance deviations
US11360881B2 (en) 2019-09-23 2022-06-14 Microstrategy Incorporated Customizing computer performance tests
US11829287B2 (en) 2019-09-23 2023-11-28 Microstrategy Incorporated Customizing computer performance tests
US11438231B2 (en) 2019-09-25 2022-09-06 Microstrategy Incorporated Centralized platform management for computing environments

Similar Documents

Publication Publication Date Title
US7035919B1 (en) Method for calculating user weights for thin client sizing tool
US7047177B1 (en) Thin client sizing tool for enterprise server farm solution configurator
US7050961B1 (en) Solution generation method for thin client sizing tool
US11656915B2 (en) Virtual systems management
US6963828B1 (en) Metafarm sizer configuration optimization method for thin client sizing tool
US7543060B2 (en) Service managing apparatus for keeping service quality by automatically allocating servers of light load to heavy task
US7979520B2 (en) Prescriptive architecture recommendations
US7979857B2 (en) Method and apparatus for dynamic memory resource management
US7406689B2 (en) Jobstream planner considering network contention & resource availability
US8046466B2 (en) System and method for managing resources
US8924791B2 (en) System including a vendor computer system for testing software products in a cloud network
US6898564B1 (en) Load simulation tool for server resource capacity planning
US7350209B2 (en) System and method for application performance management
JP2005538459A (en) Method and apparatus for root cause identification and problem determination in distributed systems
JP2004521411A (en) System and method for adaptive reliability balancing in a distributed programming network
US20050132379A1 (en) Method, system and software for allocating information handling system resources in response to high availability cluster fail-over events
US20210255899A1 (en) Method for Establishing System Resource Prediction and Resource Management Model Through Multi-layer Correlations
US20060250970A1 (en) Method and apparatus for managing capacity utilization estimation of a data center
US7925755B2 (en) Peer to peer resource negotiation and coordination to satisfy a service level objective
US20180081716A1 (en) Outcome-based job rescheduling in software configuration automation
CN108200151A (en) ISCSI Target load-balancing methods and device in a kind of distributed memory system
US7062426B1 (en) Method for calculating memory requirements for thin client sizing tool
Xiong et al. Trust-based resource allocation in web services
US7873715B1 (en) Optimized instrumentation of web pages for performance management
JP2007265244A (en) Performance monitoring device for web system

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, SHARON MARIE;EISMANN, LEONARD EUGENE;MCDONALD, KATHRYN ANN;REEL/FRAME:011685/0705

Effective date: 20010320

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION, DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION, DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

AS Assignment

Owner name: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE

Free format text: PATENT SECURITY AGREEMENT (PRIORITY LIEN);ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:023355/0001

Effective date: 20090731

AS Assignment

Owner name: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE

Free format text: PATENT SECURITY AGREEMENT (JUNIOR LIEN);ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:023364/0098

Effective date: 20090731

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT, ILLINOIS

Free format text: SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:026509/0001

Effective date: 20110623

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY;REEL/FRAME:030004/0619

Effective date: 20121127

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE;REEL/FRAME:030082/0545

Effective date: 20121127

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL TRUSTEE, NEW YORK

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:042354/0001

Effective date: 20170417

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT, ILLINOIS

Free format text: SECURITY INTEREST;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:044144/0081

Effective date: 20171005

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR TO GENERAL ELECTRIC CAPITAL CORPORATION);REEL/FRAME:044416/0358

Effective date: 20171005

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION;REEL/FRAME:054231/0496

Effective date: 20200319