WO1997022208A2 - Providing access to services in a telecommunications system - Google Patents

Providing access to services in a telecommunications system

Info

Publication number
WO1997022208A2
Authority
WO
WIPO (PCT)
Prior art keywords
computer system
reserve
main computer
threads
thread
Prior art date
Application number
PCT/GB1996/002961
Other languages
French (fr)
Other versions
WO1997022208A3 (en)
Inventor
Stephen Turrell
William Nokes
Martyn HOWARD (deceased)
Original Assignee
Northern Telecom Limited
NICHOL, John (Heir of HOWARD, Martyn (deceased))
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GBGB9525214.4A (GB9525214D0)
Application filed by Northern Telecom Limited and NICHOL, John (Heir of HOWARD, Martyn (deceased))
Publication of WO1997022208A2
Publication of WO1997022208A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2038Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/25Routing or path finding in a switch fabric
    • H04L49/253Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L49/255Control mechanisms for ATM switching fabrics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/0016Arrangements providing connection between exchanges
    • H04Q3/0062Provisions for network management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/42Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker
    • H04Q3/54Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised
    • H04Q3/545Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised using a stored programme
    • H04Q3/54541Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised using a stored programme using multi-processor systems
    • H04Q3/54558Redundancy, stand-by
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2023Failover techniques
    • G06F11/2033Failover techniques switching over of hardware resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2097Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13504Indexing scheme relating to selecting arrangements in general and for multiplex systems client/server architectures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13521Indexing scheme relating to selecting arrangements in general and for multiplex systems fault management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13526Indexing scheme relating to selecting arrangements in general and for multiplex systems resource management


Abstract

A communications network includes a plurality of switch elements, a controller for the switch elements, and a system manager. The controller comprises a main computer system and a reserve computer system, there being means for switching from the main computer system to the reserve computer system in the event of failure of the main computer system. The main computer system communicates state changes in the form of a message stream to the reserve computer system whereby to update the reserve system with the current network state. The main computer system also provides status information to the reserve computer system at regular intervals whereby to detect an in-service failure of the main computer system.

Description

PROVIDING ACCESS TO SERVICES IN A TELECOMMUNICATIONS SYSTEM
This invention relates to the delivery of services to users of a telecommunications system.
Telecommunications networks were originally developed to provide voice communication between subscribers, this basic service now being referred to as POTS. With the introduction of digital networks it has become feasible to provide a variety of services to system users. Typically these services originate from a number of service providers, each service running on a software application which may be unique to that provider. This has introduced the problem of inter-operability and of providing a single means of access by a subscriber to those services. One approach to this problem is described by M. Lapierre et al. in Electrical Communications, October 1994, pp. 260-7.
A further problem with advanced telecommunications networks is that of ensuring reliability by providing backup systems to take over in the event of a system fault so that a service to customers may be maintained. This is a particular problem with the network switches which, in a modern ATM network, may comprise a lower switch element layer built in specialised hardware and providing the basic performance, a real time controller (RTC) providing a control functionality for the switch elements, and a system manager providing the network management interfaces. In order to meet reliability requirements, a number of key system components must be duplicated so that, in the event of a component failure, the duplicate can take over. At present, high reliability is provided by duplicating all hardware elements. This is costly both in terms of hardware costs and development costs as such systems are proprietary in nature.
An object of the invention is to provide improved access by telecommunications subscribers to network services.
It is another object of the invention to provide an open distributed computer system associated with a telecommunications network.
It is a further object of the invention to provide a nucleus platform supporting a plurality of server interfaces.
According to one aspect of the invention there is provided a communications network, including a plurality of switch elements, a controller for the switch elements, and a system manager, wherein the controller comprises a main computer system and a reserve computer system, there being means for switching from the main computer system to the reserve computer system in the event of failure of the main computer system, wherein the main computer system has means for communicating state changes in the form of a message stream to the reserve computer system whereby to update the reserve system with the current network state, and wherein the main computer system is arranged to provide status information to the reserve computer system at regular intervals whereby to detect an in-service failure of the main computer system.
According to another aspect of the invention there is provided a method of controlling a telecommunications network including a plurality of switch elements, a controller for the switch elements, and a system manager, wherein the controller comprises a main computer system and a reserve computer system, the method including communicating state changes in the main computer system to the reserve computer system whereby to update the reserve system with the current network state, providing status information from the main computer system to the reserve computer system at regular intervals whereby to detect an in-service failure of the main computer system, and substituting the main computer system with the reserve computer system in the event of a detected failure of the main system.
According to a further aspect of the invention there is provided a communications network providing client access to a plurality of servers, the network including a plurality of capsules each containing client and server objects and each incorporating a nucleus supporting a plurality of client/server interfaces, wherein each said nucleus includes means for pooling tasks, means for providing threads and for assigning those threads to respective tasks, and means for binding an assigned thread to its respective task during the execution of that task.
In our arrangement, the conventional tightly coupled duplicate hardware is replaced by two computing platforms, a master and a spare, which are loosely coupled via a network. The master handles all the work, with the spare monitoring activity and keeping in step so as to be ready to take over in the event that the master fails. Obviously, if the spare fails, the master continues to handle the system. This is cheaper in terms of hardware since no additional work is required to have the hardware self-check and manage switch-over.
An embodiment of the invention will now be described with reference to the accompanying drawings in which:-
Figure 1 is a schematic diagram of a telecommunications network provided with duplication of control functions;
Figure 2 is a schematic diagram illustrating client/server interaction via an open distributed system associated with a telecommunications network;
Figure 3 illustrates the general functionality of a nucleus of the system of figure 2;
Figure 4 shows the multithreading environment of the nucleus of figure 2; and Figure 5 illustrates a thread/task model of the nucleus of figures 3 and 4.
Referring to figure 1, the network includes a plurality of switch elements (SE) which are used to determine traffic routing in the network. The switch elements are controlled via a real time controller and system management is provided by a network management interface (NMI) allowing an operator to monitor and configure the system. The controller functions are provided by a main computer and are duplicated or backed up by a reserve computer. In normal operation, the master provides all the system control functions, but, in the event of failure of the master, its function is replaced by the reserve. In this arrangement it is necessary to ensure that the reserve has up-to-date information about the state of the system, to detect failure, and to take over control when the master fails. All of this is handled within a deterministic real-time computing environment. The first problem is handled by a process referred to as journalling: the master communicates state changes to the spare as a stream of messages. Failure detection is provided via a 'heartbeat' protocol wherein the master and reserve exchange status messages at regular intervals; a missing message denotes potential failure. The frequency of such heartbeats determines the maximum time taken to detect failure. Finally, switch-over in the event of failure is managed by providing a path from each SE to both master and reserve; on failure, the SE is notified to report via the backup path to the reserve.
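The heartbeat-based failure detection can be made concrete with a small sketch. The C++ code below is a hypothetical illustration only: the class and constant names are not taken from the patent, the 100 ms period is arbitrary, and the reserve simply treats a gap longer than two heartbeat periods as a potential failure of the master.

```cpp
// Hypothetical sketch of heartbeat-based failure detection on the reserve computer.
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

using Clock = std::chrono::steady_clock;

class HeartbeatMonitor {
public:
    explicit HeartbeatMonitor(std::chrono::milliseconds period)
        : period_(period), last_(Clock::now()) {}

    // Called by the reserve each time a status message arrives from the master.
    void recordHeartbeat() {
        std::lock_guard<std::mutex> lock(m_);
        last_ = Clock::now();
    }

    // True if the master has missed its heartbeat window (potential failure).
    bool masterSuspectedFailed() const {
        std::lock_guard<std::mutex> lock(m_);
        return Clock::now() - last_ > 2 * period_;   // allow one missed message
    }

private:
    std::chrono::milliseconds period_;
    Clock::time_point last_;
    mutable std::mutex m_;
};

int main() {
    HeartbeatMonitor monitor(std::chrono::milliseconds(100));

    // Simulate the master sending a few heartbeats, then failing silently.
    for (int i = 0; i < 3; ++i) {
        monitor.recordHeartbeat();
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(300));  // master goes quiet

    if (monitor.masterSuspectedFailed())
        std::cout << "Heartbeat missed: reserve takes over, SEs report via backup path\n";
}
```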
Referring now to figure 2, the open distributed computer system incorporates a number of capsules 11 each containing client and server objects communicating via remote procedure calls (RPC) and each incorporating a nucleus 12. Communication between capsules is effected via telecommunications network 13. Within each capsule, the nucleus provides the infrastructure required for distribution, communication and multi-threading support for client and server objects. The nucleus further provides application concurrency. In the following description we define logical units of concurrency and physical units of concurrency as threads and tasks respectively. In particular:-
A logical unit of concurrency is an independent execution path through an application which can be executed concurrently with another logical unit of concurrency. A physical unit of concurrency is the resource required to enable execution of a logical unit of concurrency.
The nucleus is a modular design for real-time communications infrastructure which can be configured at compile and run time to meet different domain requirements. Particular emphasis is on deterministic performance and resource utilisation. By providing a well-defined API covering all the necessary functionality, applications become portable and, by using the common infrastructure, are able to interwork regardless of the computing platform involved. The nucleus provides a set of well defined Application Programmer Interfaces, and a strict modular approach allows alternative implementations of key components to be selected either at link or run time to meet particular application needs. By providing a common interface to the protocol stacks at the application level, the nucleus allows a number of separate protocol stacks to be utilised simultaneously at run-time by application objects. The use of a wrapper, which translates between protocol and Nucleus interface, allows most protocol stacks to be incorporated into the Nucleus communications framework. By providing a standard model of concurrency and resource usage, the application programmer can construct concurrent programs independently of the mechanisms by which the concurrency is actually provided. These can be supplied by the host Operating System (e.g. POSIX Threads, or real-time kernel threads) or by proprietary components without changing the application code. This ensures that applications can have the flexibility of concurrency without sacrificing portability. At the same time, the model of resource usage ensures that the resources are controlled, prioritised and behave in a deterministic way. Within the concurrent environment, events and resources may be generated asynchronously and need handling according to priority. Again, the resources used for this purpose must be controlled to ensure that priorities are honoured. To this end, the nucleus provides an asynchronous event notification mechanism and a flexible resource management system which combine to deliver the necessary control.
In a telecommunications environment, large numbers of events are occurring which are of interest only to parts of the system. In many cases, the system models are dependent on asynchronous events to progress activity (e.g. a standard call model). In order to support this, a mechanism to manage the raising, queuing and handling of events in a well-defined context with a view to priority and determinism is necessary. The Asynchronous Event Notification (AEN) system is concerned with managing the registration of events and handlers, and arranging for the queuing of raised events and the execution of the appropriate handlers in a pre-arranged context and priority. This is handled via the concept of a domain, which provides execution resources and a priority for the handling of events raised in that domain. Each domain may have several handlers for each event and may handle an event in a different manner from that of other domains. Each domain has a handler thread whose priority can be specified on domain creation, a set of mappings between events and handlers, and a queue for raised events. Domains can be created and destroyed dynamically. Events are raised on a domain which may be specified directly or defaulted to the domain associated with the thread, allowing code to raise events without being aware of the target domain. The event is matched to its handler and put in a queue. This queue is processed, and the handlers called, within the context of the handler thread for the domain. Generic data can be passed into the event when it is raised and will be passed to the handler. The default domain for the handler thread is the domain itself, and event handlers are free to raise events themselves. In fact, since an event handler is run in a thread, there is no restriction on what a handler may do. There are global events. These have default handlers which can be changed dynamically on a system-wide basis. These handlers can also be overridden in a domain. There are also events which are domain specific. The creation and destruction of these domain specific events and changing handlers for events is dynamic. However, changing the handler of an event will not affect any currently queued event raises. Global events can be raised across all domains in the system with a single request. In a given domain, any over-riding handler will be used instead of the global default handler for that event.
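As a rough illustration of the domain concept, the hypothetical C++ sketch below gives a domain a handler thread, an event-to-handler map and a queue of raised events. None of the identifiers come from the patent, and priority handling, multiple handlers per event and global events are omitted for brevity.

```cpp
// Hypothetical sketch of an AEN-style domain: event/handler registration, a queue of
// raised events, and a dedicated handler thread that drains the queue.
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

class Domain {
public:
    Domain() : worker_([this] { run(); }) {}
    ~Domain() {
        { std::lock_guard<std::mutex> lock(m_); stop_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Map an event name to a handler for this domain.
    void registerHandler(const std::string& event, std::function<void(const std::string&)> h) {
        std::lock_guard<std::mutex> lock(m_);
        handlers_[event] = std::move(h);
    }

    // Raise an event on this domain; generic data is passed through to the handler.
    void raise(const std::string& event, const std::string& data) {
        { std::lock_guard<std::mutex> lock(m_); queue_.push({event, data}); }
        cv_.notify_one();
    }

private:
    void run() {  // the domain's handler thread calls handlers in its own context
        std::unique_lock<std::mutex> lock(m_);
        while (true) {
            cv_.wait(lock, [this] { return stop_ || !queue_.empty(); });
            if (stop_ && queue_.empty()) return;
            auto [event, data] = queue_.front();
            queue_.pop();
            auto it = handlers_.find(event);
            if (it != handlers_.end()) {
                auto handler = it->second;
                lock.unlock();          // run the handler outside the lock
                handler(data);
                lock.lock();
            }
        }
    }

    std::map<std::string, std::function<void(const std::string&)>> handlers_;
    std::queue<std::pair<std::string, std::string>> queue_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread worker_;
};

int main() {
    Domain callControl;
    callControl.registerHandler("digit", [](const std::string& d) {
        std::cout << "digit event handled with data: " << d << "\n";
    });
    callControl.raise("digit", "5");
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // let the handler thread run
}
```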
The nucleus provides a generic scheme to unify the majority of system resources and any number of user defined resources under a single mechanism. This single mechanism provides a consistent way of integrating external resources with internal resources, thereby simplifying the application level problems. Resources such as interface handles, vouchers, file descriptors (read and write) and channels can be managed in the following ways. Resources can be waited on either singly or in sets. Sets are created and maintained by the user. Waiting can be time bound or indefinite. A wait blocks the thread of control calling the wait (and only that thread) until either a resource becomes available (ready) or a time-out (if specified) occurs. Alternatively, events or call-backs can be registered to occur when a resource becomes available (ready). If an event is requested it will be raised in the domain of the thread registering the request when the resource becomes available. For a call-back, the call-back function is queued to be executed on the stack of a special call-back thread. The resource module is driven by the application and the nucleus itself registering interest in the readiness of a resource or set of resources. The registered owner of the resource is then signalled by the resource manager to indicate interest in the resource's state. If the resource becomes available (ready) then the owner of the resource informs the resource manager, which then either wakes waiting threads, raises events or makes call-backs in order to inform interested parties that the resource has become ready. It is possible that the resource may not still be ready when an attempt is made to grab the resource, especially when many threads are waiting on a shared resource.
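As an informal illustration (not the nucleus API), the sketch below lets a thread wait on a set of resources with a time-out while a resource owner signals readiness through the set; the Resource, ResourceSet and waitAny names are invented.

```cpp
// Hypothetical sketch of waiting on a set of resources with an optional time-out.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct Resource {
    const char* name;
    bool ready = false;
};

class ResourceSet {
public:
    void add(Resource* r) { members_.push_back(r); }

    // The resource owner calls this when the resource becomes ready.
    void markReady(Resource* r) {
        { std::lock_guard<std::mutex> lock(m_); r->ready = true; }
        cv_.notify_all();   // wake any waiting threads
    }

    // Blocks the calling thread (and only that thread) until a member is ready
    // or the time-out expires. Returns the ready resource, or nullptr on time-out.
    Resource* waitAny(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(m_);
        Resource* found = nullptr;
        cv_.wait_for(lock, timeout, [&] {
            for (Resource* r : members_) if (r->ready) { found = r; return true; }
            return false;
        });
        return found;   // may still be nullptr: another waiter may have grabbed it first
    }

private:
    std::vector<Resource*> members_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    Resource channel{"channel"}, handle{"interface handle"};
    ResourceSet set;
    set.add(&channel);
    set.add(&handle);

    std::thread owner([&] {                       // owner signals readiness later
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        set.markReady(&handle);
    });

    if (Resource* r = set.waitAny(std::chrono::milliseconds(500)))
        std::cout << r->name << " became ready\n";
    else
        std::cout << "time-out\n";
    owner.join();
}
```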
Referring now to figure 3, this shows in schematic form the pooling of tasks and their execution by threads within the nucleus. In the arrangement of figure 3, a thread can execute only when it is associated with a task. Tasks are shared and re-used by threads in an application. Once a thread has completed execution, the task it is executing on is released and can be used by another thread.
In figure 3 the tasks are represented by triangles and the threads by squares. The diagram shows the life-cycle of a task. A task starts in a free pool, i.e. it is unbound; it is then bound to a thread, and the thread/task pair is now ready to execute and enters a run-queue. The thread/task pairs in the run-queue are then scheduled to share the available processor time. When a thread has completed execution, the task it is bound to is freed and returned to the free pool.
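To make the free-pool/bind/run-queue cycle concrete, here is a minimal, single-threaded C++ sketch. The Nucleus, Task and Thread types are invented for illustration; real tasks would of course carry a stack and run concurrently.

```cpp
// Hypothetical sketch of the task life-cycle of figure 3: tasks live in a free pool,
// are bound to threads, queue for execution and are returned to the pool afterwards.
#include <deque>
#include <functional>
#include <iostream>
#include <string>
#include <utility>

struct Task { int id; };                                          // physical unit of concurrency
struct Thread { std::string name; std::function<void()> body; };  // logical unit of concurrency

class Nucleus {
public:
    explicit Nucleus(int tasks) { for (int i = 0; i < tasks; ++i) freePool_.push_back({i}); }

    // Bind a thread to a free task; the pair joins the run-queue.
    bool submit(Thread t) {
        if (freePool_.empty()) return false;               // no physical resource available
        Task task = freePool_.front();
        freePool_.pop_front();
        runQueue_.push_back({task, std::move(t)});
        return true;
    }

    // Run every queued thread/task pair; the task returns to the free pool afterwards.
    void runAll() {
        while (!runQueue_.empty()) {
            auto [task, thread] = std::move(runQueue_.front());
            runQueue_.pop_front();
            std::cout << thread.name << " executing on task " << task.id << "\n";
            thread.body();
            freePool_.push_back(task);                      // release: task is re-usable
        }
    }

private:
    std::deque<Task> freePool_;
    std::deque<std::pair<Task, Thread>> runQueue_;
};

int main() {
    Nucleus nucleus(2);                                     // two tasks in the free pool
    nucleus.submit({"call-setup", [] { std::cout << "  ...call setup work\n"; }});
    nucleus.submit({"billing",    [] { std::cout << "  ...billing work\n"; }});
    nucleus.runAll();
}
```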
By separating logical and physical concurrency, threads and tasks, we can control the resources used to provide the concurrency in an application without affecting the logical concurrency of the application.
Our thread/task concurrency makes use of light-weight concurrency for its tasks, and provides an additional layer of logical concurrency on top. Additional benefits gained from this approach include:-
• inexpensive generation of logical concurrency;
• the physical resources required to provide concurrency can be limited, and controlled;
• physical resources can be pre-allocated, removing the latency introduced by dynamic resource allocation;
• control can be exercised over the scheduling of the threads, without control being required over the scheduling of the tasks that will execute those threads; and
• customised thread scheduling algorithms can be used for different environments, i.e. deadline scheduling for multi-media.
This approach to providing concurrency has the advantage over conventional means of concurrency provision in that each thread does not require resources such as stack and register store to be allocated. In order to provide light-weight concurrency, or multi-threading, the nucleus performs the following functions:-
• Prioritisation of concurrent activities - From the real-time work, a need for certain application operations to be handled more urgently than others has introduced the need for prioritisation of application activities. This means that more urgent application activities are given a higher priority. Higher priority activities are then performed in preference to lower priority activities.
• Control over resources required for multi-threading.
• Synchronisation mechanisms - In order for a multi-threaded application to be written in a safe manner, the concurrent components in the application must be able to synchronise their activity to guard critical regions of code against multiple concurrent usage.
• Incorporation of/interworking with Kernel, or other, threads packages.
As discussed above, threads are the logical unit of execution in an application. However, a thread does not have the necessary resources for actual execution. This allows us to have many threads (potential units of execution) in an application without the cost of providing the resources for each to be able to execute.
Provision of application concurrency requires mechanisms to control:
• thread management,
• thread memory management,
• thread synchronisation, and
• thread activity tracking.
Together these provide an environment for applications to successfully use multi-threading in a safe manner. This multi-threading environment is illustrated in figure 4. Providing a multi-threaded environment needs functionality to provide:
• the allocation of thread resources, i.e. the creation/destruction of threads;
• the tracking of thread activities. In order to provide debugging and diagnostic information it is necessary to follow the flow of execution from parent threads to children, and also possibly from threads in one object to another, i.e. following a call from a client to a server and back again; and
• priority control for a thread.
This functionality is provided by a thread management object.
Multi-threaded programs need to allocate and free memory blocks in much the same way that single threaded applications do. However, the use of standard routines, such as UNIX malloc and free, is not sufficient. In a multi-threaded environment, memory allocated using standard routines, and not explicitly freed by the application, will not be freed when a thread terminates. Instead the memory will be freed when the capsule terminates. This causes memory leaks and will lead to application memory requirements growing over time. Functionality which allows memory to be allocated to specific threads, and to be freed when the thread completes execution, is provided by a thread management object which is also responsible for providing portable, reliable memory allocation and freeing routines for use by the application.
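As a rough illustration of the thread-scoped allocation idea (not the patent's actual routines), the following sketch records each block against the allocating thread and frees everything that thread still owns when it completes, avoiding the leak described above.

```cpp
// Hypothetical sketch of thread-scoped memory management; ThreadAllocator is invented.
#include <cstdlib>
#include <iostream>
#include <map>
#include <mutex>
#include <thread>
#include <vector>

class ThreadAllocator {
public:
    // Allocate a block and remember which thread owns it.
    void* alloc(std::size_t bytes) {
        void* p = std::malloc(bytes);
        std::lock_guard<std::mutex> lock(m_);
        owned_[std::this_thread::get_id()].push_back(p);
        return p;
    }

    // Called when a thread completes: free everything it still owns.
    void releaseThread(std::thread::id id) {
        std::lock_guard<std::mutex> lock(m_);
        for (void* p : owned_[id]) std::free(p);
        owned_.erase(id);
    }

private:
    std::map<std::thread::id, std::vector<void*>> owned_;
    std::mutex m_;
};

int main() {
    ThreadAllocator allocator;

    std::thread worker([&] {
        char* buffer = static_cast<char*>(allocator.alloc(256));  // never freed explicitly
        buffer[0] = 'x';
        std::cout << "worker used its buffer\n";
    });
    auto id = worker.get_id();
    worker.join();
    allocator.releaseThread(id);   // thread-scoped cleanup instead of leaking until capsule exit
}
```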
Provision of a multi-threaded environment introduces a need for functionality to enable threads to synchronise their activity. This includes:
• providing a mechanism to allow mutual exclusion for protection of shared data;
• the ability for a number of threads to synchronise their state after a period of concurrent execution;
• mechanisms to allow threads to generate regular events in the nucleus, for use in heartbeats, or periodic checking of the state of another capsule; and
• the ability to do time-based waiting for events to be triggered by the nucleus.
The thread synchronisation primitives that can be used to provide the synchronisation functionality specified above include the following (a brief combined sketch follows the description of these primitives):
• Counting semaphores - used to protect resources, and critical regions, by limiting the number of threads which can hold locks on the semaphore at any one time. Thus, if three threads attempt to lock a semaphore with a limit of two threads, the third thread will be blocked until one of the first two threads returns the lock it has taken.
• Mutexes and Condition Variables
Mutexes are binary semaphores, i.e. counting semaphores with an active concurrency limit of one thread, with threads waiting to attain a lock on the mutex queued in priority order. Condition variables allow a number of threads to synchronise their activity upon the value of a shared state. Operations are provided to queue threads which are waiting for the value of the shared state to be modified by another thread so that they can continue execution, and to signal that the value of the shared state has been changed so that other threads may now be able to continue execution.
• Event Counters and Sequencers
Event counters hold a piece of state, and threads can be queued waiting for the state of an event counter to reach a particular value. The value of the event counter is controlled by threads making advance calls which cause the value to be increased by one. Sequencers provide threads with tickets whose values are guaranteed to be monotonically increasing. Together the two can be used to provide a guaranteed ordering of the execution of a number of threads, and can ensure that only a given number of threads are active at any one time.
Providing threads with time based synchronisation primitives requires the introduction of timers. Timers are structures which trigger events at a given relative, or absolute, time. A timer has an action associated with it. The action will be scheduled to execute as soon after the expiry time is reached as possible.
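The following hypothetical sketch shows how some of these primitives might behave, building a counting semaphore and an event counter/sequencer pair on top of ordinary mutexes and condition variables; the class names and the ticket-ordering example are illustrative only and are not the patent's interfaces.

```cpp
// Hypothetical sketch: a counting semaphore plus an event counter and sequencer used
// to give a guaranteed ordering of workers, built on standard mutexes/condition variables.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

class CountingSemaphore {
public:
    explicit CountingSemaphore(int limit) : count_(limit) {}
    void lock() {                               // blocks while the concurrency limit is reached
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this] { return count_ > 0; });
        --count_;
    }
    void unlock() {
        { std::lock_guard<std::mutex> l(m_); ++count_; }
        cv_.notify_one();
    }
private:
    int count_;
    std::mutex m_;
    std::condition_variable cv_;
};

class EventCount {                              // threads wait for the count to reach a value
public:
    void advance() {                            // increases the value by one
        { std::lock_guard<std::mutex> l(m_); ++value_; }
        cv_.notify_all();
    }
    void await(long value) {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [&] { return value_ >= value; });
    }
private:
    long value_ = 0;
    std::mutex m_;
    std::condition_variable cv_;
};

class Sequencer {                               // hands out monotonically increasing tickets
public:
    long ticket() { std::lock_guard<std::mutex> l(m_); return next_++; }
private:
    long next_ = 0;
    std::mutex m_;
};

int main() {
    CountingSemaphore sem(2);                   // at most two holders at once
    EventCount ec;
    Sequencer seq;

    auto worker = [&] {
        long t = seq.ticket();                  // take a ticket...
        ec.await(t);                            // ...and wait until it is our turn
        sem.lock();
        std::cout << "worker with ticket " << t << " running\n";
        sem.unlock();
        ec.advance();                           // let the next ticket holder proceed
    };

    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto& th : threads) th.join();
}
```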
Task Binding
Task binding is the mechanism by which waiting threads are associated with available tasks. This is effected via a task binding algorithm which addresses the following problems:
A number of task pools will be managed by the nucleus.
• Each pool can contain an arbitrary number of tasks.
• Task stack sizes are configurable; different tasks within the same pool may have different stack sizes.
• Low and high watermarks can be specified for each pool. The high and low watermark concept is a task resource management issue. Instead of creating and configuring the maximum number of tasks that a capsule will require during initialisation, reserving the maximum possible memory that the system will use, we create only a proportion of the tasks initially. This proportion is called the low watermark and is a lower limit on the number of tasks which are actually part of a given pool. Further tasks can be dynamically created, up to the limit specified by the high watermark, either on demand or using a lookahead mechanism. When tasks are no longer required they can be destroyed, freeing memory, as long as the low watermark is not breached. Dynamically creating, and destroying, tasks keeps the resources required by the nucleus down, at the cost of increased latency introduced by having to create tasks to execute threads during periods of high activity.
Pools can be either:
• reserved for use by threads whose priority is greater than a given value, or
• reserved for use by an interface, or group of interfaces, within a capsule.
Figure 5 is a diagrammatic representation of the task pools. In the diagram the four task pools are: G for general use; 1 for use by interface instances X1, X2 and Y2 only; 2 for use by threads of priority greater than or equal to 3; and 3 for use by threads of priority greater than or equal to 5. So low priority threads use tasks from the general pool only, hence ensuring that low priority threads do not hog all of the nucleus' tasks. Pool 1 is reserved for use by a specific set of interfaces; this ensures that a minimum service guarantee can be provided to those interfaces.
This design allows the task pools to be configured in a variety of ways. Task pools can be created and destroyed dynamically, allowing new capsules to ensure that they have sufficient resources to execute properly. High and low watermarks are not shown on the diagram, but have the effect that, as all the available tasks are consumed from a pool, more will be created until that pool has reached its high watermark. As the tasks are freed they will be destroyed until the pool reaches its low watermark once more.
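A minimal sketch of a watermarked, priority-reserved task pool in the spirit of figure 5 follows. The TaskPool interface, pool names and thresholds are invented for illustration, and stack sizes and interface-reserved pools are omitted.

```cpp
// Hypothetical sketch of a task pool with low/high watermarks and a priority threshold.
#include <iostream>
#include <string>
#include <utility>

class TaskPool {
public:
    TaskPool(std::string name, int low, int high, int minPriority)
        : name_(std::move(name)), low_(low), high_(high), minPriority_(minPriority),
          created_(low), inUse_(0) {}

    // A thread of the given priority asks for a task from this pool.
    bool acquire(int threadPriority) {
        if (threadPriority < minPriority_) return false;   // pool reserved for higher priorities
        if (inUse_ == created_) {
            if (created_ == high_) return false;            // high watermark reached
            ++created_;                                      // create a further task on demand
            std::cout << name_ << ": created task (" << created_ << " total)\n";
        }
        ++inUse_;
        return true;
    }

    // A task is handed back; surplus tasks are destroyed down to the low watermark.
    void release() {
        --inUse_;
        if (created_ > low_ && created_ - inUse_ > 0) {
            --created_;
            std::cout << name_ << ": destroyed task (" << created_ << " total)\n";
        }
    }

private:
    std::string name_;
    int low_, high_, minPriority_;
    int created_, inUse_;
};

int main() {
    TaskPool general("pool G", /*low*/2, /*high*/4, /*minPriority*/0);
    TaskPool urgent("pool 2", /*low*/1, /*high*/2, /*minPriority*/3);

    std::cout << std::boolalpha;
    std::cout << "priority 1 thread on pool 2: " << urgent.acquire(1) << "\n";  // refused
    for (int i = 0; i < 3; ++i) general.acquire(1);   // third acquire grows the pool
    general.release();                                 // shrinks back towards the low watermark
}
```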
The design of the task pools is flexible, but if task pools are not used by the application little or no overhead should be incurred. It is important in the task pool implementation to ensure that the costs of task pool flexibility are only incurred by those requiring them.
Task Scheduling
There may be several thread/task pairs bound and awaiting execution in the nucleus at any one time. The nucleus must organise these activities in some ordered manner; this process is usually referred to as scheduling.
Task scheduling can be of two basic types:
• Non-Preemptive Scheduling - Once a task has begun execution, it continues to execute until it chooses to yield execution to another task. The current nucleus implements a simple non-preemptive scheduler. This is simple to implement and allows the use of non-re-entrant libraries. However, it lacks fairness in that a thread/task may hog the processor.
• Preemptive Scheduling - A running thread/task can be stopped and another thread/task allowed to run. This increases the fairness of the scheduling, but requires that all code be written in a re-entrant manner or non-re-entrant code be protected by critical regions. POSIX provides a preemptive scheduler.
In preemptive scheduling where threads have priorities, the thread that has the highest priority runs. If a thread with a higher priority becomes ready, then it should pre-empt any lower priority threads. With preemptive scheduling, a number of policies can be used to control when thread/task pairs are switched out and replaced by others:
• First In First Out (FIFO) - The thread/task at the head of the highest priority queue runs to full completion, or until it yields execution, unless pre-empted by a higher priority thread.
• Round Robin (RR) - The threads in the highest priority queue are time-sliced, i.e. they share the available processor resources. Each thread executes for a given time period, called a quantum, and is then pre-empted.
• Time-sharing - Thread/task pairs are given a quantum according to their priority, a larger quantum for higher priority activities, and then all activities are time-sliced according to their quantum and not their priority.
When Kernel threads are not available, a scheduler must be provided by the nucleus. A light-weight, non-preemptive, priority based, round-robin-like scheduler is provided as the default scheduler because:
• a non-preemptive scheduler is easier to implement than a preemptive one,
• it does not require re-entrant libraries to be written for the standard C and UNIX libraries,
• it can be upgraded at a later date, providing the Operating system libraries that go with it are re-implemented or protected in some way.
• a priority based scheduler is essential in an environment where there is a need to prioritise activities and give those prioritised activities preferential treatment; and
• the algorithm will be round-robin-like in that the highest priority activities will be non-preemptively cycled. By using round-robin we ensure that the highest priority activities are given preference by the scheduler.
Due to the modular nature of the nucleus design, it will of course be possible for different schedulers to be used in place of the standard one provided. In order to ensure this, the interface and behaviour which a scheduler must provide will be carefully documented.
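By way of illustration only, a priority-based, non-preemptive, round-robin-like scheduler of the kind described above might be sketched as follows. The Scheduler and Activity names are invented, and task binding and quantum control are omitted: each activity runs until it yields, and activities of equal highest priority are cycled.

```cpp
// Hypothetical sketch of the default scheduler: priority based, non-preemptive,
// round-robin-like within the highest priority level.
#include <functional>
#include <iostream>
#include <map>
#include <queue>
#include <string>

struct Activity {                 // a bound thread/task pair awaiting execution
    std::string name;
    std::function<bool()> step;   // runs until it yields; returns true when finished
};

class Scheduler {
public:
    void add(int priority, Activity a) { ready_[priority].push(std::move(a)); }

    void run() {
        while (!ready_.empty()) {
            auto highest = ready_.rbegin();                 // highest priority queue
            int prio = highest->first;
            Activity a = std::move(highest->second.front());
            highest->second.pop();
            if (highest->second.empty()) ready_.erase(prio);

            bool finished = a.step();                        // non-preemptive: runs until it yields
            if (!finished) ready_[prio].push(std::move(a));  // round-robin within the level
        }
    }

private:
    std::map<int, std::queue<Activity>> ready_;
};

int main() {
    Scheduler sched;
    int ticks = 3;
    sched.add(5, {"call-control", [&] { std::cout << "call-control tick\n"; return --ticks == 0; }});
    sched.add(1, {"statistics",   []  { std::cout << "statistics runs after higher priorities\n"; return true; }});
    sched.run();   // call-control cycles three times before the low-priority activity runs
}
```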
Where preemptive scheduling is provided by the kernel, the nucleus will allow the application the option of using it. Some preemptive scheduling systems allow the user to control the quantum value and algorithm used by the scheduler. These, and other, nucleus configurables can be specified both statically (at compile time) and dynamically (at run time). The nucleus will handle the configuration of the underlying system, where appropriate functionality exists, and the resolution of any conflicts between different application requirements.
Multi-threading summary
In summary, we have identified a need for a number of component objects of a multi-threading system. All of these objects must be available to provide multi-threading.
However, two distinct scenarios have been identified:
• Where kernel threads are available, we need merely encapsulate them to provide the functionality of the ODS task object (see Figure 5). The nucleus is responsible for providing the thread objects; the ODS task object provides a wrapper around the kernel threads and the task binding algorithm.
Figure 4 shows the objects comprising the multi-threading environment. The task binding algorithm is responsible for choosing the next thread to be bound on any available tasks. The task binding will be priority based, and the application will be offered the option, when applicable, of using the tasks in either preemptive or non-preemptive mode.
• The nucleus is responsible for providing all components of the multi-threading environment not provided by the kernel, i e if kernel threads are not provided the nucleus must supply thread, task, stack and scheduler
• The default scheduling algorithm used in the nucleus will be priority based non-preemptive The task module will provide the application with the ability to control the task binding algorithm by providing a mechanism to allows tasks to be reserved for use by threads of an acceptably high priority level
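As an illustration only, the reservation mechanism referred to above could take roughly the following shape; the names task_slot, task_reserve and task_bind are invented for this sketch and are not the task module's actual interface.

/* Illustrative sketch: a task pool in which some tasks are reserved for
 * threads at or above a given priority (0 = highest). */
#define POOL_SIZE 8

typedef struct {
    int in_use;        /* task currently bound to a thread                   */
    int reserved;      /* non-zero: only sufficiently high priority threads  */
    int reserve_prio;  /* (prio <= reserve_prio) may bind to this task       */
} task_slot;

static task_slot pool[POOL_SIZE];   /* zero-initialised: free, unreserved */

/* Reserve up to n currently unreserved tasks for threads of priority
 * 'prio' or better. */
void task_reserve(int n, int prio)
{
    for (int i = 0; i < POOL_SIZE && n > 0; i++) {
        if (!pool[i].in_use && !pool[i].reserved) {
            pool[i].reserved = 1;
            pool[i].reserve_prio = prio;
            n--;
        }
    }
}

/* Bind a thread of priority 'prio' to a free task, honouring any
 * reservations; returns the task index, or -1 if none is acceptable. */
int task_bind(int prio)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (pool[i].in_use)
            continue;
        if (pool[i].reserved && prio > pool[i].reserve_prio)
            continue;   /* slot is held back for higher priority threads */
        pool[i].in_use = 1;
        return i;
    }
    return -1;
}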
The system provides a deterministic environment in which additional components may be integrated as appropriate. The framework includes mechanisms for threads and tasks, comms stacks (including but not limited to RPC), resource management and asynchronous event notification. The use of these components allows applications to be constructed which run over multiple clients and servers and enables servers to themselves be clients of other servers. In particular, the system provides an integrated framework for distributed applications in a real time environment. Tasks and threads provide abstractions to simplify the programmer's handling of concurrency whilst giving fine grained control of resource usage. The programmer has the flexibility to define maximum and minimum resource availability for fine-grained activities. LMP is a particular protocol stack which is suited to a range of tasks, from journalling with an asymmetric flow of data but efficient retransmission characteristics, through to RPC systems such as the heartbeat protocol. Resource management is an integrated way of enabling efficient coexistence of multiple resource types within a common model suitable for real-time application development. The model enables determinism, prioritisation and deadline scheduling for such resource utilisations. Asynchronous event notification is a model for event handling which fits in with the environments above, enabling application programmers to handle events in a fully integrated manner. This preserves the deterministic characteristic in the presence of external systems.
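By way of a hedged illustration of the asynchronous event notification model, the sketch below shows an application registering a handler which is invoked when an event is raised; nuc_event_subscribe and nuc_event_raise are invented names, and in a real system the handler would be dispatched on a prioritised thread rather than called directly as it is here.

#include <stdio.h>

typedef void (*nuc_event_handler)(int event, void *data);

#define NUC_MAX_SUBS 16

static struct { int event; nuc_event_handler fn; } subs[NUC_MAX_SUBS];
static int nsubs;

/* Register interest in an event. */
int nuc_event_subscribe(int event, nuc_event_handler fn)
{
    if (nsubs >= NUC_MAX_SUBS)
        return -1;
    subs[nsubs].event = event;
    subs[nsubs].fn = fn;
    nsubs++;
    return 0;
}

/* Called when an external event arrives; here the handlers are simply
 * invoked in turn for illustration. */
void nuc_event_raise(int event, void *data)
{
    for (int i = 0; i < nsubs; i++)
        if (subs[i].event == event)
            subs[i].fn(event, data);
}

static void on_link_down(int event, void *data)
{
    (void)event; (void)data;
    printf("link down notification received\n");
}

int main(void)
{
    nuc_event_subscribe(42 /* hypothetical event id */, on_link_down);
    nuc_event_raise(42, NULL);
    return 0;
}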

Claims

CLAIMS:-
1. A communications network, including a plurality of switch elements, a controller for the switch elements, and a system manager, wherein the controller comprises a main computer system and a reserve computer system, there being means for switching from the main computer system to the reserve computer system in the event of failure of the main computer system, wherein the main computer system has means for communicating state changes in the form of a message stream to the reserve computer system whereby to update the reserve system with the current network state, and wherein the main computer system is arranged to provide status information to the reserve computer system at regular intervals whereby to detect an in-service failure of the main computer system.
2. A communications network providing client access to a plurality of servers, the network including a plurality of capsules each containing client and server objects and each incorporating a nucleus supporting a plurality of client/server interfaces, wherein each said nucleus includes means for pooling tasks, means for providing threads and for assigning those threads to respective tasks, and means for binding an assigned thread to its respective task during the execution of that task.
3. A communications network as claimed in claim 1, and incorporating means for asynchronous notification of system events.
4. A method of controlling a telecommunications network including a plurality of switch elements, a controller for the switch elements, and a system manager, wherein the controller comprises a main computer system and a reserve computer system, the method including communicating state changes in the main computer system to the reserve computer system whereby to update the reserve system with the current network state, providing status information from the main computer system to the reserve computer system at regular intervals whereby to detect an in-service failure of the main computer system, and substituting the main computer system with the reserve computer system in the event of a detected failure of the main system.
PCT/GB1996/002961 1995-12-09 1996-11-29 Providing access to services in a telecommunications system WO1997022208A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GBGB9525214.4A GB9525214D0 (en) 1995-12-09 1995-12-09 Providing access to services in a telecommunications system
GB9525214.4 1995-12-09
GB9600681.2 1996-01-12
GB9600681A GB2308040A (en) 1995-12-09 1996-01-12 Telecommunications system

Publications (2)

Publication Number Publication Date
WO1997022208A2 true WO1997022208A2 (en) 1997-06-19
WO1997022208A3 WO1997022208A3 (en) 1997-11-20

Family

ID=26308267

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1996/002961 WO1997022208A2 (en) 1995-12-09 1996-11-29 Providing access to services in a telecommunications system

Country Status (2)

Country Link
GB (1) GB2308040A (en)
WO (1) WO1997022208A2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19728505A1 (en) * 1997-07-03 1999-01-07 Philips Patentverwaltung Communication system with special channel
US6064726A (en) * 1997-12-31 2000-05-16 Alcatel Usa Sourcing, L.P. Fully flexible routing system
US6470462B1 (en) * 1999-02-25 2002-10-22 Telefonaktiebolaget Lm Ericsson (Publ) Simultaneous resynchronization by command for state machines in redundant systems
US6567376B1 (en) 1999-02-25 2003-05-20 Telefonaktiebolaget Lm Ericsson (Publ) Using system frame number to implement timers in telecommunications system having redundancy
DE10011267A1 (en) * 2000-03-08 2001-09-13 Tenovis Gmbh & Co Kg Communication module for bus operation as well as a system with several communication modules
DE10011268B4 (en) * 2000-03-08 2011-05-19 Tenovis Gmbh & Co. Kg switch
ATE345015T1 (en) * 2002-02-12 2006-11-15 Cit Alcatel METHOD FOR DETERMINING AN ACTIVE OR PASSIVE ROLE ALLOCATION FOR A NETWORK ELEMENT CONTROL MEANS

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0230029A2 (en) * 1985-12-27 1987-07-29 AT&T Corp. Method and apparatus for fault recovery in a distributed processing system
US5084816A (en) * 1987-11-25 1992-01-28 Bell Communications Research, Inc. Real time fault tolerant transaction processing system
EP0362105A2 (en) * 1988-09-29 1990-04-04 International Business Machines Corporation Method for processing program threads of a distributed application program by a host computer and an intelligent work station in an SNA LU 6.2 network environment
US4959768A (en) * 1989-01-23 1990-09-25 Honeywell Inc. Apparatus for tracking predetermined data for updating a secondary data base
EP0381655A2 (en) * 1989-01-31 1990-08-08 International Business Machines Corporation Method for synchronizing the dispatching of tasks among multitasking operating systems
EP0441087A1 (en) * 1990-02-08 1991-08-14 International Business Machines Corporation Checkpointing mechanism for fault-tolerant systems
WO1992005487A1 (en) * 1990-09-24 1992-04-02 Novell, Inc. Fault tolerant computer system
US5249290A (en) * 1991-02-22 1993-09-28 At&T Bell Laboratories Method of and apparatus for operating a client/server computer network
EP0649092A1 (en) * 1993-10-15 1995-04-19 Tandem Computers Incorporated Method and apparatus for fault tolerant connection of a computing system to local area networks

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
COMPUTER, vol. 23, no. 5, 1 May 1990, pages 35-43, XP000128603 BLACK D L: "SCHEDULING SUPPORT FOR CONCURRENCY AND PARALLELISM IN THE MACH OPERATING SYSTEM" *
ERICSSON REVIEW, vol. 67, no. 1, 1990, STOCKHOLM (SE), pages 2-11, XP000126882 A.ASH ET AL: "AXE 10 Central Processors" *
GRINSEC: "Electronic switching - Part II" 1983 , ELSEVIER SCIENCE PUBLISHERS B.V. , AMSTERDAM (NL) XP002028333 see Chapter II-7, pages 262-290, paragraph 2.2.1, pages 271-274 *
IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE & EXHIBITION, HOLLYWOOD, NOVEMBER 28 - DECEMBER 1, 1988, vol. VOL. 3 OF 3, no. 1988, 28 November 1988, INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, pages 1383-1388, XP000012125 OZEKI T ET AL: "IMPLEMENTATION OF REAL-TIME OPERATING SYSTEM FOR INTEGRATED SWITCHING SYSTEM" *
PROCEEDINGS OF THE INTERNATIONAL SWITCHING - PAPER 22A1, vol. 1, 7 - 11 May 1984, FLORENCE (IT), pages 1-7, XP002028429 J. BECKER ET AL: "Introduction and overview of the 3B TB computer family" *
PROCEEDINGS OF THE NATIONAL COMMUNICATIONS FORUM, vol. 42, no. 2, 30 September 1988, CHICAGO (US), pages 1364-1372, XP000096816 J. TEBES ET AL: "Processor architecture in the Siemens EWSD switch" *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6381321B1 (en) 1998-05-04 2002-04-30 T-Netix, Inc. Telecommunication resource allocation system and method
WO1999057918A1 (en) * 1998-05-04 1999-11-11 Gateway Technologies, Inc. Telecommunication resource allocation
GB2344018A (en) * 1998-11-18 2000-05-24 Mitel Corp Remote peripheral switch backup call service mechanism
US6504922B1 (en) 1998-11-18 2003-01-07 Mitel Corporation Remote peripheral switch backup call service mechanism
GB2344018B (en) * 1998-11-18 2003-06-25 Mitel Corp Remote peripheral switch backup call service mechanism
US8000269B1 (en) 2001-07-13 2011-08-16 Securus Technologies, Inc. Call processing with voice over internet protocol transmission
US10115080B2 (en) 2002-04-29 2018-10-30 Securus Technologies, Inc. System and method for proactively establishing a third-party payment account for services rendered to a resident of a controlled-environment facility
US9560193B1 (en) 2002-04-29 2017-01-31 Securus Technologies, Inc. Systems and methods for detecting a call anomaly using biometric identification
US9990683B2 (en) 2002-04-29 2018-06-05 Securus Technologies, Inc. Systems and methods for acquiring, accessing, and analyzing investigative information
US10178224B2 (en) 2002-04-29 2019-01-08 Securus Technologies, Inc. Systems and methods for detecting a call anomaly using biometric identification
US8340260B1 (en) 2003-08-15 2012-12-25 Securus Technologies, Inc. Inmate management and call processing systems and methods
US7899167B1 (en) 2003-08-15 2011-03-01 Securus Technologies, Inc. Centralized call processing
US10740861B1 (en) 2003-11-24 2020-08-11 Securus Technologies, Inc. Systems and methods for acquiring, accessing, and analyzing investigative information
US7916845B2 (en) 2006-04-13 2011-03-29 Securus Technologies, Inc. Unauthorized call activity detection and prevention systems and methods for a Voice over Internet Protocol environment
US10796392B1 (en) 2007-05-22 2020-10-06 Securus Technologies, Llc Systems and methods for facilitating booking, bonding and release
US9135097B2 (en) 2012-03-27 2015-09-15 Oracle International Corporation Node death detection by querying

Also Published As

Publication number Publication date
GB2308040A (en) 1997-06-11
WO1997022208A3 (en) 1997-11-20
GB9600681D0 (en) 1996-03-13

Similar Documents

Publication Publication Date Title
WO1997022208A2 (en) Providing access to services in a telecommunications system
US4685125A (en) Computer system with tasking
US6405262B1 (en) Efficient inter-process object and interface pinging
EP0444376B1 (en) Mechanism for passing messages between several processors coupled through a shared intelligent memory
US5515538A (en) Apparatus and method for interrupt handling in a multi-threaded operating system kernel
US5586318A (en) Method and system for managing ownership of a released synchronization mechanism
US7058948B2 (en) Synchronization objects for multi-computer systems
Marsh et al. First-class user-level threads
CA2724853C (en) Protected mode scheduling of operations
Rivas et al. POSIX-compatible application-defined scheduling in MaRTE OS
Pyarali et al. Evaluating and optimizing thread pool strategies for real-time CORBA
CA2443839C (en) System, method, and article of manufacture for using a replaceable component to select a replaceable quality of service capable network communication channel component
JPH03194647A (en) Fault notifying method
EP1012715A1 (en) Data processing unit with hardware assisted context switching capability
AU7631194A (en) Software overload control method
Pruyne et al. Providing resource management services to parallel applications
EP1162536A1 (en) Multiple operating system control method
US9015534B2 (en) Generation of memory dump of a computer process without terminating the computer process
US7552446B1 (en) Methods and apparatus for a timer event service infrastructure
US20230135951A1 (en) Scheduling of threads for clusters of processors
EP0955581A1 (en) Software interrupt mechanism
WO2004111840A2 (en) Customer framework for embedded applications
Nakajima et al. Experiments with Real-Time Servers in Real-Time Mach.
US20230134872A1 (en) Thread state transitions
Serra et al. An architecture for declarative real-time scheduling on Linux

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

NENP Non-entry into the national phase in:

Ref country code: JP

Ref document number: 97521821

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase