WO1998038566A2 - Enhanced hierarchical data distribution system using bundling in multiple routers - Google Patents


Info

Publication number
WO1998038566A2
Authority
WO
WIPO (PCT)
Prior art keywords
record
primary
router
records
block
Prior art date
Application number
PCT/US1998/003980
Other languages
French (fr)
Other versions
WO1998038566A9 (en)
WO1998038566A3 (en)
Inventor
Kevin J. Jacoby
Sajan Pillai
Original Assignee
Mci Communications Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mci Communications Corporation filed Critical Mci Communications Corporation
Priority to AU63436/98A
Publication of WO1998038566A2
Publication of WO1998038566A3
Publication of WO1998038566A9

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/275 Synchronous replication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G06F15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17356 Indirect interconnection networks
    • G06F15/17368 Indirect interconnection networks, non-hierarchical topologies
    • G06F15/17381 Two dimensional, e.g. mesh, torus

Definitions

  • the present invention generally relates to computer databases, and more particularly relates to the distribution of data among multiple computer databases.
  • NIDSs: Network Information Distribution Servers
  • Call processing data is generated by application programs running on a source computer, often a mainframe computer. This data must then be distributed to a plurality of NIDSs (running on target computers), so that the data can be available for the intelligent network to perform call processing.
  • a data distribution system must be able to distribute large volumes of data among a plurality of NIDSs quickly and efficiently.
  • a data distribution system must also provide reliability to ensure that data gets distributed to all NIDSs and other client systems that require it.
  • FIG. 1 illustrates a conventional environment 102 in which data distribution takes place.
  • Application Programs 106 generate call processing data that need to be distributed to a plurality of Network Information Distribution Servers (NIDSs) 118.
  • the NIDSs 118 are used to provide call processing data to intelligent network applications (not shown).
  • the Application Programs 106 reside on a computer 104.
  • the computer 104 may be logically divided into multiple computing regions. Each of the computing regions represents a unique address space. The computing regions contend for the same computer resources in the computer. For example, the computing regions contend for use of the central processing units (CPUs) within the computer. Typically, the computing regions are each allocated a finite number of CPU time slices during any given time period.
  • the computer 104 may be an IBM mainframe computer Customer Information Control System (CICS), which is an on-line transaction processing system that is of common use in many applications, and which supports multiple CICS regions (computing regions). CICS is well known, and is described in many publicly available documents.
  • Data generated by the Application Programs 106 are included in data records. Each record has a service type. With regard to the telecommunications environment, different service types are associated with different call processing services. For example, Service Type X might be associated with a collect calling service. An Application Program 106 that generates call processing data for the collect calling service places the call processing data in a data record, and sets the service type of the data record equal to Service Type X. As another example,
  • Service Type Y might be associated with a credit card calling service.
  • An Application Program 106 that generates call processing data for the credit card calling service places the call processing data in a data record, and sets the service type of the data record equal to Service Type Y.
  • a service type is analogous to a data type or file type; the terms service type, data type, and file type are used interchangeably herein.
  • the Application Programs 106 write the data records into a Primary Router Queue 110.
  • a Primary Router 112 reads the records from the Primary Router Queue 110. For each record read from the Primary Router Queue 110, the Primary Router 112 identifies the NIDSs 118 that support (or accept) the service associated with the service type of the record (the Primary Router 112 refers to information in a Routing Distribution Table 114 to perform this function). The Primary Router 112 determines that the record is to be distributed to these identified NIDSs 118. For example, suppose NIDSs 118A, 118C, and 118I support the credit card calling service. Given this scenario, the Primary Router 112 determines that Service Type Y records are to be distributed to NIDSs 118A, 118C, and 118I.
  • after identifying the NIDSs 118 to which a record is to be distributed, the Primary Router 112 replicates the record, converts the replicated records as required for each respective end point (i.e., each respective NIDS 118 that is to receive the record), and writes the converted records into Send Queues 116 associated with the identified NIDSs 118. With reference to the above example, the Primary Router 112 writes Service Type Y records into Send Queues 116A, 116C, and 116I associated with NIDSs 118A, 118C, and 118I, respectively.
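The fan-out just described can be sketched in a few lines of Python; the table contents, queue names, and record shape below are illustrative stand-ins, not the patent's CICS implementation:

```python
from collections import defaultdict

# Illustrative routing data: service type -> NIDSs that support that service.
ROUTING_DISTRIBUTION = {
    "SERVICE_X": ["NIDS_118B"],                            # collect calling
    "SERVICE_Y": ["NIDS_118A", "NIDS_118C", "NIDS_118I"],  # credit card calling
}

send_queues = defaultdict(list)  # one Send Queue per NIDS

def route_record(record):
    """Replicate a record into the Send Queue of every NIDS that
    supports the record's service type."""
    for nids in ROUTING_DISTRIBUTION.get(record["service_type"], []):
        send_queues[nids].append(dict(record))  # replicate, then write

route_record({"service_type": "SERVICE_Y", "data": "card-account-update"})
```

Here each Service Type Y record is replicated into the send queues of the three NIDSs that support the credit card calling service.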
  • Each Send Queue 116 triggers a Send Process 117.
  • the Send Process 117 is a client application which establishes a connection/session/conversation with its respective NIDS 118, reads records from its respective Send Queue 116, and sends the records to its NIDS 118 through an appropriate protocol, such as SNA LU6.2.
  • the Send Processes 117 are implemented in the same computing region as the Primary Router 112, the Primary Router Queue 110, and the Application Programs 106.
  • the environment 102 of FIG. 1 is not very efficient if large volumes of data are to be distributed to a large number of NIDSs 118.
  • a single computing region, having access to limited resources, is limited in the number of concurrent tasks that it can effectively perform.
  • the environment 102 of FIG. 1 results in congestion within the Send Queues 116 and poor data distribution performance.
  • FIG. 2 illustrates another environment 202 in which data distribution takes place.
  • the Application Programs 206, the Primary Router Queue 210, and the Primary Router 212 are in a first computing region, called a Primary Region 204.
  • the environment 202 of FIG. 2 includes a number of additional computing regions, called Secondary Regions 216.
  • the Send Queues 218 are in the Secondary Regions 216.
  • the Secondary Regions 216 may be implemented on the same computer as the Primary Region 204, or on different computers.
  • the Application Programs 206 generate call processing data which is placed into data records.
  • the Application Programs 206 write the data records into a Primary Router Queue 210.
  • the Primary Router 212 reads the records from the Primary Router Queue 210.
  • the Primary Router 212 identifies the NIDSs 220 to which each record is to be sent (based on the service type of each record, as discussed above).
  • the Primary Router 212 replicates and distributes the data records to the appropriate Send Queues 218 in the Secondary Regions 216.
  • the Send Processes 219 read the data from the Send Queues 218 and distribute the data to the corresponding NIDSs 220, as discussed above.
  • the use of Secondary Regions 216 for Send Queues 218 and Send Processes 219 distributes the tasks involved with data distribution among many computing regions. Therefore, more concurrent tasks can be effectively conducted, and efficiency is increased (because multiple computing regions collectively have access to more computing resources than a single computing region).
  • limitations in throughput are still encountered with large volumes of data records and large numbers of NIDSs 220.
  • the Primary Router 212 must perform a remote write of data to each Send Queue 218 in the Secondary Regions 216. Remote writes are needed since data is traversing across different computing regions. These remote writes are performed in a serial fashion.
  • the Primary Router 212 must first remotely write to Send Queue 218A, and then to Send Queue 218B, and then to Send Queue 218C, etc. (This assumes that the service type of the record is supported by NIDSs 220A-220C.) Each remote write must complete before the next remote write can begin. These serial remote writes are costly in terms of time, and if a large number of NIDSs 220 and Send Queues 218 are involved, efficiency is limited.
  • the approach of FIG. 2 also suffers from a reliability problem.
  • a record is to be transmitted to NIDSs 220A and 220D.
  • the Primary Router 212 must complete a remote write of the record to the Send Queue 218A before it can begin the remote write to the next Send Queue 218D. If the remote write to the Send Queue 218A is not successfully completed (because, for example, the Secondary Region 216 is experiencing difficulties and is, therefore, unavailable), then the Primary Router 212 attempts to resend the record, thus holding up the remaining distribution process. This places upon the distribution system a dependence on the availability of every Secondary Region 216. If a Secondary Region 216 is unavailable, the entire distribution process is delayed or even terminated.
  • the present invention is a system and method for data distribution.
  • the invention uses multiple routing processes in a distribution hierarchy to increase the efficiency and performance of the distribution of high volumes of data.
  • a Primary Router Queue, a Primary Router, and one or more Secondary Router Queues are in a first computing region, called a Primary Region. Secondary Routers are located in second computing regions, called Secondary Regions.
  • the Primary Router reads data records from the Primary Router Queue. These are local read operations.
  • the Primary Router replicates the data records, and writes the replicated records to the Secondary Router Queues. These are local write operations.
  • the Secondary Routers read (using remote read operations) the records from the Secondary Router Queues, convert and replicate the records as necessary, and write the records to the Send Queues.
  • the Send Processes then send the records to the NIDSs.
  • Use of the Secondary Router Queues on the same region (the Primary Region) as the Primary Router buffers the Primary Router from the Secondary Regions so that if a Secondary Region fails, the Primary Router may continue to distribute data records to other Secondary Regions.
  • thus, reliability is enhanced. The invention also provides a parallel distribution environment which employs multiple parallel tasks to route data to fewer endpoints, and in which the number of costly remote writes is reduced. This improves the overall data throughput of the data distribution system (DDS).
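The claimed throughput gain can be illustrated with a back-of-the-envelope count of cross-region operations per record (the region and NIDS counts below are hypothetical):

```python
# Hypothetical deployment: 3 Secondary Regions, 4 target NIDSs per region.
NIDS_PER_REGION = {"REGION_1": 4, "REGION_2": 4, "REGION_3": 4}

# FIG. 2 approach: the Primary Router serially remote-writes one copy of the
# record to a Send Queue for every target NIDS.
flat_remote_writes = sum(NIDS_PER_REGION.values())

# Hierarchical approach: the Primary Router local-writes one copy per
# Secondary Router Queue; the only cross-region operation per region is the
# Secondary Router's remote read. The per-NIDS writes become local writes
# performed by the Secondary Routers in parallel.
hierarchical_remote_reads = len(NIDS_PER_REGION)
```

With these numbers, twelve serial remote writes collapse into three remote reads that proceed in parallel, one per Secondary Region.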
  • the DDS is used to distribute call processing data for intelligent network applications from Application Programs to the NIDSs.
  • the NIDSs house databases that contain call processing data. This data is used by intelligent network applications to provide a variety of enhanced call processing services, such as operator services, customer services, collect calling, credit card calling, and virtually any other telecommunications service offering.
  • Data records that are distributed to NIDSs may instruct the NIDSs to perform certain database operations (add, update, delete, etc.).
  • the present invention provides an improvement in both the efficiency and reliability of this distribution process.
  • the present invention operates as follows. A Primary Router in a Primary Region performs a local read operation to read a data record from a Primary Router Queue, which is also in the Primary Region.
  • the Primary Router identifies Secondary Router Queues that are associated with Secondary Regions that are coupled to servers (NIDSs) that support a service type of the data record.
  • the Secondary Router Queues are in the Primary Region.
  • the Primary Router replicates the data record to produce replicated data records.
  • the Primary Router performs local write operations to write the replicated data records to the identified Secondary Router Queues.
  • a Secondary Router performs a remote read operation to read a replicated data record from one of the Secondary Router Queues.
  • the Secondary Router identifies any Send Queues (in the Secondary Region) that are associated with servers that support the service type of the data record read from the Secondary Router Queue.
  • the Secondary Router replicates and optionally converts the data record to produce converted data records.
  • the Secondary Router performs local write operations to write the converted data records to the identified send queues.
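A minimal end-to-end sketch of this non-block mode, with in-memory deques standing in for CICS queues and hypothetical region/NIDS names (the optional conversion step is omitted):

```python
from collections import deque

# Illustrative configuration, not from the patent.
REGION_FOR_NIDS = {"NIDS_1": "REGION_A", "NIDS_2": "REGION_A", "NIDS_3": "REGION_B"}
NIDS_FOR_SERVICE = {"SERVICE_2": ["NIDS_1", "NIDS_3"]}

secondary_router_queues = {r: deque() for r in set(REGION_FOR_NIDS.values())}
send_queues = {n: deque() for n in REGION_FOR_NIDS}

def primary_route(record):
    # Identify the regions whose NIDSs support the record's service type,
    # replicate, and local-write one copy per Secondary Router Queue.
    regions = {REGION_FOR_NIDS[n] for n in NIDS_FOR_SERVICE[record["service_type"]]}
    for region in regions:
        secondary_router_queues[region].append(dict(record))

def secondary_route(region):
    # Remote read from the Secondary Router Queue, then local writes to the
    # Send Queues of the NIDSs in this region that support the service type.
    while secondary_router_queues[region]:
        record = secondary_router_queues[region].popleft()
        for nids in NIDS_FOR_SERVICE[record["service_type"]]:
            if REGION_FOR_NIDS[nids] == region:
                send_queues[nids].append(record)

primary_route({"service_type": "SERVICE_2", "data": "update"})
for region in secondary_router_queues:
    secondary_route(region)
```

Note that the Primary Router touches each Secondary Router Queue exactly once per record, regardless of how many NIDSs in that region ultimately receive the record.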
  • the above description relates to a non-block processing mode.
  • the invention also includes a block processing mode, which operates as follows.
  • the Primary Router generates a block record having stored therein information from one or more data records read from the Primary Router Queue.
  • the Primary Router identifies any Secondary Router Queues associated with Secondary Regions that are coupled to servers that support the service type of the block record.
  • the Primary Router replicates the block record to produce replicated block records, and performs local write operations to write the replicated block records to the identified Secondary Router Queues.
  • the Secondary Router performs a remote read operation to read a replicated block record from one of the Secondary Router Queues.
  • the Secondary Router unbundles the block record to obtain one or more data records.
  • the Secondary Router identifies Send Queues associated with servers that support the service type of the data records.
  • the Secondary Router replicates and optionally converts the data records to produce converted data records, and performs local write operations to write the converted data records to the identified Send Queues.
  • the Secondary Router performs a remote read operation to read a replicated block record from one of the Secondary Router Queues.
  • the Secondary Router unbundles the block record to obtain one or more data records.
  • the Secondary Router identifies Send Queues associated with servers that support the service type of the data records.
  • the Secondary Router optionally converts the data records to produce converted data records, and generates a block record having stored therein information from one or more data records read from the Secondary Router Queue. All of the data records in the block record have the same service type.
  • the Secondary Router replicates the block record to produce replicated block records, and performs local write operations to write the replicated block records to the identified Send Queues.
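Because every block record must hold records of a single service type, secondary-level blocking amounts to grouping consecutive records by service type; a sketch with illustrative record shapes:

```python
from itertools import groupby

# Records read (and unbundled) from a Secondary Router Queue, in order.
records = [
    {"service_type": "SERVICE_1", "data": "a"},
    {"service_type": "SERVICE_1", "data": "b"},
    {"service_type": "SERVICE_2", "data": "c"},
]

# Secondary-level blocking: each block record holds records of one service type.
blocks = []
for stype, grp in groupby(records, key=lambda r: r["service_type"]):
    bundled = list(grp)
    blocks.append({"service_type": stype, "count": len(bundled), "records": bundled})
```

Each resulting block would then be replicated and local-written to the Send Queues that accept its service type.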
  • Block processing may be implemented at the Primary Router level or the Secondary Router level, or both simultaneously.
  • the present invention is distinguishable from conventional approaches in a number of ways.
  • the present invention uses Secondary Router Queues implemented on the same computing region as the Primary Router. This provides a number of advantages, such as:
  • FIG. 1 is a block diagram of a first conventional data distribution system
  • FIG. 2 is a block diagram of a second conventional data distribution system
  • FIG. 3 is a block diagram of a data distribution system according to a preferred embodiment of the present invention.
  • FIG. 4 illustrates an example Routing Distribution Table according to a preferred embodiment of the present invention
  • FIG. 5 illustrates an example Master Control Table according to a preferred embodiment of the present invention
  • FIG. 6 illustrates an exemplary format of a data record generated by
  • FIG. 7 illustrates an exemplary format of a block record
  • FIGS. 8A, 8B, 9A, and 9B are flowcharts depicting the preferred operation of the present invention
  • FIGS. 10A-10D are flowcharts depicting an alternate operation of the present invention, using secondary level block processing
  • FIG. 11 is an example secondary router block record format.
  • the present invention is directed to a data distribution system (DDS), and a method practiced in the DDS.
  • the invention is useful for distributing data from a source computer to a plurality of target computers for virtually any application.
  • the present invention is useful for distributing data from a source computer to a plurality of target computers (preferably server computers) which are used to house databases for enhanced call processing within an intelligent telecommunications network.
  • the invention is described herein with reference to this preferred embodiment. It should be understood, however, that the invention is not limited to this embodiment. Instead, the invention is applicable to any application involving the transfer of data between multiple computers.
  • FIG. 3 illustrates a computing environment 302 according to a preferred embodiment of the present invention.
  • the environment 302 includes a first computing region, called a Primary Region 304, and a plurality of second computing regions, called Secondary Regions 306.
  • the Primary Region 304 and the Secondary Regions 306 are implemented using a single computer that can be logically divided into multiple computing regions.
  • Each of the computing regions represents a unique address space in the computer.
  • the computing regions contend for the same computer resources in the computer. For example, the computing regions contend for use of the central processing units (CPUs) within the computer.
  • typically, the computing regions are each allocated a finite number of CPU time slices during any given time period.
  • the computer may be any suitable computer having the characteristics described above, such as an IBM mainframe computer Customer Information Control System (CICS), which is an on-line transaction processing system that is of common use in many applications, and which supports multiple CICS regions (computing regions).
  • CICS is well known, and is described in many publicly available documents, such as CICS Intercommunication Guide, Version 3, Release 2.1, second edition, June 1991, which is incorporated herein by reference in its entirety.
  • in an alternate embodiment, the Primary Region 304 and the Secondary Regions 306 are implemented in multiple computers.
  • the Primary Region 304 and the Secondary Regions 306 may be each implemented using a different computer.
  • the Primary Region 304 may be implemented in one computer, and groups of the Secondary Regions 306 may be implemented together on other computers.
  • other distributions of the Primary Region 304 and the Secondary Regions 306 on computers are also within the scope of the present invention.
  • a plurality of Application Programs (processes) 308 execute in the Primary Region 304.
  • the Application Programs 308 generate data that must be distributed to end points, called Network Information Distribution Servers (NIDSs) in FIG. 3.
  • NIDSs are also called servers.
  • the Application Programs 308 generate call processing data used for enhanced call processing services, such as operator services, customer services, collect calling, credit card calling, and virtually any other telecommunications service offering.
  • Each enhanced call processing service has one or more service types.
  • the collect calling service may have a service type equal to Service Type X, or Service X.
  • the credit card calling service may have a service type equal to Service Type Y, or Service Y.
  • the Application Programs 308 insert the data that they generate in data records.
  • An exemplary format of a data record 602 generated by an Application Program 308 is shown in FIG. 6.
  • the data record 602 includes a Record Data field 608, whose length varies from record to record, that stores the data that was generated by the Application Program 308.
  • Service Type X might be associated with a collect calling service.
  • an Application Program 308 that generates call processing data for the collect calling service places the call processing data in a data record 602, and sets the Service Type field 604 equal to Service Type X. As another example, Service Type Y might be associated with a credit card calling service.
  • An Application Program 308 that generates call processing data for the credit card calling service places the call processing data in a data record 602, and sets the Service Type field 604 equal to Service Type Y.
  • a service type is analogous to a data type or file type; the terms service type, data type, and file type are used interchangeably herein.
  • the NIDSs 326 are implemented in computers that are coupled to the computer(s) in which the Secondary Regions 306 are implemented.
  • a NIDS 326 may be a database, or may be a database in combination with a process executing in a computer. In the preferred embodiment, the NIDSs 326 are used to provide call processing data to an intelligent network (not shown) that provides enhanced call processing services. Each NIDS 326 supports one or more enhanced call processing services.
  • each NIDS 326 may store call data related to one or more enhanced call processing services.
  • each NIDS 326 is associated with one or more service types (i.e., a NIDS 326 is associated with the service types that identify the enhanced call processing services that it supports).
  • Send Queues 324 are implemented in the Secondary Regions 306.
  • One or more Send Queues 324 are associated with each NIDS 326.
  • Each Send Queue 324 accepts data records 602 of a given service type.
  • if a NIDS 326 supports multiple service types, the NIDS 326 will have multiple Send Queues 324, one for each service type. This is the case with NIDS 326C.
  • the present invention operates to transfer call data related to a given enhanced call processing service to the NIDSs 326 that support that service. For example, suppose that NIDS 326B supports an enhanced call process service having a service type equal to Service 2. In this example, the invention ensures that all data records 602 of Service 2 are written to Send Queue 324B. These data records 602 are then read by Send Process 325B, which acts as a client and sends the data to the NIDS 326B. This operation is described in greater detail below.
  • the term queue, as used herein, refers to a storage device which stores and retrieves data, preferably in a first in, first out (FIFO) manner.
  • Other types of queues can alternatively be used, including but not limited to last in first out (LIFO) queues.
  • LIFO queues are commonly referred to as stacks.
  • one or more Primary Router Queues 312 are implemented in the Primary Region 304. Each Primary Router Queue 312 is associated with one or more service types. A service type can only be associated with a single Primary Router Queue 312.
  • the Application Programs 308 write the data records 602 that they generate into the respective Primary Router Queues 312 associated with the service types of the data records 602. For example, suppose that Primary Router Queue 312B is associated with Service Type X and Service Type Y. In this case, all data records 602 of Service Type X and Service Type Y are written to Primary Router Queue 312B.
  • the Application Programs 308 have access to a Master Control Table 310.
  • an example Master Control Table 310 is shown in FIG. 5. There is a row in the Master Control Table 310 for each service type. Each row identifies the service type (field 504 of the Master Control Table 310), and the Primary Router Queue 312 that is associated with that service type (field 506 of the Master Control Table 310).
  • Primary Router Queue 312A is associated with Service 5.
  • Primary Router Queue 312B is associated with Services 1-4.
  • Primary Router Queue 312C is associated with Service 6.
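The Master Control Table lookup is essentially a service-type-to-queue mapping; the sketch below mirrors the FIG. 5 example above (queue and service names are illustrative labels):

```python
# Illustrative Master Control Table: service type -> Primary Router Queue.
MASTER_CONTROL = {
    "SERVICE_1": "PRQ_312B",
    "SERVICE_2": "PRQ_312B",
    "SERVICE_3": "PRQ_312B",
    "SERVICE_4": "PRQ_312B",
    "SERVICE_5": "PRQ_312A",
    "SERVICE_6": "PRQ_312C",
}

primary_router_queues = {q: [] for q in set(MASTER_CONTROL.values())}

def write_record(record):
    # An Application Program local-writes the record into the Primary Router
    # Queue associated with the record's service type.
    queue_name = MASTER_CONTROL[record["service_type"]]
    primary_router_queues[queue_name].append(record)

write_record({"service_type": "SERVICE_5", "data": "x"})
write_record({"service_type": "SERVICE_2", "data": "y"})
```

Because each service type maps to exactly one queue, all records of a given service type funnel through a single Primary Router.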
  • All writes into the Primary Router Queues 312 by the Application Programs 308 are local writes, because both the Application Programs 308 and the Primary Router Queues 312 are in the same computing region, i.e., the Primary Region 304.
  • a Primary Router 314 is implemented in the Primary Region 304.
  • the Primary Router 314 is implemented as a process that executes in the Primary Region 304.
  • Other Primary Routers may also be implemented in the Primary Region 304, or in other Primary Regions (not shown).
  • a plurality of Secondary Router Queues 318 are implemented in the Primary Region 304.
  • Each Secondary Router Queue 318 is associated with and services one of the Secondary Regions 306, and acts as the interface or gateway between the Primary Region 304 and the Secondary Region 306.
  • each Primary Router 314 has its own Primary Router Queue 312. For example, the Primary Router 314 reads records from only its own Primary Router Queue 312B.
  • each Primary Router Queue, such as Primary Router Queue 312B, has its own set of Secondary Router Queues, such as Secondary Router Queues 318A-318C, to which it writes records, with each Secondary Router Queue 318 servicing one of the Secondary Regions 306.
  • for each record 602 read from the Primary Router Queue 312, the Primary Router 314 identifies the Secondary Router Queues 318 that service the Secondary Regions 306 that are coupled to NIDSs 326 that support the service type of the record 602. The Primary Router 314 performs this function by reference to a Routing Distribution Table 316.
  • an example Routing Distribution Table 316 is shown in FIG. 4.
  • the Routing Distribution Table 316 has two types of rows.
  • the Routing Distribution Table 316 has a row for each unique server/service type/send queue combination. These are the first type of rows (rows 414A-414H). Each row of this type identifies a service type (field 404), a NIDS 326 that supports that service type (field 408), the Send Queue 324 associated with that service type and that NIDS 326 (field 410), the Secondary Region 306 in which that Send Queue 324 is located (field 406), and a confirm level (field 412) for the Send Processes 325.
  • the confirm level indicates the number of records a Send Process 325 can send to a NIDS 326 before it must confirm receipt by the NIDS 326.
  • the Routing Distribution Table 316 also has a row for each unique and applicable secondary region/secondary router queue combination. These are the second type of rows (rows 414I-414K). Each row of this type identifies a Primary Router Queue 312 (field 404), a Secondary Region 306 (field 406), the Secondary Router Queue 318 that services that Secondary Region 306 (field 410), and a confirm level (field 412) for the Secondary Router 320.
  • the confirm level indicates the number of records a Secondary Router 320 can process before a commit must be performed.
  • the first and second types of rows of the Routing Distribution Table 316 are maintained in two tables.
  • when the Primary Router 314 reads a record 602 from the Primary Router Queue 312B, the Primary Router 314 determines the service type of the record 602 by reference to the Service Type field 604. The Primary Router 314 then retrieves from the Routing Distribution Table 316 all rows of the first type having a service type (field 404) equal to that of the record 602. The Primary Router 314 refers to field 406 of these rows to identify the Secondary Regions 306 having NIDSs 326 that support the service type. Then the Primary Router 314 retrieves from the Routing Distribution Table 316 all rows of the second type where field 406 identifies one of the identified Secondary Regions 306.
  • the Primary Router 314 identifies the Secondary Router Queues 318 that service the identified Secondary Regions 306 (by reference to field 410).
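This two-phase lookup can be sketched as two filters over the table rows (field names and row contents are illustrative):

```python
# Illustrative rows of the two types described above.
FIRST_TYPE_ROWS = [   # one row per server/service type/send queue combination
    {"service_type": "SERVICE_2", "region": "REGION_A", "nids": "NIDS_1"},
    {"service_type": "SERVICE_2", "region": "REGION_B", "nids": "NIDS_3"},
    {"service_type": "SERVICE_9", "region": "REGION_C", "nids": "NIDS_7"},
]
SECOND_TYPE_ROWS = [  # one row per secondary region/secondary router queue pair
    {"region": "REGION_A", "secondary_router_queue": "SRQ_318A"},
    {"region": "REGION_B", "secondary_router_queue": "SRQ_318B"},
    {"region": "REGION_C", "secondary_router_queue": "SRQ_318C"},
]

def target_secondary_router_queues(service_type):
    # Phase 1: first-type rows give the Secondary Regions whose NIDSs
    # support the service type.
    regions = {r["region"] for r in FIRST_TYPE_ROWS
               if r["service_type"] == service_type}
    # Phase 2: second-type rows map those regions to Secondary Router Queues.
    return sorted(r["secondary_router_queue"] for r in SECOND_TYPE_ROWS
                  if r["region"] in regions)
```

The Primary Router then replicates the record once per queue returned by the second phase.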
  • the Primary Router 314 replicates the record 602, and writes the replicated records to the identified Secondary Router Queues 318.
  • the write operations from the Primary Router 314 to the Secondary Router Queues 318 are performed on a serial basis. That is, if the Primary Router 314 must write a record to Secondary Router Queues 318A and 318B, then the Primary Router 314 must first write the record to Secondary Router Queue 318A, and then Secondary Router Queue 318B. Note that these write operations are local writes, since the Primary Router 314 and the Secondary Router Queues 318 are all in the same computing region, i.e., the Primary Region 304.
  • the Primary Router 314 of the present invention supports two operational modes: non-block processing (also called single processing) and block processing.
  • while in the non-block processing mode, the Primary Router 314 reads a record 602 from the Primary Router Queue 312B, determines the destinations of the record 602 (as described above), replicates the record 602, and then writes the replicated records to the appropriate Secondary Router Queues 318. In other words, the Primary Router 314 processes a single record 602 at a time.
  • while in the block processing mode, the Primary Router 314 attempts to process multiple records 602 at a time. Specifically, the Primary Router 314 reads multiple records 602 from the Primary Router Queue 312B, bundles the records 602 into a primary block record 702, determines the destinations of the primary block record 702 (as described above), replicates the primary block record 702, and then writes the replicated records to the appropriate Secondary Router Queues 318.
  • a primary block record 702 can contain only records 602 of the same service type. Also, there is a limit to the number of records 602 (of the same service type) that can be bundled into a primary block record 702.
  • the Routing Distribution Table 316 actually has a third type of row. Each Primary Router 314 has one of these rows (row 414L). These rows can be accessed by Primary Router Queue name (field 410).
  • Each of these rows has a Confirm Level field 412, which indicates the maximum bundle size for block records generated by the associated Primary Router.
  • An exemplary format of a primary block record 702 is shown in FIG. 7.
  • Field 706 identifies the service type of the primary block record 702.
  • a service type of 'q-block-recs' indicates that the record is a block record.
  • Field 708 identifies the total size of the primary block record 702, and field 710 indicates the number of records 602 that are bundled in the primary block record 702.
  • the records 602 that are bundled in the primary block record 702 follow field 710.
  • Each record 602 in the primary block record 702 has a Record Data field 714 or 718 that stores the record 602, and a Length of Record field 712 or 716 that stores the length of the record 602.
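Under stated assumptions about field widths (the patent gives the field order of FIG. 7 but not a byte-level encoding), a primary block record 702 might be packed as follows. The 16-byte service-type field and 4-byte big-endian integers are illustrative choices only:

```python
import struct

def build_primary_block_record(records):
    """Pack a list of records (as bytes) into one primary block record:
    service type (field 706), total size (field 708), record count
    (field 710), then alternating length/data pairs (fields 712/714,
    716/718, ...). Byte widths are illustrative assumptions."""
    body = b"".join(struct.pack(">I", len(r)) + r for r in records)
    header = b"q-block-recs".ljust(16, b"\x00")   # field 706
    total = 16 + 8 + len(body)                    # field 708: whole block
    return header + struct.pack(">II", total, len(records)) + body

# Usage: bundle two records of the same service type.
blk = build_primary_block_record([b"rec1", b"record-two"])
```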
  • While in the block processing mode, the Primary Router 314 continues to read records 602 from the Primary Router Queue 312B, and add the records 602 to the primary block record 702, until one of the following occurs: (1) the maximum bundle size is reached (i.e., the maximum number of records 602 in the primary block record 702); (2) the record 602 read from the Primary Router Queue 312B has a service type that is different from the service type of the primary block record 702 being constructed; (3) the Primary Router Queue 312B becomes empty; or (4) an internal buffer limit preferably equal to 8153 bytes is reached. It is noted that any other, implementation-dependent internal buffer limit value could alternatively be used. If any of these conditions occurs, the Primary Router 314 completes the primary block record 702. This processing is discussed in greater detail below.
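The four stop conditions can be modeled as a small bundling loop. This is only a sketch: records are dicts with hypothetical 'service' and 'data' keys, and the loop returns any record that could not join the bundle so a caller can seed the next primary block record with it:

```python
from collections import deque

MAX_BUNDLE = 10       # example Confirm Level (field 412) value
BUFFER_LIMIT = 8153   # preferred internal buffer limit, in bytes

def bundle_records(queue):
    """Bundle same-service-type records 602 from the queue until one of
    the four stop conditions holds. Returns (bundle, carry), where
    carry is a record that was read but could not be added."""
    bundle, size, carry = [], 0, None
    while queue:                                    # (3) queue empty -> stop
        rec = queue.popleft()
        if bundle and rec["service"] != bundle[0]["service"]:
            carry = rec                             # (2) service type differs
            break
        if bundle and size + len(rec["data"]) > BUFFER_LIMIT:
            carry = rec                             # (4) buffer limit reached
            break
        bundle.append(rec)
        size += len(rec["data"])
        if len(bundle) == MAX_BUNDLE:               # (1) max bundle size
            break
    return bundle, carry

# Usage: the third record has a different service type, so it is
# carried over to start the next bundle.
q = deque([
    {"service": "Service 2", "data": "aaa"},
    {"service": "Service 2", "data": "bbb"},
    {"service": "Service 1", "data": "ccc"},
])
bundle, carry = bundle_records(q)
```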
  • a Secondary Router 320 is implemented in each Secondary Region 306. Each Secondary Router 320 reads records 602, 702 from its Secondary Router Queue 318 in the Primary Region 304. If the record is a primary block record 702, then the Secondary Router 320 unbundles the primary block record 702 and processes the resulting records 602 individually.
  • each Secondary Router 320 refers to a Routing Distribution Table 322 (that is identical to the Routing Distribution Table 316 connected to the Primary Router 314) to determine the destinations of the record 602.
  • the Secondary Router 320 performs this task by retrieving from the Routing Distribution Table 322 all entries having a service type (field 404) that is equal to the service type of the record 602 being processed, for all NIDS servers that the region serves.
  • the Secondary Router 320 then extracts from these entries the identities of the Send Queues 324 (field 410).
  • the Secondary Router 320 replicates the record 602, and then converts as necessary the replicated records so as to be compatible with the entities (i.e., computers) that will be accessing the destination NIDS 326. (Since conversion operations are not always necessary, this functionality is considered to be optional to the present invention.) Details of such conversion operations are well known.
  • the Secondary Router 320 then writes the converted records to the Send Queues 324 previously identified by reference to the Routing Distribution Table 316.
  • Routing Distribution Tables 322 coupled to the Secondary Routers 320 are identical to the Routing Distribution Table 316 coupled to the Primary Router 314. In some embodiments, where the Primary Region 304 and the Secondary Regions 306 are implemented in the same computer, the Routing Distribution Tables 316, 322 represent the same table. In other embodiments, where the Primary Region 304 and the Secondary Regions 306 are implemented in different computers, the Routing Distribution Table 316 is distributed to the Secondary Regions 306. Alternatively, the Secondary Regions 306 are provided with a mechanism for accessing the Routing Distribution Table 316 in the Primary Region 304, such as a module that enables the Secondary Routers 320 to make remote database queries to the Routing Distribution Table 316 in the Primary Region 304.
  • the present invention supports two operational modes for the Secondary Router 320: non-block processing (also called single processing) and block processing.
  • While in the non-block processing mode, the Secondary Router 320B reads a record 602 or a primary block record 702 from the Secondary Router Queue 318B. If the Secondary Router 320B reads a record 602, it determines the destinations of the record 602 (as described above), replicates and converts the record 602, and then writes the replicated records to the appropriate Send Queues 324.
  • If the Secondary Router 320B reads a primary block record 702, it unbundles the primary block record 702 into individual records 602 and processes the records individually. For each record 602, the Secondary Router 320B determines the destinations of the record 602 (as described above), replicates and converts the record 602, and then writes the replicated records to the appropriate Send Queues 324.
  • the Secondary Router 320B repeats this procedure for each record 602 unbundled from the primary block record 702. In other words, the Secondary Router 320B processes a single record 602 at a time.
  • While in the block processing mode, the Secondary Router 320B attempts to process multiple records 602 at a time.
  • the Secondary Router 320B reads multiple records 602 or block records 702 from the Secondary Router Queue 318B, unbundles block records 702 into individual records 602, optionally converts the records, bundles the records 602 into a secondary block record 1102, determines the destinations of the secondary block record 1102 (as described above), replicates the secondary block record 1102, and then writes the replicated records to the appropriate Send Queues 324.
  • a secondary block record 1102 can contain only records 602 of the same service type. Also, there is a limit to the number of records 602 (of the same service type) that can be bundled into a secondary block record 1102.
  • Routing Distribution Table 322 actually has a fourth type of row. Each Secondary Router 320 has one of these rows (row 414M). These rows can be accessed by Secondary Router Queue name (field 410). Each of these rows has a Confirm Level field 412, which indicates the maximum bundle size for block records generated by the associated Secondary Router 320.
  • An exemplary format of a secondary block record 1102 is shown in FIG. 11.
  • FIG. 11 corresponds to FIG. 7.
  • Field 1106 identifies the service type of the secondary block record 1102.
  • a service type of 'q-block-recs' indicates that the record is a block record.
  • Field 1108 identifies the total size of the secondary block record 1102, and field 1110 indicates the number of records 602 that are bundled in the secondary block record 1102.
  • Each record 602 in the secondary block record 1102 has a Record Data field 1114 or 1118 that stores the record 602, and a Length of Record field 1112 or 1116 that stores the length of the record 602.
  • While in the block processing mode, the Secondary Router 320B continues to read records 602 and block records 702 from the Secondary Router Queue 318B, unbundle the block records 702, optionally convert the records 602, and add the records 602 to the secondary block record 1102, until one of the following occurs: (1) the maximum bundle size is reached (i.e., the maximum number of records 602 in the secondary block record 1102); (2) the record 602 read from the Secondary Router Queue 318B has a service type that is different from the service type of the secondary block record 1102 being constructed; (3) the Secondary Router Queue 318B becomes empty; or (4) an internal buffer limit preferably equal to 8153 bytes is reached.
  • In the preferred embodiment, the internal buffer for the secondary block record 1102 is the same size as, or larger than, the internal buffer for the primary block record 702. It is noted that any other, implementation-dependent internal buffer limit value could alternatively be used. If any of these conditions occurs, the Secondary Router 320B completes the secondary block record 1102. This processing is discussed in greater detail below.
  • FIGS. 8A, 8B, 9A, and 9B collectively illustrate flowcharts 802, 902 that represent the operation of the present invention.
  • an Application Program 308 generates a data record 602 of a particular service type.
  • In this example, the service type is Service 2.
  • step 808 the Application Program 308 refers to the Master Control Table.
  • step 810 the Application Program 308 determines that the Primary Router Queue 312B is associated with Service 2.
  • the Application Program 308 writes the data record 602 into the Primary Router Queue 312B. This is a local write operation.
  • step 812 the Primary Router 314 reads the data record 602 from the Primary Router Queue 312B. This is a local read operation.
  • step 814 the Primary Router 314 determines whether block processing is enabled. At any given time, either block processing or non-block processing by the Primary Router 314 is enabled. An administrator may set the current mode, for example. Alternatively, some systems may only support one or the other mode. In some embodiments, step 814 is not actually performed. Instead, non-block processing is established by starting a first process, and block processing is established by starting a second process. If block processing of the Primary Router 314 is not enabled, then step 832 (FIG. 8B) is processed. Step 832 is described below. If block processing of the Primary Router 314 is enabled, step 816 is processed. In step 816, the Primary Router 314 initializes a new primary block record 702.
  • Field 706 is set to the default blocking service name "q-block-recs". The entire record 602 is placed into field 714. Field 712 is set equal to the length of the record 602.
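The initialization just described might be sketched as follows; modeling the block as a dict keyed by the FIG. 7 field numbers is purely an illustrative assumption:

```python
def init_primary_block_record(first_record):
    """Seed a new primary block record from the first record 602:
    field 706 gets the blocking service name, the record goes into
    the first Record Data slot, and its length into the first
    Length of Record slot. Fields 708/710 are filled in only when
    the block is completed (step 830)."""
    return {
        "706": "q-block-recs",           # service type of the block itself
        "708": None,                     # total size: set at completion
        "710": None,                     # record count: set at completion
        "lengths": [len(first_record)],  # fields 712, 716, ...
        "records": [first_record],       # fields 714, 718, ...
    }

# Usage: start a block from a three-byte record.
new_blk = init_primary_block_record(b"abc")
```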
  • the Primary Router 314 accesses the Routing Distribution Table 316 and determines the maximum bundle size.
  • the Primary Router 314 does this by searching in the Routing Distribution Table 316 until it finds a row where field 410 is equal to the name of its Primary Router Queue, i.e., Primary Router Queue 312B. In the example of FIG. 4, the Primary Router 314 finds row 414L.
  • the Primary Router 314 uses the value in the Confirm Level field 412 of this row as the maximum bundle size. For this example, the Primary Router 314 has a maximum bundle size of 10.
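The Confirm Level lookup can be sketched as a linear scan over the table rows; representing rows as dicts keyed by field number, and the queue names used below, are assumptions for illustration:

```python
def max_bundle_size(routing_table, queue_name):
    """Return the Confirm Level (field 412) of the row whose field 410
    matches the router's own queue name: the maximum bundle size."""
    for row in routing_table:
        if row["410"] == queue_name:
            return row["412"]
    raise LookupError("no row for queue %r" % queue_name)

# A toy Routing Distribution Table: field 404 is the service type,
# field 410 the queue name, field 412 the Confirm Level.
table = [
    {"404": "Service 1", "410": "SRQ-318A", "412": 5},
    {"404": None,        "410": "PRQ-312B", "412": 10},  # router's own row
]
```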
  • step 820 the Primary Router 314 determines whether the maximum bundle size of the primary block record 702 has been reached. In other words, the Primary Router 314 determines whether the number of records 602 bundled into the primary block record 702 is equal to the maximum bundle size. If the maximum bundle size has been reached, then step 830 (FIG. 8B) is performed. In step 830, the Primary Router 314 completes the primary block record 702 by filling in the appropriate data for fields 708 and 710 of the primary block record 702. Control then passes to step 832, described below. If, in step 820, the Primary Router 314 determines that the maximum bundle size has not been reached, then step 822 is performed.
  • step 822 the Primary Router 314 attempts to read another data record 602 from the Primary Router Queue 312B (preferably, this step is not performed during the first iteration through the flowchart).
  • step 824 the Primary Router 314 determines whether the Primary Router Queue 312B is empty. If the Primary Router Queue 312B is empty, then step 830 is performed, as described above. Otherwise, step 826 is performed.
  • step 826 the Primary Router 314 determines whether the new data record 602 read from the Primary Router Queue 312B is of the same service type as the service type of the primary block record 702 being constructed. If the service type is different, then step 830 is performed, as described above. Otherwise, step 829 is performed.
  • step 829 the Primary Router 314 determines if the size of the primary block record 702 would exceed the maximum buffer size (preferably 8153 bytes) if the new data record 602 was added to the primary block record 702. Step 830 (described above) is performed if the maximum buffer size would be exceeded. Otherwise, step 828 is performed. In step 828, the Primary Router 314 adds the new data record 602 to the primary block record 702. This is done by appropriately filling in fields 716 and 718 of the primary block record 702. Control then returns to step 820, described above.
  • Step 832 is performed if block processing of the Primary Router 314 is not enabled (step 814), or if the primary block record 702 has been completed (step 830).
  • the Primary Router 314 accesses the Routing Distribution Table 316 to identify the Secondary Router Queues 318 that service Secondary Regions 306 that are coupled to NIDSs 326 that support the service type of the data record 602 (for block processing, any of the data records 602 that were placed into the primary block record 702 could be used, since they all have the same service type). This operation is described further above.
  • step 834 the Primary Router 314 replicates the data record 602 (for non-block processing) or the primary block record 702 (for block processing). One copy is made for each Secondary Router Queue 318 identified in step 832.
  • step 836 the Primary Router 314 serially writes (via local writes) the replicated data records or block records to the Secondary Router Queues 318 identified in step 832.
  • Steps 835 and 837 are then performed. These steps are related to steps 826 and 829. Steps 826 and 829 previously determined whether the new record 602 that was read in step 822 should be added to the primary block record 702 being created. If either step 826 or step 829 was found to be True, then the new record 602 was not added to the primary block record 702. If this is the case, then the new record 602 should be used to create a new primary block record 702. This logic is represented by steps 835 and 837. In step 835, the Primary Router 314 determines whether block processing is enabled. If it is not enabled, then control returns to step 812.
  • step 837 the Primary Router 314 determines whether the new record 602 has a different service type than the primary block record 702, or whether the addition of the new record 602 would have made the primary block record 702 exceed the maximum buffer size. In other words, the Primary Router 314 in step 837 determines whether either step 826 or 829 was determined to be true. If either was true, control passes to step 838, where a new primary block record 702 is initialized, and control then returns to step 829. Otherwise, control returns to step 812.
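The hand-off logic of steps 835 through 838 reduces to a small decision function. This is a simplified model: the step labels are returned as strings purely for illustration, and `carry_record` names the record that was rejected in step 826 or 829 (None if every record read was bundled):

```python
def next_action(block_enabled, carry_record):
    """After the replicated records are written out (step 836), decide
    where flowchart 802 goes next: back to reading the queue (step
    812), or to seeding a new primary block record from the rejected
    record (step 838)."""
    if not block_enabled:
        return "step 812"          # step 835: non-block mode, read again
    if carry_record is not None:
        return "step 838"          # step 837 true: start a new block
    return "step 812"              # block finished cleanly, read again
```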
  • Flowchart 902 in FIGS. 9A and 9B shall now be described. Note that the steps in flowchart 902 are performed in parallel by the Secondary Routers 320 in all of the Secondary Regions 306. For illustrative purposes, flowchart 902 is described below with reference to the Secondary Router 320A in Secondary Region 306A.
  • step 906 the Secondary Router 320A reads an item from its Secondary Router Queue 318A. This is a remote read operation.
  • step 908 the Secondary Router 320A determines whether this item is a data record 602, or a primary block record 702. If the item is a primary block record 702, then it will have "q-block-recs" in field 706. If the item is a primary block record 702, then step 916 is processed (described below). Otherwise, step 910 is processed.
  • the Secondary Router 320A accesses the Routing Distribution Table 322A to identify any of its Send Queues 324 that service any of its NIDSs 326 that support the service type of the data record 602.
  • step 912 the Secondary Router 320A replicates the data record 602, and converts the replicated data records as necessary (as described above).
  • step 914 the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 910.
  • the Send Processes 325 write the data records to their respective NIDSs 326.
  • Flowchart 902 is complete after step 914 is performed, as indicated by step 928.
  • step 916 the Secondary Router 320A unbundles the primary block record 702 to produce one or more data records 602.
  • This unbundling process involves extracting data from the primary block record 702, and then packaging the data as necessary into data records 602 having the format shown in FIG. 6. The manner in which this unbundling operation is performed will be apparent to persons skilled in the relevant art(s).
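Assuming an illustrative byte layout for the FIG. 7 block (a 16-byte service-type field, two 4-byte big-endian integers for total size and record count, then length-prefixed records), the unbundling just described walks the length/data pairs in order; the encoding is an assumption, as the patent specifies only the field order:

```python
import struct

def unbundle_block_record(block):
    """Unpack a primary block record back into its individual records
    602 (as bytes), the inverse of the bundling in step 828."""
    assert block[:12] == b"q-block-recs"              # field 706
    total, count = struct.unpack(">II", block[16:24]) # fields 708, 710
    records, offset = [], 24
    for _ in range(count):
        (length,) = struct.unpack(">I", block[offset:offset + 4])
        offset += 4
        records.append(block[offset:offset + length])
        offset += length
    return records

# Usage: round-trip a hand-built two-record block.
blk = (b"q-block-recs".ljust(16, b"\x00")
       + struct.pack(">II", 46, 2)
       + struct.pack(">I", 4) + b"rec1"
       + struct.pack(">I", 10) + b"record-two")
recs = unbundle_block_record(blk)
```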
  • the Secondary Router 320A then individually processes each of the data records 602. Specifically, in step 918, the Secondary Router 320A selects one of the data records 602.
  • step 920 the Secondary Router 320A accesses the Routing Distribution Table 322 to identify any of its Send Queues 324 that service any of its NIDSs 326 that support the service type of the selected data record 602.
  • step 920 need only be performed once for all of the data records 602 obtained from the primary block record 702.
  • the Routing Distribution Table 322 is accessed for each data record 602. In other embodiments, the Routing Distribution Table 322 is accessed only once for the data records 602 in a primary block record 702.
  • step 922 the Secondary Router 320A replicates the selected data record 602, and converts the replicated data records as necessary (as described above).
  • step 924 the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 920.
  • Send Processes 325 write the data records to the NIDSs 326.
  • step 926 the Secondary Router 320A determines whether there are any additional unbundled data records 602 to be processed. If there are additional data records 602 to process, then control returns to step 918 (described above). Otherwise, flowchart 902 is complete, as indicated by step 928.
  • The descriptions above assumed that the Secondary Routers 320 operate only in a non-block mode. In an alternate embodiment, the Secondary Routers 320 operate in either a non-block mode or a block mode.
  • the operation of the Secondary Routers 320 according to this alternate embodiment is represented by flowchart 1002 in FIGS. 10A-10D. Note that the steps in flowchart 1002 are performed in parallel by the Secondary Routers 320 in all of the Secondary Regions 306. For illustrative purposes, flowchart 1002 is described below with reference to the Secondary Router 320A in Secondary Region 306A. In step 1006, the Secondary Router 320A reads an item from its Secondary Router Queue 318A. This is a remote read operation.
  • step 1008 the Secondary Router 320A determines whether there are any data records 602 or block records 702 to process in the Secondary Router Queue 318A.
  • step 1010 the Secondary Router 320A determines whether this item is a data record 602, or a primary block record 702. If the item is a primary block record 702, then it will have "q-block-recs" in field 706. If the item is a primary block record 702, then step 1020 is processed (described below). Otherwise, step 1012 is processed.
  • step 1012 the Secondary Router 320A determines whether secondary block processing is enabled. At any given time in the Secondary Router 320A, either block processing or non-block processing is enabled. An administrator may set the current mode, for example. Alternatively, some systems may only support one or the other mode. In some embodiments, step 1012 is not actually performed. Instead, non-block processing in the Secondary Router 320A is established by starting a first process, and block processing in the Secondary Router 320A is established by starting a second process. If secondary block processing is not enabled, then step 1014 is processed. If secondary block processing is enabled, step 1050 (FIG. 10C) is processed.
  • the Secondary Router 320A accesses the Routing Distribution Table 322A to identify any of its Send Queues 324 that service any of its NIDSs 326 that support the service type of the data record 602.
  • the Secondary Router 320A replicates the data record 602, and converts the replicated data records as necessary (as described above).
  • step 1018 the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 1014.
  • the Send Processes 325 write the data records to their respective NIDSs 326. Control then returns to step 1006.
  • If the Secondary Router 320A determines in step 1010 that the item read from the Secondary Router Queue 318A is a primary block record 702, then step 1020 is performed.
  • step 1020 the Secondary Router 320A unbundles the primary block record 702 to produce one or more data records 602.
  • This unbundling process involves extracting data from the primary block record 702, and then packaging the data as necessary into data records 602 having the format shown in FIG. 6. The manner in which this unbundling operation is performed will be apparent to persons skilled in the relevant art(s).
  • the Secondary Router 320A then individually processes each of the data records 602. Specifically, in step 1022, the Secondary Router 320A selects one of the data records 602.
  • step 1024 the Secondary Router 320A determines whether secondary block processing is enabled (as described in step 1012). If secondary block processing is not enabled, then step 1026 is processed. If secondary block processing is enabled, step 1028 (FIG. 10B) is processed.
  • the Secondary Router 320A accesses the Routing Distribution Table 322A to identify any of its Send Queues 324 that service any of its NIDSs 326 that support the service type of the selected data record 602.
  • step 1026 need only be performed once for all of the data records 602 obtained from the primary block record 702.
  • the Routing Distribution Table 322A is accessed for each data record 602. In other embodiments, the Routing Distribution Table 322A is accessed only once for the data records 602 in a primary block record 702. Step 1080 (FIG. 10D) is then processed.
  • step 1080 the Secondary Router 320A replicates the selected data record 602, and converts the replicated data records as necessary (as described above).
  • step 1082 the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 1026.
  • the Send Processes 325 write the data records to the NIDSs 326.
  • step 1084 the Secondary Router 320A determines whether there are any additional unbundled data records 602 to process. If there are additional data records 602 to process, then control returns to step 1022 (described above). Otherwise, control returns to step 1006 (FIG. 10A).
  • If the Secondary Router 320A determines in step 1012 that secondary block processing is enabled, then step 1050 (FIG. 10C) is performed.
  • step 1050 the Secondary Router 320A initializes a new secondary block record 1102.
  • Field 1106 is set to the default blocking service name "q-block-recs".
  • the entire record 602 is placed into field 1114.
  • Field 1112 is set equal to the length of the record 602.
  • step 1052 the Secondary Router 320A accesses the Routing Distribution Table 322A and determines the maximum bundle size.
  • the Secondary Router 320A does this by searching in the Routing Distribution Table 322A until it finds a row where field 410 is equal to the name of its Secondary Router Queue, i.e., Secondary Router Queue 318A. In the example of FIG. 4, the Secondary Router 320A finds row 414M. The Secondary Router 320A uses the value in the Confirm Level field 412 of this row as the maximum bundle size.
  • the Secondary Router 320A has a maximum bundle size of 10.
  • step 1054 the Secondary Router 320A determines whether the maximum bundle size of the secondary block record 1102 has been reached. In other words, the Secondary Router 320A determines whether the number of records 602 bundled into the secondary block record 1102 is equal to the maximum bundle size. If the maximum bundle size has been reached, then step 1068 is performed. In step 1068, the Secondary Router 320A completes the secondary block record 1102 by filling in the appropriate data for fields 1108 and 1110 of the secondary block record 1102. Control then passes to step 1070, described below.
  • If, in step 1054, the Secondary Router 320A determines that the maximum bundle size has not been reached, then step 1056 is performed. In step 1056, the Secondary Router 320A attempts to read another data record 602 from the Secondary Router Queue 318A (preferably, this step is not performed during the first iteration through the flowchart).
  • step 1058 the Secondary Router 320A determines whether the Secondary Router Queue 318A is empty. If the Secondary Router Queue 318A is empty, then step 1068 is performed, as described above. Otherwise, step 1060 is performed.
  • step 1060 the Secondary Router 320A determines whether the new data record 602 read from the Secondary Router Queue 318A is of the same service type as the service type of the secondary block record 1102 being constructed. If the service type is different, then step 1068 is performed, as described above. Otherwise, step 1061 is performed.
  • step 1061 the Secondary Router 320A determines if the new data record 602 read from the Secondary Router Queue 318A is a primary block record 702. Step 1068 (described above) is performed if the new data record 602 is a primary block record 702. Otherwise, step 1062 is performed.
  • step 1062 the Secondary Router 320A converts the data record 602 as necessary (as described above).
  • step 1064 the Secondary Router 320A determines if the size of the secondary block record 1102 would exceed the maximum buffer size (preferably 8153 bytes) if the new data record 602 was added to the secondary block record 1102. Step 1068 (described above) is performed if the maximum buffer size would be exceeded. Otherwise, step 1066 is performed.
  • step 1066 the Secondary Router 320A adds the new data record 602 to the secondary block record 1102. This is done by appropriately filling in fields 1116 and 1118 of the secondary block record 1102. Control then returns to step 1054, described above.
  • Step 1070 is performed if the secondary block record 1102 has been completed (step 1068).
  • the Secondary Router 320A accesses the Routing Distribution Table 322A to identify the Send Queues 324 that are coupled to NIDSs 326 that support the service type of the data record 602 (for secondary block processing, any of the data records 602 that were placed into the secondary block record 1102 could be used, since they all have the same service type). This operation is described further above.
  • step 1072 the Secondary Router 320A replicates the secondary block record 1102. One copy is made for each Send Queue 324 identified in step 1070.
  • step 1074 the Secondary Router 320A serially writes (via local writes) the replicated data records or block records to the Send Queues 324 identified in step 1070.
  • Step 1086 (FIG. 10D) is then performed.
  • Step 1086 is related to steps 1054, 1060, and 1064. Steps 1054, 1060, and 1064 previously determined whether the new record 602 that was read in step 1056 should be added to the secondary block record 1102 being created. If any one of steps 1054, 1060, or 1064 was found to be true, then the new record 602 was not added to the secondary block record 1102, and the new record 602 should instead be used to create a new secondary block record 1102. This logic is represented by step 1086.
  • step 1086 the Secondary Router 320A determines whether the new record 602 has a different service type than the secondary block record 1102, or whether the addition of the new record 602 would have made the secondary block record 1102 exceed the maximum buffer or bundle size. In other words, the Secondary Router 320A in step 1086 determines whether any one of step 1054, 1060, or 1064 were determined to be true. If any of these conditions are true, then control passes to step 1092, where a new secondary block record 1102 is initialized. Control then passes to step 1058. Otherwise, control returns to step 1008 (FIG. 10A).
  • If the Secondary Router 320A determines in step 1024 (FIG. 10A) that secondary block processing is enabled, then step 1028 is performed.
  • step 1028 the Secondary Router 320A initializes a new secondary block record 1102.
  • Field 1106 is set to the default blocking service name "q-block-recs".
  • the entire record 602 is placed into field 1114.
  • Field 1112 is set equal to the length of the record 602.
  • step 1030 the Secondary Router 320A accesses the Routing Distribution Table 322A and determines the maximum bundle size.
  • the Secondary Router 320A does this by searching in the Routing Distribution Table 322A until it finds a row where field 410 is equal to the name of its Secondary Router Queue, i.e., Secondary Router Queue 318A. In the example of FIG. 4, the Secondary Router 320A finds row 414M. The Secondary Router 320A uses the value in the Confirm Level field 412 of this row as the maximum bundle size.
  • the Secondary Router 320A has a maximum bundle size of 10.
  • step 1032 the Secondary Router 320A determines whether the maximum bundle size of the secondary block record 1102 has been reached. In other words, the Secondary Router 320A determines whether the number of records 602 bundled into the secondary block record 1102 is equal to the maximum bundle size. If the maximum bundle size has been reached, then step 1044 is performed. In step 1044, the Secondary Router 320A completes the secondary block record 1102 by filling in the appropriate data for fields 1108 and 1110 of the secondary block record 1102. Control then passes to step 1046, described below.
  • step 1034 the Secondary Router 320A attempts to select another data record 602 from the unbundled block (preferably, this step is not performed during the first iteration through the flowchart).
  • step 1036 the Secondary Router 320A determines whether any more records from the unbundled block remain. If no more records from the unbundled block remain, then step 1044 is performed, as described above. Otherwise, step 1038 is performed.
  • step 1038 the Secondary Router 320A converts the data record 602 as necessary (as described above).
  • step 1040 the Secondary Router 320A determines if the size of the secondary block record 1102 would exceed the maximum buffer size (preferably 8153 bytes) if the new data record 602 was added to the secondary block record 1102. Step 1044 (described above) is performed if the maximum buffer size would be exceeded. Otherwise, step 1042 is performed.
  • step 1042 the Secondary Router 320A adds the new data record 602 to the secondary block record 1102. This is done by appropriately filling in fields 1116 and 1118 of the secondary block record 1102. Control then returns to step 1032, described above.
  • Step 1046 is performed if the secondary block record 1102 has been completed (step 1044).
  • the Secondary Router 320A accesses the Routing Distribution Table 322A to identify the Send Queues 324 that are coupled to NIDSs 326 that support the service type of the data record 602 (for secondary block processing, any of the data records 602 that were placed into the secondary block record 1102 could be used, since they all have the same service type). This operation is described further above.
  • step 1048 the Secondary Router 320A replicates the secondary block record 1102. One copy is made for each Send Queue 324 identified in step 1046. Control then passes to step 1076 (FIG. 10D).
In step 1076, the Secondary Router 320A serially writes (via local writes) the replicated block records to the Send Queues 324 identified in step 1046. Step 1078 is then performed.
Step 1078 is related to steps 1032 and 1040. Step 1032 previously determined whether the maximum bundle size had been reached. Step 1040 previously determined whether the new record 602 that was read in step 1034 and was being added to the secondary block record 1102 would fill up the internal buffer for the secondary block record 1102. If step 1040 was found to be true, then the new record 602 was not added to the secondary block record 1102. If this is the case, then the new record 602 should be used to create a new secondary block record 1102. This logic is represented by step 1078.
In step 1078, the Secondary Router 320A determines whether the maximum bundle size would have been exceeded, or whether the new record 602 would have made the secondary block record 1102 exceed the maximum buffer size. In other words, in step 1078 the Secondary Router 320A determines whether step 1032 or step 1040 was found to be true. If either was true, then control passes to step 1079. Otherwise, control returns to step 1006 (FIG. 10A).
In step 1079, the Secondary Router 320A initializes the buffer, creating a new secondary block record 1102 (as described in step 1028), and adding the new data record 602 to the secondary block record 1102 (as also described in step 1028) if the maximum buffer size was exceeded in step 1040. Control then passes to step 1090. In step 1090, if the maximum bundle size was reached, then control returns to step 1034. Otherwise, control returns to step 1038 (FIG. 10B).
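The bundling loop of steps 1028 through 1090 can be sketched as follows. This is a minimal illustration, not the patented CICS implementation: the byte-string record representation, the helper names, and the value of `MAX_BUNDLE_SIZE` are assumptions for the sketch; only the 8153-byte buffer limit comes from the text above.

```python
MAX_BUNDLE_SIZE = 10     # assumed configurable limit; the text gives no preferred value
MAX_BUFFER_BYTES = 8153  # preferred maximum buffer size stated in step 1040

def convert(record: bytes) -> bytes:
    """Placeholder for the endpoint-specific conversion of step 1038."""
    return record

def bundle_records(unbundled_records):
    """Bundle records 602 into secondary block records 1102 (steps 1028-1090)."""
    blocks = []
    current, size = [], 0                      # step 1028: initialize a new block
    for record in unbundled_records:           # step 1034: select the next record
        data = convert(record)                 # step 1038: convert as necessary
        # Step 1040: would adding this record exceed the maximum buffer size?
        if current and size + len(data) > MAX_BUFFER_BYTES:
            blocks.append(current)             # step 1044: complete the block
            current, size = [], 0              # step 1079: start a fresh block
        current.append(data)                   # step 1042: add the record
        size += len(data)
        if len(current) == MAX_BUNDLE_SIZE:    # step 1032: bundle size reached?
            blocks.append(current)             # step 1044
            current, size = [], 0              # step 1079
    if current:                                # step 1036: no records remain
        blocks.append(current)
    return blocks
```

With 25 records of 1000 bytes each, the 8153-byte limit closes each block after 8 records; with small records, the bundle-size limit closes each block after 10.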
The invention of FIG. 3 provides two key advantages.
The first key advantage is reliability: the Secondary Router Queues 318 within the Primary Region 304 serve as a buffer to the Secondary Regions 306, so that if a Secondary Region 306 fails, the Primary Router 314 may continue to distribute data records to the other Secondary Regions 306.
The second key advantage is a significant improvement in the efficiency and performance of the data distribution process. Specifically, the present invention improves the throughput of data. This is accomplished by reducing the number of endpoints that each parallel Secondary Router 320 must service, and by reducing the number of remote (inter-region) writes. This is described further with the following example.
Benchmark tests can be conducted that result in comparisons between the time required for remote writes, remote reads, local writes, and local reads. For example, it has been found in some instances that a remote read (a data read across different CICS regions) is four times more costly than a local read (a data read within a single CICS region). Sample throughput calculations are shown below, first under the assumption that remote operations are more costly than local operations, and then under the assumption that remote operations are equal in cost to local operations.
Each set of calculations is for an example DDS in which data must be distributed to 80 NIDSs, using five Secondary Regions, each supporting 16 NIDSs.
For these calculations, an arbitrary unit of time is used to make relative comparisons; it is referred to as a time factor (tf).
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320, performed in parallel, equals 4 tf;
d) 80 local writes, 1 by each Secondary Router 320 to each Send Queue 324, performed in 5 parallel groups of 16 each, equals 16 tf.
The first additional advantage of having the Secondary Routers 320 perform bundling is improved performance of the overall Data Distribution System, in terms of time. The time saved is in the input/output process of reading data from and writing data to permanent storage, such as a magnetic disk or a Direct Access Storage Device (DASD).
For example, the process of performing a local write may require a mechanical process such as positioning a head on a disk at the right location, and moving the head to write the data to the disk. Time is saved by performing such a process only once for 10 bundled records rather than 10 times for 10 separate records. This is described further with examples below.
The second additional advantage realized is the time saved in the number of database queries required. For each record 602 or block 1102 that it routes, the Secondary Router 320 needs to query its Routing Distribution Table 322 to determine routing and to formulate a header for the record 602 or block 1102. Without bundling, the Secondary Router 320 performs this query for each individual record 602. With bundling, the Secondary Router 320 performs this query once for each block 1102, since each record 602 in a block 1102 is being routed to the same NIDS server 326. This reduces by several times the number of Routing Distribution Table 322 queries required.
In addition, block records 1102 contain only one index per bundle of records, whereas without bundling one index is required for each data record 602.
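The reduction in Routing Distribution Table 322 queries (and, equally, in per-record indexes) is simple arithmetic, sketched below. The function name and the example numbers are illustrative, not from the patent.

```python
import math

def routing_table_queries(num_records: int, bundle_size: int) -> int:
    """Routing Distribution Table 322 queries needed to route num_records records.

    Without bundling (bundle_size == 1), one query per record 602 is needed;
    with bundling, one query per block 1102 suffices, since every record in a
    block shares the same service type and destination.
    """
    return math.ceil(num_records / max(bundle_size, 1))

# Illustrative numbers: 50 records, 10 records per block.
assert routing_table_queries(50, 1) == 50   # unbundled: one query per record
assert routing_table_queries(50, 10) == 5   # bundled: one query per block
```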
The present invention of FIG. 3, with bundling in the primary region, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314, equals 50 tf;
b) 5 local writes of the bundled record by the Primary Router 314 to the Secondary Router Queues 318, equals 5 tf;
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320, equals 4 tf;
d) 50 local writes by each Secondary Router 320 to 16 Send Queues 324, equals 800 tf.
With bundling in both the primary and secondary regions, the following is required:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314, equals 50 tf;
b) 5 local writes of the bundled record by the Primary Router 314 to the Secondary Router Queues 318, equals 5 tf;
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320, equals 4 tf;
d) 1 local write of the bundled record by each Secondary Router 320 to 16 Send Queues 324, equals 16 tf.
The present invention of FIG. 3, without bundling in either the primary or secondary regions, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314, equals 50 I/O operations;
b) 50 local writes by the Primary Router 314 to each of the 5 Secondary Router Queues 318, equals 250 I/O operations;
c) 50 remote reads of each Secondary Router Queue 318 by a Secondary Router 320, equals 200 I/O operations (note: the secondary routers read in parallel);
d) 50 local writes by each Secondary Router 320 to each Send Queue 324, equals 750 I/O operations (note: the secondary routers write in parallel).
With bundling, the corresponding operations are:
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320, equals 4 I/O operations;
d) 1 local write by each Secondary Router 320 to 15 Send Queues 324, equals 15 I/O operations.
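The arithmetic behind these a)–d) counts can be reproduced in a short sketch. The constants mirror the example above (50 records, 5 Secondary Regions, 15 Send Queues per region, a remote operation counted as 4 local I/O operations); the function name and the reduction of a bundle to a single logical record are illustrative assumptions.

```python
REMOTE_COST = 4          # a remote operation counted as ~4 local I/O operations
RECORDS = 50             # records to distribute in the example
REGIONS = 5              # Secondary Regions
QUEUES_PER_REGION = 15   # Send Queues per Secondary Region, as in the example

def io_operations(bundled: bool) -> int:
    """Total I/O operations for one batch, following items a) through d)."""
    units = 1 if bundled else RECORDS        # a bundle moves as a single record
    a = RECORDS                              # local reads of the Primary Router Queue
    b = units * REGIONS                      # local writes to Secondary Router Queues
    c = units * REMOTE_COST                  # remote reads (regions read in parallel)
    d = units * QUEUES_PER_REGION            # local writes to Send Queues (in parallel)
    return a + b + c + d

print(io_operations(bundled=False))  # 1250 I/O operations without bundling
print(io_operations(bundled=True))   # 74 I/O operations with full bundling
```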

Abstract

A system and a method for distributing data are described. A Primary Router (314) in a Primary Region (304) performs a local read operation to read a data record from a Primary Router Queue (312A-312C), also in the Primary Region (304). The Primary Router (314) identifies Secondary Router Queues (318A-318C) that are associated with Secondary Regions (324A, 324E, 324H) that are coupled to servers (326A-326I) that support a service type of the data record.

Description

Enhanced Hierarchical Data Distribution System Using Bundling in Multiple Routers
Background of the Invention
Field of the Invention
The present invention generally relates to computer databases, and more particularly relates to the distribution of data among multiple computer databases.
Related Art
Distribution of data from a source computer to a plurality of target computers is a common function of many different industries. For example, in telecommunications, in which intelligent networks are used for enhanced call processing, a plurality of Network Information Distribution Servers (NIDSs) are used to provide call processing data to an intelligent network. Call processing data is generated by application programs running on a source computer, often a mainframe computer. This data must then be distributed to a plurality of NIDSs (running on target computers), so that the data can be available for the intelligent network to perform call processing.
The large volumes of data that must be distributed to the NIDSs for enhanced call processing place strict requirements for efficiency and performance on systems used for data distribution. A data distribution system must be able to distribute large volumes of data among a plurality of NIDSs quickly and efficiently. A data distribution system must also provide reliability to ensure that data gets distributed to all NIDSs and other client systems that require it.
FIG. 1 illustrates a conventional environment 102 in which data distribution takes place. Application Programs 106 generate call processing data that need to be distributed to a plurality of Network Information Distribution Servers (NIDSs) 118. The NIDSs 118 are used to provide call processing data to intelligent network applications (not shown). The Application Programs 106 reside on a computer 104.
The computer 104 may be logically divided into multiple computing regions. Each of the computing regions represents a unique address space. The computing regions contend for the same computer resources in the computer. For example, the computing regions contend for use of the central processing units (CPUs) within the computer. Typically, the computing regions are each allocated a finite number of CPU time slices during any given time period. The computer 104 may be an IBM mainframe computer running the Customer Information Control System (CICS), which is an on-line transaction processing system in common use in many applications, and which supports multiple CICS regions (computing regions). CICS is well known, and is described in many publicly available documents.
Data generated by the Application Programs 106 are included in data records. Each record has a service type. With regard to the telecommunications environment, different service types are associated with different call processing services. For example, Service Type X might be associated with a collect calling service. An Application Program 106 that generates call processing data for the collect calling service places the call processing data in a data record, and sets the service type of the data record equal to Service Type X. As another example,
Service Type Y might be associated with a credit card calling service. An Application Program 106 that generates call processing data for the credit card calling service places the call processing data in a data record, and sets the service type of the data record equal to Service Type Y. In a general sense, a service type is analogous to a data type or file type.
Thus, the terms service type, data type, and file type are used interchangeably herein.
The Application Programs 106 write the data records into a Primary Router Queue 110. A Primary Router 112 reads the records from the Primary Router Queue 110. For each record read from the Primary Router Queue 110, the Primary Router 112 identifies the NIDSs 118 that support (or accept) the service associated with the service type of the record (the Primary Router 112 refers to information in a Routing Distribution Table 114 to perform this function). The Primary Router 112 determines that the record is to be distributed to these identified NIDSs 118. For example, suppose NIDSs 118A, 118C, and 118I support the credit card calling service. Given this scenario, the Primary Router 112 determines that Service Type Y records are to be distributed to NIDSs 118A, 118C, and 118I.
After identifying the NIDSs 118 to which a record is to be distributed, the Primary Router 112 replicates the record, converts the replicated records as required for each respective end point (i.e., each respective NIDS 118 that is to receive the record), and writes the converted records into Send Queues 116 associated with the identified NIDSs 118. With reference to the above example, the Primary Router 112 writes Service Type Y records into Send Queues 116A, 116C, and 116I associated with NIDSs 118A, 118C, and 118I, respectively.
Each Send Queue 116 triggers a Send Process 117. The Send Process 117 is a client application which establishes a connection/session/conversation with its respective NIDS 118, reads records from its respective Send Queue 116, and sends the records to its NIDS 118 through an appropriate protocol, such as SNA LU6.2.
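The per-record flow of FIG. 1 (table lookup, replication, and a write per Send Queue) can be sketched as follows. The table contents follow the Service Type Y example above; the plain-list queues and the function name are illustrative assumptions, not the patented implementation.

```python
# Routing Distribution Table 114, sketched as a mapping from service type to
# the NIDSs 118 that support that service (entries per the example above).
ROUTING_TABLE = {"Service Y": ["118A", "118C", "118I"]}

# One Send Queue 116 per NIDS 118; plain lists stand in for the queues here.
send_queues = {"118A": [], "118B": [], "118C": [], "118I": []}

def primary_route(record: dict) -> None:
    """Primary Router 112: replicate a record and write one copy per Send Queue."""
    for nids in ROUTING_TABLE[record["service_type"]]:
        replica = dict(record)   # replicate (per-endpoint conversion omitted)
        send_queues[nids].append(replica)

primary_route({"service_type": "Service Y", "data": "credit card call data"})
# The record now sits in the Send Queues for NIDSs 118A, 118C, and 118I only.
```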
In the environment 102 of FIG. 1, the Send Queues 116 and Send
Processes 117 are implemented in the same computing region as the Primary
Router 112, the Primary Router Queue 110, and the Application Programs 106.
The environment 102 of FIG. 1 is not very efficient if large volumes of data are to be distributed to a large number of NIDSs 118. A single computing region, having access to limited resources, is limited in the number of concurrent tasks that it can effectively perform. The environment 102 of FIG. 1 results in congestion within the Send Queues 116 and poor data distribution performance.
FIG. 2 illustrates another environment 202 in which data distribution takes place. The Application Programs 206, the Primary Router Queue 210, and the Primary Router 212 are in a first computing region, called a Primary Region 204. The environment 202 of FIG. 2 includes a number of additional computing regions, called Secondary Regions 216. The Send Queues 218 are in the Secondary Regions 216. The Secondary Regions 216 may be implemented on the same computer as the Primary Region 204, or on different computers.
As before, the Application Programs 206 generate call processing data which is placed into data records. The Application Programs 206 write the data records into a Primary Router Queue 210.
The Primary Router 212 reads the records from the Primary Router Queue 210. The Primary Router 212 identifies the NIDSs 220 to which each record is to be sent (based on the service type of each record, as discussed above). The Primary Router 212 replicates and distributes the data records to the appropriate Send Queues 218 in the Secondary Regions 216. The Send Processes 219 read the data from the Send Queues 218 and distribute the data to the corresponding NIDSs 220, as discussed above.
The use of Secondary Regions 216 for Send Queues 218 and Send Processes 219 distributes the tasks involved with data distribution among many computing regions. Therefore, more concurrent tasks can be effectively conducted, and efficiency is increased (because multiple computing regions collectively have access to more computing resources than a single computing region). However, limitations in throughput are still encountered with large volumes of data records and large numbers of NIDSs 220. The Primary Router 212 must perform a remote write of data to each Send Queue 218 in the Secondary Regions 216. Remote writes are needed since data is traversing across different computing regions. These remote writes are performed in a serial fashion. For example, the Primary Router 212 must first remotely write to Send Queue 218A, and then to Send Queue 218B, and then to Send Queue 218C, etc. (This assumes that the service type of the record is supported by NIDSs 220A-220C.) Each remote write must complete before the next remote write can begin. These serial remote writes are costly in terms of time, and if a large number of NIDSs 220 and Send Queues 218 are involved, efficiency is limited.
The approach of FIG. 2 also suffers from a reliability problem. Suppose that a record is to be transmitted to NIDSs 220A and 220D. The Primary Router 212 must complete a remote write of the record to the Send Queue 218A before it can begin the remote write to the next Send Queue 218D. If the remote write to the Send Queue 218A is not successfully completed (because, for example, the Secondary Region 216 is experiencing difficulties and is, therefore, unavailable), then the Primary Router 212 attempts to resend the record, thus holding up the remaining distribution process. This places upon the distribution system a dependence on the availability of every Secondary Region 216. If a Secondary Region 216 is unavailable, the entire distribution process is delayed or even terminated.
Summary of the Invention
Briefly stated, the present invention is a system and method for data distribution. The invention uses multiple routing processes in a distribution hierarchy to increase the efficiency and performance of the distribution of high volumes of data. A Primary Router Queue, a Primary Router, and one or more Secondary Router Queues are in a first computing region, called a Primary Region. Secondary Routers are located in second computing regions, called
Secondary Regions.
The Primary Router reads data records from the Primary Router Queue. These are local read operations. The Primary Router replicates the data records, and writes the replicated records to the Secondary Router Queues. These are local write operations. The Secondary Routers read (using remote read operations) the records from the Secondary Router Queues, convert and replicate the records as necessary, and write the records to the Send Queues. The Send Processes then send the records to the NIDSs. Use of the Secondary Router Queues on the same region (the Primary Region) as the Primary Router buffers the Primary Router from the Secondary Regions so that if a Secondary Region fails, the Primary Router may continue to distribute data records to other Secondary Regions. Thus, reliability is enhanced. It also provides a parallel distribution environment which employs multiple parallel tasks to route data to fewer endpoints, and in which the number of costly remote writes is reduced. This improves the overall data throughput of the data distribution system (DDS).
In a preferred embodiment of the invention, the DDS is used to distribute call processing data for intelligent network applications from Application
Programs to a plurality of NIDSs. The NIDSs house databases that contain call processing data. This data is used by intelligent network applications to provide a variety of enhanced call processing services, such as operator services, customer services, collect calling, credit card calling, and virtually any other telecommunications service offering.
Data records that are distributed to NIDSs may instruct the NIDSs to perform certain database operations (add, update, delete, etc.). The present invention provides an improvement in both the efficiency and reliability of this distribution process. The present invention operates as follows. A Primary Router in a Primary
Region performs a local read operation to read a data record from a Primary Router Queue, which is also in the Primary Region. The Primary Router identifies Secondary Router Queues that are associated with Secondary Regions that are coupled to servers (NIDSs) that support a service type of the data record. The Secondary Router Queues are in the Primary Region. The Primary Router replicates the data record to produce replicated data records. The Primary Router performs local write operations to write the replicated data records to the identified Secondary Router Queues.
In each Secondary Region, a Secondary Router performs a remote read operation to read a replicated data record from one of the Secondary Router Queues. The Secondary Router identifies any Send Queues (in the Secondary Region) that are associated with servers that support the service type of the data record read from the Secondary Router Queue. The Secondary Router replicates and optionally converts the data record to produce converted data records. The Secondary Router performs local write operations to write the converted data records to the identified send queues.
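The non-block operation just described (local writes by the Primary Router into the Secondary Router Queues, then one remote read per Secondary Router followed by local writes to the Send Queues) might look like this in outline. The topology, routing data, and function names are illustrative assumptions; `collections.deque` stands in for the CICS queues.

```python
from collections import deque

# Secondary Router Queues 318 (in the Primary Region), one per Secondary Region,
# and Send Queues 324 in the Secondary Regions. Names here are illustrative.
secondary_router_queues = {"306A": deque(), "306B": deque(), "306C": deque()}
send_queues = {
    "306A": {"326A": deque(), "326B": deque()},
    "306B": {"326D": deque()},
    "306C": {"326H": deque()},
}
REGIONS_FOR_SERVICE = {"Service X": ["306A", "306C"]}
SERVERS_FOR_SERVICE = {"306A": {"Service X": ["326A"]},
                       "306C": {"Service X": ["326H"]}}

def primary_route(record: dict) -> None:
    """Primary Router 314: replicate the record, then local-write per region queue."""
    for region in REGIONS_FOR_SERVICE[record["service_type"]]:
        secondary_router_queues[region].append(dict(record))   # local write

def secondary_route(region: str) -> None:
    """Secondary Router 320: remote read, then replicate/local-write to Send Queues."""
    while secondary_router_queues[region]:
        record = secondary_router_queues[region].popleft()     # remote read
        for server in SERVERS_FOR_SERVICE[region][record["service_type"]]:
            send_queues[region][server].append(dict(record))   # local write
```

Note that each Secondary Router touches only its own region's queues, which is what allows the routers to run in parallel and isolates a failed Secondary Region from the rest of the distribution.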
The above description relates to a non-block processing mode. The invention also includes a block processing mode, which operates as follows.
The Primary Router generates a block record having stored therein information from one or more data records read from the Primary Router Queue.
All of the data records in the block record have the same service type. The Primary Router identifies any Secondary Router Queues associated with Secondary Regions that are coupled to servers that support the service type of the block record. The Primary Router replicates the block record to produce replicated block records, and performs local write operations to write the replicated block records to the identified Secondary Router Queues.
If the Secondary Router is not operating in block processing mode, in each of the Secondary Regions, the Secondary Router performs a remote read operation to read a replicated block record from one of the Secondary Router Queues. The Secondary Router unbundles the block record to obtain one or more data records. The Secondary Router identifies Send Queues associated with servers that support the service type of the data records. The Secondary Router replicates and optionally converts the data records to produce converted data records, and performs local write operations to write the converted data records to the identified Send Queues.
If the Secondary Router is operating in block processing mode, in each of the Secondary Regions, the Secondary Router performs a remote read operation to read a replicated block record from one of the Secondary Router Queues. The Secondary Router unbundles the block record to obtain one or more data records. The Secondary Router identifies Send Queues associated with servers that support the service type of the data records. The Secondary Router optionally converts the data records to produce converted data records, and generates a block record having stored therein information from one or more data records read from the Secondary Router Queue. All of the data records in the block record have the same service type. The Secondary Router replicates the block record to produce replicated block records, and performs local write operations to write the replicated block records to the identified Send Queues.
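The bundle/unbundle cycle of block processing mode can be sketched as a pair of inverse functions. The dictionary layout below is an illustrative stand-in; the actual block record formats are the ones shown in FIGS. 7 and 11.

```python
def make_block_record(records):
    """Bundle data records of a single service type into one block record."""
    service_type = records[0]["service_type"]
    # Every record in a block record must share the same service type.
    assert all(r["service_type"] == service_type for r in records)
    return {"service_type": service_type,
            "record_count": len(records),
            "payload": [r["data"] for r in records]}

def unbundle(block):
    """Recover the individual data records from a block record."""
    return [{"service_type": block["service_type"], "data": d}
            for d in block["payload"]]

records = [{"service_type": "Service X", "data": f"update {i}"} for i in range(3)]
block = make_block_record(records)
assert unbundle(block) == records   # bundling round-trips losslessly
```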
Block processing may be implemented at the Primary Router level or the Secondary Router level, or both simultaneously. The present invention is distinguishable from conventional approaches in a number of ways. For example, the present invention uses Secondary Router Queues implemented on the same computing region as the Primary Router. This provides a number of advantages, such as:
(1) It shields the Primary Router from the Secondary Regions, so that if a Secondary Region fails, the Primary Router may continue distribution to the other Secondary Regions via Secondary Router Queues.
(2) It introduces parallelism into the distribution environment, resulting in an increase in efficiency and performance. Multiple Secondary Routers can operate in parallel, decreasing the number of Send Queues and NIDSs that must be routed to in serial fashion.
(3) It reduces the number of costly remote writes that are needed.
(4) It provides a more scalable architecture. Adding more NIDSs can be accomplished by adding additional Secondary Regions. Adding a new computing region only requires configuration changes to employ another Secondary Router.
Further features and advantages of the invention, as well as the structure and operation of various embodiments of the invention, are described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
Brief Description of the Figures
The present invention will be described with reference to the accompanying drawings, wherein:
FIG. 1 is a block diagram of a first conventional data distribution system;
FIG. 2 is a block diagram of a second conventional data distribution system;
FIG. 3 is a block diagram of a data distribution system according to a preferred embodiment of the present invention;
FIG. 4 illustrates an example Routing Distribution Table according to a preferred embodiment of the present invention;
FIG. 5 illustrates an example Master Control Table according to a preferred embodiment of the present invention;
FIG. 6 illustrates an exemplary format of a data record generated by Application Programs;
FIG. 7 illustrates an exemplary format of a block record;
FIGS. 8A, 8B, 9A, and 9B are flowcharts depicting the preferred operation of the present invention;
FIGS. 10A-10D are flowcharts depicting an alternate operation of the present invention, using secondary level block processing; and
FIG. 11 is an example secondary router block record format.
Detailed Description of the Preferred Embodiments
The present invention is directed to a data distribution system (DDS), and a method practiced in the DDS. Generally speaking, the invention is useful for distributing data from a source computer to a plurality of target computers for virtually any application. In a preferred embodiment, the present invention is useful for distributing data from a source computer to a plurality of target computers (preferably server computers) which are used to house databases for enhanced call processing within an intelligent telecommunications network. For illustrative purposes, the invention is described herein with reference to this preferred embodiment. It should be understood, however, that the invention is not limited to this embodiment. Instead, the invention is applicable to any application involving the transfer of data between multiple computers.
Structure of the Invention
FIG. 3 illustrates a computing environment 302 according to a preferred embodiment of the present invention. The environment 302 includes a first computing region, called a Primary Region 304, and a plurality of second computing regions, called Secondary Regions 306.
In a first embodiment, the Primary Region 304 and the Secondary Regions 306 are implemented using a single computer that can be logically divided into multiple computing regions. Each of the computing regions represents a unique address space in the computer. The computing regions contend for the same computer resources in the computer. For example, the computing regions contend for use of the central processing units (CPUs) within the computer. In one implementation, the computing regions are each allocated a finite number of
CPU time slices during any given time period.
Each computing region, given its limited computing resources, can execute only a finite number of concurrent processes effectively. Performance drops as additional concurrent tasks are invoked in any given computing region. The computer may be any suitable computer having the characteristics described above, such as an IBM mainframe computer running the Customer Information Control System (CICS), which is an on-line transaction processing system in common use in many applications, and which supports multiple CICS regions (computing regions). CICS is well known, and is described in many publicly available documents, such as CICS Intercommunication Guide, Version 3, Release 2.1, second edition, June 1991, which is incorporated herein by reference in its entirety. In a second embodiment, the Primary Region 304 and the Secondary
Regions 306 are implemented in multiple computers. The Primary Region 304 and the Secondary Regions 306 may be each implemented using a different computer. Alternatively, the Primary Region 304 may be implemented in one computer, and groups of the Secondary Regions 306 may be implemented together on other computers. Other combinations for implementing the Primary
Region 304 and the Secondary Regions 306 on computers are also within the scope of the present invention.
Components of the environment 302 of the present invention are described below.
Application Programs
A plurality of Application Programs (processes) 308 execute in the Primary Region 304. Generally speaking, the Application Programs 308 generate data that must be distributed to end points, called Network Information Distribution Servers (NIDSs) in FIG. 3. NIDSs are also called servers. In the preferred embodiment, the Application Programs 308 generate call processing data used for enhanced call processing services, such as operator services, customer services, collect calling, credit card calling, and virtually any other telecommunications service offering.
Each enhanced call processing service has one or more service types. For example, the collect calling service may have a service type equal to Service Type
X, or simply Service X. The credit card calling service may have a service type equal to Service Type Y, or Service Y. The Application Programs 308 insert the data that they generate in data records. An exemplary format of a data record 602 generated by an Application Program 308 is shown in FIG. 6. The data record 602 includes a Record Data field 608, whose length varies from record to record, that stores the data that was generated by the Application Program 308. A Service Type field 604, which has a fixed length, indicates the service type of the service to which the data in the Record Data field 608 pertains. For example, Service Type X might be associated with a collect calling service. An Application Program 308 that generates call processing data for the collect calling service places the call processing data in a data record 602, and sets the Service Type field 604 equal to
Service Type X. As another example, Service Type Y might be associated with a credit card calling service. An Application Program 308 that generates call processing data for the credit card calling service places the call processing data in a data record 602, and sets the Service Type field 604 equal to Service Type Y. In a general sense, a service type is analogous to a data type or file type.
Thus, the terms service type, data type, and file type are used interchangeably herein.
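The record 602 layout (a fixed-length Service Type field 604 followed by a variable-length Record Data field 608) can be illustrated with a small encoder/decoder. The 8-byte field width is an assumption for the sketch; the patent says only that the field has a fixed length.

```python
SERVICE_TYPE_LEN = 8   # assumed width; the text states only that field 604 is fixed-length

def encode_record(service_type: str, data: bytes) -> bytes:
    """Lay out a record 602: fixed-length Service Type 604, then Record Data 608."""
    field = service_type.encode("ascii")
    if len(field) > SERVICE_TYPE_LEN:
        raise ValueError("service type too long for the fixed-length field")
    return field.ljust(SERVICE_TYPE_LEN) + data   # pad the field with spaces

def decode_record(raw: bytes):
    """Split a record 602 back into its service type and record data."""
    service_type = raw[:SERVICE_TYPE_LEN].decode("ascii").rstrip()
    return service_type, raw[SERVICE_TYPE_LEN:]

rec = encode_record("SVC_X", b"collect calling data")
assert decode_record(rec) == ("SVC_X", b"collect calling data")
```

Because the Service Type field has a fixed length, a router can read the service type without parsing the variable-length data that follows it.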
Network Information Distribution Servers (NIDSs) and Send Queues
The NIDSs 326 are implemented in computers that are coupled to the
Secondary Regions 306. A NIDS 326 may be a database, or may be a database in combination with a process executing in a computer. In the preferred embodiment, the NIDSs 326 are used to provide call processing data to an intelligent network (not shown) that provides enhanced call processing services. Each NIDS 326 supports one or more enhanced call processing services
(i.e., each NIDS 326 may store call data related to one or more enhanced call processing services). In other words, each NIDS 326 is associated with one or more service types (i.e., a NIDS 326 is associated with the service types that identify the enhanced call processing services that it supports). Send Queues 324 are implemented in the Secondary Regions 306. One or more Send Queues 324 are associated with each NIDS 326. Each Send Queue 324 accepts data records 602 of a given service type. Thus, if a NIDS 326 supports multiple service types, then the NIDS 326 will have multiple Send Queues 324, one for each service type. This is the case with NIDS 326C, that has
Send Queues 324C and 324D.
The present invention operates to transfer call data related to a given enhanced call processing service to the NIDSs 326 that support that service. For example, suppose that NIDS 326B supports an enhanced call process service having a service type equal to Service 2. In this example, the invention ensures that all data records 602 of Service 2 are written to Send Queue 324B. These data records 602 are then read by Send Process 325B, which acts as a client and sends the data to the NIDS 326B. This operation is described in greater detail below.
The term "queue" as used herein refers to a storage device which stores and retrieves data, preferably in a first-in, first-out (FIFO) manner. Other types of queues can alternatively be used, including but not limited to last-in, first-out (LIFO) queues. LIFO queues are commonly referred to as stacks.
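The FIFO and LIFO retrieval orders can be contrasted in a few lines; here Python's `collections.deque` and a plain list stand in for the queue storage device.

```python
from collections import deque

fifo = deque()
fifo.append("record 1")
fifo.append("record 2")
assert fifo.popleft() == "record 1"   # FIFO: the oldest record is retrieved first

stack = []                            # a LIFO queue, i.e. a stack
stack.append("record 1")
stack.append("record 2")
assert stack.pop() == "record 2"      # LIFO: the newest record is retrieved first
```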
Primary Router Queues and Master Control Table
One or more Primary Router Queues 312 are implemented in the Primary Region 304. Each Primary Router Queue 312 is associated with one or more service types. A service type can only be associated with a single Primary Router
Queue 312.
The Application Programs 106 write the data records 602 that they generate into the respective Primary Router Queues 312 associated with the service types of the data records 602. For example, suppose that Primary Router
Queue 312B is associated with Service Type X and Service Type Y. In this case, all data records 602 of Service Type X and Service Type Y are written to Primary
Router Queue 312B. The Application Programs 308 have access to a Master Control Table 310. An example Master Control Table 310 is shown in FIG. 5. There is a row in the Master Control Table 310 for each service type. Each row identifies the service type (field 504 of the Master Control Table 310), and the Primary Router Queue 312 that is associated with that service type (field 506 of the Master Control
Table 310). Thus, according to the example of FIG. 5, Primary Router Queue 312A is associated with Service 5. Primary Router Queue 312B is associated with Services 1-4. Primary Router Queue 312C is associated with Service 6.
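The Master Control Table lookup performed by the Application Programs 308 can be sketched as follows. The table contents mirror the FIG. 5 example; the dictionary representation and function name are assumptions, since the specification describes a database table:

```python
# Hypothetical representation of the Master Control Table of FIG. 5:
# one row per service type, mapping it to its Primary Router Queue.
MASTER_CONTROL_TABLE = {
    "Service 1": "Primary Router Queue 312B",
    "Service 2": "Primary Router Queue 312B",
    "Service 3": "Primary Router Queue 312B",
    "Service 4": "Primary Router Queue 312B",
    "Service 5": "Primary Router Queue 312A",
    "Service 6": "Primary Router Queue 312C",
}

def queue_for_service(service_type):
    """Return the Primary Router Queue that an Application Program
    writes to for a data record of the given service type."""
    return MASTER_CONTROL_TABLE[service_type]
```

An Application Program would consult this mapping once per record, then perform a local write into the returned queue.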
All writes into the Primary Router Queues 312 by the Application Programs 308 are local writes, because both the Application Programs 308 and the Primary Router Queues 312 are in the same computing region, i.e., the Primary Region 304.
Primary Router, Secondary Router Queue, and Routing Distribution Table
A Primary Router 314 is implemented in the Primary Region 304.
Preferably, the Primary Router 314 is implemented as a process that executes in the Primary Region 304. Other Primary Routers (not shown) may also be implemented in the Primary Region 304, or in other Primary Regions (not shown). A plurality of Secondary Router Queues 318 are implemented in the
Primary Region 304. Each Secondary Router Queue 318 is associated with and services one of the Secondary Regions 306, and acts as the interface or gateway between the Primary Region 304 and the Secondary Region 306.
Each Primary Router 314 has its own Primary Router Queue 312. For example, the Primary Router 314 reads records from only its own Primary Router Queue 312B. Each Primary Router Queue, such as Primary Router Queue 312B, has its own set of Secondary Router Queues, such as Secondary Router Queues 318A-318C, into which records are written, with each Secondary Router Queue 318 servicing one of the Secondary Regions 306. For each record 602 read from the Primary Router Queue 312, the Primary Router 314 identifies the Secondary Router Queues 318 that service the Secondary Regions 306 that are coupled to NIDSs 326 that support the service type of the record 602. The Primary Router 314 performs this function by reference to a Routing Distribution Table 316. An example Routing Distribution
Table 316 is shown in FIG. 4.
The Routing Distribution Table 316 has two types of rows. The Routing Distribution Table 316 has a row for each unique server/service type/send queue combination. These are the first type of rows (rows 414A-414H). Each row of this type identifies a service type (field 404), a NIDS 326 that supports that service type (field 408), the Send Queue 324 associated with that service type and that NIDS 326 (field 410), the Secondary Region 306 in which that Send Queue 324 is located (field 406), and a confirm level (field 412) for the Send Processes 325. The confirm level indicates the number of records a Send Process 325 can send to a NIDS 326 before it must confirm receipt by the NIDS 326.
The Routing Distribution Table 316 also has a row for each unique and applicable secondary region/secondary router queue combination. These are the second type of rows (rows 414I-414K). Each row of this type identifies a Primary Router Queue 312 (field 404), a Primary Region 304 in which the Primary Router Queue 312 is contained (field 406), a Secondary Region 306 (field 408), a Secondary Router Queue 318 (field 410) that services the Secondary Region 306, and a confirm level (field 412) for the Secondary Routers 320. The confirm level indicates the number of records a Secondary Router 320 can process before a commit must be performed. In an alternate embodiment, the first and second types of rows of the Routing Distribution Table 316 are maintained in two tables.
When the Primary Router 314 reads a record 602 from the Primary Router Queue 312B, the Primary Router 314 determines the service type of the record 602 by reference to the Service Type field 604. The Primary Router 314 then retrieves from the Routing Distribution Table 316 all rows of the first type having a service type (field 404) equal to that of the record 602. The Primary Router 314 refers to field 406 of these rows to identify the Secondary Regions 306 having NIDSs 326 that support the service type. Then the Primary Router 314 retrieves from the Routing Distribution Table 316 all rows of the second type where field
408 is equal to the identified Secondary Regions 306. From these rows, the Primary Router 314 identifies the Secondary Router Queues 318 that service the identified Secondary Regions 306 (by reference to field 410).
The Primary Router 314 replicates the record 602, and writes the replicated records to the identified Secondary Router Queues 318.
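The two-phase lookup and replication described above can be sketched as follows. The table contents, field names, and function names are hypothetical stand-ins for the rows of FIG. 4:

```python
# First type of rows: service type -> (secondary region, NIDS, send queue).
FIRST_TYPE_ROWS = [
    {"service": "Service 2", "region": "306A", "nids": "326A", "send_queue": "324A"},
    {"service": "Service 2", "region": "306B", "nids": "326B", "send_queue": "324B"},
    {"service": "Service 6", "region": "306C", "nids": "326C", "send_queue": "324D"},
]

# Second type of rows: secondary region -> secondary router queue.
SECOND_TYPE_ROWS = [
    {"region": "306A", "router_queue": "318A"},
    {"region": "306B", "router_queue": "318B"},
    {"region": "306C", "router_queue": "318C"},
]

def destination_router_queues(service_type):
    # Phase 1: find the Secondary Regions whose NIDSs support the service type.
    regions = {row["region"] for row in FIRST_TYPE_ROWS
               if row["service"] == service_type}
    # Phase 2: find the Secondary Router Queues that service those regions.
    return sorted(row["router_queue"] for row in SECOND_TYPE_ROWS
                  if row["region"] in regions)

def route(record, service_type):
    """Replicate the record once per destination Secondary Router Queue
    (the actual writes are performed serially as local writes)."""
    return [(queue, record) for queue in destination_router_queues(service_type)]
```

A record of Service 2 would thus be replicated to the queues servicing Secondary Regions 306A and 306B only.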
The write operations from the Primary Router 314 to the Secondary Router Queues 318 are performed on a serial basis. That is, if the Primary Router 314 must write a record to Secondary Router Queues 318A and 318B, then the Primary Router 314 must first write the record to Secondary Router Queue 318A, and then to Secondary Router Queue 318B. Note that these write operations are local writes, since the Primary Router 314 and the Secondary Router Queues 318 are all in the same computing region, i.e., the Primary Region 304.
Block Processing and Non-Block Processing by the Primary Router
The Primary Router 314 of the present invention supports two operational modes: non-block processing (also called single processing) and block processing.
While in the non-block processing mode, the Primary Router 314 reads a record 602 from the Primary Router Queue 312B, determines the destinations of the record 602 (as described above), replicates the record 602, and then writes the replicated records to the appropriate Secondary Router Queues 318. In other words, the Primary Router 314 processes a single record 602 at a time.
While in the block processing mode, the Primary Router 314 attempts to process multiple records 602 at a time. Specifically, the Primary Router 314 reads multiple records 602 from the Primary Router Queue 312B, bundles the records 602 into a primary block record 702, determines the destinations of the primary block record 702 (as described above), replicates the primary block record 702, and then writes the replicated records to the appropriate Secondary Router Queues 318.
A primary block record 702 can contain only records 602 of the same service type. Also, there is a limit to the number of records 602 (of the same service type) that can be bundled into a primary block record 702. The Routing Distribution Table 316 actually has a third type of row. Each Primary Router 314 has one of these rows (row 414L). These rows can be accessed by Primary
Router Queue name (field 410). Each of these rows has a Confirm Level field 412, which indicates the maximum bundle size for block records generated by the associated Primary Router.
An exemplary format of a primary block record 702 is shown in FIG. 7. Field 706 identifies the service type of the primary block record 702. A service type of 'q-block-recs' indicates that the record is a block record. Field 708 identifies the total size of the primary block record 702, and field 710 indicates the number of records 602 that are bundled in the primary block record 702. The records 602 that are bundled in the primary block record 702 follow field 710. Each record 602 in the primary block record 702 has a Record Data field 714 or 718, which stores the record 602, and a Length of Record field 712 or 716, which stores the length of the record 602.
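The FIG. 7 layout can be sketched as a byte-packing routine. The field widths and encodings below (a 16-byte service-type field, a 4-byte total size, a 2-byte record count, and 2-byte record lengths) are assumptions, since the specification does not fix them:

```python
import struct

BLOCK_SERVICE = "q-block-recs"   # service type marking a block record (field 706)

def pack_block(records):
    """Sketch of the FIG. 7 primary block record layout: a service-type
    field, total size, record count, then length-prefixed record data.
    The concrete field widths are assumed, not taken from the text."""
    body = b"".join(struct.pack(">H", len(r)) + r     # fields 712/716 + 714/718
                    for r in records)
    header = BLOCK_SERVICE.encode().ljust(16, b"\x00")  # field 706
    header += struct.pack(">I", len(body))              # field 708: total size
    header += struct.pack(">H", len(records))           # field 710: record count
    return header + body

blk = pack_block([b"abc", b"defgh"])
```

Each bundled record is preceded by its Length of Record field, so a reader can walk the body without delimiters.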
While in the block processing mode, the Primary Router 314 continues to read records 602 from the Primary Router Queue 312B, and add the records 602 to the primary block record 702, until one of the following occurs: (1) the maximum bundle size is reached (i.e., the maximum number of records 602 in the primary block record 702); (2) the record 602 read from the Primary Router Queue 312B has a service type that is different from the service type of the primary block record 702 being constructed; (3) the Primary Router Queue 312B becomes empty; or (4) an internal buffer limit, preferably equal to 8153 bytes, is reached. It is noted that any other implementation-dependent internal buffer limit value could alternatively be used. If any of these conditions occurs, the Primary Router 314 completes the primary block record 702. This processing is discussed in greater detail below.
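The four stop conditions can be sketched as a bundling loop. The queue representation and function name are hypothetical; the maximum bundle size of 10 and the 8153-byte buffer limit follow the examples given in the text:

```python
MAX_BUNDLE = 10     # from the router's Confirm Level row (cf. row 414L)
MAX_BUFFER = 8153   # preferred internal buffer limit, in bytes

def bundle(queue):
    """Read records from `queue` (a list of (service_type, data) pairs
    standing in for the Primary Router Queue) and bundle them until one
    of the four stop conditions holds.  Returns the bundle's service
    type and its records; unread records remain on the queue."""
    records, service, size = [], None, 0
    while queue:                                  # condition 3: queue empty
        rec_service, data = queue[0]
        if service is None:
            service = rec_service
        elif rec_service != service:              # condition 2: service type changed
            break
        if len(records) >= MAX_BUNDLE:            # condition 1: max bundle size
            break
        if records and size + len(data) > MAX_BUFFER:  # condition 4: buffer limit
            break
        queue.pop(0)
        records.append((rec_service, data))
        size += len(data)
    return service, records
```

The first record is always accepted so that a single large record still forms a (one-record) bundle.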
Secondary Routers
A Secondary Router 320 is implemented in each Secondary Region 306. Each Secondary Router 320 reads records 602, 702 from its Secondary Router Queue 318 in the Primary Region 304. If the record is a primary block record 702, then the Secondary Router 320 unbundles the primary block record 702 and processes the resulting records 602 individually.
When processing the records 602, the Secondary Routers 320 operate in a manner that is similar to the operation of the Primary Router 314. Specifically, each Secondary Router 320 refers to a Routing Distribution Table 322 (that is identical to the Routing Distribution Table 316 connected to the Primary Router 314) to determine the destinations of the record 602. The Secondary Router 320 performs this task by retrieving from the Routing Distribution Table 322 all entries having a service type (field 404) that is equal to the service type of the record 602 being processed, for all NIDS servers that its region serves. The Secondary Router 320 then extracts from these entries the identities of the Send Queues 324 (field 410).
The Secondary Router 320 replicates the record 602, and then converts as necessary the replicated records so as to be compatible with the entities (i.e., computers) that will be accessing the destination NIDS 326. (Since conversion operations are not always necessary, this functionality is considered to be optional to the present invention.) Details of such conversion operations are well known.
(Note that the Primary Router 314 does not perform any such conversion operations.) The Secondary Router 320 then writes the converted records to the Send Queues 324 previously identified by reference to the Routing Distribution Table 322.
As noted above, the Routing Distribution Tables 322 coupled to the Secondary Routers 320 are identical to the Routing Distribution Table 316 coupled to the Primary Router 314. In some embodiments, where the Primary
Region 304 and the Secondary Regions 306 are implemented in the same computer, the Routing Distribution Tables 316, 322 represent the same table. In other embodiments, where the Primary Region 304 and the Secondary Regions 306 are implemented in different computers, the Routing Distribution Table 316 is distributed to the Secondary Regions 306. Alternatively, the Secondary
Regions 306 are provided with a mechanism for accessing the Routing Distribution Table 316 in the Primary Region 304, such as a module that enables the Secondary Routers 320 to make remote database queries to the Routing Distribution Table 316 in the Primary Region 304.
Secondary Block Processing and Non-Block Processing by the
Secondary Routers
The present invention supports two operational modes for the Secondary Router 320: non-block processing (also called single processing) and block processing. While in the non-block processing mode, the Secondary Router 320B reads a record 602 or a primary block record 702 from the Secondary Router Queue 318B. If the Secondary Router 320B reads a record 602, it determines the destination of the record 602 (as described above), replicates and converts the record 602, and then writes the replicated records to the appropriate Send Queues 324.
If the Secondary Router 320B reads a primary block record 702, it unbundles the primary block record 702 into individual records 602 and processes the records individually. The Secondary Router 320B determines the destinations of the record 602 (as described above), replicates and converts the record 602, and then writes the replicated records to the appropriate Send Queues 324. The Secondary Router 320B repeats this procedure for each record 602 unbundled from the primary block record 702. In other words, the Secondary Router 320B processes a single record 602 at a time. While in the block processing mode, the Secondary Router 320B attempts to process multiple records 602 at a time. Specifically, the Secondary Router 320B reads multiple records 602 or block records 702 from the Secondary Router Queue 318B, unbundles block records 702 into individual records 602, optionally converts the records, bundles the records 602 into a secondary block record 1102, determines the destinations of the secondary block record 1102 (as described above), replicates the secondary block record 1102, and then writes the replicated records to the appropriate Send Queues 324.
A secondary block record 1102 can contain only records 602 of the same service type. Also, there is a limit to the number of records 602 (of the same service type) that can be bundled into a secondary block record 1102. The
Routing Distribution Table 322 actually has a fourth type of row. Each Secondary Router 320 has one of these rows (row 414M). These rows can be accessed by Secondary Router Queue name (field 410). Each of these rows has a Confirm Level field 412, which indicates the maximum bundle size for block records generated by the associated Secondary Router 320.
An exemplary format of a secondary block record 1102 is shown in FIG. 11. Preferably, FIG. 11 corresponds to FIG. 7. Field 1106 identifies the service type of the secondary block record 1102. A service type of 'q-block-recs' indicates that the record is a block record. Field 1108 identifies the total size of the secondary block record 1102, and field 1110 indicates the number of records 602 that are bundled in the secondary block record 1102. The records 602 that are bundled in the secondary block record 1102 follow field 1110. Each record 602 in the secondary block record 1102 has a Record Data field 1114 or 1118, which stores the record 602, and a Length of Record field 1112 or 1116, which stores the length of the record 602. While in the block processing mode, the Secondary Router 320B continues to read records 602 and block records 702 from the Secondary Router Queue 318B, unbundle the block records 702, optionally convert the records 602, and add the records 602 to the secondary block record 1102, until one of the following occurs: (1) the maximum bundle size is reached (i.e., the maximum number of records 602 in the secondary block record 1102); (2) the record 602 read from the Secondary Router Queue 318B has a service type that is different from the service type of the secondary block record 1102 being constructed; (3) the Secondary Router Queue 318B becomes empty; or (4) an internal buffer limit, preferably equal to 8153 bytes, is reached. In the preferred embodiment, the internal buffer for the secondary block record 1102 is the same size as or larger than the internal buffer for the primary block record 702. It is noted that any other implementation-dependent internal buffer limit value could alternatively be used. If any of these conditions occurs, the Secondary Router 320B completes the secondary block record 1102. This processing is discussed in greater detail below.
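The secondary block mode (unbundle, optionally convert, rebundle) can be sketched as follows. The record and block representations are hypothetical simplifications of the FIG. 7 and FIG. 11 formats:

```python
def rebundle(items, max_bundle=10):
    """Sketch of secondary block processing: flatten any primary block
    records into individual records, apply the (optional) conversion to
    each one, and regroup them into secondary bundles of at most
    `max_bundle` records.  A plain record is a string; a primary block
    record is a list of strings (a stand-in for the FIG. 7 format)."""
    def convert(record):
        # Placeholder for the optional, destination-specific conversion.
        return record

    flat = []
    for item in items:
        if isinstance(item, list):     # primary block record: unbundle it
            flat.extend(item)
        else:                          # single data record
            flat.append(item)
    converted = [convert(r) for r in flat]
    # Regroup into secondary block records of bounded size.
    return [converted[i:i + max_bundle]
            for i in range(0, len(converted), max_bundle)]
```

Note that the secondary bundles need not align with the primary bundles: records from several primary block records may end up in one secondary block record, and vice versa.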
Operation of the Invention
FIGS. 8A, 8B, 9A, and 9B collectively illustrate flowcharts 802, 902 that represent the operation of the present invention. Referring first to FIG. 8A, in step 806, an Application Program 308 generates a data record 602 of a particular service type. For illustrative purposes, assume that the service type is Service 2.
In step 808, the Application Program 308 refers to the Master Control
Table 310 to identify the Primary Router Queue 312 associated with the service type of the data record 602. With reference to the current example, the
Application Program 308 determines that the Primary Router Queue 312B is associated with Service 2. In step 810, the Application Program 308 writes the data record 602 into the Primary Router Queue 312B. This is a local write operation.
In step 812, the Primary Router 314 reads the data record 602 from the Primary Router Queue 312B. This is a local read operation. In step 814, the Primary Router 314 determines whether block processing is enabled. At any given time, either block processing or non-block processing by the Primary Router 314 is enabled. An administrator may set the current mode, for example. Alternatively, some systems may only support one or the other mode. In some embodiments, step 814 is not actually performed. Instead, non-block processing is established by starting a first process, and block processing is established by starting a second process. If block processing of the Primary Router 314 is not enabled, then step 832 (FIG. 8B) is processed. Step 832 is described below. If block processing of the Primary Router 314 is enabled, step 816 is processed. In step 816, the Primary Router 314 initializes a new primary block record 702. Field 706 is set to the default blocking service name "q-block-recs". The entire record 602 is placed into field 714. Field 712 is set equal to the length of the record 602.
In step 818, the Primary Router 314 accesses the Routing Distribution Table 316 and determines the maximum bundle size. The Primary Router 314 does this by searching in the Routing Distribution Table 316 until it finds a row where field 410 is equal to the name of its Primary Router Queue, i.e., Primary Router Queue 312B. In the example of FIG. 4, the Primary Router 314 finds row 414L. The Primary Router 314 uses the value in the Confirm Level field 412 of this row as the maximum bundle size. For this example, the Primary Router 314 has a maximum bundle size of 10.
In step 820, the Primary Router 314 determines whether the maximum bundle size of the primary block record 702 has been reached. In other words, the Primary Router 314 determines whether the number of records 602 bundled into the primary block record 702 is equal to the maximum bundle size. If the maximum bundle size has been reached, then step 830 (FIG. 8B) is performed. In step 830, the Primary Router 314 completes the primary block record 702 by filling in the appropriate data for fields 708 and 710 of the primary block record 702. Control then passes to step 832, described below. If, in step 820, the Primary Router 314 determines that the maximum bundle size has not been reached, then step 822 is performed. In step 822, the Primary Router 314 attempts to read another data record 602 from the Primary Router Queue 312B (preferably, this step is not performed during the first iteration through the flowchart). In step 824, the Primary Router 314 determines whether the Primary Router Queue 312B that it attempted to read from is empty (i.e., whether it received an end-of-file condition). If the Primary Router Queue 312B is empty, then step 830 is performed, as described above. Otherwise, step 826 is performed. In step 826, the Primary Router 314 determines whether the new data record 602 read from the Primary Router Queue 312B is of the same service type as the service type of the primary block record 702 being constructed. If the service type is different, then step 830 is performed, as described above. Otherwise, step 829 is performed. In step 829, the Primary Router 314 determines whether the size of the primary block record 702 would exceed the maximum buffer size (preferably 8153 bytes) if the new data record 602 were added to the primary block record 702. Step 830 (described above) is performed if the maximum buffer size would be exceeded. Otherwise, step 828 is performed. In step 828, the Primary Router 314 adds the new data record 602 to the primary block record 702. This is done by appropriately filling in fields 716 and 718 of the primary block record 702. Control then returns to step 820, described above.
Step 832 is performed if block processing of the Primary Router 314 is not enabled (step 814), or if the primary block record 702 has been completed (step 830). In step 832, the Primary Router 314 accesses the Routing Distribution Table 316 to identify the Secondary Router Queues 318 that service Secondary Regions 306 that are coupled to NIDSs 326 that support the service type of the data record 602 (for block processing, any of the data records 602 that were placed into the primary block record 702 could be used, since they all have the same service type). This operation is described further above.
In step 834, the Primary Router 314 replicates the data record 602 (for non-block processing) or the primary block record 702 (for block processing). One copy is made for each Secondary Router Queue 318 identified in step 832. In step 836, the Primary Router 314 serially writes (via local writes) the replicated data records or block records to the Secondary Router Queues 318 identified in step 832.
Steps 835 and 837 are then performed. These steps are related to steps 826 and 829. Steps 826 and 829 previously determined whether the new record 602 that was read in step 822 should be added to the primary block record 702 being created. If either step 826 or step 829 was found to be True, then the new record 602 was not added to the primary block record 702. If this is the case, then the new record 602 should be used to create a new primary block record 702. This logic is represented by steps 835 and 837. In step 835, the Primary Router 314 determines whether block processing is enabled. If it is not enabled, then control returns to step 812.
If block processing is enabled, then in step 837 the Primary Router 314 determines whether the new record 602 has a different service type than the primary block record 702, or whether the addition of the new record 602 would have made the primary block record 702 exceed the maximum buffer size. In other words, the Primary Router 314 in step 837 determines whether either step 826 or 829 was determined to be true. If either was true, control passes to step 838, where a new primary block record 702 is initialized, and control then returns to step 829. Otherwise, control returns to step 812. Flowchart 902 in FIGS. 9A and 9B shall now be described. Note that the steps in flowchart 902 are performed in parallel by the Secondary Routers 320 in all of the Secondary Regions 306. For illustrative purposes, flowchart 902 is described below with reference to the Secondary Router 320A in Secondary Region 306A.
In step 906, the Secondary Router 320A reads an item from its Secondary Router Queue 318A. This is a remote read operation.
In step 908, the Secondary Router 320A determines whether this item is a data record 602, or a primary block record 702. If the item is a primary block record 702, then it will have "q-block-recs" in field 706. If the item is a primary block record 702, then step 916 is processed (described below). Otherwise, step
910 is processed.
In step 910, the Secondary Router 320A accesses the Routing Distribution Table 322A to identify any of its Send Queues 324 that service any of its NIDSs 326 that support the service type of the data record 602.
In step 912, the Secondary Router 320A replicates the data record 602, and converts the replicated data records as necessary (as described above).
In step 914, the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 910. The Send Processes 325 write the data records to their respective NIDSs 326.
Flowchart 902 is complete after step 914 is performed, as indicated by step 928.
If the Secondary Router 320A determines in step 908 that the item read from the Secondary Router Queue 318A is a primary block record 702, then step 916 is performed. In step 916, the Secondary Router 320A unbundles the primary block record 702 to produce one or more data records 602. This unbundling process involves extracting data from the primary block record 702, and then packaging the data as necessary into data records 602 having the format shown in FIG. 6. The manner in which this unbundling operation is performed will be apparent to persons skilled in the relevant art(s). Preferably, the Secondary Router 320A then individually processes each of the data records 602. Specifically, in step 918, the Secondary Router 320A selects one of the data records 602.
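The unbundling of step 916 can be sketched against an assumed byte layout (a 16-byte service-type field, 4-byte total size, 2-byte record count, and a 2-byte length before each record); the specification leaves these widths to the implementation:

```python
import struct

def unbundle(block):
    """Sketch of step 916: extract the individual records from a primary
    block record.  The byte layout is an assumption, not taken from the
    specification.  A non-block record is returned as a one-item list."""
    service = block[:16].rstrip(b"\x00").decode(errors="replace")
    if service != "q-block-recs":
        return [block]                    # not a block record: a single record
    count = struct.unpack(">H", block[20:22])[0]   # field 710: record count
    records, offset = [], 22
    for _ in range(count):
        (length,) = struct.unpack(">H", block[offset:offset + 2])
        offset += 2
        records.append(block[offset:offset + length])
        offset += length
    return records

# Build a sample block in the same assumed layout, for illustration.
body = b"".join(struct.pack(">H", len(r)) + r for r in [b"rec-1", b"rec-02"])
sample = (b"q-block-recs".ljust(16, b"\x00")
          + struct.pack(">I", len(body)) + struct.pack(">H", 2) + body)
```

The Length of Record fields let the reader walk the body record by record, exactly reversing the bundling step.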
In step 920, the Secondary Router 320A accesses the Routing Distribution Table 322 to identify any of its Send Queues 324 that service any of its NIDSs
326 that support the service type of the selected data record 602. Eventually, all of the data records 602 obtained after unbundling the primary block record 702 will be sent to these identified Send Queues 324 (since all of these data records 602 have the same service type). Thus, step 920 need only be performed once for all of the data records 602 obtained from the primary block record 702. In one embodiment, the Routing Distribution Table 322 is accessed for each data record 602. In other embodiments, the Routing Distribution Table 322 is accessed only once for the data records 602 in a primary block record 702.
In step 922, the Secondary Router 320A replicates the selected data record 602, and converts the replicated data records as necessary (as described above).
In step 924, the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 920. The
Send Processes 325 write the data records to the NIDSs 326.
In step 926, the Secondary Router 320A determines whether there are any additional unbundled data records 602 to process. If there are additional data records 602 to process, then control returns to step 918 (described above). Otherwise, flowchart 902 is complete, as indicated by step 928.
Alternative Embodiment - Secondary Router with Block Processing
The above description of the Secondary Routers 320 assumed that the Secondary Routers 320 only operated in a non-block mode. In an alternate embodiment, the Secondary Routers 320 operate in either a non-block mode or a block mode. The operation of the Secondary Routers 320 according to this alternate embodiment is represented by flowchart 1002 in FIGS. 10A-10D. Note that the steps in flowchart 1002 are performed in parallel by the Secondary Routers 320 in all of the Secondary Regions 306. For illustrative purposes, flowchart 1002 is described below with reference to the Secondary Router 320A in Secondary Region 306A. In step 1006, the Secondary Router 320A reads an item from its
Secondary Router Queue 318A. This is a remote read operation.
In step 1008, the Secondary Router 320A determines whether there are any data records 602 or block records 702 to process in the Secondary Router
Queue 318A. If there are data records 602 or block records 702 to process, then control passes to step 1010. Otherwise, flowchart 1002 is complete, as indicated by step 1088.
In step 1010, the Secondary Router 320A determines whether this item is a data record 602, or a primary block record 702. If the item is a primary block record 702, then it will have "q-block-recs" in field 706. If the item is a primary block record 702, then step 1020 is processed (described below). Otherwise, step
1012 is processed.
In step 1012, the Secondary Router 320A determines whether secondary block processing is enabled. At any given time in the Secondary Router 320A, either block processing or non-block processing is enabled. An administrator may set the current mode, for example. Alternatively, some systems may only support one or the other mode. In some embodiments, step 1012 is not actually performed. Instead, non-block processing in the Secondary Router 320A is established by starting a first process, and block processing in the Secondary Router 320A is established by starting a second process. If secondary block processing is not enabled, then step 1014 is processed. If secondary block processing is enabled, step 1050 (FIG. 10C) is processed.
In step 1014, the Secondary Router 320A accesses the Routing Distribution Table 322A to identify any of its Send Queues 324 that service any of its NIDSs 326 that support the service type of the data record 602. In step 1016, the Secondary Router 320A replicates the data record 602, and converts the replicated data records as necessary (as described above).
In step 1018, the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 1014. The Send Processes 325 write the data records to their respective NIDSs 326. Control then returns to step 1006.
If the Secondary Router 320A determines in step 1010 that the item read from the Secondary Router Queue 318A is a primary block record 702, then step 1020 is performed. In step 1020, the Secondary Router 320A unbundles the primary block record 702 to produce one or more data records 602. This unbundling process involves extracting data from the primary block record 702, and then packaging the data as necessary into data records 602 having the format shown in FIG. 6. The manner in which this unbundling operation is performed will be apparent to persons skilled in the relevant art(s).
Preferably, the Secondary Router 320A then individually processes each of the data records 602. Specifically, in step 1022, the Secondary Router 320A selects one of the data records 602.
In step 1024, the Secondary Router 320A determines whether secondary block processing is enabled (as described in step 1012). If secondary block processing is not enabled, then step 1026 is processed. If secondary block processing is enabled, step 1028 (FIG. 10B) is processed.
In step 1026, the Secondary Router 320A accesses the Routing Distribution Table 322A to identify any of its Send Queues 324 that service any of its NIDSs 326 that support the service type of the selected data record 602.
Eventually, all of the data records 602 obtained after unbundling the primary block record 702 will be sent to these identified Send Queues 324 (since all of these data records 602 have the same service type). Thus, step 1026 need only be performed once for all of the data records 602 obtained from the primary block record 702. In one embodiment, the Routing Distribution Table 322A is accessed for each data record 602. In other embodiments, the Routing Distribution Table 322A is accessed only once for the data records 602 in a primary block record 702. Step 1080 (FIG. 10D) is then processed.
In step 1080 (FIG. 10D), the Secondary Router 320A replicates the selected data record 602, and converts the replicated data records as necessary (as described above).
In step 1082, the Secondary Router 320A serially writes (via local writes) the converted data records to the Send Queues 324 identified in step 1026. The Send Processes 325 write the data records to the NIDSs 326. In step 1084, the Secondary Router 320A determines whether there are any additional unbundled data records 602 to process. If there are additional data records 602 to process, then control returns to step 1022 (described above). Otherwise, control returns to step 1006 (FIG. 10A).
If the Secondary Router 320A determined in step 1012 (FIG. 10A) that secondary block processing is enabled, then step 1050 (FIG. 10C) is performed.
In step 1050, the Secondary Router 320A initializes a new secondary block record 1102. Field 1106 is set to the default blocking service name "q-block-recs". The entire record 602 is placed into field 1114. Field 1112 is set equal to the length of the record 602.
In step 1052, the Secondary Router 320A accesses the Routing Distribution Table 322A and determines the maximum bundle size. The Secondary Router 320A does this by searching in the Routing Distribution Table 322A until it finds a row where field 410 is equal to the name of its Secondary Router Queue, i.e., Secondary Router Queue 318A. In the example of FIG. 4, the Secondary Router 320A finds row 414M. The Secondary Router 320A uses the value in the Confirm Level field 412 of this row as the maximum bundle size.
For this example, the Secondary Router 320A has a maximum bundle size of 10.
In step 1054, the Secondary Router 320A determines whether the maximum bundle size of the secondary block record 1102 has been reached. In other words, the Secondary Router 320A determines whether the number of records 602 bundled into the secondary block record 1102 is equal to the maximum bundle size. If the maximum bundle size has been reached, then step 1068 is performed. In step 1068, the Secondary Router 320A completes the secondary block record 1102 by filling in the appropriate data for fields 1108 and 1110 of the secondary block record 1102. Control then passes to step 1070, described below.
If, in step 1054, the Secondary Router 320A determines that the maximum bundle size has not been reached, then step 1056 is performed. In step 1056, the Secondary Router 320A attempts to read another data record 602 from the Secondary Router Queue 318A (preferably, this step is not performed during the first iteration through the flowchart).
In step 1058, the Secondary Router 320A determines whether the
Secondary Router Queue 318A that it attempted to read from is empty (i.e., whether it received an end-of-file condition). If the Secondary Router Queue 318A is empty, then step 1068 is performed, as described above. Otherwise, step
1060 is performed.
In step 1060, the Secondary Router 320A determines whether the new data record 602 read from the Secondary Router Queue 318A is of the same service type as the service type of the secondary block record 1102 being constructed. If the service type is different, then step 1068 is performed, as described above. Otherwise, step 1061 is performed.
In step 1061, the Secondary Router 320A determines if the new data record 602 read from the Secondary Router Queue 318A is a primary block record 702. Step 1068 (described above) is performed if the new data record 602 is a primary block record 702. Otherwise, step 1062 is performed.
In step 1062, the Secondary Router 320A converts the data record 602 as necessary (as described above).
In step 1064, the Secondary Router 320A determines if the size of the secondary block record 1102 would exceed the maximum buffer size (preferably 8153 bytes) if the new data record 602 were added to the secondary block record 1102. Step 1068 (described above) is performed if the maximum buffer size would be exceeded. Otherwise, step 1066 is performed.
In step 1066, the Secondary Router 320A adds the new data record 602 to the secondary block record 1102. This is done by appropriately filling in fields 1116 and 1118 of the secondary block record 1102. Control then returns to step
1054, described above.
Step 1070 is performed if the secondary block record 1102 has been completed (step 1068). In step 1070, the Secondary Router 320A accesses the Routing Distribution Table 322A to identify the Send Queues 324 that are coupled to NIDSs 326 that support the service type of the data record 602 (for secondary block processing, any of the data records 602 that were placed into the secondary block record 1102 could be used, since they all have the same service type). This operation is described further above.
In step 1072, the Secondary Router 320A replicates the secondary block record 1102. One copy is made for each Send Queue 324 identified in step 1070.
In step 1074, the Secondary Router 320A serially writes (via local writes) the replicated data records or block records to the Send Queues 324 identified in step 1070. Step 1086 (FIG. 10D) is then performed.
Step 1086 is related to steps 1054, 1060, and 1064. Steps 1054, 1060, and 1064 previously determined whether the new record 602 that was read in step 1056 should be added to the secondary block record 1102 being created. If any one of steps 1054, 1060, or 1064 was found to be true, then the new record 602 was not added to the secondary block record 1102; in that case, the new record 602 should be used to create a new secondary block record 1102. This logic is represented by step 1086.
In step 1086, the Secondary Router 320A determines whether the new record 602 has a different service type than the secondary block record 1102, or whether the addition of the new record 602 would have made the secondary block record 1102 exceed the maximum buffer or bundle size. In other words, the Secondary Router 320A in step 1086 determines whether any one of steps 1054, 1060, or 1064 was determined to be true. If any of these conditions are true, then control passes to step 1092, where a new secondary block record 1102 is initialized. Control then passes to step 1058. Otherwise, control returns to step 1008 (FIG. 10A).
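The bundling loop of steps 1050 through 1068 can be summarized in a short sketch. This is illustrative only: records and blocks are simplified to Python tuples and lists, and the limits are taken from the values given in the text (a maximum bundle size of 10 from the Confirm Level field, and a preferred maximum buffer size of 8153 bytes).

```python
MAX_BUFFER = 8153   # preferred maximum buffer size from the text
MAX_BUNDLE = 10     # maximum bundle size from the Confirm Level field 412

def bundle(records):
    """Group records into blocks per steps 1054-1068 (representation assumed).
    Each record is a (service_type, data) pair; a block is closed when the
    bundle count is reached, the service type changes, or adding the next
    record would overflow the buffer."""
    blocks, current, size = [], [], 0
    for service, data in records:
        if current and (
            len(current) >= MAX_BUNDLE          # step 1054: bundle size reached
            or service != current[0][0]         # step 1060: service type differs
            or size + len(data) > MAX_BUFFER    # step 1064: buffer would overflow
        ):
            blocks.append(current)              # step 1068: complete the block
            current, size = [], 0
        current.append((service, data))         # step 1066: add record to block
        size += len(data)
    if current:
        blocks.append(current)                  # queue empty (step 1058)
    return blocks
```

With a maximum bundle size of 10, twelve records of one service type followed by one of another would yield three blocks of 10, 2, and 1 records respectively, matching the decision points of the flowchart.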
If the Secondary Router 320A determines in step 1024 (FIG. 10A) that secondary block processing is enabled, then step 1028 (FIG. 10B) is performed.
In step 1028, the Secondary Router 320A initializes a new secondary block record 1102. Field 1106 is set to the default blocking service name "q-block-recs". The entire record 602 is placed into field 1114. Field 1112 is set equal to the length of the record 602.
In step 1030, the Secondary Router 320A accesses the Routing Distribution Table 322A and determines the maximum bundle size. The Secondary Router 320A does this by searching in the Routing Distribution Table 322A until it finds a row where field 410 is equal to the name of its Secondary Router Queue, i.e., Secondary Router Queue 318A. In the example of FIG. 4, the Secondary Router 320A finds row 414M. The Secondary Router 320A uses the value in the Confirm Level field 412 of this row as the maximum bundle size.
For this example, the Secondary Router 320A has a maximum bundle size of 10.
In step 1032, the Secondary Router 320A determines whether the maximum bundle size of the secondary block record 1102 has been reached. In other words, the Secondary Router 320A determines whether the number of records 602 bundled into the secondary block record 1102 is equal to the maximum bundle size. If the maximum bundle size has been reached, then step 1044 is performed. In step 1044, the Secondary Router 320A completes the secondary block record 1102 by filling in the appropriate data for fields 1108 and 1110 of the secondary block record 1102. Control then passes to step 1046, described below.
If, in step 1032, the Secondary Router 320A determines that the maximum bundle size has not been reached, then step 1034 is performed. In step 1034, the Secondary Router 320A attempts to select another data record 602 from the unbundled block (preferably, this step is not performed during the first iteration through the flowchart).
In step 1036, the Secondary Router 320A determines whether any more records from the unbundled block remain. If no more records from the unbundled block remain, then step 1044 is performed, as described above. Otherwise, step
1038 is performed.
In step 1038, the Secondary Router 320A converts the data record 602 as necessary (as described above).
In step 1040, the Secondary Router 320A determines if the size of the secondary block record 1102 would exceed the maximum buffer size (preferably 8153 bytes) if the new data record 602 were added to the secondary block record 1102. Step
1044 (described above) is performed if the maximum buffer size would be exceeded. Otherwise, step 1042 is performed.
In step 1042, the Secondary Router 320A adds the new data record 602 to the secondary block record 1102. This is done by appropriately filling in fields 1116 and 1118 of the secondary block record 1102. Control then returns to step 1032, described above.
Step 1046 is performed if the secondary block record 1102 has been completed (step 1044). In step 1046, the Secondary Router 320A accesses the Routing Distribution Table 322A to identify the Send Queues 324 that are coupled to NIDSs 326 that support the service type of the data record 602 (for secondary block processing, any of the data records 602 that were placed into the secondary block record 1102 could be used, since they all have the same service type). This operation is described further above. In step 1048, the Secondary Router 320A replicates the secondary block record 1102. One copy is made for each Send Queue 324 identified in step 1046. Control then passes to step 1076 (FIG. 10D).
In step 1076, the Secondary Router 320A serially writes (via local writes) the replicated block records to the Send Queues 324 identified in step 1046. Step 1078 is then performed.
Step 1078 is related to steps 1032 and 1040. Step 1032 previously determined whether the maximum bundle size had been reached. Step 1040 previously determined whether the new record 602 that was read in step 1034 and being added to the secondary block record 1102 would fill up the internal buffer for secondary block record 1102. If step 1040 was found to be true, then the new record 602 was not added to the secondary block record 1102. If this is the case, then the new record 602 should be used to create a new secondary block record 1102. This logic is represented by step 1078.
In step 1078, the Secondary Router 320A determines whether the maximum bundle size would have been exceeded, or whether the new record 602 would have made the secondary block record 1102 exceed the maximum buffer size. In other words, the Secondary Router 320A in step 1078 determines whether step 1032 or 1040 was determined to be true. If either was true, then control passes to step 1079. Otherwise, control returns to step 1006 (FIG. 10A).
In step 1079, the Secondary Router 320A initializes the buffer, creating a new secondary block record 1102 (as described in step 1028), and adding the new data record 602 to the secondary block record 1102 (as also described in step 1028) if the maximum buffer size was exceeded in step 1040. Control then passes to step 1090. In step 1090, if the maximum bundle size was reached, then control returns to step 1034. Otherwise, control returns to step 1038 (FIG. 10B).
Advantages and Performance
The invention of FIG. 3 provides two key advantages.
The first key advantage is improved reliability. The Secondary Router Queues 318 within the Primary Region 304 serve as a buffer to the Secondary
Regions 306. Once the Primary Router 314 has distributed a data record to each
Secondary Router Queue 318, it can proceed with the next data record; it does not rely on the Secondary Region 306 receiving the data record. Thus, if a Secondary Region 306 fails or otherwise becomes unavailable, the distribution process may continue with the other Secondary Regions 306. Data records will be held up only in the Secondary Router Queue 318 that corresponds with the failed Secondary Region 306.
The second key advantage is a significant improvement in efficiency and performance of the data distribution process. By implementing a parallel distribution process, and thus taking advantage of the computer's multi-tasking capabilities, the present invention improves throughput of data. This is accomplished by reducing the number of endpoints that each parallel Secondary Router 320 must service, and by reducing the number of remote (inter-region) writes. This is described further with the following example.
A comparison can be made between the throughput of the prior art system shown in FIG. 2, and the present invention shown in FIG. 3. It is assumed that throughput from the Application Programs 206, 308 to the Primary Router Queues 210, 312, and from each Send Queue 218, 324 and Send Process 219, 325 to its corresponding NIDS 220, 326, is the same for both systems. The comparison can be made for throughput between the Primary Router Queue 210, 312 and Send Queues 218, 324.
Benchmark tests can be conducted that result in comparisons between the time required for remote writes, remote reads, local writes, and local reads. For example, it has been found in some instances that a remote read (a data read across different CICS regions) is four times more costly than a local read (a data read within a single CICS region). Sample throughput calculations are shown below using the assumption that remote operations are more costly than local operations, as well as the assumption that remote operations are equal to local operations.
Each set of calculations is for an example DDS in which data needs to be distributed to 80 NIDS and five Secondary Regions are used, each one supporting 16 NIDS. For the sake of comparison, an arbitrary unit of time can be used to make relative comparisons, and is referred to as a time factor.
Scenario 1
The following calculations assume that remote operations are more costly than local operations:
Assumptions as a result of benchmark tests:
1 local read = 1 time factor (tf)
1 local write = 1 time factor (tf)
1 remote read = 4 time factors (tf)
1 remote write = 5 time factors (tf)
The prior art system of FIG. 2 requires the following:
a) 1 local read of the Primary Router Queue 210 by the Primary Router 212 equals 1 tf
b) 80 remote writes of the Primary Router 212 to the Send Queues 218 equals 400 tf
This results in a total of 401 time factors. The present invention of FIG. 3 requires the following:
a) 1 local read of the Primary Router Queue 312B by the Primary Router 314 equals 1 tf
b) 5 local writes of the Primary Router 314 to Secondary Router Queues 318 equals 5 tf
c) 5 remote reads, 1 of each Secondary Router Queue 318 by a Secondary Router 320, equals 20 tf
d) 80 local writes, 1 of each Secondary Router 320 to a Send Queue 324, equals 80 tf
This results in a total of 106 time factors. The relative expense savings is thus 401/106 = 3.78. This represents a minimum expense savings, because it does not take into account the fact that in steps c) and d) of the present invention, the operations are performed in parallel. Taking this fact into account, the present invention of FIG. 3 requires the following:
a) 1 local read of the Primary Router Queue 312B by the Primary Router 314 equals 1 tf
b) 5 local writes of the Primary Router 314 to Secondary Router Queues 318 equals 5 tf
c) 5 remote reads, one of each Secondary Router Queue 318 by a Secondary Router 320, performed in parallel, equals 4 tf
d) 80 local writes, 1 of each Secondary Router 320 to a Send Queue 324, performed in 5 parallel groups of 16 each, equals 16 tf
This results in a total of 26 time factors. The relative expense savings is thus 401/26 = 15.42.
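The Scenario 1 arithmetic can be reproduced with a short calculation; the weights are the time factors assumed from the benchmark tests above.

```python
# Time-factor weights assumed from the benchmark tests above.
LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE = 1, 1, 4, 5

# Prior art (FIG. 2): 1 local read, then 80 remote writes.
prior_art = 1 * LOCAL_READ + 80 * REMOTE_WRITE                                  # 401 tf

# FIG. 3, serial accounting: 1 local read, 5 local writes,
# 5 remote reads, 80 local writes.
serial = 1 * LOCAL_READ + 5 * LOCAL_WRITE + 5 * REMOTE_READ + 80 * LOCAL_WRITE  # 106 tf

# FIG. 3, parallel accounting: the 5 remote reads overlap (4 tf) and
# the 80 local writes run in 5 parallel groups of 16 (16 tf).
parallel = 1 * LOCAL_READ + 5 * LOCAL_WRITE + 1 * REMOTE_READ + 16 * LOCAL_WRITE  # 26 tf

print(prior_art, serial, parallel)      # 401 106 26
print(round(prior_art / serial, 2))     # 3.78
print(round(prior_art / parallel, 2))   # 15.42
```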
Scenario 2
The following calculations assume that remote operations are equal in cost to local operations. It is assumed that all operations require 1 time factor (tf).
The prior art system of FIG. 2 requires the following:
a) 1 local read of the Primary Router Queue 210 by the Primary Router 212 equals 1 tf
b) 80 remote writes of the Primary Router 212 to the Send Queues 218 equals 80 tf
This results in a total of 81 time factors. The present invention of FIG. 3 requires the following:
a) 1 local read of the Primary Router Queue 312B by the Primary Router 314 equals 1 tf
b) 5 local writes of the Primary Router 314 to Secondary Router Queues 318 equals 5 tf
c) 5 remote reads, 1 of each Secondary Router Queue 318 by a Secondary Router 320, performed in parallel, equals 1 tf
d) 80 local writes, 1 of each Secondary Router 320 to a Send Queue 324, performed in 5 parallel groups of 16 each, equals 16 tf
This results in a total of 23 time factors. The relative expense savings is thus 81/23 = 3.52.
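Scenario 2 follows the same pattern with a uniform cost of 1 tf per operation:

```python
# Scenario 2: every operation is assumed to cost 1 time factor.
prior_art = 1 + 80            # 1 local read + 80 remote writes = 81 tf
invention = 1 + 5 + 1 + 16    # read + 5 writes + parallel remote reads + parallel local writes
print(prior_art, invention)             # 81 23
print(round(prior_art / invention, 2))  # 3.52
```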
It is noted that the above calculations for the present invention reflect only the non-block processing mode. Additional savings are realized by using the block mode, since fewer remote reads are required between the Secondary Routers 320 and the Secondary Router Queues 318.
Alternate Embodiment - Secondary Router with Block Processing
The alternate embodiment of implementing block processing in the Secondary Routers 320, described in flowchart 1002, provides three additional advantages.
The first additional advantage of having the Secondary Routers 320 perform bundling is improved performance of the overall Data Distribution System, in terms of time. The time saved is in the input/output process of reading data from and writing data to permanent storage, such as a magnetic disk or a Direct Access Storage Device (DASD). The process of performing a local write may require a mechanical process such as positioning a head on a disk at the right location, and moving the head to write the data to the disk. Time is saved by performing such a process only once for 10 bundled records rather than 10 times for 10 separate records. This is described further with examples below.
The second additional advantage realized is the time saved in the number of database queries required. Each time a Secondary Router 320 distributes a record 602 or a secondary block record 1 102 to a Send Queue 324, the Secondary Router 320 needs to query its Routing Distribution Table 322 to determine routing and to formulate a header for the record 602 or block 1102. Without bundling, the Secondary Router 320 performs this query for each individual record 602. With bundling, the Secondary Router 320 performs this query once for each block 1102, since each record 602 in a block 1102 is being routed to the same NIDS server 326. This reduces by several times the number of Routing Distribution Table 322 queries required.
The third additional advantage realized is a savings in the indexing of bundled records. Block records 1102 require only one index per bundle of records, whereas one index is required for each unbundled data record 602.
To illustrate the first advantage of this embodiment, a comparison can be made between the throughput of the prior art system shown in FIG. 2, and the present alternative embodiment shown in FIG. 3, similar to that made for the prior embodiment. It is assumed that throughput from the Application Programs 206, 308 to the Primary Router Queues 210, 312, and from each Send Queue 218, 324 and Send Process 219, 325 to its corresponding NIDS 220, 326, is the same for both systems. The comparison can be made for throughput between the Primary Router Queue 210, 312 and Send Queues 218, 324. In each example DDS, assume five Secondary Regions are used, supporting a total of 80 NIDS. Each set of calculations is for 50 data records needing to be distributed to all 80 NIDS. The same time factors used in the examples illustrating the previous embodiment are applied here for remote writes, remote reads, local writes, and local reads. Assume that all 50 data records can be bundled together.
Scenario 1
The following calculations assume that remote operations are more costly than local operations.
The prior art system of FIG. 2 requires the following:
a) 50 local reads of the Primary Router Queue 210 by the Primary Router 212 equals 50 tf
b) 50 remote writes of the Primary Router 212 to each of the 80 Send Queues 218, at 5 tf/remote serial write, equals 20000 tf
This results in a total of 20050 time factors. The present invention of FIG. 3 without bundling in either the primary or secondary regions requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 tf
b) 50 local writes of the Primary Router 314 to each of the 5 Secondary Router Queues 318 equals 250 tf
c) 50 remote reads of each Secondary Router Queue 318 by a Secondary Router 320, at 4 tf/remote read, equals 200 tf (Note: secondary routers read in parallel)
d) 50 local writes of each Secondary Router 320 to each Send Queue 324 equals 800 tf (Note: secondary routers write in parallel)
This results in a total of 1300 time factors. The present invention of FIG. 3, with bundling in the primary region, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 tf
b) 5 local writes of the bundled record by the Primary Router 314 to Secondary Router Queues 318 equals 5 tf
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320 equals 4 tf
d) 50 local writes of each Secondary Router 320 to 16 Send Queues 324 equals 800 tf
This results in a total of 859 time factors. The present invention of FIG. 3, with bundling in both the primary and secondary regions, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 tf
b) 5 local writes of the bundled record by the Primary Router 314 to Secondary Router Queues 318 equals 5 tf
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320 equals 4 tf
d) 1 local write of the bundled record by each Secondary Router 320 to 16 Send Queues 324 equals 16 tf
This results in a total of 75 time factors. The relative expense savings of bundling in both regions compared to each previous configuration is thus:
Prior art system of Fig. 2: 20050/75 = 267.333
Present Invention of Fig. 3, with no bundling in either region: 1300/75 = 17.333
Present Invention of Fig. 3, with bundling in the primary region only: 859/75 = 11.45
Scenario 2
The following calculations assume that remote operations are equal in cost to local operations. It is assumed that all operations require 1 time factor (tf).
The prior art system of FIG. 2 requires the following:
a) 50 local reads of the Primary Router Queue 210 by the Primary Router 212 equals 50 tf
b) 50 remote writes of the Primary Router 212 to each of the 80 Send Queues 218 equals 4000 tf
This results in a total of 4050 time factors. The present invention of FIG. 3 without bundling in either the primary or secondary regions requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 tf
b) 50 local writes of the Primary Router 314 to each of the 5 Secondary Router Queues 318 equals 250 tf
c) 50 remote reads of each Secondary Router Queue 318 by a Secondary Router 320 equals 50 tf (Note: secondary routers read in parallel)
d) 50 local writes of each Secondary Router 320 to each Send Queue 324 equals 800 tf (Note: secondary routers write in parallel)
This results in a total of 1150 time factors. The present invention of FIG. 3, with bundling in the primary region, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 tf
b) 5 local writes of the bundled record by the Primary Router 314 to Secondary Router Queues 318 equals 5 tf
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320 equals 1 tf
d) 50 local writes of each Secondary Router 320 to 16 Send Queues 324 equals 800 tf
This results in a total of 856 time factors. The present invention of FIG. 3, with bundling in both the primary and secondary regions, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 tf
b) 5 local writes of the bundled record by the Primary Router 314 to Secondary Router Queues 318 equals 5 tf
c) 1 remote read of the bundled record from each Secondary Router Queue 318 by a Secondary Router 320 equals 1 tf
d) 1 local write of the bundled record by each Secondary Router 320 to 16 Send Queues 324 equals 16 tf
This results in a total of 72 time factors. The relative expense savings of bundling in both regions compared to each previous configuration is thus:
Prior art system of Fig. 2: 4050/72 = 56.25
Present Invention of Fig. 3, with no bundling in either region: 1150/72 = 15.97
Present Invention of Fig. 3, with bundling in the primary region only: 856/72 = 11.88
A comparison can also be made between the number of input/output operations required for the prior art system shown in FIG. 2, and the present embodiment shown in FIG. 3. It is assumed that throughput from the Application Programs 206, 308 to the Primary Router Queues 210, 312, and from each Send Queue 218, 324 and Send Process 219, 325 to its corresponding NIDS 220, 326, is the same for both systems. The comparison can be made for throughput between the Primary Router Queue 210, 312 and Send Queues 218, 324.
In each example DDS, assume five Secondary Regions are used, each one supporting 15 NIDS, for a total of 75 NIDS. Each set of calculations is for 50 data records needing to be distributed to all 15 NIDS in each Secondary Region.
Comparisons are made between the number of input/output (I/O) operations required for remote writes, remote reads, local writes, and local reads. Assume that all 50 data records can be bundled together. For the sake of comparison, relative costs are expressed simply as counts of I/O operations.
The following calculations assume that remote operations are more costly than local operations, and use benchmark test results of 1 I/O operation per local read or write, and 4 I/O operations per remote read and remote write.
The prior art system of FIG. 2 requires the following:
a) 50 local reads of the Primary Router Queue 210 by the Primary Router 212 equals 50 I/O operations
b) 50 remote writes of the Primary Router 212 to each of the 75 Send Queues 218 equals 15000 I/O operations
This results in a total of 15050 I/O operations. The present invention of FIG. 3 without bundling in either the primary or secondary regions requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 I/O operations
b) 50 local writes of the Primary Router 314 to each of the 5 Secondary Router Queues 318 equals 250 I/O operations
c) 50 remote reads of each Secondary Router Queue 318 by a Secondary Router 320 equals 200 I/O operations (Note: secondary routers read in parallel)
d) 50 local writes of each Secondary Router 320 to each Send Queue 324 equals 750 I/O operations (Note: secondary routers write in parallel)
This results in a total of 300 I/O operations in the primary region and 950 I/O operations in each secondary region. 300 I/O operations in the primary region, added to 950 I/O operations in each secondary region multiplied by 5 secondary regions (i.e., 300 + 950*5), amounts to a total of 5050 I/O operations. The relative expense savings is thus 15050/5050 = 2.98. This represents a minimum expense savings. The present invention of FIG. 3, with bundling in the primary region, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 I/O operations
b) 5 local writes of the Primary Router 314 to Secondary Router Queues 318 equals 5 I/O operations
c) 1 remote read of each Secondary Router Queue 318 by a Secondary Router 320 equals 4 I/O operations
d) 50 local writes of each Secondary Router 320 to 15 Send Queues 324 equals 750 I/O operations
This results in a total of 55 I/O operations in the primary region and 754 I/O operations per secondary region. 55 I/O operations in the primary region, added to 754 I/O operations in each secondary region multiplied by 5 secondary regions (i.e., 55 + 754*5), amounts to a total of 3825 I/O operations. The relative expense savings is thus 15050/3825 = 3.93. This is a somewhat greater expense savings. The present invention of FIG. 3, with bundling in both the primary and secondary regions, requires the following:
a) 50 local reads of the Primary Router Queue 312B by the Primary Router 314 equals 50 I/O operations
b) 5 local writes of the Primary Router 314 to Secondary Router Queues 318 equals 5 I/O operations
c) 1 remote read of each Secondary Router Queue 318 by a Secondary Router 320 equals 4 I/O operations
d) 1 local write of each Secondary Router 320 to 15 Send Queues 324 equals 15 I/O operations
This results in a total of 55 I/O operations in the primary region and 19 I/O operations per secondary region. 55 I/O operations in the primary region, added to 19 I/O operations in each secondary region multiplied by 5 secondary regions (i.e., 55 + 19*5), amounts to a total of 150 I/O operations. The relative expense savings is thus 15050/150 = 100.33.
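The I/O-operation comparison above can be reproduced with a short, illustrative cost model; the variable names are invented for clarity, and the weights are the assumed 1 I/O operation per local access and 4 per remote access.

```python
LOCAL, REMOTE = 1, 4                 # I/O operations per local / remote access
REGIONS, NIDS_PER_REGION, RECORDS = 5, 15, 50

def system_total(primary_ops: int, per_secondary_ops: int) -> int:
    """Total I/O: primary-region ops plus each secondary region's ops."""
    return primary_ops + REGIONS * per_secondary_ops

# Prior art (FIG. 2): 50 local reads plus 50 remote writes to all 75 queues.
prior_art = RECORDS * LOCAL + RECORDS * REGIONS * NIDS_PER_REGION * REMOTE

# FIG. 3, no bundling: 50 reads + 250 writes in the primary region;
# 200 remote-read + 750 local-write I/O operations per secondary region.
no_bundling = system_total(RECORDS * LOCAL + RECORDS * REGIONS * LOCAL,
                           RECORDS * REMOTE + RECORDS * NIDS_PER_REGION * LOCAL)

# FIG. 3, bundling in the primary region only: one remote read of the
# bundled record per secondary region, but still 750 local writes.
primary_only = system_total(RECORDS * LOCAL + REGIONS * LOCAL,
                            1 * REMOTE + RECORDS * NIDS_PER_REGION * LOCAL)

# FIG. 3, bundling in both regions: one remote read and 15 local writes.
both_regions = system_total(RECORDS * LOCAL + REGIONS * LOCAL,
                            1 * REMOTE + NIDS_PER_REGION * LOCAL)

print(prior_art, no_bundling, primary_only, both_regions)  # 15050 5050 3825 150
print(round(prior_art / both_regions, 2))                  # 100.33
```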
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

What Is Claimed Is:
1. A method of distributing data, comprising the steps of:
(1) performing a local read operation to read a data record from a primary router queue;
(2) identifying secondary router queues associated with secondary regions that are coupled to servers that support a service type of said data record;
(3) replicating said data record to produce replicated data records;
(4) performing local write operations to write said replicated data records to said identified secondary router queues;
(5) performing remote read operations to read data records from one of said identified secondary router queues associated with one of said secondary regions;
(6) identifying at least one send queue associated with a server that supports the service type of said data records, and that is supported by said one of said secondary regions;
(7) optionally converting said data records to produce converted data records;
(8) generating a secondary block record having stored therein information from one or more of said converted data records; and
(9) performing a local write operation to write said secondary block record to said identified at least one send queue.
2. The method of claim 1, wherein steps (1)-(4) are performed by components within a primary region, and steps (5)-(9) are performed by components within a secondary region.
3. The method according to claim 1, wherein step (6) comprises the step of: identifying additional send queues associated with servers that support the service type of said data records, and that are supported by said one of said secondary regions.
4. The method according to claim 3, wherein step (8) comprises the step of: replicating said secondary block record to produce replicated secondary block records.
5. The method according to claim 4, wherein step (9) comprises the step of: performing local write operations to write said replicated secondary block records to said identified additional send queues.
6. The method of claim 1, further comprising the steps of:
(10) performing remote write operations to write secondary block records from said identified at least one send queue to associated NIDS;
(11) unbundling said secondary block records to produce converted data records in said NIDS.
7. The method of claim 1, further comprising the step of:
(10) unbundling said secondary block records to produce converted data records in said identified at least one send queue.
8. The method of claim 1, wherein step (8) comprises the steps of:
(a) performing a remote read operation to read a converted data record;
(b) storing information contained in said converted data record in a new secondary block record;
(c) determining whether a maximum bundle size of said new secondary block record has been reached; (d) if said maximum bundle size of said new secondary block record has not been reached, then attempting to read another converted data record;
(e) determining if predetermined conditions are satisfied;
(f) if said predetermined conditions are satisfied, then storing information contained in said another converted data record in said new secondary block record; and
(g) returning to step (c).
9. The method of claim 8, wherein step (e) comprises the steps of: determining if any said converted data records remain unbundled; if no said converted data records remain unbundled, then completing said new secondary block record; if said converted data records remain unbundled, then determining if a service type of said another converted data record is equal to a service type of said new secondary block record; if said service type of said another converted data record is not equal to said service type of said new secondary block record, then completing said new secondary block record; if said service type of said another converted data record is equal to said service type of said new secondary block record, then determining whether a maximum buffer size would be exceeded if said another converted data record is added to said new secondary block record; if said maximum buffer size would be exceeded if said another converted data record is added to said new secondary block record, then completing said new secondary block record; and if said maximum buffer size would not be exceeded if said another converted data record is added to said new secondary block record, then determining that said predetermined conditions are satisfied.
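The bundling loop recited in claims 8 and 9 can be sketched as follows. This is a minimal illustration only: the record shape (a service type paired with a payload string), the `max_records` and `max_bytes` limits, and the function and parameter names are all assumptions for the sketch, not taken from the claims.

```python
def bundle(records, max_records=10, max_bytes=4096):
    """Group converted data records of one service type into block
    records (a sketch of the claim 8/9 loop). A block is completed when
    no records remain, when the next record's service type differs, or
    when adding the next record would exceed the buffer size."""
    blocks = []
    block = None  # current block: [service_type, payload list, byte count]
    for service_type, payload in records:
        if block is None:
            block = [service_type, [payload], len(payload)]
            continue
        # Predetermined conditions (claim 9): same service type, buffer
        # size not exceeded, and maximum bundle size not yet reached.
        same_type = service_type == block[0]
        fits = block[2] + len(payload) <= max_bytes
        if same_type and fits and len(block[1]) < max_records:
            block[1].append(payload)
            block[2] += len(payload)
        else:
            blocks.append((block[0], block[1]))  # complete the block
            block = [service_type, [payload], len(payload)]
    if block is not None:
        blocks.append((block[0], block[1]))  # complete the final block
    return blocks
```

A service-type change or a full buffer simply closes the current block and opens a new one, so a single pass over the queue yields the complete sequence of block records.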
10. A method of distributing data, comprising the steps of: (1) generating a primary block record having stored therein information from one or more data records read from a primary router queue;
(2) identifying secondary router queues associated with secondary regions that are coupled to servers that support a service type of said primary block record;
(3) replicating said primary block record to produce replicated primary block records; and
(4) performing local write operations to write said replicated primary block records to said identified secondary router queues.
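The fan-out recited in steps (2)-(4) of claim 10 can be sketched as follows. The dictionary-based queue and service-type structures, and all names, are illustrative assumptions; the claim itself does not prescribe any data representation.

```python
def fan_out(primary_block, secondary_queues, supported_types):
    """Replicate a primary block record to every secondary router queue
    whose region is coupled to servers supporting the block's service
    type (a sketch of claim 10, steps (2)-(4))."""
    service_type, payloads = primary_block
    written = []
    for region, queue in secondary_queues.items():
        if service_type in supported_types.get(region, set()):
            queue.append((service_type, list(payloads)))  # local write of a replica
            written.append(region)
    return written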
11. The method of claim 10, further comprising the steps of:
(5) performing a remote read operation to read one of said replicated primary block records from one of said identified secondary router queues associated with one of said secondary regions;
(6) unbundling said one of said replicated primary block records to obtain said one or more data records;
(7) identifying at least one send queue associated with servers that support a service type of said data records, and that are supported by said one of said secondary regions;
(8) optionally converting said data records to produce converted data records;
(9) generating a secondary block record having stored therein information from one or more of said converted data records; and
(10) performing a local write operation to write said secondary block record to said identified at least one send queue.
12. The method of claim 11, wherein step (9) comprises the steps of:
(a) performing a remote read operation to read a converted data record;
(b) storing information contained in said converted data record in a new secondary block record; (c) determining whether a maximum bundle size of said new secondary block record has been reached;
(d) if said maximum bundle size of said new secondary block record has not been reached, then attempting to read another converted data record; (e) determining if predetermined conditions are satisfied;
(f) if said predetermined conditions are satisfied, then storing information contained in said another converted data record in said new secondary block record; and
(g) returning to step (c).
13. The method of claim 12, wherein step (e) comprises the steps of: determining if any said converted data records remain unbundled; if no said converted data records remain unbundled, then completing said new secondary block record; if said converted data records remain unbundled, then determining if a service type of said another converted data record is equal to a service type of said new secondary block record; if said service type of said another converted data record is not equal to said service type of said new secondary block record, then completing said new secondary block record; if said service type of said another converted data record is equal to said service type of said new secondary block record, then determining whether a maximum buffer size would be exceeded if said another converted data record is added to said new secondary block record; if said maximum buffer size would be exceeded if said another converted data record is added to said new secondary block record, then completing said new secondary block record; and if said maximum buffer size would not be exceeded if said another converted data record is added to said new secondary block record, then determining that said predetermined conditions are satisfied.
14. The method of claim 11, wherein steps (1)-(4) are performed by components within a primary region, and steps (5)-(10) are performed by components within a secondary region.
15. The method according to claim 11, wherein step (7) comprises the step of: identifying additional send queues associated with servers that support a service type of said data records, and that are supported by said one of said secondary regions.
16. The method according to claim 15, wherein step (9) comprises the step of: replicating said secondary block record to produce replicated secondary block records.
17. The method according to claim 16, wherein step (10) comprises the step of: performing local write operations to write said replicated secondary block records to said identified additional send queues.
18. The method of claim 11, further comprising the steps of:
(11) performing local write operations to write secondary block records from said identified at least one send queue to associated NIDS; and
(12) unbundling said secondary block records to produce converted data records in said NIDS.
19. The method of claim 11, further comprising the step of:
(11) unbundling said secondary block records to produce converted data records in said identified at least one send queue.
20. The method of claim 11, wherein step (1) comprises the steps of: (a) performing a local read operation to read a data record from a primary router queue;
(b) storing information contained in said data record in a new primary block record; (c) determining whether a maximum bundle size of said new primary block record has been reached;
(d) if said maximum bundle size of said new primary block record has not been reached, then attempting to read another data record from said primary router queue; (e) determining if predetermined conditions are satisfied;
(f) if said predetermined conditions are satisfied, then storing information contained in said another data record in said new primary block record; and
(g) returning to step (c).
21. The method of claim 20, wherein step (e) comprises the steps of: determining if said primary router queue is empty; if said primary router queue is empty, then completing said new primary block record; if said primary router queue is not empty, then determining if a service type of said another data record is equal to a service type of said new primary block record; if said service type of said another data record is not equal to said service type of said new primary block record, then completing said new primary block record; if said service type of said another data record is equal to said service type of said new primary block record, then determining whether a maximum buffer size would be exceeded if said another data record is added to said new primary block record; if said maximum buffer size would be exceeded if said another data record is added to said new primary block record, then completing said new primary block record; and if said maximum buffer size would not be exceeded if said another data record is added to said new primary block record, then determining that said predetermined conditions are satisfied.
22. A method of distributing data in a data distribution system in which a local read operation is performed to read a data record from a primary router queue, wherein secondary router queues associated with secondary regions that are coupled to servers that support a service type of said data record are identified, and wherein said data record is replicated to produce replicated data records that are written using local write operations to said identified secondary router queues, the method comprising the steps of:
(1) performing remote read operations to read data records from a secondary router queue associated with a secondary region;
(2) identifying at least one send queue associated with a server that supports the service type of said data records, and that is supported by said secondary region;
(3) optionally converting said data records to produce converted data records;
(4) generating a secondary block record having stored therein information from one or more of said converted data records; and
(5) performing a local write operation to write said secondary block record to said identified at least one send queue.
23. A method of distributing data in a data distribution system in which a primary block record having stored therein information from one or more data records read from a primary router queue is generated, wherein secondary router queues associated with secondary regions that are coupled to servers that support a service type of said primary block record are identified, said primary block record being replicated to produce replicated primary block records that are written to said identified secondary router queues, the method comprising the steps of: (1) performing a remote read operation to read a replicated primary block record from a secondary router queue associated with a secondary region;
(2) unbundling said replicated primary block record to obtain one or more data records;
(3) identifying at least one send queue associated with servers that support a service type of said obtained data records, and that are supported by said secondary region;
(4) optionally converting said obtained data records to produce converted data records;
(5) generating a secondary block record having stored therein information from one or more of said converted data records; and
(6) performing local write operations to write said secondary block record to said identified at least one send queue.
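The secondary-region flow of claim 23 (remote read, unbundle, identify send queues, convert, rebundle, local write) can be sketched in one pass. The queue and record shapes, the trivial uppercase conversion, and all names are illustrative assumptions, not taken from the specification.

```python
def secondary_region_step(secondary_router_queue, send_queues, server_types,
                          convert=str.upper):
    """One pass of the claim-23 secondary flow: read a replicated primary
    block record, unbundle it, convert each record, bundle the converted
    records into a secondary block record, and write that block to each
    send queue whose servers support the block's service type."""
    service_type, payloads = secondary_router_queue.pop(0)  # (1) remote read
    records = list(payloads)                                # (2) unbundle
    targets = [name for name, types in server_types.items()
               if service_type in types]                    # (3) identify send queues
    converted = [convert(r) for r in records]               # (4) optional conversion
    secondary_block = (service_type, converted)             # (5) bundle
    for name in targets:                                    # (6) local writes
        send_queues[name].append(secondary_block)
    return secondary_block
```

Running the pass once per replicated primary block record drains the secondary router queue while keeping each send queue restricted to its servers' supported service types.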
PCT/US1998/003980 1997-02-28 1998-03-02 Enhanced hierarchical data distribution system using bundling in multiple routers WO1998038566A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU63436/98A AU6343698A (en) 1997-02-28 1998-03-02 Enhanced hierarchical data distribution system using bundling in multiple routers

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US81017697A 1997-02-28 1997-02-28
US08/810,176 1997-02-28
US93399597A 1997-09-19 1997-09-19
US08/933,995 1997-09-19

Publications (3)

Publication Number Publication Date
WO1998038566A2 true WO1998038566A2 (en) 1998-09-03
WO1998038566A3 WO1998038566A3 (en) 1998-12-03
WO1998038566A9 WO1998038566A9 (en) 1999-01-28

Family

ID=27123316

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/003980 WO1998038566A2 (en) 1997-02-28 1998-03-02 Enhanced hierarchical data distribution system using bundling in multiple routers

Country Status (2)

Country Link
AU (1) AU6343698A (en)
WO (1) WO1998038566A2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5557745A (en) * 1990-09-04 1996-09-17 Digital Equipment Corporation Method for supporting foreign protocols across backbone network by combining and transmitting list of destinations that support second protocol in first and second areas to the third area
US5568605A (en) * 1994-01-13 1996-10-22 International Business Machines Corporation Resolving conflicting topology information
US5634011A (en) * 1992-06-18 1997-05-27 International Business Machines Corporation Distributed management communications network


Also Published As

Publication number Publication date
WO1998038566A3 (en) 1998-12-03
AU6343698A (en) 1998-09-18

Similar Documents

Publication Publication Date Title
US5613155A (en) Bundling client write requests in a server
US5924097A (en) Balanced input/output task management for use in multiprocessor transaction processing system
US6728727B2 (en) Data management apparatus storing uncomplex data and data elements of complex data in different tables in data storing system
JP3837291B2 (en) Application independent messaging system
US6393581B1 (en) Reliable time delay-constrained cluster computing
US6032147A (en) Method and apparatus for rationalizing different data formats in a data management system
US4130885A (en) Packet memory system for processing many independent memory transactions concurrently
JP3892987B2 (en) Message broker data processing apparatus, method, and recording medium
US7757232B2 (en) Method and apparatus for implementing work request lists
US6526055B1 (en) Method and apparatus for longest prefix address lookup
CN113067883B (en) Data transmission method, device, computer equipment and storage medium
US20080288670A1 (en) Use of virtual targets for preparing and servicing requests for server-free data transfer operations
US5864860A (en) Compression of structured data
JPH09505713A (en) System for parallel assembly of data transmission in broadband networks
US6944863B1 (en) Queue bank repository and method for sharing limited queue banks in memory
KR20120085798A (en) Efficient multiple filter packet statistics generation
US5742812A (en) Parallel network communications protocol using token passing
US6850962B1 (en) File transfer system and method
US20060149760A1 (en) LDAP bulk append
US8223785B2 (en) Message processing and content based searching for message locations in an asynchronous network
Ayesta et al. A token-based central queue with order-independent service rates
JP3598522B2 (en) Distributed database management device
JPH0392942A (en) Storing method and accessing method for file
WO1998038566A2 (en) Enhanced hierarchical data distribution system using bundling in multiple routers
JP2006277158A (en) Data update system, server and program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AU CA CN JP KR MX

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AU CA CN JP KR MX

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

COP Corrected version of pamphlet

Free format text: PAGES 1/16-16/16, DRAWINGS, REPLACED BY NEW PAGES 1/16-16/16; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998530403

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase