CN100596159C - Method for processing multi-process messages and method for processing multi-process call tickets - Google Patents

Method for processing multi-process messages and method for processing multi-process call tickets

Info

Publication number
CN100596159C
CN100596159C CN200510125736A
Authority
CN
China
Prior art keywords
shared memory
ticket
memory block
load
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200510125736A
Other languages
Chinese (zh)
Other versions
CN1787588A (en)
Inventor
周训波
Original Assignee
Datang Software Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datang Software Technologies Co Ltd filed Critical Datang Software Technologies Co Ltd
Priority to CN200510125736A priority Critical patent/CN100596159C/en
Publication of CN1787588A publication Critical patent/CN1787588A/en
Application granted granted Critical
Publication of CN100596159C publication Critical patent/CN100596159C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

This invention relates to a multi-process message processing method and a multi-process call ticket processing method. The ticket processing method, used for parallel ticket processing in a charging system, includes: 1) establishing a shared memory block for each process; 2) each process fetching tickets from its shared memory block for processing; 3) distributing tickets into the shared memory blocks according to a preset load balancing rule based on the current occupancy of the memory blocks, with tickets of the same charged account always stored in the same shared memory block.

Description

Multi-process message processing method and multi-process call ticket processing method
Technical field
The present invention relates to multi-process message processing methods, and in particular to a load balancing method for message processing among multiple processes. The invention further relates to a real-time charging method, and in particular to a method for multi-process call ticket processing.
Background art
Existing multi-process message processing methods distribute data to components for processing either through an inter-process communication (IPC) mechanism agreed between the components (such as pipes, sockets, or message queues) or through directory files.
Taking sockets (or pipes) as an example, a connection is established between the process of a preceding processing stage and the process of a subsequent processing stage (in the pipe mechanism this means one read pipe and one write pipe); one process writes data into it and the other process reads data from it. The subsequent-stage process periodically feeds its processing speed back to the preceding-stage process; the feedback can be implemented through an IPC mechanism, a file, a data table, or the like. The preceding-stage process then distributes data according to this feedback and decides which subsequent-stage process will handle the current data.
Under this model, the preceding-stage process must obtain acknowledgement from the subsequent-stage process in order to guarantee that the data has been received. If the subsequent-stage process fails or its tasks back up, the preceding-stage process is stuck waiting. Moreover, the performance of this mechanism is limited and cannot accommodate the exchange of massive amounts of data.
Taking message queues as an example, under the message queue model each process sets up a message queue. The preceding-stage process sends data into the queue of a subsequent-stage process, and that process reads the data from the queue. The subsequent-stage process periodically feeds its processing speed back to the preceding-stage process; the feedback can likewise be implemented through an IPC mechanism, a file, a data table, or the like. The preceding-stage process then distributes data according to this feedback and decides which process will continue the processing.
Under this model the preceding-stage process can continue working as soon as it has put the data into the queue. However, the application programming interface (API) of the message queue is limited, and the subsequent-stage process cannot learn how badly data has backed up in the queue; moreover, the performance of this mechanism is also limited and cannot accommodate the exchange of massive amounts of data.
In summary, with the prior art, a delay in the subsequent-stage task causes the preceding stage to wait, which prevents the preceding stage from scheduling other tasks and leads to load imbalance among the processes, with some processes busy while others sit in a waiting state. Although the existing message queue mechanism provides some buffering, inter-process communication (IPC) offers no application programming interface (API) for querying the queue state. Furthermore, in the prior art, if a process fails, the operating system reclaims the memory used by that process and the data held in that memory is lost. The mechanisms adopted by the prior art are comparatively slow and cannot satisfy the demands of real-time services.
Summary of the invention
The object of the present invention is to provide a multi-process message processing method that can balance the data load of the individual processes, avoid losing data when a process fails, and offer higher processing speed. Accordingly, another object of the present invention is to provide a charging method that employs the described multi-process message processing method.
To solve the above technical problems, the invention provides a multi-process message processing method, comprising: 1) allocating a shared memory block for each process;
2) dividing the shared memory block of each process into a queue, and establishing the attributes of the queue within the shared memory block;
3) based on the current occupancy of the shared memory blocks and according to a preset load balancing rule, calculating the ratio of the message count of each queue to its queue capacity as the process load, obtaining the ratio of the maximum process load to the minimum process load, and, if this ratio is greater than a preset first threshold, sending the pending message to the shared memory block whose process load is currently the lowest.
On the basis of the above method, step 3) may further comprise periodically obtaining the process load of each shared memory block, judging whether the minimum process load is greater than a preset upper threshold, and, if so, starting more components and triggering the storage of pending messages according to the load balancing rule.
In the above method, when a process is handling a message in its shared memory block, an access control lock is used to restrict access by other processes to that shared memory block.
The present invention also provides a multi-process call ticket processing method used for parallel ticket processing in a charging system, comprising: 1) establishing a shared memory block for each process;
2) dividing the shared memory block of each process into a queue, and establishing the attributes of the queue within the shared memory block;
3) based on the current occupancy of the shared memory blocks, according to a pre-established mapping between the queues and account number segments and a preset load balancing rule, calculating the ratio of the ticket count of each process queue to its queue capacity as the process load, obtaining the ratio of the maximum process load to the minimum process load, and, if this ratio is greater than a preset first threshold, triggering an adjustment of the account number segment range mapped to the queue of each shared memory block, so that tickets are distributed and saved to idle shared memory blocks while tickets of the same account are always stored in the same shared memory block.
On the basis of the above method, the mapping may be a linear mapping or a nonlinear mapping.
In the above method, adjusting the account number segment ranges mapped to the queues of the shared memory blocks specifically comprises: calculating the process loads [g_1, g_2, ..., g_n] of the shared memory blocks and the average load

ḡ = (g_1 + g_2 + ... + g_n) / n

where n is the number of shared memory blocks; and updating the strategy coefficient f_i of each shared memory block and number segment by

f_i = f_i + ((ḡ - g_i) / ḡ) · w_i,  i = 1..n-1

so that in the number segment [Min, Min+f_1·Δ, Min+f_2·Δ, ..., Min+f_{n-1}·Δ, Max] the segment range corresponding to a shared memory block with a relatively large process load becomes smaller, while the segment range corresponding to a shared memory block with a relatively small process load becomes larger; here Δ is the difference between Max and Min, and w_i is a weight coefficient associated with the number segment.
On the basis of the above method, after the mapping adjustment is finished, the method may further judge whether a shared memory block holds tickets that no longer belong to its current number segment, and, if so, read those tickets and save them to the shared memory block to which their number segments are currently mapped.
On the basis of the above method, the ratio may be obtained periodically. Further, after obtaining the minimum process load, if it is judged to be greater than a preset second threshold, the system starts more service components and triggers an adjustment of the mapping between the shared memory blocks and the account number segments; and/or after obtaining the maximum process load, if it is less than a preset third threshold, the system shuts down some components and triggers an adjustment of the mapping between the shared memory blocks and the account number segments.
As can be seen from the above technical solution, the multi-process message processing method of the present invention uses shared memory to exchange data; because shared memory is the fastest inter-process communication (IPC) mechanism, the present invention guarantees a high message processing speed. Furthermore, because the present invention uses shared memory to store pending messages and preserves the current state of a message after a process has read it, no data in the memory block is lost if a process fails. In addition, on the basis of shared memory, the present invention adopts a load balancing mechanism that dynamically saves pending messages to idle memory blocks according to the current load of the shared memory blocks, balancing the load of the memory blocks, avoiding the situation in which some processes are busy while others wait, and thereby improving the overall message processing speed.
The present invention also provides a multi-process call ticket processing method used for parallel ticket processing in a charging system. The method uses shared memory to exchange data; because shared memory is the fastest inter-process communication (IPC) mechanism, the ticket processing of the present invention is highly efficient. Furthermore, because the present invention stores tickets in shared memory and preserves the current state of each ticket read by a process, no data in the memory block is lost if a process fails.
Further, the present invention adopts flexible and effective load balancing strategies: the definitions of process load and load balance degree, together with a load strategy defined by a group of coefficients, reflect the load situation intuitively, are fast and simple to compute, have very little impact on the running performance of the charging service, and can satisfy the requirements of real-time services.
Moreover, in the present invention the group of coefficients of the load strategy determines a one-to-one mapping of tickets to the allocated shared memory blocks, so that tickets of the same calling account can only fall into the same memory block and at any given moment are processed only by the process corresponding to that memory block. The present invention therefore guarantees the time ordering of ticket processing while providing load balancing.
Description of drawings
Fig. 1 is a flow chart of the multi-process message processing method of the present invention;
Fig. 2 is a flow chart of an embodiment of the load balancing strategy in the multi-process message processing method of the present invention;
Fig. 3 is a schematic diagram of conventional call ticket processing.
Embodiments
The invention provides a multi-process message processing method whose core is: 1) allocating a shared memory block for each process; 2) each process reading messages from its allocated shared memory block and processing them; 3) based on the current occupancy of the memory blocks, storing pending messages into the shared memory block determined by a preset load balancing rule.
A preferred embodiment of this method is described in detail with reference to Fig. 1.
Step 11: allocate a shared memory block for each process and label it with a unique name.
The system assigns each process a unique identifier (for example 1, 2, 3...n) and uses this identifier to set the queue name, such as /tmp/BILL_0, /tmp/BILL_1, ..., /tmp/BILL_xxx. The ftok function (which converts a file name into a key value) is used to obtain a unique key; with this key and the parameters supplied by the system (such as memory size and read/write permissions), the shared memory is created with shmget (the function that creates shared memory under the IPC mechanism). Each component obtains the unique key of its queue from the queue name, and the shmat function (which attaches shared memory under the IPC mechanism) maps this memory into the address space of the process, so that the process can access the queue as if it were accessing ordinary memory.
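As a concrete illustration, the following minimal C sketch follows the ftok/shmget/shmat procedure just described; the block size, the project id passed to ftok, and the assumption that the /tmp/BILL_<id> files already exist are illustrative choices, not requirements of the invention.

/* Minimal sketch of step 11, assuming each process id maps to a name such as
 * /tmp/BILL_<id>; sizes, permissions and error handling are illustrative. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define QUEUE_SHM_SIZE (4 * 1024 * 1024)   /* assumed block size */

void *attach_queue_shm(int process_id)
{
    char name[64];
    snprintf(name, sizeof(name), "/tmp/BILL_%d", process_id);

    /* ftok turns the (already existing) file name into a key for this queue */
    key_t key = ftok(name, 'B');
    if (key == (key_t)-1) { perror("ftok"); return NULL; }

    /* shmget creates (or opens) the shared memory block with the given size
     * and read/write permissions */
    int shmid = shmget(key, QUEUE_SHM_SIZE, IPC_CREAT | 0660);
    if (shmid == -1) { perror("shmget"); return NULL; }

    /* shmat attaches the block to this process's address space, so the queue
     * can then be used like ordinary memory */
    void *addr = shmat(shmid, NULL, 0);
    if (addr == (void *)-1) { perror("shmat"); return NULL; }
    return addr;
}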
Step 12: divide the shared memory of each process into a queue, and have the process read and handle the messages in the queue.
The shared memory of each process is divided into a queue, and the attributes of the queue are established in the shared memory. A queue header is marked off in the memory space to describe the queue attributes; its content may include the static capacity of the queue, the current message count in the queue, the number of remaining queue entries, the position of the first message, an access control lock, and so on. Beyond the queue header, the remaining memory space serves as the data area used to store the data. The access control lock is used to restrict access by other processes to the queue while a process is accessing a given message in the queue. The access control lock is implemented with a semaphore, which is a well-known technique.
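The queue layout described above might, for example, be declared as follows; the field and type names are illustrative rather than taken from the patent, and a fixed entry size is assumed for simplicity.

/* Sketch of a possible queue layout inside the shared memory block: the header
 * sits at the start of the block and the data area fills the rest. */
#define MAX_MSG_SIZE 1024              /* assumed fixed entry size */

struct queue_header {
    unsigned int capacity;             /* static capacity of the queue (entries)   */
    unsigned int msg_count;            /* current number of messages in the queue  */
    unsigned int free_entries;         /* remaining queue entries                  */
    unsigned int head;                 /* index of the first pending message       */
    int          sem_id;               /* semaphore id used as the access control lock */
};

struct queue_entry {
    int  locked;                       /* set while a process is handling this entry */
    char data[MAX_MSG_SIZE];           /* the message (or ticket) itself             */
};

/* The data area follows the header inside the same shared memory block */
static inline struct queue_entry *queue_entries(void *shm_base)
{
    return (struct queue_entry *)((char *)shm_base + sizeof(struct queue_header));
}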
When a pending message is inserted into the queue, the message is saved in the data area and the current message count of the queue is incremented by one. When a process finishes handling a message in the queue, the current message count is decremented by one. While a process is handling a message, the message is locked, that is, its current state is preserved and the current message count of the queue is not changed.
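A sketch of the insert, lock, and finish operations just described, reusing the queue_header and queue_entry layout from the previous sketch; the lock is driven with the standard SysV semop call and error handling is kept to a minimum.

#include <string.h>
#include <sys/sem.h>

static void sem_change(int sem_id, int delta)
{
    struct sembuf op = { 0, (short)delta, 0 };
    semop(sem_id, &op, 1);                      /* -1 = lock, +1 = unlock */
}

/* Producer side: save the message in the data area and add one to the count */
int queue_insert(struct queue_header *q, struct queue_entry *entries,
                 const char *msg, size_t len)
{
    if (q->free_entries == 0 || len > MAX_MSG_SIZE) return -1;
    sem_change(q->sem_id, -1);
    unsigned int slot = (q->head + q->msg_count) % q->capacity;
    memcpy(entries[slot].data, msg, len);
    entries[slot].locked = 0;
    q->msg_count++;
    q->free_entries--;
    sem_change(q->sem_id, +1);
    return 0;
}

/* Consumer side: lock the message while it is being handled (the count does
 * not change), and only update the count once handling has finished */
void queue_begin_handling(struct queue_entry *e) { e->locked = 1; }

void queue_finish_handling(struct queue_header *q, struct queue_entry *e)
{
    sem_change(q->sem_id, -1);
    e->locked = 0;
    q->head = (q->head + 1) % q->capacity;
    q->msg_count--;
    q->free_entries++;
    sem_change(q->sem_id, +1);
}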
Step 13: save pending messages into idle queues according to the preset load balancing strategy.
The idea of the load balancing strategy provided by this embodiment is to compare the capacity, message count, and remaining entries of each queue, compute the process loads, derive the load balance degree from the loads, and then insert pending messages into the most idle queue. The load balancing strategy of this embodiment is described with reference to Fig. 2.
Calculation of the process load of each queue: the process load is the ratio of the message count of the process queue to the queue capacity, that is, process load = message count of the process queue / queue capacity. Its value range is [0, 1]: 0 means the process is completely idle and there are no pending messages in the queue; 1 means the processing capability of the process is saturated and the queue is fully occupied by pending messages.
Calculation of the load balance degree: the load balance degree is the ratio of the maximum process load to the minimum process load, that is, load balance degree = maximum process load / minimum process load. Its value range is [1, +∞); a load balance degree of 1 means the queue loads have reached the ideal balanced state, and the larger the load balance degree, the worse the balance between the queues, that is, some processes are busy while others are idle.
The load balancing policy of this embodiment is defined as follows. Step 21: periodically obtain and calculate the process loads. Step 26: calculate the load balance degree; when the load balance degree is greater than the preset first threshold, proceed to step 27: send the pending message to the queue with the lowest process load. Otherwise, pending messages may be distributed and saved to the determined queues according to the original rules of the business, for example by random assignment, or by sending pending messages of a particular service type to the queue that has an established mapping to that service type so that they are handled by a specific process. The first threshold can be set manually according to system performance; it is usually set to 2.
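Steps 21, 26 and 27 can be sketched as follows, reusing the queue_header layout above; the function names, the threshold value of 2, and the fallback return value of -1 (meaning "use the original distribution rule") are illustrative assumptions.

#define FIRST_THRESHOLD 2.0

double process_load(const struct queue_header *q)
{
    return (double)q->msg_count / (double)q->capacity;   /* in [0, 1] */
}

/* Returns the index of the queue that should receive the pending message,
 * or -1 to fall back to the original (e.g. type-based) distribution rule. */
int choose_queue(struct queue_header *queues[], int n)
{
    double max_load = 0.0, min_load = 1.0;
    int min_idx = 0;
    for (int i = 0; i < n; i++) {
        double g = process_load(queues[i]);
        if (g > max_load) max_load = g;
        if (g < min_load) { min_load = g; min_idx = i; }
    }
    /* load balance degree = max load / min load, in [1, +inf) */
    double degree = (min_load > 0.0) ? max_load / min_load
                                     : (max_load > 0.0 ? 1e9 : 1.0);
    return (degree > FIRST_THRESHOLD) ? min_idx : -1;
}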
The load balance degree is used to judge whether the load among the queues is balanced; it reflects the gap between the maximum and minimum process loads of the queues, but it cannot reflect whether all queues are close to full or close to idle. Therefore, on the basis of the above method, step 22 may also be performed between steps 21 and 26: judging whether the minimum process load of the queues is greater than a preset second threshold; and step 24: judging whether the maximum process load of the queues is less than a preset third threshold. The second threshold is used to judge whether the queues are approaching a fully occupied state: when the minimum process load exceeds the second threshold, step 23 is performed, in which the system starts more service components and, after the components have started, triggers the load balancing of step 27. Correspondingly, the third threshold in step 24 is used to judge whether the queues are approaching an idle state: when the maximum process load is less than the third threshold, the processing capability of the components is in surplus, and step 25 is performed, in which the system may shut down some service components and, after the components have been shut down, triggers the load balancing of step 27.
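Steps 22 to 25 can be sketched in the same style, reusing process_load from the previous sketch; the concrete threshold values and the start/stop hooks are assumptions added for illustration only.

#define SECOND_THRESHOLD 0.8   /* assumed: queues nearly full */
#define THIRD_THRESHOLD  0.1   /* assumed: queues nearly idle */

void start_more_components(void);   /* provided elsewhere by the system */
void stop_some_components(void);

void check_capacity(struct queue_header *queues[], int n)
{
    double max_load = 0.0, min_load = 1.0;
    for (int i = 0; i < n; i++) {
        double g = process_load(queues[i]);
        if (g > max_load) max_load = g;
        if (g < min_load) min_load = g;
    }
    if (min_load > SECOND_THRESHOLD)       /* even the emptiest queue is nearly full */
        start_more_components();
    else if (max_load < THIRD_THRESHOLD)   /* even the busiest queue is nearly idle  */
        stop_some_components();
    /* either case is followed by the load balancing of step 27 */
}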
The above description illustrates only one load strategy with reference to Fig. 2. Some of the steps in this strategy, such as the judgments of step 22 and step 24, are not constrained to a particular execution order, so those skilled in the art can arrange an execution order that meets business demands as required.
In addition, the present invention may adopt other ways to judge whether the queues are approaching an occupied or idle state. For example, the average load of the queues may be calculated and compared with preset thresholds to reflect the overall load of the queues: when the average load is greater than an upper limit, the queues as a whole are close to full and all the corresponding processes are busy, so more service components should be started; if the average load is less than a lower limit, the queues as a whole are close to idle and the corresponding processes are idle, so some service components should be shut down. In summary, those skilled in the art can configure different mechanisms to judge the current state of the queues, and the present invention places no specific restriction on this.
The above preferred embodiment provides one load balancing mechanism; those skilled in the art can make different choices according to the real needs of the business. For example, the load balancing rule may send pending messages to the queue with the largest remaining capacity. Compared with the balancing mechanism provided in the embodiment above, this approach still achieves load balancing, but because it does not take into account that the queues may differ in size, it is not the preferred balancing policy of the present invention; the present invention, however, does not restrict the choice of a particular load balancing strategy.
The present invention also provides a multi-process call ticket processing method used for parallel ticket processing in a charging system.
Normally, the charging of each ticket passes through processing stages such as pre-processing, rating, and storage, and each processing stage is usually completed by several processes in parallel. An example of ticket processing is shown in Fig. 3 and comprises: ticket collection, in which a collection process is responsible for obtaining the tickets to be processed from the switch and then distributing them to two different pre-processing processes, ticket collection being very fast; pre-processing, in which the two pre-processing processes perform their own computations (such as ticket validity checks and duplicate detection) and then independently send the results on to the subsequent rating system, pre-processing being relatively slow compared with ticket collection; and rating, in which four rating processes perform the rating independently and deliver the rated results to a storage process that loads them into the ticket database, rating being slower still than pre-processing. In a real system, besides ticket collection, pre-processing, rating, and storage, other processing stages may be included according to business demands; each processing stage may comprise more processes, and the processing speed of each process may vary with the content being processed.
From the above ticket processing flow it can be seen that, as tickets move from one process to the next, the following should be guaranteed: the subsequent-stage processes receive a balanced ticket input, avoiding the situation in which some processes are busy while others enter a waiting state; if a process terminates abnormally, the tickets being processed must not be lost after the process restarts; and tickets of the same calling number must be processed in chronological order, that is, the charging of two tickets of the same calling number must keep their original time order even after load balancing. For example, suppose ticket 1 and ticket 2 arrive in time order, load balancing assigns ticket 1 to rating process 1 and ticket 2 to rating process 2, and rating process 2 is faster than rating process 1; ticket 2 might then be rated first, yielding an incorrect result.
In view of these characteristics of ticket processing, the core of the multi-process call ticket processing method of the present invention is: 1) establishing a shared memory block for each process; 2) each process reading the tickets stored in its shared memory block and processing them; 3) based on the current occupancy of the memory blocks, distributing tickets into the shared memory blocks according to a preset load balancing rule, with tickets of the same account stored in the same shared memory block.
Based on this core, a preferred embodiment of the present invention is described in detail below in several parts.
1) Allocate a shared memory block for each process and label it with a unique name.
The system assigns each process a unique identifier (for example 1, 2, 3...n) and uses this identifier to set the queue name, such as /tmp/BILL_0, /tmp/BILL_1, ..., /tmp/BILL_xxx. The ftok function (which converts a file name into a key value) is used to obtain a unique key; with this key and the parameters supplied by the system (such as memory size and read/write permissions), the shared memory is created with shmget (the function that creates shared memory under the IPC mechanism). Each component obtains the unique key of its queue from the queue name, and the shmat function (which attaches shared memory under the IPC mechanism) maps this memory into the address space of the process, so that the process can access the queue as if it were accessing ordinary memory.
2) Divide the shared memory of each process into a queue, and have the process read and handle the tickets in the queue.
The shared memory of each process is divided into a queue, and the attributes of the queue are established in the shared memory. A queue header is marked off in the memory space to describe the queue attributes; its content may include the static capacity of the queue, the current ticket count in the queue, the number of remaining queue entries, the position of the first ticket, an access control lock, and so on. Beyond the queue header, the remaining memory space serves as the data area used to store the data. The access control lock is used to restrict access by other processes to the queue while a process is accessing a given ticket in the queue. The access control lock is implemented with a semaphore, which is a well-known technique.
When a ticket is inserted into the queue, the ticket is saved in the data area and the current ticket count of the queue is incremented by one. When a process finishes handling a ticket in the queue, the current ticket count is decremented by one. While a process is handling a ticket, the ticket is locked, that is, its current state is preserved and the current ticket count of the queue is not changed.
3) Load balancing method: establish the mapping between queues and number segments, and distribute tickets to idle queues according to the load balancing strategy.
A mapping between the queues and account number segments is established; according to this correspondence, a ticket belonging to a given number segment is saved into the queue mapped to that segment. The present invention then achieves load balancing by adjusting the range of the account number segment mapped to each queue. Specifically, a load strategy is established; in this embodiment the load strategy consists of a group of coefficients and a number segment. For the loads of n processes (n >= 2), the strategy coefficients can be written as [0, f_1, f_2, ..., f_{n-1}, 1] with 0 < f_1 < f_2 < ... < f_{n-1} < 1, and the number segment is written as [Min, Max], for example [0, 99999999].
From the strategy coefficients and the number segment, the interval in which a given number falls can be determined, and each interval corresponds to a process. A typical segmentation is [Min, Min+f_1·Δ, Min+f_2·Δ, ..., Min+f_{n-1}·Δ, Max], where Δ = Max - Min. If the strategy coefficients are [0, 0.25, 0.5, 0.75, 1], this yields four number sub-segments of identical size, each corresponding to one process. At the same time, because each process is allocated a shared memory queue, the queue to which a ticket should be sent can be determined from the number segment to which the calling number of the ticket belongs, and the ticket is then processed by the process corresponding to that queue.
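The mapping from a calling number to a queue can be sketched as follows; the struct and function names are illustrative, and the size of the coefficient array is an arbitrary bound.

struct load_strategy {
    int       n;            /* number of processes / queues        */
    double    f[16];        /* f[0]=0, f[n]=1, strictly increasing */
    long long min, max;     /* the number segment [Min, Max]       */
};

/* Returns the index i of the queue whose sub-segment
 * [Min + f_i*Delta, Min + f_{i+1}*Delta) contains the calling number. */
int queue_for_number(const struct load_strategy *s, long long calling_number)
{
    double delta = (double)(s->max - s->min);
    for (int i = 0; i < s->n; i++) {
        long long upper = s->min + (long long)(s->f[i + 1] * delta);
        if (calling_number < upper || i == s->n - 1)
            return i;
    }
    return s->n - 1;   /* not reached */
}

With coefficients [0, 0.25, 0.5, 0.75, 1] and segment [0, 99999999], for example, a calling number of 30000000 falls into the second sub-segment and is therefore handled by the second process.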
The capacity, ticket count, and remaining entries of each queue are compared, the process loads are computed, and the load balance degree is derived from the loads. Specifically: the process load of a queue is the ratio of the ticket count of the process queue to the queue capacity, that is, process load = ticket count of the process queue / queue capacity; its value range is [0, 1], where 0 means the process is completely idle with no tickets pending in the queue and 1 means the processing capability of the process is saturated and the queue is fully occupied by pending tickets. The load balance degree is the ratio of the maximum process load to the minimum process load, that is, load balance degree = maximum process load / minimum process load; its value range is [1, +∞), where a value of 1 means the queue loads have reached the ideal balanced state, and the larger the value, the worse the balance between the queues, that is, some processes are busy while others are idle.
Pending tickets are inserted into idle queues. The load balancing strategy of the relevant process group (such as the rating process group or the storage process group) is looked up, the number segment interval in which the calling number of the ticket falls is calculated, and the ticket is sent to the corresponding queue according to the number segment range. Specifically, during ticket processing, the candidate target queues for the ticket are obtained according to the type of the next processing step, such as rating or storage, and the ticket is then saved to a queue according to the load strategy.
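Putting the earlier sketches together, dispatching a ticket to the next processing stage might look like the following; the process_group structure and its contents are assumptions added for illustration.

#include <stddef.h>

struct process_group {
    struct load_strategy strategy;
    struct queue_header *queues[16];
    struct queue_entry  *entries[16];
};

int dispatch_ticket(struct process_group *group,
                    long long calling_number,
                    const char *ticket, size_t len)
{
    int idx = queue_for_number(&group->strategy, calling_number);
    /* tickets of the same calling account always land in the same queue,
     * which preserves their time ordering within one process */
    return queue_insert(group->queues[idx], group->entries[idx], ticket, len);
}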
In the present invention, adjustment of the load balancing strategy comes down to adjustment of the load strategy coefficients. A monitoring component periodically (the period is specified by a parameter) checks the load balance degree and the maximum process load; if the load balance degree exceeds the threshold set by the parameter (for example 2), an adjustment of the strategy coefficients is triggered. The adjustment procedure is as follows:
Lock the queues of the process group to restrict access by other processes; for n queues, calculate the loads [g_1, g_2, ..., g_n] of the queues and the average load ḡ:

ḡ = (g_1 + g_2 + ... + g_n) / n

Then adjust the strategy coefficients according to the following formula:

f_i = f_i + ((ḡ - g_i) / ḡ) · w_i,  i = 1..n-1
Here w_i is a weight coefficient associated with the number segment; it can simply be set to i/n, or chosen independently. By adjusting the strategy coefficients, the segment range corresponding to each queue is adjusted: according to the segmentation [Min, Min+f_1·Δ, Min+f_2·Δ, ..., Min+f_{n-1}·Δ, Max], the segment range corresponding to a relatively busy queue (one with a relatively large process load) becomes smaller, while the segment range corresponding to a relatively idle queue (one with a relatively small process load) becomes larger. This means that when tickets are distributed, more tickets are assigned to the idle queues, thereby achieving load balancing.
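A sketch of the coefficient update, using w_i = i/n as suggested above; the clamping that keeps the coefficients strictly increasing is an added safeguard, not part of the patent's formula.

void adjust_strategy(struct load_strategy *s, const double g[])
{
    int n = s->n;
    double g_avg = 0.0;
    for (int i = 0; i < n; i++) g_avg += g[i];
    g_avg /= n;
    if (g_avg <= 0.0) return;                 /* nothing to rebalance */

    for (int i = 1; i < n; i++) {
        double w = (double)i / n;             /* w_i = i/n, as suggested */
        s->f[i] = s->f[i] + ((g_avg - g[i - 1]) / g_avg) * w;
        /* keep 0 < f_1 < ... < f_{n-1} < 1 (assumed safeguard) */
        if (s->f[i] <= s->f[i - 1]) s->f[i] = s->f[i - 1] + 1e-6;
        if (s->f[i] >= 1.0)         s->f[i] = 1.0 - 1e-6;
    }
}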
After the strategy coefficients have been adjusted, the segmentation is still [Min, Min+f_1·Δ, Min+f_2·Δ, ..., Min+f_{n-1}·Δ, Max]; because the segment ranges corresponding to the queues have changed, the calling numbers of tickets already stored in a queue may now belong to the segment ranges of other queues. Therefore, once the adjustment of the load balancing strategy is finished, each queue is checked for tickets that no longer belong to its current segment; if there are any, they are read and saved into the queue to which their segments are currently mapped, so that the tickets of the same calling number are guaranteed to be assigned to the same process and the time ordering of ticket processing is preserved. Because the present invention uses shared memory, all of these ticket adjustments take place in memory and can be performed very quickly.
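The re-homing pass after an adjustment might be sketched as follows, reusing the earlier structures and helpers; extracting the calling number from a stored ticket (calling_number_of) is assumed to be provided elsewhere, and entries currently being handled (locked) are simply kept in their queue for brevity.

long long calling_number_of(const struct queue_entry *e);   /* assumed helper */

void rehome_tickets(struct process_group *group)
{
    for (int q = 0; q < group->strategy.n; q++) {
        struct queue_header *hdr = group->queues[q];
        struct queue_entry  *ent = group->entries[q];
        sem_change(hdr->sem_id, -1);
        unsigned int count = hdr->msg_count, kept = 0;
        for (unsigned int k = 0; k < count; k++) {
            unsigned int from = (hdr->head + k) % hdr->capacity;
            int target = queue_for_number(&group->strategy,
                                          calling_number_of(&ent[from]));
            int keep_here = (target == q) || ent[from].locked;
            if (!keep_here &&
                queue_insert(group->queues[target], group->entries[target],
                             ent[from].data, sizeof(ent[from].data)) != 0)
                keep_here = 1;                  /* target queue full: keep it */
            if (keep_here) {
                unsigned int to = (hdr->head + kept) % hdr->capacity;
                if (to != from) ent[to] = ent[from];   /* compact toward head */
                kept++;
            }
        }
        hdr->free_entries += count - kept;
        hdr->msg_count = kept;
        sem_change(hdr->sem_id, +1);
    }
}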
4) Load balancing method for abnormal states. The abnormal states include: the queues approaching a full state, the queues approaching an idle state, and a process failure. In the above embodiment the load balance degree is used to judge whether the load among the queues is balanced; it reflects the gap between the maximum and minimum process loads, but it cannot reflect whether all queues are close to full or close to idle. Therefore, on the basis of the above embodiment, the present invention also uses the process loads to judge abnormal states and handle them accordingly.
Handling of nearly full queues: when the minimum load of the queues in the component group exceeds a preset upper threshold, the processing capability of the components is insufficient; the system needs to start more service components and then trigger the load balancing process described above.
Handling of idle queues: when the maximum load of the queues in the component group is lower than a preset lower threshold, the processing capability of the components is in surplus; the system can be notified to shut down some service components and, after the components have been shut down, trigger the load balancing process described above.
Handling of process failures: when a process obtains a ticket, it locks the ticket state and preserves the current state of the ticket; if the component then fails, the ticket remains in shared memory, and after the component restarts it can find the unfinished tickets and continue processing them.
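Because the tickets and their lock state live in the shared memory block rather than in the failed process, recovery after a restart can be as simple as the following sketch; handle_ticket is an assumed processing callback.

void handle_ticket(struct queue_entry *e);      /* assumed processing routine */

void resume_after_restart(struct queue_header *hdr, struct queue_entry *ent)
{
    /* process the backlog in order, starting from the head of the queue */
    while (hdr->msg_count > 0) {
        struct queue_entry *e = &ent[hdr->head];
        queue_begin_handling(e);    /* (re)lock: entries left locked by the dead
                                       process are simply handled again */
        handle_ticket(e);
        queue_finish_handling(hdr, e);   /* advances head, decrements count */
    }
}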
Following the principles of abnormal-state handling described above, the implementation flow of the abnormal-state handling can be found in Fig. 2 and the related description and is not repeated here.
The above is a complete implementation example of the ticket processing method of the present invention. In this embodiment a linear mapping is established between the queues and the number segments; however, the present invention does not restrict the mapping, and a nonlinear function may equally be adopted. In the queues established in the above embodiment, the description of the queue attributes in the queue header may also include the queue capacity, remaining queue space, queue utilization, and so on, or other information associated with the process, such as processing state, processing time, and processing speed. Those skilled in the art can configure these according to the actual demands of the business.
The multi-process message processing method and the multi-process call ticket processing method provided by the present invention have been described in detail above. Specific examples have been used herein to set forth the principles and implementations of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core ideas. At the same time, those of ordinary skill in the art may, following the ideas of the invention, make changes to the specific implementations and the scope of application. In summary, this description should not be construed as limiting the present invention.

Claims (10)

1. A multi-process message processing method, characterized in that:
1) a shared memory block is allocated for each process;
2) the shared memory block of each process is divided into a queue, and the attributes of the queue are established in the shared memory block;
3) based on the current occupancy of the shared memory blocks and according to a preset load balancing rule, the ratio of the message count of each queue to its queue capacity is calculated as the process load, the ratio of the maximum process load to the minimum process load is obtained, and, if this ratio is greater than a preset first threshold, the pending message is sent to the shared memory block whose process load is currently the lowest.
2. The multi-process message processing method as claimed in claim 1, characterized in that:
step 3) further comprises: periodically obtaining the process load of each shared memory block, judging whether the minimum process load is greater than a preset second threshold, and, if so, starting more components and triggering the storage of pending messages according to the load balancing rule.
3. The multi-process message processing method as claimed in claim 1, characterized in that:
when a process is handling a message in its shared memory block, an access control lock is used to restrict access by other processes to that shared memory block.
4. A multi-process call ticket processing method, used for parallel ticket processing in a charging system, characterized in that:
1) a shared memory block is established for each process;
2) the shared memory block of each process is divided into a queue, and the attributes of the queue are established in the shared memory block;
3) based on the current occupancy of the shared memory blocks, according to a pre-established mapping between the queues and account number segments and a preset load balancing rule, the ratio of the ticket count of each process queue to its queue capacity is calculated as the process load, the ratio of the maximum process load to the minimum process load is obtained, and, if this ratio is greater than a preset first threshold, an adjustment of the account number segment range mapped to the queue of each shared memory block is triggered, tickets are distributed and saved to idle shared memory blocks, and tickets of the same account are stored in the same shared memory block.
5. The multi-process call ticket processing method as claimed in claim 4, characterized in that the mapping is specifically a linear mapping or a nonlinear mapping.
6. The multi-process call ticket processing method as claimed in claim 4, characterized in that adjusting the account number segment range mapped to the queue of each shared memory block specifically comprises:
calculating the process loads [g_1, g_2, ..., g_n] of the shared memory blocks and the average load

ḡ = (g_1 + g_2 + ... + g_n) / n

where n is the number of shared memory blocks; and updating the strategy coefficient f_i of each shared memory block and number segment by

f_i = f_i + ((ḡ - g_i) / ḡ) · w_i,  i = 1..n-1

so that in the number segment [Min, Min+f_1·Δ, Min+f_2·Δ, ..., Min+f_{n-1}·Δ, Max] the segment range corresponding to a shared memory block with a relatively large process load becomes smaller, while the segment range mapped to the queue of a shared memory block with a relatively small process load becomes larger;
wherein Δ is the difference between Max and Min, and w_i is a weight coefficient associated with the number segment.
7. The multi-process call ticket processing method as claimed in any one of claims 4 to 6, characterized in that the method further comprises:
after the mapping adjustment is finished, judging whether a shared memory block holds tickets that do not belong to its current number segment, and, if so, reading those tickets and saving them to the shared memory block to which their number segments are currently mapped.
8. The multi-process call ticket processing method as claimed in any one of claims 4 to 6, characterized in that:
the ratio of the maximum process load to the minimum process load is obtained periodically.
9. The multi-process call ticket processing method as claimed in any one of claims 4 to 6, characterized in that:
the minimum process load is obtained, and if it is greater than a preset second threshold, the system starts more service components and triggers an adjustment of the mapping between the queues of the shared memory blocks and the account number segments.
10. The multi-process call ticket processing method as claimed in any one of claims 4 to 6, characterized in that:
the maximum process load is obtained, and if it is less than a preset third threshold, the system shuts down some components and triggers an adjustment of the mapping between the queues of the shared memory blocks and the account number segments.
CN200510125736A 2005-12-01 2005-12-01 Method for processing multi-process messages and method for processing multi-process call tickets Active CN100596159C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200510125736A CN100596159C (en) 2005-12-01 2005-12-01 Method for processing multi-process messages and method for processing multi-process call tickets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200510125736A CN100596159C (en) 2005-12-01 2005-12-01 Method for processing multi-process messages and method for processing multi-process call tickets

Publications (2)

Publication Number Publication Date
CN1787588A CN1787588A (en) 2006-06-14
CN100596159C true CN100596159C (en) 2010-03-24

Family

ID=36784875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200510125736A Active CN100596159C (en) 2005-12-01 2005-12-01 Method for processing multi-process messages and method for processing multi-process call tickets

Country Status (1)

Country Link
CN (1) CN100596159C (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101145150B (en) * 2006-09-15 2011-11-02 中国银联股份有限公司 Batch file processing method and system
US8065503B2 (en) * 2006-12-15 2011-11-22 International Business Machines Corporation Iteratively processing data segments by concurrently transmitting to, processing by, and receiving from partnered process
CN101005549B (en) * 2007-01-30 2012-04-25 华为技术有限公司 Method, device and system for realizing voice list automatic distribution
CN101409877B (en) * 2008-11-28 2010-07-14 中兴通讯股份有限公司 Method for generating call ticket
CN101763289B (en) * 2009-09-25 2013-11-20 中国人民解放军国防科学技术大学 Message passing method based on shared memory
CN101697613A (en) * 2009-10-30 2010-04-21 中兴通讯股份有限公司 Method and device for processing abnormal call ticket
DK2507951T5 (en) * 2009-12-04 2013-12-02 Napatech As DEVICE AND PROCEDURE FOR RECEIVING AND STORING DATA PACKAGES MANAGED BY A CENTRAL CONTROLLER
CN102541663A (en) * 2011-12-28 2012-07-04 创新科软件技术(深圳)有限公司 Method for ensuring multiple processes to use shared memories to carry out communication
CN103034733A (en) * 2012-12-25 2013-04-10 北京讯鸟软件有限公司 Data monitoring statistical method for call center
CN103533081B (en) * 2013-10-25 2017-12-29 从兴技术有限公司 A kind of charge system and its implementation based on cloud computing
CN105828309B (en) * 2015-01-05 2019-07-02 中国移动通信集团广西有限公司 A kind of call bill processing method, equipment and system
CN105827670A (en) * 2015-01-05 2016-08-03 中国移动通信集团四川有限公司 Data processing method and data processing device
CN105450784B (en) * 2016-01-20 2019-06-04 北京京东尚科信息技术有限公司 The device and method of message distribution consumption node into MQ
CN105978930A (en) * 2016-04-15 2016-09-28 深圳市永兴元科技有限公司 Network data exchange method and device
CN106021000B (en) 2016-06-02 2018-06-01 北京百度网讯科技有限公司 For the shared-memory management method and apparatus of robot operating system
CN107704325B (en) * 2016-08-08 2021-08-27 北京百度网讯科技有限公司 Method and device for transmitting messages between processes
CN112035231A (en) * 2020-09-01 2020-12-04 中国银行股份有限公司 Data processing system, method and server group
CN112631768A (en) * 2020-11-23 2021-04-09 北京思特奇信息技术股份有限公司 Resource sharing method and system based on asynchronous mechanism

Also Published As

Publication number Publication date
CN1787588A (en) 2006-06-14

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP03 Change of name, title or address

Address after: 100012, building 2, North American International Business Center, 108 Beiyuan Road, Beijing, Chaoyang District

Patentee after: Datang Software Technologies Co., Ltd.

Address before: 100083 No. 40, Haidian District, Beijing, Xueyuan Road

Patentee before: Datang Software Technologies Co., Ltd.

DD01 Delivery of document by public notice

Addressee: Gao Tingting

Document name: payment instructions

DD01 Delivery of document by public notice