CN100459520C - System and method for sharing internal storage cache between multiple stream servers - Google Patents

System and method for sharing internal storage cache between multiple stream servers

Info

Publication number
CN100459520C
CN100459520C CNB2005101212639A CN200510121263A
Authority
CN
China
Prior art keywords
resource
servers
server
streaming
internal storage
Prior art date
Legal status
Expired - Fee Related
Application number
CNB2005101212639A
Other languages
Chinese (zh)
Other versions
CN1859181A (en)
Inventor
蔡鹏 (Cai Peng)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB2005101212639A
Publication of CN1859181A
Application granted
Publication of CN100459520C

Abstract

The present invention discloses a system and method for sharing memory cache among multiple streaming servers. The system includes streaming servers and a scheduling server; the scheduling server performs optimized resource scheduling on the streaming servers to realize cache sharing among them. The method includes the scheduling server continually performing optimized resource scheduling on a plurality of streaming servers. The system and method establish an association among multiple streaming servers, raise cache efficiency, reduce the disk access frequency, and lower the disk wear rate and maintenance cost.

Description

System and method for sharing memory cache among multiple streaming servers
Technical field
The present invention relates to the field of network communication technology, and in particular to a system and method for sharing memory cache (buffer) among multiple streaming servers.
Background art
A streaming server is the head-end system equipment that lets streaming-service client users receive a program stream and watch it at the same time; it is the key equipment for streaming services such as video on demand (VOD), Internet TV, digital television and Internet streaming. As broadband network infrastructure improves, applications based on streaming servers will grow rapidly. Once streaming service equipment has been deployed on a large scale, service providers become increasingly concerned about the later maintenance cost. In a streaming server, the peripheral storage device generally accounts for 20% to 40% of the total hardware cost of the system, and hard disks are currently the common storage medium. Consider time-shifted TV: while watching a program the user can pause, rewind or fast-forward, so the peripheral storage device may have to perform intensive read and write operations for long periods, and the wear-out period of the hard disk is therefore much shorter than in ordinary applications. According to statistics from similar industries, a DVR performing 24-hour surveillance recording wears out its disk in half a year to one year. For an operator this means that every half year to one year it must pay a considerable maintenance cost to replace these consumable parts.
As shown in Fig. 1, the common implementation of streaming services today is that every streaming server operates independently: the streaming service software installed on a hardware device is responsible for controlling all hardware resources of that machine. To reduce the access frequency of the hard disk, some form of memory technique may be used to cache program content.
The prior art mainly has the following two problems:
First, the limited resources of a single hardware device cannot form a very effective cache. For a digital program source at 2 Mb/s, one hour of program content is nearly 1 GB (2 Mb/s × 3600 s = 7200 Mb, about 900 MB). The memory capacity of a single host cannot provide an effective service-level caching capability for streaming. For example, Internet TV needs to support time-shifted TV, and statistically the programming of the most recent half hour is the most likely to be rewound to by users; from the viewpoint of service optimization, that half hour of programming is best kept in memory. Likewise, the content of popular programs with many concurrent viewers is best kept in memory. But because the resources of a single hardware device are limited, resource contention is aggravated, and the ability to adjust resources within a single device is also weak.
Second, the "every server for itself" management of hardware wastes resources. The reason is plain: when each streaming server manages all kinds of resources on its own, duplicated effort is inevitable, and the same program may be cached in memory on several streaming servers at once, so that precisely when resources are tight, valuable resources are wasted.
Summary of the invention
The object of the present invention is to provide a system and method for sharing memory cache among multiple streaming servers, so as to increase the total amount of memory resources, improve the efficiency of the memory cache, reduce the disk access frequency, lower the disk wear rate and maintenance cost, and at the same time reduce the resource waste caused by the isolated management of the prior art.
The object of the invention is achieved through the following technical solutions:
The invention provides a system for sharing memory cache among multiple streaming servers, comprising a plurality of streaming servers that provide streaming services for user terminals, at least one of which comprises a memory cache. The system further comprises a scheduling server; the scheduling server performs unified management of the resource information reported by the plurality of streaming servers and performs optimized resource scheduling on the streaming servers according to the resource information, so as to realize memory cache sharing among the streaming servers.
The resource information includes streaming server memory capacity information.
The scheduling server is provided with a resource-usage table; the resource information contained in the resource-usage table further includes streaming server network throughput information and external storage capacity information.
The scheduling server comprises:
a registration processing module, for receiving the resource information reported by the streaming servers and putting it into the resource-usage table;
a service statistics module, for forming service statistics according to the resource-usage table;
an optimized scheduling module, for performing optimized resource scheduling according to the service statistics;
a resource-change information collection module, for continually collecting resource-change information.
The optimized resource scheduling is locking a particular memory block, changing the priority of a particular memory block, or replicating a particular streaming service into a particular memory block.
The scheduling server assigns a streaming server to provide the streaming service for a user terminal.
The scheduling server assigns a plurality of streaming servers to provide the streaming service for a user terminal in different time periods.
The present invention also provides a method for sharing memory cache among multiple streaming servers, comprising:
A. a scheduling server performs unified management of the resources of a plurality of streaming servers, the resources including streaming server memory capacity;
B. the scheduling server performs optimized resource scheduling on the plurality of streaming servers according to the resources, so that the streaming servers share memory.
Step A comprises:
A1. the streaming servers report their resources to the scheduling server;
A2. the scheduling server puts the resources of the streaming servers into a resource-usage table;
A3. the scheduling server forms service statistics according to the resources in the resource-usage table.
Step A further comprises that the scheduling server continually collects the resource-change information of the streaming servers and updates the resource-usage table.
The optimized resource scheduling comprises locking a particular memory block, changing the priority of a particular memory block, or replicating a particular streaming service into a particular memory block.
The method further comprises that the scheduling server assigns a streaming server to provide the streaming service for a user terminal.
The method further comprises that the scheduling server assigns a plurality of streaming servers to provide the streaming service for a user terminal in different time periods.
The resources further include streaming server network throughput and external storage capacity.
It can be seen from the technical solutions provided above that, through the scheduling of the streaming servers by the scheduling server, the present invention establishes an association among multiple streaming servers; through resource sharing it increases the total amount of memory resources and improves the efficiency of the memory cache; through effective memory caching it reduces the disk access frequency, the disk wear rate and the maintenance cost; and at the same time it reduces the resource waste caused by the isolated management of the prior art.
Description of drawings
Fig. 1 is a schematic diagram of independent streaming service equipment in the prior art;
Fig. 2 is a schematic diagram of one embodiment of the present invention;
Fig. 3 is a schematic diagram of another embodiment of the present invention;
Fig. 4 is a schematic diagram of another embodiment of the present invention;
Fig. 5 is a data flow diagram of the interior of the scheduling server of the present invention.
Detailed description of the embodiments
The core idea of the present invention is to provide a system and method for sharing memory cache among multiple streaming servers: by having a scheduling server schedule the streaming servers, an association is established among them; through resource sharing, the total amount of memory resources is increased and the efficiency of the memory cache is improved; the disk access frequency is reduced, lowering the disk wear rate and maintenance cost; and the resource waste caused by the isolated management of the prior art is reduced.
The present invention provides a system for sharing memory cache among multiple streaming servers; the system comprises streaming servers and a scheduling server.
The streaming servers provide streaming services for user terminals; their three main resource classes are network throughput, memory capacity and external storage capacity.
The scheduling server mainly performs the following functions:
unified management of all resources of each individual streaming server: every individual streaming server reports the three resource classes above to the scheduling server, and during operation the scheduling server records in detail the usage information of each resource, such as the program ID occupying the resource, the associated streaming service session IDs, and so on (see the sketch after this list);
making optimized resource-scheduling decisions according to service conditions and applying appropriate scheduling operations to the use of the resources. The optimized resource scheduling includes, but is not limited to, locking a particular memory block, changing the priority of a particular memory block and/or replicating a particular streaming service into a particular memory block.
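As an illustration only (this sketch is not part of the patent), the resource-usage table and the per-resource usage records described above could be organised as in the following Python sketch; all class and field names (ServerResources, MemoryBlockUsage, ResourceTable) are assumptions made for this example.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ServerResources:
        """The three main resource classes a streaming server reports."""
        network_throughput_mbps: float
        memory_capacity_mb: int
        external_storage_gb: int

    @dataclass
    class MemoryBlockUsage:
        """Detailed usage record for one cached memory block."""
        program_id: str            # program occupying this block
        session_ids: List[str]     # associated streaming service session IDs
        priority: int = 0          # higher-priority blocks are released last
        locked: bool = False       # locked blocks are never released

    class ResourceTable:
        """Resource-usage table kept by the scheduling server."""

        def __init__(self) -> None:
            self.servers: Dict[str, ServerResources] = {}
            self.memory_usage: Dict[str, List[MemoryBlockUsage]] = {}

        def register(self, server_id: str, resources: ServerResources) -> None:
            """Registration processing: record a server's reported resources."""
            self.servers[server_id] = resources
            self.memory_usage.setdefault(server_id, [])

        def record_usage(self, server_id: str, usage: MemoryBlockUsage) -> None:
            """Record which program and sessions occupy a memory block."""
            self.memory_usage[server_id].append(usage)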
The optimized resource scheduling is explained below through specific embodiments.
As shown in Fig. 2, shortly after a program is released it is requested only sporadically, and its content may initially be distributed across the memory of several streaming servers, as in the scene at the top of Fig. 2. As the on-demand popularity of the program rises, the scheduling server performs the scheduling operation of migrating the cached content onto a single server and locking it; after this consolidation, switching between streaming servers is reduced. If the popularity keeps rising until a single server can no longer serve the large number of users, the scheduling server then performs the scheduling operation of replicating the memory cache.
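The escalation just described amounts to a threshold rule on the program's popularity. A minimal sketch, assuming hypothetical viewer thresholds that the patent does not specify:

    CONSOLIDATE_THRESHOLD = 50   # viewers at which merging onto one server pays off
    REPLICATE_THRESHOLD = 500    # viewers beyond what a single server can carry

    def decide_cache_action(program_id: str, current_viewers: int) -> str:
        """Escalating cache decisions as a program's popularity rises."""
        if current_viewers >= REPLICATE_THRESHOLD:
            # A single server can no longer serve everyone: replicate the
            # cached content onto additional streaming servers.
            return "replicate"
        if current_viewers >= CONSOLIDATE_THRESHOLD:
            # Rising popularity: merge the distributed fragments onto one
            # server and lock them, reducing switches between servers.
            return "consolidate_and_lock"
        # Sporadic requests: leave the content distributed as it is.
        return "keep_distributed"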
Another example is raising the priority of a memory block that is in use, so as to reduce the risk that the block is released and reused: the block holds program content that users are expected to watch shortly, and when memory runs out, the blocks with the lowest priority are released first.
On the basis of this optimized resource scheduling, the scheduling server can, according to the streaming service request of a user terminal, assign a suitable streaming server to provide the streaming service, as shown in Fig. 3: when a user terminal requests a program, it first sends a streaming service request; once the scheduling server learns of the request, it schedules a suitable streaming server to serve the terminal according to the current resource usage. To improve resource efficiency, the scheduling server can, within a single streaming session, schedule a plurality of streaming servers to serve the user terminal in different time periods. Fig. 4 shows such multi-server switching for a single program: after the user terminal sends its service request, the scheduling server dispatches a streaming server according to the resource usage of each server, and the dispatch command carries the relevant service parameters, such as the program ID and the time span to be served. When a streaming server has finished serving as required, it sends a service-finished notification to the scheduling server, and the scheduling server, according to the current service state of the program, assigns the next streaming server to serve the user or ends the service.
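The dispatch flow of Figs. 3 and 4 can be sketched as follows; the command fields and the pick_server helper are assumptions made for illustration, and a real scheduler would select servers from the resource-usage table rather than through a caller-supplied function.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class DispatchCommand:
        """Parameters the scheduling server attaches to a dispatch order."""
        server_id: str
        program_id: str
        start_offset_s: int   # where in the program this server starts serving
        duration_s: int       # time span this server is asked to serve

    def plan_session(program_id: str, total_duration_s: int, segment_s: int,
                     pick_server: Callable[[str, int], str]) -> List[DispatchCommand]:
        """Split one streaming session across servers in successive time periods."""
        commands: List[DispatchCommand] = []
        offset = 0
        while offset < total_duration_s:
            length = min(segment_s, total_duration_s - offset)
            server_id = pick_server(program_id, offset)
            commands.append(DispatchCommand(server_id, program_id, offset, length))
            # When a server sends its service-finished notification, the
            # scheduler issues the next command in this list or ends the session.
            offset += length
        return commands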
As shown in Fig. 5, the scheduling server specifically comprises a registration processing module, a service statistics module, an optimized scheduling module and a resource-change information collection module.
The registration processing module puts the original resource registrations of the streaming servers, including memory, network throughput and peripheral storage, into the resource-usage table;
the service statistics module forms service statistics;
the optimized scheduling module applies appropriate optimization operations to the use of the resources: it periodically calls the service statistics formed by the service statistics module, makes optimized resource-scheduling decisions, and applies the corresponding operations;
the resource-change information collection module continually collects the change information that optimized resource scheduling and other operations impose on resource usage, so as to keep the resource-usage table up to date.
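The cooperation of the four modules (Fig. 5) can be pictured as a periodic cycle; the module interfaces below (summarise, decide, apply, update) are assumed for this sketch and are not defined in the patent.

    import time

    def scheduling_cycle(table, statistics_module, optimizer, change_collector,
                         period_s: int = 60) -> None:
        """Periodic optimisation cycle of the scheduling server (cf. Fig. 5)."""
        while True:
            # Service statistics module: summarise usage from the resource table
            # (current on-demand status, viewer counts, start-time distribution).
            stats = statistics_module.summarise(table)
            # Optimised scheduling module: decide lock / re-prioritise / replicate
            # operations and apply them to the streaming servers.
            for operation in optimizer.decide(stats):
                optimizer.apply(operation)
            # Resource-change collection module: fold the resulting changes back
            # into the table so it stays accurate for the next cycle.
            change_collector.update(table)
            time.sleep(period_s)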
Using the system described above, the present invention also provides a method for sharing memory cache among multiple streaming servers, whose operating process comprises:
Step 10: each streaming server reports its resources to the scheduling server;
the three main resource classes of a streaming server are network throughput, memory capacity and external storage capacity, and every individual streaming server reports these three resource classes to the scheduling server.
Step 11: the scheduling server performs unified management of the resources and operating information of each individual streaming server;
during operation the scheduling server records in detail the usage information of each resource, such as the program ID occupying the resource and the associated streaming service session IDs, and manages this information centrally.
Step 12: the scheduling server performs optimized resource scheduling on the streaming servers;
Step 121: the scheduling server derives usage statistics of the streaming services from the usage information of each resource, such as the current on-demand status of a program, the number of viewers, and the distribution of request start times;
Step 122: the scheduling server makes optimized resource-scheduling decisions according to these statistics and applies appropriate scheduling operations to the resources.
On the basis of this optimized resource scheduling, the scheduling server can assign a suitable streaming server to provide the streaming service according to the streaming service request of a user terminal, and within a single streaming session it can schedule a plurality of streaming servers to serve the user terminal in different time periods.
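Seen from the streaming-server side, steps 10 to 12 form a report/serve/notify cycle; the sketch below reuses the hypothetical names introduced earlier and is not part of the claimed method.

    from typing import Callable

    class StreamingServerAgent:
        """Streaming-server side of the protocol: report, serve, notify."""

        def __init__(self, server_id: str, scheduler, resources) -> None:
            self.server_id = server_id
            self.scheduler = scheduler    # stub for the scheduling server
            self.resources = resources    # e.g. a ServerResources record

        def report_resources(self) -> None:
            """Step 10: report the three resource classes to the scheduling server."""
            self.scheduler.register(self.server_id, self.resources)

        def handle_dispatch(self, command, serve: Callable[[str, int, int], None]) -> None:
            """Serve the requested program segment, then notify the scheduler so
            it can assign the next streaming server or end the session."""
            serve(command.program_id, command.start_offset_s, command.duration_s)
            self.scheduler.notify_finished(self.server_id, command.program_id)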
In summary, the present invention establishes an association among multiple streaming servers and reduces the resource waste caused by isolated management; through resource sharing it increases the total amount of memory resources and improves the efficiency of the memory cache; and through effective memory caching it reduces the disk access frequency, the disk wear rate and the maintenance cost.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be determined by the scope of protection of the claims.

Claims (13)

1. A system for sharing memory cache among multiple streaming servers, comprising a plurality of streaming servers, the streaming servers providing streaming services for user terminals, characterized in that: at least one of the plurality of streaming servers comprises a memory cache; the system further comprises a scheduling server, the scheduling server being configured to perform unified management of the resource information reported by the plurality of streaming servers and to perform optimized resource scheduling on the streaming servers according to the resource information, so as to realize memory cache sharing among the streaming servers;
the resource information comprises streaming server memory capacity information.
2. The system for sharing memory cache among multiple streaming servers according to claim 1, characterized in that the scheduling server is provided with a resource-usage table, and the resource information contained in the resource-usage table further comprises streaming server network throughput information and external storage capacity information.
3. The system for sharing memory cache among multiple streaming servers according to claim 2, characterized in that the scheduling server comprises:
a registration processing module, for receiving the resource information reported by the streaming servers and putting it into the resource-usage table;
a service statistics module, for forming service statistics according to the resource-usage table;
an optimized scheduling module, for performing optimized resource scheduling according to the service statistics;
a resource-change information collection module, for continually collecting resource-change information.
4. The system for sharing memory cache among multiple streaming servers according to claim 3, characterized in that the optimized resource scheduling is locking a particular memory block, changing the priority of a particular memory block, or replicating a particular streaming service into a particular memory block.
5. The system for sharing memory cache among multiple streaming servers according to claim 1, characterized in that the scheduling server assigns a streaming server to provide the streaming service for a user terminal.
6. The system for sharing memory cache among multiple streaming servers according to claim 1, characterized in that the scheduling server assigns a plurality of streaming servers to provide the streaming service for a user terminal in different time periods.
7. A method for sharing memory cache among multiple streaming servers, characterized by comprising:
A. a scheduling server performing unified management of the resources of a plurality of streaming servers, the resources comprising streaming server memory capacity;
B. the scheduling server performing optimized resource scheduling on the plurality of streaming servers according to the resources, so that the streaming servers share memory.
8. The method for sharing memory cache among multiple streaming servers according to claim 7, characterized in that step A comprises:
A1. the streaming servers reporting their resources to the scheduling server;
A2. the scheduling server putting the resources of the streaming servers into a resource-usage table;
A3. the scheduling server forming service statistics according to the resources in the resource-usage table.
9. The method for sharing memory cache among multiple streaming servers according to claim 8, characterized in that step A further comprises the scheduling server continually collecting the resource-change information of the streaming servers and updating the resource-usage table.
10. The method for sharing memory cache among multiple streaming servers according to claim 7, characterized in that the optimized resource scheduling comprises locking a particular memory block, changing the priority of a particular memory block, or replicating a particular streaming service into a particular memory block.
11. The method for sharing memory cache among multiple streaming servers according to claim 7, characterized in that the method further comprises the scheduling server assigning a streaming server to provide the streaming service for a user terminal.
12. The method for sharing memory cache among multiple streaming servers according to claim 7, characterized in that the method further comprises the scheduling server assigning a plurality of streaming servers to provide the streaming service for a user terminal in different time periods.
13. The method for sharing memory cache among multiple streaming servers according to claim 7, characterized in that the resources further comprise streaming server network throughput and external storage capacity.
CNB2005101212639A 2005-12-23 2005-12-23 System and method for sharing internal storage cache between multiple stream servers Expired - Fee Related CN100459520C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005101212639A CN100459520C (en) 2005-12-23 2005-12-23 System and method for sharing internal storage cache between multiple stream servers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005101212639A CN100459520C (en) 2005-12-23 2005-12-23 System and method for sharing internal storage cache between multiple stream servers

Publications (2)

Publication Number Publication Date
CN1859181A CN1859181A (en) 2006-11-08
CN100459520C true CN100459520C (en) 2009-02-04

Family

ID=37298047

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005101212639A Expired - Fee Related CN100459520C (en) 2005-12-23 2005-12-23 System and method for sharing internal storage cache between multiple stream servers

Country Status (1)

Country Link
CN (1) CN100459520C (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101431475B (en) * 2008-11-20 2011-03-23 季鹏程 Settings of high-performance streaming media server and method for reading high-performance program
CN109167685A (en) * 2018-08-27 2019-01-08 杭州领智云画科技有限公司 CDN quality of service monitoring system and method based on index system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061504A (en) * 1995-10-27 2000-05-09 Emc Corporation Video file server using an integrated cached disk array and stream server computers
CN1595905A (en) * 2004-07-04 2005-03-16 华中科技大学 Streaming media buffering proxy server system based on cluster
CN1604569A (en) * 2004-10-29 2005-04-06 清华大学 A robust point to point based stream scheduling method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061504A (en) * 1995-10-27 2000-05-09 Emc Corporation Video file server using an integrated cached disk array and stream server computers
CN1595905A (en) * 2004-07-04 2005-03-16 华中科技大学 Streaming media buffering proxy server system based on cluster
CN1604569A (en) * 2004-10-29 2005-04-06 清华大学 A robust point to point based stream scheduling method

Also Published As

Publication number Publication date
CN1859181A (en) 2006-11-08

Similar Documents

Publication Publication Date Title
CN101039329B (en) Media delivery system of network TV system based on media delivery
CN101540775B (en) Method and device for distributing contents and network system for distributing contents
EP3466085B1 (en) Methods and systems for generation of dynamic multicast channel maps
JP5383704B2 (en) Predictive caching content distribution network
CN101682355B (en) Method and apparatus providing scalability for channel change requests in a switched digital video system
US20190098067A1 (en) Adaptive Energy System Utilizing Quality of Service and Quality of Experience Metrics
CN101160966B (en) Method, device and system of implementing time-shifting TV
CN102333126B (en) Streaming media on demand method based on Hadoop and virtual streaming media server cluster
US20080059721A1 (en) Predictive Popular Content Replication
CN102882829A (en) Transcoding method and system
CN104378665A (en) Distributed transcoding system and method based on digital television
US20080059565A1 (en) Adaptive Content Load Balancing
CN101291425A (en) Method and system realizing content dynamically publishing based on hotness of user's demand
Carlsson et al. Server selection in large-scale video-on-demand systems
CN101583020B (en) Program broadcasting system and method
CN100459520C (en) System and method for sharing internal storage cache between multiple stream servers
CN102497389A (en) Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV
WO2008049364A1 (en) Method for transmitting information via channel network
WO2008024854A2 (en) Method and apparatus for alternate content recording and reporting for mobile devices
CN103369368B (en) Video cloud on-demand cache scheduling method supporting multi-code-rate version
CN100576905C (en) A kind of VOD frequency treating method and device thereof
CN101695044A (en) Stream media service node and load balancing method thereof
US20090100188A1 (en) Method and system for cluster-wide predictive and selective caching in scalable iptv systems
CN101448031A (en) Method of supporting media tendering address switching in real-time media stream transmission process
CN112543354B (en) Service-aware distributed video cluster efficient telescoping method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090204

Termination date: 20121223