CN103488581B - Data buffering system and data cache method - Google Patents

Data buffering system and data cache method

Info

Publication number
CN103488581B
Authority
CN
China
Prior art keywords
data
buffer zone
type
respective type
stored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310397394.4A
Other languages
Chinese (zh)
Other versions
CN103488581A (en)
Inventor
刘建民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yonyou Network Technology Co Ltd
Original Assignee
Yonyou Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yonyou Network Technology Co Ltd filed Critical Yonyou Network Technology Co Ltd
Priority to CN201310397394.4A
Publication of CN103488581A
Application granted
Publication of CN103488581B
Legal status: Active


Abstract

The invention provides a data buffering system and a data cache method. The data buffering system comprises: a cache division unit, configured to query the type of data to be stored, set a corresponding identifier for the data of each type, and divide a corresponding cache region for the data of each type; and a storage unit, configured to store the data in each cache region in operating system memory, and to store the metadata, identifier and/or attribute information corresponding to the data in each cache region in virtual machine memory. With the technical solution of the present application, when large-memory caching is performed, the set of KEY values is stored separately from the real data, version control of the cached data and archive-level listeners that update automatically can be realized, and the monitoring granularity is improved.

Description

Data buffering system and data cache method
Technical field
The present invention relates to the technical field of data storage, and in particular to a data buffering system and a data cache method.
Background art
In more and more J2EE applications, caching technology is used to store business data so as to reduce the resource consumption caused by frequent database connections, and this practice is becoming increasingly widespread.
Since the read/write speed of memory is 100 times or more that of hard-disk operations such as database operations, memory caching can effectively improve operational efficiency. The popular caching technologies in the industry all implement memory caching; common examples include EhCache, OSCache, JbossCache and MemCache. Each caching technology has its own strengths and suits different business scenarios.
In business systems, data that has a large volume but changes little is usually classified as archive data, and each kind of archive data is maintained and used separately, which keeps things clearer and more concise. Examples include official documents, personnel identities, currency types, postal codes and region names. Such data changes rarely yet is used frequently; it should not trigger a database operation on every query and is better kept in a cache. Several cache regions can be allocated in memory to store, respectively, the data of each kind of archive and the results of each query. A common usage pattern (personnel reference) is shown in Figure 1A.
For the lookup of personnel archive data, the queried data changes little while the total amount of data is large, so it is suitable to cache the commonly used data and the query results so that the next query can locate them quickly.
However, for this type of application scenario the above caching technologies are not entirely suitable and gradually expose some problems, for example: when the amount of cached data is large, too much JVM memory is occupied, which affects the operation of the JVM; archive data is large in volume and suited to storage in large memory, but each technology's support for large memory is imperfect; when the data in a cache region is oversized, frequent serialization of the cached data harms efficiency; and for the caching of concrete archive data, there are problems with version monitoring of the cached data and with automatic replacement when the data changes.
Most caching techniques store data in JVM memory, which is suitable and fast when the amount of data is small. The memory usage of a conventional caching technology is shown in Figure 1B.
However, the JVM imposes a maximum limit on memory usage, and the optimal heap size is generally not very large. Archive data is large in volume: if the configured cache space is small, data is evicted frequently according to the cache policy; if the cache space is set too large, the operation of the JVM itself is affected.
The large-memory caching technology provided in NC63 uses operating-system-level memory outside the JVM to cache data, making full use of the large memory space of highly configured servers. With the development of hardware technology, 64-bit machines have gradually become mainstream, server configurations keep rising, and memory capacity is very large, which meets the needs of large-memory caching.
The advantages of large-memory caching technology are obvious, but problems also arise in the process of caching archive data, mainly in the following two aspects:
Problem one: the division of cache regions, and the serialization and deserialization of data.
Cache access usually goes through a unified entry point, similar to CacheManager. The entry point obtains a Cache by the name of the cache region. The cache is generally used as follows:
// obtain the cache region
Cache cache = CacheManager.getCache(cacheName);
// put cached data
cache.put(key, obj);
// get cached data
cache.get(key);
If a cache region is applied for dynamically for every class of archive, i.e. each cacheName corresponds to one class of archive (each Cache corresponds to one cacheconfig, that is, one block of memory is allocated), the total amount of large memory is hard to control. Given the limit on the total amount of cached data, it is simply not possible to allocate one large memory region per class of archive.
The storage approach of a general caching scheme is therefore to keep several Maps in one cache region, each Map holding the archive data of one specific type.
As shown in Figure 1C, if operating system memory (large memory) is used to store the data, each Map corresponds to one class of archive data. On a get, cacheName is the code of the archive, and the result obtained is the cache of that whole class of archive. When a Java object is moved into large memory it must be serialized; likewise, when data cached in large memory is turned back into a Java object it must be deserialized. Serialization and deserialization are expensive and put pressure on the CPU and the server; performing them frequently will inevitably hurt efficiency, possibly even below that of database storage.
For example, if the cache region occupies a large space (say 2 GB), then every call to getCache(key) that obtains the Map object of one class of archive cache performs a serialization or deserialization of the data, consuming CPU and other resources.
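The cost can be illustrated with a minimal, self-contained sketch using plain Java serialization (the class name and data sizes are illustrative, not taken from the patent): serializing an entire per-archive Map on every access is far more expensive than serializing a single entry.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

public class SerializationCostDemo {
    // Serialize an object graph and return the number of bytes produced.
    static int serializedSize(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        // A stand-in for one class of archive data: 100,000 code -> name entries.
        HashMap<String, String> personnelArchive = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            personnelArchive.put("P" + i, "person-" + i);
        }
        long t0 = System.nanoTime();
        int wholeMap = serializedSize(personnelArchive);   // what a per-Map cache pays on each access
        long t1 = System.nanoTime();
        int oneEntry = serializedSize("person-42");        // what a per-entry cache pays
        long t2 = System.nanoTime();
        System.out.printf("whole map: %d bytes, %.1f ms%n", wholeMap, (t1 - t0) / 1e6);
        System.out.printf("one entry: %d bytes, %.3f ms%n", oneEntry, (t2 - t1) / 1e6);
    }
}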
In addition, regarding cache monitoring, conventional caching technology monitors at the Cache level, i.e. characteristics such as the size, hit rate and refresh frequency of the whole cache region. For the monitoring needs of this kind of archive data, that level is too coarse.
Only the usage of the overall archive Cache can be monitored, which makes it hard to monitor the caching situation of each type of archive data separately.
Therefore, the approach above, in which multiple classes of archive data are stored directly in one large-memory Cache, is not advisable. This is one of the problems the present solution needs to solve.
Problem two: version control and updating of cached data.
The version problem of cached data has always been one of the problems that caching technology needs to solve. In the general case a refresh interval is set, and the cached data is cleaned once per interval. The setting usually looks like this:
<property name="flushInterval" value="3000"/>
Archive data is large in volume and changes little: if the refresh interval is set too long, the validity of the data is delayed; if it is set too short, a large amount of data is evicted and the benefit of caching is not fully exploited. So this refresh strategy is not well suited to caching archive data.
Summary of the invention
Based on the above problems, the present invention proposes a data caching technique which, when large-memory caching is performed, stores the set of KEY values separately from the real data, realizes version control of the cached data together with archive-level listeners that update automatically, and improves the monitoring granularity.
In view of this, the present invention proposes a data buffering system, comprising: a cache division unit, configured to query the type of data to be stored, set a corresponding identifier for the data of each type, and divide a corresponding cache region for the data of each type; and a storage unit, configured to store the data in each cache region in operating system memory, and to store the metadata, identifier and/or attribute information corresponding to the data in each cache region in virtual machine memory.
In the above technical solution, preferably, the system further comprises: a monitoring unit, arranged in the cache region corresponding to the data of each type, configured to monitor the version information of the data of each type; and a cache refresh unit, configured to refresh the cache region where the data of the corresponding type is located according to the changed version information when the monitoring unit detects that the version information has changed.
In the above technical solution, preferably, the monitoring unit is configured to listen in real time to the archive data of the data of the corresponding type and obtain the updated version according to the change of the archive data; the cache refresh unit refreshes the cache region where the data of the corresponding type is located according to the updated version.
In the above technical solution, preferably, the monitoring unit is further configured to record the archive data of the data of the corresponding type, the size of the cache region occupied, the number of times the data is accessed and/or the number of times it is hit.
In the above technical solution, preferably, the monitoring unit is further configured to judge, according to the corresponding identifier, whether the data of the corresponding type has been cached in the corresponding cache region; if it has not been cached, to obtain the data of the corresponding type according to the corresponding identifier, to write the data of the corresponding type into the corresponding cache region according to the corresponding identifier, and to serialize the data of the corresponding type so as to store it in the operating system memory while serializing the metadata, identifier and/or attribute information corresponding to the data of the corresponding type so as to store them in the virtual machine memory.
The present application also proposes a data cache method, comprising: step 202, querying the type of data to be stored, setting a corresponding identifier for the data of each type, and dividing a corresponding cache region for the data of each type; and step 204, storing the data in each cache region in operating system memory, and storing the metadata, identifier and/or attribute information corresponding to the data in each cache region in virtual machine memory.
In the above technical solution, preferably, the method further comprises: step 206, monitoring the version information of the data of each type by means of the listener in the cache region corresponding to the data of each type, and, when the version information changes, refreshing the cache region where the data of the corresponding type is located according to the changed version information.
In the above technical solution, preferably, step 206 comprises: listening in real time, by the listener, to the archive data of the data of the corresponding type, obtaining the updated version according to the change of the archive data, and refreshing the cache region where the data of the corresponding type is located according to the updated version.
In the above technical solution, preferably, the method further comprises: recording, by the listener, the archive data of the data of the corresponding type, the size of the cache region occupied, the number of times the data is accessed, and the number of times it is hit.
In the above technical solution, preferably, the method further comprises: judging, by the listener, according to the corresponding identifier, whether the data of the corresponding type has been cached in the corresponding cache region; if it has not been cached, obtaining the data of the corresponding type according to the corresponding identifier, writing the data of the corresponding type into the corresponding cache region according to the corresponding identifier, and serializing the data of the corresponding type so as to store it in the operating system memory while serializing the metadata, identifier and/or attribute information corresponding to the data of the corresponding type so as to store them in the virtual machine memory.
With the above technical solution, when large-memory caching is performed, the set of KEY values is stored separately from the real data, version control of the cached data and archive-level listeners that update automatically can be realized, and the monitoring granularity is improved.
Brief description of the drawings
Figures 1A to 1C show schematic diagrams of data caching in the related art;
Figure 2 shows a schematic block diagram of a data buffering system according to an embodiment of the invention;
Figure 3 shows a schematic flow diagram of a data cache method according to an embodiment of the invention;
Figure 4 shows a schematic diagram of data caching according to an embodiment of the invention;
Figure 5 shows a schematic diagram of monitoring data according to an embodiment of the invention;
Figure 6 shows a schematic flow diagram of cached-data access and version control according to an embodiment of the invention;
Figure 7 shows a detailed schematic diagram of monitoring data according to an embodiment of the invention;
Figure 8 shows a schematic structural diagram of a listener according to an embodiment of the invention.
Detailed description of the embodiments
In order to understand the above objects, features and advantages of the present invention more clearly, the present invention is described in further detail below with reference to the drawings and specific embodiments. It should be noted that, in the absence of conflict, the embodiments of the application and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description so that the present invention may be fully understood; however, the present invention can also be implemented in ways other than those described here, and therefore the scope of protection of the present invention is not limited by the specific embodiments disclosed below.
Figure 2 shows a schematic block diagram of a data buffering system according to an embodiment of the invention.
As shown in Figure 2, the data buffering system 100 according to an embodiment of the invention comprises: a cache division unit 102, configured to query the type of data to be stored, set a corresponding identifier for the data of each type, and divide a corresponding cache region for the data of each type; and a storage unit 104, configured to store the data in each cache region in operating system memory, and to store the metadata, identifier and/or attribute information corresponding to the data in each cache region in virtual machine memory.
In this technical solution, because the metadata, identifier and/or attribute information corresponding to the data is small, little data has to be serialized when it is written to virtual machine memory; and by first dividing the data of different types into different cache regions and then writing it into operating system memory, the amount of data serialized per access is also reduced, lowering the cost of serialization and relieving the pressure on the system.
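A minimal sketch of this separation follows, assuming plain Java serialization and a direct ByteBuffer as a stand-in for the operating-system-level large memory; the class and field names are illustrative, not the patent's actual implementation.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// One cache region for one type of archive data: the keys and offsets stay on the
// JVM heap, while the serialized values live in off-heap (operating system) memory.
public class ArchiveCacheRegion {
    private final Map<String, long[]> index = new HashMap<>();   // key -> {offset, length}, kept in JVM memory
    private final ByteBuffer bigMemory = ByteBuffer.allocateDirect(64 * 1024 * 1024); // stand-in for OS-level big memory

    public void put(String key, Serializable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);                              // serialize only this one entry
        }
        int offset = bigMemory.position();
        bigMemory.put(bytes.toByteArray());                      // no eviction or overflow handling in this sketch
        index.put(key, new long[] { offset, bytes.size() });
    }

    public Object get(String key) throws IOException, ClassNotFoundException {
        long[] location = index.get(key);
        if (location == null) {
            return null;                                         // not cached
        }
        byte[] raw = new byte[(int) location[1]];
        ByteBuffer view = bigMemory.duplicate();
        view.position((int) location[0]);
        view.get(raw);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();                              // deserialize only this one entry
        }
    }
}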
In the above technical solution, preferably, the system further comprises: a monitoring unit 106, arranged in the cache region corresponding to the data of each type, configured to monitor the version information of the data of each type; and a cache refresh unit 108, configured to refresh the cache region where the data of the corresponding type is located according to the changed version information when the monitoring unit detects that the version information has changed.
In the above technical solution, preferably, the monitoring unit 106 is configured to listen in real time to the archive data of the data of the corresponding type and obtain the updated version according to the change of the archive data; the cache refresh unit 108 refreshes the cache region where the data of the corresponding type is located according to the updated version.
In the above technical solution, preferably, the monitoring unit 106 is further configured to record the archive data of the data of the corresponding type, the size of the cache region occupied, the number of times the data is accessed and/or the number of times it is hit.
In the above technical solution, preferably, the monitoring unit 106 is further configured to judge, according to the corresponding identifier, whether the data of the corresponding type has been cached in the corresponding cache region; if it has not been cached, to obtain the data of the corresponding type according to the corresponding identifier, to write the data of the corresponding type into the corresponding cache region according to the corresponding identifier, and to serialize the data of the corresponding type so as to store it in the operating system memory while serializing the metadata, identifier and/or attribute information corresponding to the data of the corresponding type so as to store them in the virtual machine memory.
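A hedged sketch of this cache-miss path is given below; the loader function and the ArchiveCacheRegion from the previous sketch are illustrative stand-ins, not names prescribed by the patent.
import java.io.IOException;
import java.io.Serializable;
import java.util.function.Function;

// On a miss, load the archive data by its identifier, write it into the corresponding
// region (serialized into OS memory), and keep the key and metadata in JVM memory.
public class MissHandlingMonitor {
    private final ArchiveCacheRegion region;              // from the previous sketch
    private final Function<String, Serializable> loader;  // e.g. a database lookup by identifier

    public MissHandlingMonitor(ArchiveCacheRegion region, Function<String, Serializable> loader) {
        this.region = region;
        this.loader = loader;
    }

    public Object getOrLoad(String identifier) throws IOException, ClassNotFoundException {
        Object cached = region.get(identifier);            // judge by the identifier whether the data is cached
        if (cached != null) {
            return cached;
        }
        Serializable fresh = loader.apply(identifier);     // obtain the data of the corresponding type
        region.put(identifier, fresh);                     // serialize into the OS-memory region, index in JVM memory
        return fresh;
    }
}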
Figure 3 shows a schematic flow diagram of a data cache method according to an embodiment of the invention.
As shown in Figure 3, the data cache method according to an embodiment of the invention comprises: step 202, querying the type of data to be stored, setting a corresponding identifier for the data of each type, and dividing a corresponding cache region for the data of each type; and step 204, storing the data in each cache region in operating system memory, and storing the metadata, identifier and/or attribute information corresponding to the data in each cache region in virtual machine memory.
In this technical solution, because the metadata, identifier and/or attribute information corresponding to the data is small, little data has to be serialized when it is written to virtual machine memory; and by first dividing the data of different types into different cache regions and then writing it into operating system memory, the amount of data serialized per access is also reduced, lowering the cost of serialization and relieving the pressure on the system.
In the above technical solution, preferably, the method further comprises: step 206, monitoring the version information of the data of each type by means of the listener in the cache region corresponding to the data of each type, and, when the version information changes, refreshing the cache region where the data of the corresponding type is located according to the changed version information.
In the above technical solution, preferably, step 206 comprises: listening in real time, by the listener, to the archive data of the data of the corresponding type, obtaining the updated version according to the change of the archive data, and refreshing the cache region where the data of the corresponding type is located according to the updated version.
In the above technical solution, preferably, the method further comprises: recording, by the listener, the archive data of the data of the corresponding type, the size of the cache region occupied, the number of times the data is accessed, and the number of times it is hit.
In the above technical solution, preferably, the method further comprises: judging, by the listener, according to the corresponding identifier, whether the data of the corresponding type has been cached in the corresponding cache region; if it has not been cached, obtaining the data of the corresponding type according to the corresponding identifier, writing the data of the corresponding type into the corresponding cache region according to the corresponding identifier, and serializing the data of the corresponding type so as to store it in the operating system memory while serializing the metadata, identifier and/or attribute information corresponding to the data of the corresponding type so as to store them in the virtual machine memory.
Figure 4 shows a schematic diagram of data caching according to an embodiment of the invention.
As shown in Figure 4, in order to store basic archive data clearly, the cache region needs to be fragmented: one region is allocated for each class of archive data, and the information of each cache region is set separately. The amount of data behind each Key value is small, so the amount of data serialized each time drops to a minimum.
For this situation, the present solution adopts a multi-archive, multi-cache-configuration approach: the essential information of each class of archive is recorded and kept resident in JVM memory; all the real cached data is placed in large memory. The KEY values are separated from the real cached data, and the fragmented real data is placed in the operating system's large memory. This effectively solves the serialization problem.
Only the essential information of each type of archive data, the key value information of the cache, statistical information and so on are stored in JVM memory; the real data is stored in operating system memory. In the JVM, the archives are stored classified by type, which makes later per-type statistics and refreshing convenient.
A unified large-memory cache region, BigMemCache, is allocated in operating system memory for all archives and the data is managed uniformly; in this way each piece of real data occupies only a small space, avoiding the problem of serializing and deserializing large data.
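A minimal sketch of the unified entry point under this layout (BigMemCacheManager and its method are illustrative names, not the patent's implementation): only the small per-archive index objects live on the JVM heap, while each region puts its values into operating system memory, as in the ArchiveCacheRegion sketch above.
import java.util.HashMap;
import java.util.Map;

// Unified entry point: one fragmented cache region per class of archive data,
// looked up by archive name.
public class BigMemCacheManager {
    private final Map<String, ArchiveCacheRegion> regions = new HashMap<>();

    // Obtain (or lazily create) the cache region for one class of archive data.
    public synchronized ArchiveCacheRegion getRegion(String archiveName) {
        return regions.computeIfAbsent(archiveName, name -> new ArchiveCacheRegion());
    }
}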
Figure 5 shows a schematic diagram of monitoring data according to an embodiment of the invention.
As shown in Figure 5, considering that the total size of the real large cache region must be controlled and that monitoring is needed per class, this solution stores the KEY values and the real data of each type of cache separately; the storage structure is as shown in the figure.
As for versions and data updates, automatically updating the cached data according to changes in the monitored data is a good solution.
This storage mechanism creates a version-sensitive VersionSensitiveHashMap for each type of archive data; its main functions are storing the key values, binding the basic archive information and table name, and binding the listener.
This solution adds a listener (Listener) for each class of archive data to monitor the version of the data.
The listener records the data table name corresponding to the archive and a version (for example, a timestamp). When the archive data changes, the version of the data table is updated; once the listener learns of the version change, it updates the cached data. The flow of cached-data access and version control is shown in Figure 6.
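A minimal sketch of such a listener follows; the field names and the table-version query are assumptions made for illustration, since the patent does not fix a particular API.
import java.util.function.Supplier;

// One listener per class of archive data: it remembers the backing table name and the
// last version it saw (e.g. a timestamp), plus per-type access statistics.
public class ArchiveListener {
    private final String archiveName;            // e.g. "personnel"
    private final String tableName;              // data table backing this archive
    private final Supplier<Long> versionQuery;   // returns the table's current version/timestamp
    private long cachedVersion;
    private long accessCount;
    private long hitCount;

    public ArchiveListener(String archiveName, String tableName, Supplier<Long> versionQuery) {
        this.archiveName = archiveName;
        this.tableName = tableName;
        this.versionQuery = versionQuery;
        this.cachedVersion = versionQuery.get();
    }

    // Called on each cache access: returns true if the cached data is still current,
    // false if the table's version changed and the region should be refreshed.
    public boolean checkVersion() {
        long current = versionQuery.get();
        if (current != cachedVersion) {
            cachedVersion = current;
            return false;
        }
        return true;
    }

    public void recordAccess(boolean hit) {
        accessCount++;
        if (hit) {
            hitCount++;
        }
    }

    public String describe() {
        return archiveName + " (" + tableName + "): " + hitCount + "/" + accessCount + " hits";
    }
}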
Figure 7 shows a detailed schematic diagram of monitoring data according to an embodiment of the invention.
As shown in Figure 7, for the data content this solution adopts fragmented cache regions: each kind of archive is allocated one cache region and one listener. The listener is responsible for monitoring the usage of that archive data and for controlling the version according to the data table name of the archive, while the large memory area is responsible for the storage of the data, the cache policy, size control and so on.
Figure 8 shows a schematic structural diagram of a listener according to an embodiment of the invention.
The main architecture of the large-memory caching technology for archives consists of two parts: the set of cache KEY values of the archives and the set of listeners.
Fragmenting the cache region splits the cache region that would originally be allocated as a whole into multiple regions (VersionSensitiveMap) according to the kind of archive, and adds one listener for each region.
Each listener records the name of the archive, the corresponding data table name, the latest version of the archive data, and statistical information such as the number of accesses and the number of hits.
This embodies the separate storage of cache keys and data: the key corresponding to each piece of cached data is recorded in the VersionSensitiveMap. Each time the cached data is accessed, whether the cache exists is first judged according to the key value information, and the statistics are updated according to the access.
At the same time as the access, the latest version corresponding to the table name is queried in the listener, realizing version control. This solves the problems of version control and automatic refreshing described above, while monitoring each type of archive.
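Putting the earlier sketches together, a hedged sketch of the access-plus-version-check flow might look as follows; the names are illustrative, and whereas a real refresh would clear the whole region when the version changes, this sketch simply treats a version change as a miss for the requested key.
import java.io.IOException;
import java.io.Serializable;
import java.util.function.Function;

// Access flow for one class of archive data: check the key set kept in JVM memory,
// check the table version through the listener, then read from or reload into the
// big-memory region.
public class VersionedArchiveCache {
    private final ArchiveCacheRegion region;     // serialized values in OS memory, keys in JVM memory
    private final ArchiveListener listener;      // table name, version, statistics
    private final Function<String, Serializable> loader;

    public VersionedArchiveCache(ArchiveCacheRegion region, ArchiveListener listener,
                                 Function<String, Serializable> loader) {
        this.region = region;
        this.listener = listener;
        this.loader = loader;
    }

    public Object get(String key) throws IOException, ClassNotFoundException {
        boolean upToDate = listener.checkVersion();        // query the latest version for the table
        Object value = upToDate ? region.get(key) : null;  // a stale region is treated as a miss
        listener.recordAccess(value != null);
        if (value == null) {
            Serializable fresh = loader.apply(key);        // reload from the database on a miss or version change
            region.put(key, fresh);
            value = fresh;
        }
        return value;
    }
}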
The large-memory cache storage area is BigMemCache. Using operating system memory to store cached data is also implemented in other caching technologies, such as EhCache, but for this scenario their version control is not refined enough. This solution adopts the caching technology of NC63 and integrates it with the VersionSensitiveMap for versions, handling large-data storage and version updating in a unified way.
Large data is stored in operating system memory, does not occupy JVM memory, and is not affected by JVM garbage collection, making full use of the support of 64-bit operating systems for large memory capacity.
On top of the above two main technical points, this storage mechanism encapsulates the entry point, binds the listener to the version-sensitive Map, and integrates the large-memory caching technology with this solution, forming a complete large-memory caching mechanism for archive data.
In terms of cache monitoring granularity, this solution is refined to the level of business archive types, which is clearer and easier to manage. Using this solution to cache archive data breaks through the limit on memory size, reduces the pressure on the server, reduces the number of connections to the database, and improves the efficiency of data access.
In addition, combined with the large-memory caching technology provided by NC63, this solution realizes archive-level data monitoring and automatic updating of cached-data versions.
The technical solution of the present invention has been described above with reference to the drawings. In the related art, large-memory caching suffers from problems with the division of cache regions, with data serialization and deserialization, and with the control and updating of cached-data versions. With the technical solution of the present application, when large-memory caching is performed, the set of KEY values is stored separately from the real data, version control of the cached data and archive-level listeners that update automatically can be realized, and the monitoring granularity is improved.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A data buffering system, characterized by comprising:
a cache division unit, configured to query the type of data to be stored, set a corresponding identifier for the data of each type, and divide a corresponding cache region for the data of each type;
a storage unit, configured to store the data in each cache region in operating system memory, and to store the metadata, identifier and/or attribute information corresponding to the data in each cache region in virtual machine memory.
2. The data buffering system according to claim 1, characterized by further comprising:
a monitoring unit, arranged in the cache region corresponding to the data of each type, configured to monitor the version information of the data of each type;
a cache refresh unit, configured to refresh the cache region where the data of the corresponding type is located according to the changed version information when the monitoring unit detects that the version information has changed.
3. The data buffering system according to claim 2, characterized in that the monitoring unit is configured to listen in real time to the archive data of the data of the corresponding type and obtain the updated version according to the change of the archive data; the cache refresh unit refreshes the cache region where the data of the corresponding type is located according to the updated version.
4. The data buffering system according to claim 2, characterized in that the monitoring unit is further configured to record the archive data of the data of the corresponding type, the size of the cache region occupied, the number of times the data is accessed and/or the number of times it is hit.
5. The data buffering system according to any one of claims 2 to 4, characterized in that the monitoring unit is further configured to judge, according to the corresponding identifier, whether the data of the corresponding type has been cached in the corresponding cache region; if it has not been cached, to obtain the data of the corresponding type according to the corresponding identifier, to write the data of the corresponding type into the corresponding cache region according to the corresponding identifier, and to serialize the data of the corresponding type so as to store it in the operating system memory while serializing the metadata, identifier and/or attribute information corresponding to the data of the corresponding type so as to store them in the virtual machine memory.
6. A data cache method, characterized by comprising:
step 202, querying the type of data to be stored, setting a corresponding identifier for the data of each type, and dividing a corresponding cache region for the data of each type;
step 204, storing the data in each cache region in operating system memory, and storing the metadata, identifier and/or attribute information corresponding to the data in each cache region in virtual machine memory.
7. The data cache method according to claim 6, characterized by further comprising:
step 206, monitoring the version information of the data of each type by means of the listener in the cache region corresponding to the data of each type, and, when the version information changes, refreshing the cache region where the data of the corresponding type is located according to the changed version information.
8. The data cache method according to claim 7, characterized in that step 206 comprises: listening in real time, by the listener, to the archive data of the data of the corresponding type, obtaining the updated version according to the change of the archive data, and refreshing the cache region where the data of the corresponding type is located according to the updated version.
9. The data cache method according to claim 7, characterized by further comprising: recording, by the listener, the archive data of the data of the corresponding type, the size of the cache region occupied, the number of times the data is accessed, and the number of times it is hit.
10. The data cache method according to any one of claims 7 to 9, characterized by further comprising: judging, by the listener, according to the corresponding identifier, whether the data of the corresponding type has been cached in the corresponding cache region; if it has not been cached, obtaining the data of the corresponding type according to the corresponding identifier, writing the data of the corresponding type into the corresponding cache region according to the corresponding identifier, and serializing the data of the corresponding type so as to store it in the operating system memory while serializing the metadata, identifier and/or attribute information corresponding to the data of the corresponding type so as to store them in the virtual machine memory.
CN201310397394.4A 2013-09-04 2013-09-04 Data buffering system and data cache method Active CN103488581B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310397394.4A CN103488581B (en) 2013-09-04 2013-09-04 Data buffering system and data cache method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310397394.4A CN103488581B (en) 2013-09-04 2013-09-04 Data buffering system and data cache method

Publications (2)

Publication Number Publication Date
CN103488581A CN103488581A (en) 2014-01-01
CN103488581B true CN103488581B (en) 2016-01-13

Family

ID=49828829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310397394.4A Active CN103488581B (en) 2013-09-04 2013-09-04 Data buffering system and data cache method

Country Status (1)

Country Link
CN (1) CN103488581B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104111899A (en) * 2014-07-03 2014-10-22 北京思特奇信息技术股份有限公司 Cache data storage method and system and cache data reading method
CN104112024A (en) * 2014-07-30 2014-10-22 北京锐安科技有限公司 Method and device for high-performance query of database
CN104123264A (en) * 2014-08-01 2014-10-29 浪潮(北京)电子信息产业有限公司 Cache management method and device based on heterogeneous integrated framework
CN104281673B (en) * 2014-09-22 2018-10-02 珠海许继芝电网自动化有限公司 A kind of caching structure system of database and corresponding construction method
CN104484285B (en) * 2014-12-09 2017-11-17 杭州华为数字技术有限公司 A kind of memory management method and device
CN104572973A (en) * 2014-12-31 2015-04-29 上海格尔软件股份有限公司 High-performance memory caching system and method
CN104657435B (en) * 2015-01-30 2019-09-17 新华三技术有限公司 A kind of memory management method and Network Management System using data
CN104866976A (en) * 2015-06-01 2015-08-26 北京圆通慧达管理软件开发有限公司 Multi-tenant-oriented information managing system
CN107273522B (en) * 2015-06-01 2020-01-14 明算科技(北京)股份有限公司 Multi-application-oriented data storage system and data calling method
JP6424330B2 (en) * 2015-10-13 2018-11-21 株式会社アクセル INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
CN105550300B (en) * 2015-12-11 2020-02-04 北京奇虎科技有限公司 Method and device for issuing message
CN106021445B (en) * 2016-05-16 2019-10-15 努比亚技术有限公司 It is a kind of to load data cached method and device
CN106095698B (en) * 2016-06-03 2019-04-23 合一网络技术(北京)有限公司 Caching write-in, read method and the device of object-oriented
CN108009019B (en) * 2016-10-29 2021-06-22 网宿科技股份有限公司 Distributed data positioning example method, client and distributed computing system
CN108153794B (en) * 2016-12-02 2022-06-07 阿里巴巴集团控股有限公司 Page cache data refreshing method, device and system
CN107301051A (en) * 2017-06-27 2017-10-27 深圳市金立通信设备有限公司 The caching of terminal dynamic data and exchange method, terminal, system and computer-readable recording medium
CN107704573A (en) * 2017-09-30 2018-02-16 山东浪潮通软信息科技有限公司 A kind of intelligent buffer method coupled with business
CN109769005A (en) * 2017-11-09 2019-05-17 宁波方太厨具有限公司 A kind of data cache method and data buffering system of network request
CN108197456B (en) * 2018-01-16 2020-05-19 飞天诚信科技股份有限公司 Equipment data caching method and device
CN109446222A (en) * 2018-08-28 2019-03-08 厦门快商通信息技术有限公司 A kind of date storage method of Double buffer, device and storage medium
CN109324761A (en) * 2018-10-09 2019-02-12 郑州云海信息技术有限公司 A kind of data cache method, device, equipment and storage medium
CN109491873B (en) * 2018-11-05 2022-08-02 阿里巴巴(中国)有限公司 Cache monitoring method, medium, device and computing equipment
CN110008213A (en) * 2019-03-13 2019-07-12 国电南瑞科技股份有限公司 A kind of regulator control system real time data separate type management method
CN110389781B (en) * 2019-05-31 2023-04-28 深圳赛安特技术服务有限公司 Version control-based localtorage cache implementation method, device and storage medium
CN110737680A (en) * 2019-09-23 2020-01-31 贝壳技术有限公司 Cache data management method and device, storage medium and electronic equipment
CN111736776B (en) * 2020-06-24 2023-10-10 杭州海康威视数字技术股份有限公司 Data storage and reading method and device
CN111897819A (en) * 2020-07-31 2020-11-06 平安普惠企业管理有限公司 Data storage method and device, electronic equipment and storage medium
CN113342824A (en) * 2021-06-30 2021-09-03 平安资产管理有限责任公司 Data storage method, device, equipment and medium based on target storage equipment
CN115878505B (en) * 2023-03-01 2023-05-12 中诚华隆计算机技术有限公司 Data caching method and system based on chip implementation
CN117271840B (en) * 2023-09-22 2024-02-13 北京海致星图科技有限公司 Data query method and device of graph database and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6430564B1 (en) * 1999-03-01 2002-08-06 Hewlett-Packard Company Java data manager for embedded device
US6507891B1 (en) * 1999-07-22 2003-01-14 International Business Machines Corporation Method and apparatus for managing internal caches and external caches in a data processing system
US6633862B2 (en) * 2000-12-29 2003-10-14 Intel Corporation System and method for database cache synchronization across multiple interpreted code engines
CN101510144A (en) * 2009-03-24 2009-08-19 中国科学院计算技术研究所 Distributed cache system based on distributed virtual machine manager and working method thereof
CN102081523A (en) * 2009-11-27 2011-06-01 浙江省公众信息产业有限公司 Dynamic loading system and method


Also Published As

Publication number Publication date
CN103488581A (en) 2014-01-01

Similar Documents

Publication Publication Date Title
CN103488581B (en) Data buffering system and data cache method
US11899937B2 (en) Memory allocation buffer for reduction of heap fragmentation
CN101916302B (en) Three-dimensional spatial data adaptive cache management method and system based on Hash table
CN102331986B (en) Database cache management method and database server
CN103336849B (en) A kind of database retrieval system improves the method and device of retrieval rate
CN103902474B (en) Mixed storage system and method for supporting solid-state disk cache dynamic distribution
US9449005B2 (en) Metadata storage system and management method for cluster file system
CN102902730B (en) Based on data reading method and the device of data buffer storage
CN104850358B (en) A kind of magneto-optic electricity mixing storage system and its data acquisition and storage method
US8799409B2 (en) Server side data cache system
US20140006687A1 (en) Data Cache Apparatus, Data Storage System and Method
CN103795781B (en) A kind of distributed caching method based on file prediction
CN103379156B (en) Realize the mthods, systems and devices of memory space dynamic equalization
CN109947363A (en) A kind of data cache method of distributed memory system
CN101162441B (en) Access apparatus and method for data
CN102638584A (en) Data distributing and caching method and data distributing and caching system
CN102163231A (en) Method and device for data collection
CN101373445B (en) Method and apparatus for scheduling memory
CN103488685B (en) Fragmented-file storage method based on distributed storage system
CN110188080A (en) Telefile Research of data access performance optimization based on client high-efficiency caching
CN102945251A (en) Method for optimizing performance of disk database by memory database technology
CN104111898A (en) Hybrid storage system based on multidimensional data similarity and data management method
Yoon et al. Mutant: Balancing storage cost and latency in lsm-tree data stores
CN107430551A (en) Data cache method, memory control device and storage device
CN104320448A (en) Method and device for accelerating caching and prefetching of computing device based on big data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100094 Haidian District North Road, Beijing, No. 68

Applicant after: Yonyou Network Technology Co., Ltd.

Address before: 100094 Beijing city Haidian District North Road No. 68, UFIDA Software Park

Applicant before: UFIDA Software Co., Ltd.

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant