US20080140941A1 - Method and System for Hoarding Content on Mobile Clients - Google Patents
- Publication number
- US20080140941A1 (application US11/567,936)
- Authority
- US
- United States
- Prior art keywords
- client
- content set
- hoarding
- content
- files
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/172—Caching, prefetching or hoarding of files
Abstract
A device and method for efficiently hoarding a content set on a mobile client prior to disconnection of the mobile client from a network. The content set to be hoarded on the mobile client, and a respective schedule for hoarding it, are dynamically computed by considering real-time factors, such as file utilities, device capabilities and network connectivity, that affect the performance of the mobile client and the hoarding process.
Description
- This invention relates to a method of hoarding content on a mobile client and an intelligent mobile information processing system.
- With the increasing prevalence and popularity of mobile computing, users increasingly demand constant and continuous availability of content, making mobile computing a dominant force in personal computing. With advances in mobile technology, a plethora of portable electronic devices (mobile clients), such as laptop computers, handheld devices and the like, promise to deliver the vision of accessing user data anytime, anywhere. However, in the absence of continuous wireless connectivity, it becomes imperative to provide support for disconnected operations in mobile environments. Wireless links are slow, sometimes unreliable, expensive to implement and use, and not available to all users.
- Hoarding (caching) is a technique that allows users, for example mobile users, to locally cache content on a mobile client and then access the cached content even while the mobile client operates in a disconnected mode. Hoarding is used for selecting a content set, for example a set of documents, files or any other specific form of data, and caching the content set on a mobile client. The cached content set may be used when the mobile client is disconnected, for example from the network. This allows anytime, anywhere access to data on a mobile client and is advantageous in supporting offline access to content. To ensure that the client can continue to access all the necessary files, the hoarding scheme may cache some files that the client never uses during the particular period when the client is disconnected. This loads the client with unnecessary information and files, and hence consumes disk space and resources, which are limited on a mobile client.
- Often constraints on the mobile client, such as the device memory and the like, are not sufficiently large to accommodate the content set that has been requested, and hence a decision needs to be made on the content set that is to be hosted on the mobile client. A normal hoarding process requires the mobile client to be synchronized with a main system, for example a server, from which the content set is fetched. Hoarding and user-behavior-analyzing engines are typically placed on the server side in order to analyze the tracking data, create user models and decide what material should be included in the content set. Existing solutions to this problem rely on a combination of hoard profiles and spying on a user's file accesses. Neither of these approaches is ideal in terms of user friendliness and reliability. In fact, one of the problems with mobile clients is that they are primarily disconnected entities that connect to and disconnect from servers at the clients' discretion.
- Even assuming the existence of an anytime, anywhere, on-demand wireless communication service, a cache miss while disconnected is an expensive proposition due to the cost of re-establishing communications with the servers. The goal of hoarding (caching) is to eliminate cache misses entirely during periods when the mobile client is disconnected. For example, consider a user who travels. Prior to disconnection the user runs the applications intended for use while traveling, thereby filling the local disk cache appropriately. A disadvantage of this, in addition to being inconvenient, is that a program may require different files for different types of executions; no single execution reveals the full gamut of an application's file accesses. In another case, the user may specify precisely the files and directories to be hoarded in a hoard profile. This approach is more cumbersome and unreliable. An additional disadvantage of this approach is that creating an accurate hoard profile is not trivial; for example, even a conscientious user might not be able to accurately specify all of the files needed by a specific program. A further disadvantage is that when a mobile user is disconnected, a cache miss can mean a significant loss of time and money, and a complete halt to work if critical information has not been cached.
- These disadvantages are magnified if critical cache misses occur during disconnected operations. The penalty of such cache misses is very high and may prevent a disconnected client from continuing its operation altogether. A further disadvantage is that existing hoarding systems do not work well for personal information appliances that provide access to information that cannot be neatly organized and that does not have a recurring access pattern. A further disadvantage is that mobile computing devices have constraints, such as battery power, signal strength and network bandwidth, which are not taken into consideration in present hoarding systems.
- Without a way to improve the method and system of hoarding, the promise of this technology may never be fully achieved.
- A first aspect of the invention is a method for hoarding content on a requesting entity (client). The client is configured to compute a hoard set, and a respective schedule for hoarding the hoard set on the client. At the scheduled time, the client is configured to initiate the hoarding process by communicating with a respective servicing entity. The hoarding process typically involves a client communicating with a server to fetch the desired content set and then caching the content set onto the client. The hoarding process is preferably executed prior to disconnection of the client and/or the server from the network. An advantage is that the client is configured to compute available power, bandwidth and other dependent factors, thereby enhancing the performance of the client for hoarding the content set. Factors such as available power and bandwidth resources are considered while computing the content set and the schedule, thereby allowing the client to efficiently manage power consumption and network bandwidth during the hoarding process.
- A second aspect of the invention comprises an electronic computing device comprising at least a processor unit, a memory unit, an input/output interface and a transceiver. The electronic computing device is configured to compute the content set to be hoarded and the respective schedule for hoarding the content set. At the scheduled time for hoarding, the required content set is transmitted as a signal embodied in a carrier wave from a server to the electronic computing device. The client is further configured to receive the signal containing the content set and then cache the content set on the electronic computing device prior to disconnection. The electronic computing device comprising at least the processor unit and the memory unit further contains a computer program product which, on being loaded, is capable of providing the processing unit with the capability of executing the hoarding method.
-
FIG. 1 illustrates an embodiment of typical system architecture in accordance with the invention. -
FIG. 2A illustrates an exemplary embodiment of hoarding workflow for a receiving entity and a servicing entity. -
FIG. 2B illustrates an exemplary embodiment of hoarding workflow for a receiving entity and a servicing entity involving a gateway as a preferred embodiment. -
FIG. 3 illustrates an exemplary embodiment of a flow of transferring files based on a disconnection deadline. -
FIG. 4 illustrates an exemplary embodiment of a computer system suitable for use with the method of FIGS. 2A and 2B and in the architecture of FIG. 1. - Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears. The expression “requesting entity” should be understood as a client, such as a mobile client. The expression “servicing entity” should be understood as a server on which content may be hosted. The expression “hoarding” is to be understood as synonymous with “caching”. The expression “hoard set” is to be understood as a “content set” or “content” or “file set” or “set of files” or “files”. The expression “intermediate communication medium” is to be understood as a “gateway”. Other equivalent expressions to the above expressions would be apparent to a person skilled in the art.
- Disclosed is a system and method of efficiently hoarding content on mobile clients, particularly for disconnected operations in a mobile environment. Hoarding allows mobile users to locally cache content on a mobile client, for example a portable electronic device such as a mobile phone, a personal digital assistant, a pocket personal computer and the like, immediately prior to disconnection or in a weakly connected mode. In this application, whenever the phrase “immediately prior to disconnection” is used, it refers to the mobile client being in a disconnected mode and/or in a weakly connected mode. The cached content set can then be accessed by the client in the disconnected mode. To cache content efficiently it is necessary to compute a content set and a proper schedule for hoarding the hoard set on the client that is requesting the hoarding.
- To compute the schedule for hoarding content on the mobile client, factors/attributes such as file utilities, device capabilities, network connectivity and the like are typically considered. The hoarding process selects a content set based on the content/file utilities and then, depending on the other factors, a schedule for hoarding the content set is computed by the client. In one embodiment, the content set and the schedule can also be computed on a gateway at the request of the client.
- At the scheduled time for hoarding, the client is configured to receive the content set from the server and/or the gateway. The client then caches the content set on the local memory or storage space available in the client such that the content set is available when the client is in a disconnected mode.
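The client-side flow just described — compute a content set, then fetch and cache it at the scheduled time — can be sketched as follows. This is an illustrative sketch only: the names (compute_hoard_set, hoard_at_schedule, fetch) are assumptions rather than terminology from the patent, and the greedy utility-first selection stands in for whatever selection policy the client actually applies.

```python
# Hypothetical sketch of the hoarding flow; names and the greedy
# selection policy are assumptions, not taken from the patent.

def compute_hoard_set(candidates, capacity):
    """Pick the highest-utility files that fit the cache capacity.

    candidates: list of (name, utility, size) tuples.
    """
    hoard, used = [], 0
    for name, utility, size in sorted(candidates, key=lambda f: f[1], reverse=True):
        if used + size <= capacity:
            hoard.append(name)
            used += size
    return hoard

def hoard_at_schedule(candidates, capacity, fetch):
    """At the scheduled time, fetch each selected file and cache it locally."""
    cache = {}
    for name in compute_hoard_set(candidates, capacity):
        cache[name] = fetch(name)   # e.g. a request to the server or gateway
    return cache
```

For example, with a 60-unit cache a 40-unit high-utility file and a 10-unit file would be hoarded while an 80-unit file is skipped, even if its utility is higher than the small file's.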
-
FIG. 1 depicts an exemplary embodiment of system architecture 100 involving a client 110 and a server 120. The client 110 is configured to fetch content from a server 120. The content fetched from the server is then cached locally on the client 110, for example on a memory device of the client. According to the framework, the system 100 is also geared towards storage-limited clients, for example thin clients and semi-thin clients. Clients 110 preferably include, but are not limited to, a variety of portable electronic devices such as mobile phones, personal digital assistants (PDAs), pocket personal computers, laptop computers and the like. It should be apparent to a person skilled in the art that any electronic device which includes at least a processor and a memory can be termed a client within the scope of the present invention. - The
client 110 is configured to compute the content set which is required to be cached, for example a content set containing a set of files. In addition to computing the content set, the client 110 is also configured to compute a schedule for hoarding the content set by considering factors such as network connectivity, device characteristics, file characteristics and the like. Factors that typically affect the performance of the client 110 are also considered in computing the content set and the schedule for hoarding it. Once the content set and the schedule have been computed, the client stores them, for example as a calendar entry, and at the scheduled time the client 110 is configured to trigger and initiate the hoarding process. The client fetches the content set from the respective server, and then caches the content set onto the client prior to disconnection. The content set may be computed by the client either manually or without any manual intervention. - In one embodiment, the
client 110 is configured to dynamically determine the content set and the schedule for hoarding the content set. For example, if during the process of fetching the content set from the server and caching it locally on the client 110, the available battery power of the client 110 is insufficient to complete the process of fetching and caching, the client 110 is configured to immediately stop the hoarding process. The client 110 is then configured to determine the entities (for example, a set of files) that have already been fetched and cached, and compute a new schedule for hoarding the remaining entities of the original content set. In one embodiment, the client 110 may be configured to discard the fetched content and compute a new schedule for fetching and caching the original content set when partially cached content is found on the client. In a further embodiment the client may be configured to store a list of proximate servers from which the content set can be fetched quickly and reliably. - The content set required by the
client 110 is computed or generated by the client 110, with or without human intervention. The content set typically contains a set of files or programs or any other specific content that is required by the client 110 for use in an offline mode. When the content set has been determined, the client 110 computes a schedule for hoarding the content set on the client. At the scheduled time, the client is configured to transmit a request 112, for example a request containing the content set and any other relevant information, to a respective server 120. The server 120 is configured to transmit a response 116 to the request 112, and the response 116 is received by the client 110. For example, the response 116 contains the set of files that were requested, along with any other relevant information. Typically, communication between the client 110 and the server 120 requires a client-server protocol 114, for example TCP/IP, push-pull mechanisms or the like, which is established once the client 110 sends a request to the server 120 and the server acknowledges the request from the client. - Typically,
servers 120 are available in many forms such as application servers, web servers, database servers, and so forth. Preferably, the gateway 130 and the server 120 may be coupled into a single system that is configured to perform the role of the server 120 and function as a gateway 130. In addition, an external storage device capable of storing content may be coupled to the server and/or the intermediate communication medium, and can be termed a server within the scope of this invention. - In a further embodiment, the
client 110 is configured to send a request to the server 120 requesting a content set to be cached on the client. The server 120, based on the parameters/factors described above, is configured to compute a schedule for hoarding the content set and, at the scheduled time, initiate transferring the content set to be cached on the client 110 using a push mechanism. In this case, the server 120 needs to constantly ping the client 110 to obtain information on the current status of the client 110 and then predict an accurate schedule for hoarding the content set on the client 110 by considering the variations in the various client factors. This can typically be computed using a historical database of previously stored status factors related to the client, interpolating the current status factors with the historically available data to predict and compute an accurate schedule. - In a further embodiment,
certain clients 110 require a gateway 130 to communicate with the server 120. The essential feature of the gateway 130 is to translate the request 112 from the client 110 into a format that the server 120 is capable of understanding. The request 112 from the client 110 is first transmitted to the gateway 130. The gateway then performs the role of the client by requesting the content set from the server 120. The gateway 130 is then configured to receive the content set and cache it in the local cache of the gateway 130. The communication between the gateway 130 and the server 120 is typically via the communication channel 118, TCP/IP, push-pull or the like, as described previously. At the scheduled time for hoarding the content on the client 110, the gateway is configured to transmit the content set to the client as a response 116, using for example a push mechanism, and the content set pushed from the gateway 130 is cached on the client 110. - In a further embodiment, the
client 110 may transmit the request 112, containing the hoarding schedule and/or the content set, in advance to the gateway 130. The request 114 from the client 110 is transmitted to the gateway 130 in advance. The gateway processes the request 114 of the client 110 and is configured to pre-fetch the content set from the server 120, even prior to the scheduled time for hoarding, and the content set fetched from the server is cached on the gateway 130. At the scheduled time for hoarding the content set on the client 110, the gateway 130 is configured to push the content set to the client 110. The client 110 is configured to receive the content set and subsequently cache it prior to disconnection from the network. An advantage of using the gateway is that the content set may be pre-fetched and stored on the server and/or gateway; pushing the content set from the gateway to the client makes the connection faster through pre-processing of the request 114 and formatting of the content set received from the server 120 in the way required by the client 110. In yet a further embodiment, as described earlier in the case of the server, the gateway 130 may compute the scheduled time of hoarding the content on the client 110 by determining the current status of the client 110 and using a historical database of previously stored factors for the client. The pre-fetching from the server may also be advantageously scheduled by considering the current status of the server and previously stored values in a historical database. An advantage of this method is better predictability and more efficient content fetching and caching. - A further advantage of using the
gateway 130 is that, because of the limited memory on the client 110, large amounts of content/files cannot be stored on the client 110. The content set is therefore fetched, as requested by the client 110, from the local cache of the gateway 130. Content is accessed by the client 110 via a network from the server 120 and/or the gateway 130 when in a connected mode or in a weakly connected mode. A further advantage is the efficient performance of hoarding the content set over a slow and unreliable network connection. This is because the computed schedule considers various parameters of the client 110, such as the battery power, the energy consumption required for hoarding the files onto the local cache, the received and transmitted signal strength, the signal strength of the network, network bandwidth availability, and the like. Hoarding the content set on the client 110 is preferably done dynamically without any human intervention, but can also be done with manual intervention. - The
client 110 is coupled to the server 120, either directly or via the gateway 130, by means of a wired network, a wireless network or a combination thereof. For example, a wired network includes coupling via cable, optical fiber and the like. Wireless networks include wireless standards such as Bluetooth, digitally enhanced cordless telecommunication (DECT), dedicated short range communication (DSRC), HIPERLAN, HIPERMAN, IEEE 802.11x, IrDA, radio frequency identification (RFID), WiFi, WiMax, xMax, ZigBee and the like. - For example, it is advantageous for the
client 110 to cache the required content/files only when the level of network connectivity is favorable. The content/files are prioritized, by grouping or ordering the files, based on various parameters such as battery power, network bandwidth and signal strength, which are real-time characteristics of the client and/or the connectivity. A schedule for hoarding the content/files is then computed at the client 110 and/or the server and/or the gateway 130, as described previously, and the required content set is fetched from a respective server and cached on the client 110 efficiently. -
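The connectivity-gated prioritization described above can be sketched as follows. The function name, the utility-per-size ordering and the 0-to-1 signal scale are illustrative assumptions, not details from the patent.

```python
def prioritize(files, signal_strength, min_signal=0.5):
    """Order candidate files for hoarding, gated on connectivity.

    files: list of (name, utility, size) tuples. Hoarding proceeds only
    when the signal strength (assumed here to be on a 0..1 scale) is
    favorable; files are then ordered by utility per unit size so the
    most valuable content transfers first if the link degrades later.
    """
    if signal_strength < min_signal:
        return []  # defer hoarding until connectivity improves
    return sorted(files, key=lambda f: f[1] / f[2], reverse=True)
```

Ordering by utility density rather than raw utility means a small, moderately useful file can outrank a large, slightly more useful one, which suits the limited transfer window before disconnection.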
FIG. 2A illustrates an exemplary embodiment of a typical workflow 200 for hoarding the content set on the client 210. The client 210 is typically coupled to a respective server 220 on which the content set is available, and the client-server connection 114 allows clients 210 seamless access to the content set over a network. - The main components of the
client 210 comprise a call monitor 240, which is configured to monitor the file system calls made by the client 210 (and all its applications), such as open, close, read and write. For example, in a Linux client, call monitoring can be done at the virtual file system (VFS) layer by trapping. For other operating systems, the call monitor 240 is configured to hook into the file system that is used to trap these calls. The call monitor 240 collects information about file references, such as file names, time of reference and process name, and these references are passed on to a filter 241 that is coupled to the call monitor 240. - Some processes, such as the UNIX find (stat), access a large number of files for a short period of time. These processes can cause confusion in calculating the hoarding schedule for the content set and adversely affect the prediction of the hoarding schedule. In order to nullify the effects of such unwanted processes in the computation of the hoarding schedule, the
filter 241 is configured to discard the accesses made by such processes. The information that remains on valid file accesses is saved into the collector 243, which is coupled to the filter 241. - The
collector 243 stores information about all valid file accesses for the client. The collector 243 is shared between the call monitor 240, for example the kernel entity, and the master manager 245, for example the user entity, which is configured to make information exchange efficient. An explicit list is created on the client 210 to provide the user a means to specify a list of files/content, i.e., a content set, which should be hoarded irrespective of its utility values. This option avoids the inconvenience of missing important but less frequently used content/files. Preferably, the content/files that were missed during the disconnected operation are recorded on the client 210 to improve its accuracy during any future use. For example, the user records the miss along with an indicator specifying whether the particular miss was a show-stopper, a hard miss or an ignorable soft miss. The list of content/files from the content set that are missed is updated on the client 210 for computing a suitable hoarding schedule. - The
master manager 245 maintains a master table, for example a lookup table, where information about file accesses is stored. Information from the collector 243 is used to update the corresponding file access history in the master table. A utility calculator 246 is coupled to the master manager 245 and is configured to read the file access history and the previously stored utility values in the master table. The utility calculator 246 is configured to compute the utility value for each file at the scheduled hoarding time using the already stored utility values and file access history. The utility calculation is an adaptive process and it gains accuracy over time as the master table is updated during every hoarding process. In a further embodiment, the system is configured to learn the important content/files in a content set being cached and to dynamically compute a hoarding schedule for such content/files. - For example, critical files are those that are essential for system operation and may contain important control and configuration information, for example “dot” files that record the start-up and configuration information of UNIX-specific applications. These files tend to be small and consume relatively little disk space, for example 1.5 MB for office/, 52 KB for .fvwm/, etc. The master table in the
master manager 245 determines which content/files of the content set are critical for a client 210. For example, in thin clients that have a small hoard capacity, the dot files may be selectively chosen based on the utility of the applications. The critical files are always included in the hoard. - A
file selector 247 is coupled to the utility calculator 246 and is configured to select the content set based on the utility value of each of the content/files in the content set at the scheduled hoarding time, given the constraints. The constraints include, among others, the available hoard capacity, available battery lifetime, available bandwidth, the signal strength, the remaining time before the client disconnects, the planned disconnection period and the like. A scheduling algorithm running in the file selector 247 selects the actual content set that is required to be hoarded on the client 210. - Once the content set is computed on the client 210, the schedule for hoarding the content set on the client 210 is also computed. The communication between the client 210 and the server 220 is typically accomplished using, for example, a push-pull mechanism. - In a further embodiment
FIG. 2B illustrates an exemplary embodiment of a typical workflow 201 for hoarding the content set on the client 210. The client 210 couples to a respective server 220 via the gateway 230, allowing the clients 210 seamless access to the content set over a network. - In
FIG. 2B, the call monitor 240, the call filter 241 and the collector 243 form part of the mobile client 210. The functioning of each of these components has been described previously with respect to FIG. 2A. The master manager 245, the utility calculator 246 and the file selector 247 form part of the gateway 230. The functioning of these components has also been described previously for FIG. 2A. It should be apparent to a person skilled in the art that in clients 210 with limited memory and processing power, the computation of the content set is performed on the gateway 230. - In a further embodiment, the
client 210 may compute the hoard set and transmit it to the gateway 230. Based on the hoard set, the gateway 230 is configured to compute a schedule to hoard the files on the client 210 based on various constraints such as the available hoard capacity, available battery lifetime, available bandwidth, the signal strength, the remaining time before the client disconnects, the planned disconnection period and the like. The gateway 230 can pre-fetch the files required to be hoarded on the client 210 and store the files in the local cache of the gateway 230. At the scheduled time the gateway 230 can initiate transferring the files to the client 210 by a push mechanism, such that the content set is hoarded on the client prior to disconnection. - File access patterns can be used to determine the utility of a file, which is an indication of the usefulness of the file to the user for hoarding purposes. The hoarding utility of content/files also indicates the probability of the content/files being accessed at a future time. The utility of a file can, for example, be calculated from frequency of access, recency of access, duration of access and regularity of access.
- Future probability of file access depends on the access history due to the local properties of the content/file, properties of the device, etc. The file access time represents the temporal file access behavior of the user and is exploited for file access prediction. Least recently and frequently used (LRFU) is a cache replacement algorithm that combines the most popular cache replacement policies of least recently used (LRU) and least frequently used (LFU) in order to improve the systematic performance of hoarding. In LRFU, equal weights are given to the most recent history as well as past accesses of the files. However, since a user's long-term file access patterns often change, the invention provides a scheme that gives more priority to recent behavior over past behavior and accommodates more adaptive changes in access patterns. The invention describes a modified LRFU algorithm wherein the utility value based on access history at time t, H(t), can be calculated using:
- H(t) = Hr(t) + Ho(t), where Hr(t) = (1/2)^(λr(t − tQ)) and Ho(t) = Σi=1..Q−1 (1/2)^(λo(t − ti))
- where ‘Q’ represents the total number of accesses before time ‘t’ and ‘ti’ represents the time of the ith access of the same file, i = 1, …, Q. The controllable parameters, λr and λo, determine the weights given to the recent history and the old history. Hr(t) and Ho(t) denote the contributions of the most recent access and the older accesses, respectively. The most recent access, Hr(t), is isolated from the other, older accesses, Ho(t), as the recent access information captures the short-term behavior of the user whereas the older accesses indicate the long-term behavior. Notably, when λr = λo, the method of the present invention is equivalent to the LRFU model.
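A sketch of the modified-LRFU history utility under the definitions above. The exact closed form of H(t) is rendered as an image in the published document, so the formula implemented here — the most recent access decayed with rate λr, all older accesses summed with decay rate λo — is a reconstruction from the surrounding text, not a verbatim copy.

```python
def history_utility(access_times, t, lam_r=0.1, lam_o=0.05):
    """H(t) = Hr(t) + Ho(t) for one file (reconstructed form).

    access_times: sorted access times t1 <= ... <= tQ, all earlier than t.
    Hr decays the most recent access with rate lam_r; Ho sums the older
    accesses decayed with rate lam_o. With lam_r == lam_o this collapses
    to a single-rate LRFU-style recency/frequency value.
    """
    if not access_times:
        return 0.0
    h_r = 0.5 ** (lam_r * (t - access_times[-1]))      # most recent access
    h_o = sum(0.5 ** (lam_o * (t - ti)) for ti in access_times[:-1])  # older
    return h_r + h_o
```

Choosing λr < λo makes the most recent access decay more slowly than the old history, which matches the patent's stated goal of weighting recent behavior more heavily.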
- H(t) can be computed from the utility value at the previous hoard event and the file references made immediately after that hoard instant. Assuming that the last hoarding time is th, the current utilities can be computed using:
- Hr(t) = (1/2)^(λr(t − tn)), Ho(t) = (1/2)^(λo(t − th))·[Hr(th) + Ho(th)] + Σi=1..n−1 (1/2)^(λo(t − ti))
- wherein ‘n’ represents the number of accesses made after the last hoarding time.
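Because the decay is exponential, the recomputation can be done incrementally. The published equation is an image, so the form below is an assumption, simplified to a single decay rate (λr = λo = λ): decay the value stored at the last hoard time th, then add the n accesses made since.

```python
def history_utility_incremental(h_prev, t_h, new_times, t, lam=0.05):
    """Update H from the value h_prev recorded at the last hoard time t_h.

    new_times: the n access times made after t_h (all < t). Decaying
    h_prev by the elapsed time and adding the new terms reproduces a
    full recomputation over all accesses, so only h_prev and the
    post-hoard references need to be stored.
    """
    decayed = h_prev * 0.5 ** (lam * (t - t_h))
    return decayed + sum(0.5 ** (lam * (t - ti)) for ti in new_times)
```

The test below checks the claimed equivalence: incrementally updating from a value computed at t = 5 matches recomputing over the full access history.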
- The duration of activity for the file is the time period between an “open” reference and the corresponding “close” reference by the same hoarding process. Files with longer durations of activity have a higher probability of being accessed and hence have higher utility values for hoarding. Notably, when multiple processes are involved, these processes can open a single file concurrently and can perform various accesses in parallel. However, durations of activity created by all the hoarding processes are added between two consecutive hoarding times. The utility value based on activity for the file is computed for each file using the last utility value and the current sum of durations of activities created by all the hoarding processes. The utility value based on an activity at time ‘t’ is computed by:
-
- where di represents the duration of ith activity, i=1, . . . , k, and k is the total number of active periods between the last hoarding event and current hoarding event. The parameter β controls the weight for the history as time progresses. The denominator in the function Δ(th,t) is used for normalization.
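An incremental update of the activity utility A(t) at each hoard event can be sketched as follows. The exact update rule is not reproduced above, so this assumes a simple exponentially weighted form: the new durations d1, …, dk are summed, normalized by the elapsed interval Δ(th, t) = t − th, and blended with the previous value using β; the function name and default β are hypothetical.

```python
def activity_utility(prev_a, durations, t_last, t_now, beta=0.7):
    """Duration-of-activity utility A(t), updated at each hoard event.

    prev_a:    the utility value at the last hoarding time t_last
    durations: open->close durations d_1..d_k observed since t_last,
               summed over all processes
    The sum of durations is normalized by the elapsed interval
    (t_now - t_last); beta weights history against new observations.
    """
    if t_now <= t_last:
        return prev_a
    fresh = sum(durations) / (t_now - t_last)  # normalized recent activity
    return beta * prev_a + (1.0 - beta) * fresh
```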
- Regularity of file references plays an important role in capturing the long-term behavior of the user. Regularity of access depends on how “regularly” a file is accessed by some process and may reflect the user's day-to-day behavior with the system. For example, a typical user checks his scheduler daily, creating events and updating them intermittently. Although these files are accessed infrequently, they are important for the hoarding process because they have a higher chance of being referenced within a specific time period. Assuming regularity based on daily reference, a day can be divided into several time segments. The utility value based on regularity at the rth segment on the mth day is computed as C(r,m)=δC(r,m−1)+(1−δ)Δc(r,m), 0≦δ≦1, wherein Δc(r,m)=1 if the file is accessed at least once in the rth time segment within the day, and Δc(r,m)=0 if there is no access during that segment. The controllable parameter δ determines the weights given to the current regularity measure and the old regularity measures.
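The regularity update above maps directly to code; a minimal sketch (function name and default δ are hypothetical) carries one segment's utility from day to day:

```python
def regularity_utility(prev_c, accessed_in_segment, delta=0.8):
    """C(r, m) = delta * C(r, m-1) + (1 - delta) * dc(r, m),
    where dc(r, m) is 1 if the file was accessed at least once in
    segment r of day m, and 0 otherwise (0 <= delta <= 1)."""
    dc = 1.0 if accessed_in_segment else 0.0
    return delta * prev_c + (1.0 - delta) * dc
```

A larger δ discounts the current day's observation, favoring the established long-term pattern.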
- Based on these three factors of frequency/recency, duration of activity and regularity of access, the utility value of a file at time ‘t’ is computed using the equation:
-
U(t) = w1H(t) + w2A(t) + w3C(r,m), (7) - where w1, w2 and w3 are the corresponding weights given to frequency/recency, duration of activity and regularity of access. The utility function U(t) can be calculated based on previous utility components at the last hoarding time th and the subsequent file accesses.
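Eq. (7) is a plain weighted sum of the three components; the default weights below are hypothetical and would be tuned per deployment:

```python
def total_utility(h, a, c, w1=0.5, w2=0.3, w3=0.2):
    """U(t) = w1*H(t) + w2*A(t) + w3*C(r, m)  -- Eq. (7)."""
    return w1 * h + w2 * a + w3 * c
```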
- The hoard set selection problem is to find an optimal set of files that fits into the available capacity C of the cache, while maximizing the total utility of the selected files. Let Uj(t) denote the hoarding utility of the jth file at hoarding time t, where j=1, . . . ,N, and N denotes the total number of files in the system. Let αj be an indicator variable such that αj=1 if file j is selected for hoarding and αj=0 otherwise. The automated hoarding problem can be formulated as an optimization problem subject to the constraint of available cache size: choose the αj so as to maximize the total hoarding utility Σj=1N αjUj(t), subject to the size constraint that Σj=1N αjSj(t) does not exceed the available cache capacity C. If Sj(t) denotes the actual size of the jth file, the objective function is:
- maximize Σj=1N αjUj(t), (8)
- subject to the condition that:
- Σj=1N αjSj(t) ≦ C, (9) and αj ∈ {0, 1} for j=1, . . . ,N, (10)
- where the objective function and the constraints are linear functions of the variables αj and all variables are restricted to binary values; such a problem is referred to as an integer linear program (ILP), or more specifically, a binary integer linear program (BIP).
- Relaxing the constraint in Eq. (10) so that the αj can take fractional values reduces the problem to a linear program (LP), which can be solved using standard mathematical toolkits, as will be apparent to a person skilled in the art. However, the values of αj returned may be fractional. To obtain integral values of αj, a rounding-off approach is used wherein the αj returned by the LP solution are sorted in descending order. Each αj is then rounded to 1 if Eq. (9) is not violated; otherwise it is set to 0, continuing until all the αj are exhausted.
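The rounding step can be sketched independently of any particular LP solver; the fractional αj below are assumed to come from an off-the-shelf LP toolkit, and the function name is hypothetical:

```python
def round_lp_solution(alphas, sizes, capacity):
    """Round fractional LP values alpha_j to a 0/1 hoard set.

    Files are visited in descending order of alpha_j; each alpha_j is
    rounded up to 1 only if the cache-capacity constraint (Eq. 9)
    still holds, otherwise it is set to 0.
    """
    order = sorted(range(len(alphas)), key=lambda j: alphas[j], reverse=True)
    used, chosen = 0, set()
    for j in order:
        if used + sizes[j] <= capacity:
            chosen.add(j)
            used += sizes[j]
    return chosen
```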
- In a further embodiment, an algorithm that sorts by Uj(t)/Sj(t) may also be used to solve the above-mentioned problem. In this technique, the files are sorted in decreasing order of Uj(t)/Sj(t) and selected one after another from the beginning of the list until Eq. (9) would be violated. This gives preference to files that have higher utility per unit size.
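This utility-density embodiment is the familiar greedy knapsack heuristic; a minimal sketch (function name is an assumption) stops at the first file that would overflow the cache, as the text describes:

```python
def greedy_hoard_set(utilities, sizes, capacity):
    """Pick files in decreasing order of utility per unit size,
    stopping as soon as the capacity constraint (Eq. 9) would be
    violated."""
    order = sorted(range(len(sizes)),
                   key=lambda j: utilities[j] / sizes[j], reverse=True)
    chosen, used = [], 0
    for j in order:
        if used + sizes[j] > capacity:
            break
        chosen.append(j)
        used += sizes[j]
    return chosen
```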
- Once a candidate content set is chosen, the client 210 needs to obtain the relevant files contained in the content set from the server 230. This file transfer from the server 230 can happen as a background task, either periodically or when disconnection is anticipated, for example based on a known deadline. In the case of periodic hoarding, the decision to hoard depends on changes in the user's working set, the device characteristics and the network characteristics. A change in the user's working set is captured by the difference between the files currently hoarded on the device and the predicted hoard set. When the user's working set does not change drastically, hoarding can be postponed until there is a significant change in the working set. Any change in current device and network characteristics is captured by the difference in, for example, available battery power, network bandwidth and the like. In the case of good network connectivity or ample battery power, hoarding can be done proactively. On the other hand, disconnection may happen soon after the device moves to a weakly connected state. In this case the mobile device can anticipate an imminent disconnection, either from signal-strength indications or from TCP timeouts, packet retransmissions and the like, and the system, i.e., the client, the server and/or the gateway, can then schedule a hoard transfer such that the most important files are transferred first, before the entire content set has been transferred from the server to the client.
- In the case of deadline-based hoarding, where the client knows the estimated time of disconnection, the selected hoard set should be transferred before the client disconnects. If the estimated disconnect time is D, the hoarding process completion time is E, and the estimated time to transfer the entire hoard set, which depends on the available network bandwidth and the total size of the file set, is T, then transferring all the files in the hoard set requires that T≦D−E, as shown in
FIG. 3. When D is small and a large number of files have to be transmitted over a weak connection, the client might be able to transfer only a partial set. There should therefore be a scheduling policy which, depending on the available network bandwidth and the size of the candidate set to be hoarded, determines when to start transferring files and the order in which the transfer should happen, such that the files of greater importance are transferred ahead of the others.
- Additionally, for constrained clients such as portable handheld devices, power consumption for the file transfer is preferably kept below a threshold P. Let δj(t) be the number of bytes of the jth file that need to be updated at the client at time t. To transfer all files in the hoard set, the power consumed is Σj=1N eδj(t), where “e” is the energy consumed (joules/byte) to receive one byte of data. If the complete file needs to be updated at time t, then δj(t) equals Sj(t). Once again, if P is small, the client might have to transfer a partial list of files. Thus, given a hoard deadline D and a hoard energy threshold P, transfer scheduling decides on an ordering of file transfers such that the maximum-utility files are transferred before the deadline expires or the energy threshold is overrun. File transfer continues until one or both of the constraints D and P are violated. This is a policy-based scheduling algorithm for computing a hoarding schedule that prioritizes the files to be fetched. The policy-based schedule performs the following tasks:
-
- selecting files in decreasing order of utility or increasing order of size;
- grouping the sorted list of files according to type and selecting based on file type;
- selecting files that have been observed to cause a hard miss, and lowering the priority of files that cause a soft miss so that they are chosen only after all other files are hoarded; and
- prompting the user to prioritize files.
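One of the listed policies, transferring in decreasing order of utility subject to both the deadline D and the energy threshold P, can be sketched as follows. The function name and units are assumptions (bandwidth in bytes per unit time, e in joules per byte):

```python
def schedule_transfers(files, bandwidth, e, now, deadline, power_budget):
    """Order files by utility and schedule transfers until either the
    disconnection deadline D or the energy threshold P would be exceeded.

    files: list of (utility, delta_bytes) pairs, where delta_bytes is
    the number of bytes of the file to be updated at the client.
    Returns indices of the files scheduled, highest utility first.
    """
    order = sorted(range(len(files)), key=lambda j: files[j][0], reverse=True)
    t, energy, scheduled = now, 0.0, []
    for j in order:
        _, nbytes = files[j]
        t_done = t + nbytes / bandwidth   # transfer-completion time
        e_done = energy + e * nbytes      # cumulative energy (joules)
        if t_done > deadline or e_done > power_budget:
            break                         # T <= D - E or P would be violated
        t, energy = t_done, e_done
        scheduled.append(j)
    return scheduled
```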
-
FIG. 4 schematically shows an embodiment of the system 400, wherein the system 400 can comprise a client, a server or a gateway. It should be understood that FIG. 4 is only intended to depict the representative major components of the system 400 and that individual components may have greater complexity than that represented in FIG. 4. Several particular examples of such additional complexity or additional variations are disclosed herein; it being understood that these are by way of example only and are not necessarily the only such variations. - The
system 400 comprises a system bus 401. A processor 410, a memory 420, a disk I/O adapter 430, a network interface (not shown in the figure), a transceiver and a UI adapter 440 are operatively connected to the system bus 401. A disk storage device 431 is operatively coupled to the disk I/O adapter 430; in the case of the client this is an optional element. A keyboard 441, a mouse 442 (optional element) and a display 443 are operatively coupled to the UI adapter 440. A display device 451 is operatively coupled to the system bus 401 via a display adapter 450. The terminal/display interface 450 is used to directly connect one or more display units 451 to the computer system 400. - The
system 400 is configured to implement the hoarding process via a signal embodied in a carrier wave or stored on a tangible computer-readable medium such as a disk storage device 431 or the memory of the client and/or gateway. The client stores and runs the program, whereas the server stores the content that needs to be fetched and hoarded on the client. The system 400 is configured to load the program into memory 420 and execute the program on the processor 410, on the client, the server and/or the gateway. The user inputs information to the system 400 using the keyboard 441 and/or the mouse 442. The system 400 outputs information to the display device 451 coupled via the display adapter 450. The skilled person will understand that there are numerous other embodiments of the workstation known in the art and that the present embodiment serves the purpose of illustrating the invention and must not be interpreted as limiting the invention to this particular embodiment.
- The disk I/O adapter 430 is coupled to the disk storage device 431 and, in turn, to the system bus 401; the disk storage devices represent one or more mass storage devices, such as a direct access storage device or a readable/writable optical disk drive. The disk I/O adapter 430 supports the attachment of one or more mass storage devices 431, which are typically rotating magnetic disk drive storage devices, although there could alternatively be other devices, including arrays of disk drives configured to appear as a single large storage device to a host and/or archival storage media, such as hard disk drives, tape (e.g., mini-DV), writable compact disks (e.g., CD-R and CD-RW), digital versatile disks (e.g., DVD, DVD-R, DVD+R, DVD+RW, DVD-RAM), high-density DVD (HD DVD), holographic storage systems, blue-laser disks, IBM Millipede devices and the like. - The network interfaces and the transceiver allow the
system 400 to communicate with other computing systems 400 over a communications medium, preferably over a network. The network may be any suitable network or combination of networks and may support any appropriate protocol suitable for communication of data and/or code to/from multiple computing systems 400. Accordingly, the network interfaces can be any device that facilitates such communication, regardless of whether the network connection is made using present-day analog and/or digital techniques or via some networking mechanism of the future. Suitable communication media include, but are not limited to, networks implemented using one or more of the IEEE (Institute of Electrical and Electronics Engineers) 802.3x “Ethernet” specification; cellular transmission networks; and wireless networks implementing one of the IEEE 802.11x, IEEE 802.16, General Packet Radio Service (“GPRS”), FRS (Family Radio Service), or Bluetooth specifications. Those skilled in the art will appreciate that many different network and transport protocols can be used to implement the communication medium. The Transmission Control Protocol/Internet Protocol (“TCP/IP”) suite contains suitable network and transport protocols. - The
system 400 is a general-purpose computing device. Accordingly, the CPUs 410 may be any device capable of executing program instructions stored in the main memory 420 and/or a supplementary memory (not shown in the figure) and may themselves be constructed from one or more microprocessors and/or integrated circuits. The main memory unit 420 in this embodiment also comprises an operating system, a plurality of application programs (such as the program installation manager), and some program data. The system 400 contains multiple processors and/or processing cores, as is typical of larger, more capable computer systems. - The
computing system 400 of FIG. 4 can have multiple attached terminals 451, such as might be typical of a multi-user “mainframe” computer system. In such a case, the actual number of attached devices is typically greater than the number shown in FIG. 4, although the present invention is not limited to systems of any particular size. The computing system 400 may alternatively be a single-user system, typically containing only a single user display and keyboard input, or might be a server or similar device which has little or no direct user interface but receives requests from other computer systems (clients). In other embodiments, the computing system 400 may be implemented as a personal computer, portable computer, laptop or notebook computer, PDA (Personal Digital Assistant), tablet computer, pocket computer, telephone, pager, automobile, teleconferencing system, appliance, or any other appropriate type of electronic device. - Embodiments of the present invention may also be delivered as part of a service engagement with a client corporation, nonprofit organization, government entity, internal organizational structure, or the like. Aspects of these embodiments may include configuring a computer system to perform, and deploying software, hardware, and web services that implement, some or all of the methods described herein. Aspects of these embodiments may also include analyzing the client's operations, creating recommendations responsive to the analysis, building systems that implement portions of the recommendations, integrating the systems into existing processes and infrastructure, metering use of the systems, allocating expenses to users of the systems, and billing for use of the systems.
- The embodiments described with reference to
FIGS. 1-4 generally use client-server network architecture. However, those skilled in the art will appreciate that other network architectures are within the scope of the present invention. Examples of other suitable network architectures include peer-to-peer architectures, grid architectures, and multi-tier architectures. Accordingly, the terms web server and client computer should not be construed to limit the invention to client-server network architectures. - The various software components illustrated in
FIGS. 1-4 and implementing various embodiments of the invention may be implemented in a number of manners, including using various computer software applications, routines, components, programs, objects, modules, data structures, etc., referred to hereinafter as “computer programs,” or simply “programs.” The computer programs typically comprise one or more instructions that are resident at various times in various memory and storage devices in the computer system, and that, when read and executed by one or more processors in the computer system, cause the computer system to perform the steps necessary to execute steps or elements comprising the various aspects of an embodiment of the invention. The various software components may also be located on different systems than those depicted in FIGS. 1-4. - The accompanying figures and this description depict and describe embodiments of the present invention, and features and components thereof. Those skilled in the art will appreciate that any particular program nomenclature used in this description was merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. Thus, for example, the routines executed to implement the embodiments of the invention, whether implemented as part of an operating system or as a specific application, component, program, module, object, or sequence of instructions, could have been referred to as a “program”, “application”, “server”, or other meaningful nomenclature. Indeed, other alternative hardware and/or software environments may be used without departing from the scope of the invention. Therefore, it is desired that the embodiments described herein be considered in all respects as illustrative, not restrictive, and that reference be made to the appended claims for determining the scope of the invention.
- Although the invention has been described with reference to the embodiments described above, it will be evident that other embodiments may be alternatively used to achieve the same object. The scope of the invention is not limited to the embodiments described above, but can also be applied to software programs and computer program products in general. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs should not limit the scope of the claim. The invention can be implemented by means of hardware and software comprising several distinct elements.
Claims (2)
1. A method for hoarding content on a requesting entity comprising:
computing a schedule for hoarding a content set at the requesting entity;
receiving the content set at the requesting entity;
hoarding the received content set on the requesting entity;
computing the content set to be hoarded at the requesting entity;
computing a utility value for each of the entities in the content set;
prioritizing each of the entities in the content set in an order of importance based on the utility value;
transmitting a request comprising the content set from the requesting entity to a respective servicing entity at the computed schedule;
transmitting said request from the requesting entity to a respective intermediate communication medium,
pre-fetching the respective content set from the servicing entity;
caching the content set on the intermediate communications medium;
initiating a push mechanism on the intermediate communication medium to push the content set to the requesting entity at the computed schedule; and
updating the intermediate communication medium by transmitting the request from the requesting entity to the intermediate communication medium at periodic intervals,
wherein the request comprises the content set and the computed schedule,
wherein the request comprises the content set,
wherein the intermediate communication medium is configured to compute a schedule for hoarding the content set,
wherein the intermediate communication medium is configured to compute the content set and a respective schedule for hoarding the content set,
wherein the content set comprises entities and respective attributes associated with each of the entities, and
wherein the intermediate communication medium is configured to perform in addition the function of a server.
2-23. (canceled)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/567,936 US20080140941A1 (en) | 2006-12-07 | 2006-12-07 | Method and System for Hoarding Content on Mobile Clients |
US12/061,716 US7882092B2 (en) | 2006-12-07 | 2008-04-03 | Method and system for hoarding content on mobile clients |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/567,936 US20080140941A1 (en) | 2006-12-07 | 2006-12-07 | Method and System for Hoarding Content on Mobile Clients |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/061,716 Continuation US7882092B2 (en) | 2006-12-07 | 2008-04-03 | Method and system for hoarding content on mobile clients |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080140941A1 true US20080140941A1 (en) | 2008-06-12 |
Family
ID=39523811
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/567,936 Abandoned US20080140941A1 (en) | 2006-12-07 | 2006-12-07 | Method and System for Hoarding Content on Mobile Clients |
US12/061,716 Expired - Fee Related US7882092B2 (en) | 2006-12-07 | 2008-04-03 | Method and system for hoarding content on mobile clients |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/061,716 Expired - Fee Related US7882092B2 (en) | 2006-12-07 | 2008-04-03 | Method and system for hoarding content on mobile clients |
Country Status (1)
Country | Link |
---|---|
US (2) | US20080140941A1 (en) |
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090089341A1 (en) * | 2007-09-28 | 2009-04-02 | Microsoft Corporation | Distriuted storage for collaboration servers |
US20090125462A1 (en) * | 2007-11-14 | 2009-05-14 | Qualcomm Incorporated | Method and system using keyword vectors and associated metrics for learning and prediction of user correlation of targeted content messages in a mobile environment |
US20090157834A1 (en) * | 2007-12-14 | 2009-06-18 | Qualcomm Incorporated | Method and system for multi-level distribution information cache management in a mobile environment |
US20100036858A1 (en) * | 2008-08-06 | 2010-02-11 | Microsoft Corporation | Meta file system - transparently managing storage using multiple file systems |
WO2010076140A1 (en) * | 2008-12-30 | 2010-07-08 | International Business Machines Corporation | Managing data across a plurality of data storage devices based upon collaboration relevance |
US20100318745A1 (en) * | 2009-06-16 | 2010-12-16 | Microsoft Corporation | Dynamic Content Caching and Retrieval |
US20120198171A1 (en) * | 2010-09-28 | 2012-08-02 | Texas Instruments Incorporated | Cache Pre-Allocation of Ways for Pipelined Allocate Requests |
US20130007260A1 (en) * | 2011-07-01 | 2013-01-03 | Google Inc. | Access to network content |
US8627021B2 (en) * | 2011-08-31 | 2014-01-07 | Qualcomm Incorporated | Method and apparatus for load-based prefetch access |
US8655819B1 (en) | 2011-09-15 | 2014-02-18 | Google Inc. | Predicting user navigation events based on chronological history data |
US20140136595A1 (en) * | 2009-08-17 | 2014-05-15 | Yahoo! Inc. | Push pull caching for social network information |
US8744988B1 (en) | 2011-07-15 | 2014-06-03 | Google Inc. | Predicting user navigation events in an internet browser |
US20140189121A1 (en) * | 2012-10-18 | 2014-07-03 | Tara Chand Singhal | Apparatus and method for a thin form-factor technology for use in handheld smart phone and tablet devices |
US8788711B2 (en) | 2011-06-14 | 2014-07-22 | Google Inc. | Redacting content and inserting hypertext transfer protocol (HTTP) error codes in place thereof |
US8793235B2 (en) | 2012-01-19 | 2014-07-29 | Google Inc. | System and method for improving access to search results |
US8862529B1 (en) | 2011-09-15 | 2014-10-14 | Google Inc. | Predicting user navigation events in a browser using directed graphs |
US8887239B1 (en) | 2012-08-08 | 2014-11-11 | Google Inc. | Access to network content |
JP2015036928A (en) * | 2013-08-15 | 2015-02-23 | 富士通株式会社 | Information processing system, information processing device, control program of information processing device, and control method of information processing system |
US20150113521A1 (en) * | 2013-10-18 | 2015-04-23 | Fujitsu Limited | Information processing method and information processing apparatus |
US9104664B1 (en) | 2011-10-07 | 2015-08-11 | Google Inc. | Access to search results |
US9141722B2 (en) | 2012-10-02 | 2015-09-22 | Google Inc. | Access to network content |
US20150358406A1 (en) * | 2013-02-27 | 2015-12-10 | Hewlett-Packard Development Company, L.P. | Data synchronization |
US9392074B2 (en) | 2007-07-07 | 2016-07-12 | Qualcomm Incorporated | User profile generation architecture for mobile content-message targeting |
US9398113B2 (en) | 2007-07-07 | 2016-07-19 | Qualcomm Incorporated | Methods and systems for providing targeted information using identity masking in a wireless communications device |
US20160277776A1 (en) * | 2015-03-19 | 2016-09-22 | Amazon Technologies, Inc. | Uninterrupted playback of video streams using lower quality cached files |
US9531830B2 (en) | 2014-07-21 | 2016-12-27 | Sap Se | Odata offline cache for mobile device |
US9584579B2 (en) | 2011-12-01 | 2017-02-28 | Google Inc. | Method and system for providing page visibility information |
US9613009B2 (en) | 2011-05-04 | 2017-04-04 | Google Inc. | Predicting user navigation events |
US9769285B2 (en) | 2011-06-14 | 2017-09-19 | Google Inc. | Access to network content |
US9846842B2 (en) | 2011-07-01 | 2017-12-19 | Google Llc | Predicting user navigation events |
US20180081649A1 (en) * | 2013-03-21 | 2018-03-22 | Razer (Asia-Pacific) Pte. Ltd. | Storage optimization in computing devices |
US9946792B2 (en) | 2012-05-15 | 2018-04-17 | Google Llc | Access to network content |
US10264094B2 (en) * | 2009-05-01 | 2019-04-16 | International Business Machines Corporation | Processing incoming messages |
US10279757B2 (en) * | 2015-10-30 | 2019-05-07 | Audi Ag | Control device update in a motor vehicle |
US20190149629A1 (en) * | 2017-11-15 | 2019-05-16 | Cisco Technology, Inc. | Application buffering of packets by fog computing node for deterministic network transport |
US10389838B2 (en) | 2014-05-09 | 2019-08-20 | Amazon Technologies, Inc. | Client-side predictive caching for content |
US10574779B2 (en) | 2012-08-23 | 2020-02-25 | Amazon Technologies, Inc. | Predictive caching for content |
US11113345B2 (en) * | 2014-07-17 | 2021-09-07 | Bigtincan Holdings Limited | Method and system for providing contextual electronic content |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7990900B2 (en) * | 2007-06-28 | 2011-08-02 | Alcatel-Lucent Usa Inc. | Event notification control based on data about a user's communication device stored in a user notification profile |
US9361326B2 (en) * | 2008-12-17 | 2016-06-07 | Sap Se | Selectable data migration |
US8693353B2 (en) * | 2009-12-28 | 2014-04-08 | Schneider Electric USA, Inc. | Intelligent ethernet gateway system and method for optimizing serial communication networks |
FR2961924A1 (en) * | 2010-06-29 | 2011-12-30 | France Telecom | MANAGING THE PLACE OF DATA STORAGE IN A DISTRIBUTED STORAGE SYSTEM |
US10373121B2 (en) | 2011-09-13 | 2019-08-06 | International Business Machines Corporation | Integrating a calendaring system with a mashup page containing widgets to provide information regarding the calendared event |
US9558508B2 (en) * | 2013-03-15 | 2017-01-31 | Microsoft Technology Licensing, Llc | Energy-efficient mobile advertising |
US10623243B2 (en) * | 2013-06-26 | 2020-04-14 | Amazon Technologies, Inc. | Management of computing sessions |
JP6340917B2 (en) * | 2014-05-23 | 2018-06-13 | 富士ゼロックス株式会社 | Document management program, document browsing / editing program, document management apparatus, terminal apparatus, and document management system |
US10015625B2 (en) * | 2015-02-11 | 2018-07-03 | Flipboard, Inc. | Providing digital content for offline consumption |
US10820167B2 (en) * | 2017-04-27 | 2020-10-27 | Facebook, Inc. | Systems and methods for automated content sharing with a peer |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6182133B1 (en) * | 1998-02-06 | 2001-01-30 | Microsoft Corporation | Method and apparatus for display of information prefetching and cache status having variable visual indication based on a period of time since prefetching |
US6243755B1 (en) * | 1995-03-07 | 2001-06-05 | Kabushiki Kaisha Toshiba | Information processing system using information caching based on user activity |
US20030145038A1 (en) * | 2002-01-25 | 2003-07-31 | Bin Tariq Muhammad Mukarram | System for management of cacheable streaming content in a packet based communication network with mobile hosts |
US20030187984A1 (en) * | 2002-03-29 | 2003-10-02 | International Business Machines Corporation | Method and apparatus for content pre-fetching and preparation |
US20050080994A1 (en) * | 2003-10-14 | 2005-04-14 | International Business Machines Corporation | Method of dynamically controlling cache size |
US20050198309A1 (en) * | 2000-03-20 | 2005-09-08 | Nec Corporation | System and method for intelligent web content fetch and delivery of any whole and partial undelivered objects in ascending order of object size |
US6959436B2 (en) * | 2000-12-15 | 2005-10-25 | Innopath Software, Inc. | Apparatus and methods for intelligently providing applications and data on a mobile device system |
US20060004923A1 (en) * | 2002-11-02 | 2006-01-05 | Cohen Norman H | System and method for using portals by mobile devices in a disconnected mode |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6075770A (en) * | 1996-11-20 | 2000-06-13 | Industrial Technology Research Institute | Power spectrum-based connection admission control for ATM networks |
JP2001325457A (en) * | 2000-05-15 | 2001-11-22 | Sony Corp | System, device and method for managing contents |
US7221943B2 (en) * | 2003-02-24 | 2007-05-22 | Autocell Laboratories, Inc. | Wireless station protocol program |
US7636132B2 (en) * | 2003-04-17 | 2009-12-22 | Sharp Kabushiki Kaisha | Transmitter, receiver, wireless system, control method, control program, and computer-readable recording medium containing the program |
US8200775B2 (en) * | 2005-02-01 | 2012-06-12 | Newsilike Media Group, Inc | Enhanced syndication |
JP2005109722A (en) * | 2003-09-29 | 2005-04-21 | Toshiba Corp | Radio communication equipment and radio communication method |
-
2006
- 2006-12-07 US US11/567,936 patent/US20080140941A1/en not_active Abandoned
-
2008
- 2008-04-03 US US12/061,716 patent/US7882092B2/en not_active Expired - Fee Related
Cited By (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9392074B2 (en) | 2007-07-07 | 2016-07-12 | Qualcomm Incorporated | User profile generation architecture for mobile content-message targeting |
US9596317B2 (en) | 2007-07-07 | 2017-03-14 | Qualcomm Incorporated | Method and system for delivery of targeted information based on a user profile in a mobile communication device |
US9497286B2 (en) | 2007-07-07 | 2016-11-15 | Qualcomm Incorporated | Method and system for providing targeted information based on a user profile in a mobile environment |
US9485322B2 (en) | 2007-07-07 | 2016-11-01 | Qualcomm Incorporated | Method and system for providing targeted information using profile attributes with variable confidence levels in a mobile environment |
US9398113B2 (en) | 2007-07-07 | 2016-07-19 | Qualcomm Incorporated | Methods and systems for providing targeted information using identity masking in a wireless communications device |
US8195700B2 (en) * | 2007-09-28 | 2012-06-05 | Microsoft Corporation | Distributed storage for collaboration servers |
US20090089341A1 (en) * | 2007-09-28 | 2009-04-02 | Microsoft Corporation | Distributed storage for collaboration servers |
US8650216B2 (en) | 2007-09-28 | 2014-02-11 | Microsoft Corporation | Distributed storage for collaboration servers |
US20090125462A1 (en) * | 2007-11-14 | 2009-05-14 | Qualcomm Incorporated | Method and system using keyword vectors and associated metrics for learning and prediction of user correlation of targeted content messages in a mobile environment |
US20090216847A1 (en) * | 2007-11-14 | 2009-08-27 | Qualcomm Incorporated | Method and system for message value calculation in a mobile environment |
US9705998B2 (en) | 2007-11-14 | 2017-07-11 | Qualcomm Incorporated | Method and system using keyword vectors and associated metrics for learning and prediction of user correlation of targeted content messages in a mobile environment |
US9203912B2 (en) | 2007-11-14 | 2015-12-01 | Qualcomm Incorporated | Method and system for message value calculation in a mobile environment |
US9203911B2 (en) | 2007-11-14 | 2015-12-01 | Qualcomm Incorporated | Method and system for using a cache miss state match indicator to determine user suitability of targeted content messages in a mobile environment |
US9391789B2 (en) * | 2007-12-14 | 2016-07-12 | Qualcomm Incorporated | Method and system for multi-level distribution information cache management in a mobile environment |
US20090157834A1 (en) * | 2007-12-14 | 2009-06-18 | Qualcomm Incorporated | Method and system for multi-level distribution information cache management in a mobile environment |
US20100036858A1 (en) * | 2008-08-06 | 2010-02-11 | Microsoft Corporation | Meta file system - transparently managing storage using multiple file systems |
WO2010076140A1 (en) * | 2008-12-30 | 2010-07-08 | International Business Machines Corporation | Managing data across a plurality of data storage devices based upon collaboration relevance |
US10264094B2 (en) * | 2009-05-01 | 2019-04-16 | International Business Machines Corporation | Processing incoming messages |
US20100318745A1 (en) * | 2009-06-16 | 2010-12-16 | Microsoft Corporation | Dynamic Content Caching and Retrieval |
US9137329B2 (en) * | 2009-08-17 | 2015-09-15 | Yahoo! Inc. | Push pull caching for social network information |
US20140136595A1 (en) * | 2009-08-17 | 2014-05-15 | Yahoo! Inc. | Push pull caching for social network information |
US8683137B2 (en) * | 2010-09-28 | 2014-03-25 | Texas Instruments Incorporated | Cache pre-allocation of ways for pipelined allocate requests |
US20120198171A1 (en) * | 2010-09-28 | 2012-08-02 | Texas Instruments Incorporated | Cache Pre-Allocation of Ways for Pipelined Allocate Requests |
US9613009B2 (en) | 2011-05-04 | 2017-04-04 | Google Inc. | Predicting user navigation events |
US10896285B2 (en) | 2011-05-04 | 2021-01-19 | Google Llc | Predicting user navigation events |
US9769285B2 (en) | 2011-06-14 | 2017-09-19 | Google Inc. | Access to network content |
US8788711B2 (en) | 2011-06-14 | 2014-07-22 | Google Inc. | Redacting content and inserting hypertext transfer protocol (HTTP) error codes in place thereof |
US9928223B1 (en) | 2011-06-14 | 2018-03-27 | Google Llc | Methods for prerendering and methods for managing and configuring prerendering operations |
US11019179B2 (en) | 2011-06-14 | 2021-05-25 | Google Llc | Access to network content |
US11032388B2 (en) | 2011-06-14 | 2021-06-08 | Google Llc | Methods for prerendering and methods for managing and configuring prerendering operations |
US9846842B2 (en) | 2011-07-01 | 2017-12-19 | Google Llc | Predicting user navigation events |
US9530099B1 (en) | 2011-07-01 | 2016-12-27 | Google Inc. | Access to network content |
US10332009B2 (en) | 2011-07-01 | 2019-06-25 | Google Llc | Predicting user navigation events |
US8745212B2 (en) * | 2011-07-01 | 2014-06-03 | Google Inc. | Access to network content |
US20130007260A1 (en) * | 2011-07-01 | 2013-01-03 | Google Inc. | Access to network content |
US10089579B1 (en) | 2011-07-15 | 2018-10-02 | Google Llc | Predicting user navigation events |
US9075778B1 (en) | 2011-07-15 | 2015-07-07 | Google Inc. | Predicting user navigation events within a browser |
US8744988B1 (en) | 2011-07-15 | 2014-06-03 | Google Inc. | Predicting user navigation events in an internet browser |
US8627021B2 (en) * | 2011-08-31 | 2014-01-07 | Qualcomm Incorporated | Method and apparatus for load-based prefetch access |
US9443197B1 (en) | 2011-09-15 | 2016-09-13 | Google Inc. | Predicting user navigation events |
US8655819B1 (en) | 2011-09-15 | 2014-02-18 | Google Inc. | Predicting user navigation events based on chronological history data |
US8862529B1 (en) | 2011-09-15 | 2014-10-14 | Google Inc. | Predicting user navigation events in a browser using directed graphs |
US9104664B1 (en) | 2011-10-07 | 2015-08-11 | Google Inc. | Access to search results |
US9584579B2 (en) | 2011-12-01 | 2017-02-28 | Google Inc. | Method and system for providing page visibility information |
US8793235B2 (en) | 2012-01-19 | 2014-07-29 | Google Inc. | System and method for improving access to search results |
US9672285B2 (en) | 2012-01-19 | 2017-06-06 | Google Inc. | System and method for improving access to search results |
US10572548B2 (en) | 2012-01-19 | 2020-02-25 | Google Llc | System and method for improving access to search results |
US10754900B2 (en) | 2012-05-15 | 2020-08-25 | Google Llc | Access to network content |
US9946792B2 (en) | 2012-05-15 | 2018-04-17 | Google Llc | Access to network content |
US8887239B1 (en) | 2012-08-08 | 2014-11-11 | Google Inc. | Access to network content |
US10574779B2 (en) | 2012-08-23 | 2020-02-25 | Amazon Technologies, Inc. | Predictive caching for content |
US9141722B2 (en) | 2012-10-02 | 2015-09-22 | Google Inc. | Access to network content |
US20140189121A1 (en) * | 2012-10-18 | 2014-07-03 | Tara Chand Singhal | Apparatus and method for a thin form-factor technology for use in handheld smart phone and tablet devices |
US9774488B2 (en) * | 2012-10-18 | 2017-09-26 | Tara Chand Singhal | Apparatus and method for a thin form-factor technology for use in handheld smart phone and tablet devices |
US9781203B2 (en) * | 2013-02-27 | 2017-10-03 | Hewlett-Packard Development Company, L.P. | Data synchronization |
US20150358406A1 (en) * | 2013-02-27 | 2015-12-10 | Hewlett-Packard Development Company, L.P. | Data synchronization |
US10684995B2 (en) * | 2013-03-21 | 2020-06-16 | Razer (Asia-Pacific) Pte. Ltd. | Storage optimization in computing devices |
US20180081649A1 (en) * | 2013-03-21 | 2018-03-22 | Razer (Asia-Pacific) Pte. Ltd. | Storage optimization in computing devices |
JP2015036928A (en) * | 2013-08-15 | 2015-02-23 | 富士通株式会社 | Information processing system, information processing device, control program of information processing device, and control method of information processing system |
US20150113521A1 (en) * | 2013-10-18 | 2015-04-23 | Fujitsu Limited | Information processing method and information processing apparatus |
US9904531B2 (en) * | 2013-10-18 | 2018-02-27 | Fujitsu Limited | Apparatus and method for installing vehicle correction program |
CN104570823A (en) * | 2013-10-18 | 2015-04-29 | 富士通株式会社 | Information processing method and information processing apparatus |
US10516753B2 (en) | 2014-05-09 | 2019-12-24 | Amazon Technologies, Inc. | Segmented predictive caching for content |
US10389838B2 (en) | 2014-05-09 | 2019-08-20 | Amazon Technologies, Inc. | Client-side predictive caching for content |
US11113345B2 (en) * | 2014-07-17 | 2021-09-07 | Bigtincan Holdings Limited | Method and system for providing contextual electronic content |
US9531830B2 (en) | 2014-07-21 | 2016-12-27 | Sap Se | Odata offline cache for mobile device |
US10070163B2 (en) | 2015-03-19 | 2018-09-04 | Amazon Technologies, Inc. | Uninterrupted playback of video streams using lower quality cached files |
US10728593B2 (en) * | 2015-03-19 | 2020-07-28 | Amazon Technologies, Inc. | Uninterrupted playback of video streams using lower quality cached files |
US20160277776A1 (en) * | 2015-03-19 | 2016-09-22 | Amazon Technologies, Inc. | Uninterrupted playback of video streams using lower quality cached files |
US20180343475A1 (en) * | 2015-03-19 | 2018-11-29 | Amazon Technologies, Inc. | Uninterrupted playback of video streams using lower quality cached files |
US9819978B2 (en) * | 2015-03-19 | 2017-11-14 | Amazon Technologies, Inc. | Uninterrupted playback of video streams using lower quality cached files |
US10279757B2 (en) * | 2015-10-30 | 2019-05-07 | Audi Ag | Control device update in a motor vehicle |
US20190149629A1 (en) * | 2017-11-15 | 2019-05-16 | Cisco Technology, Inc. | Application buffering of packets by fog computing node for deterministic network transport |
US10897516B2 (en) * | 2017-11-15 | 2021-01-19 | Cisco Technology, Inc. | Application buffering of packets by fog computing node for deterministic network transport |
Also Published As
Publication number | Publication date |
---|---|
US7882092B2 (en) | 2011-02-01 |
US20090100127A1 (en) | 2009-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7882092B2 (en) | Method and system for hoarding content on mobile clients | |
US11184857B2 (en) | Optimizing mobile network traffic coordination across multiple applications running on a mobile device | |
TWI549001B (en) | Power and load management based on contextual information | |
US9553816B2 (en) | Optimizing mobile network traffic coordination across multiple applications running on a mobile device | |
Flinn et al. | Self-tuned remote execution for pervasive computing | |
CA2806529C (en) | Prediction of activity session for mobile network use optimization and user experience enhancement | |
US20150296505A1 (en) | Mobile traffic optimization and coordination and user experience enhancement | |
GB2391963A (en) | Method and apparatus for preloading caches | |
WO2016195775A1 (en) | Predictive control systems and methods | |
WO2012161751A1 (en) | Mobile network traffic coordination across multiple applications | |
WO2010127137A1 (en) | User profile-based wireless device system level management | |
WO2014001927A1 (en) | Incremental preparation of videos for delivery | |
RU2435236C1 (en) | System and method of recording data into cloud storage | |
US11943716B2 (en) | Optimizing mobile network traffic coordination across multiple applications running on a mobile device | |
CN112887349B (en) | Method and device for distributing files | |
Walfield et al. | Smart phones need smarter applications | |
Higgins | Balancing Interactive Performance and Budgeted Resources in Mobile Computing. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DASGUPTA, GARGI B;NAYAK, TAPAN K;VISWANATHAN, BALAJI;REEL/FRAME:018596/0632 Effective date: 20061013 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |