WO2011028177A1 - A context aware content management and delivery system for mobile devices - Google Patents


Info

Publication number
WO2011028177A1
Authority
WO
WIPO (PCT)
Prior art keywords
context
content
user
transformation
context information
Prior art date
Application number
PCT/SG2009/000314
Other languages
French (fr)
Inventor
Murali Krishnan Vijendran
Ramesh Venkatraman
Original Assignee
Murali Krishnan Vijendran
Ramesh Venkatraman
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Murali Krishnan Vijendran, Ramesh Venkatraman filed Critical Murali Krishnan Vijendran
Priority to PCT/SG2009/000314 priority Critical patent/WO2011028177A1/en
Publication of WO2011028177A1 publication Critical patent/WO2011028177A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3013Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is an embedded system, i.e. a combination of hardware and software dedicated to perform a certain function in mobile devices, printers, automotive or aircraft systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3058Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3065Monitoring arrangements determined by the means or processing involved in reporting the monitored data
    • G06F11/3072Monitoring arrangements determined by the means or processing involved in reporting the monitored data where the reporting involves data filtering, e.g. pattern matching, time or event triggered, adaptive or policy-based reporting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • H04L67/2876Pairs of inter-processing entities at each side of the network, e.g. split proxies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • H04L67/289Intermediate processing functionally located close to the data consumer application, e.g. in same machine, in same home or in same sub-network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • H04L67/2895Intermediate processing functionally located close to the data provider application, e.g. reverse proxies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • H04L67/30Profiles
    • H04L67/303Terminal profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • H04L67/30Profiles
    • H04L67/306User profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/565Conversion or adaptation of application format or content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This invention relates to a context aware content management system, and a method for obtaining and processing context information and delivering content using the processed context information or taking relevant actions based on the processed context information.
  • Context information is defined here as any information describing the user of a mobile device, the mobile device itself and/or the environment of the mobile device (e.g. ambient temperature) and/or its position, i.e. context parameters which could be envisaged from the user or the device perspective, such as the device model, display capability, memory capability, operating system, battery status, etc. of the device, and similarly the age, sex, food preference, frequently used numbers, usage history, billing history, etc. of the user.
  • the sender may have his/her own context information and the receiver may have his/her own context information and based on the processed contexts of both the sender and receiver the appropriate content may be sent from sender to receiver.
  • The list of possible context information is potentially large, and it can be taken into account during development and configuration of the programs.
  • each such system consists of at least one server and at least one mobile device.
  • the server(s) has means for retrieving context information such as user preferences and location.
  • the server(s) will then process the information, generate appropriate output, and deliver the content to the mobile device(s) for output to the user.
  • the mobile device(s) are relatively simple compared to the server(s) and do not possess native capabilities to process content extensively. However, based on the capability of the device, the device could have more capabilities to process the context information and display relevant content or take any other relevant actions.
  • An improved context aware content management system is disclosed herein.
  • the context engine is capable of extracting context information from various sources, processing the context information, and accordingly triggering appropriate action in the server and/or mobile device for selecting, prioritizing, transforming or rendering content for delivery to users, or taking any other appropriate actions based on the processed context information.
  • the context engine may be implemented in hardware or software, or some combination of both.
  • a first aspect of the invention proposes in general terms that context engines are provided both on a server and on mobile device(s), and that the workload is shared between them. The distribution of the workload can be varied, e.g. dynamically.
  • the first aspect of the invention proposes a content delivery system which comprises:
  • a mobile device operative for communication with the server
  • first and second context engines are configured to perform the tasks of: obtaining context information, and processing content to be delivered to a user based on such processed context information;
  • Adjusting the workload distribution between the context engines may bring about many advantages (described later) such as processing speed and efficiency, lower power consumption, lower transmission bandwidth required, etc.
  • the construction of the first and second context engines need not be identical. Furthermore, their capabilities may not be identical. Where their capabilities are not identical the adjustment of the distribution of the workload is within the limits of the respective capabilities.
  • the first context engine may be incapable of obtaining context information from sensor devices associated with the mobile device. In this case, the task of obtaining such information may either always be carried out by the second context engine, or it may be omitted when the second context engine does not have capacity to obtain such information.
  • the adjustment mechanism is configured to adjust the processing workload distribution between the first and second context engines concurrently with processing the workload. This is important as it allows the context engines to adapt dynamically, e.g. to other tasks which the server or mobile device may be required to perform.
  • the adjustment mechanism can be provided separately from the first and second context engines (e.g. as a unit which monitors the status of both context engines and decides how to distribute workload).
  • the context engines may communicate with each other to distribute work between them.
  • either or both of the context engines may transmit messages to the other, of any one or more of the following types: (i) request the other engine to take over a certain portion of the workload, (ii) indicate that the engine is able to accept a certain task from the other.
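The two message types above can be sketched as follows. This is an illustrative Python sketch only; the message fields, thresholds and the `negotiate` policy are assumptions for the example, not part of the patent disclosure:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class MessageType(Enum):
    REQUEST_TAKEOVER = "request_takeover"  # (i) ask the peer engine to take over part of the workload
    OFFER_CAPACITY = "offer_capacity"      # (ii) signal that this engine can accept a task

@dataclass
class WorkloadMessage:
    msg_type: MessageType
    task_id: str
    load_fraction: float  # portion of the workload concerned, 0.0-1.0

def negotiate(local_load: float, peer_capacity: float, task_id: str) -> Optional[WorkloadMessage]:
    """Decide which message, if any, to send to the peer context engine."""
    if local_load > 0.8 and peer_capacity > 0.2:
        # overloaded: request the peer to take over part of the work
        return WorkloadMessage(MessageType.REQUEST_TAKEOVER, task_id, round(local_load - 0.5, 2))
    if local_load < 0.3:
        # underloaded: advertise spare capacity to the peer
        return WorkloadMessage(MessageType.OFFER_CAPACITY, task_id, round(0.3 - local_load, 2))
    return None
```

Either engine may run the same policy, so the exchange is symmetric between the server-side and device-side engines.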
  • a second aspect of the invention proposes in general terms that a context aware content management system is adaptive to gradually improve the quality of its responses to common context information, using an accumulated database of its previous responses to the common context information.
  • the invention proposes a context aware content management system which comprises:
  • a context engine configured to obtain context information, and process content to be delivered to a user based on said context information
  • the context engine comprising a context bench-marker arranged to monitor the performance of the context aware content management system, and build up a database summarizing corresponding responses which the content management system has previously made to repeatedly received context information,
  • context bench-marker is arranged to use the database to modify adaptively responses by the context aware content management system to new items of context information conforming to said repeatedly received context information.
  • the invention proposes in general terms that a context aware content management system prioritizes the delivery to a user of content which does not require any content transformation, so that the user is provided such content while other content is being processed or transformed for delivery.
  • the invention proposes a method of transmitting content from a server to a user via a mobile device operated by the user, the content being in the form of a collection of content items, one or more of said items requiring transformation before delivery to the user, the method comprising estimating, for each content item:
  • a transformation time which is a time required for transforming the content item
  • a consumption time which is a time the user is expected to take to consume the content
  • and selecting a set of content items for delivery before said transformation, such that the sum of the estimated consumption times of the set of content items is greater than or equal to the estimated transformation time of said content item which requires transformation.
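The scheduling condition above (consumption time of the ready set covering the transformation time of pending items) can be sketched as follows. The field names and sample values are assumptions for illustration, not from the patent:

```python
def plan_delivery(items):
    """Order content items so that items needing no transformation go out
    first, and check the condition: the summed consumption time of the
    ready set must be >= the transformation time of each pending item."""
    ready = [i for i in items if not i["needs_transform"]]
    pending = [i for i in items if i["needs_transform"]]
    cover = sum(i["consume_time"] for i in ready)  # seconds the user is kept busy
    feasible = all(cover >= i["transform_time"] for i in pending)
    # ready items are delivered immediately; transformed items follow,
    # shortest transformation first so they become available soonest
    order = ready + sorted(pending, key=lambda i: i["transform_time"])
    return order, feasible
```

For example, a 60-second text article delivered at once can mask a 45-second video transcoding job, so the user never waits idle.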
  • Fig. 1 shows a general view of the embodiment including two context engines (one on the device and one on the server), and illustrates the structure of each of the context engines;
  • Fig. 2 shows the components of a context processor module of the context engine of Fig. 1;
  • Fig. 3 shows the components of an ontology engine module of the context engine of Fig. 1;
  • Fig. 4 shows a partial list of the tables and parameters utilised by a reasoning engine within the ontology engine module of Fig. 3;
  • Fig. 5 shows a table with some parameters utilised by a learning engine within the ontology engine module of Fig. 3;
  • Fig. 6 shows the components of a data store module of the context engine of Fig. 1;
  • Fig. 7 shows a basic flowchart of the inputs and outputs to/from a rules engine portion of the embodiment of Fig. 1;
  • Fig. 8 shows the components of a rendering engine portion of the embodiment of Fig. 1.
  • An embodiment of the invention is illustrated in Fig. 1. It is a system comprising at least one content server 108, and at least one mobile device 110 operated by a user.
  • the server 108 has access to content data, which it may store itself or be able to retrieve from other servers.
  • the server 108 includes a first context engine
  • the mobile device 110 includes a second context engine.
  • the first context engine and second context engine cooperate to perform a single computational task, and the workload is distributed dynamically between them, so that, as far as a user is concerned, there is only a single context engine which is illustrated as 101 in Fig. 1.
  • the main functions of the context engine 101 are:
  • the context engine 101 comprises component modules which perform the functions described above.
  • the modules include:
  • it is not necessary that both of the first and second context engines have all of these modules, provided that together they form a context engine 101 which has them all.
  • At least the first context engine is connected to one or more context information sources 109, and to the mobile device 110 and to those of other users, so the context engine 101 is illustrated as having such connections. It is also possible that both the context engines can have all the components as listed above.
  • each server 108 may have its own first context engine, all cooperating together with the second context engine to provide context based content management and delivery to the end user.
  • Figure 2 shows the context processor 102 of the context engine 101.
  • the context processor 102 of context engine 101 comprises the following components: • Context adapters and receivers 201
  • the context processor 102 is a component for obtaining and processing context information obtained from various sources. These sources may include dedicated sensors such as temperature sensor(s) or noise level detector(s) or any other information sources such as customer usage history or customer preference settings.
  • the context information could be explicit (e.g. stored as pre-existing fields in the memory of the mobile device, such as user settings) or implicit (e.g. which has to be derived by computation). It could be obtained from device itself, user settings, other devices, server, network, other network devices, the content delivered (for example the content could be a news article, or a music audio, or a video clip, etc and such contexts could be obtained from the content itself) or from various other sources.
  • the context processor 102 processes the context information based on various rules and abstracts the context information to various levels as appropriate. If necessary, it may send the context information to another module, the ontology engine 103, for more accurate reasoning. Once context processor 102 has processed the context information, it will trigger the context engine 101 to execute various actions, such as selecting content, prioritizing content and/or transforming content for delivery to users, or taking any other appropriate action based on the processed contexts.
  • Examples of "other appropriate actions" include initiating, sending and receiving communications to other devices or servers using SMS or any other means, or invoking one or more specific applications in those devices or servers. It may communicate with server 108 or other devices through communication engine 105 to update the server 108; receive instructions from the server 108; send context information to the server 108 for processing; or update / receive information from other devices.
  • the context processor 102 has the necessary adapters 201 for interfacing with the Operating System and other elements of server 108 or mobile device 110.
  • This interface may be used to obtain context information from the server and/or mobile device, send output to the server and/or mobile device, or allow use of the server and/or mobile device's resources.
  • it can communicate with a Global Positioning System (GPS) within the mobile device 110, and utilise it to obtain context information, such as the user's location.
  • Context information may be extracted from various explicit context information sources such as user profiles, user location, language settings of mobile devices, user settings on mobile devices, device capabilities, temperature sensors, or noise level detectors, etc.
  • Implicit sources of context information may also be used, and include mobile device usage data such as the history, trend and nature of content requested/delivered. Such data may be gathered from logs which track device usage habits and user-device interactions, from the device itself or from the server.
  • Once context information is collected, one of the context receivers 201 within context processor 102 checks a quality level of the context information. If the quality is not acceptable, that particular context information is rejected and not considered for further processing, or further information is obtained regarding that particular context information.
  • Context processor 102 communicates with the rules and rendering engine 106 for such quality processing.
  • the rules and rendering engine 106 includes threshold levels for quality of context information and determines the acceptable quality levels for them.
  • context processor 102 may trigger existing or new adapters 201 to get further context information from same context information sources.
  • location context information from a GPS system may be assessed to be of high quality, and this causes the context engine 101 to communicate with a temperature sensor to obtain a temperature reading (further context information) for the location.
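The quality gate and follow-up behaviour described above can be sketched as follows. The threshold values, source names and follow-up action are illustrative assumptions; in the embodiment the thresholds would come from the rules and rendering engine 106:

```python
QUALITY_THRESHOLDS = {"gps": 0.8, "temperature": 0.5}  # hypothetical threshold levels

def handle_context_reading(source, quality):
    """Reject sub-threshold context information; let a high-quality GPS fix
    trigger a follow-up request for further context (the temperature example)."""
    threshold = QUALITY_THRESHOLDS.get(source, 0.7)  # default for unlisted sources
    if quality < threshold:
        return "rejected", None  # not considered for further processing
    follow_up = "request_temperature" if source == "gps" else None
    return "accepted", follow_up
```

The returned follow-up token stands in for the context engine triggering an existing or new adapter 201 to query the additional source.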
  • a context data miner 205 is used, which includes a data mining program that receives existing context information and analyses the context information for other related information.
  • the initial context information may be a user's Internet Protocol address.
  • the context data miner 205 can further analyse it for information about the user's location, service provider, etc. Many known methods and techniques for data mining may be used to implement this component.
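The IP-address example can be sketched as follows. This is illustrative only: it derives what can be read directly off the address, while the user's location and service provider would require an external GeoIP/WHOIS database lookup (not implemented here):

```python
import ipaddress

def mine_ip_context(ip_string):
    """Derive related context from an IP address (sketch of the data-mining
    step; real mining would consult external databases)."""
    ip = ipaddress.ip_address(ip_string)
    derived = {"version": ip.version, "is_private": ip.is_private}
    if not ip.is_private:
        derived["next_step"] = "geoip_lookup"  # resolve location / provider externally
    return derived
```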
  • the context processor 102 can configure context information sources 109. This configuration process can be done via adapters which support interfacing with a particular context information source. If this is done, the context engine 101 is more likely to be able to obtain context information in a timely manner and in a preferred format.
  • the core context processor 202 processes context information by working with the mini ontology engine 203, which provides basic reasoning for processing context information based on a set of rules. For advanced context interpretation and reasoning, this mini ontology engine 203 communicates with ontology engine module 103. Core context processor 202 can also pass the context information to server 108 for processing. The context information processed by server 108 can then be sent back to the second context engine, or further action based on the processed context information can be taken by server 108. For example, server 108 may select appropriate content on its side and simply deliver the content to the mobile device, instead of sending it the processed context information.
  • the context bench marker 204 is a component which records and benchmarks the context processor's 102 response to processed context information. Through this benchmarking, the common responses to similar processed context information can be identified. The context bench marker 204 is capable of using a database to compare the action taken (eg content delivery) based on the processed context information and adapt the actions so as to increase its relevance to end user.
  • the response could be response by the server. For example, if processed context information determines that an automated teller machine (ATM) is near the user's location and it further determines that the funds in the user's smartcard are low, the context processor will generate a response which triggers an alert to the user to remind him to top up funds in his smartcard.
  • the context bench marker 204 will thus record this and in the event that a similar scenario occurs (i.e. causing similar context information to be processed) and a similar response is generated, the context bench marker may classify this response as a standard response to the context information of "ATM nearby" and "smart card funds low". Once this happens, future occurrences of the similar scenarios will result in the response being performed automatically, without having to process the context information again.
  • the context bench marker 204 takes into account any reaction by the user to content information transmitted to the user by the system in response to the context information. For example, the reaction may be used to form a measure of the relevance of that content. This is done by observing user reaction to the standard responses of the system to the context information, utilization of the delivered content, frequency of content requested or the follow-up actions performed by a user after a given context. With any of such information, the context bench marker 204 is able to actively "learn" and optimize the response to a given processed context by modifying the standard response of the system to the context data. In the "fast food" example described earlier, if the user's reaction to the lunch suggestion was positive multiple times, it would reinforce the strength of the standard response.
  • the standard response may be further modified to always recommend fast food restaurant XYZ.
  • the standard response may be deleted completely.
  • the context bench marker 204 may then modify the standard response to become "suggest user to borrow author ABC's latest book".
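The bench-marking behaviour above (promoting a repeated response to a standard response, and deleting it on negative feedback) can be sketched as follows. The promotion threshold and the feedback rule are assumptions for illustration, not part of the patent disclosure:

```python
from collections import defaultdict

class ContextBenchMarker:
    """Sketch of the adaptive context bench marker 204 described above."""

    def __init__(self, promote_after=3):
        self.promote_after = promote_after
        self.counts = defaultdict(int)  # (context, response) -> times observed
        self.standard = {}              # context -> promoted standard response

    def record(self, context_key, response):
        self.counts[(context_key, response)] += 1
        if self.counts[(context_key, response)] >= self.promote_after:
            # classify this as the standard response to the context
            self.standard[context_key] = response

    def feedback(self, context_key, positive):
        # a negative user reaction removes the standard response entirely
        if not positive and context_key in self.standard:
            del self.standard[context_key]

    def respond(self, context_key):
        # future occurrences reuse the standard response without reprocessing
        return self.standard.get(context_key)
```

For the ATM scenario, the context key would be something like `("ATM nearby", "smart card funds low")` and the promoted response the top-up alert.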
  • Communication handler 206 is a standard communication component which facilitates the communication needs of the context processor 102 with the other relevant engines like ontology engine module 103. This communication handler 206 can also be used to establish communication channels to receive context information from context information sources 109 located in the device, in server 108 or elsewhere.
  • the context processor module 102 controls the overall functioning of the context engine 101.
  • the context processor 102 also allows the first (server- side) and second (device side) context engines to communicate, via communication engine 105. This allows the first and second context engines to exchange data and share processing capabilities.
  • all the processing required for obtaining context information, processing context information, selecting, prioritizing and transforming content for delivery to users is distributed between the first and second context engines, for example so that content can be delivered faster by making full use of the available processing capacity of each of the server and mobile device.
  • the second context engine may only obtain context information and render the results for display to the user, while leaving all the processing of context information and content data to the first context engine. This places a majority of the computational workload on the server. This means that the mobile device will not need to do much processing, thus consuming less power - a huge benefit for mobile devices with limited energy resources.
  • the mobile device might choose to obtain and process context information and select, prioritize and transform content on the mobile device itself. Apart from possibly receiving context information from server 108, it would mostly be independent from the first context engine on the server 108. Although this places a greater burden on the processing capabilities of the mobile device, it minimises the use of transmission bandwidth between server 108 and the mobile device.
  • This distribution of processing workload can be controlled by an adjustment mechanism which is separate from the first and second context engines, e.g. by separate adjustment modules on one or both of the server or mobile device.
  • the adjustment mechanism can be a feature of the first and second context engines themselves.
  • the distribution of processing workload can be dynamically performed to suit existing conditions. For example, if the second context engine detects a Wi-Fi broadband connection, it will choose to perform minimal processing and instead let the first (server-side) context engine do most of the processing, before transmitting the content to the user.
  • Conversely, if only a mobile broadband connection is available, the second context engine will adjust the processing workload accordingly: it will process the remaining context information and content on the mobile device so as to avoid costly mobile broadband charges. This ensures that the processing load is optimally distributed between the mobile device and server 108. If more than two context engines are used (e.g. if there are multiple servers), the processing workload may be further distributed.
  • the context engine 101 is also capable of adjusting its own processing capabilities. Using the same example as above, when the context engine 101 lets the server perform most of the processing, the mobile device's context engine may choose to turn off its idle components, such as the ontology engine 103, which are not required.
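The connectivity-driven split described above can be sketched as a simple policy function. The inputs and thresholds are illustrative assumptions, not from the patent:

```python
def choose_processing_location(connection, battery_level):
    """Decide where context/content processing should run for this request."""
    if connection == "wifi":
        return "server"   # cheap, fast link: offload to the first (server-side) engine
    if battery_level < 0.2:
        return "server"   # low battery: accept bandwidth cost to save device power
    return "device"       # mobile broadband: process locally to minimise transmission
```

In the embodiment this decision is re-evaluated dynamically, so the split can change while a workload is being processed.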
  • the server(s) 108 and mobile devices are not limited to having only one context engine 101 each. Different versions of context engines 101 designed for specialist purposes can be invoked based on the context information, such as the nature of content required, extent of processing or transformation of the content required by the client device, type of content and other factors.
  • Ontology engine module Figure 3 shows the ontology engine module 103 of an embodiment of the invention.
  • the ontology engine module 103 of context engine 101 comprises the following components:
  • the ontology engine module 103 provides the reasoning mechanism for processing context information. Through processing context information, it also improves the reasoning mechanism to improve its ability to provide better reasoning information. The resultant reasoning information used to process context information is then passed back to the context processor 102.
  • the context parser 301 receives context information from context processor 102 and analyses it.
  • the context information is input to the ontology engine 103 which then refers to reasoning engine 302 to associate the context information with corresponding rules and meaning information.
  • Figure 4 shows a partial data structure (made up of three tables 401, 402 and 403) employed by the ontology engine 103 and the parameters used in the process.
  • the tables include context information, correlation information, meaning related information, abstraction related information and the rules associated with the given set of contexts.
  • table 401 associates possible contexts (labelled by a "context ID") with respective "correlation IDs".
  • Each row is associated with a given type of context (e.g. temperature), which has "context details" (e.g. temperature of 30 degrees), key words (if any), "correlated context ID" (the related context to this particular context), "correlation ID" (the id which specifies the combination of two related contexts) and "consolidated correlation ID" (optional reference to the group of contexts which are related and connected through a common id called the consolidated correlation id).
  • Table 402 associates the correlation IDs with "Meaning IDs" and table 403 associates the meaning IDs with meaning details.
  • Given an input element of context information, the reasoning engine 302 will seek to match the input context information with data from table 401 under "context details" and "key words". If a match is found, the reasoning engine 302 can then obtain the corresponding "correlated context id", "correlation id" and "consolidated correlation id".
  • the "correlated context id” refers to another context id which stores a related entry to this one.
  • the inter-context relationships are stored and maintained using the "correlation id”.
  • the "consolidated correlation id” will contain the link to table 402, where the same "consolidated correlation id" is matched to obtain “meaning id”. This meaning id is matched with that in table 403 for the "meaning detail”.
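The lookup chain through tables 401, 402 and 403 can be sketched as follows. The table rows below are hypothetical values invented for illustration; only the chain structure (context details to consolidated correlation id, to meaning id, to meaning detail) follows the description above:

```python
# Hypothetical rows mirroring tables 401-403 of Fig. 4 (values invented)
TABLE_401 = {  # context details -> (correlated context id, correlation id, consolidated correlation id)
    "temperature of 30 degrees": ("ctx-humidity", "corr-7", "ccorr-2"),
}
TABLE_402 = {"ccorr-2": "meaning-5"}                  # consolidated correlation id -> meaning id
TABLE_403 = {"meaning-5": "hot outdoor environment"}  # meaning id -> meaning detail

def resolve_meaning(context_details):
    """Follow the 401 -> 402 -> 403 lookup chain for one context entry."""
    row = TABLE_401.get(context_details)
    if row is None:
        return None  # no match under "context details"
    _correlated_id, _correlation_id, consolidated_id = row
    meaning_id = TABLE_402.get(consolidated_id)
    return TABLE_403.get(meaning_id)
```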
  • the learning engine 303 updates the tables used by reasoning engine 302 and the rules used by rules engine 106.
  • the "learning" process may be implemented using a method known in the field or through a process similar to the adaptive context bench marking, which was described earlier.
  • Figure 5 shows an example of a table used by the learning engine 303 to update rules engine 106. Each row of the table corresponds to a particular instance of content forwarded by the system to the user. The particular action id to be changed/updated/inserted is input, together with user experience ratings.
  • the system automatically makes an entry in the table giving the relevant changes to be made to the action item. This entry may optionally be reviewed manually as well.
  • a similar method may be used (with different table contents) to update content, rules, processed context or other relevant information in the mobile device or server.
  • the learning engine of the ontology engine updates the action IDs of the rules engine based on this exercise, since the rules engine does not update its actions by itself.
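The three-table lookup performed by the reasoning engine 302 can be sketched as follows. This is an illustrative sketch, not taken from the patent itself: the table contents, field names and the example data values are hypothetical, chosen to be consistent with the description of tables 401–403 above.

```python
# Hypothetical contents for the three tables described above (Figure 4).
TABLE_401 = [
    {"context_type": "temperature", "context_details": "30 degrees",
     "key_words": ["hot"], "correlated_context_id": "C2",
     "correlation_id": "R1", "consolidated_correlation_id": "G1"},
]
TABLE_402 = {"G1": "M1"}            # consolidated correlation id -> meaning id
TABLE_403 = {"M1": "warm weather"}  # meaning id -> meaning detail

def derive_meaning(context_input):
    """Match an input element of context information against table 401
    (context details and key words), then resolve the meaning detail
    through tables 402 and 403."""
    for row in TABLE_401:
        if (context_input == row["context_details"]
                or context_input in row["key_words"]):
            meaning_id = TABLE_402.get(row["consolidated_correlation_id"])
            return TABLE_403.get(meaning_id)
    return None  # no match: this context yields no meaning detail
```

With the hypothetical data above, both the exact context detail "30 degrees" and the key word "hot" resolve to the meaning detail "warm weather".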
  • Data store module
  • FIG. 6 shows the components of a data store module 104 of this embodiment of the invention.
  • Data store module 104 stores content to be delivered and meta data of the content. Meta data comprises short tags which describe the content they are attached to. The data store may be located in the context engine on the server side and/or the mobile device side, and can interact with and share information with the server and other mobile devices.
  • the data store module 104 can store and manage all the data utilised by the context engine 101. This may include the context information used by context processor 102, the tables used by ontology engine 103 or the rules tables used by the rules and rendering engine 106. It may receive and send data from/to internal and external entities such as server 108, users, context processor 102, ontology engine 103, etc.
  • the components of data store module 104 include:
  • An embedded database 601, which stores, manages and processes data used by context engine 101.
  • A data cache and data wrapper 602, which allows fast and efficient data retrieval.
  • the data wrapper portion provides a method of packaging and providing data based on the type of data request.
  • the database interface engine 603 functions as an interface for storing and retrieving data from embedded database 601.
  • data store module 104 may store the actual content, meta data of the content or the context information of the content.
  • Data store 104 also has the ability to communicate with the source of the content (if stored externally) through communication engine 105 to receive either the content or context of the content, or to pass back the context information gathered device side. This helps in preparing and/or selecting appropriate content to be transferred between the server and the mobile device.
  • Data store 104 may also subscribe or obtain relevant information from data sources based on context information. It may also generate queries for extracting content from data sources including third party data sources based on the context. The queries may contain meta data defining specific data requirements. If so, such meta tags should be matched with the meta tags of content to ensure that relevant content is retrieved. The degree of matching can be specified based on the context as well as the availability of the content. It could be perfectly matched or partially matched. Such generated queries may be stored against a particular context scenario, so that it may be reused for the same scenarios. The data store may store the array of such queries and associated context scenario for future retrieval as appropriate.
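The meta-tag matching used when generating content queries can be sketched as below. The scoring scheme (fraction of the query's meta tags present among the content's tags) is an assumption on my part, since the text only states that matching may be perfect or partial; the function and catalogue names are illustrative.

```python
def match_score(query_tags, content_tags):
    """Fraction of the query's meta tags found among the content's meta
    tags: 1.0 is a perfect match, values between 0 and 1 are partial."""
    if not query_tags:
        return 0.0
    return len(set(query_tags) & set(content_tags)) / len(query_tags)

def select_content(query_tags, catalogue, min_score=1.0):
    """Return the names of catalogue items whose meta tags match the
    query's meta tags to at least min_score; lowering min_score permits
    the partial matching mentioned above."""
    return [name for name, tags in catalogue.items()
            if match_score(query_tags, tags) >= min_score]
```

For example, with `catalogue = {"news": ["sport", "local"], "weather": ["local", "forecast"]}`, the query `["sport", "local"]` at the default `min_score` retrieves only "news", while relaxing `min_score` to 0.5 also admits partially matching items.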
  • Communication engine 105 is the component for managing all forms of communication with server 108, other mobile devices or any other device which context engine 101 may need to communicate with (e.g. sensor devices). It has in-built communication handlers which may utilize various communication protocols to perform such communication.
  • a client side context engine 101 communicates with its user and server 108 and may also communicate with other users, devices and servers to transmit/receive context information.
  • the rules and rendering engine module 106 comprises the rules engine and rendering engine. Although these engines are arranged within a single module in this embodiment, they may be arranged separately and still perform the same functions.
Rules engine
  • the rules engine receives input in the form of processed context or meaning information (which most likely comes from context processor 102, ontology engine 103 or server 108) and further processes it to derive resultant actions for context engine 101. These resultant actions are based on the rules defined in a context rule set which is stored within data store module 104 or within server 108.
  • the resultant actions performed by the rules engine include identifying and delivering data to the mobile device or other devices, triggering transformation of the content for subsequent rendering, or any action as defined within the rule set.
  • Rules engine may also include rules to weigh and prioritize context information and context information sources, and rules to define context events based on parameters of context information sources or context requestor.
  • Figure 7 shows a basic flowchart of the inputs and outputs to/from the rules engine.
  • the inputs include rule sets and context information.
  • Context information may be provided in many forms - for example, the device context, sender context, receiver context, context of the content, etc.
  • the rule sets may be represented in the form of tables, which contain the actions to take for the different context inputs. More than one action item may be returned by the rules engine. The action items can be selected and carried out by the core context processor 202.
  • the rules engine may also provide the content itself instead of action items. If the rules engine provides the actual content, it may provide the absolute content or a set of content tags. Further, it may also return a content description which references absolute content or a list of content tags which can be sent to a content manager for further retrieval.
  • the rule engine selects action items to return based on the processed context(s) ID. In turn, these action items may invoke further action on server 108, other mobile devices, or devices related to the context engine.
  • An action may also include generating alerts if the value of context information exceeds a threshold value. For example, if the input context information included a temperature reading sent by a context information source, and the reading was more than 60 degrees Celsius, the action item generated may be an alert.
  • the rules engine can also contain rules and corresponding action items to instruct or control other devices or context information sources.
  • the action items may include further features such as security. For example, if a given piece of context information is considered confidential, the action item may indicate so. When such an indication is detected, the content may be encrypted, the context query may be erased from memory once it has been completed, additional authentication may be enforced, or any other appropriate action may be taken.
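A minimal sketch of how such rules might map processed context information to action items follows. The rule conditions (the 60-degree threshold from the earlier example and a confidentiality flag) match the examples in the text, but the dictionary fields, action names and default action are illustrative assumptions.

```python
def evaluate_rules(context):
    """Return the action items for a piece of processed context
    information, following the examples given in the text."""
    actions = []
    # threshold rule: generate an alert for readings above 60 degrees Celsius
    if context.get("type") == "temperature" and context.get("value", 0) > 60:
        actions.append("raise_alert")
    # security rule: confidential context triggers protective action items
    if context.get("confidential"):
        actions += ["encrypt_content", "erase_query_after_completion"]
    return actions or ["deliver_content"]  # default action item
```

A real rule set would, as described above, be stored as tables in data store module 104 rather than hard-coded conditions.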
  • a reporting engine produces various reports based on the available information including trend analysis of context information, action taken log, learning engine's action modification activities, etc.
  • the information collector receives the information required for report generation and the analyzer helps to process the data.
  • the report generator takes the processed data and generates various reports using the available templates.
  • Figure 8 shows the components of a rendering engine of a preferred embodiment of the invention.
  • the components include:
  • the rendering core engine 801 renders content and other information to end users.
  • the type of rendering performed is based on context information which may include device characteristics, user preferences or trend of user experience.
  • the main job of the rendering engine is to select, prioritize and transform the content for delivery to users.
  • the rendering engine may render content to its own mobile device or other mobile devices. While rendering information to other mobile devices, the rendering engine may send content as is, or send to the rendering engine of the target mobile device for rendering. In addition, content may be transformed based on the target mobile device characteristics before it is delivered to the target.
  • the rendering engine selects and prioritizes the content for delivery to users. Based on the context information, appropriate content is selected and queued/prioritised so as to deliver content which is more relevant to the end user first, with or without other transformation of the content.
  • the content transformation engine 803 may also communicate with the server 108 to transform content when necessary.
  • PQ1's top node has the highest priority among all content elements which need transformation.
  • PQ2's top node has the highest priority among all content elements which do not need transformation.
  • NP1 = PQ1.pop()
  • NP2 = PQ2.pop()
  • t_transtime = t_transtime - NP2.consumption_time
  • NP2 = PQ2.pop()
  • (here t_transtime denotes the remaining transformation time of NP1; the last two steps repeat, delivering each NP2 item, until t_transtime reaches zero)
  • the goal of the algorithm is to queue the content items such that the most relevant content will be displayed first. If any of them require transformation, the next content items which do not require transformation will be delivered while transformation of the most relevant content item takes place. Once it has been transformed, it is delivered to the user and the process repeats itself.
  • At least a few content items which do not require transformation are constantly queued in a buffer as backup content for rendering, in case the user requests more content in a shorter time than estimated by the content consumption time.
  • the buffered content ideally has a total consumption time of at least a few times (a factor n) that of the present content being transformed.
  • factor n can be determined by user settings, context engine 101 or from instructions from server 108.
  • This backup content is also useful in the case where the user terminates use of the current content, and requests for the next content to be delivered. Delivered content may also be flagged to ensure that the same content is not considered when the algorithm is run again within the same session.
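Under the assumption that t_transtime in the pseudocode above denotes the remaining transformation time of the item popped from PQ1, the scheduling loop can be sketched in runnable form as follows. The item fields and the exact loop structure are a reconstruction, not taken verbatim from the patent.

```python
import heapq

def schedule_delivery(items):
    """Order content items for delivery: while the most relevant item
    needing transformation is (conceptually) being transformed, deliver
    already-usable items whose total consumption time covers the
    transformation time. Each item is a dict with 'name', 'priority'
    (lower number = more relevant), 'needs_transform', 'transform_time'
    and 'consumption_time'."""
    pq1 = [(i["priority"], i["name"], i) for i in items if i["needs_transform"]]
    pq2 = [(i["priority"], i["name"], i) for i in items if not i["needs_transform"]]
    heapq.heapify(pq1)  # items requiring transformation
    heapq.heapify(pq2)  # items ready for immediate delivery
    order = []
    while pq1:
        np1 = heapq.heappop(pq1)[2]
        t_transtime = np1["transform_time"]   # remaining transformation time
        while t_transtime > 0 and pq2:
            np2 = heapq.heappop(pq2)[2]       # deliver ready content meanwhile
            order.append(np2["name"])
            t_transtime -= np2["consumption_time"]
        order.append(np1["name"])             # np1 now transformed; deliver it
    while pq2:                                # flush any remaining ready items
        order.append(heapq.heappop(pq2)[2]["name"])
    return order
```

For example, an item needing 5 units of transformation is preceded by two ready items of 3 units' consumption each (3 + 3 ≥ 5), after which the transformed item and any remaining ready items are delivered.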
  • the maximum size of the table may be set up during an initial set up process, and the table size could be determined by the breadth of context considered, i.e. device level/content level/server level/user preference level context.
  • the context engine 101 can be customized to dynamically reprioritize the content based on the current context.
  • Content transformation may occur either client side or server side. If content transformation is done server side, the content can be transmitted to the server in advance, so as to ensure that the processing pipeline of the server is constantly occupied and less time is lost idling. While this prioritizing process may be run on server side or mobile device side, it is preferable for the server to run it since the server is likely to possess more content and have access to more content than a mobile device. Additionally, servers in general have a lot more processing capability and fewer limitations, compared to mobile devices, for computationally intensive functions like these.
  • One type of transformation done is summarizing or expanding the content.
  • content may be condensed to become concise or expanded to become comprehensive.
  • Such a rendering may take place if context information indicates that the user is a high-level busy executive (who typically only needs to know the important points) or a student (who may prefer more comprehensive and varied information).
  • the content may be reformatted to make it suitable for output to a mobile device. If a mobile device is only capable of processing WAV format sound files but the content is provided in MP3 format, the rendering engine may reformat the output to be compatible with the device's capabilities.
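The format compatibility check in the WAV/MP3 example can be sketched as below. The format names and the choice of the device's first supported format as the transcoding target are illustrative assumptions, not prescribed by the text.

```python
def prepare_audio(content_format, device_formats):
    """Decide whether audio content can be delivered as-is or must be
    transcoded into a format the target device supports."""
    if content_format in device_formats:
        return ("deliver_as_is", content_format)
    # assumption: transcode into the device's first (preferred) format
    return ("transcode", device_formats[0])
```

So a device supporting only WAV receives MP3 content transcoded to WAV, while compatible content passes through unchanged.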
  • the rendering engine can cache additional content within the mobile device or in the server 108, and simply render reference links for the cached content or the source content, together with the delivered content. Doing this allows the user to minimise transmission bandwidth usage. Furthermore, the linked content is only accessed/retrieved when the user wants it. Having the actual content cached on the mobile device or server 108 allows the content to be loaded faster in subsequent content requests. Content can also be dynamically constructed, by combining various content pieces and organizing/prioritizing them based on context information. Again, the rendering engine may do this transformation locally (on its own mobile device) or externally (on the content server). The rendering engine also keeps track of any problems, bugs, performance parameters and other user satisfaction parameters for subsequent usage in reporting, providing feedback to server 108 for improvement of the content or application, or for other purposes.
  • the rendering engine may evaluate the context, the content and combinations thereof from the point of view of customer satisfaction, user experience and other context information, and map them to dynamically calculate the value of the content delivered. This information can be used for valuation of content, for redeeming loyalty points, as a basis for charging customers, as a basis for commercial arrangements with content providers, as a source for a management information system, as a mechanism for customer retention initiatives and for various other purposes.
  • the data interface 802 interfaces with context processor 102 to get the context information, rules engine 106 to get the meaning information and data store 104 to manage the contents. It may also be used to receive content from server 108 or other devices through the communication engine 105.
  • Rendering rules engine 804 maintains the rules to be followed for rendering data based on context information. These rules may be improved and evolved by the engine based on the base rules from the rules engine or server 108, creating new rules or modifying existing ones. It may also be updated by the learning engine 303 within the ontology engine 103, as described earlier.
  • the self management engine module 107 communicates with server 108 to update and maintain context engine 101. For example, if a new version of the context engine 101 is available, the self management engine 107 may proceed to upgrade the context engine 101 on the mobile device.
  • Self management engine 107 also executes updates to the various modules of the context engine 101. It is configured to regularly communicate with server 108 to check for updates; when relevant updates are found, it downloads them from the server 108 and applies them to the corresponding modules.

Abstract

A context aware content management system for servers and mobile devices includes context engines installed in each of a server and mobile device. The context engines are capable of extracting context information from various sources, processing the context information and accordingly triggering appropriate actions in the server and/or mobile device for prioritizing, transforming or rendering content for delivery to users, or taking any other appropriate actions. Workload may be adjusted between the context engines of the device and the server. The system is adaptive to gradually improve the quality of its responses to common context information by using an adaptive database of its previous responses to common context information. Furthermore, the device prioritizes the delivery to a user of content which does not require processing, so that the user is provided such content while other content is being processed.

Description

A Context aware Content management and Delivery System for Mobile Devices Field of the Invention
This invention relates to a context aware content management system, and a method for obtaining and processing context information and delivering content using the processed context information or taking relevant actions based on the processed context information.
Background of the Invention
Technological advances today have made it possible to provide large amounts of content to various end-user consumers using many types of device. For example, content may be delivered to users over the internet and radio transmissions via personal computers, laptops, televisions, radios, mobile phones and other devices. A large amount of content is delivered to billions of people daily and the sheer volume has resulted in an "information overload" for many users, despite only a small percentage of the data being relevant to the users. The additional unnecessary information can take the form of "junk" or "spam" emails, unsolicited short message system (SMS) messages, irrelevant advertisements, etc. "Information overload" also arises even for information which is relevant for a user. In such scenarios, the information should ideally be filtered or ranked in some manner, such that a user can acquire the most relevant information first.
A common goal in the mobile devices industry is to be able to deliver the most appropriate information in a fast and efficient manner to end users. However, the needs of end users are dynamic in nature and will vary based on situational factors like location, language, content preference, etc. Furthermore, in the context of mobile devices, this is much more complex since a device's context may be constantly changing - depending on its user's location, habits, personality, etc. There are also physical constraints in the form of limited computational capabilities and data transmission resources of the devices. The term "context information" is defined here as any information describing the user of a mobile device, the mobile device itself and/or the environment of the mobile device (e.g. ambient temperature) and/or its position (i.e. location and/or orientation) and/or context for the user's activity (such as time of day, year, prevailing weather, etc). There are many such context parameters which could be envisaged from the user or the device perspective, like the device model, display capability, memory capability, operating system, battery status, etc of the device, and similarly the age, sex, food preference, frequently used numbers, usage history, billing history, etc of the user. For example, if a user sends a particular SMS to another user, then the sender may have his/her own context information and the receiver may have his/her own context information, and based on the processed contexts of both the sender and the receiver, the appropriate content may be sent from sender to receiver. The list of possible context information is potentially large, and it can be considered during development and configuration of the programs. The context information given above are only examples and do not represent an exhaustive list.
Therefore, there is a need for systems in the industry which are able to obtain context information about a particular user and/or device, process the context information to determine a relevant context of the user and/or device, and deliver content to the user based on the determined context or take relevant actions based on such processed context. Presently, each such system consists of at least one server and at least one mobile device. The server(s) has means for retrieving context information such as user preferences and location. The server(s) will then process the information, generate appropriate output, and deliver the content to the mobile device(s) for output to the user. In many cases, the mobile device(s) are relatively simple compared to the server(s) and do not possess native capabilities to process content extensively. However, depending on its capability, the device could have more capacity to process the context information and display relevant content or take any other relevant actions.
Presently known systems are not flexible enough because servers and mobile devices need to be synchronized, in the sense that the mobile devices have to be configured to accept whatever particular type of output the servers provide. Using an example of smart phones, if the servers only provide video data in a particular format, such as MPEG4, then a phone without an MPEG4 decoder will not be able to process the data and output the video. This applies not just to data type delivery, but can also apply to the order of delivery, quality of data, etc. Even in the case of more powerful mobile devices such as laptop computers, where the above problem of strict data type input is less of an issue, there is still a need for such devices to be able to select appropriate content from what is input into the devices.
Summary of the Invention
An improved context aware content management system is disclosed herein. At the core of this system is a set of dedicated programs for servers and another set of dedicated programs for mobile devices. These programs can be called a context engine. The context engine is capable of extracting context information from various sources, processing the context information, and accordingly triggering appropriate actions in the server and/or mobile device for selecting, prioritizing, transforming or rendering content for delivery to users, or taking any other appropriate actions based on the processed context information. The context engine may be implemented in hardware or software, or some combination of both.
A first aspect of the invention proposes in general terms that context engines are provided both on a server and on mobile device(s), and that the workload is shared between them. The distribution of the workload can be varied, e.g. dynamically. Specifically, the first aspect of the invention proposes a content delivery system which comprises:
a server;
a mobile device operative for communication with the server;
a first context engine installed in the server;
a second context engine installed in the mobile device, wherein the first and second context engines are configured to perform the tasks of: obtaining context information, and processing content to be delivered to a user based on such processed context information; and
a mechanism to adjust the distribution of the workload of performing the tasks between the first and second context engines.
Adjusting the workload distribution between the context engines may bring about many advantages (described later) such as processing speed and efficiency, lower power consumption, lower transmission bandwidth required, etc.
Note that the construction of the first and second context engines need not be identical. Furthermore, their capabilities may not be identical. Where their capabilities are not identical, the adjustment of the distribution of the workload is within the limits of the respective capabilities. For example, in some embodiments, the first context engine may be incapable of obtaining context information from sensor devices associated with the mobile device. In this case, the task of obtaining such information may either always be carried out by the second context engine, or it may be omitted when the second context engine does not have capacity to obtain such information. Preferably, the adjustment mechanism is configured to adjust processing workload distribution between the first and second context engines concurrently with processing the workload. This is important as it allows the context engines to dynamically adapt, e.g. to other tasks which the server or mobile device may be required to perform.
The adjustment mechanism can be provided separately from the first and second context engines (e.g. as a unit which monitors the status of both context engines and decides how to distribute workload). Alternatively, the context engines may communicate with each other to distribute work between them. For example, either or both of the context engines may transmit messages to the other, of any one or more of the following types: (i) request the other engine to take over a certain portion of the workload, (ii) indicate that the engine is able to accept a certain task from the other.
A second aspect of the invention proposes in general terms that a context aware content management system is adaptive to gradually improve the quality of its responses to common context information, using an accumulated database of its previous responses to the common context information. Specifically, in the second aspect the invention proposes a context aware content management system which comprises:
a server;
a mobile device; and
a context engine configured to obtain context information, and process content to be delivered to a user based on said context information;
the context engine comprising a context bench-marker arranged to monitor the performance of the context aware content management system, and build up a database summarizing corresponding responses which the content management system has previously made to repeatedly received context information,
wherein the context bench-marker is arranged to use the database to modify adaptively responses by the context aware content management system to new items of context information conforming to said repeatedly received context information.
In a third aspect, the invention proposes in general terms that a context aware content management system prioritizes the delivery to a user of content which does not require any content transformation, so that the user is provided such content while other content is being processed or transformed for delivery.
Specifically, in the third aspect the invention proposes a method of transmitting content from a server to a user via a mobile device operated by the user, the content being in the form of a collection of content items, one or more of said items requiring transformation before delivery to the user, the method
comprising:
a) performing a prioritization of the content items based on estimated relevance of the respective content items to the user;
b) for each content item that requires transformation, estimating a transformation time, which is the time required for transforming the content item;
c) for each content item which does not require transformation, or if the content item has already been transformed, estimating a consumption time, which is the time the user is expected to take to consume the content; and
d) transmitting the items to the user in an order based on the priority, but, in the case of a content item which requires transformation, first transmitting a respective set of one or more said content items which do not require transformation, such that the sum of the estimated consumption times of the set of content items is greater than or equal to the estimated transformation time of said content item which requires transformation.
The three aspects of the invention defined above are independent, in the sense that any one, or any two, of them can be implemented in a single content management and delivery system according to the invention, both with and without the presence of the other(s).
Brief Description of the Drawings
An embodiment of the invention will now be described, by way of example, with reference to the accompanying drawings in which,
Fig. 1 shows a general view of the embodiment including two context engines (one on the device and one on the server), and illustrates the structure of each of the context engines;
Fig. 2 shows the components of a context processor module of the context engine of Fig. 1;
Fig. 3 shows the components of an ontology engine module of the context engine of Fig. 1;
Fig. 4 shows a partial list of the tables and parameters utilised by a reasoning engine within the ontology engine module of Fig. 3;
Fig. 5 shows a table with some parameters utilised by a learning engine within the ontology engine module of Fig. 3;
Fig. 6 shows the components of a data store module of the context engine of Fig. 1;
Fig. 7 shows a basic flowchart of the inputs and outputs to/from a rules engine portion of the embodiment of Fig. 1; and
Fig. 8 shows the components of a rendering engine portion of the embodiment of Fig. 1.
Detailed Description of the Preferred Embodiments
An embodiment of the invention is illustrated in Fig. 1. It is a system comprising at least one content server 108, and at least one mobile device 110 operated by a user. The server 108 has access to content data, which it may store itself or be able to retrieve from other servers.
The server 108 includes a first context engine, and the mobile device 110 includes a second context engine. As described in more detail below, the first context engine and second context engine cooperate to perform a single computational task, and the workload is distributed dynamically between them, so that, as far as a user is concerned, there is only a single context engine which is illustrated as 101 in Fig. 1.
The main functions of the context engine 101 are:
(a) Extracting context information from various sources accessible from the mobile device or elsewhere (such as from sensors, databases, from the server, etc)
(b) Processing the context information
(c) Using the processed context information, including selecting content to deliver to a user, prioritizing and/or transforming the content and rendering it for display to the end user of the device 110, or performing other relevant tasks based on the processed context information.
The context engine 101 comprises component modules which perform the functions described above. The modules include:
• A context processor 102
• An ontology engine 103
• A data store 104
• A communication engine 105
• A rules and rendering engine 106
• A self management engine 107
Note that it is not necessary that both of the first and second context engines have all of these modules, provided that together they form a context engine 101 which has them all. At least the first context engine is connected to one or more context information sources 109, and to the mobile device 110 and to those of other users, so the context engine 101 is illustrated as having such connections. It is also possible that both the context engines can have all the components as listed above.
• Note that in a variation of the embodiment, there may be multiple servers 108, each with its own first context engine, all cooperating together with the second context engine to provide context based content management and delivery to the end user.
Context processor 102
Figure 2 shows the context processor 102 of the context engine 101. The context processor 102 of context engine 101 comprises the following components:
• Context adapters and receivers 201
• A core context processor 202
• A mini ontology engine 203
• A context bench marker 204
• A context data miner 205
• A communication handler 206
The context processor 102 is a component for obtaining and processing context information obtained from various sources. These sources may include dedicated sensors such as temperature sensor(s) or noise level detector(s), or any other information sources such as customer usage history or customer preference settings. The context information could be explicit (e.g. stored as pre-existing fields in the memory of the mobile device, such as user settings) or implicit (e.g. which has to be derived by computation). It could be obtained from the device itself, user settings, other devices, the server, the network, other network devices, the content delivered (for example the content could be a news article, a music audio, a video clip, etc, and such contexts could be obtained from the content itself) or from various other sources. The context processor 102 processes the context information based on various rules and abstracts the context information to various levels as appropriate. If necessary, it may send the context information to another module, the ontology engine 103, for more accurate reasoning. Once context processor 102 has processed the context information, it will trigger the context engine 101 to execute various actions, such as selecting content, prioritizing content and/or transforming content for delivery to users, or taking any other appropriate action based on the processed contexts. Examples of "other appropriate actions" include having the context engine of a server or device initiate, send and receive communication to other devices or servers using SMS or any other means, or invoke one or more specific applications in those devices or servers. It may communicate with server 108 or other devices through communication engine 105 to update the server 108; or receive instructions from the server 108; or send context information to the server 108 for processing; or update / receive information from other devices.
The context processor 102 has the necessary adapters 201 for interfacing with the operating system and other elements of server 108 or mobile device 110. This interface may be used to obtain context information from the server and/or mobile device, send output to the server and/or mobile device, or allow use of the server's and/or mobile device's resources. For example, it can communicate with a Global Positioning System (GPS) receiver within the mobile device 110 and utilise it to obtain context information, such as the user's location. Context information may be extracted from various explicit context information sources such as user profiles, user location, language settings of mobile devices, user settings on mobile devices, device capabilities, temperature sensors, or noise level detectors. Implicit sources of context information may also be used, and these include mobile device usage data such as the history, trend and nature of content requested/delivered. Such data may be gathered from logs which track device usage habits and user-device interactions, either on the device itself or on the server. When context information is collected, one of the adapters 201 within context processor 102 checks the quality level of the context information; if the quality is not acceptable, that particular context information is rejected and not considered for further processing, or further information is obtained regarding it. Context processor 102 communicates with the rules and rendering engine 106 for such quality processing. The rules and rendering engine 106 includes threshold levels for the quality of context information and determines the acceptable quality levels. Based on the quality of context information received, context processor 102 may trigger existing or new adapters 201 to get further context information from the same context information sources.
Using the same example above, location context information from a GPS system may be assessed to be of high quality, and this causes the context engine 101 to communicate with a temperature sensor to obtain a temperature reading (further context information) for the location.
The invention is also capable of performing context information mining on existing context information. A context data miner 205 is used, which includes a data mining program that receives existing context information and analyses it for other related information. For example, the initial context information may be a user's Internet Protocol address. Using this information, the context data miner 205 can further analyse it for information about the user's location, service provider, etc. Many known data mining methods and techniques may be used to implement this component.
The context processor 102 can configure context information sources 109. This configuration process can be done via adapters which support interfacing with a particular context information source. If this is done, the context engine 101 is more likely to be able to obtain context information in a timely manner and in a preferred format.
The core context processor 202 processes context information by working with the mini ontology engine 203, which provides basic reasoning for processing context information based on a set of rules. For advanced context interpretation and reasoning, this mini ontology engine 203 communicates with ontology engine module 103. Core context processor 202 can also pass the context information to server 108 for processing. The context information processed by server 108 can then be sent back to the second context engine or, alternatively, further action based on the processed context information can be taken by server 108. For example, server 108 may select appropriate content on its side and simply deliver the content to the mobile device, instead of sending it the processed context information. The context bench marker 204 is a component which records and benchmarks the context processor's 102 response to processed context information. Through this benchmarking, the common responses to similar processed context information can be identified. The context bench marker 204 is capable of using a database to compare the action taken (e.g. content delivery) based on the processed context information and adapt the actions so as to increase their relevance to the end user.
The response could be a response by the server. For example, if processed context information determines that an automated teller machine (ATM) is near the user's location and it further determines that the funds in the user's smartcard are low, the context processor will generate a response which triggers an alert to the user to remind him to top up the funds in his smartcard. The context bench marker 204 will record this and, in the event that a similar scenario occurs (i.e. causing similar context information to be processed) and a similar response is generated, the context bench marker may classify this response as a standard response to the context information of "ATM nearby" and "smart card funds low". Once this happens, future occurrences of similar scenarios will result in the response being performed automatically, without having to process the context information again. There is no fixed value as to how many times a scenario must occur before the response is considered a standard response, but in general, common scenarios are likely to enable the context engine to formulate standard responses sooner. A scenario with context information "user is near fast food restaurant", "time is now lunch time", "user likes to eat fast food" and response "suggest user to take fast food lunch" will be a lot more common, leading to the response being classified as a standard response sooner. Compare this with another scenario with context information "user likes books from author ABC", "user is near a bookstore", "author ABC's new book is now on sale" and response "suggest user to buy new book"; this response is likely to require a longer time before becoming a standard response.
Alternatively or additionally, the context bench marker 204 takes into account any reaction by the user to content information transmitted to the user by the system in response to the context information, for example using the reaction to form a measure of the relevance of that content. This is done by observing the user's reaction to the standard responses of the system to the context information, utilization of the delivered content, the frequency of content requested or the follow-up actions performed by a user after a given context. With any such information, the context bench marker 204 is able to actively "learn" and optimize the response to a given processed context by modifying the standard response of the system to the context data. In the "fast food" example described earlier, if the user's reaction to the lunch suggestion was positive multiple times, it would reinforce the strength of the standard response. In addition, if it is determined that, upon receiving such a reminder, the user frequently patronises fast food restaurant XYZ, the standard response may be further modified to always recommend fast food restaurant XYZ. In the smart card example, if it is determined that the user never tops up his card (perhaps due to lack of use) despite processing of similar context information and generation of the same response (a reminder to top up the smart card), the standard response may be deleted completely. In the bookstore example, if it is determined that the user constantly visits the library to borrow the suggested book instead of purchasing it, the context bench marker 204 may then modify the standard response to become "suggest user to borrow author ABC's latest book".
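By way of illustration only, the adaptive benchmarking behaviour described above may be sketched as follows. The class name, the promotion threshold and the data values are illustrative assumptions and do not form part of the described embodiment.

```python
class ContextBenchmarker:
    """Records responses to processed context information and promotes
    frequently repeated responses to 'standard responses' that can be
    replayed without reprocessing the context (an illustrative sketch)."""

    def __init__(self, promote_after=3):
        self.promote_after = promote_after  # occurrences before a response becomes standard
        self.history = {}                   # frozenset(context items) -> {response: count}
        self.standard = {}                  # frozenset(context items) -> standard response

    def record(self, context_items, response):
        key = frozenset(context_items)
        counts = self.history.setdefault(key, {})
        counts[response] = counts.get(response, 0) + 1
        if counts[response] >= self.promote_after:
            self.standard[key] = response   # future occurrences are answered automatically

    def lookup(self, context_items):
        return self.standard.get(frozenset(context_items))

    def apply_reaction(self, context_items, positive, replacement=None):
        # A negative user reaction can modify or delete the standard response,
        # as in the smart card and bookstore examples above.
        key = frozenset(context_items)
        if key in self.standard and not positive:
            if replacement:
                self.standard[key] = replacement
            else:
                del self.standard[key]
```

In such a sketch, the "ATM nearby" / "smart card funds low" scenario would be recorded until it crosses the promotion threshold, after which the top-up reminder is replayed automatically; repeated negative reactions would then delete or replace it.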
The user reaction can be obtained explicitly, by asking for his/her feedback on the recommendation made by the context engine, or implicitly, by understanding the user's behaviour (for example, if he/she is not searching for further recommendations, this would mean that they are satisfied with the current recommendation).
Communication handler 206 is a standard communication component which facilitates the communication needs of the context processor 102 with the other relevant engines, such as ontology engine module 103. This communication handler 206 can also be used to establish communication channels to receive context information from context information sources 109 located in the device, in server 108 or elsewhere.
The context processor module 102 controls the overall functioning of the context engine 101. The context processor 102 also allows the first (server- side) and second (device side) context engines to communicate, via communication engine 105. This allows the first and second context engines to exchange data and share processing capabilities.
In this embodiment, all the processing required for obtaining context information, processing context information, selecting, prioritizing and transforming content for delivery to users is distributed between the first and second context engines, for example so that content can be delivered faster by making full use of the available processing capacity of each of the server and mobile device. For example, at a given time, the second context engine may only obtain context information and render the results for display to the user, while leaving all the processing of context information and content data to the first context engine. This places a majority of the computational workload on the server. This means that the mobile device will not need to do much processing, thus consuming less power - a huge benefit for mobile devices with limited energy resources. In another scenario, the mobile device might choose to obtain and process context information and select, prioritize and transform content on the mobile device itself. Apart from possibly receiving context information from server 108, it would mostly be independent from the first context engine on the server 108. Although this places a greater burden on the processing capabilities of the mobile device, it minimises the use of transmission bandwidth between server 108 and the mobile device.
This distribution of processing workload can be controlled by an adjustment mechanism which is separate from the first and second context engines, e.g. by separate adjustment modules on one or both of the server or mobile device. Alternatively, the adjustment mechanism can be a feature of the first and second context engines themselves. The distribution of processing workload can be dynamically performed to suit existing conditions. For example, if the second context engine detects a Wi-Fi broadband connection, it will choose to perform minimal processing and instead let the first (server-side) context engine do most of the processing, before transmitting the content to the user. However, should the Wi-Fi connection suddenly be disconnected, such that the mobile device now has to rely on mobile broadband (which is relatively more expensive and provides less bandwidth), the second context engine will adjust the processing workload accordingly: it will process the remaining context information and content on the mobile device so as to avoid costly mobile broadband charges. This ensures that the processing load is optimally distributed between the mobile device and server 108. If more than two context engines are used (e.g. if there are multiple servers), the processing workload may be further distributed. The context engine 101 is also capable of adjusting its own processing capabilities. Using the same example as above, when the context engine 101 lets the server perform most of the processing, the mobile device's context engine may choose to turn off its idle components, such as the ontology engine 103, which are not required. This reduces the amount of power consumed by the mobile device. Accordingly, when the processing workload is subsequently adjusted so that the mobile device performs all the processing, all the components of the context engine are turned on.
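By way of illustration only, the dynamic workload-adjustment decision described above may be sketched as a simple decision function. The connection labels and the battery threshold are illustrative assumptions, not part of the described embodiment.

```python
def choose_processing_side(connection, battery_level):
    """Return 'server' or 'device' as the side that should carry the bulk
    of the context/content processing (an illustrative sketch)."""
    if connection == "wifi":
        # Cheap, high-bandwidth link: let the server-side (first) context
        # engine do most of the work and save device power.
        return "server"
    if connection == "mobile_broadband":
        # Costly, lower-bandwidth link: process locally to save bandwidth,
        # unless the battery is too low to sustain it (assumed threshold).
        return "device" if battery_level > 20 else "server"
    return "device"  # no connection: only local processing is possible
```

A real adjustment mechanism would of course weigh further context (tariffs, server load, number of context engines), but the shape of the decision is as sketched.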
In some variations of the present embodiment, the server(s) 108 and mobile devices are not limited to having only one context engine 101 each. Different versions of context engines 101 designed for specialist purposes can be invoked based on the context information, such as the nature of content required, extent of processing or transformation of the content required by the client device, type of content and other factors.
Ontology engine module
Figure 3 shows the ontology engine module 103 of an embodiment of the invention. The ontology engine module 103 of context engine 101 comprises the following components:
• A context parser 301
• A reasoning engine 302
• A learning engine 303
• A data source 304
The ontology engine module 103 provides the reasoning mechanism for processing context information. Through processing context information, it also refines the reasoning mechanism, improving its ability to provide better reasoning information. The resultant reasoning information used to process context information is then passed back to the context processor 102. The context parser 301 receives context information from context processor 102 and analyses it. The context information is input to the ontology engine 103, which then refers to reasoning engine 302 to associate the context information with corresponding rules and meaning information. Figure 4 shows a partial data structure (made up of three tables 401, 402 and 403) employed by the ontology engine 103 and the parameters used in the process. The tables include context information, correlation information, meaning related information, abstraction related information and the rules associated with the given set of contexts. Referring to Figure 4, table 401 associates possible contexts (labelled by a "context ID") with respective "correlation IDs". Each row is associated with a given type of context (e.g. temperature), which has "context details" (e.g. temperature of 30 degrees), key words (if any), a "correlated context ID" (the context related to this particular context), a "correlation ID" (the ID which specifies the combination of two related contexts) and a "consolidated correlation ID" (an optional reference to the group of contexts which are related and connected through a common ID, called the consolidated correlation ID). Table 402 associates the correlation IDs with "meaning IDs" and table 403 associates the meaning IDs with meaning details. Given an input element of context information, the reasoning engine 302 will seek to match the input context information with data from table 401 under "context details" and "key words".
If a match is found, the reasoning engine 302 can then obtain the corresponding "correlated context ID", "correlation ID" and "consolidated correlation ID". The "correlated context ID" refers to another context ID which stores an entry related to this one. The inter-context relationships are stored and maintained using the "correlation ID". The "consolidated correlation ID" contains the link to table 402, where the same "consolidated correlation ID" is matched to obtain the "meaning ID". This meaning ID is matched with that in table 403 for the "meaning detail". Once the meaning of a given piece of context information is obtained, it is returned to the context processor 102 or rules engine 106 for further processing. For example, the context processor 102 will send the meaning information to rules engine 106 for retrieving a response corresponding to the meaning information. The learning engine 303 updates the tables used by reasoning engine 302 and the rules used by rules engine 106. The "learning" process may be implemented using a method known in the field or through a process similar to the adaptive context benchmarking described earlier. Figure 5 shows an example of a table used by the learning engine 303 to update rules engine 106. Each row of the table corresponds to a particular instance of content forwarded by the system to the user. The particular action ID to be changed/updated/inserted is input, together with user experience ratings. Based on the implicit and explicit ratings, the system automatically makes an entry in the table giving the relevant changes to be made to the action item. This entry may optionally be reviewed manually as well. A similar method may be used (with different table contents) to update content, rules, processed context or other relevant information in the mobile device or server.
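By way of illustration only, the table-lookup chain of Figure 4 described above may be sketched as follows. The table contents, field names and matching rule below are illustrative assumptions and do not reproduce the actual tables of the embodiment.

```python
TABLE_401 = [  # context table: context details, key words and IDs
    {"context_id": "C1", "context_details": "temperature of 30 degrees",
     "key_words": ["temperature"], "correlated_context_id": "C2",
     "correlation_id": "R1", "consolidated_correlation_id": "CC1"},
]
TABLE_402 = {"CC1": "M1"}           # consolidated correlation ID -> meaning ID
TABLE_403 = {"M1": "warm weather"}  # meaning ID -> meaning detail

def resolve_meaning(context_input):
    """Match the input context against table 401 (context details or key
    words), then follow the consolidated correlation ID through tables 402
    and 403 to obtain the meaning detail (an illustrative sketch)."""
    for row in TABLE_401:
        if (context_input == row["context_details"]
                or any(kw in context_input for kw in row["key_words"])):
            meaning_id = TABLE_402.get(row["consolidated_correlation_id"])
            return TABLE_403.get(meaning_id)
    return None  # no match: the context cannot be interpreted locally
```

The returned meaning detail corresponds to what is passed back to the context processor 102 or rules engine 106 for further processing.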
The learning engine of the ontology engine upgrades the action IDs of the rules engine based on this exercise, as the rules engine does not update its actions by itself.
Data store module
Figure 6 shows the components of a data store module 104 of this embodiment of the invention. Data store module 104 stores content to be delivered and meta data of the content. Meta data comprises short tags which describe the content it is tagged to. The data store module may be located in the context engine on the server side and/or the mobile device side, and can interact with and share information with the server and other mobile devices. The data store module 104 can store and manage all the data utilised by the context engine 101. This may include the context information used by context processor 102, the tables used by ontology engine 103 or the rules tables used by the rules and rendering engine 106. It may receive and send data from/to internal and external entities such as server 108, users, context processor 102, ontology engine 103, etc.
The components of data store module 104 include:
• An embedded database 601
• A data cache and data wrapper 602
• A database engine 603
Embedded database 601 stores, manages and processes data used by context engine 101.
Data cache and data wrapper 602 allows fast and efficient data retrieval. The data wrapper portion provides a method of packaging and providing data based on the type of data request.
The database interface engine 603 functions as an interface for storing and retrieving data from embedded database 601.
As mentioned previously, data store module 104 may store the actual content, meta data of the content or the context information of the content. Data store 104 also has the ability to communicate with the source of the content (if stored externally) through communication engine 105 to receive either the content or the context of the content, or to pass back the context information gathered on the device side. This helps in preparing and/or selecting appropriate content to be exchanged between server and mobile device.
Data store 104 may also subscribe or obtain relevant information from data sources based on context information. It may also generate queries for extracting content from data sources including third party data sources based on the context. The queries may contain meta data defining specific data requirements. If so, such meta tags should be matched with the meta tags of content to ensure that relevant content is retrieved. The degree of matching can be specified based on the context as well as the availability of the content. It could be perfectly matched or partially matched. Such generated queries may be stored against a particular context scenario, so that it may be reused for the same scenarios. The data store may store the array of such queries and associated context scenario for future retrieval as appropriate.
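By way of illustration only, the context-driven query generation and meta-tag matching described above may be sketched as follows. The cache structure, the matching rule and the notion of a numeric matching degree (1.0 for a perfect match, lower values for partial matches) are illustrative assumptions.

```python
query_cache = {}  # context scenario -> previously generated query, for reuse

def build_query(scenario, required_tags, min_match=1.0):
    """Build (or reuse from the cache) a query whose meta tags must match
    a content item's meta tags to the given degree (illustrative sketch)."""
    if scenario in query_cache:
        return query_cache[scenario]  # reuse the stored query for the same scenario
    query = {"tags": sorted(required_tags), "min_match": min_match}
    query_cache[scenario] = query
    return query

def matches(query, content_tags):
    """True when the fraction of query meta tags present in the content's
    meta tags reaches the required matching degree."""
    if not query["tags"]:
        return True  # a query with no tag requirements matches anything
    hits = sum(1 for t in query["tags"] if t in content_tags)
    return hits / len(query["tags"]) >= query["min_match"]
```

A perfect match corresponds to `min_match=1.0`; relaxing it models the partial matching permitted when content availability is limited.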
Communication engine module
Communication engine 105 is the component for managing all forms of communication with server 108, other mobile devices or any other device which context engine 101 may need to communicate with (e.g. sensor devices). It has in-built communication handlers which may utilize various communication protocols to perform such communication. A client-side context engine 101 communicates with its user and server 108 and may also communicate with other users, devices and servers to transmit/receive context information.
Rules and rendering engine module
The rules and rendering engine module 106 comprises the rules engine and rendering engine. Although these engines are arranged within a single module in this embodiment, they may be arranged separately and still perform the same functions.
Rules engine -
The rules engine receives input in the form of processed context or meaning information (which most likely comes from context processor 102, ontology engine 103 or server 108) and further processes it to derive resultant actions for context engine 101. These resultant actions are based on the rules defined in a context rule set which is stored within data store module 104 or within server 108. The resultant actions performed by the rules engine include identifying and delivering data to the mobile device or other devices, triggering transformation of the content for subsequent rendering, or any action as defined within the rule set.
The rules engine may also include rules to weigh and prioritize context information and context information sources, and rules to define context events based on parameters of context information sources or the context requestor.
Figure 7 shows a basic flowchart of the inputs and outputs to/from the rules engine. The inputs include rule sets and context information. Context information may be provided in many forms - for example, the device context, sender context, receiver context, context of the content, etc. The rule sets may be represented in the form of tables, which contain the actions to take for the different context inputs. More than one action item may be returned by the rules engine. The action items can be selected and carried out by the core context processor 202. The rules engine may also provide the content itself instead of action items. If the rules engine provides the actual content, it may provide the absolute content or a set of content tags. Further, it may also return a content description which references absolute content, or a list of content tags which can be sent to a content manager for further retrieval.
The rules engine selects action items to return based on the processed context ID(s). In turn, these action items may invoke further action on server 108, other mobile devices, or devices related to the context engine. An action may also include generating alerts if the value of context information exceeds a threshold value. For example, if the input context information includes a temperature reading sent by a context information source, and the reading is more than 60 degrees Celsius, the action item generated may be an alert. Apart from generating alerts, the rules engine can also contain rules and corresponding action items to instruct or control other devices or context information sources.
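By way of illustration only, a threshold-based rule of the kind described in the temperature example above may be sketched as follows. The rule table layout and action text are illustrative assumptions.

```python
RULES = [
    # (context type, threshold, action item returned when the reading exceeds it)
    ("temperature", 60, "alert: temperature above safe level"),
]

def evaluate(context_type, value):
    """Return the action items triggered by one piece of context
    information against the rule set (an illustrative sketch)."""
    return [action for ctype, threshold, action in RULES
            if ctype == context_type and value > threshold]
```

A temperature reading of 65 degrees Celsius would thus return the alert action item, while a reading of 30 degrees would return no action.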
In some embodiments of the invention, the action items may include further features such as security. For example, if given context information is considered confidential, the action item may indicate so. When such an indication is detected, the content may be encrypted, the context query may be erased from memory when it has been completed, additional authentication may be enforced, or any other appropriate action may be taken.
In addition to the above components, a reporting engine produces various reports based on the available information, including trend analysis of context information, a log of actions taken, the learning engine's action modification activities, etc. The information collector receives the information required for report generation purposes and the analyzer helps to process the data. The report generator considers the processed data and generates various reports using the available templates.
Rendering engine -
Figure 8 shows the components of a rendering engine of a preferred embodiment of the invention. The components include:
• A rendering core 801
• A data interface 802
• A content and transformation engine 803
• A rendering rules engine 804
The rendering core 801 renders content and other information to end users. The type of rendering performed is based on context information which may include device characteristics, user preferences or trend of user experience. The main job of the rendering engine is to select, prioritize and transform the content for delivery to users.
The rendering engine may render content to its own mobile device or other mobile devices. While rendering information to other mobile devices, the rendering engine may send content as is, or send it to the rendering engine of the target mobile device for rendering. In addition, content may be transformed based on the target mobile device's characteristics before it is delivered to the target.
As mentioned, the rendering engine selects and prioritizes the content for delivery to users. Based on the context information, appropriate content is selected and queued/prioritised so as to deliver content which is more relevant to the end user first, with or without other transformation of the content. As with the other components, the content transformation engine 803 may also communicate with the server 108 to transform content when necessary.
Notes regarding the algorithm:
Pre processing -
Construct two priority queues from the original content (this can be online or from fixed content): one for content which needs transformation (PQ1) and one for content which does not require transformation (PQ2). Each node in a priority queue (PQ1/PQ2) has a local priority in addition to a priority corresponding to the overall content.
The PQ1 top node has top priority among all content elements which need transformation.
The PQ2 top node has top priority among all content elements which do not need transformation.
Algorithm:
Struct ContentNode {
    long priority;
    long transformation_time;
    long consumption_time;
    Struct ContentNode *left;
    Struct ContentNode *right;
} CONTENTNODE;

Input: Two priority queues (PQ1 and PQ2)

1. Pop the top element from PQ1 and PQ2:
   If ( (PQ1.empty() == TRUE) && (PQ2.empty() == TRUE) ) {
       End of algorithm
   }
   Else If ( PQ2.empty() == TRUE ) {
       NP1 = PQ1.pop();
       Deliver NP1 after transformation.
       Repeat step 1.
   }
   Else If ( PQ1.empty() == TRUE ) {
       NP2 = PQ2.pop();
       Deliver NP2.
       Repeat step 1.
   }
   Else {
       NP1 = PQ1.pop();
       NP2 = PQ2.pop();
   }
2. Find out which node has the highest priority in the original content:
   i.   If ( NP2.priority > NP1.priority ) { Deliver NP2; }
   ii.  Else {
            Send NP1 for transformation.
            L_transtime = NP1.transformation_time;
            While ( (L_transtime >= 0) && (PQ2.empty() == FALSE) ) {
                Deliver NP2;
                L_transtime = L_transtime - NP2.consumption_time;
                NP2 = PQ2.pop();
            }
            If ( NP2 != NULL ) {
                PQ2.push(NP2);
            }
            // Transformation of NP1 is not done but all content in PQ2 is consumed
            If ( L_transtime > 0 ) {
                Stop transformation of NP1;
                PQ1.push(NP1);
                Go to step 1.
            }
        }
   iii. Once transformation of NP1 is done, set its transformation time to zero and push it to PQ2:
            NP1.transformation_time = 0;
            PQ2.push(NP1);
3. Repeat step 1.
Essentially, the goal of the algorithm is to queue the content items such that the most relevant content will be displayed first. If any of them require transformation, the next content items which do not require transformation will be delivered while transformation of the most relevant content item takes place. Once it has been transformed, it is delivered to the user and the process repeats itself.
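By way of illustration only, the algorithm above may be expressed as the following simplified, runnable sketch. The ContentNode fields mirror the pseudocode (the left/right pointers are omitted); transformation is modelled simply as delivering PQ2 items whose combined consumption time covers the transformation time, and the function returns the resulting delivery order. All names and values are illustrative assumptions.

```python
import heapq

class ContentNode:
    def __init__(self, name, priority, transformation_time=0, consumption_time=1):
        self.name = name
        self.priority = priority
        self.transformation_time = transformation_time
        self.consumption_time = consumption_time

    def __lt__(self, other):
        # heapq is a min-heap, so invert the comparison: the highest-priority
        # (most relevant) node sits at the top of the queue.
        return self.priority > other.priority

def schedule(pq1_items, pq2_items):
    """Return content item names in delivery order: PQ1 holds content
    needing transformation, PQ2 holds content deliverable as-is."""
    pq1, pq2 = list(pq1_items), list(pq2_items)
    heapq.heapify(pq1)
    heapq.heapify(pq2)
    delivered = []
    while pq1 or pq2:
        if not pq1:                       # nothing left to transform
            delivered.append(heapq.heappop(pq2).name)
        elif not pq2:                     # only transformable content left
            delivered.append(heapq.heappop(pq1).name)  # delivered after transformation
        else:
            np1, np2 = pq1[0], pq2[0]
            if np2.priority > np1.priority:
                delivered.append(heapq.heappop(pq2).name)
            else:
                # Transform NP1 while delivering PQ2 content to fill the wait.
                heapq.heappop(pq1)
                remaining = np1.transformation_time
                while remaining > 0 and pq2:
                    nxt = heapq.heappop(pq2)
                    delivered.append(nxt.name)
                    remaining -= nxt.consumption_time
                np1.transformation_time = 0  # transformed: now deliverable as-is
                heapq.heappush(pq2, np1)
    return delivered
```

For example, a top-priority item with a transformation time of two units would be delivered after two lower-priority items that need no transformation, matching the behaviour described above.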
In addition to the above, it is preferable that at least a few content items which do not require transformation are constantly queued in a buffer as backup content for rendering, in case the user requests more content in a shorter time than estimated by the content consumption time. The buffered content ideally has a total consumption time of at least a few times (a factor n) that of the present content being transformed. The factor n can be determined by user settings, context engine 101 or instructions from server 108. This backup content is also useful in the case where the user terminates use of the current content and requests the next content to be delivered. Delivered content may also be flagged to ensure that the same content is not considered when the algorithm is run again within the same session. The maximum size of the table may be set up during an initial set-up process, and the table size could be determined by the breadth of context considered, i.e. device level/content level/server level/user preference level context. Apart from the above-mentioned algorithm, the context engine 101 can be customized to dynamically reprioritize the content based on the current context.
Content transformation may occur either client side or server side. If content transformation is done server side, the content can be transmitted to the server in advance, so as to ensure that the processing pipeline of the server is constantly occupied and less time is lost idling. While this prioritizing process may be run on the server side or mobile device side, it is preferable for the server to run it, since the server is likely to possess more content and have access to more content than a mobile device. Additionally, servers in general have a lot more processing capability and fewer limitations than mobile devices for computationally intensive functions like these.
One type of transformation done is summarizing or expanding the content. For example, content may be condensed to become concise or expanded to become comprehensive. Such a rendering may take place if context information indicates that the user is a high-level busy executive (who typically only needs to know the important points) or a student (who may prefer more comprehensive and varied information). Furthermore, it is also possible to augment output content with more content, again based on the user requirements or the context of the user of the mobile device. In another example, the content may be reformatted for it to be suitable for output to a mobile device. If a mobile device were only capable of processing WAV format sound files, but the content is provided in MP3 sound format, the rendering engine may reformat the output to make it compatible with the device's capabilities.
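By way of illustration only, the context-driven choice between condensed and comprehensive content described above may be sketched as follows. The profile labels and the word limit are illustrative assumptions.

```python
def transform(content, user_profile):
    """Condense content for time-pressed users; pass it through unchanged
    for users who prefer comprehensive information (illustrative sketch)."""
    words = content.split()
    if user_profile == "executive" and len(words) > 10:
        return " ".join(words[:10]) + " ..."  # concise summary of the key points
    return content                             # e.g. student profile: full detail
```

An analogous branch could reformat the content's media type (e.g. MP3 to WAV) based on device-capability context rather than user-profile context.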
The rendering engine can cache additional content within the mobile device or in the server 108, and simply render reference links for the cached content or the source content together with the delivered content. Doing this allows the user to minimise transmission bandwidth usage. Furthermore, the linked content is only accessed/retrieved when the user wants it. Having the actual content cached on the mobile device or server 108 allows the content to be loaded faster in subsequent content requests. Content can also be dynamically constructed, by combining various content pieces and organizing/prioritizing them based on context information. Again, the rendering engine may do this transformation locally (on its own mobile device) or externally (on the content server). The rendering engine also keeps track of any problems, bugs, performance parameters and other user satisfaction parameters for subsequent usage in reporting, providing feedback to server 108 for improvement of the content or application, or for other purposes. The rendering engine may value the context and content, and combinations thereof, based on customer satisfaction, user experience and other context information, and map them to calculate, dynamically, the value of the content delivered. This information can be used for valuation of content, for redeeming loyalty points, as a basis for charging customers, as a basis for commercial arrangements with content providers, as a source for a management information system, as a mechanism for customer retention initiatives and for various other purposes. The data interface 802 interfaces with context processor 102 to get the context information, rules engine 106 to get the meaning information and data store 104 to manage the contents. It may also be used to receive content from server 108 or other devices through the communication engine 105.
Rendering rules engine 504 maintains the rules to be followed for rendering data based on context information. These rules may be improved and evolved by the engine, starting from the base rules from the rules engine or server 108, by creating new rules or modifying existing ones. They may also be updated by the learning engine 303 within the ontology engine 103, as described earlier.
Self management engine module
The self management engine module 107 communicates with server 108 to update and maintain context engine 101. For example, if a new version of the context engine 101 is available, the self management engine 107 may proceed to upgrade the context engine 101 on the mobile device.
Self management engine 107 also executes updates to the various modules of the context engine 101. It is configured to regularly communicate with server 108 to check for updates; when relevant updates are found, it downloads them from the server 108 and applies them to the corresponding modules.
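A hedged sketch of this update cycle follows: installed module versions are compared against those the server reports, and any newer ones are applied. The module names and version numbers below are invented for illustration:

```python
# Sketch of a self-management update check: upgrade any module whose
# installed version is older than the version available on the server.

def check_and_apply_updates(installed, available):
    """Return the modules that were upgraded to the server's versions."""
    upgraded = {}
    for module, server_version in available.items():
        if installed.get(module, 0) < server_version:
            installed[module] = server_version   # apply the downloaded update
            upgraded[module] = server_version
    return upgraded

installed = {"context_processor": 2, "rendering_engine": 1}
server_versions = {"context_processor": 2, "rendering_engine": 3}
upgraded = check_and_apply_updates(installed, server_versions)
# only the out-of-date rendering_engine module is upgraded
```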
Having now fully described the invention, it should be apparent to one of ordinary skill in the art that many modifications can be made hereto without departing from the scope as claimed.

Claims

1. A context aware content management system which comprises:
a server;
a mobile device operative for communication with the server;
a first context engine installed in the server;
a second context engine installed in the mobile device, wherein the first and second context engines are configured to perform the tasks of: obtaining context information, processing context information and delivering content to a user based on said processed context information; and
a mechanism to adjust the distribution of the workload of performing the tasks between the first and second context engines.
2. A context aware content management system according to claim 1, wherein the first and second context engines are configured to adjust processing workload distribution between the first and second context engines concurrently with processing the workload.
3. A context aware content management system according to claim 1 or claim 2, wherein at least one of the first and second context engines is adapted to control processing by the other context engine.
4. A context aware content management system according to any of claims 1 to 3, wherein the first and second context engines are further configured to select, prioritize and transform content for delivery to a user of the mobile device.
5. A context aware content management system according to any of claims 1 to 4, wherein the context engines further comprise a context processor and a context bench-marker; the context processor being arranged to process the obtained context information; the context bench-marker being arranged to monitor the performance of the context aware content management system, and build up a database summarizing corresponding responses which the system has previously made to repeatedly received context information,
wherein the context bench-marker is arranged to use the database to modify adaptively responses by the context aware content management system to new items of context information conforming to said repeatedly received context information.
6. A context aware content management system according to claim 5, wherein the context bench-marker optimises the responses to new items of context information conforming to said repeatedly received context information, by monitoring the relevance to the user of the previous responses, and modifying the content delivery system to increase said relevance.
7. A context aware content management system according to any
preceding claim, for delivering the content in the form of a collection of content items, one or more of said items requiring transformation before delivery to the user, the system being configured to:
a) perform a prioritization of the content items based on estimated relevance of the respective content items to the user;
b) for each content item that requires transformation, estimate a transformation time which is a time required for transforming the content item; c) for each content item which does not require transformation, or if the content item has already been transformed, estimate a consumption time, which is a time the user is expected to take to consume the content; and
d) transmit the items to the user in an order based on the priority, but, in the case of a content item which requires transformation, first transmitting a set of one or more said content items which do not require transformation, such that the sum of estimated consumption time of the set of content items is greater than or equal to the estimated transformation time of said content item which requires transformation.
8. A mobile device for communicating with a server, the mobile device including a context engine configured to obtain context information, process the context information and deliver the content received from the server to the user or take any other appropriate action based on said context information;
the context engine being configured to cooperate with a corresponding context engine provided on the server, the mobile device containing a mechanism for redistributing processing workload between said context engines.
9. A server for communicating with a mobile device, the server including a context engine configured to obtain context information, process the context information and deliver content to the device based on said context information; the context engine being configured to cooperate with a corresponding context engine provided on the mobile device,
the server containing a mechanism for redistributing processing workload between said context engines.
10. A context aware content management system which comprises:
a server;
a mobile device; and
a context engine configured to obtain context information, process the context information and deliver content to a user based on said context information;
the context engine comprising a context bench-marker arranged to monitor the performance of the context aware content management system, and build up a database summarizing corresponding responses which the context aware content management system has previously made to repeatedly received context information,
wherein the context bench-marker is arranged to use the database to modify adaptively responses by the context aware content management system to new items of context information conforming to said repeatedly received context information.
11. A context aware content management system according to claim 10, wherein the context bench-marker improves the responses of the context aware content management system to new items of context information conforming to said repeatedly received context information, by monitoring the relevance to the user of the previous responses, and modifying the context aware content management system to increase said relevance.
12. A method of transmitting content from a server to a user via a mobile device operated by the user, the content being in the form of a collection of content items, one or more of said items requiring transformation before delivery to the user, the method comprising:
a) performing a prioritization of the content items based on estimated relevance of the respective content items to the user;
b) for each content item that requires transformation, estimating a transformation time which is a time required for transforming the content item; c) for each content item which does not require transformation, or if the content item has already been transformed, estimating a consumption time, which is a time the user is expected to take to consume the content; and
d) transmitting the items to the user in an order based on the priority, but, in the case of a content item which requires transformation, first transmitting a respective set of one or more said content items which do not require transformation, such that the sum of estimated consumption time of the set of content items is greater than or equal to the estimated transformation time of said content item which requires transformation.
13. A method according to claim 12 in which the set of content items is composed of content items which do not require transformation and which have highest priority, the number of content items in the set being the minimum number such that the sum of the estimated consumption times of the set of content items is at least equal to the transformation time of said content item which requires transformation.
14. A method according to claim 12 wherein the steps a to d comprise:
aa) allocating the content items into one of two priority queues (PQ1 and PQ2) indicating whether the content items in each queue require transformation (PQ1) or do not require transformation (PQ2);
bb) performing a prioritization of the content items based on estimated relevance of the respective content items to the user within both priority queues; cc) transmitting the items to the user in an order based on the priority, wherein the order of items transmitted is determined by the following steps:
1. Determine whether any of the priority queues are empty and if both are empty, the algorithm ends; or, if one of the priority queues is empty, obtain the most relevant item from the other priority queue, transform the content if required and deliver it; or, if both priority queues have items, obtain the most relevant items from PQ1 and PQ2 and proceed to step 2;
2. Determine which item from PQ1 or PQ2 is more relevant and if the item from PQ2 is more relevant, deliver that item; otherwise, perform transformation on the item from PQ1, and deliver the most relevant content items from PQ2 until the transformation of the item from PQ1 is completed;
3. If the transformation of the item from PQ1 is completed before all the content items in PQ2 are delivered, the transformed item is inserted into PQ2 and the algorithm is repeated from step 1; otherwise, transformation of the item from PQ1 is stopped, the item is reinserted into PQ1, and the algorithm is repeated from step 1.
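The two-queue scheduling described in claims 12 to 14 can be sketched as a small simulation. This is an illustrative sketch, not the claimed implementation: heaps model the two priority queues, and transformation overlap is modelled by accumulating the consumption times of items delivered from PQ2:

```python
import heapq

def deliver(pq1, pq2):
    """Simulate the delivery order of the two-priority-queue algorithm.

    pq1: items requiring transformation, as (relevance, name, transform_time)
    pq2: ready items, as (relevance, name, consume_time)
    Higher relevance is more relevant. Returns the delivery order of names.
    """
    # heapq is a min-heap, so relevance is negated to pop the most relevant.
    q1 = [(-r, n, t) for r, n, t in pq1]
    q2 = [(-r, n, c) for r, n, c in pq2]
    heapq.heapify(q1)
    heapq.heapify(q2)
    order = []
    while q1 or q2:
        if not q1:                        # only ready items remain
            order.append(heapq.heappop(q2)[1])
        elif not q2:                      # only transformable items remain
            order.append(heapq.heappop(q1)[1])  # transform, then deliver
        elif q2[0][0] <= q1[0][0]:        # PQ2 head is at least as relevant
            order.append(heapq.heappop(q2)[1])
        else:
            # Transform the PQ1 head while delivering PQ2 items whose total
            # consumption time covers the transformation time.
            neg_r, name, t_time = heapq.heappop(q1)
            elapsed = 0
            while q2 and elapsed < t_time:
                _, n2, c2 = heapq.heappop(q2)
                order.append(n2)
                elapsed += c2
            if elapsed >= t_time:
                # Transformation finished: item joins the ready queue
                # (consumption time simplified to 0 for this sketch).
                heapq.heappush(q2, (neg_r, name, 0))
            else:
                # PQ2 ran dry first: stop the transformation and requeue.
                heapq.heappush(q1, (neg_r, name, t_time))
    return order

# Item A needs 5 time units of transformation; B and C (relevance 8 and 7)
# are delivered while A is transformed, then A, then D.
sequence = deliver([(9, "A", 5)], [(8, "B", 3), (7, "C", 3), (5, "D", 2)])
```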
PCT/SG2009/000314 2009-09-04 2009-09-04 A context aware content management and delivery system for mobile devices WO2011028177A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SG2009/000314 WO2011028177A1 (en) 2009-09-04 2009-09-04 A context aware content management and delivery system for mobile devices

Publications (1)

Publication Number Publication Date
WO2011028177A1 (en) 2011-03-10

Family

ID=43649531

Country Status (1)

Country Link
WO (1) WO2011028177A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040001476A1 (en) * 2002-06-24 2004-01-01 Nayeem Islam Mobile application environment
US20090070415A1 (en) * 2006-07-31 2009-03-12 Hidenobu Kishi Architecture for mixed media reality retrieval of locations and registration of images

Cited By (4)

Publication number Priority date Publication date Assignee Title
US9064273B2 (en) 2004-04-23 2015-06-23 Jpmorgan Chase Bank, N.A. System and method for management and delivery of content and rules
WO2015080735A1 (en) * 2013-11-27 2015-06-04 Intel Corporation Contextual power management
US10074155B2 (en) 2016-06-10 2018-09-11 Apple Inc. Dynamic selection of image rendering formats
US10586304B2 (en) 2016-06-10 2020-03-10 Apple Inc. Dynamic selection of image rendering formats

Similar Documents

Publication Publication Date Title
US11539809B2 (en) Push notification delivery system with feedback analysis
US9900398B2 (en) Apparatus and method for context-aware mobile data management
US8081955B2 (en) Managing content to constrained devices
US7725508B2 (en) Methods and systems for information capture and retrieval
US8849854B2 (en) Method and system for providing detailed information in an interactive manner in a short message service (SMS) environment
WO2017167121A1 (en) Method and device for determining and applying association relationship between application programs
US8904274B2 (en) In-situ mobile application suggestions and multi-application updates through context specific analytics
JP2010517165A (en) Mobile device management proxy system
CN102567091A (en) Electronic communications triage
US9531827B1 (en) Push notification delivery system with feedback analysis
Al-Masri et al. MobiEureka: an approach for enhancing the discovery of mobile web services
US20160239533A1 (en) Identity workflow that utilizes multiple storage engines to support various lifecycles
US7680888B1 (en) Methods and systems for processing instant messenger messages
EP2336902B1 (en) A method and system for improving information system performance based on usage patterns
WO2011028177A1 (en) A context aware content management and delivery system for mobile devices
Ganchev et al. Smart recommendation of mobile services to consumers
Song et al. Intelligent smart cloud computing for smart service
US6681367B1 (en) Objects with self-reflecting object relevance functions
WO2023075774A1 (en) Machine learning techniques for user group based content distribution
CN116932914A (en) Multi-model personalized recommendation method, system, terminal equipment and storage medium
Timmins et al. Using Future Context in Personal Information Retrieval
Timmins et al. Delivering Relevant and Useful Information with IMPACT

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 09849058; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 040712)

122 Ep: pct application non-entry in european phase
Ref document number: 09849058; Country of ref document: EP; Kind code of ref document: A1