WO2009002831A2 - Server-assisted and peer-to-peer synchronization - Google Patents
- Publication number
- WO2009002831A2 (PCT/US2008/067636)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- blob
- endpoint
- metadata
- endpoints
- implementations
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/178—Techniques for file synchronisation in file systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1061—Peer-to-peer [P2P] networks using node-based peer discovery mechanisms
- H04L67/1063—Discovery through centralising entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1074—Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L7/00—Arrangements for synchronising receiver with transmitter
Definitions
- Data may be synchronized between different endpoints, such as between different computing devices, in a wide variety of ways and using a wide array of connection topologies.
- Some systems may rely on endpoints synchronizing with a single centralized endpoint (or with multiple centralized endpoints), such as computing servers.
- endpoints might communicate directly with each other in a variety of ways, including in one or more decentralized or peer-to-peer topologies.
- Different endpoints and different communication topologies may each have their own advantages and disadvantages.
- a system in which endpoints synchronize data directly with each other may have a variety of advantages.
- endpoints might be able to communicate and synchronize data with other endpoints that are physically or logically "close” without being required to first (or perhaps to ever) synchronize data with a central server that might be located farther away than the synchronizing endpoints.
- When endpoints can communicate with a variety of other endpoints, a single point of failure - as might exist if a centralized server endpoint is required - may be eliminated.
- at least a peer-to-peer synchronization system may be easier to set up or configure, if, for example, such set up doesn't require configuring one or more servers that have particular requirements such as always needing to be available to service requests from other endpoints or being able to scale to provide additional capacity.
- decentralized systems may also have disadvantages, including at least some disadvantages that may in some cases be corrected or provided for in a topology that uses one or more centralized endpoints.
- peer-to-peer endpoints may not always be available or able to synchronize data (in contrast to at least some "server" endpoints, which might be configured to be "always available"), and the lack of an available endpoint may sometimes mean that data is not synchronized until or unless a particular endpoint is available.
- Another potential disadvantage may relate to determining how to efficiently synchronize data, particularly large amounts of data, between a set of endpoints connected in, perhaps, an arbitrary topology.
- determining which endpoints should synchronize with which other endpoints, what data should be synchronized, and so on is not a trivial problem, and solutions may be computationally intensive to find, may be sub-optimal (which may lead in turn to problems such as transferring more data than is necessary), and so on.
- Another potential disadvantage of decentralized systems is that information about the system of endpoints - such as characteristics of the endpoints, a holistic view of the data communicated by particular or all endpoints, and so on - may be more difficult to gather when no one endpoint, or a subset of endpoints, necessarily synchronizes or has access to all or at least a significant amount of the synchronized data.
- Yet another potential disadvantage may be related to applying security policies or other security-related functionality without necessarily being able to fully trust any particular peer machine; at least some of such security issues may in some implementations be ameliorated with the existence of one or more centralized and perhaps trusted endpoints.
- some subset of synchronized data may be communicated to a centralized endpoint, while another subset of the synchronized data may be communicated in a decentralized or peer-to-peer fashion directly with other endpoints.
- a variety of cooperative and potentially beneficial functionality may be implemented on particular endpoints - including on centralized endpoints - to assist in the synchronization of data between endpoints.
- FIG. 1 illustrates an exemplary system in which data may be synchronized and shared in both centralized and peer-to-peer topologies.
- FIG. 2 illustrates an exemplary generalized operational flow including various operations that may be performed by an endpoint, including a device endpoint, when changing data that is synchronized and making such a change available to other endpoints.
- FIG. 3 illustrates an exemplary generalized operational flow including various operations that may be performed by an endpoint, including a device endpoint, when synchronizing data from another endpoint.
- FIG. 4 illustrates an exemplary generalized operational flow including various operations that may be performed to retrieve or download a blob.
- FIG. 5 illustrates an exemplary generalized operational flow including various operations that may be performed when maintaining and providing metadata.
- FIG. 6 illustrates an exemplary generalized operational flow including various operations that may be performed when storing and providing locations and locators associated with blobs referenced by metadata.
- FIG. 7 illustrates an exemplary generalized operational flow including various operations that may be performed when storing blobs in a cache and providing blobs from a cache.
- FIG. 8 illustrates an exemplary generalized operational flow including various operations that may be performed when storing blobs in cloud storage and providing blobs from cloud storage.
- FIG. 9 illustrates an exemplary generalized operational flow including various operations that may be performed when retrieving a blob without the use of full metadata or at least some of the functionality provided by an exemplary server endpoint.
- FIG. 10 illustrates an exemplary computing environment in which the various technologies described herein may be implemented.
- the exemplary system 100 shown in FIG. 1 is a system in which data may be synchronized and shared in both centralized and peer-to-peer topologies.
- the exemplary system 100 contains server endpoint 110, device endpoint A 160, device endpoint B 170, and device endpoint C 180.
- the exemplary server endpoint 110 is shown as containing storage service 120, blob lookup service 125, blob cache service 130, cloud storage service 135, notification service 140, knowledge store module 145, and blob download module 150.
- the exemplary device endpoint A 160 is shown as containing blob download module 162.
- This description of FIG. 1 may be made with reference to other figures. However, it should be understood that the elements described with reference to FIG. 1 are not intended to be limited to being used with the elements described with reference to other figures.
- data may be synchronized between any two, or more, endpoints.
- data might be synchronized between the device endpoint A 160 and the server endpoint 110.
- the same or other data might then be synchronized between the server endpoint and the device endpoint B 170.
- the device endpoint B might then in turn in some cases synchronize data with device endpoint C 180.
- particular endpoints may only synchronize with particular other endpoints, while in other implementations endpoints may synchronize with a wide variety of endpoints, including in some implementations with all other endpoints.
- SSE: Simple Sharing Extensions
- RSS: Really Simple Syndication
- Atom: the Atom document format
- endpoints may make data available to other endpoints for synchronization by providing a feed document that includes both SSE information and at least a portion of the synchronized data.
- the endpoint to which the information is being synchronized might obtain the feed document and interpret its contents using, among other techniques, SSE processes for merging information from feeds with information in one or more data stores.
- the endpoint to which the information is synchronized may generally update its own copy or version of the synchronized data, and thereby complete the synchronization of data from the first endpoint to the second endpoint.
- the device endpoint A 160 might make an RSS feed that includes SSE information available to the server endpoint 110 so that information maintained or accessed by the device endpoint A can be synchronized to the server endpoint.
- the server endpoint might obtain the feed and merge the contents of the feed - possibly using an "SSE merge" - with data maintained by the server endpoint. After such a merge, at least some of the data provided by the device endpoint A may have been incorporated into data maintained by the server endpoint.
- additional changes to data made on the device endpoint A may also be synchronized to the server endpoint in a similar fashion: the additional change may be made on the device endpoint A, the device endpoint A may update its provided feed to include the additional change, and then the server endpoint may process the updated feed and incorporate the additional change.
- the server endpoint may in turn update its own feed to include both the change or updated data and corresponding updates to the SSE information in the feed.
- Another endpoint that synchronizes this information from the server endpoint, including the device endpoint A, may then obtain the feed and merge the changes (perhaps performing the same or similar merge processing as is executed by the server endpoint to incorporate changes from the device endpoint A).
- two endpoints may synchronize the same data with each other - so that either endpoint may make changes to the data, for example - by each making the data available and by each updating their own local data when the data provided by the other endpoint changes.
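The two-way merge described above can be sketched as follows. This is a simplified illustration, not the actual SSE merge algorithm: items are plain dicts keyed by an item id, and a bare version number stands in for SSE's version-plus-update-history conflict rules.

```python
def sse_merge(local, remote):
    """Merge remote feed items into a local item store.

    Each store maps item id -> item; when both sides hold an item,
    the copy with the higher version number wins (a simplified
    stand-in for SSE's conflict-resolution rules).
    """
    merged = dict(local)
    for item_id, remote_item in remote.items():
        local_item = merged.get(item_id)
        if local_item is None or remote_item["version"] > local_item["version"]:
            merged[item_id] = remote_item
    return merged

# Hypothetical stores: the server and device endpoint A each hold a
# copy of the synchronized data; merging incorporates whichever
# change is newer, item by item.
server = {"c1": {"version": 1, "data": "phone: 555-0100"}}
device = {"c1": {"version": 2, "data": "phone: 555-0199"},
          "c2": {"version": 1, "data": "new contact"}}
merged = sse_merge(server, device)
```

Because the merge is symmetric, either endpoint can run it against the other's feed, which is what allows the same technique to work both through a server and directly between peers.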
- multiple endpoints may be able to synchronize with a single endpoint, such as the server endpoint 110.
- the device endpoint B 170 might synchronize the same (or some other) data that is synchronized between the device endpoint A 160 and the server endpoint 110.
- the device endpoint B might synchronize the same data by retrieving a feed provided by the server endpoint - including in some cases the same feed that is retrieved by the device endpoint A - and incorporating the data or changes represented by the feed.
- the device endpoint B might also make a feed available to the server endpoint so that the server endpoint can include changes made to the data by the device endpoint B.
- While data may be synchronized between devices through another endpoint - as in, for example, the previous exemplary description where changes made by the device endpoint A 160 might have been synchronized to the device endpoint B 170 through the server endpoint 110 - other synchronization topologies may also be available in at least some cases, and when using at least particular synchronization techniques.
- a synchronization technique or protocol, including SSE, may provide the ability to synchronize both through a centralized server - like perhaps the server endpoint 110 - and directly with other endpoints.
- the device endpoint A might in some implementations be able to synchronize data to the device endpoint B either by synchronizing through the server endpoint 110 or by synchronizing directly with the device endpoint B (as shown by the dashed line in FIG. 1 between the device endpoint A and the device endpoint B).
- the device endpoint B might synchronize data directly with the device endpoint C 180.
- direct synchronizations might be implemented between the device endpoint A and the device endpoint B, or between the device endpoint B and the device endpoint C, by the exchange of SSE feed documents and the merging of the data comprised in the exchanged feeds.
- the existence of a server endpoint may not even be required.
- metadata and blob data may in some cases be communicated directly between device endpoints that know of the location of other device endpoints without the intervention of or assistance provided by one or more server endpoints.
- a wide variety of types of data may be synchronized between endpoints.
- two endpoints might synchronize text information comprising a wide variety of information; might synchronize "binary" information such as audio or video files, or compiled executable code; or might synchronize any other type or format of data.
- Data that is synchronized may be represented in a wide variety of ways.
- such data may be included in the information that is synchronized between endpoints, including information synchronized through the use of steps as described previously.
- such data may be included in SSE feeds that are exchanged between endpoints.
- the information communicated between endpoints may not include all of the data that may be ultimately synchronized or communicated between endpoints. In some of such cases, the information communicated between endpoints may instead or also include references to other data to be synchronized.
- references may be provided in a variety of ways, such as perhaps through the use of "enclosure” elements in RSS or Atom feeds that include URLs that reference resources.
- the data identified by the references may then be accessed or stored using a variety of means.
- metadata may be interpreted to refer to information communicated between endpoints that identifies (and may, in some implementations, include) the data that is synchronized or communicated.
- metadata may be embodied by a document or feed that includes an "item” for each piece of synchronized data.
- Each item, in turn and in some exemplary implementations, might include a "sync data" portion and an "item data" portion.
- the sync data may be associated with information about the synchronization of the particular item while the item data may be associated with the information to be synchronized.
- the sync data might include information such as a version number associated with the item and a history of how or when the item was updated, while the item data might include the item itself, or at least information associated with the item.
- all of the item data may be included in the metadata, while in other cases the metadata may not include all of the item data and may instead include one or more references to item data.
- references may enable the metadata to remain (relatively) small, especially in cases where there are large amounts of item data.
- an item data element might instead include some type of reference or identifier that refers to, identifies, or otherwise assists in the retrieval of the large audio or video file.
- Such information or data may be stored or accessed separately from the metadata and may in some cases be referred to as a "blob" of data or simply as a “blob” or “enclosure.” (In some cases it may also be possible for at least portions of the sync data to not be included in the metadata and to instead be referenced from the metadata.)
- metadata may in some cases be embodied by an SSE feed.
- an RSS feed that includes SSE information might be embodied as an XML document that includes common RSS XML elements, such as "item" elements for each piece of data.
- Each "item” element might in turn include sync data - which might be embodied by an XML element named “sync”, in at least some implementations - and item data - which might be embodied by an XML element named “item”, in at least some implementations, and might include information about the item, or the item itself.
- the item data or item data element may include any type of data.
- item data might include standard RSS elements like "title" and "description".
- item data might include, for example, contact information perhaps encoded using a format like hCard, or other data.
- item data might include one or more references to information located elsewhere, including information that is not included in the metadata or RSS feed. When the metadata uses RSS, such references might be embodied using an element like the RSS "enclosure" element.
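As an illustration of the structure described above, the following sketch builds a single metadata item whose sync data carries an id and version, and whose item data references a large blob through an enclosure-style element instead of embedding it. The element names, attributes, and URL are simplified stand-ins, not the exact RSS/SSE wire format.

```python
import xml.etree.ElementTree as ET

# One metadata "item": a sync-data portion (id and version) plus item
# data that references a blob via an enclosure-style element rather
# than embedding the blob itself in the metadata.
item = ET.Element("item")
ET.SubElement(item, "sync", {"id": "photo-42", "version": "3"})
ET.SubElement(item, "title").text = "Vacation photo"
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/blobs/photo-42",  # hypothetical blob locator
    "type": "image/jpeg",
})
xml_text = ET.tostring(item, encoding="unicode")
```

Keeping only the reference in the metadata is what allows the feed to stay small even when the referenced blobs are large audio, video, or image files.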
- the metadata may include all of the information that is synchronized. That is, metadata synchronized between endpoints may not always reference or identify other information. If, for example, the metadata includes, say, all of the information associated with a contact, the synchronized data - including a feed or document - that includes the contact information may still generally be referred to as metadata.
- Metadata might be communicated to and through the use of the server endpoint 110, while blob data may be communicated in at least some cases directly between device endpoints.
- while some or all of the metadata may be transferred between all endpoints that synchronize data, blob data may not necessarily be communicated between every endpoint; for example, a blob might only be communicated between endpoints when that blob is specifically requested or required.
- in some implementations one or more alternate representations of blob data may be generated and communicated between endpoints depending on a variety of characteristics.
- an endpoint that is only connected to one or more other endpoints using a communication mechanism that communicates information relatively slowly might desire or retrieve blobs that are in some cases smaller than the blobs that might be retrieved by at least some other endpoints.
- a mobile phone connected using a relatively low- bandwidth data network might obtain an image or video file that has been transcoded or converted so that the image or video is represented in a smaller amount of space (perhaps by reducing the size of the image, by using more aggressive compression settings, and the like).
- Other kinds of alternate representations may also be possible or supported by at least some implementations.
- some endpoints might retrieve, say, a clip of an audio or video file rather than the entire audio or video file, and so on.
- one or more particular endpoints may have the capability of generating or providing particular alternate representations.
- an "alternate representation service" or "transcoding service" might exist on a server endpoint 110 (not shown in FIG. 1) and might participate in the generation of at least some alternate representations of blobs stored on the server endpoint or on other endpoints.
- at least some particular other endpoints, such as one or more device endpoints might also or instead have the capability of providing alternate representations of blobs. Additional modules, services, and processes related to the generation and providing of alternate representations are also as described further elsewhere herein.
- An exemplary implementation of a server endpoint might include a variety of functionality, including one or more of the services and modules illustrated in FIG. 1 as being a part of the exemplary server endpoint 110.
- not all server endpoints may include all of the illustrated modules and services, while other server endpoints may include additional modules and services.
- some of the modules and services may be described as communicating with each other in specific ways, in general many modules and services may communicate with other modules or services, including with modules and services located on the server endpoint as well as on other endpoints and including with modules and services where such communication is not specifically described.
- the knowledge store module 145 may be described as being used with, say, the blob cache service 130 in one or more specific ways, but such description does not limit the knowledge store module to only communicating with the blob cache service or with other modules or services where the communication is specifically described; the knowledge store module may in at least some implementations also communicate with a variety of other modules or services.
- more than a single server endpoint may exist.
- at least some server endpoints may communicate with other server endpoints and participate in the synchronization of data between different endpoints.
- Each of such multiple server endpoints may include only some or perhaps all of the exemplary services and modules illustrated in FIG. 1.
- An exemplary storage service 120 may generally store metadata and provide the ability for other endpoints to synchronize metadata with the server endpoint 110.
- An exemplary device endpoint A 160 might, for example, make a local change and update metadata it maintains so that the metadata includes the change, and then communicate the updated metadata to the storage service.
- the storage service might then merge the changes in the metadata communicated from the device endpoint A with some representation of the same metadata that is managed by the storage service.
- the storage service might then make its own merged or updated metadata available to other endpoints (which might in turn enable other endpoints to retrieve the metadata provided by the storage service and then incorporate the change made originally by the device endpoint A).
- a storage service may store and provide multiple separate pieces of metadata, such as multiple SSE feeds in at least some implementations, and each piece of metadata may identify different (or the same) sets of data.
- At least some of the operations that might be performed by an exemplary storage service 120 are described in more detail below, with reference especially to FIG. 5.
- In some implementations, including those that use SSE, a storage service may accept (or retrieve, or obtain in some fashion) SSE feeds made available by other endpoints. The storage service may then merge the changes in the obtained feeds with a local representation of the feed, which the storage service may store or access in a variety of ways, including as a file, as records in a database, and so on.
- the storage service may then make an updated feed that is the result of the merge operation available to at least some endpoints, also in a variety of ways including as a file accessible by other endpoints - perhaps using a web or file server, by proactively communicating the updated feed to other endpoints, and so on.
- an exemplary blob lookup service 125 may accept some data that identifies a blob, such as a blob identifier, and provide one or more "locators" that an endpoint might then use to obtain or retrieve the blob.
- For example, in implementations including those where metadata and blobs may be communicated or synchronized differently or separately, one or more endpoints may provide, to a blob lookup service, information including the location or locations where a particular blob may be retrieved. In some implementations, such locations might include device endpoints, a blob cache service 130, or a cloud storage service 135.
- the blob lookup service might then at some later time provide another endpoint that wants to obtain the blob with one or more of these locations. At least some operations that might be performed by an exemplary blob lookup service are described in more detail below, with reference especially to FIG. 4 and FIG. 6.
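A blob lookup service along these lines might be sketched as follows; the class and method names are illustrative inventions for this sketch, and locators are reduced to plain strings naming the endpoint or service holding the blob.

```python
class BlobLookupService:
    """Map blob identifiers to the locators from which each blob may
    be retrieved (device endpoints, a blob cache service, a cloud
    storage service, and so on)."""

    def __init__(self):
        self._locators = {}  # blob id -> ordered list of locator strings

    def register(self, blob_id, locator):
        """Record that a blob is available at the given location."""
        entries = self._locators.setdefault(blob_id, [])
        if locator not in entries:  # ignore duplicate registrations
            entries.append(locator)

    def lookup(self, blob_id):
        """Return every known locator for the blob (possibly empty)."""
        return list(self._locators.get(blob_id, []))

# Endpoints report where a blob can be fetched; a later requester asks
# the service rather than guessing which endpoints hold the blob.
lookup = BlobLookupService()
lookup.register("ABC123", "device-endpoint-A")
lookup.register("ABC123", "blob-cache-service")
lookup.register("ABC123", "device-endpoint-A")  # duplicate, ignored
locators = lookup.lookup("ABC123")
```

A requesting endpoint could then try the returned locators in order, falling back to the next one when an endpoint is offline.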
- a blob identifier may generally be some piece of data that identifies a blob.
- a blob identifier might be used by a variety of device and server functionality to particularly identify a blob without requiring that the blob itself be communicated. So, for example, a storage service and other endpoints might use a blob identifier in metadata, a blob lookup service might store and provide locations using a blob identifier, a blob cache service or cloud storage service might store blobs that are indexed or may be retrieved using an associated blob identifier, and so on.
- a single blob identifier may be used by a variety of modules or functionality to identify a blob, while in the same or other implementations multiple and perhaps different blob identifiers may be used - perhaps by different modules or functionality - to identify a particular single blob.
- a blob identifier might be implemented in a variety of ways and take a variety of forms.
- a blob identifier might just be a string or set of alphanumeric or other characters, such as something like "ABC123", "ABCDEFGH", "123456", and so on.
- a blob identifier might take the form of, for example, a uniform resource locator (URL) or uniform resource identifier (URI), such as perhaps "http://www.els.live.com/ABC123", "http://www.els.live.com/ABCDEFGH", "http://www.els.live.com/123456", and so on.
- the string following the "http://www.els.live.com/" portion of the identifier may actually identify the blob.
- the "http://www.els.live.com/" portion of the identifier may be used for a variety of other purposes, including, perhaps, to make the blob identifier appear more like an identifier with which a user might be familiar, or even to actually locate the blob in at least some particular cases, including in an exemplary implementation described below with reference especially to FIG. 9.
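The split between the identifying portion and the host prefix might be handled as in the following sketch; the helper name is hypothetical, and the URL is one of the exemplary identifiers above.

```python
from urllib.parse import urlparse

def blob_id_from_identifier(identifier: str) -> str:
    """Extract the identifying portion of a blob identifier.

    For URL-shaped identifiers, only the path segment after the host
    actually identifies the blob; the scheme-and-host prefix may serve
    other purposes (familiarity, or fallback retrieval). Plain-string
    identifiers are returned unchanged.
    """
    parsed = urlparse(identifier)
    if parsed.scheme:                 # URL form, e.g. "http://host/ABC123"
        return parsed.path.lstrip("/")
    return identifier                 # plain form, e.g. "ABC123"
```

Under this scheme, modules that index blobs by identifier (a blob cache, a cloud store, a lookup service) could normalize either identifier form to the same key.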
- An exemplary blob cache service 130 may provide a storage location for transient blobs or files. Such a location may in some cases aid in the transfer of blobs between endpoints. For example, suppose the device endpoint A 160 synchronizes metadata with the server endpoint 110, and such metadata identifies a blob that is available from the device endpoint A. Also suppose that the blob is not communicated to the server endpoint - instead, it remains, available for retrieval, on the device endpoint A. Suppose then that the device endpoint A goes offline or becomes unavailable to one or both of the server endpoint and the device endpoint B 170.
- If the device endpoint B then attempts to retrieve the blob, it may not be able to do so, because the device endpoint A - which may have the only copy of the blob - is not online or available.
- Such a problem may be resolved through a variety of means, including through the use of a blob cache service.
- the blob associated with the metadata provided by the device endpoint A may be stored in the blob cache service before the device endpoint A becomes unavailable, and then the device endpoint B may retrieve the blob from the blob cache service instead of from the device endpoint A.
- an exemplary blob cache service may store blobs as transient data. That is, in contrast perhaps to the cloud storage service 135 described below, blobs stored by the blob cache service may only be stored for a limited period of time and might be removed at a variety of times controlled in some cases by the blob cache service itself, or by some other module or service. For example, in some implementations, a blob cache service might use one or more caching algorithms to determine when to delete blobs or replace currently stored blobs with new blobs.
- a blob cache service might first fill up or occupy its available storage space and then, when a new blob is to be stored, might use an algorithm such as a "least recently used" (or LRU) algorithm to find and delete one or more blobs that have not been used recently and so make space available for new blob data.
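The LRU behavior described above can be sketched with a fixed-capacity cache. For simplicity this counts capacity in number of blobs; a real cache service would more likely track bytes of storage.

```python
from collections import OrderedDict

class BlobCache:
    """Fixed-capacity transient blob store with least-recently-used
    (LRU) eviction: when the cache is full and a new blob arrives,
    the blob that was used longest ago is deleted to make space."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._blobs = OrderedDict()  # insertion/usage order tracks recency

    def put(self, blob_id, data):
        if blob_id in self._blobs:
            self._blobs.move_to_end(blob_id)
        self._blobs[blob_id] = data
        if len(self._blobs) > self.capacity:
            self._blobs.popitem(last=False)  # evict least recently used

    def get(self, blob_id):
        if blob_id not in self._blobs:
            return None
        self._blobs.move_to_end(blob_id)     # mark as recently used
        return self._blobs[blob_id]

# Filling the cache and touching blob "a" makes "b" the eviction victim
# when "c" arrives.
cache = BlobCache(capacity=2)
cache.put("a", b"blob-a")
cache.put("b", b"blob-b")
cache.get("a")
cache.put("c", b"blob-c")
```

Because eviction is automatic, the cache can stay within its storage budget without the durability guarantees (or costs) of a persistent store.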
- implementing functionality associated with a blob cache service may require fewer monetary, operational, or other resources than implementing functionality associated with a persistent store, including perhaps the cloud storage service 135 described below.
- a blob cache service may not require the same level or types of redundant data storage functionality that might be required by a store designed to hold at least some other non-transient data.
- An exemplary cloud storage service 135 may provide storage "in the cloud" for generally non-transient data. That is, while it may operate in some ways like a blob cache service - in that it may store blobs, for example - a cloud storage service may be viewed by users and endpoints as, for example, "always on" or "always accessible" and as providing "reliable" storage. Endpoints may store blob data using a cloud storage service in a variety of cases, including in some cases in lieu of storing blob data on, say, a device endpoint. At least some operations that might be performed by an exemplary cloud storage service are described in more detail below, with reference especially to FIG. 8.
- one or more additional services or modules that provide storage for blob data and operate differently than a blob cache service 130 or a cloud storage service 135 may be a part of an exemplary server endpoint 110.
- a different blob data storage service might provide storage that is relatively more persistent than that provided by a blob cache service but relatively less persistent than that provided by a cloud storage service (at least as these exemplary services have been introduced above).
- Such a service might do so by determining when to store a blob using, for example, knowledge of the other endpoints on which the blob might be stored or from which the blob might be available.
- such a blob data storage service might not store a blob if some knowledge indicates that the blob is already stored on some number of other endpoints that are known to be relatively highly available.
- If knowledge that is accessible to the blob data storage service indicates that a particular blob is stored on, say, four other endpoints that are typically on and connected to the network most or all of the time, then an exemplary blob data storage service might choose not to store the blob, because a requesting endpoint would likely be able to retrieve the blob from at least one of the four other endpoints.
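- As one hypothetical illustration of the storage decision described above, a blob data storage service might decline to store a blob when enough highly available endpoints already hold it. The function name, the threshold values, and the availability model below are illustrative assumptions, not details of any described implementation:

```python
def should_store_blob(replica_availabilities, min_reliable_replicas=3,
                      availability_threshold=0.95):
    """Return True if the service should keep its own copy of the blob.

    replica_availabilities: for each known endpoint already holding the
    blob, the fraction of time that endpoint is typically online.
    """
    reliable = [a for a in replica_availabilities
                if a >= availability_threshold]
    # Store a copy only when too few highly available replicas exist.
    return len(reliable) < min_reliable_replicas

# Four always-on endpoints already hold the blob: no need to store it again.
print(should_store_blob([0.99, 0.98, 0.97, 0.99]))  # False
# Only intermittently connected endpoints hold it: keep a copy.
print(should_store_blob([0.40, 0.55]))  # True
```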
- An endpoint that synchronizes particular metadata generally may want to know when such metadata has been updated on one or more other endpoints, so that, for example, the endpoint can update its own copy of the metadata, as well as possibly retrieve any desired blob data.
- an endpoint might "poll" or periodically check one or more other available pieces of metadata - perhaps including those provided by a server endpoint 110 - and determine that there are changes to synchronize when the other metadata has changed.
- an endpoint might make use of the functionality provided by an exemplary notification service 140.
- a notification service might provide the ability for endpoints to register or indicate an interest in particular metadata, and then might provide notifications to interested or registered endpoints when particular metadata changes.
- an endpoint may be able to avoid polling for changes, or at least may poll less often, and still be able to take some action when metadata changes.
- Notifications may be implemented in a variety of ways, including through the use of point-to-point or broadcast functionality provided by a variety of networking or other communication mechanisms.
- an exemplary notification module might also provide notifications about other events besides changes to metadata.
- a notification module might periodically provide endpoints notifications that include information about updated or changed locators (such locators might be managed by an exemplary blob lookup service). Using updated locators when retrieving blobs may enable an endpoint to change how it retrieves blobs and so perhaps retrieve blobs more efficiently, adapt to changing network or other conditions, and so on.
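- The register-and-notify behavior described above might be sketched as follows; this is a minimal, hypothetical illustration using in-process callbacks, and the class and method names are assumptions of the sketch:

```python
from collections import defaultdict

class NotificationService:
    def __init__(self):
        self._subscribers = defaultdict(list)  # metadata id -> callbacks

    def register(self, metadata_id, callback):
        # An endpoint registers an interest in particular metadata.
        self._subscribers[metadata_id].append(callback)

    def notify_changed(self, metadata_id, change_info):
        # Push the change to every endpoint that registered an interest,
        # so registered endpoints need not poll for changes.
        for callback in self._subscribers[metadata_id]:
            callback(metadata_id, change_info)

received = []
svc = NotificationService()
svc.register("feed-123", lambda mid, info: received.append((mid, info)))
svc.notify_changed("feed-123", "item 7 updated")
print(received)  # [('feed-123', 'item 7 updated')]
```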
- An exemplary knowledge store module 145 may provide a store for and access to "knowledge" that may be gained through the interactions or functionality provided by the server endpoint 110. Such knowledge may then be used in some cases for a variety of purposes, including to control or optimize the synchronization of data. While more examples of how knowledge may be gathered and used may be described elsewhere herein, in some cases knowledge may generally relate to data such as characteristics associated with endpoints that communicate with the server endpoint or that communicate with other endpoints.
- Characteristics stored or used by the knowledge store module may generally be referred to as "knowledge characteristics."
- some exemplary knowledge characteristics may include information about the manner in which a device endpoint commonly connects to the server endpoint, like the speed of the connection, the available bandwidth of the connection, any costs associated with the connection, and so on.
- a connection might be over a high-speed data network or over a relatively slow mobile phone network, a connection might have free bandwidth at particular times of the day but be in use at other times of the day, and so on.
- Knowledge characteristics might also include other information about endpoints, including information about when the endpoint is commonly online or available - the device might be a laptop or a mobile phone and might be intermittently connected, might be an "always-on" desktop computer that is "always" connected, or so on.
- a knowledge module might store information arising from the interaction of endpoints with the server endpoint, such as how often particular blobs are requested, from which endpoints blobs are requested, and so on. Regardless of how or where the knowledge is obtained, it may then be provided and used by the server endpoint, or possibly by other endpoints, in a variety of ways, including as part of controlling or optimizing how data is synchronized.
- knowledge - such as the connection characteristics or other characteristics of a device, or other knowledge - may be used to determine whether to identify or provide one or more alternate representations of a requested blob.
- an alternate representation might comprise, say, a smaller transcoded version of an otherwise large image when the image is requested by an endpoint connected using a slow connection mechanism or when the blob will be stored on an endpoint that has relatively more limited storage space.
- the knowledge may include whether particular endpoints have particular alternate representations of a particular blob, and/or whether particular endpoints have the capability of generating or providing a blob represented using a particular alternate representation (perhaps by generating the alternate representation on the particular endpoint, by retrieving an alternate version from some other endpoint, and so on).
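- The use of knowledge characteristics to select an alternate representation, as in the transcoded-image example above, might be sketched as follows; the function, the bandwidth and size thresholds, and the representation names are all illustrative assumptions:

```python
def choose_representation(representations, connection_kbps, free_storage_bytes):
    """Pick the best representation the requesting endpoint can handle.

    representations: list of (name, size_in_bytes), assumed ordered by
    quality descending (full version first).
    """
    for name, size in representations:
        # On a slow connection, only small representations qualify.
        fast_enough = connection_kbps >= 1000 or size <= 256 * 1024
        if fast_enough and size <= free_storage_bytes:
            return name
    # Fall back to the smallest available representation.
    return representations[-1][0]

reps = [("full", 5_000_000), ("medium", 500_000), ("thumbnail", 50_000)]
# Slow mobile connection: a smaller transcoded version is chosen.
print(choose_representation(reps, connection_kbps=100,
                            free_storage_bytes=10_000_000))  # thumbnail
# Fast connection with ample storage: the full version is chosen.
print(choose_representation(reps, connection_kbps=10_000,
                            free_storage_bytes=10_000_000))  # full
```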
- an exemplary blob download module, such as the blob download module 150 that is a part of the server endpoint 110 or the blob download module 162 that is a part of the device endpoint A 160, may in some cases download or retrieve blobs from a variety of other locations, including other endpoints.
- a blob download module may use one or more blob retrieval mechanisms to retrieve some portion of a blob, or an entire blob, where a "blob retrieval mechanism" may specify, for example, one or more ways in which a blob may be located and/or one or more ways in which a blob may be retrieved.
- a blob download module may only be able to retrieve blobs using, for example, a single blob retrieval mechanism that uses a protocol such as HTTP, FTP, or one of a variety of other file sharing protocols.
- Another blob download module might be able to retrieve a file using any of such blob retrieval mechanisms, as well as other blob retrieval mechanisms.
- some blob download modules might be able to use mechanisms that may retrieve pieces of the same blob from a variety of endpoints, "at the same time," such as mechanisms described by a protocol like the BitTorrent protocol.
- the same or other blob download modules may also use other functionality to more efficiently retrieve blobs, such as, for example, "differential compression," where only the parts of a blob that have changed are communicated so that the entire blob does not need to be communicated every time even a small part of the blob changes.
- Some blob download modules may in some cases locate a blob using a blob lookup service that may in some implementations be the same as or similar to the exemplary blob lookup service 125 introduced previously, while in the same or other implementations blob download modules may locate or retrieve a blob without the use of blob lookup functionality or the use of a blob lookup service.
- a blob download module may use a pluggable architecture in which different blob retrieval mechanisms are implemented as separate drivers. For example:
- one blob retrieval mechanism or driver may enable the retrieval of blobs from an exemplary blob cache service
- another driver may use a blob lookup service to retrieve a set of endpoints that may be able to provide the blob and then retrieve the blob from one or more of such endpoints (including through the use of BitTorrent-like mechanisms in some cases with more than one endpoint)
- yet another driver might not use a blob lookup service and might identify one or more endpoints from which to retrieve the blob using one or more of a variety of other mechanisms, and so on.
- a pluggable architecture may also enable additional blob retrieval mechanisms to be added at a variety of times, including after a blob download module has been deployed or already used, without necessarily changing the overall organization or implementation of the blob download module or the endpoint that includes the blob download module. Such functionality may enable the later addition of new blob retrieval mechanisms without changing the core blob download module or the endpoint itself.
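- The pluggable driver arrangement described above might be sketched as follows; the driver interface (a callable that returns the blob's bytes, or None when the driver cannot provide it) is an assumption of this sketch:

```python
class BlobDownloadModule:
    def __init__(self):
        self._drivers = []

    def register_driver(self, driver):
        # New blob retrieval mechanisms can be plugged in at any time,
        # including after the module has been deployed.
        self._drivers.append(driver)

    def download(self, blob_id):
        # Try each driver in order until one can provide the blob.
        for driver in self._drivers:
            data = driver(blob_id)
            if data is not None:
                return data
        raise LookupError(f"no driver could retrieve {blob_id}")

# Two illustrative drivers: a cache lookup and a fallback peer fetch.
cache = {"blob-1": b"cached bytes"}
module = BlobDownloadModule()
module.register_driver(lambda blob_id: cache.get(blob_id))
module.register_driver(lambda blob_id: b"fetched from a peer")
print(module.download("blob-1"))  # b'cached bytes'
print(module.download("blob-2"))  # b'fetched from a peer'
```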
- each endpoint in this example might represent any number of general-purpose or dedicated computers, including desktop computers, server computers, laptop computers, workstation computers, mobile or cellular telephones, personal digital assistants (PDAs), and the like.
- a server endpoint or device endpoint may in at least some implementations be implemented on any type of computing hardware, including desktop computers, server computers, laptop computers, workstation computers, mobile or cellular telephones, PDAs, and so on.
- Whether an endpoint is considered a device endpoint or a server endpoint may be determined, among other characteristics, by the functionality provided by the endpoint rather than by, for example, the nature of the computing hardware on which the endpoint is implemented.
- an endpoint that provides server functionality using one or more of the services and modules described previously as being associated with an exemplary server endpoint 110 might be considered to be a server endpoint, even if the endpoint is implemented on, say, a laptop computer.
- an endpoint may provide server functionality at particular times - and so might be considered a server endpoint - while operating as a device endpoint or other type of endpoint at the same or other times.
- a particular or single computing device may host or include multiple endpoints. In these or other implementations, the communication of information between endpoints may in at least some cases comprise only communication between executable code that runs on the particular single computing device.
- Endpoints and even modules and services within an endpoint may be connected using a variety of networking or other connection means.
- Such communications means may include any means by which data may be transmitted, including any type of network - such as an Ethernet, Wi-Fi, or mobile telephone or data network - and in at least some cases any other kind of transfer, including the transfer of physical media, like a compact disc (CD) or flash memory drive.
- Turning now to FIG. 2, shown therein is an exemplary generalized operational flow 200 including various operations that may be performed by an endpoint, including a device endpoint, when changing data that is synchronized and making such a change available to other endpoints.
- The description of FIG. 2 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 2 is not intended to be limited to being used with the elements described with reference to these other figures.
- a change is made to local data on an endpoint, such as perhaps the device endpoint A 160 that was previously introduced with reference to FIG. 1.
- the modified data might include one or more of a wide variety of types of data, such as, for example, and without limitation, contacts, calendar items, audio clips, video clips, word processing or spreadsheet files, other files, and so on.
- Such a change might then result in one or more changes to metadata or blob data that is synchronized between endpoints. For example, suppose a user creates a new video file using, say, a video editing application, and that the new file is part of data that is synchronized with at least one other endpoint. As part of operation 210, or as part of another operation, metadata maintained by the endpoint on which the change was made might be updated so that the metadata includes an item associated with the new video file.
- Such a new item might include, in at least some implementations, data relevant to the synchronization of the item (such as perhaps version information or data about when the item was created or, later, updated) as well as information about the item itself, such as, perhaps, a name for the video, a text description of the video, and a reference that identifies the actual video data or content (if the video data or content is not included in the metadata feed itself).
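- A metadata item like the one described above might be sketched as follows; the field names and the blob reference format are illustrative assumptions of this sketch:

```python
from dataclasses import dataclass

@dataclass
class MetadataItem:
    item_id: str        # unique identifier for the item
    version: int        # synchronization data: incremented on each update
    created: str        # when the item was created (ISO 8601)
    updated: str        # when the item was last updated
    name: str           # a name for the video
    description: str    # a text description of the video
    blob_ref: str       # reference identifying the actual video content

item = MetadataItem(
    item_id="item-42", version=1,
    created="2008-06-20T12:00:00Z", updated="2008-06-20T12:00:00Z",
    name="Vacation video", description="Clips from the trip",
    blob_ref="blob://endpoint-a/videos/vacation.mp4")
print(item.name)  # Vacation video
```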
- the metadata and possibly blob data may be similarly updated.
- the metadata may be modified so that, for example, the item is marked as being deleted, and any associated blob data may be deleted (or may be kept available, in at least some implementations).
- the metadata that was updated as part of operation 210 may be communicated to a server endpoint, such as, perhaps, a server endpoint that is in at least some way like the exemplary server endpoint 110 described previously with reference to FIG. 1.
- Such communication may be implemented in a wide variety of ways depending on, for example, how the metadata is represented and the networking or other communication functionality available to the endpoint that has made the change or that is available to the receiving endpoint.
- the metadata may be communicated to the server endpoint using, perhaps, an HTTP POST request that contains the XML document and that is submitted to an HTTP server running on or associated with the server endpoint.
- the endpoint that has made the change might make the updated metadata available at a particular location, say, on a network share or through an HTTP server associated with the endpoint itself, and the server endpoint may retrieve the metadata from that location.
- blob data associated with the metadata change may not be communicated to the server endpoint. Instead, the blob data may be, for example and without limitation, retained (at least for some period of time) on the endpoint where the change was made. For example, when the blob data comprises video data that was created on the endpoint, the video data may reside only on the endpoint after an implementation of operation 215 has completed. In some cases, such blob data may be retrieved or obtained by another endpoint at some later point in time.
- a blob lookup service may be updated as part of an implementation of operation 215 so that the blob lookup service includes location information for the blob data associated with the metadata change. Another endpoint might then use the blob lookup service as part of retrieving the blob data from the endpoint.
- a blob lookup service may be updated when another endpoint merges or processes the metadata communicated to another endpoint. Such an operation might be implemented, in some examples at least, by an operation like the operation 525 described below with reference to FIG. 5.
- the updated metadata may be communicated to a server endpoint
- the updated metadata may instead, or also, be communicated to one or more other endpoints, including other server endpoints or other device endpoints, such as the device endpoint B 170 and/or the device endpoint C 180, both described previously with reference to FIG. 1.
- when the metadata includes or uses SSE information, metadata may be communicated and synchronized between arbitrary (and sometimes non-server) endpoints, and so at least some implementations of operation 215 may communicate the updated metadata to a variety of other endpoints.
- the endpoint on which the change was made may subscribe or register to be notified of future updates to the metadata that was previously communicated to the server (or other) endpoint. After such a subscription or registration, the endpoint may then in at least some cases receive notifications (embodied or implemented in a variety of ways) when the metadata changes on some other endpoint, such as on the server endpoint to which the updated metadata was previously communicated. As described previously, this may enable the endpoint to be aware of other changes to data with which it is concerned without requiring that the endpoint actively poll or check for changes to the data.
- a subscription or registration may be made in some cases as part of executing operation 220, the same (or a different) subscription or registration may be made at another time, or may not be made at all.
- the endpoint may respond to a request for the blob associated with the metadata change by, for example, providing the blob to the requestor. That is, for example, at some point in time after the metadata change that refers to or is associated with the blob is received or processed by one or more other endpoints, one or more of those other endpoints (or another endpoint) may determine that they need or desire the data comprised by the blob. In some implementations, including those where the blob data is not communicated as part of the metadata, such other endpoints may then retrieve the blob by requesting it from another endpoint on which the blob exists.
- Turning now to FIG. 3, shown therein is an exemplary generalized operational flow 300 including various operations that may be performed by an endpoint, including a device endpoint, when synchronizing data from another endpoint.
- The description of FIG. 3 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 3 is not intended to be limited to being used with the elements described with reference to these other figures.
- an endpoint may determine that metadata in which the endpoint has some interest has changed. For example, the endpoint may have previously updated the same metadata and registered to be notified of future changes, perhaps in a manner like that described previously with reference to FIG. 2. In this case, or in other cases, such an endpoint may receive a notification from some other endpoint, including from a server endpoint that maintains a copy of the metadata. In another implementation, an endpoint may periodically poll some location and determine through the act of polling that the metadata at that location has changed.
- this operation may not be performed - in such an implementation, an endpoint might, for example, periodically execute at least some of the other operations in the exemplary operational flow 300 - such as obtaining metadata and merging metadata - regardless of whether the metadata has changed.
- (Note that the description associated with this operational flow may in some cases assume that a change exists in the metadata in question; in the case where no change actually exists, a variety of the operations may be executed differently or perhaps may not be executed at all.)
- the endpoint may obtain the metadata that may contain a change through a variety of means.
- an endpoint, including perhaps a device endpoint like the device endpoint A 160 described previously with reference to FIG. 1, might initiate an HTTP GET request to a web server that is part of or associated with a server endpoint, such as perhaps the server endpoint 110 also described previously with reference to FIG. 1, and receive in response metadata that includes one or more changes.
- the metadata may be proactively communicated by another endpoint, or may be obtained in a variety of other ways.
- the obtained metadata may include all items that may be represented by the particular piece of metadata, while in the same or other implementations, in at least some cases, the metadata may only include a subset of the items, including, for example, only the items that have changed.
- the obtained metadata may be merged with a copy of the metadata maintained locally.
- Such a merge operation may incorporate the changes embodied in the obtained metadata into another copy of the metadata that may be maintained or modified by the endpoint that executes the operational flow 300.
- Such a local copy of the metadata, or local metadata store, may be maintained in a variety of ways, including (at least in some implementations) as rows or records in a database, as a copy of an XML feed (or other) document that might exist in one or more files in a file system, and so on.
- such a merge operation may be implemented using an SSE merge.
- In an SSE merge, a process specified by SSE may be followed that compares "foreign" items - those that are embodied by the obtained metadata in this example - with "local" items that are maintained locally or associated with the endpoint merging the metadata. New foreign items that don't exist in the local metadata store may generally be incorporated into the local metadata store, thereby creating a new local item.
- a merge process may choose a "winning item" and "losing item" using one or more means, such as by choosing the winning item to be the item that has been updated more recently. Finally, when the local item is not the winning item, the local item may be updated so that it incorporates the data embodied by the winning item.
- In at least some implementations of operation 325, it may be determined whether the endpoint that is synchronizing information and that has obtained the metadata also requires or desires one or more blobs referenced or identified by the metadata.
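- The merge behavior described above might be sketched as follows; this is a hypothetical simplification that compares single update sequence numbers, whereas an actual SSE merge compares richer version and history information:

```python
def merge(local_items, foreign_items):
    """Merge foreign items into the local metadata store.

    Each store maps an item id to (update_seq, payload); a larger
    update_seq is treated here as "updated more recently".
    """
    for item_id, (seq, payload) in foreign_items.items():
        if item_id not in local_items:
            # New foreign item: incorporate it, creating a new local item.
            local_items[item_id] = (seq, payload)
        elif seq > local_items[item_id][0]:
            # The foreign item is the "winning item": update the local item.
            local_items[item_id] = (seq, payload)
        # Otherwise the local item wins and is left unchanged.
    return local_items

local = {"a": (1, "local a"), "b": (3, "local b")}
foreign = {"b": (2, "older b"), "c": (1, "new c")}
print(merge(local, foreign))
# {'a': (1, 'local a'), 'b': (3, 'local b'), 'c': (1, 'new c')}
```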
- this operation may determine that one or more of those blobs are desired or required. Such might be the case, in just one example, when at least part of the objective of the operational flow 300 is to synchronize both metadata and associated blob data to the endpoint that is executing the operational flow 300. If a blob is needed, the exemplary operational flow may proceed to operation 330. If no blobs are needed, the exemplary operational flow may end.
- one or more blobs that are identified or referenced by the obtained metadata may be retrieved.
- the blob or blobs to retrieve may be identified in a variety of ways. For example, in some cases any blobs associated with changed metadata items may be retrieved. In other implementations, only a subset of the changed blobs may be retrieved - perhaps only blobs that are under a specific size might be retrieved, or some other criteria might be used to filter or determine the blob or blobs to retrieve.
- a single metadata item may identify a single blob, while in the same or other implementations, a single metadata item may identify multiple blobs.
- Each identified blob may then be retrieved in one or more different ways.
- one or more alternate representations of identified blob(s) may be retrieved, such as transcoded versions of particular files (such as image, video, or audio files), and so on.
- a blob may be retrieved using operations that are the same as or similar to the operations described below with reference to FIG. 4. In other implementations, a blob may be retrieved in one or more different ways.
- Turning now to FIG. 4, shown therein is an exemplary generalized operational flow 400 including various operations that may be performed to retrieve or download a blob.
- The description of FIG. 4 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 4 is not intended to be limited to being used with the elements described with reference to these other figures.
- Although the exemplary operational flow of FIG. 4 indicates a particular order of execution, in one or more alternative embodiments the operations may be ordered differently.
- Although the exemplary operational flow contains multiple steps, it should be recognized that in some implementations at least some of these operations may be combined or executed contemporaneously, and in the same or other implementations, some steps may not be executed.
- It may first be determined whether a blob lookup service (or BLS) will be used as part of retrieving the blob. For example, this determination may be made, at least in part, because in some implementations an endpoint may attempt to retrieve the blob first without using (or by using) a blob lookup service.
- a blob lookup service might, for example, only be used after an initial attempt to retrieve the blob without using the blob lookup service is made. If such an initial attempt fails, then a blob lookup service might be used. In another implementation, a blob lookup service may not be available, or the determination may be made in a variety of other ways. If a blob lookup service is used, the operational flow 400 may proceed to operation 415. If a blob lookup service is not used, the operational flow may proceed to operation 435.
- If a blob lookup service is used, the operational flow may proceed to operation 415, where, in an exemplary implementation, an endpoint may ask or query a blob lookup service for one or more locators associated with the desired blob. Such a query may be communicated to a blob lookup service using a variety of communication mechanisms or means. In some implementations, additional information may also be included in the request. For example, in some implementations, an endpoint may also request that a particular blob be provided in one or more alternate representations.
- the blob lookup service may provide one or more "locators," where a locator may generally comprise an identification of an endpoint that may be able to provide at least a portion of the blob.
- a blob lookup service might provide one or more locators that each in turn comprise a domain name or IP address that identifies one or more endpoints that the blob lookup service has determined may be able to provide at least a portion of the requested blob.
- the one or more locators provided by a blob lookup service might also include other information.
- a locator might include information that might be used, at least in part, to determine the manner in which the endpoint identified by the locator should be contacted or the manner in which the blob should be retrieved.
- a locator might include some indication of supported protocols that might be used, such as some kind of peer-to-peer or distributed data transfer mechanism, HTTP, FTP, and so on.
- the one or more locators might include some indication of preference or order that might identify a particular locator or locators as being preferred - such preferred locators might be used first and before other locators are used, for example.
- a locator may in some cases include information that indicates what portions of a blob may be provided by the particular endpoint associated with the locator.
- a locator may include information derived or provided by some type of knowledge, such as knowledge made available by an exemplary knowledge store module 145, described previously with reference to FIG. 1.
- Such knowledge may, for example, enable the blob lookup service that provides one or more locators to indicate a preference for the use of particular locators (because perhaps those locators identify endpoints that have faster connections, are physically or logically closer to the endpoint requesting the locators, and so on), or may enable the blob lookup service to modify or change the locators it provides in a variety of other ways.
- a locator may contain additional information, including as described previously, in some of the same cases or implementations a provided locator may not include any additional information. In such cases, and in other cases, other operations - including those not shown - may determine how to retrieve the blob through other means such as by a negotiation process with the endpoint identified in a locator to determine a communication or transfer protocol to use, and so on.
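- A locator carrying the kinds of optional information described above might be sketched as follows; the field names and the numeric preference ordering are illustrative assumptions of this sketch:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Locator:
    endpoint: str                        # domain name or IP address
    protocols: list = field(default_factory=lambda: ["http"])
    preference: int = 100                # lower value = more preferred
    byte_range: Optional[tuple] = None   # portion of the blob available

def order_locators(locators):
    # Preferred locators are tried first, before other locators are used.
    return sorted(locators, key=lambda loc: loc.preference)

locs = [Locator("peer.example", ["bittorrent"], preference=50),
        Locator("cache.example", ["http"], preference=10),
        Locator("203.0.113.7", ["ftp"], preference=90)]
print([l.endpoint for l in order_locators(locs)])
# ['cache.example', 'peer.example', '203.0.113.7']
```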
- At operation 420, it may be determined whether the provided locator(s) can be used to retrieve the blob. For example, the endpoint identified in a locator may be queried or a request may be sent to retrieve the blob, to initiate blob retrieval, to check the status or availability of the blob, and so on. If a determination can be made that the blob cannot be retrieved - perhaps the locator identifies an endpoint that is offline, for example, or the blob cannot be retrieved for some other reason - then in at least some implementations the operational flow 400 may proceed to operation 430. If one or more of the locators can be used to retrieve the blob, the operational flow may proceed to operation 425.
- the locator(s) may be used to retrieve the blob.
- This operation may be implemented in a wide variety of manners depending on a variety of characteristics including, but not limited to, the information provided in the locator(s), the supported or available communication or file transfer protocols, and so on.
- retrieving the blob may be implemented, at least in part, through the use of a blob download module that might be similar to or the same as the blob download module 150 and the blob download module 162 described previously with reference to FIG. 1.
- the endpoint might provide the locator(s) to the blob download module, and the blob download module might then evaluate the information in the locator(s) and use the information to determine one or more ways to retrieve or download the requested blob.
- an endpoint or blob download module might simply use a single locator to locate another endpoint that has the blob, send some type of network communication - such as perhaps an HTTP GET or file system file retrieval request - to the identified other endpoint, and receive the requested blob in response.
- an endpoint or blob download service might use multiple locators to initiate multiple requests to different endpoints, including perhaps separate requests for different (or the same) portions of a single blob. In some cases obtaining different portions of the same blob from multiple endpoints may enable the retrieval of the blob to be completed more quickly, or with less impact on any particular single endpoint, or may have other advantages.
- one or more techniques may be used to minimize the amount of data to be transferred or to otherwise further optimize the retrieval of the blob.
- some implementations may use data compression - such as ZIP, or other compression techniques - to retrieve less data.
- only portions of the blob that have changed from a copy of the blob that might already exist on the requesting endpoint may be requested or communicated from one or more other endpoints.
- Such "differential compression” may enable only small amounts of data to be communicated even when the blob in which the changed data exists is relatively large.
- the endpoint may inform another service or endpoint about the status of the blob retrieval or download.
- an endpoint might inform a blob lookup service that it has retrieved some or all of a particular blob.
- the blob lookup service may be able to provide a locator that identifies the endpoint as part of servicing some other request for the same blob.
- an endpoint that provides this information may do so only when the blob is completely retrieved, while in other cases or implementations the endpoint may periodically provide information as the blob is retrieved, so that, for example, a blob lookup service is updated when 25% of the blob has been retrieved, when 50% of the blob has been retrieved, and so on.
- the information communicated to another endpoint might include an identification of which portions of the blob have been retrieved - say, that the first 50% of a blob has been retrieved, that the first 10% and the last 10% of the blob has been retrieved, or so on - and such identification may in turn be useful to enable other endpoints to retrieve portions of the same blob from different endpoints.
- the operational flow may end.
- a retrieval failure may be interpreted similarly to or in the same way as a failure of the previously described operation 420.
- the operational flow may proceed to operation 430 (such a path is illustrated in FIG. 4 using a dashed line).
- the operational flow 400 may in some cases proceed to operation 430.
- reasons for executing operation 430 may include those described above, such as when the endpoints identified by the blob lookup service cannot provide the blob, perhaps because they are not online or available.
- the endpoint may request that a blob cache service retrieve and cache a copy of the requested blob.
- such a blob cache service may be accessed more reliably than at least some other endpoints, and so may serve as a useful interim or transient location from which an endpoint that wants to retrieve a blob can obtain it, instead of relying on some other (perhaps unavailable) endpoint.
- the blob cache service may accept the request and retrieve the blob itself when possible. For example, the blob cache service may wait until an originally identified endpoint is available and may then retrieve the blob from the original endpoint (or may instruct the endpoint to upload the blob).
- Some exemplary blob cache service implementations and operations that might be performed by a blob cache service are described elsewhere herein, especially with reference to FIG. 1 and FIG. 7.
- an endpoint that requests that a blob be stored by a blob cache service may later again execute operation 415 and query the blob lookup service for locators that may be used to retrieve the blob.
- the endpoint might receive a notification (perhaps from something like the notification service 140 described previously with reference to FIG. 1) that the blob cache service can provide the desired blob.
- the blob cache service may have already informed the blob lookup service that it has the blob, and so the blob lookup service may provide the requesting endpoint a locator that identifies the blob as being available using the blob cache service.
- the original endpoint may retrieve the blob from the blob cache service.
- endpoints - including other more reliable or available endpoints, such as a cloud storage service like that described previously with reference to FIG. 1 - may be used instead of a blob cache service as a location from which the endpoint may retrieve the blob.
- implementations that do not use a blob lookup service may proceed to operation 435, where the blob may be retrieved using a variety of other mechanisms that do not use a blob lookup service.
- one or more endpoints may attempt to retrieve a blob using some store that might not be associated with or known to a blob lookup service.
- the blob may be retrieved using any of a variety of mechanisms, including those described previously, such as HTTP, FTP, peer-to-peer "BitTorrent"-style communications, or other mechanisms.
- functionality made available elsewhere - like functionality provided by a blob cache service or other server functionality - may also be used in at least some implementations that do not use a blob lookup service.
- the blob download module might have one or more pluggable "drivers" that each may locate and/or obtain the requested blob perhaps in different ways. For example, one driver might use a blob lookup service to obtain a set of locators and then use a single locator to retrieve the blob, another driver might also use a blob lookup service and might use multiple locators - perhaps in parallel, or with a BitTorrent or other similar type of mechanism - to retrieve the blob, yet another driver might not use a blob lookup service and might locate and retrieve the blob in a variety of other ways, and so on. In at least some of such implementations, the blob download module might determine the driver to use in a variety of ways.
- the blob download module might query each driver for an estimate of how long the driver might take to retrieve the blob and each driver might then return an estimated time or perhaps some value that indicates that the driver cannot retrieve the blob. Using such returned data, the blob download module might pick one (or more) drivers and instruct the chosen driver(s) to actually retrieve the blob.
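The driver-selection scheme just described - query each pluggable driver for a time estimate, then instruct the best candidate to retrieve the blob - might look like the following sketch. All class names, return values, and the "None means cannot retrieve" convention are assumptions for illustration:

```python
class Driver:
    """Base class for pluggable blob-retrieval drivers (hypothetical API)."""
    def estimate_seconds(self, blob_id):
        return None  # None means "this driver cannot retrieve the blob"
    def retrieve(self, blob_id):
        raise NotImplementedError

class LookupServiceDriver(Driver):
    """Uses a blob lookup service and a single locator."""
    def estimate_seconds(self, blob_id):
        return 12.0
    def retrieve(self, blob_id):
        return b"...blob bytes via a single locator..."

class TorrentStyleDriver(Driver):
    """Uses multiple locators in parallel, BitTorrent-style."""
    def estimate_seconds(self, blob_id):
        return 4.5
    def retrieve(self, blob_id):
        return b"...blob bytes assembled from several endpoints..."

def download(blob_id, drivers):
    """Ask each driver for an estimate and use the fastest one that can help."""
    candidates = [(d.estimate_seconds(blob_id), d) for d in drivers]
    candidates = [(t, d) for t, d in candidates if t is not None]
    if not candidates:
        raise RuntimeError("no driver can retrieve %r" % blob_id)
    _, best = min(candidates, key=lambda pair: pair[0])
    return best.retrieve(blob_id)
```

A real blob download module could also pick several drivers and split the work between them, as the text notes; this sketch picks only one.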
- Turning now to FIG. 5, shown therein is an exemplary generalized operational flow 500 including various operations that may be performed when maintaining and providing metadata.
- the following description of FIG. 5 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 5 is not intended to be limited to being used with the elements described with reference to these other figures.
- an endpoint such as a server endpoint 110 described previously with reference to FIG. 1 may obtain metadata from another endpoint.
- the endpoint that obtains the metadata may in some implementations be the server endpoint that receives metadata with changes communicated by other endpoints, as was previously described, for example, with reference to operation 215 of FIG. 2.
- at least some of the operations described herein with reference to the operational flow 500 may be implemented by an exemplary storage service 120, also as described previously with reference to FIG. 1.
- Metadata may be obtained using any of a variety of communication mechanisms suitable for communicating metadata.
- an HTTP server associated with the receiving or obtaining endpoint may accept HTTP POST requests that contain metadata at a certain URL.
- another endpoint might communicate the metadata using another protocol, like FTP or one or more file sharing protocols, email, and so on.
- a receiving endpoint may proactively retrieve the metadata rather than rely on another endpoint to submit the metadata. For example, the obtaining endpoint might initiate an HTTP GET request to some other endpoint and be supplied with the metadata in a response.
- the obtained metadata may be merged with a copy of the metadata maintained by the server endpoint.
- Such a merge operation may result in the incorporation of the changes embodied in the obtained metadata into another copy of the metadata that may be maintained and modified by the endpoint that executes the operational flow 500.
- Such a local copy of the metadata may be maintained in a variety of ways, including (in at least some implementations) as rows or records in a database, as a copy of an XML feed (or other) document, and so on.
- the obtained metadata may be merged with the local metadata using SSE merge techniques.
- SSE merge techniques may be the same as or similar to the SSE merge techniques described previously with reference to, especially, operation 320 of FIG. 3.
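A greatly simplified, hypothetical sketch of an SSE-style merge of obtained metadata into a local copy is shown below. Real SSE merge processing also handles conflict detection and update history; here each item simply carries a version counter and the higher version wins, which is only a loose approximation:

```python
def merge_metadata(local, incoming):
    """Merge two metadata dictionaries keyed by item id. Each item carries a
    'version' counter; the copy with the higher version wins (conflict
    handling and update history, present in real SSE merges, are omitted)."""
    merged = dict(local)
    for item_id, item in incoming.items():
        current = merged.get(item_id)
        if current is None or item["version"] > current["version"]:
            merged[item_id] = item
    return merged

local = {"a": {"version": 2, "blob": "blob-a-v2"}}
incoming = {"a": {"version": 1, "blob": "blob-a-v1"},
            "b": {"version": 1, "blob": "blob-b-v1"}}
merged = merge_metadata(local, incoming)
print(merged["a"]["blob"], merged["b"]["blob"])  # blob-a-v2 blob-b-v1
```

After such a merge, the server endpoint's local copy incorporates the changes embodied in the obtained metadata, as the surrounding text describes.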
- one or more endpoints that have previously registered or subscribed to be notified when the metadata in question changes may be notified using one or more notification means.
- an endpoint might in some implementations initiate the execution of operations to obtain the newly updated metadata from the server endpoint and merge the changes in the newly updated metadata with that endpoint's own local metadata store. Such operations might include, perhaps, at least some of those described previously with reference to FIG. 3.
- no subscribers may be notified.
- a blob lookup service may be updated with location information for a blob or blobs identified by the obtained metadata, so that endpoints that obtain or use the metadata may be able to retrieve blobs referenced by or associated with the metadata. That is, for example, suppose the obtained metadata comprises a new item that references a new blob. It may be that, when the operational flow 500 is executed, the referenced blob exists only on the endpoint that generated and communicated the updated metadata to the server endpoint.
- the (initial) location of the blob may be stored by a blob lookup service, including a blob lookup service like the blob lookup services described elsewhere herein.
- the updated metadata created as part of the merging operation 515 may be communicated to one or more endpoints. For example, another endpoint that synchronizes the data represented by the metadata may submit a request to the server endpoint for updated metadata.
- Such a request might be submitted after the other endpoint receives a notification that the metadata has changed, when the other endpoint polls the server endpoint and discovers that the metadata has changed, and so on.
- the server endpoint may provide the metadata to the other endpoint in a variety of ways, including those described previously and elsewhere herein, such as through the use of HTTP GET or POST requests and responses, file sharing protocols, FTP, email, some other communication mechanism or mechanisms, and so on.
- Turning now to FIG. 6, shown therein is an exemplary generalized operational flow 600 including various operations that may be performed when storing and providing locations and locators associated with blobs referenced by metadata.
- the following description of FIG. 6 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 6 is not intended to be limited to being used with the elements described with reference to these other figures.
- while the exemplary operational flow of FIG. 6 indicates a particular order of execution, in one or more alternative embodiments the operations may be ordered differently.
- the exemplary operational flow contains multiple steps, it should be recognized that in some implementations at least some of these operations may be combined or executed contemporaneously, and in the same or other implementations, some steps may not be executed.
- an endpoint may accept one or more locations associated with a particular blob.
- the locations may specify an endpoint where at least part of a particular blob may be retrieved.
- the blob or blobs may be identified using one or more blob identifiers.
- An endpoint that accepts one or more locations might include at least some server endpoints, such as the exemplary server endpoint 110 described previously with reference to FIG. 1.
- at least some of the operations associated with the operational flow 600 may be implemented in at least some implementations by a blob lookup service, such as the exemplary blob lookup service 125 also described previously with reference to FIG. 1.
- While some of the operations of the operational flow 600 may be described herein as associated with an exemplary server endpoint or exemplary blob lookup service, it should be understood that the operations do not necessarily need to be implemented or executed by a server endpoint or blob lookup service and may instead be implemented or executed by one or more of a variety of endpoints, or modules or services included in those endpoints.
- the one or more blob locations may be accepted or retrieved from a variety of other endpoints and processes. For example, when a server endpoint or storage service processes metadata from another endpoint, it may provide this operation with one or more locations associated with one or more blobs identified by the metadata it has received (such an action might be associated with, for example, the operation 525 previously described with reference to FIG. 5). In another example, when an endpoint obtains some or all of a blob from one or more other endpoints, perhaps as part of synchronizing the metadata and blobs associated with the metadata, including through the use of operations described previously with reference to FIG. 4,
- that endpoint might also update a blob lookup service with its location so that other endpoints may be able to retrieve some or all of the blob from the endpoint that has now also obtained the blob (in addition to receiving it from the endpoint where it was initially created, say).
- the endpoint that updates the blob lookup service might also provide other information to the blob lookup service.
- Such other information might include, for example, an identification of the part (or parts) of the blob available at the particular endpoint (or an indication that the entire blob is available), some indication that the particular endpoint may be able to provide one or more alternate representations of the blob, and so on.
- the accepted locations (and possibly other accepted data) may be stored by the blob lookup service using one or more of a variety of means or mechanisms, including as rows or records in a database, and so on.
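To make the accept/store/look-up cycle concrete, here is a minimal in-memory sketch of a blob lookup service; the class name, method names, and dictionary-based storage are all assumptions (the text notes that a real implementation might use database rows instead):

```python
class BlobLookupService:
    """Minimal in-memory sketch: maps blob identifiers to the endpoints
    registered as able to provide some or all of each blob."""

    def __init__(self):
        self._locations = {}  # blob_id -> {endpoint: description of parts held}

    def register(self, blob_id, endpoint, parts="entire blob"):
        """Accept a location (and optionally which parts are available there)."""
        self._locations.setdefault(blob_id, {})[endpoint] = parts

    def locators(self, blob_id):
        """Return one locator per endpoint registered as providing the blob."""
        return [{"endpoint": ep, "parts": parts}
                for ep, parts in self._locations.get(blob_id, {}).items()]

svc = BlobLookupService()
svc.register("blob-123", "desktop.example")
svc.register("blob-123", "laptop.example", parts="first 50%")
print(len(svc.locators("blob-123")))  # 2
```

This matches the later description of operation 625: if three endpoints had registered, three locators would be identified, one per endpoint.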
- the blob lookup service may receive a request for locators associated with one or more blobs.
- Such requests may be sent or communicated from a variety of endpoints, modules, or services.
- a device endpoint might submit a request for a locator for a particular blob so that the device endpoint may be able to synchronize a blob identified by particular metadata. (The device endpoint may then use any returned locators to actually retrieve the blob.)
- a blob cache service or cloud storage service might request locators so that the blob cache service or cloud storage service may be able to retrieve one or more blobs.
- the blob lookup service may identify one or more locators associated with the particular requested blob.
- this operation may be executed, at least in part, by looking up a particular blob (perhaps using a blob identifier associated with the blob), and identifying a locator for every endpoint that is registered as being able to provide the requested blob. For example, if three endpoints have previously registered as having a requested blob, then this operation might identify three locators, one for each of the three endpoints.
- the identified locators may also include additional information aside from just a means for identifying an endpoint associated with the blob. For example, some locators may include information about which parts or portions of the blob may be retrieved from the particular endpoint, the same or other locators may include information about one or more communication protocols that may be used with the particular endpoint to retrieve the blob, and so on.
- a blob lookup service may simply, for example, identify a locator for each and every endpoint that may provide a particular blob, in other implementations the process of identifying locators may, for example, use additional information or knowledge.
- a blob lookup service might use a variety of knowledge about the metadata, about the server endpoint(s) and device endpoint(s), and so on, to filter, prioritize, or otherwise identify (perhaps the most suitable) locators.
- knowledge may be provided, at least in part, by a knowledge store such as the exemplary knowledge store module 145 described previously with reference to FIG. 1.
- a knowledge store has knowledge that one of the providing endpoints is implemented by a workstation-class desktop computer that is often connected to a network using a high-speed and high-bandwidth network connection; that another of the providing endpoints is a laptop computer that connects using a variety of networks with differing connection characteristics; and that a third providing endpoint is a mobile phone that is often turned on and connected to a network but that uses a relatively slow connection where data transmission is associated with per-unit monetary charges.
- at least some implementations of operation 625 may use this knowledge, as well as possibly other knowledge, to filter or prioritize the identified locator or locators.
- locators for all three endpoints might be identified, but they may include or be communicated with priority information that indicates that the desktop computer should be used first, the laptop computer should be used next, and that the mobile phone should only be used as a last resort.
- the identified locators might not even include particular endpoints, such as the mobile phone in this example - instead, only locators for the desktop and laptop computers may be identified.
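The desktop/laptop/mobile-phone example above - prioritize the cheap, fast endpoints and optionally exclude the metered one entirely - might be sketched as follows. The cost table standing in for knowledge-store data, and all names, are hypothetical:

```python
# Hypothetical cost ranking derived from a knowledge store: lower is better.
ENDPOINT_COST = {"desktop": 0, "laptop": 1, "mobile-metered": 2}

def prioritize_locators(locators, exclude_metered=False):
    """Order locators so cheaper/faster endpoints come first; optionally
    drop metered endpoints entirely, as in the mobile-phone example."""
    if exclude_metered:
        locators = [l for l in locators
                    if ENDPOINT_COST.get(l["kind"], 99) < 2]
    return sorted(locators, key=lambda l: ENDPOINT_COST.get(l["kind"], 99))

locs = [{"endpoint": "phone", "kind": "mobile-metered"},
        {"endpoint": "pc", "kind": "desktop"},
        {"endpoint": "nb", "kind": "laptop"}]
print([l["endpoint"] for l in prioritize_locators(locs)])
# ['pc', 'nb', 'phone']
print([l["endpoint"] for l in prioritize_locators(locs, exclude_metered=True)])
# ['pc', 'nb']
```

The first call returns all three locators with the mobile phone as a last resort; the second returns only the desktop and laptop locators, mirroring the two behaviors the text describes.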
- Other information or criteria may also be used to identify one or more locators, and even possibly to determine if any locators at all should be returned.
- the blob might contain executable code that contains a virus, might contain video data with objectionable content, and so on.
- an implementation of operation 625 might identify no locators for the particular blob, even if one or more endpoints have registered to be able to provide the blob in question.
- the blob lookup service may be used to filter or control the content that may be transmitted when synchronizing or communicating data.
- locator or locators identified may be communicated or provided to one or more endpoints, including an endpoint that made the request received as part of operation 620.
- Turning now to FIG. 7, shown therein is an exemplary generalized operational flow 700 including various operations that may be performed when storing blobs in a cache and providing blobs from a cache.
- the following description of FIG. 7 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 7 is not intended to be limited to being used with the elements described with reference to these other figures.
- while the exemplary operational flow of FIG. 7 indicates a particular order of execution, in one or more alternative embodiments the operations may be ordered differently.
- the exemplary operational flow contains multiple steps, it should be recognized that in some implementations at least some of these operations may be combined or executed contemporaneously, and in the same or other implementations, some steps may not be executed.
- an endpoint may accept or receive a request to store or cache a particular blob.
- An endpoint that receives such a request might include at least some server endpoints, such as the exemplary server endpoint 110 described previously with reference to FIG. 1.
- at least some of the operations associated with the operational flow 700 may be implemented in at least some implementations by a blob cache service, such as the exemplary blob cache service 130 also described previously with reference to FIG. 1.
- While some of the operations of the operational flow 700 may be described herein as associated with an exemplary server endpoint or exemplary blob cache service, it should be understood that the operations do not necessarily need to be implemented or executed by a server endpoint or blob cache service and may instead be implemented or executed by one or more of a variety of endpoints, or modules or services included in those endpoints.
- a request to cache a particular blob might be received from a variety of endpoints, including both server endpoints and device endpoints.
- a device endpoint might submit a request to cache a blob when, in one example, it is unable to obtain that blob itself from another endpoint (as might be the case for example and as described previously, when two device endpoints are not online at the same time).
- a storage service or other service or module might submit a request to cache a particular blob for one or more of a variety of reasons. For example, perhaps using knowledge maintained by a knowledge store like the knowledge store module 145 described previously with reference to FIG. 1,
- a storage service might have knowledge that a particular endpoint is often unavailable. In such a case, and possibly in other cases, when such an endpoint provides metadata associated with new or changed blobs, the storage service might proactively - even before another endpoint requests the blob - request that the particular blob or blobs be cached. Such an implementation might make it unnecessary for a device endpoint that wants to retrieve such a blob to make a request to cache the blob itself.
- the blob cache service may retrieve the requested blob. Retrieving a blob may be implemented in a variety of ways.
- the blob cache service may execute one or more operations that are the same as or similar to those described previously with reference to FIG. 4. Such operations might, for example, use a blob lookup service to identify one or more locations at which the blob is available and might retrieve the blob from such locations.
- endpoints may periodically communicate with the blob cache service, or may receive notifications that a communication with the blob cache service is necessary, and may provide or upload blobs that the blob cache service requests.
- a device endpoint might receive a communication or notification that a particular blob available on the device endpoint is desired by the blob cache service, and so might make the blob available for retrieval by the blob cache service as part of operation 715, or might proactively upload or provide the blob to the blob cache service as part of another implementation of operation 715.
- the blob cache service may in some implementations use knowledge about the synchronization system, metadata, endpoints, and so on - including knowledge maintained by a knowledge store such as the exemplary knowledge store 145 described previously with reference to FIG. 1 - to more efficiently retrieve blobs that are to be cached.
- a blob cache service might not immediately attempt to cache a blob requested by some other endpoint. Instead, the blob cache service might wait until, say, there is free bandwidth available to the one or more endpoints that may provide the blob, or until there is relatively cheap bandwidth available, or until endpoints that have sufficient computing power to service the request are available, and so on.
- the retrieved blob may be stored using any of a variety of storage mechanisms or means.
- blobs may be stored in a database, may be stored as files in a file system, or may be stored in some other manner.
- a blob cache service may use a variety of caching techniques to determine, for example, how long a particular blob should be stored. For example, one implementation of a blob cache service might store blobs until its available storage space is full, and might then use one or more algorithms - including LRU algorithms - to determine which blobs to delete so that space is available for additional blobs. In some implementations, blobs that have been retrieved or accessed recently might be kept while blobs that have not been retrieved recently may be deleted.
- blobs may be associated with a reference count that defines how many endpoints may want to retrieve the blob and a blob may be deleted in some cases when the specified number of endpoints have retrieved the blob.
- a variety of other techniques and algorithms may be used to manage the storage associated with the blob cache service.
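The two eviction policies mentioned above - least-recently-used deletion when storage is full, and deletion once an expected number of endpoints has retrieved a blob - might be combined as in the following sketch. The class and parameter names are illustrative, not taken from the document:

```python
from collections import OrderedDict

class BlobCache:
    """Sketch of the cache-management policies described above: LRU eviction
    when full, plus deletion after a specified number of retrievals."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._blobs = OrderedDict()  # blob_id -> (data, remaining_retrievals)

    def store(self, blob_id, data, expected_retrievals=None):
        if len(self._blobs) >= self.capacity:
            self._blobs.popitem(last=False)  # evict the least recently used blob
        self._blobs[blob_id] = (data, expected_retrievals)

    def retrieve(self, blob_id):
        data, refs = self._blobs.pop(blob_id)
        if refs is None:
            self._blobs[blob_id] = (data, None)      # re-insert as most recent
        elif refs > 1:
            self._blobs[blob_id] = (data, refs - 1)  # count down the retrievals
        # refs == 1: last expected retrieval, so the blob is simply dropped
        return data

cache = BlobCache(capacity=2)
cache.store("a", b"A", expected_retrievals=1)
cache.retrieve("a")         # last expected reader, so "a" is deleted
print("a" in cache._blobs)  # False
```

Re-inserting on each retrieval is what makes the `OrderedDict` behave as an LRU list: recently accessed blobs migrate to the end, so `popitem(last=False)` always evicts the least recently used one.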
- the blob cache service may update an exemplary blob lookup service with a location associated with the blob cache service when the blob cache service has stored at least a portion of a particular blob and can provide the stored portion of the blob to other endpoints.
- a blob lookup service may be able to provide a locator that is associated with the blob cache service to an endpoint, and such an endpoint may be able to thereby retrieve a blob from the blob cache service.
- endpoints may also be able to retrieve blobs from the blob cache service without the use of a blob lookup service.
- the blob cache service may receive a request from one or more endpoints for all or a portion of a cached blob and may, in an exemplary implementation of operation 730, provide the requested blob (or portion of the blob) to the requestor.
- Requests and responses associated with cached blobs may be implemented in a wide variety of manners, using a wide variety of communication mechanisms, and so on, including mechanisms described elsewhere herein such as HTTP transfers, FTP transfers, BitTorrent-style transfers, and so on.
- Turning now to FIG. 8, shown therein is an exemplary generalized operational flow 800 including various operations that may be performed when storing blobs in cloud storage and providing blobs from cloud storage.
- the following description of FIG. 8 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 8 is not intended to be limited to being used with the elements described with reference to these other figures.
- while the exemplary operational flow of FIG. 8 indicates a particular order of execution, in one or more alternative embodiments the operations may be ordered differently.
- the exemplary operational flow contains multiple steps, it should be recognized that in some implementations at least some of these operations may be combined or executed contemporaneously, and in the same or other implementations, some steps may not be executed.
- an endpoint may accept or receive a request to store some or all of a particular blob "in the cloud" or in cloud storage.
- the endpoint or module or service that accepts this request may be implemented in some cases by a server endpoint, perhaps like the server endpoint 110, and at least some of the operations associated with the operational flow 800 may be implemented or provided by an exemplary cloud storage module 135, both of which were previously described with reference to FIG. 1.
- While some of the operations of the operational flow 800 may be described herein as associated with an exemplary server endpoint or exemplary cloud storage service, it should be understood that the operations do not necessarily need to be implemented or executed by a server endpoint or cloud storage service and may instead be implemented or executed by one or more of a variety of endpoints, or modules or services included in those endpoints. [00120] While many of the operations in the operational flow 800 might appear similar to those described in the blob cache service operational flow 700, a cloud storage service may be used differently than a blob cache service.
- a cloud storage service might provide non-transient or "reliable" storage for blobs (and perhaps other data) by, for example, not deleting stored blobs unless specifically directed by, for example, a user; by maintaining multiple copies of stored data, perhaps on separate disk drives or other storage mechanisms; and so on.
- the cloud storage service may retrieve or be provided with blob data, including in some cases with a portion of or the entirety of the blob associated with the request that may have been received in operation 810.
- the cloud storage service might retrieve the blob in a variety of ways, including in some cases by using a blob lookup service, blob download module, or possibly some of the operations described previously, for example, with reference to FIG. 4.
- the cloud storage service might retrieve or be provided with the blob using any of a variety of other communication or file transfer mechanisms.
- knowledge of the communication system, metadata, blobs, and so on - perhaps including knowledge associated with an exemplary knowledge store, such as the knowledge store module 145 described previously with reference to FIG. 1 - may be used to more efficiently obtain the requested blob, perhaps in similar ways to those already described with reference to the blob cache service, or in other ways.
- the retrieved or provided blob may be stored by the cloud storage service in a variety of ways. For example, as introduced previously, a cloud storage service might store data, including blobs, using multiple redundant stores.
- the cloud storage service may in some cases update or provide information to an exemplary blob lookup service when it has stored the blob, so that the blob lookup service may in some implementations provide locators to other endpoints that reference or identify the cloud storage service, after which the other endpoints may in some cases retrieve blobs from the cloud storage service.
- a cloud storage service may receive a request to provide a stored blob, or a portion of a stored blob, in at least some implementations of operation 825.
- the cloud storage service may then provide the requested blob data in an exemplary implementation of operation 830, possibly through the use of a variety of communication or transfer mechanisms, including those described elsewhere herein.
- Turning now to FIG. 9, shown therein is an exemplary generalized operational flow 900 including various operations that may be performed when retrieving a blob without the use of full metadata or at least some of the functionality provided by an exemplary server endpoint.
- the following description of FIG. 9 may be made with reference to other figures. However, it should be understood that the operational flow described with reference to FIG. 9 is not intended to be limited to being used with the elements described with reference to these other figures.
- while the exemplary operational flow of FIG. 9 indicates a particular order of execution, in one or more alternative embodiments the operations may be ordered differently.
- the exemplary operational flow contains multiple steps, it should be recognized that in some implementations at least some of these operations may be combined or executed contemporaneously, and in the same or other implementations, some steps may not be executed.
- blob identifiers are included in metadata that may be synchronized between different endpoints. Such synchronized metadata provides at least one mechanism by which an endpoint might obtain a blob identifier.
- a blob identifier might be provided in a variety of other ways. For example, a user might send an email or instant message to another user that contains a blob identifier that in turn identifies a blob, such as perhaps an audio or video clip, a word processing document or spreadsheet, some other file, and so on.
- it may be determined whether a blob lookup service should be used to retrieve the blob. For example, some endpoints may not have access to or even know about a blob lookup service. In such a case, and in other cases - including when a blob lookup service is available but is not used - the determination may be made to not use a blob lookup service, and the operational flow 900 may proceed to operation 930. If the blob is to be retrieved using a blob lookup service, the operational flow 900 may proceed to operation 920. [00128] As one example of an environment in which a blob lookup service might or might not be used, consider the example of a web browser.
- Many web browsers include executable code that enables the download or retrieval of resources using protocols such as HTTP.
- an HTTP request may be made to a single server - or at least a single URL - and a response may be received from a single server.
- in some cases - for example, when retrieving a blob that may be large - it may be more effective and quicker to download the blob using one or more other mechanisms, including mechanisms, like BitTorrent or BitTorrent-like transfer implementations, where different pieces of a particular blob are retrieved from or supplied by multiple different endpoints.
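As a simplified illustration of retrieving different pieces of a blob from multiple endpoints (real BitTorrent-style systems choose pieces dynamically based on availability; this round-robin assignment and all names are assumptions):

```python
def assign_pieces(blob_size, piece_size, endpoints):
    """Round-robin assignment of blob byte ranges to endpoints, a simplified
    BitTorrent-style split where each piece is fetched from a different peer."""
    pieces = []
    start = 0
    i = 0
    while start < blob_size:
        end = min(start + piece_size, blob_size)
        pieces.append({"range": (start, end),
                       "endpoint": endpoints[i % len(endpoints)]})
        start = end
        i += 1
    return pieces

plan = assign_pieces(blob_size=10, piece_size=4, endpoints=["peer1", "peer2"])
print([(p["range"], p["endpoint"]) for p in plan])
# [((0, 4), 'peer1'), ((4, 8), 'peer2'), ((8, 10), 'peer1')]
```

An endpoint could then fetch each range concurrently from its assigned peer and reassemble the blob, which is the effect the text attributes to these multi-endpoint mechanisms.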
- web browsers may not natively include functionality to enable downloads of this or other types.
- functionality may be added to a web browser, or some other application, that adds this capability by, for example, enabling the web browser or application to support the use of a blob lookup service (and also possibly a blob download module, as described in more detail below).
- a web browser might support "plug-ins" or other code added by users.
- One such plug-in might implement support for a blob lookup service and, when installed, might enable a web browser to use a blob lookup service (and so to proceed to operation 920, instead of proceeding to operation 930).
- At operation 920, the endpoint retrieving the blob may query or ask a blob lookup service for one or more locators associated with the desired blob, perhaps by providing the blob lookup service with some data, like a blob identifier, that identifies the desired blob.
- Such an operation may be implemented in at least some implementations in a manner that is similar to or the same as the manner in which, for example, the operation 415, described previously with reference to FIG. 4, was implemented.
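As a rough illustration of this lookup step, the following Python sketch models a blob lookup service that maps blob identifiers to locators. All names here (`BlobLookupService`, `register`, `lookup`, the example identifiers and locators) are assumptions for illustration; the patent does not prescribe this interface.

```python
# Hypothetical sketch of a blob lookup service: endpoints register as
# sources for a blob, and a retrieving endpoint asks for locators given
# only a blob identifier.

class BlobLookupService:
    """Maps blob identifiers to locators of endpoints that hold the blob."""

    def __init__(self):
        self._locators = {}  # blob_id -> list of locator strings

    def register(self, blob_id, locator):
        # Record that the endpoint at `locator` can supply `blob_id`.
        self._locators.setdefault(blob_id, []).append(locator)

    def lookup(self, blob_id):
        # Return every known locator for the blob (possibly empty).
        return list(self._locators.get(blob_id, []))


service = BlobLookupService()
service.register("ABC123", "endpointA.example:9000")
service.register("ABC123", "endpointB.example:9000")
print(service.lookup("ABC123"))
```

A retrieving endpoint would then attempt the actual download against one or more of the returned locators, as described next.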
- Next, one or more of the obtained locators - and, in some cases, a blob download module - may be used to actually retrieve the blob.
- Such an operation may be implemented in at least some implementations in a manner that is similar to or the same as the manner in which, for example, the operation 425, described previously with reference to FIG. 4, was implemented.
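One way to picture the retrieval step is a blob download module that pulls different pieces of the blob from different locators and reassembles them, in the spirit of the BitTorrent-like transfers mentioned earlier. This is a minimal sketch under assumed names (`download_blob`, `fetch_piece`); real piece selection, integrity checking, and retry logic would be far more involved.

```python
# Minimal sketch of a blob download module: fetch each piece of the blob
# from one of the available locators (simple round-robin here) and
# reassemble the pieces in order. `fetch_piece(locator, index)` is an
# assumed callback returning the bytes of piece `index` from `locator`.

def download_blob(locators, num_pieces, fetch_piece):
    pieces = []
    for index in range(num_pieces):
        locator = locators[index % len(locators)]  # spread load across endpoints
        pieces.append(fetch_piece(locator, index))
    return b"".join(pieces)


# Example: two endpoints each able to serve pieces of the same 4-piece blob.
PIECES = [b"AA", b"BB", b"CC", b"DD"]

def fetch_piece(locator, index):
    return PIECES[index]

blob = download_blob(["ep1", "ep2"], len(PIECES), fetch_piece)
print(blob)  # -> b'AABBCCDD'
```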
- the endpoint might take additional actions if the blob cannot be downloaded or a retrieval of the blob fails. For example, if the blob lookup service does not supply any locators, or only supplies locators that cannot be used to actually retrieve the blob (because of network conditions, because the endpoints identified by the locators are not available, and so on), the endpoint retrieving the blob might, for example, request that a blob cache service store the blob and then attempt to later retrieve the blob from the blob cache service. Such a mechanism was described previously with reference to, for example, FIG. 4, and is not illustrated in FIG. 9.
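The fallback behavior described here - try the locators, and if none yields the blob, ask a blob cache service to store it and fetch it from there - might be sketched as follows. Every name (`retrieve_blob`, `try_locator`, the cache interface) is hypothetical.

```python
# Sketch of locator-with-fallback retrieval: attempt each locator in turn;
# if none works, ask a blob cache service to obtain and store the blob,
# then retrieve it from the cache.

def retrieve_blob(blob_id, locators, try_locator, cache):
    for locator in locators:
        data = try_locator(locator, blob_id)
        if data is not None:  # this locator produced the blob
            return data
    # No usable locator: fall back to the blob cache service.
    cache.store(blob_id)
    return cache.fetch(blob_id)


class FakeCache:
    """Stand-in for a blob cache service, for illustration only."""
    def __init__(self):
        self._data = {}
    def store(self, blob_id):
        self._data[blob_id] = b"cached-" + blob_id.encode()
    def fetch(self, blob_id):
        return self._data[blob_id]

cache = FakeCache()
# All locators fail (try_locator always returns None), so the cache is used.
result = retrieve_blob("ABC123", ["dead-ep"], lambda loc, bid: None, cache)
print(result)  # -> b'cached-ABC123'
```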
- After the blob has been retrieved, the endpoint may update the blob lookup service with additional information so that the blob lookup service may in some cases identify the endpoint as a source for the blob when other endpoints request the same blob.
- With this mechanism, it may be possible, for example, for one user in an office to download or retrieve a large file or blob, and for other users in the same office to retrieve the blob from the first user rather than from some other endpoint that might be located elsewhere on one or more other networks.
- In this example, an endpoint associated with the first user might have provided a blob lookup service with information that identifies the endpoint. When another endpoint later requests the same blob, the blob lookup service may provide a locator that identifies the first user's endpoint.
- the blob lookup service might provide such a locator by using knowledge - including knowledge maintained and provided by a knowledge store - that the first user's endpoint is perhaps geographically closer than the original endpoint, is connected to the second endpoint using a faster network connection than the original endpoint, and so on.
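The locator choice described here can be pictured as a cost minimization over what the knowledge store knows about each candidate endpoint (geographic distance, link speed, and so on). A toy sketch, with every name and cost figure invented for illustration:

```python
# Toy locator selection: given candidate locators and a knowledge-store
# lookup that estimates the cost of reaching each one, prefer the
# cheapest. The cost values below are invented.

def best_locator(locators, estimated_cost):
    return min(locators, key=estimated_cost)

# A same-office peer is rated far cheaper to reach than the origin server.
knowledge = {"origin.example": 50, "office-peer.local": 1}
choice = best_locator(list(knowledge), knowledge.get)
print(choice)  # -> office-peer.local
```

In practice the "cost" might combine several kinds of knowledge rather than a single number, but the selection idea is the same.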
- If a blob lookup service is not used, the operational flow 900 may proceed to operation 930, where the blob may be retrieved using one or more file transfer or blob retrieval mechanisms that do not directly (but may, in some implementations, indirectly) use a blob lookup service or blob download module.
- Such mechanisms might include HTTP, FTP, file sharing or transfer protocols, and so on.
- For example, when the blob identifier is itself a URL, the endpoint may simply initiate an HTTP request to the URL embodied by the blob identifier.
- The server endpoint or HTTP server that responds to the request for the blob might then be configured to handle URLs of this particular form by extracting an identifier (such as "ABC123"), using the identifier to locate the requested blob, and then returning the blob in an HTTP response.
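The server-side handling just described - extract an identifier from the URL, locate the blob, return it in the response - might be sketched as below. The `/blobs/<identifier>` path shape and the in-memory store are assumptions; the excerpt does not specify the exact URL form.

```python
# Hypothetical server-side handling of blob URLs of an assumed form
# "/blobs/<identifier>": extract the identifier from the request URL,
# look the blob up in a store, and return status plus body.

from urllib.parse import urlparse

BLOB_STORE = {"ABC123": b"blob bytes"}

def handle_blob_request(url):
    path = urlparse(url).path
    prefix = "/blobs/"
    if not path.startswith(prefix):
        return 404, b""
    blob_id = path[len(prefix):]          # e.g. "ABC123"
    blob = BLOB_STORE.get(blob_id)        # locate the requested blob
    if blob is None:
        return 404, b""
    return 200, blob                      # return the blob in the response

print(handle_blob_request("http://server.example/blobs/ABC123"))
# -> (200, b'blob bytes')
```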
- a server endpoint itself might locate the blob by using operations that are the same as or similar to, for example, the blob retrieval operations previously described with reference to FIG. 4.
- a server endpoint might only return blobs that exist in a specific storage location, such as a blob cache service or a cloud storage service.
- Turning now to FIG. 10, this figure and the related description are intended to provide a brief and general description of an exemplary computing environment in which the various technologies described herein may be implemented. Although not required, the technologies are described herein, at least in part, in the general context of computer-executable instructions, such as program modules that are executed by a controller, processor, personal computer, or other computing device, such as the computing device 1000 illustrated in FIG. 10.
- program modules include routines, programs, objects, components, user interfaces, data structures, and so on, that perform particular tasks, display particular information, or implement particular abstract data types. Operations performed by the program modules have been described previously with the aid of one or more block diagrams and operational flowcharts.
- Computer-readable media may be any media that can store or embody information that is encoded in a form that can be accessed and understood by a computer.
- Typical forms of computer-readable media include, without limitation, both volatile and nonvolatile memory, data storage devices, including removable and/or non-removable media, and communications media.
- Communication media embodies computer-readable information in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media.
- The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communications media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
- the computing device 1000 may implement at least part of, for example, one of the endpoints described previously with reference to FIG. 1, such as the server endpoint 110, the device endpoint A 160, the device endpoint B 170, and so on.
- the processing unit 1002 may be a general purpose central processing unit (CPU), as exists, for example, on a variety of computers, including desktop and laptop computers.
- the memory 1004 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 10 by dashed line 1006.
- the computing device 1000 may also have additional features and functionality.
- the computing device 1000 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 10 by the removable storage 1008 and the non-removable storage 1010.
- the computing device 1000 may also contain one or more communications connection(s) 1012 that allow the computing device 1000 to communicate with other devices and services.
- the computing device might have one or more connections to a variety of communication means or computing devices, including, for example, connections to or between endpoints as described previously with reference to FIG. 1.
- the computing device 1000 may also have one or more input device(s) 1014, such as image input devices (like cameras or scanners), keyboards, mice, pens, voice input devices (including microphone arrays), touch input devices, and so on.
- One or more output device(s) 1016, such as a display, speakers, a printer, and so on, may also be included in the computing device 1000.
- the technologies described herein may likewise be practiced in hand-held devices including mobile telephones and PDAs, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like.
- the technologies described herein may also be implemented in distributed computing environments where operations are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote devices. [00143] While described herein as being implemented in software, it will further be appreciated that the technologies described herein may alternatively be implemented all or in part as hardware, firmware, or various combinations of software, hardware, and/or firmware.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP08771572A EP2160688A4 (en) | 2007-06-22 | 2008-06-20 | Server-assisted and peer-to-peer synchronization |
CN200880021302XA CN101689164B (en) | 2007-06-22 | 2008-06-20 | Server-assisted and peer-to-peer synchronization |
AU2008268539A AU2008268539B2 (en) | 2007-06-22 | 2008-06-20 | Server-assisted and peer-to-peer synchronization |
JP2010513440A JP2010531026A (en) | 2007-06-22 | 2008-06-20 | Server-assisted and peer-to-peer synchronization |
BRPI0811308A BRPI0811308A2 (en) | 2007-06-22 | 2008-06-20 | non-hierarchical and server-assisted synchronization. |
RU2009147457/08A RU2471227C2 (en) | 2007-06-22 | 2008-06-20 | Peer-to-peer synchronisation assisted with service unit |
CA002687859A CA2687859A1 (en) | 2007-06-22 | 2008-06-20 | Server-assisted and peer-to-peer synchronization |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/766,985 US8239479B2 (en) | 2007-06-22 | 2007-06-22 | Server-assisted and peer-to-peer synchronization |
US11/766,985 | 2007-06-22 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2009002831A2 true WO2009002831A2 (en) | 2008-12-31 |
WO2009002831A3 WO2009002831A3 (en) | 2009-02-26 |
Family
ID=40136434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2008/067636 WO2009002831A2 (en) | 2007-06-22 | 2008-06-20 | Server-assisted and peer-to-peer synchronization |
Country Status (11)
Country | Link |
---|---|
US (2) | US8239479B2 (en) |
EP (1) | EP2160688A4 (en) |
JP (1) | JP2010531026A (en) |
KR (1) | KR20100033966A (en) |
CN (1) | CN101689164B (en) |
AU (1) | AU2008268539B2 (en) |
BR (1) | BRPI0811308A2 (en) |
CA (1) | CA2687859A1 (en) |
RU (1) | RU2471227C2 (en) |
TW (1) | TW200907719A (en) |
WO (1) | WO2009002831A2 (en) |
Families Citing this family (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8677270B2 (en) | 2007-05-04 | 2014-03-18 | Microsoft Corporation | Live companion user interface |
US8954507B2 (en) * | 2007-06-22 | 2015-02-10 | Microsoft Corporation | Gathering and using awareness information |
US20090112870A1 (en) * | 2007-10-31 | 2009-04-30 | Microsoft Corporation | Management of distributed storage |
US8195608B2 (en) * | 2008-06-02 | 2012-06-05 | International Business Machines Corporation | File synchronization between multiple nodes |
US8495036B2 (en) * | 2008-10-24 | 2013-07-23 | Microsoft Corporation | Blob manipulation in an integrated structured storage system |
US8620884B2 (en) * | 2008-10-24 | 2013-12-31 | Microsoft Corporation | Scalable blob storage integrated with scalable structured storage |
US10783121B2 (en) * | 2008-12-22 | 2020-09-22 | Ctera Networks, Ltd. | Techniques for optimizing data flows in hybrid cloud storage systems |
US9614924B2 (en) | 2008-12-22 | 2017-04-04 | Ctera Networks Ltd. | Storage device and method thereof for integrating network attached storage with cloud storage services |
US8762642B2 (en) * | 2009-01-30 | 2014-06-24 | Twinstrata Inc | System and method for secure and reliable multi-cloud data replication |
US8769055B2 (en) * | 2009-04-24 | 2014-07-01 | Microsoft Corporation | Distributed backup and versioning |
US8935366B2 (en) * | 2009-04-24 | 2015-01-13 | Microsoft Corporation | Hybrid distributed and cloud backup architecture |
EP2252031A1 (en) * | 2009-05-15 | 2010-11-17 | Thomson Licensing | Device and method for controlling dissemination of data by transfer of sets of instructions between peers having wireless communication capacities |
US20100306253A1 (en) * | 2009-05-28 | 2010-12-02 | Hewlett-Packard Development Company, L.P. | Tiered Managed Storage Services |
US20110070820A1 (en) * | 2009-09-23 | 2011-03-24 | Qualcomm Incorporated | System and apparatus for power-efficiently delivering personalized contents in a broadcast network |
US8266290B2 (en) | 2009-10-26 | 2012-09-11 | Microsoft Corporation | Scalable queues on a scalable structured storage system |
US8516137B2 (en) | 2009-11-16 | 2013-08-20 | Microsoft Corporation | Managing virtual hard drives as blobs |
US8621098B2 (en) * | 2009-12-10 | 2013-12-31 | At&T Intellectual Property I, L.P. | Method and apparatus for providing media content using a mobile device |
TWI484335B (en) * | 2010-01-07 | 2015-05-11 | Alibaba Group Holding Ltd | Cached data processing method, processing system, and means |
US20110246721A1 (en) * | 2010-03-31 | 2011-10-06 | Sony Corporation | Method and apparatus for providing automatic synchronization appliance |
US9235354B2 (en) * | 2010-04-13 | 2016-01-12 | Hitachi, Ltd. | Storage network system and its control method |
US8650311B2 (en) * | 2010-04-22 | 2014-02-11 | Cisco Technology, Inc. | Client device configured to connect with a home network |
KR101697979B1 (en) * | 2010-11-23 | 2017-01-19 | 삼성전자주식회사 | Method and apparatus for syncronizing data in connected devices |
KR101502895B1 (en) | 2010-12-22 | 2015-03-17 | 주식회사 케이티 | Method for recovering errors from all erroneous replicas and the storage system using the method |
KR101585146B1 (en) | 2010-12-24 | 2016-01-14 | 주식회사 케이티 | Distribution storage system of distributively storing objects based on position of plural data nodes, position-based object distributive storing method thereof, and computer-readable recording medium |
KR101544480B1 (en) | 2010-12-24 | 2015-08-13 | 주식회사 케이티 | Distribution storage system having plural proxy servers, distributive management method thereof, and computer-readable recording medium |
KR101483127B1 (en) | 2011-03-31 | 2015-01-22 | 주식회사 케이티 | Method and apparatus for data distribution reflecting the resources of cloud storage system |
KR101544483B1 (en) | 2011-04-13 | 2015-08-17 | 주식회사 케이티 | Replication server apparatus and method for creating replica in distribution storage system |
KR101544485B1 (en) | 2011-04-25 | 2015-08-17 | 주식회사 케이티 | Method and apparatus for selecting a node to place a replica in cloud storage system |
US9020892B2 (en) * | 2011-07-08 | 2015-04-28 | Microsoft Technology Licensing, Llc | Efficient metadata storage |
US8838764B1 (en) * | 2011-09-13 | 2014-09-16 | Amazon Technologies, Inc. | Hosted network management |
US9002790B2 (en) * | 2011-09-14 | 2015-04-07 | Google Inc. | Hosted storage locking |
US9294307B2 (en) * | 2011-10-07 | 2016-03-22 | Microsoft Technology Licensing, Llc | Synchronization of conversation data |
US20130132500A1 (en) | 2011-11-18 | 2013-05-23 | Apple Inc. | Selection of a master in a peer-to-peer network environment |
US9516615B2 (en) | 2011-11-18 | 2016-12-06 | Apple Inc. | Selection of synchronization stations in a peer-to-peer network environment |
US10271293B2 (en) | 2011-11-18 | 2019-04-23 | Apple Inc. | Group formation within a synchronized hierarchy of peer-to-peer devices |
KR101944403B1 (en) * | 2012-01-04 | 2019-02-01 | 삼성전자주식회사 | Apparatas and method of using for cloud system in a terminal |
US9258144B2 (en) * | 2012-02-23 | 2016-02-09 | Infosys Limited | Methods, systems and computer-readable media for integrating a composite MIME message |
KR101928915B1 (en) | 2012-02-24 | 2019-03-12 | 삼성전자 주식회사 | Apparatus and method for processing a data of mobile terminal |
US8949179B2 (en) * | 2012-04-23 | 2015-02-03 | Google, Inc. | Sharing and synchronizing electronically stored files |
US9471591B2 (en) * | 2012-05-02 | 2016-10-18 | Microsoft Technology Licensing, Llc | Iterative disk upload based on differencing disk format |
CN102694860A (en) * | 2012-05-25 | 2012-09-26 | 北京邦诺存储科技有限公司 | Method, equipment and system for data processing of cloud storage |
CN102724319B (en) * | 2012-06-20 | 2015-03-18 | 新浪网技术(中国)有限公司 | Data synchronization method and system, and associated storage server |
US9262429B2 (en) * | 2012-08-13 | 2016-02-16 | Microsoft Technology Licensing, Llc | De-duplicating attachments on message delivery and automated repair of attachments |
US9779124B2 (en) * | 2012-08-27 | 2017-10-03 | Lg Electronics Inc. | Mobile terminal and control method thereof |
CA2903830C (en) | 2013-03-05 | 2021-08-24 | Fasetto, Llc | System and method for cubic graphical user interfaces |
MX2016000616A (en) | 2013-07-18 | 2017-05-17 | Fasetto L L C | System and method for multi-angle videos. |
US10095873B2 (en) | 2013-09-30 | 2018-10-09 | Fasetto, Inc. | Paperless application |
US11379425B1 (en) * | 2013-12-18 | 2022-07-05 | Open Text Corporation | Optimizing synchronization of content management servers |
US9584402B2 (en) | 2014-01-27 | 2017-02-28 | Fasetto, Llc | Systems and methods for peer to peer communication |
CN103927288B (en) * | 2014-04-30 | 2018-12-07 | 惠州Tcl移动通信有限公司 | Information transfer and shared method and system between a kind of mobile device |
TWI599892B (en) * | 2014-06-25 | 2017-09-21 | D-Link Corp | Home network system file management and sharing methods |
JP6847031B2 (en) * | 2014-07-10 | 2021-03-24 | ファセット・インコーポレーテッド | System and method for editing messages |
NZ730674A (en) | 2014-10-06 | 2021-07-30 | Fasetto Inc | Systems and methods for portable storage devices |
US10437288B2 (en) | 2014-10-06 | 2019-10-08 | Fasetto, Inc. | Portable storage device with modular power and housing system |
KR102308140B1 (en) | 2015-03-11 | 2021-10-05 | 파세토, 인크. | Systems and methods for web API communication |
WO2017096245A1 (en) | 2015-12-03 | 2017-06-08 | Fasetto, Llc | Systems and methods for memory card emulation |
CN105635278A (en) * | 2015-12-30 | 2016-06-01 | 深圳市瑞驰信息技术有限公司 | Method for managing metadata of storage system and metadata server |
KR20230129589A (en) | 2016-11-23 | 2023-09-08 | 파세토, 인크. | Systems and methods for streaming media |
CA3054681A1 (en) | 2017-02-03 | 2018-08-09 | Fasetto, Inc. | Systems and methods for data storage in keyed devices |
WO2019079628A1 (en) | 2017-10-19 | 2019-04-25 | Fasetto, Inc. | Portable electronic device connection systems |
KR20210018217A (en) | 2018-04-17 | 2021-02-17 | 파세토, 인크. | Device presentation with real-time feedback |
CN109462640B (en) * | 2018-10-29 | 2021-07-20 | 上海掌门科技有限公司 | Metadata synchronization method, data terminal, interaction system and medium |
KR102041453B1 (en) * | 2018-12-07 | 2019-11-27 | 삼성전자 주식회사 | Apparatus and method for processing a data of mobile terminal |
US11057495B2 (en) | 2019-05-01 | 2021-07-06 | Ciena Corporation | Selecting where to process data associated with Internet of Things (IoT) devices |
US11075812B2 (en) * | 2019-06-20 | 2021-07-27 | Kaloom Inc. | Server and methods for synchronizing networking information with client devices |
CN110647514B (en) * | 2019-08-23 | 2022-07-08 | 北京浪潮数据技术有限公司 | Metadata updating method and device and metadata server |
CN111245707A (en) * | 2020-01-08 | 2020-06-05 | 北京小米移动软件有限公司 | Mail transmission method, device, electronic equipment and storage medium |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5682524A (en) * | 1995-05-26 | 1997-10-28 | Starfish Software, Inc. | Databank system with methods for efficiently storing non-uniform data records |
US6295541B1 (en) * | 1997-12-16 | 2001-09-25 | Starfish Software, Inc. | System and methods for synchronizing two or more datasets |
US6275831B1 (en) * | 1997-12-16 | 2001-08-14 | Starfish Software, Inc. | Data processing environment with methods providing contemporaneous synchronization of two or more clients |
US7260610B2 (en) | 1998-02-10 | 2007-08-21 | Gateway Inc. | Convergence events notification system |
US6356946B1 (en) | 1998-09-02 | 2002-03-12 | Sybase Inc. | System and method for serializing Java objects in a tubular data stream |
US20030069874A1 (en) * | 1999-05-05 | 2003-04-10 | Eyal Hertzog | Method and system to automate the updating of personal information within a personal information management application and to synchronize such updated personal information management applications |
US6381605B1 (en) * | 1999-05-29 | 2002-04-30 | Oracle Corporation | Heirarchical indexing of multi-attribute data by sorting, dividing and storing subsets |
US6366907B1 (en) * | 1999-12-15 | 2002-04-02 | Napster, Inc. | Real-time search engine |
US6694336B1 (en) * | 2000-01-25 | 2004-02-17 | Fusionone, Inc. | Data transfer and synchronization system |
US6891953B1 (en) * | 2000-06-27 | 2005-05-10 | Microsoft Corporation | Method and system for binding enhanced software features to a persona |
US7158953B1 (en) * | 2000-06-27 | 2007-01-02 | Microsoft Corporation | Method and system for limiting the use of user-specific software features |
US7472396B2 (en) | 2001-05-18 | 2008-12-30 | Qualcomm Incorporated | Extensible event notification mechanism |
US6745209B2 (en) * | 2001-08-15 | 2004-06-01 | Iti, Inc. | Synchronization of plural databases in a database replication system |
US7207008B1 (en) | 2001-09-12 | 2007-04-17 | Bellsouth Intellectual Property Corp. | Method, system, apparatus, and computer-readable medium for interactive notification of events |
US20030144892A1 (en) | 2002-01-29 | 2003-07-31 | International Business Machines Corporation | Method, system, and storage medium for providing knowledge management services |
US6993534B2 (en) * | 2002-05-08 | 2006-01-31 | International Business Machines Corporation | Data store for knowledge-based data mining system |
US20030220966A1 (en) | 2002-05-24 | 2003-11-27 | International Business Machines Corporation | System and method for dynamic content dependent conflict resolution |
CN1249597C (en) * | 2002-09-03 | 2006-04-05 | 鸿富锦精密工业(深圳)有限公司 | Synchronous system in distributed files and method |
US20040172584A1 (en) * | 2003-02-28 | 2004-09-02 | Microsoft Corporation | Method and system for enhancing paste functionality of a computer software application |
US7743022B2 (en) | 2003-02-28 | 2010-06-22 | Microsoft Corporation | Method and system for synchronizing data shared among peer computing devices |
US7660833B2 (en) * | 2003-07-10 | 2010-02-09 | Microsoft Corporation | Granular control over the authority of replicated information via fencing and unfencing |
US7734690B2 (en) | 2003-09-05 | 2010-06-08 | Microsoft Corporation | Method and apparatus for providing attributes of a collaboration system in an operating system folder-based file system |
US7200620B2 (en) * | 2003-09-29 | 2007-04-03 | International Business Machines Corporation | High availability data replication of smart large objects |
US20050203851A1 (en) * | 2003-10-25 | 2005-09-15 | Macrovision Corporation | Corruption and its deterrence in swarm downloads of protected files in a file sharing network |
US8250150B2 (en) | 2004-01-26 | 2012-08-21 | Forte Internet Software, Inc. | Methods and apparatus for identifying and facilitating a social interaction structure over a data packet network |
US20050203935A1 (en) | 2004-03-11 | 2005-09-15 | International Business Machines Corporation | Clipboard content and document metadata collection |
US7730012B2 (en) * | 2004-06-25 | 2010-06-01 | Apple Inc. | Methods and systems for managing data |
US7529734B2 (en) | 2004-11-12 | 2009-05-05 | Oracle International Corporation | Method and apparatus for facilitating a database query using a query criteria template |
US20060168123A1 (en) * | 2004-12-14 | 2006-07-27 | Alcatel | Queue and load for wireless hotspots |
US8214353B2 (en) * | 2005-02-18 | 2012-07-03 | International Business Machines Corporation | Support for schema evolution in a multi-node peer-to-peer replication environment |
US20060218492A1 (en) * | 2005-03-22 | 2006-09-28 | Andrade Jose O | Copy and paste with citation attributes |
US20060247961A1 (en) | 2005-04-28 | 2006-11-02 | Klemow Jason L | Method and system for providing aggregation and display of notifications |
US7454406B2 (en) * | 2005-04-29 | 2008-11-18 | Adaptec, Inc. | System and method of handling file metadata |
US20060265262A1 (en) * | 2005-05-18 | 2006-11-23 | Microsoft Corporation | Distributed conference scheduling |
US8229897B2 (en) | 2006-02-03 | 2012-07-24 | International Business Machines Corporation | Restoring a file to its proper storage tier in an information lifecycle management environment |
US7698258B2 (en) | 2006-06-02 | 2010-04-13 | Microsoft Corporation | Searchable storage system |
US7516141B2 (en) * | 2006-06-05 | 2009-04-07 | Research In Motion Limited | System and method for generating runtime metadata for use in the development of mobile device applications |
US20110047230A1 (en) * | 2006-11-17 | 2011-02-24 | Mcgee Steven J | Method / process / procedure to enable: The Heart Beacon Rainbow Force Tracking |
US7933296B2 (en) * | 2007-03-02 | 2011-04-26 | Microsoft Corporation | Services for data sharing and synchronization |
US20080219436A1 (en) * | 2007-03-05 | 2008-09-11 | General Instrument Corporation | Method and apparatus for providing a digital rights management engine |
US8706914B2 (en) * | 2007-04-23 | 2014-04-22 | David D. Duchesneau | Computing infrastructure |
US20090234967A1 (en) * | 2008-03-17 | 2009-09-17 | Nokia Corporation | Method, system, and apparatus for transferring P2P file distribution tasks between devices |
- 2007
- 2007-06-22 US US11/766,985 patent/US8239479B2/en not_active Expired - Fee Related
- 2008
- 2008-06-20 CA CA002687859A patent/CA2687859A1/en not_active Abandoned
- 2008-06-20 BR BRPI0811308A patent/BRPI0811308A2/en not_active Application Discontinuation
- 2008-06-20 AU AU2008268539A patent/AU2008268539B2/en active Active
- 2008-06-20 KR KR1020097026257A patent/KR20100033966A/en not_active IP Right Cessation
- 2008-06-20 TW TW097123125A patent/TW200907719A/en unknown
- 2008-06-20 RU RU2009147457/08A patent/RU2471227C2/en active
- 2008-06-20 JP JP2010513440A patent/JP2010531026A/en not_active Withdrawn
- 2008-06-20 EP EP08771572A patent/EP2160688A4/en not_active Ceased
- 2008-06-20 CN CN200880021302XA patent/CN101689164B/en active Active
- 2008-06-20 WO PCT/US2008/067636 patent/WO2009002831A2/en active Application Filing
- 2012
- 2012-04-26 US US13/457,321 patent/US8386587B2/en active Active
Non-Patent Citations (1)
Title |
---|
See references of EP2160688A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10063501B2 (en) | 2015-05-22 | 2018-08-28 | Microsoft Technology Licensing, Llc | Unified messaging platform for displaying attached content in-line with e-mail messages |
US10216709B2 (en) | 2015-05-22 | 2019-02-26 | Microsoft Technology Licensing, Llc | Unified messaging platform and interface for providing inline replies |
US10360287B2 (en) | 2015-05-22 | 2019-07-23 | Microsoft Technology Licensing, Llc | Unified messaging platform and interface for providing user callouts |
Also Published As
Publication number | Publication date |
---|---|
US20080317068A1 (en) | 2008-12-25 |
AU2008268539A1 (en) | 2008-12-31 |
KR20100033966A (en) | 2010-03-31 |
RU2471227C2 (en) | 2012-12-27 |
WO2009002831A3 (en) | 2009-02-26 |
US20120210020A1 (en) | 2012-08-16 |
EP2160688A4 (en) | 2012-10-03 |
AU2008268539B2 (en) | 2012-06-14 |
US8239479B2 (en) | 2012-08-07 |
CA2687859A1 (en) | 2008-12-31 |
RU2009147457A (en) | 2011-06-27 |
CN101689164A (en) | 2010-03-31 |
US8386587B2 (en) | 2013-02-26 |
BRPI0811308A2 (en) | 2019-02-26 |
EP2160688A2 (en) | 2010-03-10 |
TW200907719A (en) | 2009-02-16 |
CN101689164B (en) | 2013-03-27 |
JP2010531026A (en) | 2010-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8386587B2 (en) | Server-assisted and peer-to-peer synchronization | |
US9003059B2 (en) | Running applications in an online or offline mode based on the availability of the connection to the remote web server | |
US7451236B2 (en) | Document distribution and storage system | |
US9298747B2 (en) | Deployable, consistent, and extensible computing environment platform | |
US9485300B2 (en) | Publish-subscribe platform for cloud file distribution | |
US10242100B2 (en) | Managing cached data in a network environment | |
US20080243847A1 (en) | Separating central locking services from distributed data fulfillment services in a storage system | |
CN100407627C (en) | Method and system for realizing end-to-end data sharing | |
US20140181142A1 (en) | Syndication feeds for peer computer devices and peer networks | |
US20080212616A1 (en) | Services For Data Sharing And Synchronization | |
US20040143836A1 (en) | System and method for sharing objects among two or more electronic devices | |
US20050278389A1 (en) | Method and device for distributing digital data in particular for a peer-to-peer network | |
EP3238096B1 (en) | System and method for discovering a lan synchronization candidate for a synchronized content management system | |
US9323587B2 (en) | Method and system for automatic detecting and resolving APIs | |
JP2011523470A (en) | Automatic content transmission technology to mobile devices based on feeds | |
WO2012151993A1 (en) | Service pushing method and device | |
US20090089393A1 (en) | System and method for managing data downloads and data uploads | |
US20130054525A1 (en) | Using amqp for replication | |
CA3131954C (en) | Information synchronization method, system, apparatus, computer device and storage medium | |
EP2048855A2 (en) | Information transmitting system, information processing apparatus, information managing apparatus, and information transmitting method | |
US10997000B1 (en) | Event publishing system for heterogeneous events | |
TW437205B (en) | An internet caching system and a method and an arrangement in such a system | |
US20130018987A1 (en) | Adaptive replication | |
US10402373B1 (en) | Filesystem redirection | |
US7937441B2 (en) | Data-control device and method of controlling same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200880021302.X Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08771572 Country of ref document: EP Kind code of ref document: A2 |
|
REEP | Request for entry into the european phase |
Ref document number: 2008771572 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008771572 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008268539 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2687859 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010513440 Country of ref document: JP |
|
ENP | Entry into the national phase |
Ref document number: 2008268539 Country of ref document: AU Date of ref document: 20080620 Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 8106/DELNP/2009 Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 20097026257 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009147457 Country of ref document: RU |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: PI0811308 Country of ref document: BR Kind code of ref document: A2 Effective date: 20091109 |