US20140344379A1 - Multi-Tier Push Hybrid Service Control Architecture for Large Scale Conferencing over ICN - Google Patents


Info

Publication number
US20140344379A1
Authority
US
United States
Prior art keywords
message
service
conference
proxy
service proxy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/280,336
Inventor
Asit Chakraborti
Guoqiang Wang
Jun Wei
Ravishankar Ravindran
Xuan Liu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FutureWei Technologies Inc
Original Assignee
FutureWei Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FutureWei Technologies Inc filed Critical FutureWei Technologies Inc
Priority to US14/280,336
Publication of US20140344379A1
Assigned to FUTUREWEI TECHNOLOGIES, INC. reassignment FUTUREWEI TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHAKRABORTI, Asit, RAVINDRAN, RAVISHANKAR, WANG, GUOQIANG, LIU, XUAN, WEI, JUN

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40: Support for services or applications
    • H04L65/403: Arrangements for multi-party communication, e.g. for conferences
    • H04L12/00: Data switching networks
    • H04L12/02: Details
    • H04L12/16: Arrangements for providing special services to substations
    • H04L12/18: Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813: Arrangements for providing special services to substations for computer conferences, e.g. chat rooms
    • H04L12/1831: Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status

Definitions

  • Virtual conferencing may refer to a service that allows conference events and/or data to be shared and/or exchanged simultaneously with multiple participants located in geographically distributed networking sites.
  • the service may allow participants to interact in real-time and may support point-to-point (P2P) communication (e.g. one sender to one receiver), point-to-multipoint (P2MP) communication (e.g. one sender to multiple receivers), and/or multipoint-to-multipoint (MP2MP) communication (e.g. multiple senders to multiple receivers).
  • Some examples of virtual conferencing may include chat rooms, E-conferences, and virtual white board (VWB) services, where participants may exchange audio, video, and/or data over the Internet.
  • Some technical challenges in virtual conferencing may include real-time performance (e.g. real-time data exchanges among multiple parties) and scalability (e.g. supporting conferences with about 1000 to 10K participants).
  • the disclosure includes a network element (NE) comprising a memory configured to store a digest log for a conference, a receiver configured to receive a first message from a first of a plurality of participants associated with the NE, wherein the first message comprises a signature profile of the first participant, a processor coupled to the receiver and the memory and configured to track a state of the conference by performing a first update of the digest log according to the first message, and a transmitter coupled to the processor and configured to send a second message to a first of a plurality of service proxies that serve the conference, wherein the second message indicates the updated digest log.
  • the disclosure includes a method for synchronizing service controls for a conference at a local service proxy in an Information Centric Networking (ICN) network, the method comprising receiving a first message from a first of a plurality of participants associated with the service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received by employing an ICN content name based routing scheme, tracking a state of the conference by performing a first update for a digest log according to the first message, and sending a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
  • the disclosure includes a computer program product for use by a local service proxy serving a conference in an ICN network
  • the computer program product comprises computer executable instructions stored on a non-transitory computer readable medium such that when executed by a processor cause the local service proxy to receive a first message from a first of a plurality of participants associated with the local service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received via an ICN content name based routing scheme, track a state of the conference by performing a first update for a digest log according to the first message, wherein performing the first update comprises recording the first participant's signature profile in the digest log, and send a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
  • FIG. 1 is a schematic diagram of an embodiment of a multi-tier hybrid conference service network.
  • FIG. 2 is a schematic diagram of an embodiment of a network element (NE), which may act as a node in a multi-tier hybrid conference service network.
  • FIG. 3 is a schematic diagram of an embodiment of an architectural view of an Information Centric Networking (ICN)-enabled service proxy.
  • FIG. 4 is a schematic diagram of an embodiment of an architectural view of an ICN-enabled user equipment (UE).
  • FIG. 5 is a schematic diagram of an embodiment of a hierarchical view of a multi-tier hybrid conference service network.
  • FIG. 6 is a schematic diagram of an embodiment of a digest tree at a service proxy.
  • FIG. 7 is a schematic diagram of an embodiment of a conference formation.
  • FIG. 8 is a schematic diagram of an embodiment of a digest tree log at a service proxy.
  • FIG. 9 is a table of an embodiment of a digest history log at a service client.
  • FIG. 10 is a protocol diagram of an embodiment of a conference bootstrap method.
  • FIG. 11 is a protocol diagram of an embodiment of a conference synchronization method.
  • FIG. 12 is a protocol diagram of another embodiment of a conference synchronization method.
  • FIG. 13 is a flowchart of an embodiment of a conference recovery method.
  • FIG. 14 is a flowchart of another embodiment of a conference recovery method.
  • FIG. 15 is a schematic diagram of an embodiment of an ICN protocol interest packet.
  • FIG. 16 is a schematic diagram of an embodiment of an ICN protocol data packet.
  • Conference applications and/or systems may support real-time information and/or media data exchange among multiple parties located in a distributed networking environment. Some conference applications and/or systems may be implemented over a host-to-host Internet Protocol (IP) communication model and may duplicate meeting traffic over Wide Area Network (WAN) links.
  • a central server may control and manage a conference, process data from participants (e.g. meeting subscribers), and then redistribute the data back to the participants.
  • the server-centric model may be simple for control management, but the duplication and/or redistribution of conference data may lead to high data traffic concentration at the server.
  • the data traffic in a server-centric model may be on the order of N² (i.e. O(N²)), where N represents the number of participants.
  • the server-centric model may not be suitable for large scale conferences (e.g. with 1000 to 10K participants).
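The quadratic growth argument above can be made concrete with a short sketch (a hypothetical stream-counting model, not part of the disclosure): each of N participants sends one upstream stream to the central server, which then redistributes N−1 copies per participant.

```python
# Hypothetical illustration of the O(N^2) scaling claim above: count the
# streams a central conference server must carry for N participants.

def server_centric_streams(n_participants: int) -> int:
    """Upstream streams plus redistributed downstream copies at the server."""
    upstream = n_participants                          # one per sender
    downstream = n_participants * (n_participants - 1) # N - 1 copies per sender
    return upstream + downstream

for n in (100, 1000, 10_000):
    print(n, server_centric_streams(n))
```

For 1000 participants this already exceeds a million streams, which is why the disclosure argues a server-centric model does not scale to large conferences.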
  • Some conference service technologies may employ a Content Delivery Networking (CDN) model and/or a P2P content distribution networking model (e.g. multi-server architecture) to reduce the high data traffic in a server-centric model.
  • CDN models and/or P2P content distribution networking models may be over-the-top (OTT) solutions, where content (e.g. audio, video, and/or data) may be delivered over the Internet from a source to end users without network operators being involved in the control and/or distribution of the content (e.g. content providers operate independently from network operators).
  • networking optimizations and/or cross-layer optimizations may not be easily performed in CDN models and/or P2P content distribution networking models, and thus may lead to inefficient bandwidth utilization.
  • CDN models and/or P2P content distribution networking models may not support MP2MP communication and may not be suitable for real-time interactive communication.
  • Some other conference service technologies may employ a Named Data Networking (NDN) model to improve bandwidth utilization and reduce traffic load by leveraging NDN features, such as sharable and distributed in-net storage and name-based routing mechanisms.
  • NDN may be a receiver driven, data centric communication protocol, in which data flows through a network when requested by a consumer.
  • the data access model in NDN may be referred to as a pull-based model.
  • conference events and/or updates may be nondeterministic as conference participants may publish meeting updates at any time during a conference.
  • in some pull-based systems, such as Chronos, participants may actively query conference events. For example, every participant in Chronos may periodically broadcast a query to request meeting updates from other participants.
  • control signaling overheads in Chronos may be significant.
  • the coupling of the data plane and the control plane in Chronos may lead to complex support for simultaneous data updates and/or recovery.
  • Chronos may not be suitable for supporting real-time interactive large scale conferences.
  • ICN architecture is a type of network architecture that focuses on information delivery. ICN architecture may also be known as content-aware, content-centric, or data oriented networking architecture. ICN models may shift the IP communication model from a host-to-host model to an information-object-to-object model. The IP host-to-host model may address and identify data by storage location (e.g. host IP address), whereas the information-object-to-object model may employ a non-location based addressing scheme that is content-based.
  • Information objects may be the first class abstraction for entities in an ICN communication model. Some examples of information objects may include content, data streams, services, user entities, and/or devices.
  • information objects may be assigned with non-location based names, which may be used to address the information objects, decoupling the information objects from locations. Routing to and from the information objects may be based on the assigned names.
  • ICN architecture may provision for in-network caching, where any network device or element may serve as a temporary content server, which may improve performance of content transfer. The decoupling of information objects from location and the name-based routing in ICN may allow mobility to be handled efficiently.
  • ICN architecture may also provision for security by appending security credentials to data content instead of securing the communication channel that transports the data content.
  • conference applications may leverage ICN features, such as name-based routing, security, multicasting, and/or multi-path routing, to support real-time interactive large scale conferences.
  • hybrid conference service control architecture which may employ a combination of push and pull mechanisms for synchronizing conference updates.
  • the hybrid conference service control architecture may comprise a plurality of distributed service proxies serving a plurality of conference participants.
  • Each service proxy may serve a group of participants that is associated with the service proxy.
  • the service proxy may synchronize and consolidate conference updates with remote service proxies serving the same conference and distribute the consolidated conference updates to the associated participants.
  • Conference updates may include participants' fingerprints (FPs), which may include signatures and/or credentials of the participants, and/or update sequence numbers associated with the FPs.
  • the synchronization of control flows between a participant and a service proxy may employ a push mechanism.
  • each service proxy and each participant may maintain a digest log to track conference updates.
  • Each digest log may comprise a snapshot of a current localized view (e.g. in the form of a digest tree) of the conference and a history of participants' FPs.
  • each service proxy may comprise a proxy digest tree with a snapshot view of the associated participants (e.g. local digest tree) and other remote service proxies serving the conference and a history of FP updates corresponding to the proxy digest tree.
  • Each participant may comprise a digest tree with the root of the proxy digest tree (e.g. global root digest) and a history of FP updates corresponding to the global root digest.
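The digest structures described above may be sketched as follows (a hypothetical illustration; the hashing scheme, class names, and field layout are assumptions, not the disclosed format). A proxy's local digest tree summarizes its attached participants' FPs, and the global root digest covers the local digest plus the digests reported by remote proxies:

```python
import hashlib

def digest(*parts: str) -> str:
    """Hash a tuple of child digests/fingerprints into one digest."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p.encode())
    return h.hexdigest()

class ProxyDigestLog:
    """Snapshot view: local participants' FPs plus remote proxy digests."""
    def __init__(self):
        self.local_fps = {}   # participant name -> latest fingerprint
        self.remote = {}      # remote proxy name -> its reported digest
        self.history = []     # ordered history of global root digests

    def local_digest(self) -> str:
        return digest(*(fp for _, fp in sorted(self.local_fps.items())))

    def global_root(self) -> str:
        leaves = dict(self.remote)
        leaves["self"] = self.local_digest()
        return digest(*(d for _, d in sorted(leaves.items())))

    def update_participant(self, name: str, fp: str) -> str:
        self.local_fps[name] = fp
        root = self.global_root()
        self.history.append(root)
        return root

log = ProxyDigestLog()
r1 = log.update_participant("alice", "fp-alice-1")
r2 = log.update_participant("bob", "fp-bob-1")
print(r1 != r2)  # each FP update changes the global root digest
```

A participant's digest log would keep only the global root digest and its FP history, matching the child-parent split described above.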
  • synchronization may operate according to a child-parent relationship between a service proxy and an associated participant.
  • the disclosed hybrid conference service control architecture may leverage native ICN in-network storage and name-based routing.
  • the disclosed hybrid conference service control architecture may provide control plane and data plane separation.
  • the disclosed hybrid conference service control architecture may provide efficient synchronization of conference updates and fast recovery from network interrupts.
  • the disclosed hybrid conference service control architecture may be suitable for supporting real-time interactive large scale conferences.
  • FIG. 1 is a schematic diagram of an embodiment of a multi-tier hybrid conference service network 100 .
  • Network 100 may comprise a plurality of conference service routers (SRs) 120 and a plurality of UEs 130 .
  • the plurality of SRs 120 may be situated in one or more networks 170 .
  • network 170 may be an edge cloud network 170 that employs edge computing to provide widely dispersed distributed services and/or any other network.
  • Each UE 130 may be connected to a network 180 via links 142 (e.g. wireless and/or wireline links).
  • network 180 may be any Layer two (L2) access network (e.g. wireless access network, wireline access network, etc.), where L2 may be a data link layer in an Open System Interconnection (OSI) model.
  • OSI Open System Interconnection
  • the networks 170 and 180 may be interconnected via an Internet network 150 via links 141 .
  • the Internet network 150 may be formed from one or more interconnected local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), etc.
  • the Internet network 150 may comprise an Internet Service Provider (ISP) network.
  • the links 141 may include physical connections, such as fiber optic links, electrical links, and/or logical connections.
  • Each of the networks 150 , 170 , and/or 180 may comprise a plurality of NEs 140 , which may be routers, gateways, switches, edge routers, edge gateways, and/or any network devices suitable for routing and communicating packets as would be appreciated by one of ordinary skill in the art.
  • networks 150 , 170 , and/or 180 may be alternatively configured and the SRs 120 and the UEs 130 may be alternatively situated in the network 100 as determined by a person of ordinary skill in the art to achieve the same functionalities.
  • the SRs 120 may be routers, virtual machines (VMs), and/or any network devices that may be configured to synchronize controls and/or signaling for a large scale conference (e.g. chat rooms, E-conference services, Virtual White Board (VWB) services, etc. with about 1000 to about 10K participants) among each other and with a plurality of conference participants, where such participants may participate in the conference via UEs 130 .
  • each SR 120 may act as a conference proxy and may be referred to as a service proxy.
  • an SR 120 may host one or more VMs, where each VM may act as a service proxy for a different conference. It should be noted that each SR 120 may serve a different group of conference participants.
  • Each UE 130 may be an end user device, such as a mobile device, a desktop computer, a cellphone, and/or any network device configured to participate in one or more large scale conferences.
  • a conference participant may participate in a conference by executing a conference application on a UE 130 .
  • the conference participant may request to participate in a particular conference by providing a FP and a conference name.
  • the conference participant may also subscribe and/or publish data for the conference.
  • each UE 130 may be referred to as a service client and may synchronize conference controls and/or signaling with a service proxy.
  • network 100 may be an ICN-enabled network, which may employ ICN name-based routing, security, multicasting, and/or multi-path routing to support large scale conference applications.
  • Network 100 may comprise a control plane separate from a data plane.
  • network 100 may employ a two-tier (e.g. proxy-client) architecture for conference controls and signaling in the control plane.
  • the two-tier architecture may comprise a proxy layer and a client layer.
  • the proxy layer may include a plurality of service proxies situated in a plurality of SRs 120 and the client layer may include a plurality of service clients situated in a plurality of UEs 130 .
  • control paths 191 and 192 may be logic paths specifically for exchanging conference controls and signaling.
  • a service proxy situated in a SR 120 may exchange conference controls and signaling with remote service proxies situated in other SRs 120 serving the conference via control path 191 .
  • each service proxy may exchange conference controls and signaling with associated service clients situated in UEs 130 via control path 192 .
  • the service proxies may participate in control plane functions, but may not participate in data plane functions. As such, data communications (e.g. audio, video, rich text exchanges) among the service clients may be independent from the conference controls.
  • a push mechanism may be employed for synchronizing controls (e.g. FPs, other signed information, events, etc.) in a conference between a service proxy and a service client, as well as among service proxies.
  • a push mechanism may refer to a sender initiated transmission of information without a request from an interested recipient as opposed to an ICN protocol pull mechanism where an interested recipient may pull information from an information source.
  • a service client may initiate and push a FP update to a service proxy serving the service client.
  • the service proxy may consolidate all received FP updates into a first proxy update and may push the first proxy update to other remote service proxies serving the same conference.
  • the service proxy may also receive a second proxy update from one of the other remote service proxies and may push the second proxy update to participants served by the service proxy.
  • the push mechanism may enable real-time or nearly real-time communications, which may be a performance factor for conference services.
  • a service client and a service proxy may synchronize conference controls by employing substantially similar push mechanisms as described herein above.
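The push flow described above may be sketched as follows (class and method names are illustrative assumptions; real transport over ICN notification messages is omitted). A client pushes its FP update to its proxy; the proxy consolidates it and pushes it to peer proxies, which relay it down to their own clients:

```python
# Hedged sketch of the push mechanism: sender-initiated propagation of FP
# updates from client to proxy, proxy to peer proxies, and proxy to clients.

class SyncProxy:
    def __init__(self, name):
        self.name = name
        self.peers = []      # remote service proxies for the same conference
        self.clients = []    # locally attached service clients
        self.updates = []    # consolidated FP updates seen so far

    def push_from_client(self, fp_update):
        # consolidate the local update, then push it to remote proxies
        self.updates.append(fp_update)
        for peer in self.peers:
            peer.push_from_peer(fp_update)

    def push_from_peer(self, fp_update):
        # relay a remote proxy update to locally attached clients only,
        # so updates are not re-flooded back toward other proxies
        self.updates.append(fp_update)
        for client in self.clients:
            client.notify(fp_update)

class SyncClient:
    def __init__(self):
        self.seen = []
    def notify(self, fp_update):
        self.seen.append(fp_update)

p1, p2 = SyncProxy("p1"), SyncProxy("p2")
p1.peers, p2.peers = [p2], [p1]
c = SyncClient()
p2.clients = [c]
p1.push_from_client("fp:alice:1")
print(c.seen)  # ['fp:alice:1']
```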
  • synchronizations between service proxies may employ a pull mechanism.
  • a first service proxy may send an outstanding synchronizing (sync) interest to a second service proxy, where the sync interest may comprise a most recent global conference view at the first service proxy.
  • the second proxy receives a FP update from a client served by the second proxy, the second proxy may consolidate the FP update and update a global conference view at the second proxy.
  • the second proxy may detect that the first service proxy's global conference view indicated in the sync interest is out of date, and thus may send a sync response to the first service proxy indicating the latest updated global conference view.
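The outstanding-sync-interest exchange described above may be sketched as follows (a simplified, hypothetical model; real ICN interest lifetimes, naming, and retransmission are omitted). The requester's current global root digest rides in the sync interest; the responder answers immediately if the views differ, and otherwise holds the interest until a local FP update moves its view forward:

```python
# Hedged sketch of the proxy-to-proxy pull mechanism built on a
# long-lived sync interest carrying the requester's global root digest.

class PullSyncProxy:
    def __init__(self, root_digest):
        self.root_digest = root_digest
        self.pending = []          # outstanding sync interests from peers

    def receive_sync_interest(self, peer_digest, reply):
        if peer_digest != self.root_digest:
            reply(self.root_digest)      # peer is out of date: answer now
        else:
            self.pending.append(reply)   # views agree: hold the interest

    def apply_fp_update(self, new_digest):
        # a local client FP update moves the global view forward, so every
        # held sync interest can now be satisfied
        self.root_digest = new_digest
        for reply in self.pending:
            reply(new_digest)
        self.pending = []

responses = []
proxy = PullSyncProxy("digest-0")
proxy.receive_sync_interest("digest-0", responses.append)  # held
proxy.apply_fp_update("digest-1")
print(responses)  # ['digest-1']
```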
  • conference data may be exchanged among the participants by employing the pulling model in the ICN protocol.
  • FIG. 2 is a schematic diagram of an embodiment of an NE 200 , which may act as a service proxy (e.g. situated in a SR 120 ) and/or a service client (e.g. situated in a UE 130 ) by implementing any of the schemes described herein.
  • NE 200 may be implemented in a single node or the functionality of NE 200 may be implemented in a plurality of nodes.
  • NE encompasses a broad range of devices of which NE 200 is merely an example.
  • NE 200 is included for purposes of clarity of discussion, but is in no way meant to limit the application of the present disclosure to a particular NE embodiment or class of NE embodiments.
  • the features/methods described in the disclosure may be implemented in a network apparatus or component such as an NE 200 .
  • the features/methods in the disclosure may be implemented using hardware, firmware, and/or software installed to run on hardware.
  • the NE 200 may comprise transceivers (Tx/Rx) 210 , which may be transmitters, receivers, or combinations thereof.
  • a Tx/Rx 210 may be coupled to a plurality of downstream ports 220 for transmitting and/or receiving frames from other nodes, and a Tx/Rx 210 may be coupled to a plurality of upstream ports 250 for transmitting and/or receiving frames from other nodes.
  • a processor 230 may be coupled to the Tx/Rx 210 to process the frames and/or determine which nodes to send the frames to.
  • the processor 230 may comprise one or more multi-core processors and/or memory devices 232 , which may function as data stores, buffers, etc.
  • Processor 230 may be implemented as a general processor or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs).
  • Processor 230 may comprise a conference service control management module 233 , which may implement a conference bootstrap method 1000 , conference synchronization methods 1100 and/or 1200 , conference recovery methods 1300 and/or 1400 , and/or any other MP2MP related communication functions discussed herein.
  • the conference service control management module 233 may be implemented as instructions stored in the memory devices 232 (e.g. a computer program product), which may be executed by processor 230 .
  • the memory device 232 may comprise a cache for temporarily storing content, e.g., a Random Access Memory (RAM).
  • the memory device 232 may comprise a long-term storage for storing content relatively longer, e.g., a Read Only Memory (ROM).
  • the cache and the long-term storage may include dynamic random access memories (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof.
  • a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design.
  • a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation.
  • a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an ASIC that hardwires the instructions of the software.
  • a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
  • FIG. 3 is a schematic diagram of an embodiment of an architectural view of an ICN-enabled service proxy 300 , which may be situated in a SR (e.g. SR 120 ) in a multi-tier hybrid conference service network (e.g. network 100 ).
  • the service proxy 300 may comprise an application layer 310 , a service application programming interface (API) to application layer 320 , a service layer 330 , an ICN layer 340 , a L2/Layer three (L3) layer 350 , a service user-to-network interface (S-UNI) layer 361 , and a service network-to-network interface (S-NNI) layer 362 .
  • the application layer 310 may comprise an application pool 311 , which may comprise a plurality of applications, such as chat, VWB, and/or other applications.
  • the service API to application layer 320 may comprise a set of APIs for interfacing between the application layer 310 and the service layer 330 .
  • the APIs may be well-defined function calls and/or primitives comprising input parameters, output parameters, and/or return parameters.
  • the service layer 330 may comprise a sync proxy 336 and other service modules 335 .
  • the sync proxy 336 may serve a conference (e.g. chat, VWB, etc.) with a plurality of other sync proxies and may act as a control proxy for a group of conference participants and/or service clients (e.g. situated in UEs 130 ).
  • the other service modules 335 may manage and/or control other services.
  • the ICN layer 340 may comprise ICN protocol layer modules, which may include a content store (CS) (e.g. for caching interests and/or data), a forwarding information base (FIB) (e.g. for name-based routing lookup), and/or a pending interest table (PIT) (e.g. for tracking forwarded interests that are awaiting returning data).
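The cooperation of the three ICN-layer tables may be sketched as follows (a minimal, hypothetical forwarder; real CS/FIB/PIT semantics include interest lifetimes, nonce checks, and TLV encodings that are omitted here):

```python
# Hedged sketch of ICN-layer interest handling: serve from the content
# store if cached, otherwise record the requesting face in the PIT and
# forward on the longest matching FIB prefix.

class IcnForwarder:
    def __init__(self):
        self.cs = {}    # content store: name -> cached data
        self.fib = {}   # forwarding information base: name prefix -> face
        self.pit = {}   # pending interest table: name -> requesting faces

    def on_interest(self, name, in_face):
        if name in self.cs:
            return ("data", self.cs[name], in_face)   # cache hit
        self.pit.setdefault(name, set()).add(in_face) # remember requester
        matches = [p for p in self.fib if name.startswith(p)]
        if not matches:
            return ("drop", None, None)               # no route
        out_face = self.fib[max(matches, key=len)]    # longest prefix match
        return ("forward", name, out_face)

fwd = IcnForwarder()
fwd.fib["/conference/chat"] = "face-2"
print(fwd.on_interest("/conference/chat/fp/alice", "face-1"))
```

When data returns, a real forwarder would consume the matching PIT entry, send the data to every recorded face, and optionally cache it in the CS.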
  • the L2/L3 layer 350 may comprise networking protocol stack modules, which may include data and/or address encoding and/or decoding for network transmissions.
  • the L2 layer and the L3 layer may be referred to as the data link layer and the network layer in the OSI model.
  • the S-UNI layer 361 may interface (e.g. signaling functions between networks and users) with one or more conference participants (e.g. situated in UEs 130 ) situated in the network.
  • the S-NNI layer 362 may interface (e.g. signaling functions between networks) with one or more SRs (e.g. SRs 120 ) situated in the network.
  • the sync proxy 336 may communicate with the other remote service proxies to synchronize FPs of conference participants.
  • the sync proxy 336 may comprise a FP processor 331 , a heartbeat signal processor 332 , a digest log 333 , and an application cache 334 .
  • the FP processor 331 may receive FP updates (e.g. FPs of participants) from service clients and/or remote service proxies.
  • the FP processor 331 may also send FP updates to service clients and/or remote service proxies.
  • the FP processor 331 may track and maintain FP updates received from the service clients and/or the remote service proxies as discussed more fully below.
  • the FP updates may be sent and/or received in the form of notification messages and/or sync messages via the ICN layer 340 , the L2/L3 layer 350 , and/or the S-NNI layer 362 .
  • the notification messages and/or sync messages may include ICN interest packets and/or ICN data packets, which may be handled according to ICN protocol (e.g. forwarded according to a FIB and/or cached in a CS).
  • the heartbeat signal processor 332 may monitor and exchange liveliness (e.g. functional and connectivity statuses) of the remote service proxies and/or the service clients attached to the service proxy 300 .
  • the heartbeat signal processor 332 may generate and send heartbeat indication signals (e.g. periodically and/or event-driven) to the remote service proxies and/or the attached service clients.
  • the heartbeat signal processor 332 may also listen to heartbeat indication signals from the remote service proxies and the attached service clients.
  • the heartbeat signal processor 332 may send a heartbeat response signal to confirm the reception of a heartbeat indication signal.
  • the heartbeat signal processor 332 may send a network failure signal to the application layer 310 to notify the application serving the faulty service client and/or the faulty remote service proxy of the network failure.
  • the heartbeat signals may be sent and/or received in the form of heartbeat messages via the ICN layer 340 , the L2/L3 layer 350 , and the S-NNI layer 362 .
  • the heartbeat messages may include ICN interest packets and/or ICN data packets, which may be handled according to ICN protocol (e.g. forwarded according to a FIB and cached in a CS).
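The liveliness monitoring described above may be sketched as follows (the miss threshold, names, and callback shape are illustrative assumptions; the disclosure does not fix them):

```python
# Hedged sketch of the heartbeat signal processor: peers that stay silent
# for enough consecutive heartbeat intervals are reported upward to the
# application layer as a network failure.

class HeartbeatMonitor:
    MISS_LIMIT = 3   # assumed threshold; the disclosure does not fix one

    def __init__(self, peers, on_failure):
        self.missed = {p: 0 for p in peers}  # intervals since last heartbeat
        self.on_failure = on_failure         # notify the application layer

    def on_heartbeat(self, peer):
        self.missed[peer] = 0    # indication received: peer is alive

    def on_interval_elapsed(self):
        # called once per heartbeat period
        for peer in self.missed:
            self.missed[peer] += 1
            if self.missed[peer] == self.MISS_LIMIT:
                self.on_failure(peer)

failures = []
mon = HeartbeatMonitor(["proxy-2", "client-7"], failures.append)
mon.on_interval_elapsed()
mon.on_heartbeat("proxy-2")   # proxy-2 answers, client-7 stays silent
mon.on_interval_elapsed()
mon.on_interval_elapsed()
mon.on_interval_elapsed()
print(failures)  # ['client-7', 'proxy-2']
```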
  • the digest log 333 may be a cache or any temporary data storage that records recent FP updates.
  • the digest log 333 may store a snapshot of a local view of the conference service including all the attached participants (e.g. including FPs) and all the remote proxies (e.g. some digest information) at a specified time, where the local view may be represented in the form of a digest tree as discussed more fully below.
  • the digest log 333 may also store a history of FP updates corresponding to the digest tree, where each entry may be in the form of ⁇ global root digest>: ⁇ global digest tree>: ⁇ local digest tree> as discussed more fully herein below.
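Recording an entry in that ⟨global root digest⟩:⟨global digest tree⟩:⟨local digest tree⟩ layout may be sketched as follows (the serialization and separator choices are illustrative assumptions; the disclosure does not fix a wire format):

```python
# Hedged sketch of appending one digest-log history entry in the layout
# <global root digest>:<global digest tree>:<local digest tree>.

import hashlib
import json

def record_entry(history, global_tree, local_tree):
    g = json.dumps(global_tree, sort_keys=True)   # snapshot of proxy view
    l = json.dumps(local_tree, sort_keys=True)    # snapshot of local FPs
    root = hashlib.sha256((g + l).encode()).hexdigest()
    history.append(f"{root}:{g}:{l}")
    return root

history = []
root = record_entry(
    history,
    global_tree={"proxy-1": "d1", "proxy-2": "d2"},
    local_tree={"alice": "fp-1", "bob": "fp-2"},
)
print(history[0].startswith(root))  # True
```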
  • the application cache 334 may be a temporary data storage that stores FPs that are in transmission (e.g. transmission status may not be confirmed).
  • the FP processor 331 may manage the digest log 333 and/or the application cache 334 for storing and tracking FP updates.
  • the FP processor 331 and the heartbeat signal processor 332 may serve one or more conferences (e.g. a chat and a VWB).
  • the FP processor 331 may employ different digest logs 333 and/or different application caches 334 for different conferences.
  • FIG. 4 is a schematic diagram of an embodiment of an architectural view of an ICN-enabled UE 400 (e.g. UE 130 ), which may be situated in a multi-tier hybrid conference service network (e.g. network 100 ).
  • the UE 400 may comprise a substantially similar architecture as in a service proxy 300 .
  • the UE 400 may comprise an application layer 410 , an application pool 411 , a service API to application layer 420 , a service layer 430 , other service modules 435 , an ICN layer 440 , and a L2/L3 layer 450 , which may be substantially similar to application layer 310 , application pool 311 , service API to application layer 320 , service layer 330 , other service modules 335 , ICN layer 340 , and L2/L3 layer 350 .
  • the service layer 430 may comprise a sync client 436 instead of a sync proxy 336 as in the service proxy 300 .
  • the service client 400 may comprise an S-UNI control layer 461 and an S-UNI data layer 462 instead of an S-UNI layer 361 and S-NNI layer 362 as in the service proxy 300 .
  • the S-UNI data layer 462 may exchange conference data (e.g. video, audio, rich text) with one or more conference participants (e.g. situated in UEs 130 ) in the network.
  • the S-UNI control layer 461 may interface with a service proxy to exchange control signaling with the network.
  • the sync client 436 may be a service client configured to participate in a conference.
  • the sync client 436 may communicate with a service proxy (e.g. service proxy 300 ) serving the conference or more specifically a sync proxy (e.g. sync proxy 336 ) in the network.
  • the sync client 436 may communicate with the service proxy via the ICN layer 440 , the L2/L3 layer 450 , and the S-UNI control layer 461 .
  • the sync client 436 may comprise a FP processor 431 , a heartbeat signal processor 432 , a digest log 433 , and an application cache 434 .
  • the FP processor 431 may be substantially similar to FP processor 331 . However, the FP processor 431 may send FP updates (e.g. join, leave, re-join a conference) to a service proxy and may receive other participants' FP updates from the service proxy.
  • the heartbeat signal processor 432 may be substantially similar to heartbeat signal processor 332 . However, the heartbeat signal processor 432 may monitor and exchange liveliness indications (e.g. functional statuses) with the service proxy and may employ substantially similar mechanisms for detecting network failure at the service proxy and notifying application layer 410 .
  • the digest log 433 may be substantially similar to digest log 333 , but may store a digest tree with a most recent global root digest received from the associated service proxy and a history of FP updates corresponding to the global root digest (e.g. <global root digest>:<user FP>) as discussed more fully below.
  • the application cache 434 may be substantially similar to application cache 334 .
  • the FP processor 431 and the heartbeat signal processor 432 may serve one or more conferences (e.g. a chat and a VWB). In such an embodiment, the FP processor 431 may employ different digest logs 433 and/or different application caches 434 for different conferences.
  • FIG. 5 is a schematic diagram of an embodiment of a hierarchical view of a multi-tier hybrid conference service network 500 , for example, as implemented in network 100 .
  • Network 500 may comprise a plurality of service proxies 521 , 522 , and 523 (e.g. service proxies 300 and/or SRs 120 ) at a first level and a plurality of service clients 531 , 532 , and 533 (e.g. service clients 400 and/or UEs 130 ) at a second level.
  • Each service client 531 , 532 , and/or 533 may be associated with one of the service proxies 521 , 522 , or 523 .
  • service clients 531 , 532 , and 533 may be configured to communicate with service proxies 521 , 522 , and 523 , respectively.
  • the service proxies 521 , 522 , and 523 may be inter-connected and may be configured to exchange conference updates with each other and with corresponding service clients 531 , 532 , and 533 , respectively.
  • the service proxies 521 , 522 , and 523 and/or the service clients 531 , 532 , and 533 may also be referred to as conference components in the present disclosure.
  • FIG. 6 is a schematic diagram of an embodiment of a digest tree 600 at a service proxy (e.g. service proxy 300 , 521 - 523 , and/or SR 120 ) during a conference steady state.
  • the digest tree 600 may correspond to a digest tree at a service proxy P1.
  • the digest tree 600 may be generated by a FP processor (e.g. FP processor 331 ) at a sync proxy (e.g. sync proxy 336 ).
  • the digest tree 600 may be stored in a proxy digest log (e.g. digest log 333 ) and may represent a snapshot of a localized view (e.g. at a particular time instant) of the conference at the service proxy P1.
  • the digest tree 600 may comprise a node 610 (e.g. depicted as G 1,dg 1 (t) ), which may be referred to as a global root digest G 1 and may indicate a global state dg 1 (t) at the global root digest G 1 , where the subscript 1 may represent the identifier (ID) of the service proxy P1.
  • the node 610 may branch into a plurality of nodes 620 (e.g. depicted as P 1,dp 1 (t) , . . . P n,dp n (t) ), which may be referred to as proxy local digest roots.
  • Each node 620 may correspond to a service proxy serving the conference and may indicate a local state dp n (t) at the corresponding service proxy.
  • the node 620 that corresponds to service proxy P1 (e.g. P 1,dp 1 (t) ) may further branch into a plurality of leaf nodes 630 (e.g. depicted as U 1,fp 1 (t) , . . . U m,fp m (t) ), where the tree branches that fall under the node 620 corresponding to service proxy P1 may be referred to as a local digest tree.
  • Each leaf node 630 may correspond to a service client U m (e.g. service client 400 ) and may indicate the FP update fp m (t) (e.g. published content and associating update sequence number) received from the service client U m , where the subscript m may represent the service client ID.
  • service proxy P1 may maintain FPs of the attached service clients and local states of the remote service proxies.
  • the service proxy P1 may track and update the states at each node 610 , 620 , and 630 by tracking updates received from the remote service proxies in the conference and/or the attached service clients.
  • upon receiving a FP update (e.g. U m,fp m (t) ) from an attached service client m, service proxy P1 may update the node 630 that corresponds to the service client m according to the received FP update, as well as the global state dg 1 (t) at node 610 and the local state dp 1 (t) at the node 620 that corresponds to service proxy P1.
  • upon receiving a FP update from a remote service proxy Pn, the service proxy P1 may update the global state dg 1 (t) at node 610 and the local state dp n (t) at the node 620 that corresponds to the remote service proxy Pn. It should be noted that the states at nodes 610 , 620 , and 630 may be updated at different time instants.
  • service proxy P1 may store each received FP update in a digest log history in the digest log and may purge old entries after a predetermined time and/or based on the size of the digest log history.
  • the global state dg 1 (t) and the local state dp 1 (t) at a service proxy P1 at a time instant t may be computed as shown below:

dg 1 (t) = dp 1 (t) + Σ j=2 n dp j (t)

where Σ j=2 n dp j (t) may represent the total number of FP updates sent by the remote service proxies serving the conference.
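Under this computation, the global state at proxy P1 is its own local state plus the sum of the remote proxies' local states. The short sketch below captures that arithmetic; the function and variable names are illustrative.

```python
# Minimal sketch of the global digest computation:
# dg1(t) = dp1(t) + sum over j=2..n of dpj(t).
def global_digest(local_state: int, remote_states: list[int]) -> int:
    """Combine P1's local state with the remote proxies' local states."""
    return local_state + sum(remote_states)

# P1's local state is 2; remote proxies P2 and P3 report states 1 and 3.
dg1 = global_digest(2, [1, 3])  # -> 6
```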
  • a service client connecting to service proxy P1 may maintain a digest log (e.g. digest log 433 ) to track updates received from service proxy P1.
  • the service client may generate an entry in the digest log history of the digest log to record the received FP update (e.g. <G 1,dg 1 (t) >:<U m,fp m (t) >) and may update the digest tree with the received global root digest.
  • the disclosed multi-tier hybrid conference service control architecture may offload the maintenance and tracking of conference controls from the service clients when compared to a server-centric architecture and/or a server-less architecture.
  • the service client may purge old entries in the digest log history after a predetermined time and/or based on the size of the digest log history.
  • Each digest tree may represent a snapshot of a localized view of the conference at a specified time. Each digest tree may comprise a different tree structure at a particular time instant.
  • FIGS. 7-9 may illustrate an embodiment of a digest log at a service proxy and at a service client during a conference.
  • FIG. 7 is a schematic diagram of an embodiment of a conference 700 formation.
  • Conference 700 may comprise two service proxies P1 and P2 (e.g. service proxies 300 and/or SRs 120 ) managing conference 700 and three conference participants U1, U2, and U3 (e.g. service clients 400 and/or UEs 130 ).
  • participant U1 may join conference 700 via service proxy P1 at time instant t1
  • participant U3 may join conference 700 via service proxy P2 at time instant t2
  • participant U2 may join conference 700 via service proxy P1 at time instant t3.
  • FIG. 8 is a schematic diagram of an embodiment of digest tree log 800 at a service proxy (e.g. service proxy 300 and/or SR 120 ) during a conference 700 .
  • digest tree log 800 may represent the digest tree log at a service proxy P1.
  • Digest tree log 800 may illustrate digest trees 810 , 820 , 830 , and 840 at four different time instants during conference 700 .
  • Digest tree 810 may illustrate a conference view at a beginning time t0 of the conference at the service proxy P1.
  • Digest tree 820 may illustrate a conference view at a time instant t1 when a participant U1 joins the conference via the service proxy P1.
  • Digest tree 830 may illustrate a conference view at a time instant t2 when a participant U3 joins the conference via a service proxy P2.
  • Digest tree 840 may illustrate a conference view at a time instant t3 when a participant U2 joins the conference via the service proxy P1.
  • Each digest tree 810 , 820 , 830 , and 840 may comprise a substantially similar structure as in digest tree 600 described herein above.
  • as shown in digest tree log 800 , when a participant joins the conference via a service proxy, the corresponding local state dp n at P n,dp n , as well as the global state dg 1 at G 1,dg 1 (t) , may be incremented by one.
  • the local state dp 1 and the dg 1 of service proxy P1 may be computed by service proxy P1 and the local states dp n of the remote service proxies may be computed by the remote service proxies and received by service proxy P1.
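As a rough illustration of how the states in digest tree log 800 evolve, the sketch below replays the three joins of conference 700 at proxy P1: each join increments the joining proxy's local state and the global root digest by one. All names are assumptions for illustration only.

```python
# Illustrative replay of conference 700 as seen at proxy P1.
dp = {"P1": 0, "P2": 0}   # proxy local digest roots (digest tree 810)
dg1 = 0                   # global root digest at P1

def join(via_proxy: str) -> None:
    """A join via a proxy bumps that proxy's local state and dg1 by one."""
    global dg1
    dp[via_proxy] += 1
    dg1 += 1

join("P1")  # t1: U1 joins via P1 (digest tree 820)
join("P2")  # t2: U3 joins via P2 (digest tree 830)
join("P1")  # t3: U2 joins via P1 (digest tree 840)
```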
  • FIG. 9 is a table of an embodiment of digest history log 900 at a service client (e.g. service clients 400 and/or UEs 130 ) during a conference 700 .
  • the digest history log 900 may be recorded by a service client U1 attached to a service proxy P1.
  • Digest history log 900 may represent a global root digest and a service client FP and may employ substantially similar notations as described in digest tree log 800 .
  • the digest history log 900 may be illustrated as entries 910 , 920 , 930 , and 940 , which may be generated after receiving FP updates from service proxy P1.
  • entry 910 may correspond to the beginning of conference 700 .
  • Entry 920 may be generated when service client U1 receives a FP update from service proxy P1 indicating the joining of service client U1.
  • Entry 930 may be generated when service client U1 receives a FP update from service proxy P1 indicating the joining of service client U3 (e.g. via service proxy P2).
  • Entry 940 may be generated when service client U1 receives a FP update from service proxy P1 indicating the joining of service client U2 (e.g. via service proxy P1).
  • FIG. 10 is a protocol diagram of an embodiment of a conference bootstrap method 1000 in a multi-tier hybrid conference service network (e.g. network 100 ).
  • Method 1000 may be implemented between service proxies P1, P2, and P3 (e.g. service proxies 300 and/or SRs 120 ), and a participant U1 (e.g. service client 400 and/or UE 130 ).
  • Method 1000 may represent a global root digest and a proxy local digest root by employing substantially similar notations as described herein above.
  • method 1000 may represent a service client's FP by employing a notation of U m →FP k for clarity, where U m may represent a service client m and FP k may represent the FP published by the service client.
  • Method 1000 may begin with a first participant U1 joining a conference.
  • participant U1 may send a connect request message to service proxy P1 to request a conference session.
  • service proxy P1 may respond by sending a connect reply message to participant U1 to complete the conference setup.
  • participant U1 may send (e.g. via a push) a join update message to service proxy P1, where the join update message may comprise the participant U1's signature profile (e.g. U 1 →FP 0 ).
  • service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333 ) at service proxy P1 according to the received join update message.
  • the FP history may comprise the following entries:
  • service proxy P1 may send a first digest update message (e.g. G 1,1 /U 1 →FP 0 ) to participant U1.
  • service proxy P1 may send (e.g. via a push) a first join update message (e.g. with updated state P 1,1 /P 2 /U 1 →FP 0 ) to service proxy P2 at step 1050 and a second join update message (e.g. with updated state P 1,1 /P 3 /U 1 →FP 0 ) to service proxy P3 at step 1060 .
  • service proxies P1, P2, and P3 may be synchronized with participant U1's joining update.
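The bootstrap exchange of method 1000 can be sketched as an ordered message trace. The helper name and trace format below are hypothetical; only the ordering (connect request, connect reply, join push, digest update, fan-out to remote proxies) follows the text.

```python
# Hedged sketch of the method 1000 bootstrap message sequence.
def bootstrap(first_client: str, local_proxy: str,
              remote_proxies: list[str]) -> list[str]:
    trace = []
    trace.append(f"{first_client}->{local_proxy}: connect request")
    trace.append(f"{local_proxy}->{first_client}: connect reply")
    trace.append(f"{first_client}->{local_proxy}: join update U1->FP0")
    trace.append(f"{local_proxy}->{first_client}: digest update G1,1/U1->FP0")
    for rp in remote_proxies:  # push the join update to each remote proxy
        trace.append(f"{local_proxy}->{rp}: join update P1,1/U1->FP0")
    return trace

msgs = bootstrap("U1", "P1", ["P2", "P3"])
```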
  • FIG. 11 is a protocol diagram of an embodiment of a conference synchronization method 1100 .
  • Method 1100 may be implemented between service proxies P1 and P2 (e.g. service proxies 300 and/or SRs 120 ) and participants U1 and U3 (e.g. service clients 400 and/or UEs 130 ) during a notification process, where participant U1 may be connected to service proxy P1 and participant U3 may be connected to service proxy P2.
  • the notification process may include join, log off, re-join, and/or other conference control notifications.
  • Method 1100 may represent a global root digest, a proxy local digest root, and a service client's FP by employing substantially similar notations as method 1000 described herein above.
  • Method 1100 may begin when a conference is in a steady state.
  • service proxy P1 and service proxy P2 may comprise a same global state n (e.g. G 1,n , G 2,n )
  • service proxy P1 may comprise a local state m (e.g. P 1,m )
  • service proxy P2 may comprise a local state k (e.g. P 2,k ).
  • Method 1100 may be suitable for synchronizing states upon an injection of a new notification message (e.g. join, leave, rejoin, and/or other control update messages), for example, from a participant U3.
  • participant U3 may send (e.g. via a push) a notification message to service proxy P2, where the notification message may comprise participant U3's signature profile (e.g. U 3 →FP j ).
  • service proxy P2 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333 ) at service proxy P2 according to the notification message.
  • the FP history may comprise the following entries:
  • service proxy P2 may send (e.g. via a push) a first digest update message (e.g. G 2,n+1 /U 3 →FP j ) to participant U3.
  • service proxy P2 may send (e.g. via a push) the notification message (e.g. with the updated state P 2,k+1 /U 3 →FP j ) to service proxy P1.
  • service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333 ) at the service proxy P1 according to the received notification message.
  • the FP history may comprise the following entries as shown below:
  • service proxy P1 may send (e.g. via a push) a second digest update message (e.g. G 1,n+1 /U 3 →FP j ) to participant U1.
  • FIG. 12 is a protocol diagram of another embodiment of a conference synchronization method 1200 .
  • Method 1200 may be implemented between service proxies P1 and P2 (e.g. service proxies 300 and/or SRs 120 ) and participants U1 and U3 (e.g. service clients 400 and/or UEs 130 ) during a notification process, where participant U1 may be connected to service proxy P1 and participant U3 may be connected to service proxy P2.
  • the notification process may include join, log off, re-join, and/or other conference control notifications.
  • Method 1200 may represent a global root digest, a proxy local digest root, and a service client's FP by employing substantially similar notations as method 1100 described herein above.
  • Method 1200 may employ a same push mechanism between participants and service proxies, but may employ a pull mechanism between service proxies.
  • Method 1200 may begin when a conference is in a steady state.
  • service proxy P1 and service proxy P2 may comprise a same global state n (e.g. G 1,n , G 2,n )
  • service proxy P1 may comprise a local state m (e.g. P 1,m )
  • service proxy P2 may comprise a local state k (e.g. P 2,k ).
  • Method 1200 may be suitable for synchronizing states upon an injection of a new notification message (e.g. join, leave, rejoin, and/or other control update messages), for example, from a participant U3.
  • service proxy P1 may send a sync interest message (e.g. an interest packet to initiate a pull process) to service proxy P2.
  • the sync interest message may indicate the current global state (e.g. G 1,n ) at service proxy P1.
  • the sync interest message may serve as an outstanding interest for a next conference update.
  • participant U3 may send (e.g. via a push) a notification message to service proxy P2, where the notification message may comprise participant U3's signature profile (e.g. U 3 →FP j ).
  • service proxy P2 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333 ) at service proxy P2 according to the notification message.
  • the FP history may comprise the following entries:
  • service proxy P2 may send (e.g. via a push) a first digest update message (e.g. G 2,n+1 /U 3 →FP j ) to participant U3.
  • service proxy P2 may detect that the sync interest message from service proxy P1 comprises an outdated global state of n, and thus may respond to the sync interest message by sending a sync response message (e.g. G 2,n+1 /U 3 →FP j ) to service proxy P1.
  • service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333 ) at the service proxy P1 according to the received notification message.
  • the FP history may comprise the following entries as shown below:
  • service proxy P1 may send (e.g. via a push) a second digest update message (e.g. G 1,n+1 /U 3 →FP j ) to participant U1.
  • the global state at service proxies P1 and P2 may be synchronized at this time.
  • each service proxy P1 and/or P2 may send another pending sync interest message to pull a next conference update after receiving a sync response message (e.g. FP updates from remote service proxies).
  • sync interest messages may be aggregated into a single message per access link (e.g. links 141 ).
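A minimal sketch of the pull mechanism of method 1200, assuming a simplified in-memory model: a proxy parks a remote proxy's outstanding sync interest and answers it only when a local notification advances its global state past the state the interest carried. Class and method names are illustrative assumptions.

```python
# Hedged model of the pull-based synchronization in method 1200.
class SyncProxy:
    def __init__(self, name: str, global_state: int):
        self.name = name
        self.global_state = global_state
        self.pending_interest = None  # (requester, state) awaiting an update

    def receive_sync_interest(self, requester: "SyncProxy") -> None:
        # Park the interest; it is answered when our state advances past it.
        self.pending_interest = (requester, requester.global_state)

    def local_notification(self, fp_update: str) -> None:
        # A pushed notification (e.g. U3->FPj) advances the global state
        # and satisfies any stale outstanding sync interest.
        self.global_state += 1
        requester, stale_state = self.pending_interest or (None, None)
        if requester is not None and stale_state < self.global_state:
            requester.global_state = self.global_state  # sync response
            self.pending_interest = None

p1 = SyncProxy("P1", global_state=5)
p2 = SyncProxy("P2", global_state=5)
p2.receive_sync_interest(p1)      # P1's outstanding interest at state 5
p2.local_notification("U3->FPj")  # U3's push triggers the sync response
```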
  • FIG. 13 is a flowchart of an embodiment of a conference recovery method 1300 .
  • Method 1300 may be implemented at a service proxy (e.g. service proxy 300 and/or SRs 120 ) and/or a service client (e.g. service client 400 and/or UE 130 ).
  • Method 1300 may be suitable for recovering conference updates from a temporary network interruption when a push mechanism (e.g. method 1100 ) is employed for conference control synchronization.
  • Method 1300 may begin when a service proxy and/or a service client recovers from a temporary network interruption (e.g. less than a few minutes), for example, due to network failure, link congestions, and/or other network faulty conditions.
  • method 1300 may wait for a notification message from a connected component.
  • the connected component for a service client may be a service proxy and the connected component for a service proxy may be a service client and/or a service proxy.
  • method 1300 may proceed to step 1320 .
  • method 1300 may determine whether there are missing updates (e.g. occurred during the temporary interruption) from the connected component. For example, method 1300 may compare a last state of the connected component indicated in the notification message to a most recent recorded state in a digest log (e.g. digest log 333 and/or 433 ). If there is no missing update (e.g. the received last state and the most recent recorded state are identical), then method 1300 may proceed to step 1330 . At step 1330 , method 1300 may update the digest log and return to step 1310 .
  • a digest log e.g. digest log 333 and/or 433
  • method 1300 may proceed to step 1340 .
  • method 1300 may send a recovery request message to the connected component, for example, indicating the most recent recorded state and the received current state (e.g. gap of missing updates).
  • method 1300 may wait for a recovery data message. Upon receiving the recovery data message, method 1300 may continue to step 1330 to update the digest log. Method 1300 may be repeated for the duration of a conference.
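The gap check at the heart of method 1300 can be sketched as a comparison between the most recent recorded state and the last state carried in a received notification. The function name and returned strings are illustrative, not from the disclosure.

```python
# Minimal sketch of the method 1300 gap check.
def handle_notification(recorded_state: int, received_last_state: int) -> str:
    if received_last_state == recorded_state:
        return "update digest log"  # no missing updates (step 1330)
    # Gap of missing updates: request recovery for the missed range
    return (f"recovery request for states "
            f"{recorded_state + 1}..{received_last_state}")

action = handle_notification(recorded_state=4, received_last_state=7)
```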
  • FIG. 14 is a flowchart of another embodiment of a conference recovery method 1400 , which may be implemented at a service proxy (e.g. service proxies 300 and/or SRs 120 ).
  • Method 1400 may be suitable for recovering conference updates from a temporary network interruption when a service proxy employs a pull mechanism (e.g. method 1200 ) for synchronizing conference updates with remote service proxies.
  • Method 1400 may begin when a service proxy recovers from a temporary network interruption (e.g. less than a few minutes), for example, due to link failure, link congestions, and/or other faulty conditions in a network. After the recovery, the service proxy may have missed one or more conference updates (e.g. more advanced global state) that occurred during the temporary network interruption.
  • method 1400 may start a timer with a predetermined wait period (e.g. randomized time period).
  • method 1400 may determine whether the timer has expired (e.g. reached the end of the wait period).
  • method 1400 may check if a digest update message is received at step 1440 . If method 1400 does not receive a digest update message, method 1400 may return to step 1420 and continue to wait for the expiration of the timer. If method 1400 receives a digest update message, method 1400 may proceed to step 1450 . At step 1450 , method 1400 may update a digest log (e.g. digest log 333 ) according to the received digest update message.
  • the digest update message may comprise the conference updates that occurred during the temporary network interruption.
  • method 1400 may proceed to step 1430 .
  • method 1400 may send a recovery sync message to request conference update recovery.
  • method 1400 may wait for a recovery update message. If method 1400 receives a recovery update message, method 1400 may proceed to step 1450 to update the digest log.
  • the recovery update message may comprise some or all of the conference updates that occurred during the temporary network interruption. It should be noted that the conference may return to a steady state (e.g. all service proxies comprise a same global state) at the end of the method 1400 .
  • method 1400 may be repeated when another network interruption occurs during the recovery process and may employ a different wait period (e.g. with exponential back-off) for the timer.
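One plausible realization of the randomized, exponentially backed-off wait period is sketched below; the base delay and cap values are assumptions, not values from the text.

```python
# Hedged sketch of the method 1400 timer: a randomized wait period,
# doubled on each repeated interruption and capped at an upper bound.
import random

def wait_period(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Randomized wait in [0, min(cap, base * 2**attempt)] seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

delays = [wait_period(a) for a in range(4)]  # successive retries back off
```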
  • a local service proxy may detect missing heartbeat messages from a first remote service proxy (e.g. due to long-term network connectivity failure between the local service proxy and the first remote service proxy) while maintaining network connection with a second remote service proxy.
  • a network partition may occur.
  • the local service proxy may employ method 1300 and/or 1400 to request to recover conference updates at the first remote service proxy from the second remote service proxy.
  • conference updates (e.g. FP updates and/or states) may be exchanged between service clients (e.g. service client 400 and/or UE 130 ) and service proxies (e.g. service proxy 300 and/or SR 120 ) during a conference notification process.
  • notifications may include join notifications, log off notifications, re-join notifications, and/or any other types of notifications.
  • a join notification process may be initiated by a service client joining a conference for the first time and may include steps, such as login authorization at a login server, publishing of a login message to the network, and/or sending of login notification.
  • upon receiving a join message from a service client, the service proxy may cache the service client's FP, update the service proxy's local digest tree (e.g. digest tree 600 ), recompute the service proxy's root digest, and push the join notification to other remote service proxies.
  • a log off notification process may be initiated by a service client intentionally leaving (e.g. sending a log off message) a conference and may include steps, such as log off authorization at a login server, publishing of log off message to the network, and/or sending of log off notification.
  • the service proxy may delete a corresponding leaf node in the service proxy's digest tree, recompute the service proxy's root digest, and push the log off notification to other remote service proxies.
  • the conference session may be closed after the log off process, for example, the service client may send a close request message to the service proxy and the service proxy may respond with a close reply message.
  • a re-join notification process may be initiated by a service client intentionally leaving a conference (e.g. a log off notification) and then subsequently re-joining the conference.
  • the service proxy may preserve some information (e.g. FP updates and/or states) of the leaving service client for a predetermined period of time (e.g. re-join timeout interval).
  • the service proxy may resume the last state of the service client just prior to the log off process.
  • the joining process may be substantially similar to a first-time join process.
  • a recovery process may occur after a network (e.g. network 100 ) experiences a temporary interruption and/or disconnection.
  • each conference component (e.g. a sync proxy 336 and/or a sync client 436 ) may maintain digest log states (e.g. at digest log 333 and/or 433 ) and may continue with the last state (e.g. prior to the interruption) after recovering from the interruption.
  • each component may detect missing updates (e.g. occurred during the interruption) from a connected component and may request digest log history from the connected component (e.g. via method 1300 and/or 1400 ).
  • a disconnection process may be performed at the service proxy and/or the service client.
  • the service proxy may detect the failure and may disconnect the service client by employing substantially similar mechanisms as in a log off notification process.
  • other remote service proxies may detect the network failure and each service proxy may update the service proxy's digest log, for example, by deleting the node that corresponds to the faulty service proxy and re-computing the global state.
  • conference controls and/or signaling exchanged between service proxies and/or service clients in a control plane may include session setup and/or close messages, service-related synchronization messages, heartbeat messages, and/or recovery messages.
  • the session setup and/or close messages, service-related messages, and/or recovery messages may be initiated and/or generated by a FP processor (e.g. FP processor 331 and/or 431 ) at a sync proxy (e.g. sync proxy 336 ) and/or a sync client (e.g. sync client 436 ).
  • the messages may be in the form of an interest packet and/or a data packet structured according to the ICN protocol.
  • an interest packet may be employed for sending a notification message and may employ a push mechanism.
  • Some interest packets may be followed by data packets (e.g. response messages).
  • FIG. 15 is a schematic diagram of an embodiment of an ICN protocol interest packet 1500 .
  • the packet 1500 may comprise a name field 1510 and a nonce field 1520 .
  • the name field 1510 may be a name-based identifier that identifies an information object.
  • the nonce field 1520 may be a numerical value that is employed for security, authentication, and/or encryption. For example, the nonce field 1520 may comprise a random number.
  • the packet 1500 may be sent via a push mechanism and/or a pull mechanism.
  • conference service control messages such as session setup and/or close messages, service-related notification messages, heartbeat messages, and/or recovery messages, may be sent as ICN protocol interest packets 1500 .
  • the name field 1510 in each packet 1500 may begin with a routing prefix (e.g. <Routing-Prefix>), which may be name-based and may identify a recipient of the interest packet 1500 .
  • the routing prefixes (e.g. <Routing-Prefix>) may comprise components, such as a ProxyIDR, an ISP, and/or a DeviceID, as shown in Table 1.
  • the ProxyIDR may be a remote sync proxy ID that identifies the sync proxy (e.g. sync proxy 336 ) situated in the SR (e.g. SR 120 may host one or more sync proxies).
  • the ISP may be the name of an ISP that provides Internet service to a conference participant (e.g. UE 130 ).
  • the DeviceID may be a UE ID or a sync client ID that identifies the conference participant (e.g. sync client 436 situated in the UE 130 ).
  • a sync client may send session setup and/or close messages to a sync proxy requesting to connect to and/or disconnect from a conference, respectively.
  • an interest packet for a session setup and/or close message may comprise a name field 1510 as shown below:
  • Routing-Prefix may be the routing prefix for a sync proxy as shown in Table 1 herein above.
  • the ServiceID, ClientID, and Msg-Type may indicate information as shown below:
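A hedged sketch of assembling such a name field from its components: a routing prefix for the sync proxy followed by ServiceID, ClientID, and Msg-Type. The component values and their exact ordering below are made-up examples, not taken from the disclosure.

```python
# Illustrative construction of a session setup/close interest name.
def session_name(routing_prefix: str, service_id: str,
                 client_id: str, msg_type: str) -> str:
    """Join the routing prefix and name components with '/' separators."""
    return "/".join([routing_prefix.rstrip("/"), service_id,
                     client_id, msg_type])

# Hypothetical routing prefix and component values.
name = session_name("/ISP/SR-ID/ProxyIDR", "chat-42", "U1", "connect")
```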
  • a sync client may send notification messages to a sync proxy requesting to join, leave, and/or re-join a conference, and/or other notification information.
  • an interest packet for a notification message from a sync client to a sync proxy may comprise a name field 1510 as shown below:
  • Routing-Prefix may be the routing prefix for a sync proxy as shown in Table 1 herein above.
  • the ServiceID, Msg-Type, dg, and User-FP may indicate information as shown below:
  • User-FP may be generated at a UE (e.g. UE 130 ) by an application (e.g. a chat application) situated in an application pool (e.g. application pool 411 ) at a service client (e.g. service client 400 ).
  • the User-FP may comprise components, such as the ISP, SR-ID, ServiceID, and Service-AccountID, where the Service-AccountID may correspond to the UE account ID in the ISP network.
  • msg-Seq may include the participant's signature information, credential information, security parameters, and/or an associating update sequence number.
  • the update sequence number may be employed for identifying the User-FP content.
  • a local sync proxy may send notification messages to a remote sync proxy to update the remote sync proxy of the joining, leaving, and/or re-joining of sync clients, and/or other sync clients' published information.
  • an interest packet for a notification message from a local sync proxy to a remote sync proxy may comprise a name field 1510 as shown below:
  • Routing-Prefix may be the routing prefix for a remote sync proxy as shown in Table 1 herein above.
  • the ServiceID, Msg-Type, ProxyID, dp_pre, dp_curr, and User-FP may indicate information as shown below:
  • a local sync proxy may send notification messages to a sync client (e.g. unicast) to update the sync client of the joining, leaving, and/or re-joining of sync clients attached to the remote sync proxy.
  • an interest packet for a notification message from a local sync proxy to an attached sync client may comprise a name field 1510 as shown below:
  • Routing-Prefix may be the routing prefix for a sync client as shown in Table 1 herein above.
  • the ServiceID, Flag, dr_pre, dr_curr, and User-FP may indicate information as shown below:
  • a local service proxy may send a sync interest packet as an outstanding request such that the local service proxy may receive a next conference update from a remote sync proxy.
  • an interest packet for a sync interest message may comprise a name field 1510 as shown below:
  • ServiceID and dg_curr may indicate information as shown below:
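The outstanding sync interest above follows a pull model: the interest carries the local proxy's current global state (dg_curr) and remains pending until the remote state advances past it. A minimal in-memory sketch, with hypothetical state names standing in for the ICN exchange:

```python
# Sketch of the pull-style sync described above. The remote proxy answers an
# outstanding sync interest only once its own global state differs from the
# dg_curr carried in the interest; otherwise the interest stays pending.
# The dictionary layout is an assumption for illustration.

def answer_sync_interest(remote_state, dg_curr):
    """Return the remote proxy's newer state, or None if nothing to report."""
    if remote_state["dg"] != dg_curr:
        return {"dg_new": remote_state["dg"], "fp": remote_state["last_fp"]}
    return None  # interest stays pending until the next conference update

remote = {"dg": "d5", "last_fp": "alice#7"}
assert answer_sync_interest(remote, "d5") is None          # up to date
assert answer_sync_interest(remote, "d4")["dg_new"] == "d5"  # update pushed back
```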
  • Heartbeat messages may be sent by a sync proxy and/or a sync client to indicate liveness (e.g. functional indicator and/or connectivity).
  • a local sync proxy may send heartbeat messages to a remote sync proxy as well as connected sync clients and a sync client may send heartbeat messages to a connected sync proxy.
  • an interest packet for a heartbeat message may comprise a name field 1510 as shown below:
  • Routing-Prefix may vary depending on the intended recipient as shown in Table 1 herein above.
  • the OriginatorID, Flag, and sequence_no may indicate information as shown below:
  • heartbeat messages may be sent periodically and/or driven by predetermined events.
  • a recipient of a heartbeat message may send a confirmation message (e.g. as a data packet as discussed more fully below).
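A periodic heartbeat with an increasing sequence number, as described above, can be sketched as follows (the message layout and field names are assumptions):

```python
# Sketch: heartbeat generation with a monotonically increasing sequence
# number, sent by a sync proxy or sync client to indicate liveness.
# Message layout is illustrative only.

def heartbeats(originator_id, count):
    """Yield `count` heartbeat messages for `originator_id`."""
    for seq in range(1, count + 1):
        yield {"originator": originator_id, "msg": "heartbeat", "seq_no": seq}

msgs = list(heartbeats("proxy-1", 3))
assert [m["seq_no"] for m in msgs] == [1, 2, 3]
```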
  • a recovery process may refer to a network recovery subsequent to a temporary network interruption at a sync client and/or a sync proxy.
  • the sync client and/or the sync proxy may miss updates (e.g. notification messages) from a corresponding connected component.
  • the sync client and/or the sync proxy may detect missed updates upon receiving notification messages from a corresponding connected component. For example, when a local service proxy employs a push mechanism for conference update synchronization with a remote service proxy, the local service proxy may detect missed notifications from the remote sync proxy by determining whether the dp_pre in a notification message received from the remote sync proxy is identical (e.g. no gap) to the local state logged in the last entry associated with the remote service proxy.
  • a sync client may detect missed notifications from a sync proxy by determining whether the dg_pre in a notification message received from the sync proxy is identical (e.g. no gap) to the last global state logged at the sync client.
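The gap check described above reduces to comparing the previous-state digest carried in an incoming notification against the state last logged for that sender; a minimal sketch, assuming string digests:

```python
# Sketch: detecting missed notifications. A receiver logs the sender's last
# known state; if the previous-state digest (dp_pre or dg_pre) carried in a
# new notification does not match that logged state, updates were missed
# and recovery should be triggered. Digest values are illustrative.

def has_gap(last_logged_state, prev_state_in_notification):
    """True when updates were missed (the notification does not extend the log)."""
    return last_logged_state != prev_state_in_notification

assert has_gap("d3", "d3") is False  # contiguous: notification extends the log
assert has_gap("d3", "d4") is True   # the d3 -> d4 notification was missed
```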
  • an interest packet for a recovery message may comprise a name field 1510 as shown below:
  • Routing-Prefix may vary depending on the intended recipient as shown in Table 1 herein above.
  • the ServiceID, Msg-Type, digest_last, and digest_new may indicate information as shown below:
  • ServiceID: Service name (e.g. chatroom-name); Msg-Type: Recovery; digest_last: Last logged global state dg or local state dp; digest_new: Most recent received global state dg or local state dp.
  • digest_last and digest_new may vary depending on the sender and/or the recipient.
  • the digest_last and digest_new may refer to the global state dg.
  • the digest_last and digest_new may refer to the remote sync proxy's local state dp_n.
  • the digest_last and digest_new may indicate the missing digest log history.
  • FIG. 16 is a schematic diagram of an embodiment of an ICN protocol data packet 1600 .
  • the packet 1600 may comprise a name field 1610 and a data field 1620 .
  • the name field 1610 may be substantially similar to the name field 1510 described herein above.
  • the data field 1620 may comprise content of message sequences and may vary depending on the message type; for example, some messages may comprise additional signature profile and/or credential information.
  • conference service control messages such as session setup and/or close response messages and/or recovery data messages, may be sent as ICN protocol data packets 1600 .
  • a sync proxy may respond to a sync client by sending a session setup response and/or a session close response.
  • a data packet for a session setup and/or close response message may comprise a name field 1610 substantially similar to the name field in a session setup and/or close interest described herein above.
  • the data field 1620 in a data packet for the session setup and/or close response may include a global state (e.g. dg) at the sync proxy and/or an acknowledgement to the requested session setup and/or close.
  • a sync proxy may respond to a connected component's recovery request message by sending a digest log history.
  • the depth (e.g. number of log entries) of the digest log may be determined by the digest_last and digest_new indicated in a recovery interest packet as described herein above and/or the depth in the cache as maintained by a responding component.
  • a data packet for a recovery response message may comprise a name field 1610 substantially similar to the name field in a recovery interest described herein above.
  • the data field 1620 in a data packet for the recovery response may include a history of the FPs updated between the digest_last and digest_new indicated in the name field 1610 .
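Building the recovery response above amounts to slicing the digest log between digest_last and digest_new; a sketch under the assumption that the log is an ordered list of (digest, FP) entries:

```python
# Sketch: extracting the FP history for a recovery response. The responding
# component returns the FPs recorded after digest_last, up to and including
# digest_new. The list-of-dicts log layout is an assumption for illustration.

def recovery_history(digest_log, digest_last, digest_new):
    """Return FPs for entries after digest_last up to and including digest_new."""
    digests = [entry["digest"] for entry in digest_log]
    start, end = digests.index(digest_last), digests.index(digest_new)
    return [entry["fp"] for entry in digest_log[start + 1 : end + 1]]

log = [{"digest": "d1", "fp": "alice#1"},
       {"digest": "d2", "fp": "bob#1"},
       {"digest": "d3", "fp": "alice#2"}]
assert recovery_history(log, "d1", "d3") == ["bob#1", "alice#2"]
```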
  • a sync proxy may respond to an outstanding sync interest message from a remote sync proxy by sending a sync data response message (e.g. global state dg_curr in the sync interest message is out of date).
  • the sync proxy may send a sync data response message comprising the updated global state (e.g. dg_new) and an FP corresponding to the global state transition from dg_curr to dg_new.
  • R = R_l + k*(R_u − R_l), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . , 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent.
  • any numerical range defined by two R numbers as defined in the above is also specifically disclosed.

Abstract

A network element (NE) comprising a memory configured to store a digest log for a conference, a receiver configured to receive a first message from a first of a plurality of participants associated with the NE, wherein the first message comprises a signature profile of the first participant, a processor coupled to the receiver and the memory and configured to track a state of the conference by performing a first update of the digest log according to the first message, and a transmitter coupled to the processor and configured to send a second message to a first of a plurality of service proxies that serve the conference, wherein the second message indicates the updated digest log.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application claims priority to U.S. Provisional Patent Application 61/824,656, filed May 17, 2013 by Guoqiang Wang, et al., and entitled “Multitier “Push” Service Control for Virtual Whiteboard Conference Over Large Scale ICN Architecture”, and U.S. Provisional Patent Application 61/984,505, filed Apr. 25, 2014 by Asit Chakraborti, et al., and entitled “Multitier “Push” Service Control for Virtual Whiteboard Conference Over Large Scale ICN Architecture”, both of which are incorporated herein by reference as if reproduced in their entirety.
  • STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • Not applicable.
  • REFERENCE TO A MICROFICHE APPENDIX
  • Not applicable.
  • BACKGROUND
  • Virtual conferencing may refer to a service that allows conference events and/or data to be shared and/or exchanged simultaneously with multiple participants located in geographically distributed networking sites. The service may allow participants to interact in real-time and may support point-to-point (P2P) communication (e.g. one sender to one receiver), point-to-multipoint (P2MP) communication (e.g. one sender to multiple receivers), and/or multipoint-to-multipoint (MP2MP) communication (e.g. multiple senders to multiple receivers). Some examples of virtual conferencing may include chat rooms, E-conferences, and virtual white board (VWB) services, where participants may exchange audio, video, and/or data over the Internet. Some technical challenges in virtual conferencing may include real-time performance (e.g. real-time data exchanges among multi-parties), scalability (e.g. with one thousand to ten thousand (10K) participants), and interactive communication (e.g. MP2MP among participants, participants with simultaneous dual role as subscriber and publisher) support.
  • SUMMARY
  • In one embodiment, the disclosure includes a network element (NE) comprising a memory configured to store a digest log for a conference, a receiver configured to receive a first message from a first of a plurality of participants associated with the NE, wherein the first message comprises a signature profile of the first participant, a processor coupled to the receiver and the memory and configured to track a state of the conference by performing a first update of the digest log according to the first message, and a transmitter coupled to the processor and configured to send a second message to a first of a plurality of service proxies that serve the conference, wherein the second message indicates the updated digest log.
  • In one embodiment, the disclosure includes a method for synchronizing service controls for a conference at a local service proxy in an Information Centric Networking (ICN) network, the method comprising receiving a first message from a first of a plurality of participants associated with the service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received by employing an ICN content name based routing scheme, tracking a state of the conference by performing a first update for a digest log according to the first message, and sending a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
  • In yet another embodiment, the disclosure includes a computer program product for use by a local service proxy serving a conference in an ICN network, wherein the computer program product comprises computer executable instructions stored on a non-transitory computer readable medium such that when executed by a processor cause the local service proxy to receive a first message from a first of a plurality of participants associated with the local service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received via an ICN content name based routing scheme, track a state of the conference by performing a first update for a digest log according to the first message, wherein performing the first update comprises recording the first participant's signature profile in the digest log, and send a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
  • These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
  • FIG. 1 is a schematic diagram of an embodiment of a multi-tier hybrid conference service network.
  • FIG. 2 is a schematic diagram of an embodiment of a network element (NE), which may act as a node in a multi-tier hybrid conference service network.
  • FIG. 3 is a schematic diagram of an embodiment of an architectural view of an Information Centric Networking (ICN)-enabled service proxy.
  • FIG. 4 is a schematic diagram of an embodiment of an architectural view of an ICN-enabled user equipment (UE).
  • FIG. 5 is a schematic diagram of an embodiment of a hierarchical view of a multi-tier hybrid conference service network.
  • FIG. 6 is a schematic diagram of an embodiment of a digest tree at a service proxy.
  • FIG. 7 is a schematic diagram of an embodiment of a conference formation.
  • FIG. 8 is a schematic diagram of an embodiment of a digest tree log at a service proxy.
  • FIG. 9 is a table of an embodiment of a digest history log at a service client.
  • FIG. 10 is a protocol diagram of an embodiment of a conference bootstrap method.
  • FIG. 11 is a protocol diagram of an embodiment of a conference synchronization method.
  • FIG. 12 is a protocol diagram of another embodiment of a conference synchronization method.
  • FIG. 13 is a flowchart of an embodiment of a conference recovery method.
  • FIG. 14 is a flowchart of another embodiment of a conference recovery method.
  • FIG. 15 is a schematic diagram of an embodiment of an ICN protocol interest packet.
  • FIG. 16 is a schematic diagram of an embodiment of an ICN protocol data packet.
  • DETAILED DESCRIPTION
  • It should be understood at the outset that, although an illustrative implementation of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
  • Conference applications and/or systems may support real-time information and/or media data exchange among multiple parties located in a distributed networking environment. Some conference applications and/or systems may be implemented over a host-to-host Internet Protocol (IP) communication model and may duplicate meeting traffic over Wide Area Network (WAN) links. For example, in a server-centric model, a central server may control and manage a conference, process data from participants (e.g. meeting subscribers), and then redistribute the data back to the participants. The server-centric model may be simple for control management, but the duplication and/or redistribution of conference data may lead to high data traffic concentration at the server. For example, the data traffic in a server-centric model may be on the order of N² (O(N²)), where N may represent the number of participants. Thus, the server-centric model may not be suitable for large scale conferences (e.g. with 1000 to 10K participants).
  • Some conference service technologies may employ a Content Delivery Networking (CDN) model and/or a P2P content distribution networking model (e.g. multi-server architecture) to reduce the high data traffic in a server-centric model. However, CDN models and/or P2P content distribution networking models may be over-the-top (OTT) solutions, where content (e.g. audio, video, and/or data) may be delivered over the Internet from a source to end users without network operators being involved in the control and/or distribution of the content (e.g. content providers operate independently from network operators). As such, networking optimizations and/or cross-layer optimizations may not be easily performed in CDN models and/or P2P content distribution networking models, and thus may lead to inefficient bandwidth utilization. In addition, CDN models and/or P2P content distribution networking models may not support MP2MP communication and may not be suitable for real-time interactive communication.
  • Some other conference service technologies, such as Chronos, may employ a Named Data Networking (NDN) model to improve bandwidth utilization and reduce traffic load by leveraging NDN features, such as sharable and distributed in-net storage and name-based routing mechanisms. NDN may be a receiver driven, data centric communication protocol, in which data flows through a network when requested by a consumer. The data access model in NDN may be referred to as a pull-based model. However, conference events and/or updates may be nondeterministic as conference participants may publish meeting updates at any time during a conference. In order to access nondeterministic conference events in a pull-based model, participants may actively query conference events. For example, every participant in Chronos may periodically broadcast a query to request meeting updates from other participants. Thus, control signaling overheads in Chronos may be significant. In addition, the coupling of the data plane and the control plane in Chronos may lead to complex support for simultaneous data updates and/or recovery. Thus, Chronos may not be suitable for supporting real-time interactive large scale conferences.
  • ICN architecture is a type of network architecture that focuses on information delivery. ICN architecture may also be known as content-aware, content-centric, or data oriented networking architecture. ICN models may shift the IP communication model from a host-to-host model to an information-object-to-object model. The IP host-to-host model may address and identify data by storage location (e.g. host IP address), whereas the information-object-to-object model may employ a non-location based addressing scheme that is content-based. Information objects may be the first class abstraction for entities in an ICN communication model. Some examples of information objects may include content, data streams, services, user entities, and/or devices. In an ICN architecture, information objects may be assigned with non-location based names, which may be used to address the information objects, decoupling the information objects from locations. Routing to and from the information objects may be based on the assigned names. ICN architecture may provision for in-network caching, where any network device or element may serve as a temporary content server, which may improve performance of content transfer. The decoupling of information objects from location and the name-based routing in ICN may allow mobility to be handled efficiently. ICN architecture may also provision for security by appending security credentials to data content instead of securing the communication channel that transports the data content. As such, conference applications may leverage ICN features, such as name-based routing, security, multicasting, and/or multi-path routing, to support real-time interactive large scale conferences.
  • Disclosed herein is hybrid conference service control architecture which may employ a combination of push and pull mechanisms for synchronizing conference updates. The hybrid conference service control architecture may comprise a plurality of distributed service proxies serving a plurality of conference participants. Each service proxy may serve a group of participants that is associated with the service proxy. The service proxy may synchronize and consolidate conference updates with remote service proxies serving the same conference and distribute the consolidated conference updates to the associated participants. Conference updates may include participants' fingerprints (FPs), which may include signatures and/or credentials of the participants, and/or update sequence numbers associated with the FPs. The synchronization of control flows between a participant and a service proxy may employ a push mechanism. However, the synchronization of control flows among the service proxies may employ a push and/or a pull mechanism. In an embodiment, each service proxy and each participant may maintain a digest log to track conference updates. Each digest log may comprise a snapshot of a current localized view (e.g. in the form of a digest tree) of the conference and a history of participants' FPs. For example, each service proxy may comprise a proxy digest tree with a snapshot view of the associated participants (e.g. local digest tree) and other remote service proxies serving the conference and a history of FP updates corresponding to the proxy digest tree. Each participant may comprise a digest tree with the root of the proxy digest tree (e.g. global root digest) and a history of FP updates corresponding to the global root digest. Thus, synchronization may operate according to a child-parent relationship between a service proxy and an associated participant. 
The disclosed hybrid conference service control architecture may leverage native ICN in-network storage and name-based routing. The disclosed hybrid conference service control architecture may provide control plane and data plane separation. The disclosed hybrid conference service control architecture may provide efficient synchronization of conference updates and fast recovery from network interrupts. The disclosed hybrid conference service control architecture may be suitable for supporting real-time interactive large scale conferences.
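The digest-tree relationship described above can be sketched as follows, where a proxy's local digest is computed over its participants' latest FP sequence numbers and the global root digest is computed over the proxy digests; the hashing scheme and naming are assumptions for illustration:

```python
# Sketch of the digest-tree idea: each proxy folds its participants' latest
# FP sequence numbers into a local digest (dp), and the proxy digests are
# folded together into a global root digest (dg). The SHA-256 scheme and
# 8-hex-digit truncation are assumptions for illustration only.

import hashlib

def digest(items):
    """Order-independent digest over (name, value) pairs."""
    h = hashlib.sha256()
    for name, value in sorted(items):
        h.update(f"{name}:{value};".encode())
    return h.hexdigest()[:8]

local = digest([("alice", 7), ("bob", 3)])             # proxy's local digest dp
root = digest([("proxy-1", local), ("proxy-2", "0")])  # global root digest dg
assert digest([("bob", 3), ("alice", 7)]) == local     # insertion order irrelevant
```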
  • FIG. 1 is a schematic diagram of an embodiment of a multi-tier hybrid conference service network 100. Network 100 may comprise a plurality of conference service routers (SRs) 120 and a plurality of UEs 130. The plurality of SRs 120 may be situated in one or more networks 170. For example, network 170 may be an edge cloud network 170 that employs edge computing to provide widely dispersed distributed services and/or any other network. Each UE 130 may be connected to a network 180 via links 142 (e.g. wireless and/or wireline links). In an embodiment, network 180 may be any Layer two (L2) access network (e.g. wireless access network, wireline access network, etc.), where L2 may be a data link layer in an Open System Interconnection (OSI) model. The networks 170 and 180 may be interconnected through an Internet network 150 via links 141. The Internet network 150 may be formed from one or more interconnected local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), etc. In an embodiment, the Internet network 150 may comprise an Internet Service Provider (ISP) network. The links 141 may include physical connections, such as fiber optic links, electrical links, and/or logical connections. Each of the networks 150, 170, and/or 180 may comprise a plurality of NEs 140, which may be routers, gateways, switches, edge routers, edge gateways, and/or any network devices suitable for routing and communicating packets as would be appreciated by one of ordinary skill in the art. It should be noted that networks 150, 170, and/or 180 may be alternatively configured and the SRs 120 and the UEs 130 may be alternatively situated in the network 100 as determined by a person of ordinary skill in the art to achieve the same functionalities.
  • The SRs 120 may be routers, virtual machines (VMs), and/or any network devices that may be configured to synchronize controls and/or signaling for a large scale conference (e.g. chat rooms, E-conference services, Virtual White Board (VWB) services, etc. with about 1000 to about 10K participants) among each other and with a plurality of conference participants, where such participants may participate in the conference via UEs 130. For example, each SR 120 may act as a conference proxy and may be referred to as a service proxy. In an embodiment, an SR 120 may host one or more VMs, where each VM may act as a service proxy for a different conference. It should be noted that each SR 120 may serve a different group of conference participants.
  • Each UE 130 may be an end user device, such as a mobile device, a desktop, a cellphone, and/or any network device configured to participate in one or more large scale conferences. For example, a conference participant may participate in a conference by executing a conference application on a UE 130. The conference participant may request to participate in a particular conference by providing an FP and a conference name. The conference participant may also subscribe and/or publish data for the conference. In network 100, each UE 130 may be referred to as a service client and may synchronize conference controls and/or signaling with a service proxy.
  • In an embodiment, network 100 may be an ICN-enabled network, which may employ ICN name-based routing, security, multicasting, and/or multi-path routing to support large scale conference applications. Network 100 may comprise a control plane separate from a data plane. For example, network 100 may employ a two-tier (e.g. proxy-client) architecture for conference controls and signaling in the control plane. The two-tier architecture may comprise a proxy layer and a client layer. The proxy layer may include a plurality of service proxies situated in a plurality of SRs 120 and the client layer may include a plurality of service clients situated in a plurality of UEs 130.
  • In an embodiment, control paths 191 and 192 may be logical paths dedicated to exchanging conference controls and signaling. A service proxy situated in an SR 120 may exchange conference controls and signaling with remote service proxies situated in other SRs 120 serving the conference via control path 191. In addition, each service proxy may exchange conference controls and signaling with associated service clients situated in UEs 130 via control path 192. It should be noted that the service proxies may participate in control plane functions, but may not participate in data plane functions. As such, data communications (e.g. audio, video, rich text exchanges) among the service clients may be independent from the conference controls.
  • In an embodiment, a push mechanism may be employed for synchronizing controls (e.g. FPs, other signed information, events, etc.) in a conference between a service proxy and a service client, as well as among service proxies. A push mechanism may refer to a sender initiated transmission of information without a request from an interested recipient as opposed to an ICN protocol pull mechanism where an interested recipient may pull information from an information source. For example, a service client may initiate and push a FP update to a service proxy serving the service client. Upon receiving the FP update, the service proxy may consolidate all received FP updates into a first proxy update and may push the first proxy update to other remote service proxies serving the same conference. The service proxy may also receive a second proxy update from one of the other remote service proxies and may push the second proxy update to participants served by the service proxy. The push mechanism may enable real-time or nearly real-time communications, which may be a performance factor for conference services.
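The push flow described above can be sketched with in-memory objects standing in for the ICN interests; the class and method names below are hypothetical:

```python
# Sketch of the push mechanism: a proxy receives an FP update from one of
# its clients, pushes it to the remote proxies serving the same conference,
# and each remote proxy fans the update out to its own attached clients.
# In-memory lists stand in for the ICN exchange; names are illustrative.

class SyncProxy:
    def __init__(self, name):
        self.name, self.peers, self.clients = name, [], []

    def on_client_update(self, fp):
        for peer in self.peers:          # push the consolidated update upstream
            peer.on_proxy_update(self.name, fp)

    def on_proxy_update(self, origin, fp):
        for client in self.clients:      # fan the remote update out to clients
            client.append((origin, fp))

p1, p2 = SyncProxy("p1"), SyncProxy("p2")
p1.peers, p2.peers = [p2], [p1]
inbox = []                               # stands in for a client of p2
p2.clients = [inbox]
p1.on_client_update("alice#1")
assert inbox == [("p1", "alice#1")]
```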
  • In another embodiment, a service client and a service proxy may synchronize conference controls by employing substantially similar push mechanisms as described herein above. However, synchronizations between service proxies may employ a pull mechanism. For example, a first service proxy may send an outstanding synchronizing (sync) interest to a second service proxy, where the sync interest may comprise a most recent global conference view at the first service proxy. When the second proxy receives a FP update from a client served by the second proxy, the second proxy may consolidate the FP update and update a global conference view at the second proxy. The second proxy may detect that the first service proxy's global conference view indicated in the sync interest is out of date, and thus may send a sync response to the first service proxy indicating the latest updated global conference view. It should be noted that after a service client receives updated FPs, the service client may fetch data over native ICN in-network storage with name-based routing. In some embodiments, conference data (e.g. audio, video, and/or data) may be exchanged among the participants by employing the pulling model in the ICN protocol.
  • FIG. 2 is a schematic diagram of an embodiment of an NE 200, which may act as a service proxy (e.g. situated in an SR 120) and/or a service client (e.g. situated in a UE 130) by implementing any of the schemes described herein. NE 200 may be implemented in a single node or the functionality of NE 200 may be implemented in a plurality of nodes. One skilled in the art will recognize that the term NE encompasses a broad range of devices of which NE 200 is merely an example. NE 200 is included for purposes of clarity of discussion, but is in no way meant to limit the application of the present disclosure to a particular NE embodiment or class of NE embodiments. At least some of the features/methods described in the disclosure may be implemented in a network apparatus or component such as an NE 200. For instance, the features/methods in the disclosure may be implemented using hardware, firmware, and/or software installed to run on hardware. As shown in FIG. 2, the NE 200 may comprise transceivers (Tx/Rx) 210, which may be transmitters, receivers, or combinations thereof. A Tx/Rx 210 may be coupled to a plurality of downstream ports 220 for transmitting and/or receiving frames from other nodes and a Tx/Rx 210 may be coupled to a plurality of upstream ports 250 for transmitting and/or receiving frames from other nodes, respectively. A processor 230 may be coupled to the Tx/Rx 210 to process the frames and/or determine which nodes to send the frames to. The processor 230 may comprise one or more multi-core processors and/or memory devices 232, which may function as data stores, buffers, etc. Processor 230 may be implemented as a general processor or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs).
Processor 230 may comprise a conference service control management module 233, which may implement a conference bootstrap method 1000, conference synchronization methods 1100 and/or 1200, conference recovery methods 1300 and/or 1400, and/or any other MP2MP related communication functions discussed herein. In an alternative embodiment, the conference service control management module 233 may be implemented as instructions stored in the memory devices 232 (e.g. a computer program product), which may be executed by processor 230. The memory device 232 may comprise a cache for temporarily storing content, e.g., a Random Access Memory (RAM). Additionally, the memory device 232 may comprise a long-term storage for storing content relatively longer, e.g., a Read Only Memory (ROM). For instance, the cache and the long-term storage may include dynamic random access memories (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof.
  • It is understood that by programming and/or loading executable instructions onto the NE 200, at least one of the processor 230 and/or memory device 232 are changed, transforming the NE 200 in part into a particular machine or apparatus, e.g., a multi-core forwarding architecture, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and numbers of units to be produced rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an ASIC that hardwires the instructions of the software. In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
  • FIG. 3 is a schematic diagram of an embodiment of an architectural view of an ICN-enabled service proxy 300, which may be situated in an SR (e.g. SR 120) in a multi-tier hybrid conference service network (e.g. network 100). The service proxy 300 may comprise an application layer 310, a service application programming interface (API) to application layer 320, a service layer 330, an ICN layer 340, a Layer two (L2)/Layer three (L3) layer 350, a service user-to-network interface (S-UNI) layer 361, and a service network-to-network interface (S-NNI) layer 362.
  • The application layer 310 may comprise an application pool 311, which may comprise a plurality of applications, such as chat, VWB, and/or other applications. The service API to application layer 320 may comprise a set of APIs for interfacing between the application layer 310 and the service layer 330. The APIs may be well-defined function calls and/or primitives comprising input parameters, output parameters, and/or return parameters.
  • The service layer 330 may comprise a sync proxy 336 and other service modules 335. For example, the sync proxy 336 may serve a conference (e.g. chat, VWB, etc.) with a plurality of other sync proxies and may act as a control proxy for a group of conference participants and/or service clients (e.g. situated in UEs 130). The other service modules 335 may manage and/or control other services. The ICN layer 340 may comprise ICN protocol layer modules, which may include a content store (CS) (e.g. for caching interest and/or data), a forwarding information base (FIB) (e.g. name-based routing look up), and/or a pending interest table (PIT) (e.g. records of forwarded interests). The L2/L3 layer 350 may comprise networking protocol stack modules, which may include data and/or address encoding and/or decoding for network transmissions. The L2 layer and the L3 layer may be referred to as the data link layer and the network layer in the OSI model. The S-UNI layer 361 may interface (e.g. signaling functions between networks and users) with one or more conference participants (e.g. situated in UEs 130) situated in the network. The S-NNI layer 362 may interface (e.g. signaling functions between networks) with one or more SRs (e.g. SRs 120) situated in the network.
  • The sync proxy 336 may communicate with the other remote service proxies to synchronize FPs of conference participants. The sync proxy 336 may comprise a FP processor 331, a heartbeat signal processor 332, a digest log 333, and an application cache 334. The FP processor 331 may receive FP updates (e.g. FPs of participants) from service clients and/or remote service proxies. The FP processor 331 may also send FP updates to service clients and/or remote service proxies. The FP processor 331 may track and maintain FP updates received from the service clients and/or the remote service proxies as discussed more fully below. The FP updates may be sent and/or received in the form of notification messages and/or sync messages via the ICN layer 340, the L2/L3 layer 350, and/or the S-NNI layer 362. It should be noted that the notification messages and/or sync messages may include ICN interest packets and/or ICN data packets, which may be handled according to ICN protocol (e.g. forwarded according to a FIB and/or cached in a CS).
  • The heartbeat signal processor 332 may monitor and exchange liveliness indications (e.g. functional and connectivity statuses) with the remote service proxies and/or the service clients attached to the service proxy 300. For example, the heartbeat signal processor 332 may generate and send heartbeat indication signals (e.g. periodically and/or event-driven) to the remote service proxies and/or the attached service clients. The heartbeat signal processor 332 may also listen to heartbeat indication signals from the remote service proxies and the attached service clients. In some embodiments, the heartbeat signal processor 332 may send a heartbeat response signal to confirm the reception of a heartbeat indication signal. When the heartbeat signal processor 332 detects missing heartbeat indication signals and/or heartbeat response signals from a remote service proxy and/or an attached service client over a duration that exceeds a predetermined timeout interval, the heartbeat signal processor 332 may send a network failure signal to the application layer 310 to notify the application serving the faulty service client and/or the faulty remote service proxy of the network failure. The heartbeat signals may be sent and/or received in the form of heartbeat messages via the ICN layer 340, the L2/L3 layer 350, and the S-NNI layer 362. It should be noted that the heartbeat messages may include ICN interest packets and/or ICN data packets, which may be handled according to ICN protocol (e.g. forwarded according to a FIB and cached in a CS).
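The timeout detection described above may be sketched as follows. This is a minimal illustration, not part of the disclosure; the class, method, and parameter names are assumptions chosen for clarity.

```python
import time

class HeartbeatMonitor:
    """Sketch of the heartbeat timeout check performed by processor 332."""

    def __init__(self, timeout_s, on_failure):
        self.timeout_s = timeout_s      # predetermined timeout interval
        self.on_failure = on_failure    # network failure signal to app layer
        self.last_seen = {}             # peer id -> time of last heartbeat

    def record_heartbeat(self, peer, now=None):
        # Record a heartbeat indication (or response) signal from a peer.
        self.last_seen[peer] = time.monotonic() if now is None else now

    def check(self, now=None):
        # Flag peers whose heartbeats have been missing past the timeout.
        now = time.monotonic() if now is None else now
        failed = [p for p, t in self.last_seen.items()
                  if now - t > self.timeout_s]
        for peer in failed:
            self.on_failure(peer)       # notify the serving application
        return failed

failures = []
mon = HeartbeatMonitor(timeout_s=3.0, on_failure=failures.append)
mon.record_heartbeat("proxy-P2", now=0.0)
mon.record_heartbeat("client-U1", now=2.5)
mon.check(now=4.0)   # P2 silent for 4.0 s > 3.0 s; U1 silent only 1.5 s
```

In this sketch only proxy-P2 exceeds the timeout and triggers the failure callback, matching the behavior described for a faulty remote service proxy.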
  • The digest log 333 may be a cache or any temporary data storage that records recent FP updates. The digest log 333 may store a snapshot of a local view of the conference service including all the attached participants (e.g. including FPs) and all the remote proxies (e.g. some digest information) at a specified time, where the local view may be represented in the form of a digest tree as discussed more fully below. The digest log 333 may also store a history of FP updates corresponding to the digest tree, where each entry may be in the form of <global root digest>:<global digest tree>:<local digest tree> as discussed more fully herein below. The application cache 334 may be a temporary data storage that stores FPs that are in transmission (e.g. transmission status may not be confirmed). The FP processor 331 may manage the digest log 333 and/or the application cache 334 for storing and tracking FP updates. In an embodiment, the FP processor 331 and the heartbeat signal processor 332 may serve one or more conferences (e.g. a chat and a VWB). In such an embodiment, the FP processor 331 may employ different digest logs 333 and/or different application caches 334 for different conferences.
  • FIG. 4 is a schematic diagram of an embodiment of an architectural view of an ICN-enabled UE 400 (e.g. UE 130), which may be situated in a multi-tier hybrid conference service network (e.g. network 100). The UE 400 may comprise a substantially similar architecture to that of the service proxy 300. For example, the UE 400 may comprise an application layer 410, an application pool 411, a service API to application layer 420, a service layer 430, other service modules 435, an ICN layer 440, and a L2/L3 layer 450, which may be substantially similar to application layer 310, application pool 311, service API to application layer 320, service layer 330, other service modules 335, ICN layer 340, and L2/L3 layer 350. However, the service layer 430 may comprise a sync client 436 instead of a sync proxy 336 as in the service proxy 300. In addition, the UE 400 may comprise an S-UNI control layer 461 and an S-UNI data layer 462 instead of an S-UNI layer 361 and S-NNI layer 362 as in the service proxy 300. The S-UNI data layer 462 may exchange conference data (e.g. video, audio, rich text) with one or more conference participants (e.g. situated in UEs 130) in the network. The S-UNI control layer 461 may interface with a service proxy to exchange control signaling with the network.
  • The sync client 436 may be a service client configured to participate in a conference. The sync client 436 may communicate with a service proxy (e.g. service proxy 300) serving the conference or more specifically a sync proxy (e.g. sync proxy 336) in the network. The sync client 436 may communicate with the service proxy via the ICN layer 440, the L2/L3 layer 450, and the S-UNI control layer 461. The sync client 436 may comprise a FP processor 431, a heartbeat signal processor 432, a digest log 433, and an application cache 434.
  • The FP processor 431 may be substantially similar to FP processor 331. However, the FP processor 431 may send FP updates (e.g. join, leave, re-join a conference) to a service proxy and may receive other participant's FP updates from the service proxy.
  • The heartbeat signal processor 432 may be substantially similar to heartbeat processor 332. However, the heartbeat signal processor 432 may monitor and exchange liveliness (e.g. functional statuses) indications with the service proxy and may employ substantially similar mechanisms for detecting network failure at the service proxy and notifying application layer 410.
  • The digest log 433 may be substantially similar to digest log 333, but may store a digest tree with a most recent global root digest received from the associated service proxy and a history of FP updates corresponding to the global root digest (e.g. <global root digest>:<user FP>) as discussed more fully below. The application cache 434 may be substantially similar to application cache 334. In an embodiment, the FP processor 431 and the heartbeat signal processor 432 may serve one or more conferences (e.g. a chat and a VWB). In such an embodiment, the FP processor 431 may employ different digest logs 433 and/or different application caches 434 for different conferences.
  • FIG. 5 is a schematic diagram of an embodiment of a hierarchical view of a multi-tier hybrid conference service network 500, for example, as implemented in network 100. Network 500 may comprise a plurality of service proxies 521, 522, and 523 (e.g. service proxies 300 and/or SRs 120) at a first level and a plurality of service clients 531, 532, and 533 (e.g. service clients 400 and/or UEs 130) at a second level. Each service client 531, 532, and/or 533 may be associated with one of the service proxies 521, 522, or 523. For example, service clients 531, 532, and 533 may be configured to communicate with service proxy 521, 522, and 523, respectively. The service proxies 521, 522, and 523 may be inter-connected and may be configured to exchange conference updates with each other and with corresponding service clients 531, 532, and 533, respectively. For example, service proxy 521 (e.g. a local service proxy) may receive conference update from client 531 and may update remote service proxies 522 and 523, which may further update associated clients 532 and 533, respectively. It should be noted that the service proxies 521, 522, and 523 and/or the service clients 531, 532, and 533 may also be referred to as conference components in the present disclosure.
  • FIG. 6 is a schematic diagram of an embodiment of a digest tree 600 at a service proxy (e.g. service proxy 300, 521-523, and/or SR 120) during a conference steady state. For example, the digest tree 600 may correspond to a digest tree at a service proxy P1. The digest tree 600 may be generated by a FP processor (e.g. FP processor 331) at a sync proxy (e.g. sync proxy 336). The digest tree 600 may be stored in a proxy digest log (e.g. digest log 333) and may represent a snapshot of a localized view (e.g. at a particular time instant) of the conference at the service proxy P1. The digest tree 600 may comprise a node 610 (e.g. depicted as G1, dg1(t)), which may be referred to as a global root digest G1 and may indicate a global state dg1(t) at the global root digest G1, where the subscript 1 may represent the identifier (ID) of the service proxy P1. The node 610 may branch into a plurality of nodes 620 (e.g. depicted as P1, dp1(t), . . . , Pn, dpn(t)), which may be referred to as proxy local digest roots. Each node 620 may correspond to a service proxy serving the conference and may indicate a local state dpn(t) at the corresponding service proxy. The node 620 that corresponds to service proxy P1 (e.g. P1, dp1(t)) may further branch into a plurality of leaf nodes 630 (e.g. depicted as U1, fp1(t), . . . , Um, fpm(t)), where the tree branches that fall under the node 620 corresponding to service proxy P1 may be referred to as a local digest tree. Each leaf node 630 may correspond to a service client Um (e.g. service client 400, 531-533, and/or UE 130) attached to the service proxy P1 and may indicate the FP update fpm(t) (e.g. published content and associating update sequence number) received from the service client Um, where the subscript m may represent the service client ID. It should be noted that service proxy P1 may maintain FPs of the attached service clients and local states of the remote service proxies.
  • The service proxy P1 may track and update the states at each node 610, 620, and 630 by tracking updates received from the remote service proxies in the conference and/or the attached service clients. When service proxy P1 receives a FP update (e.g. Um, fpm(t)) from a service client m, service proxy P1 may update the node 630 that corresponds to the service client m according to the received FP update, as well as the global state dg1(t) at node 610 and the local state dp1(t) at the node 620 that corresponds to service proxy P1. When service proxy P1 receives a FP update (e.g. comprising a global state dgn(t) and/or local state dpn(t)) from a remote service proxy Pn, the service proxy P1 may update the global state dg1(t) at node 610 and the local state dpn(t) at the node 620 that corresponds to the remote service proxy Pn. It should be noted that the states at nodes 610, 620, and 630 may be updated at different time instants. In addition, service proxy P1 may store each received FP update in a digest log history in the digest log and may purge old entries after a predetermined time and/or based on the size of the digest log history.
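The state maintenance just described may be sketched as a small data structure. The class and method names here are illustrative assumptions; only the update rules (client updates increment the local and global states, remote updates replace a remote local state) come from the description above.

```python
class DigestTree:
    """Sketch of the digest tree 600 state tracking at a local proxy P1."""

    def __init__(self, proxy_id, remote_ids):
        self.proxy_id = proxy_id
        self.fp = {}                                       # Um -> fp_m(t)
        self.dp = {p: 0 for p in [proxy_id] + remote_ids}  # Pn -> dp_n(t)

    @property
    def dg(self):
        # Global state dg_1(t): the sum of all proxies' local states.
        return sum(self.dp.values())

    def client_update(self, client_id):
        # FP update from an attached service client Um (node 630):
        # bump the leaf and the local state dp_1(t) at node 620.
        self.fp[client_id] = self.fp.get(client_id, 0) + 1
        self.dp[self.proxy_id] += 1

    def proxy_update(self, remote_id, remote_dp):
        # FP update from a remote service proxy Pn carrying its dp_n(t):
        # adopt the remote local state; dg follows automatically.
        self.dp[remote_id] = remote_dp

p1 = DigestTree("P1", ["P2", "P3"])
p1.client_update("U1")        # a local client's FP update
p1.proxy_update("P2", 1)      # a remote proxy reports one update of its own
```

After these two updates the local state dp1 is 1 and the global state dg1 is 2, reflecting one local and one remote FP update.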
  • The global state dg1(t) and the local state dp1(t) at a service proxy P1 at a time instant t may be computed as shown below:
  • dp1(t) = Σ_{i=1}^{m} fp_i(t), dg1(t) = dp1(t) + Σ_{j=2}^{n} dp_j(t)  (1)
  • where Σ_{i=1}^{m} fp_i(t) may represent the total number of FP updates sent by the attached service clients at time t and Σ_{j=2}^{n} dp_j(t) may represent the total number of FP updates sent by the remote service proxies serving the conference at time t.
  • In an embodiment, a service client connecting to service proxy P1 may maintain a digest log (e.g. digest log 433) to track updates received from service proxy P1. For example, the service client may generate an entry in the digest log history of the digest log to record the received FP update (e.g. <G1, dg1(t)>:<Um, fpm(t)>) and may update the digest tree with the received global root digest. As such, the disclosed multi-tier hybrid conference service control architecture may offload the maintenance and tracking of conference controls from the service clients when compared to a server-centric architecture and/or a server-less architecture. It should be noted that the service client may purge old entries in the digest log history after a predetermined time and/or based on the size of the digest log history.
  • Each digest tree may represent a snapshot of a localized view of the conference at a specified time. Each digest tree may comprise a different tree structure at a particular time instant. FIGS. 7-9 may illustrate an embodiment of a digest log at a service proxy and at a service client during a conference. FIG. 7 is a schematic diagram of an embodiment of a conference 700 formation. Conference 700 may comprise two service proxies P1 and P2 (e.g. service proxies 300 and/or SRs 120) managing conference 700 and three conference participants U1, U2, and U3 (e.g. service clients 400 and/or UEs 130). For example, participant U1 may join conference 700 via service proxy P1 at time instant t1, participant U3 may join conference 700 via service proxy P2 at time instant t2, and participant U2 may join conference 700 via service proxy P1 at time instant t3.
  • FIG. 8 is a schematic diagram of an embodiment of digest tree log 800 at a service proxy (e.g. service proxy 300 and/or SR 120) during a conference 700. For example, digest tree log 800 may represent the digest tree log at a service proxy P1. Digest tree log 800 may illustrate digest trees 810, 820, 830, and 840 at four different time instants during conference 700. Digest tree 810 may illustrate a conference view at a beginning time t0 of the conference at the service proxy P1. Digest tree 820 may illustrate a conference view at a time instant t1 when a participant U1 joins the conference via the service proxy P1. Digest tree 830 may illustrate a conference view at a time instant t2 when a participant U3 joins the conference via a service proxy P2. Digest tree 840 may illustrate a conference view at a time instant t3 when a participant U2 joins the conference via the service proxy P1. Each digest tree 810, 820, 830, and 840 may comprise a substantially similar structure as in digest tree 600 described herein above. As shown in digest tree log 800, when a participant joins the conference via a service proxy, the corresponding local state dpn at node Pn, dpn, as well as the global state dg1 at G1, dg1(t), may each be incremented by one. It should be noted that the local state dp1 and the global state dg1 of service proxy P1 may be computed by service proxy P1, and the local states dpn of the remote service proxies may be computed by the remote service proxies and received by service proxy P1.
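The evolution of the digest tree log during conference 700 may be sketched as follows. The tuple representation of each log entry is an assumption for illustration; the join sequence (U1 via P1 at t1, U3 via P2 at t2, U2 via P1 at t3) is taken from the description above.

```python
# Each log entry is (time, per-proxy local states dp_n, global state dg_1).
dp = {"P1": 0, "P2": 0}
log = [("t0", dict(dp), sum(dp.values()))]        # digest tree 810

# U1 joins via P1 at t1, U3 via P2 at t2, U2 via P1 at t3; each join
# increments the joining proxy's local state and the global state by one.
for t, via in [("t1", "P1"), ("t2", "P2"), ("t3", "P1")]:
    dp[via] += 1
    log.append((t, dict(dp), sum(dp.values())))   # trees 820, 830, 840
```

At t3 the log's last entry holds dp1 = 2, dp2 = 1, and dg1 = 3, matching a conference with three participants spread across two proxies.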
  • FIG. 9 is a table of an embodiment of digest history log 900 at a service client (e.g. service clients 400 and/or UEs 130) during a conference 700. For example, the digest history log 900 may be recorded by a service client U1 attached to a service proxy P1. Digest history log 900 may represent a global root digest and a service client FP and may employ substantially similar notations as described in digest tree log 800. The digest history log 900 may be illustrated as entries 910, 920, 930, and 940, which may be generated after receiving FP updates from service proxy P1. For example, entry 910 may correspond to the beginning of conference 700. Entry 920 may be generated when service client U1 receives a FP update from service proxy P1 indicating the joining of service client U1. Entry 930 may be generated when service client U1 receives a FP update from service proxy P1 indicating the joining of service client U3 (e.g. via service proxy P2). Entry 940 may be generated when service client U1 receives a FP update from service proxy P1 indicating the joining of service client U2 (e.g. via service proxy P1).
  • FIG. 10 is a protocol diagram of an embodiment of a conference bootstrap method 1000 in a multi-tier hybrid conference service network (e.g. network 100). Method 1000 may be implemented between service proxies P1, P2, and P3 (e.g. service proxies 300 and/or SRs 120), and a participant U1 (e.g. service client 400 and/or UE 130). Method 1000 may represent a global root digest and a proxy local digest root by employing substantially similar notations as described herein above. However, method 1000 may represent a service client's FP by employing a notation of Um−FPk for clarity, where Um may represent a service client m and FPk may represent the kth FP published by the service client. Method 1000 may begin with a first participant U1 joining a conference. For example, at step 1010, participant U1 may send a connect request message to service proxy P1 to request a conference session. At step 1020, service proxy P1 may respond by sending a connect reply message to participant U1 to complete the conference setup.
  • At step 1030, participant U1 may send (e.g. via a push) a join update message to service proxy P1, where the join update message may comprise the participant U1's signature profile (e.g. U1−FP0). At step 1031, after receiving the join update message, service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at service proxy P1 according to the received join update message. For example, the FP history may comprise the following entries:
  • Last entry: <G1,0>:<P1,0, P2,0, P3,0>
  • Current entry: <G1,1>:<P1,1, P2,0, P3,0>:<U1−FP0>
  • where the global state dg1 and the local state dp1 at service proxy P1 may each be incremented by one.
  • At step 1040, service proxy P1 may send a first digest update message (e.g. G1,1/U1−FP0) to participant U1. In response to the join update message, service proxy P1 may send (e.g. via a push) a first join update message (e.g. with updated state P1,1/P2/U1−FP0) to service proxy P2 at step 1050 and a second join update message (e.g. with updated state P1,1/P3/U1−FP0) to service proxy P3 at step 1060. At this time, service proxies P1, P2, and P3 may be synchronized with participant U1's joining update.
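The bootstrap exchange of steps 1010 through 1060 may be traced as an ordered message list. The function and message strings below are illustrative assumptions, not the disclosure's wire format; only the ordering of the steps follows the protocol described above.

```python
def bootstrap(client, proxy, remotes):
    """Sketch of method 1000 for the first participant joining via its proxy."""
    msgs = [
        (client, proxy, "connect request"),                   # step 1010
        (proxy, client, "connect reply"),                     # step 1020
        (client, proxy, f"join update {client}-FP0"),         # step 1030
        (proxy, client, f"digest update G1,1/{client}-FP0"),  # step 1040
    ]
    for r in remotes:                                         # steps 1050/1060
        msgs.append((proxy, r, f"join update {proxy},1/{r}/{client}-FP0"))
    return msgs

trace = bootstrap("U1", "P1", ["P2", "P3"])
```

The trace ends with one pushed join update per remote proxy, at which point P1, P2, and P3 are synchronized with U1's joining update.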
  • FIG. 11 is a protocol diagram of an embodiment of a conference synchronization method 1100. Method 1100 may be implemented between service proxies P1 and P2 (e.g. service proxies 300 and/or SRs 120) and participants U1 and U3 (e.g. service clients 400 and/or UEs 130) during a notification process, where participant U1 may be connected to service proxy P1 and participant U3 may be connected to service proxy P2. The notification process may include join, log off, re-join, and/or other conference control notifications. Method 1100 may represent a global root digest, a proxy local digest root, and a service client's FP by employing substantially similar notations as method 1000 described herein above. Method 1100 may begin when a conference is in a steady state. For example, service proxy P1 and service proxy P2 may comprise a same global state n (e.g. G1,n, G2,n), service proxy P1 may comprise a local state m (e.g. P1,m), and service proxy P2 may comprise a local state k (e.g. P2,k). Method 1100 may be suitable for synchronizing states upon an injection of a new notification message (e.g. join, leave, rejoin, and/or other control update messages), for example, from a participant U3.
  • At step 1110, participant U3 may send (e.g. via a push) a notification message to service proxy P2, where the notification message may comprise participant U3's signature profile (e.g. U3−FPj). At step 1111, in response to the notification message, service proxy P2 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at service proxy P2 according to the notification message. For example, the FP history may comprise the following entries:
  • Last entry: <G2,n>:<P1,m, P2,k>
  • Current entry: <G2,n+1>:<P1,m, P2,k+1>:<U3−FPj>
  • where the global state dg2 and the local state dp2 at service proxy P2 may each be incremented by one.
  • At step 1120, after updating the proxy digest log at service proxy P2, service proxy P2 may send (e.g. via a push) a first digest update message (e.g. G2,n+1/U3−FPj) to participant U3. At step 1130, in response to the received notification message, service proxy P2 may send (e.g. via a push) the notification message (e.g. with the updated state P2,k+1/U3−FPj) to service proxy P1.
  • At step 1131, in response to the notification message, service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at the service proxy P1 according to the received notification message. For example, the FP history may comprise the following entries as shown below:
  • Last entry: <G1,n>:<P1,m, P2,k>
  • Current entry: <G1,n+1>:<P1,m, P2,k+1>:<U3−FPj>
  • where the global state dg1 at service proxy P1 may be incremented by one and the service proxy P2's local state dp2 may be updated according to the received FP update. At step 1140, after updating the proxy digest log at service proxy P1, service proxy P1 may send (e.g. via a push) a second digest update message (e.g. G1,n+1/U3−FPj) to participant U1. It should be noted that the global state at service proxies P1 and P2 may be synchronized at this time.
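The push-mode synchronization of steps 1110 through 1140 may be sketched as follows. The Proxy class and message shapes are illustrative assumptions; the state-update rules follow the description above.

```python
class Proxy:
    """Sketch of push-mode sync (method 1100) at a service proxy."""

    def __init__(self, name):
        self.name, self.peers = name, []
        self.dg, self.dp = 0, {}
        self.clients = []          # each client is a list of digest updates

    def notify_from_client(self, fp):
        # Steps 1110/1111: an attached client pushes a notification;
        # bump the local and global states and update the digest log.
        self.dg += 1
        self.dp[self.name] = self.dp.get(self.name, 0) + 1
        for c in self.clients:     # step 1120: digest update to local clients
            c.append((self.dg, fp))
        for p in self.peers:       # step 1130: push to remote proxies
            p.notify_from_proxy(self.name, self.dp[self.name], fp)

    def notify_from_proxy(self, src, src_dp, fp):
        # Step 1131: a remote proxy's push; bump the global state and
        # adopt the remote local state.
        self.dg += 1
        self.dp[src] = src_dp
        for c in self.clients:     # step 1140: digest update to local clients
            c.append((self.dg, fp))

p1, p2 = Proxy("P1"), Proxy("P2")
p1.peers, p2.peers = [p2], [p1]
u1, u3 = [], []                    # participants record digest updates
p1.clients.append(u1)
p2.clients.append(u3)

p2.notify_from_client("U3-FPj")    # U3 pushes a notification to P2
```

After the single push, both proxies hold the same global state and both participants have received the same digest update, which is the synchronized end state noted above.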
  • FIG. 12 is a protocol diagram of another embodiment of a conference synchronization method 1200. Method 1200 may be implemented between service proxies P1 and P2 (e.g. service proxies 300 and/or SRs 120) and participants U1 and U3 (e.g. service clients 400 and/or UEs 130) during a notification process, where participant U1 may be connected to service proxy P1 and participant U3 may be connected to service proxy P2. The notification process may include join, log off, re-join, and/or other conference control notifications. Method 1200 may represent a global root digest, a proxy local digest root, and a service client's FP by employing substantially similar notations as method 1100 described herein above. Method 1200 may employ the same push mechanism between participants and service proxies as method 1100, but may employ a pull mechanism between service proxies. Method 1200 may begin when a conference is in a steady state. For example, service proxy P1 and service proxy P2 may comprise a same global state n (e.g. G1,n, G2,n), service proxy P1 may comprise a local state m (e.g. P1,m), and service proxy P2 may comprise a local state k (e.g. P2,k). Method 1200 may be suitable for synchronizing states upon an injection of a new notification message (e.g. join, leave, rejoin, and/or other control update messages), for example, from a participant U3.
  • At step 1210, service proxy P1 may send a sync interest message (e.g. an interest packet to initiate a pull process) to service proxy P2. The sync interest message may indicate the current global state (e.g. G1,n) at service proxy P1. The sync interest message may serve as an outstanding interest for a next conference update.
  • At step 1220, participant U3 may send (e.g. via a push) a notification message to service proxy P2, where the notification message may comprise participant U3's signature profile (e.g. U3−FPj). At step 1221, in response to the notification message, service proxy P2 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at service proxy P2 according to the notification message. For example, the FP history may comprise the following entries:
  • Last entry: <G2,n>:<P1,m, P2,k>
  • Current entry: <G2,n+1>:<P1,m, P2,k+1>:<U3−FPj>
  • where the global state dg2 and the local state dp2 at service proxy P2 may each be incremented by one.
  • At step 1230, after updating the proxy digest log at service proxy P2, service proxy P2 may send (e.g. via a push) a first digest update message (e.g. G2,n+1/U3−FPj) to participant U3. At step 1240, service proxy P2 may detect that service proxy P1 has an outstanding sync interest message with an outdated global state of n, and thus may respond to the sync interest message by sending a sync response message (e.g. G2,n+1/U3−FPj) to service proxy P1.
  • At step 1241, in response to the sync response message, service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at the service proxy P1 according to the received sync response message. For example, the FP history may comprise the following entries as shown below:
  • Last entry: <G1,n>:<P1,m, P2,k>
  • Current entry: <G1,n+1>:<P1,m, P2,k+1>:<U3−FPj>
  • where the global state dg1 at service proxy P1 may be incremented by one. At step 1250, after updating the proxy digest log at service proxy P1, service proxy P1 may send (e.g. via a push) a second digest update message (e.g. G1,n+1/U3−FPj) to participant U1. It should be noted that the global state at service proxies P1 and P2 may be synchronized at this time. In addition, each service proxy P1 and/or P2 may send another pending sync interest message to pull a next conference update after receiving a sync response message (e.g. FP updates from remote service proxies). In some embodiments, sync interest messages may be aggregated into a single message per access link (e.g. links 141).
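The pull mechanism of steps 1210 through 1240 may be sketched as follows: a proxy parks a sync interest at its peer, and the peer answers it once a client notification advances the peer's state past the interest's recorded state. The class and field names are illustrative assumptions.

```python
class PullProxy:
    """Sketch of pull-mode sync (method 1200) between service proxies."""

    def __init__(self, name):
        self.name, self.dg = name, 0
        self.pending = []      # outstanding sync interests: (requester, state)
        self.received = []     # sync responses pulled from remote proxies

    def sync_interest(self, remote, state):
        # Step 1210: park an interest at a remote proxy, carrying our
        # current global state, to pull the next conference update.
        remote.pending.append((self, state))

    def notify_from_client(self, fp):
        # Steps 1220/1221: a local client's push advances the global state.
        self.dg += 1
        # Step 1240: answer any interest that holds an outdated state.
        still_pending = []
        for requester, state in self.pending:
            if state < self.dg:
                requester.received.append((self.dg, fp))  # sync response
            else:
                still_pending.append((requester, state))
        self.pending = still_pending

p1, p2 = PullProxy("P1"), PullProxy("P2")
p1.sync_interest(p2, p1.dg)      # outstanding interest at global state 0
p2.notify_from_client("U3-FPj")  # U3's push satisfies the parked interest
```

The parked interest behaves like an ICN interest held in a PIT: it stays pending until new data (here, U3's notification) arrives, then is consumed by the sync response.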
  • FIG. 13 is a flowchart of an embodiment of a conference recovery method 1300. Method 1300 may be implemented at a service proxy (e.g. service proxy 300 and/or SRs 120) and/or a service client (e.g. service client 400 and/or UE 130). Method 1300 may be suitable for recovering conference updates from a temporary network interruption when a push mechanism (e.g. method 1100) is employed for conference control synchronization. Method 1300 may begin when a service proxy and/or a service client recovers from a temporary network interruption (e.g. less than a few minutes), for example, due to network failure, link congestions, and/or other network faulty conditions. At step 1310, method 1300 may wait for a notification message from a connected component. For example, the connected component for a service client may be a service proxy and the connected component for a service proxy may be a service client and/or a service proxy.
  • Upon receiving the notification message, method 1300 may proceed to step 1320. At step 1320, method 1300 may determine whether there are missing updates (e.g. occurred during the temporary interruption) from the connected component. For example, method 1300 may compare a last state of the connected component indicated in the notification message to a most recent recorded state in a digest log (e.g. digest log 333 and/or 433). If there is no missing update (e.g. the received last state and the most recent recorded state are identical), then method 1300 may proceed to step 1330. At step 1330, method 1300 may update the digest log and return to step 1310.
  • If there are one or more missing updates (e.g. the received last state and the most recent recorded state are different), method 1300 may proceed to step 1340. At step 1340, method 1300 may send a recovery request message to the connected component, for example, indicating the most recent recorded state and the received current state (e.g. gap of missing updates). At step 1350, method 1300 may wait for a recovery data message. Upon receiving the recovery data message, method 1300 may continue to step 1330 to update the digest log. Method 1300 may be repeated for the duration of a conference.
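The gap check at step 1320 may be sketched as a comparison between the component's recorded state and the state reported in the incoming notification. The helper name and the (exclusive, inclusive) range convention are assumptions for illustration.

```python
def check_for_gap(recorded_state, reported_last_state):
    """Sketch of step 1320: return the range of missing updates, or None.

    recorded_state      -- most recent state in the local digest log
    reported_last_state -- last state carried by the notification message
    """
    if reported_last_state == recorded_state:
        return None                 # step 1330: no gap, just log the update
    # Step 1340: request recovery for the missing range of updates.
    return (recorded_state, reported_last_state)

assert check_for_gap(recorded_state=7, reported_last_state=7) is None
gap = check_for_gap(recorded_state=5, reported_last_state=8)
```

When a gap exists, the returned range identifies exactly which updates the recovery request message should ask the connected component to resend.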
  • FIG. 14 is a flowchart of another embodiment of a conference recovery method 1400, which may be implemented at a service proxy (e.g. service proxies 300 and/or SRs 120). Method 1400 may be suitable for recovering conference updates from a temporary network interruption when a service proxy employs a pull mechanism (e.g. method 1200) for synchronizing conference updates with remote service proxies. Method 1400 may begin when a service proxy recovers from a temporary network interruption (e.g. less than a few minutes), for example, due to link failure, link congestions, and/or other faulty conditions in a network. After the recovery, the service proxy may have missed one or more conference updates (e.g. more advanced global state) that occurred during the temporary network interruption. At step 1410, method 1400 may start a timer with a predetermined wait period (e.g. randomized time period). At step 1420, method 1400 may determine whether the timer has expired (e.g. reached the end of the wait period).
  • If the timer has not expired, method 1400 may check if a digest update message is received at step 1440. If method 1400 does not receive a digest update message, method 1400 may return to step 1420 and continue to wait for the expiration of the timer. If method 1400 receives a digest update message, method 1400 may proceed to step 1450. At step 1450, method 1400 may update a digest log (e.g. digest log 333) according to the received digest update message. For example, the digest update message may comprise the conference updates that occurred during the temporary network interruption.
  • If the timer expires at step 1420, method 1400 may proceed to step 1430. At step 1430, method 1400 may send a recovery sync message to request conference update recovery. At step 1431, method 1400 may wait for a recovery update message. If method 1400 receives a recovery update message, method 1400 may proceed to step 1450 to update the digest log. In an embodiment, the recovery update message may comprise some or all of the conference updates that occurred during the temporary network interruption. It should be noted that the conference may return to a steady state (e.g. all service proxies comprise a same global state) at the end of the method 1400. In addition, method 1400 may be repeated when another network interruption occurs during the recovery process and may employ a different wait period (e.g. with exponential back-off) for the timer.
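The randomized wait with exponential back-off mentioned above may be sketched as follows. The base period, retry count, and doubling rule are illustrative assumptions; the disclosure only specifies a randomized wait that backs off across repetitions.

```python
import random

def wait_periods(base_s, attempts, rng):
    """Sketch of method 1400's timer: a randomized wait period whose
    upper bound doubles on each repeated recovery attempt."""
    return [rng.uniform(0, base_s * (2 ** i)) for i in range(attempts)]

rng = random.Random(42)          # fixed seed so the sketch is repeatable
periods = wait_periods(base_s=1.0, attempts=4, rng=rng)
```

Randomizing the wait keeps multiple recovering proxies from sending their recovery sync messages simultaneously, and the growing bound spreads out retries under repeated interruptions.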
  • It should be noted that in some embodiments a local service proxy may detect missing heartbeat messages from a first remote service proxy (e.g. due to long-term network connectivity failure between the local service proxy and the first remote service proxy) while maintaining network connection with a second remote service proxy. In such embodiments, a network partition may occur. However, the local service proxy may employ method 1300 and/or 1400 to recover the first remote service proxy's conference updates from the second remote service proxy.
  • In an embodiment, service clients (e.g. service client 400 and/or UE 130) and/or service proxy (e.g. service proxy 300 and/or SR 120) may synchronize conference updates (e.g. FP updates and/or states) after receiving a notification message from a corresponding component, for example, via method 1100 and/or 1200. Some examples of notifications may include join notifications, log off notifications, re-join notifications, and/or any other types of notifications. A join notification process may be initiated by a service client joining a conference for the first time and may include steps, such as login authorization at a login server, publishing of a login message to the network, and/or sending of login notification. When the service client sends a join message (e.g. including the service client's FP) to a service proxy, the service proxy may cache the service client's FP, update the service proxy's local digest tree (e.g. digest tree 600), recompute the service proxy's root digest, and push the join notification to other remote service proxies.
  • A log off notification process may be initiated by a service client intentionally leaving (e.g. sending a log off message) a conference and may include steps, such as log off authorization at a login server, publishing of a log off message to the network, and/or sending of a log off notification. When the service client sends a log off message to a service proxy, the service proxy may delete a corresponding leaf node in the service proxy's digest tree, recompute the service proxy's root digest, and push the log off notification to other remote service proxies. The conference session may be closed after the log off process, for example, the service client may send a close request message to the service proxy and the service proxy may respond with a close reply message.
  • A re-join notification process may be initiated by a service client intentionally leaving a conference (e.g. a log off notification) and then subsequently re-joining the conference. After the service proxy receives a log off notification from the service client, the service proxy may preserve some information (e.g. FP updates and/or states) of the leaving service client for a predetermined period of time (e.g. a re-join timeout interval). When the service client re-joins the conference within the re-join timeout interval, the service proxy may resume the last state of the service client just prior to the log off process. However, when the service client joins the conference after the re-join timeout interval, the joining process may be substantially similar to a first-time join process.
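The join, log off, and re-join behavior described above can be sketched in a minimal service-proxy model. The class shape, SHA-256 root digest, and dict-based leaves are illustrative assumptions; the disclosure specifies only the behavior (cache the FP, recompute the root digest, preserve state for a re-join timeout interval), not an API.

```python
import hashlib
import time

class SyncProxy:
    """Minimal sketch of join / log off / re-join handling at a service proxy."""

    def __init__(self, rejoin_timeout=300.0):
        self.rejoin_timeout = rejoin_timeout
        self.leaves = {}      # client ID -> cached FP (digest-tree leaves)
        self.preserved = {}   # client ID -> (FP, log off time)
        self.root_digest = None
        self._recompute()

    def _recompute(self):
        # Root digest over all cached client FPs in a canonical order.
        h = hashlib.sha256()
        for cid in sorted(self.leaves):
            h.update(cid.encode() + b":" + self.leaves[cid].encode())
        self.root_digest = h.hexdigest()
        return self.root_digest

    def join(self, client_id, fp, now=None):
        now = time.time() if now is None else now
        kept = self.preserved.pop(client_id, None)
        if kept is not None and now - kept[1] <= self.rejoin_timeout:
            fp = kept[0]              # re-join within timeout: resume last state
        self.leaves[client_id] = fp   # cache the FP, update the digest tree
        return self._recompute()      # (push of join notification omitted)

    def log_off(self, client_id, now=None):
        now = time.time() if now is None else now
        fp = self.leaves.pop(client_id, None)
        if fp is not None:
            self.preserved[client_id] = (fp, now)  # keep for re-join window
        return self._recompute()      # (push of log off notification omitted)
```

A re-join within the timeout restores the preserved FP (so the root digest matches the pre-log-off state), while a join after the timeout behaves as a first-time join.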
  • A recovery process may occur after a network (e.g. network 100) experiences a temporary interruption and/or disconnection. For example, a conference component (e.g. a sync proxy 336 and/or a sync client 436) may continue to send heartbeat signals during the interruption, but may not receive heartbeat signals from a connected component. When the duration of the interruption is within a predetermined timeout interval (e.g. a disconnect timeout interval), each conference component may maintain digest log states (e.g. at digest log 333 and/or 433) and may continue with the last state (e.g. prior to the interruption) after recovering from the interruption. After recovery, each component may detect missing updates (e.g. updates that occurred during the interruption) from a connected component and may request digest log history from the connected component (e.g. via method 1300 and/or 1400).
  • However, when the network experiences a long-term network failure (e.g. longer than the disconnect timeout interval), a disconnection process may be performed at the service proxy and/or the service client. For example, when the network disruption occurs between a service proxy and a service client, the service proxy may detect the failure and may disconnect the service client by employing substantially similar mechanisms as in a log off notification process. When the network disruption occurs at a service proxy, other remote service proxies may detect the network failure and each service proxy may update the service proxy's digest log, for example, by deleting the node that corresponds to the faulty service proxy and re-computing the global state.
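The two cases above, temporary interruption versus long-term failure, reduce to a threshold decision on how long heartbeats have been missing. The function name and the inclusive comparison are assumptions; the disclosure only distinguishes the two handling paths by the disconnect timeout interval.

```python
def classify_interruption(silence_duration, disconnect_timeout):
    """Map the duration of missing heartbeats to the handling above.

    Within the disconnect timeout interval, the peer's digest log
    state is kept and missed history is requested after recovery;
    beyond it, the peer is removed (log-off-style disconnection)
    and the global state is recomputed.
    """
    return "recover" if silence_duration <= disconnect_timeout else "disconnect"
```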
  • In an embodiment of a multi-tier hybrid conference service network (e.g. network 100), conference controls and/or signaling exchanged between service proxies and/or service clients in a control plane (e.g. control paths 191 and 192) may include session setup and/or close messages, service-related synchronization messages, heartbeat messages, and/or recovery messages. For example, the session setup and/or close messages, service-related messages, and/or recovery messages may be initiated and/or generated by a FP processor (e.g. FP processor 331 and/or 431) at a sync proxy (e.g. sync proxy 336) and/or a sync client (e.g. sync client 436). The messages may be in the form of an interest packet and/or a data packet structured according to the ICN protocol. For example, an interest packet may be employed for sending a notification message and may employ a push mechanism. Some interest packets may be followed by data packets (e.g. response messages).
  • FIG. 15 is a schematic diagram of an embodiment of an ICN protocol interest packet 1500. The packet 1500 may comprise a name field 1510 and a nonce field 1520. The name field 1510 may be a name-based identifier that identifies an information object. The nonce field 1520 may be a numerical value that is employed for security, authentication, and/or encryption. For example, the nonce field 1520 may comprise a random number. The packet 1500 may be sent via a push mechanism and/or a pull mechanism.
  • In an embodiment, conference service control messages, such as session setup and/or close messages, service-related notification messages, heartbeat messages, and/or recovery messages, may be sent as ICN protocol interest packets 1500. The name field 1510 in each packet 1500 may begin with a routing prefix (e.g. <Routing-Prefix>), which may be name-based and may identify a recipient of the interest packet 1500. The following table lists examples of routing prefixes:
  • TABLE 1
    Names for Routing Prefix

    Recipient      Name for routing prefix
    Sync proxy     <ProxyIDR>
    Sync client    <ISP>:<DeviceID>
  • The ProxyIDR may be a remote sync proxy ID that identifies the sync proxy (e.g. sync proxy 336) situated in the SR (e.g. SR 120 may host one or more sync proxies). The ISP may be the name of an ISP that provides Internet service to a conference participant (e.g. UE 130). The DeviceID may be a UE ID or a sync client ID that identifies the conference participant (e.g. sync client 436 situated in the UE 130).
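The routing prefixes of Table 1 can be built as shown below. The colon separation follows the name layouts given in this section; the Python function itself is an illustrative assumption.

```python
def routing_prefix(recipient, *, proxy_idr=None, isp=None, device_id=None):
    """Build the name-based routing prefix of Table 1."""
    if recipient == "sync_proxy":
        return proxy_idr                    # <ProxyIDR>
    if recipient == "sync_client":
        return f"{isp}:{device_id}"         # <ISP>:<DeviceID>
    raise ValueError(f"unknown recipient: {recipient}")
```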
  • In a session setup and/or close process, a sync client may send session setup and/or close messages to a sync proxy requesting to connect to and/or disconnect from a conference, respectively. For example, an interest packet for a session setup and/or close message may comprise a name field 1510 as shown below:
  • <Routing-Prefix>:<ServiceID>:<ClientID>:<Msg-Type>
  • where Routing-Prefix may be the routing prefix for a sync proxy as shown in Table 1 herein above. The ServiceID, ClientID, and Msg-Type may indicate information as shown below:
  • TABLE 2
    Descriptions of name identifiers for session setup
    and/or close messages (sync client to sync proxy)

    Names       Descriptions
    ServiceID   Service name (e.g. chatroom-name)
    ClientID    Sync client ID
    Msg-Type    Connect (e.g. session setup) or disconnect (e.g. session close)
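Assembling the session setup/close name field from Table 2 can be sketched as follows; the function is an illustrative assumption over the layout <Routing-Prefix>:<ServiceID>:<ClientID>:<Msg-Type> given above.

```python
def session_name(prefix, service_id, client_id, msg_type):
    """Name field 1510 for a session setup/close interest (Table 2)."""
    if msg_type not in ("connect", "disconnect"):
        raise ValueError("Msg-Type is connect (setup) or disconnect (close)")
    return f"{prefix}:{service_id}:{client_id}:{msg_type}"
```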
  • In a notification process, a sync client may send notification messages to a sync proxy requesting to join, leave, and/or re-join a conference, and/or conveying other notification information. For example, an interest packet for a notification message from a sync client to a sync proxy may comprise a name field 1510 as shown below:
  • <Routing-Prefix>:<ServiceID>:<Msg-Type>:<dg>:<User-FP>
  • where Routing-Prefix may be the routing prefix for a sync proxy as shown in Table 1 herein above. The ServiceID, Msg-Type, dg, and User-FP may indicate information as shown below:
  • TABLE 3
    Descriptions of name identifiers for notification
    messages (sync client to sync proxy)

    Names       Descriptions
    ServiceID   Service name (e.g. chatroom-name)
    Msg-Type    Join, leave, re-join a conference, and/or other notifications
    dg          Current global state logged at the sync client's digest log
                (e.g. digest log 433)
    User-FP     Content of participant's FP
  • It should be noted that User-FP may be generated at a UE (e.g. UE 130) by an application (e.g. a chat application) situated in an application pool (e.g. application pool 411) at a service client (e.g. service client 400). The following shows an example of a User-FP:
  • <ISP>:<SR-ID>:<ServiceID>:<Service-AccountID>:<msg-Seq>
  • where ISP, SR-ID, and ServiceID may be as described herein above, Service-AccountID may correspond to the UE account ID in the ISP network, and msg-Seq may include the participant's signature information, credential information, security parameters, and/or an associated update sequence number. For example, the update sequence number may be employed for identifying the User-FP content.
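The User-FP layout and the client-to-proxy notification name of Table 3 can be composed as below; the functions are illustrative assumptions over the layouts shown in this section.

```python
def user_fp(isp, sr_id, service_id, service_account_id, msg_seq):
    """<ISP>:<SR-ID>:<ServiceID>:<Service-AccountID>:<msg-Seq>; msg_seq
    carries signature/credential data and an update sequence number."""
    return f"{isp}:{sr_id}:{service_id}:{service_account_id}:{msg_seq}"

def client_notification_name(prefix, service_id, msg_type, dg, fp):
    """Name field for a sync-client-to-sync-proxy notification interest
    (<Routing-Prefix>:<ServiceID>:<Msg-Type>:<dg>:<User-FP>, Table 3)."""
    return f"{prefix}:{service_id}:{msg_type}:{dg}:{fp}"
```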
  • In response to notifications received from a sync client, a local sync proxy may send notification messages to a remote sync proxy to update the remote sync proxy of the joining, leaving, and/or re-joining of sync clients, and/or other sync clients' published information. For example, an interest packet for a notification message from a local sync proxy to a remote sync proxy may comprise a name field 1510 as shown below:
  • <Routing-Prefix>:<ServiceID>:<Msg-Type>:<ProxyID>:<dp_pre>:<dp_curr>:<User-FP>
  • where Routing-Prefix may be the routing prefix for a remote sync proxy as shown in Table 1 herein above. The ServiceID, Msg-Type, ProxyID, dp_pre, dp_curr, and User-FP may indicate information as shown below:
  • TABLE 4
    Descriptions of name identifiers for notification
    messages (local sync proxy to remote sync proxy)

    Names       Descriptions
    ServiceID   Service name (e.g. chatroom-name)
    Msg-Type    Join, leave, re-join a conference, and/or other notification
                messages
    ProxyID     Local sync proxy (e.g. sync proxy 336) ID
    dp_pre      Last local state logged at the sync proxy's digest log
                (e.g. digest log 333)
    dp_curr     Updated local state at the sync proxy
    User-FP     FP content of a sync client (e.g. who initiated a join, leave,
                re-join, and/or other notifications)
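The proxy-to-proxy notification name of Table 4 can be composed as below. Carrying both dp_pre and dp_curr is what lets the receiver detect a gap (missed notifications) by comparing dp_pre against the last local state it logged for the sending proxy. The function is an illustrative assumption over the layout shown above.

```python
def proxy_notification_name(prefix, service_id, msg_type, proxy_id,
                            dp_pre, dp_curr, fp):
    """Name field for a local-proxy-to-remote-proxy notification interest
    (<Routing-Prefix>:<ServiceID>:<Msg-Type>:<ProxyID>:<dp_pre>:<dp_curr>:
    <User-FP>, Table 4)."""
    return f"{prefix}:{service_id}:{msg_type}:{proxy_id}:{dp_pre}:{dp_curr}:{fp}"
```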
  • In response to notifications received from a remote sync proxy, a local sync proxy may send notification messages to a sync client (e.g. unicast) to update the sync client of the joining, leaving, and/or re-joining of sync clients attached to the remote sync proxy. For example, an interest packet for a notification message from a local sync proxy to an attached sync client may comprise a name field 1510 as shown below:
  • <Routing-Prefix>:<ServiceID>:<Flag>:<dg_pre>:<dg_curr>:<User-FP>
  • where Routing-Prefix may be the routing prefix for a sync client as shown in Table 1 herein above. The ServiceID, Flag, dg_pre, dg_curr, and User-FP may indicate information as shown below:
  • TABLE 5
    Descriptions of name identifiers for notification
    messages (sync proxy to sync client)

    Names       Descriptions
    ServiceID   Service name (e.g. chatroom-name)
    Flag        Join, leave, re-join a conference, and/or other notification
                messages
    dg_pre      Last global state logged at the sync proxy's digest log
                (e.g. digest log 333)
    dg_curr     Updated global state
    User-FP     FP content of a sync client (e.g. who initiated a join, leave,
                re-join, and/or other notifications via a connected proxy)
  • In a sync process (e.g. a pull mechanism), a local service proxy may send a sync interest packet as an outstanding request such that the local service proxy may receive a next conference update from a remote sync proxy. For example, an interest packet for a sync interest message may comprise a name field 1510 as shown below:
  • <ServiceID>:<dg_curr>
  • where ServiceID and dg_curr may indicate information as shown below:
  • TABLE 6
    Descriptions of name identifiers for sync interest
    messages (local sync proxy to remote sync proxy)

    Names       Descriptions
    ServiceID   Service name (e.g. chatroom-name)
    dg_curr     Last logged global state at the local proxy
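The pull behavior around the sync interest can be sketched as below. Modeling global states as monotonically increasing integers is an assumption; the remote proxy holds the outstanding interest and answers only once its own state is newer than the dg_curr the interest carries.

```python
def sync_interest_name(service_id, dg_curr):
    """Name field for an outstanding sync interest
    (<ServiceID>:<dg_curr>, Table 6): a pull request for the next
    conference update."""
    return f"{service_id}:{dg_curr}"

def should_answer_sync(local_dg, dg_curr_in_interest):
    """A proxy answers a held sync interest only when its global state
    is more recent than the dg_curr carried in the interest."""
    return local_dg > dg_curr_in_interest
```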
  • Heartbeat messages may be sent by a sync proxy and/or a sync client to indicate liveliness (e.g. functional indicator and/or connectivity). For example, a local sync proxy may send heartbeat messages to a remote sync proxy as well as connected sync clients and a sync client may send heartbeat messages to a connected sync proxy. For example, an interest packet for a heartbeat message may comprise a name field 1510 as shown below:
  • <Routing-Prefix>:<OriginatorID>:<Flag>:<Sequence_no>
  • where Routing-Prefix may vary depending on the intended recipient as shown in Table 1 herein above. The OriginatorID, Flag, and Sequence_no may indicate information as shown below:
  • TABLE 7
    Descriptions of name identifiers for heartbeat messages

    Names         Descriptions
    OriginatorID  Depends on the sender:
                  Sync proxy: <ProxyIDR>
                  Sync client: <ClientID>
    Flag          Hello, Alive, etc.
    Sequence_no   Sequence number of the heartbeat message
  • It should be noted that heartbeat messages may be sent periodically and/or driven by predetermined events. In some embodiments, a recipient of a heartbeat message may send a confirmation message (e.g. as a data packet as discussed more fully below).
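The heartbeat name of Table 7 can be composed as below; the function is an illustrative assumption over the layout <Routing-Prefix>:<OriginatorID>:<Flag>:<Sequence_no> given above.

```python
def heartbeat_name(prefix, originator_id, flag, seq_no):
    """Name field for a heartbeat interest (Table 7). The sequence
    number lets a recipient notice skipped heartbeats, and the prefix
    routes to either a sync proxy or a sync client per Table 1."""
    return f"{prefix}:{originator_id}:{flag}:{seq_no}"
```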
  • A recovery process may refer to a network recovery subsequent to a temporary network interruption at a sync client and/or a sync proxy. During the interruption, the sync client and/or the sync proxy may miss updates (e.g. notification messages) from a corresponding connected component. After recovery, the sync client and/or the sync proxy may detect missed updates upon receiving notification messages from a corresponding connected component. For example, when a local service proxy employs a push mechanism for conference update synchronization with a remote service proxy, the local service proxy may detect missed notifications from a remote sync proxy by determining whether the dp_pre in a notification message received from the remote sync proxy is identical (e.g. no gap) to the local state logged in a last entry associated with the remote service proxy. A sync client may detect missed notifications from a sync proxy by determining whether the dg_pre in a notification message received from the sync proxy is identical (e.g. no gap) to the last global state logged at the sync client. For example, an interest packet for a recovery message may comprise a name field 1510 as shown below:
  • <Routing-Prefix>:<ServiceID>:<Msg-Type>:<digest_last>:<digest_new>
  • where Routing-Prefix may vary depending on the intended recipient as shown in Table 1 herein above. The ServiceID, Msg-Type, digest_last, and digest_new may indicate information as shown below:
  • TABLE 8
    Descriptions of name identifiers for recovery messages

    Names        Descriptions
    ServiceID    Service name (e.g. chatroom-name)
    Msg-Type     Recovery
    digest_last  Last logged global state dg or local state dp
    digest_new   Most recently received global state dg or local state dp
  • It should be noted that the digest_last and digest_new may vary depending on the sender and/or the recipient. For example, when a sync client requests digest log history from a sync proxy, the digest_last and digest_new may refer to the global state dg. When a sync proxy requests digest log history from a remote sync proxy, the digest_last and digest_new may refer to the remote sync proxy's local state dp. The digest_last and digest_new may indicate the missing digest log history.
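The gap detection and the recovery interest name of Table 8 can be sketched as below. Modeling states as comparable values and using "recovery" as the Msg-Type token are assumptions over the layouts shown in this section.

```python
def has_gap(pre_state_in_message, last_logged_state):
    """Missed updates are detected when the sender's previous state in
    a notification (dp_pre at a proxy, dg_pre at a client) does not
    equal the last state logged locally for that sender."""
    return pre_state_in_message != last_logged_state

def recovery_name(prefix, service_id, digest_last, digest_new):
    """Name field for a recovery interest
    (<Routing-Prefix>:<ServiceID>:<Msg-Type>:<digest_last>:<digest_new>,
    Table 8); the two digests bound the missing digest-log history."""
    return f"{prefix}:{service_id}:recovery:{digest_last}:{digest_new}"
```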
  • FIG. 16 is a schematic diagram of an embodiment of an ICN protocol data packet 1600. The packet 1600 may comprise a name field 1610 and a data field 1620. The name field 1610 may be substantially similar to the name field 1510. The data field 1620 may comprise content of message sequences and may vary depending on the message type, for example, some messages may comprise additional signature profile and/or credential information.
  • In an embodiment, conference service control messages, such as session setup and/or close response messages and/or recovery data messages, may be sent as ICN protocol data packets 1600. In a session setup and/or close process, a sync proxy may respond to a sync client by sending a session setup response and/or a session close response. For example, a data packet for a session setup and/or close response message may comprise a name field 1610 substantially similar to the name field in a session setup and/or close interest described herein above. The data field 1620 in a data packet for the session setup and/or close response may include a global state (e.g. dg) at the sync proxy and/or an acknowledgement to the requested session setup and/or close.
  • In a recovery process, a sync proxy may respond to a connected component's recovery request message by sending a digest log history. The depth (e.g. number of log entries) of the digest log may be determined by the digest_last and digest_new indicated in a recovery interest packet as described herein above and/or the depth in the cache as maintained by a responding component. For example, a data packet for a recovery response message may comprise a name field 1610 substantially similar to the name field in a recovery interest described herein above. The data field 1620 in a data packet for the recovery response may include history of FPs that is updated between the digest_last and digest_new indicated in the name field 1610.
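The extraction step above, returning only the digest-log portion bounded by digest_last and digest_new, can be sketched as follows. Modeling the digest log as a dict from state to FP is an assumption; the disclosure describes only the bounded-history behavior.

```python
def extract_history(digest_log, digest_last, digest_new):
    """Portion of the digest log a responder returns in a recovery data
    packet: the FP updates strictly after digest_last, up to and
    including digest_new."""
    return {state: fp for state, fp in sorted(digest_log.items())
            if digest_last < state <= digest_new}
```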
  • In a sync process, a sync proxy may respond to an outstanding sync interest message from a remote sync proxy by sending a sync data response message (e.g. when the global state dg_curr in the sync interest message is out of date). The sync proxy may send a sync data response message comprising the updated global state (e.g. dg_new) and a FP corresponding to the global state transition from dg_curr to dg_new.
  • At least one embodiment is disclosed and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g. from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). For example, whenever a numerical range with a lower limit, Rl, and an upper limit, Ru, is disclosed, any number falling within the range is specifically disclosed. In particular, the following numbers within the range are specifically disclosed: R=Rl+k*(Ru−Rl), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . , 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent. Moreover, any numerical range defined by two R numbers as defined in the above is also specifically disclosed. Unless otherwise stated, the term “about” means ±10% of the subsequent number. Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. 
Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. The discussion of a reference in the disclosure is not an admission that it is prior art, especially any reference that has a publication date after the priority date of this application. The disclosure of all patents, patent applications, and publications cited in the disclosure are hereby incorporated by reference, to the extent that they provide exemplary, procedural, or other details supplementary to the disclosure.
  • While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
  • In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.

Claims (20)

What is claimed is:
1. A network element (NE) comprising:
a memory configured to store a digest log for a conference;
a receiver configured to receive a first message from a first of a plurality of participants associated with the NE, wherein the first message comprises a signature profile of the first participant;
a processor coupled to the receiver and the memory and configured to track a state of the conference by performing a first update of the digest log according to the first message; and
a transmitter coupled to the processor and configured to send a second message to indicate the first update to a first of a plurality of service proxies that serve the conference.
2. The NE of claim 1, wherein performing the first update comprises:
generating a current entry in the digest log;
computing a local state for the current entry by incrementing a local state in a last entry of the digest log; and
recording the first participant's signature profile in the current entry, and
wherein the second message comprises:
the local state in the current entry;
the local state in the last entry; and
the first participant's signature profile.
3. The NE of claim 1, wherein the receiver is further configured to receive a third message from the first service proxy, wherein the third message comprises a signature profile of a second participant associated with the first service proxy, wherein the processor is further configured to perform a second update of the digest log according to the third message, and wherein the transmitter is further configured to send a fourth message to indicate the second update to the first participant.
4. The NE of claim 3, wherein the third message further comprises a current local state of the first service proxy and a last local state of the first service proxy, wherein performing the second update comprises:
generating a current entry associated with the first service proxy in the digest log;
recording the current local state in the current entry; and
computing a global state in the current entry by incrementing a global state in a last entry of the digest log, and
wherein the fourth message comprises:
the global state of the current entry;
the global state of the last entry; and
the second participant's signature profile.
5. The NE of claim 4, wherein the processor is further configured to determine that the NE has missed one or more updates from the first service proxy when the last local state indicated in the third message is different from a current local state in a last entry associated with the first service proxy, wherein the transmitter is further configured to transmit a recovery request message to the first service proxy to request the missed updates, and wherein the receiver is further configured to receive a recovery data message comprising at least one of the missed updates from the first service proxy.
6. The NE of claim 1, wherein the receiver is further configured to receive a recovery request message comprising a request for the digest log between a first state and a second state, wherein the processor is further configured to extract a portion of the digest log comprising states between the first state and the second state, and wherein the transmitter is further configured to transmit a recovery data message comprising the extracted portion of the digest log.
7. The NE of claim 1, wherein the digest log comprises:
a record of the service proxies and local states corresponding to the service proxies at a specified time;
a record of the participants and signature profiles corresponding to the participants at the specified time; and
a global state of the conference at the specified time.
8. The NE of claim 1, wherein the processor is further configured to:
receive heartbeat messages that indicate network connectivity statuses between the NE and the first service proxy; and
generate a recovery request message when no heartbeat message is received from the first service proxy for a timeout window, and
wherein the transmitter is further configured to send the recovery request message to a second of the plurality of service proxies.
9. The NE of claim 1, wherein the NE is an Information Centric Networking (ICN)-enabled service proxy for the conference, wherein the first message and the second message are ICN-protocol packets, wherein the first message and the second message, each comprises a routable name-based identifier, and wherein the first message further comprises a conference join request, a conference leave, a conference re-join request, or combinations thereof.
10. A method for synchronizing service controls for a conference at a local service proxy in an Information Centric Networking (ICN) network, the method comprising:
receiving a first message from a first of a plurality of participants associated with the local service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received by employing an ICN content name based routing scheme;
tracking a state of the conference by performing a first update for a digest log according to the first message; and
sending a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
11. The method of claim 10, wherein performing the first update comprises:
generating a current entry in the digest log;
computing a global state for the current entry by incrementing a global state in the last entry of the digest log; and
recording the first participant's signature profile in the current entry.
12. The method of claim 11 further comprising:
receiving a synchronization (sync) interest message comprising a global state of the first remote service proxy; and
initiating the sending of the second message when the global state in the current entry is more recent than the first remote service proxy's global state in the received sync interest message,
wherein the second message comprises the global state in the current entry of the digest log and the first participant's signature profile.
13. The method of claim 10 further comprising:
receiving a third message from the first remote service proxy, wherein the third message comprises a signature profile of a second participant associated with the first remote service proxy and a global state of the first remote service proxy;
performing a second update for the digest log according to the third message; and
pushing a fourth message to indicate the second update to the first participant.
14. The method of claim 13, wherein performing the second update comprises:
generating a current entry associated with the first remote service proxy in the digest log; and
recording the global state of the third message in the current entry,
wherein the fourth message comprises:
the global state received in the third message;
a global state in a last entry of the digest log; and
the second participant's signature profile,
wherein the method further comprises sending a synchronization (sync) interest message to indicate an interest for a next conference update, and
wherein the sync interest message comprises the global state in the current entry of the digest log.
15. The method of claim 10 further comprising:
receiving a recovery request message from the first participant, wherein the recovery request message comprises a request for the digest log between a first global state and a second global state;
in response to receiving the recovery request message, extracting a portion of the digest log comprising states between the first global state and the second global state; and
transmitting a recovery data message comprising the extracted portion of the digest log to the first participant.
16. The method of claim 10 further comprising:
starting a timer with a wait period when the local service proxy has missed one or more updates from the first remote service proxy;
sending a recovery request message when the timer reaches the wait period and no conference update is received during the wait period; and
receiving a recovery data message comprising at least one of the missed updates.
17. The method of claim 10 further comprising:
receiving heartbeat messages that indicate network connectivity statuses between the local service proxy and the first participant;
performing a third update for the digest log by removing the first participant from the digest log when no heartbeat message is received from the first participant for a timeout window; and
sending a third message indicating the third update to the first remote service proxy.
18. A computer program product for use by a local service proxy serving a conference in an Information Centric Networking (ICN) network, wherein the computer program product comprises computer executable instructions stored on a non-transitory computer readable medium such that when executed by a processor cause the local service proxy to:
receive a first message from a first of a plurality of participants associated with the local service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received via an ICN content name based routing scheme;
track a state of the conference by performing a first update of a digest log according to the first message, wherein performing the first update comprises recording the first participant's signature profile in the digest log; and
send a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
19. The computer program product of claim 18, wherein the instructions further cause the processor to:
receive a third message from the first remote service proxy, wherein the third message comprises a signature profile of a second participant associated with the first remote service proxy;
perform a second update of the digest log according to the third message; and
send a fourth message to indicate the second update to the first participant.
20. The computer program product of claim 19, wherein performing the first update further comprises:
incrementing a global state in the digest log; and
incrementing a local state of the local service proxy in the digest log,
wherein the third message further comprises a local state of the first remote service proxy,
wherein performing the second update comprises:
incrementing the global state in the digest log; and
tracking the first remote service proxy's local state by recording the local state of the third message in the digest log, and
wherein the digest log comprises:
a record of the remote service proxies and local states corresponding to the remote service proxies at a specified time;
a record of the participants and signature profiles corresponding to the participants at the specified time; and
the global state at the specified time.
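The digest log described in claims 18-20 can be modeled as a small record holding the three components the claims enumerate: the global state, the remote proxies' local states, and the participants' signature profiles. The class, field, and method names are assumptions for this sketch, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DigestLog:
    global_state: int = 0
    local_state: int = 0                                        # this proxy's own local state
    proxy_states: dict[str, int] = field(default_factory=dict)  # remote proxy -> local state
    profiles: dict[str, str] = field(default_factory=dict)      # participant -> signature profile

    def first_update(self, participant: str, profile: str):
        # Claim 20: a locally received join increments both the global state
        # and the local service proxy's own local state, and records the
        # participant's signature profile.
        self.global_state += 1
        self.local_state += 1
        self.profiles[participant] = profile

    def second_update(self, remote_proxy: str, remote_local_state: int,
                      participant: str, profile: str):
        # A remote update increments the global state and records the remote
        # proxy's local state as carried in the third message.
        self.global_state += 1
        self.proxy_states[remote_proxy] = remote_local_state
        self.profiles[participant] = profile

log = DigestLog()
log.first_update("alice", "sig-alice")
log.second_update("proxy-B", 7, "bob", "sig-bob")
print(log.global_state)   # 2
print(log.proxy_states)   # {'proxy-B': 7}
```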
US14/280,336 2013-05-17 2014-05-16 Multi-Tier Push Hybrid Service Control Architecture for Large Scale Conferencing over ICN Abandoned US20140344379A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/280,336 US20140344379A1 (en) 2013-05-17 2014-05-16 Multi-Tier Push Hybrid Service Control Architecture for Large Scale Conferencing over ICN

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201361824656P 2013-05-17 2013-05-17
US201461984505P 2014-04-25 2014-04-25
US14/280,336 US20140344379A1 (en) 2013-05-17 2014-05-16 Multi-Tier Push Hybrid Service Control Architecture for Large Scale Conferencing over ICN

Publications (1)

Publication Number Publication Date
US20140344379A1 true US20140344379A1 (en) 2014-11-20

Family

ID=50979893

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/280,336 Abandoned US20140344379A1 (en) 2013-05-17 2014-05-16 Multi-Tier Push Hybrid Service Control Architecture for Large Scale Conferencing over ICN
US14/280,325 Active 2035-03-31 US10171523B2 (en) 2013-05-17 2014-05-16 Multi-tier push service control architecture for large scale conference over ICN

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/280,325 Active 2035-03-31 US10171523B2 (en) 2013-05-17 2014-05-16 Multi-tier push service control architecture for large scale conference over ICN

Country Status (6)

Country Link
US (2) US20140344379A1 (en)
EP (1) EP2984785A2 (en)
JP (1) JP2016527589A (en)
KR (1) KR20160006781A (en)
CN (1) CN105359457A (en)
WO (2) WO2014186757A2 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9621853B1 (en) 2016-06-28 2017-04-11 At&T Intellectual Property I, L.P. Service orchestration to support a cloud-based, multi-party video conferencing service in a virtual overlay network environment
TWI650636B (en) * 2017-11-23 2019-02-11 財團法人資訊工業策進會 Detection system and detection method
CN110162413B (en) * 2018-02-12 2021-06-04 华为技术有限公司 Event-driven method and device
US10986184B1 (en) 2019-03-05 2021-04-20 Edjx, Inc. Systems and methods for it management of distributed computing resources on a peer-to-peer network
CN110233744B (en) * 2019-06-12 2021-06-01 广东佳米科技有限公司 Conference state display method, conference state updating method and device
CN110493885B (en) * 2019-08-21 2020-12-08 北京理工大学 Named data network continuous data pushing method for data fragmentation
CN113630428B (en) * 2020-05-08 2022-09-02 中国电信股份有限公司 Acquisition method and acquisition system for service data
US11863592B2 (en) 2021-05-14 2024-01-02 Cisco Technology, Inc. Active speaker tracking using a global naming scheme

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060282408A1 (en) * 2003-09-30 2006-12-14 Wisely David R Search system and method via proxy server
US20080003964A1 (en) * 2006-06-30 2008-01-03 Avaya Technology Llc Ip telephony architecture including information storage and retrieval system to track fluency
US20100061538A1 (en) * 2008-09-09 2010-03-11 David Coleman Methods and Systems for Calling Conference Participants to Establish a Conference Call
US20110110364A1 (en) * 2009-04-27 2011-05-12 Lance Fried Secure customer service proxy portal

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6477545B1 (en) * 1998-10-28 2002-11-05 Starfish Software, Inc. System and methods for robust synchronization of datasets
US20040205175A1 (en) * 2003-03-11 2004-10-14 Kammerer Stephen J. Communications system for monitoring user interactivity
US8670760B2 (en) * 2008-01-24 2014-03-11 Kodiak Networks, Inc. Converged mobile-web communications solution
JP4828999B2 (en) * 2006-04-27 2011-11-30 京セラ株式会社 Mobile station and server
US9425973B2 (en) * 2006-12-26 2016-08-23 International Business Machines Corporation Resource-based synchronization between endpoints in a web-based real time collaboration
JP5228369B2 (en) * 2007-04-27 2013-07-03 日本電気株式会社 COMMUNICATION SYSTEM, COMMUNICATION METHOD, AND COMMUNICATION PROGRAM
WO2011027475A1 (en) * 2009-09-07 2011-03-10 株式会社東芝 Teleconference device
CN102118263B (en) * 2010-01-06 2015-05-20 中兴通讯股份有限公司 Method and system for distribution of configuration information
JP5998383B2 (en) * 2010-07-28 2016-09-28 株式会社リコー Transmission management system, transmission system, transmission management method, and program
US20120142324A1 (en) * 2010-12-03 2012-06-07 Qualcomm Incorporated System and method for providing conference information
JP5850224B2 (en) * 2011-02-28 2016-02-03 株式会社リコー Management system and program
KR20130093748A (en) * 2011-12-27 2013-08-23 한국전자통신연구원 System for supporting information-centric networking service based on p2p and method thereof
JP5880293B2 (en) * 2012-06-01 2016-03-08 株式会社リコー Communication system, call management server, location information server, and communication terminal
US8955048B2 (en) * 2012-11-27 2015-02-10 Ricoh Company, Ltd. Conference data management

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104660677A (en) * 2015-01-16 2015-05-27 北京邮电大学 Grid-structure-based tree CDN-P2P fusion network framework and method
US10601930B2 (en) 2016-02-25 2020-03-24 Alibaba Group Holding Limited Lease-based heartbeat protocol method and apparatus
WO2017147517A1 (en) * 2016-02-25 2017-08-31 Alibaba Group Holding Limited Lease-based heartbeat protocol method and apparatus
US11558918B2 (en) * 2016-03-01 2023-01-17 Telefonaktiebolaget Lm Ericsson (Publ) Correlation of user equipment identity to information centric networking request
US20180352602A1 (en) * 2016-03-01 2018-12-06 Telefonaktiebolaget Lm Ericsson (Publ) Correlation of User Equipment Identity to Information Centric Networking Request
US10749995B2 (en) 2016-10-07 2020-08-18 Cisco Technology, Inc. System and method to facilitate integration of information-centric networking into internet protocol networks
US10764188B2 (en) 2017-02-22 2020-09-01 Cisco Technology, Inc. System and method to facilitate robust traffic load balancing and remote adaptive active queue management in an information-centric networking environment
US10798633B2 (en) 2017-02-23 2020-10-06 Cisco Technology, Inc. Heterogeneous access gateway for an information-centric networking environment
US10805825B2 (en) 2017-02-23 2020-10-13 Cisco Technology, Inc. System and method to facilitate cross-layer optimization of video over WiFi in an information-centric networking environment
CN108174232A (en) * 2018-01-05 2018-06-15 白山市松睿科技有限公司 CDN-based network data transmission system and method
US11132353B2 (en) * 2018-04-10 2021-09-28 Intel Corporation Network component, network switch, central office, base station, data storage, method and apparatus for managing data, computer program, machine readable storage, and machine readable medium
CN109040787A (en) * 2018-09-05 2018-12-18 湖南华诺科技有限公司 Method for a distributed autonomous set-top box content distribution network
US11470176B2 (en) * 2019-01-29 2022-10-11 Cisco Technology, Inc. Efficient and flexible load-balancing for clusters of caches under latency constraint
US10951734B1 (en) * 2019-08-21 2021-03-16 Hyundai Motor Company Client electronic device, a vehicle, and a method of controlling the same

Also Published As

Publication number Publication date
US20140344378A1 (en) 2014-11-20
JP2016527589A (en) 2016-09-08
WO2014186760A3 (en) 2015-01-15
WO2014186757A2 (en) 2014-11-20
WO2014186757A3 (en) 2015-01-15
WO2014186760A2 (en) 2014-11-20
US10171523B2 (en) 2019-01-01
KR20160006781A (en) 2016-01-19
EP2984785A2 (en) 2016-02-17
CN105359457A (en) 2016-02-24

Similar Documents

Publication Publication Date Title
US10171523B2 (en) Multi-tier push service control architecture for large scale conference over ICN
Zhang et al. Scalable name-based data synchronization for named data networking
Zhu et al. Let's chronosync: Decentralized dataset state synchronization in named data networking
Guo et al. P2Cast: peer-to-peer patching scheme for VoD service
US9628527B2 (en) System and method for delivering content in a unicast/multicast manner
Xie et al. Coolstreaming: Design, theory, and practice
US8539088B2 (en) Session monitoring method, apparatus, and system based on multicast technologies
JP4799008B2 (en) Method for transmitting a multipoint stream in a local area network and connection device for implementing the method
Banerjee et al. Resilient multicast using overlays
CN102231762B (en) Peer-to-peer (P2P) server architecture supporting unlimited horizontal expansion
US20140019549A1 (en) Control System for Conferencing Applications in Named-Data Networks
CN104506330B (en) Message synchronization method and system
CN100550857C (en) Method, system and access device for realizing layer-2 intercommunication of local specific services
Moll et al. SoK: The evolution of distributed dataset synchronization solutions in NDN
CN102291458B (en) Method for peer-to-peer (p2p) server framework
CN101588251A (en) Method and apparatus for IMS instant message group sending
Azgin et al. Scalable multicast for content delivery in information centric networks
Toth Design of a social messaging system using stateful multicast
Ha et al. Topology and architecture design for peer to peer video live streaming system on mobile broadcasting social media
WO2023083136A1 (en) Live broadcasting method, system, bier controller, router, device, and readable medium
JP5574383B2 (en) Reception status estimation method, reception side multi-point distribution device, and program
JP5713499B2 (en) Multi-point distribution method and multi-point distribution system
Alekseev et al. Evaluation of a topological distance algorithm for construction of a P2P multicast hybrid overlay tree
Yang et al. GroupSync: Scalable Distributed Dataset Synchronization for Named Data Networking
Kim et al. Reliable data delivery for relay-based overlay multicast

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUTUREWEI TECHNOLOGIES, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAKRABORTI, ASIT;WANG, GUOQIANG;WEI, JUN;AND OTHERS;SIGNING DATES FROM 20140530 TO 20140602;REEL/FRAME:035802/0821

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION