EP2386164A2 - Web optimization - Google Patents

Web optimization

Info

Publication number
EP2386164A2
Authority
EP
European Patent Office
Prior art keywords
client
prefetch
request
content
url
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP10700649A
Other languages
German (de)
French (fr)
Inventor
William B. Sebastian
Dan Newman
Peter Lepeska
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Viasat Inc
Original Assignee
Viasat Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 12/571,288 (published as US 2010/0180005 A1)
Priority claimed from US 12/619,095 (published as US 8,171,135 B2)
Application filed by Viasat Inc
Publication of EP2386164A2

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/14Relay systems
    • H04B7/15Active relay systems
    • H04B7/185Space-based or airborne stations; Stations for satellite systems
    • H04B7/18523Satellite systems for providing broadcast service to terrestrial stations, i.e. broadcast satellite service
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/14Relay systems
    • H04B7/15Active relay systems
    • H04B7/185Space-based or airborne stations; Stations for satellite systems
    • H04B7/18578Satellite systems for providing broadband data service to individual earth stations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5681Pre-fetching or pre-delivering data based on network characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements

Definitions

  • the present invention relates, in general, to network acceleration, and more particularly, to URL masking, cache cycling, prefetch accumulation, DNS prefetching, and/or other types of network acceleration functionality.
  • a URL masking algorithm is provided to allow prefetchers and caches to work even when the URLs are constructed using scripts intended to block such behavior.
  • certain cache-busting techniques generate portions of the URL string, using Java scripts, to include unique values (e.g., random numbers, timestamps, etc.).
  • prefetchers may be fooled into thinking objects at the URL have not yet been prefetched, when in fact they have.
  • Embodiments mask these cache-busting portions of the URL string to allow the prefetcher to recognize the request as a previously prefetched URL.
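The masking idea above can be sketched in a few lines. This is a hypothetical illustration only: the query-parameter names (`rnd`, `ts`, etc.) and the regular expression are assumptions chosen to mimic common cache-busting scripts, not details taken from the patent.

```python
import re

# Hypothetical illustration of URL masking: normalize away script-generated,
# cache-busting query parameters so a live request matches a previously
# prefetched URL. The parameter names below are examples, not from the patent.
CACHE_BUST_PARAM = re.compile(r'(?:^|&)(?:rnd|rand|ts|t|_)=[^&]*')

def mask_url(url: str) -> str:
    """Return the URL with cache-busting query parameters removed."""
    if '?' not in url:
        return url
    base, query = url.split('?', 1)
    masked = CACHE_BUST_PARAM.sub('', query).lstrip('&')
    return base + ('?' + masked if masked else '')

# Two requests generated by a cache-busting script now map to one masked key,
# so the prefetcher recognizes the second request as already prefetched.
a = mask_url('http://example.com/ad.js?rnd=0.5716&size=300x250')
b = mask_url('http://example.com/ad.js?rnd=0.9034&size=300x250')
assert a == b == 'http://example.com/ad.js?size=300x250'
```

In practice a masker would likely learn which URL components vary per request rather than rely on a fixed parameter list; the fixed list here just keeps the sketch short.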
  • cache cycling is used to issue a fresh request to the content provider for website content each time the proxy server serves a request from cached data.
  • URL masking may allow a prefetcher to operate in the context of a cache-busting algorithm. Using prefetched content may reduce the apparent number of times the URL is requested, which may reduce advertising revenue and other metrics based on the number of requests.
  • Cache cycling embodiments maintain the request metrics while allowing optimal prefetching in the face of cache-busting techniques.
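The cache-cycling behavior described above can be sketched as follows. This is a minimal illustration under assumptions of my own: the refresh fetch is shown synchronously (in a real proxy it would run in the background), and all names are hypothetical.

```python
# Hypothetical sketch of cache cycling: serve a hit from the cache immediately,
# then "cycle" the entry by fetching a fresh copy from the origin so the
# content provider still sees one request per client request.
class CyclingCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin      # callable(url) -> bytes
        self._store = {}

    def get(self, url):
        if url in self._store:
            body = self._store[url]              # serve cached copy now
            self._store[url] = self._fetch(url)  # refresh + preserve metrics
            return body
        body = self._fetch(url)
        self._store[url] = body
        return body

origin_hits = []
def origin(url):
    origin_hits.append(url)
    return b'content-v%d' % len(origin_hits)

cache = CyclingCache(origin)
cache.get('http://example.com/a')   # miss: 1 origin request
cache.get('http://example.com/a')   # hit served locally, origin still queried
assert len(origin_hits) == 2        # request metrics preserved
```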
  • a number of techniques are provided for optimizing prefetcher functionality.
  • an accumulator is provided for optimizing performance of an accelerator abort system.
  • Chunked content (e.g., in HTTP chunked mode) is accumulated until enough data is available to make an abort decision.
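A minimal sketch of such an accumulator follows. The 32 KB threshold and the abort policy are illustrative assumptions, not values from the patent; the point is only that no decision is attempted until enough chunked data has been buffered.

```python
# Hypothetical sketch of the accumulator: with HTTP chunked transfer encoding
# there is no Content-Length header, so chunks are buffered until enough data
# has arrived to make an abort decision. The threshold and the example policy
# are illustrative values only.
DECISION_THRESHOLD = 32 * 1024

class PrefetchAccumulator:
    def __init__(self, decide):
        self._decide = decide      # callable(bytes) -> 'abort' or 'continue'
        self._buf = bytearray()
        self.decision = None

    def feed(self, chunk: bytes):
        """Returns the decision once made, or None while still accumulating."""
        self._buf += chunk
        if self.decision is None and len(self._buf) >= DECISION_THRESHOLD:
            # Enough data accumulated: evaluate the abort policy exactly once.
            self.decision = self._decide(bytes(self._buf))
        return self.decision

# Example policy: abort prefetches whose accumulated bytes look like video.
acc = PrefetchAccumulator(lambda data: 'abort' if data[:3] == b'FLV' else 'continue')
assert acc.feed(b'\x00' * 16 * 1024) is None        # still accumulating
assert acc.feed(b'\x00' * 16 * 1024) == 'continue'  # threshold reached
```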
  • socket mapping architectures are adjusted to allow prefetching of content copies for URLs requested multiple times on the same page.
  • persistent storage is adapted to cache prefetched, but unused data, and to provide access to the data to avoid subsequent redundant prefetching.
  • DNS transparent proxy and prefetch is integrated with HTTP transparent proxy and prefetch, so as to piggyback DNS information with HTTP frames. Prefetching may be provided for the DNS associated with all host names called in Java scripts to reduce the number of requests needed to the DNS server.
  • the DNS prefetch functionality may be used to begin locally satisfying DNS lookup requests at the client, even when the DNS lookup request is made before the DNS prefetch is complete.
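The server-side half of the DNS piggybacking described above can be sketched as follows: scan a page body for host names, resolve them once, and ship the answers alongside the HTTP frame. The regular expression and the `resolve` callable are hypothetical stand-ins for a real parser and resolver.

```python
import re

# Hypothetical sketch: the server-side proxy extracts host names referenced in
# a page's scripts, resolves them once, and piggybacks the answers with the
# HTTP frame so the client can satisfy DNS lookups locally.
HOST_RE = re.compile(r'https?://([a-z0-9.-]+)')

def dns_records_to_piggyback(page_body: str, resolve) -> dict:
    """Map each referenced host name to its resolved address."""
    hosts = set(HOST_RE.findall(page_body))
    return {h: resolve(h) for h in hosts}

page = '<script>img.src="http://cdn.example.com/a.gif";' \
       'fetch("http://api.example.com/x");</script>'
records = dns_records_to_piggyback(page, lambda h: '203.0.113.1')
assert set(records) == {'cdn.example.com', 'api.example.com'}
```

On the client side, a local resolver stub would answer lookups from this table, blocking briefly (rather than querying the remote DNS server) when a lookup arrives before the piggybacked records do.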
  • FIG. 1 is a block diagram illustrating satellite communications, according to one embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a gateway, according to one embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating a subscriber terminal, according to one embodiment of the present invention.
  • FIG. 4 is a generalized schematic diagram illustrating a computer system, in accordance with various embodiments of the invention.
  • FIG. 5 is a block diagram illustrating a networked system of computers, which can be used in accordance with various embodiments of the invention.
  • FIG. 6 is a block diagram illustrating a system for implementing prefetching, according to one embodiment of the present invention.
  • FIG. 7A and 7B are block diagrams illustrating a network acceleration module, according to one embodiment of the present invention.
  • FIG. 8 is a flow diagram illustrating a method for implementing URL masking, according to one embodiment of the present invention.
  • FIG. 9 is a flow diagram illustrating a method for further implementing URL masking, according to one embodiment of the present invention.
  • FIG. 10 is a block diagram illustrating a system for implementing URL masking, according to one embodiment of the present invention.
  • FIG. 11 illustrates a system of implementing a prior art HTTP cache.
  • FIG. 12 illustrates how a cache with cache cycling is used in conjunction with URL masking, according to various embodiments.
  • FIG. 13 illustrates a system for implementing cache cycling, in accordance with aspects of various embodiments.
  • FIG. 14 illustrates an embodiment of a method performed by a prefetch response abort.
  • FIG. 15 shows a flow diagram of an illustrative method for prefetching using an accumulator, according to various embodiments.
  • FIG. 16 shows relevant portions of an illustrative communications system, including an accumulator for a prefetch abort system, according to various embodiments.
  • FIG. 17 shows an illustrative method for exploiting accumulated data to further optimize prefetch abort operations, according to various embodiments.
  • the satellite communications system 100 includes a network 120, such as the Internet, interfaced with a gateway 115 that is configured to communicate with one or more subscriber terminals 130, via a satellite 105.
  • a gateway 115 is sometimes referred to as a hub or ground station.
  • Subscriber terminals 130 are sometimes called modems, satellite modems, or user terminals.
  • the communications system 100 is illustrated as a geostationary satellite 105 based communication system, it should be noted that various embodiments described herein are not limited to use in geostationary satellite based systems; for example, some embodiments could be low earth orbit (“LEO") satellite based systems or aerial payloads not in orbit and held aloft by planes, blimps, weather balloons, etc. Other embodiments could have a number of satellites instead of just one.
  • the network 120 may be any type of network and can include, for example, the Internet, an Internet protocol (“IP”) network, an intranet, a wide-area network (“WAN”), a local- area network (“LAN”), a virtual private network (“VPN”), the Public Switched Telephone Network (“PSTN”), and/or any other type of network supporting data communication between devices described herein, in different embodiments.
  • a network 120 may include both wired and wireless connections, including optical links.
  • the network 120 may connect the gateway 115 with other gateways (not shown), which are also in communication with the satellite 105.
  • the gateway 115 provides an interface between the network 120 and the satellite 105.
  • the gateway 115 may be configured to receive data and information directed to one or more subscriber terminals 130, and can format the data and information for delivery to the respective destination device via the satellite 105. Similarly, the gateway 115 may be configured to receive signals from the satellite 105 (e.g., from one or more subscriber terminals 130) directed to a destination in the network 120, and can process the received signals for transmission along the network 120.
  • a device (not shown) connected to the network 120 may communicate with one or more subscriber terminals 130. Data and information, for example IP datagrams, may be sent from a device in the network 120 to the gateway 115. It will be appreciated that the network 120 may be in further communication with a number of different types of providers, including content providers, application providers, service providers, etc. Further, in various embodiments, the providers may communicate content with the satellite communication system 100 through the network 120, or through other components of the system (e.g., directly through the gateway 115).
  • the gateway 115 may format frames in accordance with a physical layer definition for transmission to the satellite 105.
  • a variety of physical layer transmission modulation and coding techniques may be used with certain embodiments, including those defined with the DVB-S2 standard.
  • the link 135 from the gateway 115 to the satellite 105 may be referred to hereinafter as the downstream uplink 135.
  • the gateway 115 uses the antenna 110 to transmit the content to the satellite 105.
  • the antenna 110 comprises a parabolic reflector with high directivity in the direction of the satellite and low directivity in other directions.
  • a geostationary satellite 105 is configured to receive the signals from the location of antenna 110 and within the frequency band and specific polarization transmitted.
  • the satellite 105 may, for example, use a reflector antenna, lens antenna, array antenna, active antenna, or other mechanism for reception of such signals.
  • the satellite 105 may process the signals received from the gateway 115 and forward the signal from the gateway 115 containing the MAC frame to one or more subscriber terminals 130.
  • the satellite 105 operates in a multi-beam mode, transmitting a number of narrow beams each directed at a different region of the earth.
  • the satellite 105 may be configured as a "bent pipe" satellite, wherein the satellite may frequency convert the received carrier signals before retransmitting these signals to their destination, but otherwise perform little or no other processing on the contents of the signals.
  • single or multiple carrier signals could be used for the feeder spot beams.
  • a variety of physical layer transmission modulation and coding techniques may be used by the satellite 105 in accordance with certain embodiments, including those defined with the DVB-S2 standard. For other embodiments, a number of configurations are possible (e.g., using LEO satellites, or using a mesh network instead of a star network).
  • the service signals transmitted from the satellite 105 may be received by one or more subscriber terminals 130, via the respective subscriber antenna 125.
  • the subscriber antenna 125 and terminal 130 together comprise a very small aperture terminal ("VSAT"), with the antenna 125 measuring approximately 0.6 meters in diameter and having approximately 2 watts of power.
  • a variety of other types of subscriber antennae 125 may be used at the subscriber terminal 130 to receive the signal from the satellite 105.
  • the link 150 from the satellite 105 to the subscriber terminals 130 may be referred to hereinafter as the downstream downlink 150.
  • Each of the subscriber terminals 130 may include a hub or router (not pictured) that is coupled to multiple subscriber terminals 130.
  • As used herein, CPE refers to consumer premises equipment 160.
  • a subscriber terminal 130 may transmit data and information to a network 120 destination via the satellite 105.
  • the subscriber terminal 130 transmits the signals via the upstream uplink 145-a to the satellite 105 using the subscriber antenna 125-a.
  • the link from the satellite 105 to the gateway 115 may be referred to hereinafter as the upstream downlink 140.
  • one or more of the satellite links 135, 140, 145, 150 are capable of communicating using one or more communication schemes.
  • the communication schemes may be the same or different for different links.
  • the communication schemes may include different types of coding and modulation combinations.
  • various satellite links may communicate using physical layer transmission modulation and coding techniques using adaptive coding and modulation schemes, etc.
  • the communication schemes may also use one or more different types of multiplexing schemes, including Multi-Frequency Time-Division Multiple Access ("MF-TDMA”), Time Division Multiple Access (“TDMA”), Frequency Division Multiple Access (“FDMA”), Orthogonal Frequency Division Multiple Access (“OFDMA”), Code Division Multiple Access (“CDMA”), or any number of other schemes.
  • the satellite communications system 100 may use various techniques to "direct" content to a subscriber or group of subscribers. For example, the content may be tagged (e.g., using packet header information according to a transmission protocol) with a certain destination identifier (e.g., an IP address) or use different modcode points. Each subscriber terminal 130 may then be adapted to handle the received data according to the tags.
  • content destined for a particular subscriber terminal 130 may be passed on to its respective CPE 160, while content not destined for the subscriber terminal 130 may be ignored.
  • the subscriber terminal 130 caches information not destined for the associated CPE 160 for use if the information is later found to be useful in avoiding traffic over the satellite link.
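The per-terminal handling of tagged content can be sketched as follows: frames addressed to this terminal go to its CPE, while frames addressed elsewhere are opportunistically cached in case they later avoid traffic over the satellite link. The class and field names are hypothetical.

```python
# Hypothetical sketch of per-terminal handling of tagged broadcast content:
# forward frames addressed to this terminal to its CPE 160, and cache other
# frames in case they are later useful in avoiding satellite-link traffic.
class SubscriberTerminal:
    def __init__(self, my_addr):
        self.my_addr = my_addr
        self.delivered = []      # would be passed on to the CPE 160
        self.opportunistic = {}  # cache of content addressed elsewhere

    def handle(self, dest_addr, key, payload):
        if dest_addr == self.my_addr:
            self.delivered.append(payload)
        else:
            self.opportunistic[key] = payload  # may save a future request

st = SubscriberTerminal('10.0.0.5')
st.handle('10.0.0.5', 'obj1', b'mine')
st.handle('10.0.0.9', 'obj2', b'not-mine')
assert st.delivered == [b'mine'] and 'obj2' in st.opportunistic
```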
  • FIG. 2 shows a simplified block diagram 200 illustrating an embodiment of a gateway 115 coupled between the network 120 and an antenna 110, according to various embodiments.
  • the gateway 115 has a number of components, including a network interface module 210, a satellite modem termination system ("SMTS") 230, and a gateway transceiver module 260.
  • Components of the gateway 115 may be implemented, in whole or in part, in hardware. Thus, they may comprise one, or more, Application Specific Integrated Circuits ("ASICs") adapted to perform a subset of the applicable functions in hardware. Alternatively, the functions may be performed by one or more other processing units (or cores), on one or more integrated circuits.
  • Integrated circuits may be used (e.g., Structured/Platform ASICs, Field Programmable Gate Arrays ("FPGAs”) and other Semi-Custom ICs), which may be programmed.
  • Each may also be implemented, in whole or in part, with instructions embodied in a computer-readable medium, formatted to be executed by one or more general or application specific controllers.
  • Embodiments of the gateway 115 receive data from the network 120 (e.g., the network 120 of FIG. 1), including data originating from one or more origin servers 205 (e.g., content servers) and destined for one or more subscribers in a spot beam.
  • the data is received at the network interface module 210, which includes one or more components for interfacing with the network 120.
  • the network interface module 210 includes a network switch and a router.
  • the network interface module 210 interfaces with other modules, including a third-party edge server 212 and/or a traffic shaper module 214.
  • the third-party edge server 212 may be adapted to mirror content (e.g., implementing transparent mirroring, like would be performed in a point of presence ("POP") of a content delivery network ("CDN”)) to the gateway 115.
  • the third-party edge server 212 may facilitate contractual relationships between content providers and service providers to move content closer to subscribers in the satellite communication network 100.
  • the traffic shaper module 214 controls traffic from the network 120 through the gateway 115, for example, to help optimize performance of the satellite communication system 100 (e.g., by reducing latency, increasing effective bandwidth, etc.). In one embodiment, the traffic shaper module 214 delays packets in a traffic stream to conform to a predetermined traffic profile.
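Delaying packets to conform to a traffic profile can be sketched with a simple rate-limiter computation; the rate and packet sizes below are illustrative values, and a real shaper would typically use a token-bucket profile with a burst allowance.

```python
# Hypothetical sketch of the traffic shaper's delay policy: each packet is
# released no earlier than the time the configured rate profile allows.
def shape(packet_sizes, arrival_times, rate_bps):
    """Return the time (seconds) each packet may be released downstream."""
    release, next_free = [], 0.0
    for size, t in zip(packet_sizes, arrival_times):
        start = max(t, next_free)          # delay if the profile is exhausted
        release.append(start)
        next_free = start + size * 8 / rate_bps
    return release

# Three 1250-byte packets arriving together on a 10 kbps profile:
times = shape([1250, 1250, 1250], [0.0, 0.0, 0.0], 10_000)
assert times == [0.0, 1.0, 2.0]   # each packet occupies 1 s at 10 kbps
```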
  • Traffic is passed from the network interface module 210 to the SMTS 230 to be handled by one or more of its component modules.
  • the SMTS 230 includes a gateway accelerator module 250, a scheduler module 235, and support modules 246.
  • all traffic from the network interface module 210 is passed to the gateway accelerator module 250 for handling, as described more fully below.
  • some or all of the traffic from the gateway accelerator module 250 is passed to the support modules 246.
  • real-time types of data (e.g., User Datagram Protocol (“UDP”) data traffic, like Internet-protocol television (“IPTV”) programming) may be handled differently from non-real-time types of data (e.g., Transmission Control Protocol (“TCP”) data traffic).
  • Embodiments of the gateway accelerator module 250 provide various types of application, WAN/LAN, and/or other acceleration functionality.
  • the gateway accelerator module 250 implements functionality of AcceleNet applications from Intelligent Compression Technologies, Inc. ("ICT"), a division of ViaSat, Inc. This functionality may be used to exploit information from application layers of the protocol stack (e.g., layers 4 - 7 of the IP stack) through use of software or firmware operating in the subscriber terminal 130 and/or CPE 160.
  • Embodiments of the gateway accelerator module 250 also include a gateway parser module 252, a gateway prefetcher module 254, and/or a gateway masker module 246.
  • the gateway parser module 252 provides various script parsing functions for supporting functionality of the gateway accelerator module 250.
  • the gateway parser module 252 may be configured to implement advanced parsing of Java scripts to interpret web requests for use in prefetching.
  • Prefetching functionality may be implemented through the gateway prefetcher module 254 in the gateway accelerator module 250.
  • Embodiments of the gateway prefetcher module 254 handle one or more of various prefetching functions, including receiving and interpreting instructions from other components of the gateway accelerator module 250 as to what objects to prefetch, receiving and interpreting instructions from components of the subscriber terminal 130, generating and/or sending instructions to one or more content servers to retrieve prefetch objects, keeping track of prefetched and/or cached content, directing objects to be cached (e.g., in the gateway cache module 220), etc.
  • functionality of the gateway prefetcher module 254 and/or the gateway parser module 252 is optimized by other components of the gateway accelerator module 250.
  • requested URLs embedded in Java script may be parsed by the gateway parser module 252, and related objects may be prefetched by the gateway prefetcher module 254.
  • certain cache-busting techniques may limit the effectiveness of the gateway prefetcher module 254 (e.g., by fooling the gateway parser module 252).
  • Embodiments of the gateway masker module 246 are configured to implement URL masking to counter these cache-busting techniques, as discussed more fully below.
  • the gateway accelerator module 250 is adapted to provide high payload compression.
  • the gateway accelerator module 250 may compress payloads to the point where, in some web-browsing cases, over 70% of upload traffic consists of transport management and other overhead rather than the compressed payload data itself.
  • functionality of the gateway accelerator module 250 is closely integrated with the satellite link through components of the SMTS 230 to reduce upload bandwidth requirements and/or to more efficiently schedule the satellite link (e.g., by communicating with the scheduler module 235).
  • the link layer may be used to determine whether packets are successfully delivered, and those packets can be tied more closely with the content they supported through application layer information.
  • these and/or other functions of the gateway accelerator module 250 are provided by a proxy server 255 resident on (or in communication with) the gateway accelerator module 250.
  • the proxy server 255 is implemented with multiple servers. Each of the multiple servers may be configured to handle a portion of the traffic passing through the gateway accelerator module 250. It is worth noting that functionality of various embodiments described herein uses data which, at times, may be processed across multiple servers. As such, one or more server management modules may be provided for processing (e.g., tracking, routing, partitioning, etc.) data across the multiple servers. For example, when one server within the proxy server 255 receives a request from a subscriber terminal 130 on the spot beam, the server management module may process that request in the context of other similar requests received at other servers in the proxy server 255.
  • Data processed by the gateway accelerator module 250 may pass through the support modules 246 to the scheduler 235.
  • the support modules 246 include one or more types of modules for supporting the functionality of the SMTS 230, for example, including a multicaster module 240, a fair access policy (“FAP”) module 242, and an adaptive coding and modulation (“ACM”) module 244. In certain embodiments, some or all of the support modules 246 include off-the-shelf types of components.
  • Embodiments of the multicaster module 240 provide various functions relating to multicasting of data over the links of the satellite communication system 100.
  • the multicaster module 240 uses data generated by other components of the SMTS 230 (e.g., the gateway accelerator module 250) to prepare traffic for multicasting. For example, the multicaster module 240 may prepare datagrams as a multicast stream. Other embodiments of the multicaster module 240 perform more complex multicasting-related functionality. For example, the multicaster module 240 may contribute to determinations of whether data is unicast or multicast to one or more subscribers (e.g., using information generated by the gateway accelerator module 250), what modcodes to use, whether data should or should not be sent as a function of data cached at destination subscriber terminals 130, how to handle certain types of encryption, etc.
  • Embodiments of the FAP module 242 implement various FAP-related functions.
  • the FAP module 242 collects data from multiple components to determine how much network usage to attribute to a particular subscriber. For example, the FAP module 242 may determine how to count upload or download traffic against a subscriber's FAP.
  • the FAP module 242 dynamically adjusts FAPs according to various network link and/or usage conditions. For example, the FAP module 242 may adjust FAPs to encourage network usage during lower traffic times.
  • the FAP module 242 affects the operation of other components of the SMTS 230 as a function of certain FAP conditions. For example, the FAP module 242 may direct the multicaster module 240 to multicast certain types of data or to prevent certain subscribers from joining certain multicast streams as a function of FAP considerations.
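The usage-accounting idea behind the FAP bullets above can be sketched with a time-of-day adjustment. The 50% off-peak discount and the hour window are purely illustrative assumptions, not terms from the patent.

```python
# Hypothetical sketch of FAP accounting with a time-of-day adjustment:
# off-peak traffic is discounted to encourage usage during lower-traffic
# hours. The discount and the hour window are illustrative values only.
OFF_PEAK_HOURS = range(0, 6)

def fap_charge(bytes_used, hour):
    """Bytes counted against the subscriber's FAP for this transfer."""
    return bytes_used // 2 if hour in OFF_PEAK_HOURS else bytes_used

assert fap_charge(1000, 3) == 500    # off-peak: half counted against the FAP
assert fap_charge(1000, 14) == 1000  # peak: fully counted
```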
  • Embodiments of the ACM module 244 implement various ACM functions.
  • the ACM module 244 may track link conditions for certain spot beams, subscribers, etc., for use in dynamically adjusting modulation and/or coding schemes.
  • the ACM module 244 may help determine which subscribers should be included in which customer groupings or multicast streams as a function of optimizing resources through modcode settings.
  • the ACM module 244 implements ACM-aware encoding of data adapted for progressive encoding.
  • MPEG-4 video data may be adapted for progressive encoding in layers (e.g., a base layer and enhancement layers).
  • the ACM module 244 may be configured to set an appropriate modcode separately for each layer to optimize video delivery.
  • When traffic has been processed by the gateway accelerator module 250 and/or the support modules 246, the traffic is passed to the scheduler module 235.
  • Embodiments of the scheduler module 235 are configured to provide various functions relating to scheduling the links of the satellite communication system 100 handled by the gateway 115. For example, the scheduler module 235 may manage link bandwidth by scheduling license grants within a spot beam.
  • functionality of the SMTS 230 involves communication and interaction with a storage area network 222 ("SAN").
  • Embodiments of the SAN 222 include a gateway cache module 220, which may include any useful type of memory store for various types of functionality of the gateway 115.
  • the gateway cache module 220 may include volatile or non-volatile storage, servers, files, queues, etc.
  • the SAN 222 further includes a captive edge server 225, which may be in communication with the gateway cache module 220.
  • the captive edge server 225 provides functionality similar to that of the third-party edge server 212, including content mirroring.
  • the captive edge server 225 may facilitate different contractual relationships from those of the third-party edge server 212 (e.g., between the gateway 115 provider and various content providers).
  • the SMTS 230 provides many different types of functionality. For example, embodiments of the SMTS 230 oversee a variety of decoding, interleaving, decryption, and unscrambling techniques. The SMTS 230 may also manage functions applicable to the communication of content downstream through the satellite 105 to one or more subscriber terminals 130. As described more fully below with reference to various embodiments, the SMTS may handle different types of traffic in different ways (e.g., for different use cases of the satellite communication network 100).
  • some use cases involve contractual relationships and/or obligations with third-party content providers to interface with their edge servers (e.g., through the third-party edge server 212), while other use cases involve locally "re-hosting" certain content (e.g., through the captive edge server 225). Further, some use cases handle real- time types of data (e.g., UDP data) differently from non-real-time types of data (e.g., TCP data). Many other types of use cases are possible.
  • the gateway transceiver module 260 encodes and/or modulates data, using one or more error correction techniques, adaptive encoding techniques, baseband encapsulation, frame creation, etc. (e.g., using various modcodes, lookup tables, etc.). Other functions may also be performed by these components (e.g., by the SMTS 230), including upconverting, amplifying, filtering, tuning, tracking, etc.
  • the gateway transceiver module 260 communicates data to one or more antennae 110 for transmission via the satellite 105 to the subscriber terminals 130.
  • FIG. 3 shows a simplified block diagram 300 illustrating an embodiment of a subscriber terminal 130 coupled between the respective subscriber antenna 125 and the CPE 160, according to various embodiments.
  • the subscriber terminal 130 includes a terminal transceiver module 310, data processing modules 315, and a terminal cache module 335-a.
  • Embodiments of the data processing modules 315 include a MAC module 350, a terminal accelerator module 330, and a routing module 320.
  • the components may be implemented, in whole or in part, in hardware. Thus, they may comprise one, or more, Application Specific Integrated Circuits ("ASICs") adapted to perform a subset of the applicable functions in hardware. Alternatively, the functions may be performed by one or more other processing modules (or cores), on one or more integrated circuits. In other embodiments, other types of integrated circuits may be used (e.g., Structured/Platform ASICs, Field Programmable Gate Arrays ("FPGAs”) and other Semi-Custom ICs), which may be programmed. Each may also be implemented, in whole or in part, with instructions embodied in a computer-readable medium, formatted to be executed by one or more general or application specific processors.
  • a signal from the subscriber antenna 125 is received by the subscriber terminal 130 at the terminal transceiver module 310.
  • Embodiments of the terminal transceiver module 310 may amplify the signal, acquire the carrier, and/or downconvert the signal. In some embodiments, this functionality is performed by other components (either inside or outside the subscriber terminal 130).
  • Embodiments of the MAC module 350 prepare data for communication to other components of, or in communication with, the subscriber terminal 130, including the terminal accelerator module 330, the routing module 320, and/or the CPE 160.
  • the MAC module 350 may modulate, encode, filter, decrypt, and/or otherwise process the data to be compatible with the CPE 160.
  • the MAC module 350 includes a pre-processing module 352.
  • the pre-processing module 352 implements certain functionality for optimizing the other components of the data processing modules 315.
  • the pre-processing module 352 processes the signal received from the terminal transceiver module by interpreting (e.g., and decoding) modulation and/or coding schemes, interpreting multiplexed data streams, filtering the digitized signal, parsing the digitized signal into various types of information (e.g., by extracting the physical layer header), etc.
  • the pre-processing module 352 pre-filters traffic to determine which data to route directly to the routing module 320, and which data to route through the terminal accelerator module 330 for further processing.
  • Embodiments of the terminal accelerator module 330 provide substantially the same functionality as the gateway accelerator module 250, including various types of applications, WAN/LAN, and/or other acceleration functionality.
  • the terminal accelerator module 330 implements functionality of AcceleNet™ applications, like interpreting data communicated by the gateway 115 using high payload compression, handling various prefetching functions, parsing scripts to interpret requests, etc.
  • these and/or other functions of the terminal accelerator module 330 are provided by a proxy client 332 resident on (e.g., or in communication with) the terminal accelerator module 330. Data from the MAC module 350 and/or the terminal accelerator module 330 may then be routed to one or more CPEs 160 by the routing module 320.
  • the terminal accelerator module 330 includes a terminal prefetcher module 334, a terminal parser module 342, and/or a terminal masker module 340.
  • the terminal parser module 342, the terminal prefetcher module 334, and the terminal masker module 340 provide the same or similar functionality as the gateway parser module 252, the gateway prefetcher module 254, and the gateway masker module 246, respectively.
  • similar modules in the terminal accelerator module 330 and the gateway accelerator module 250 may work together to implement their respective functions.
  • the components of the subscriber terminal 130 and the gateway 115 provide different functionality.
  • functionality of the gateway parser module 252 may be asymmetric, such that it would not be desirable or possible to provide the same functionality in the terminal parser module 342.
  • the terminal accelerator module 330 further includes a prefetch list 336.
  • output from the data processing module 320 and/or the terminal accelerator module 330 is stored in the terminal cache module 335-a.
  • the data processing module 320 and/or the terminal accelerator module 330 may be configured to determine what data should be stored in the terminal cache module 335-a and which data should not (e.g., which data should be passed to the CPE 160).
  • the terminal cache module 335-a may include any useful type of memory store for various types of functionality of the subscriber terminal 130.
  • the terminal cache module 335-a may include volatile or non-volatile storage, servers, files, queues, etc.
  • storage functionality and/or capacity is shared between an integrated (e.g., on-board) terminal cache module 335-a and an extended (e.g., off-board) cache module 335-b.
  • the extended cache module 335-b may be implemented in various ways, including as an attached peripheral device (e.g., a thumb drive, USB hard drive, etc.), a wireless peripheral device (e.g., a wireless hard drive), a networked peripheral device (e.g., a networked server), etc.
  • the subscriber terminal 130 interfaces with the extended cache module 335-b through one or more ports 338.
  • functionality of the terminal cache module 335-a is implemented as storage integrated into or in communication with the CPE 160 of FIG. 1.
  • Some embodiments of the CPE 160 are standard devices or systems with no specifically tailored hardware or software (e.g., shown as CPE 160-a).
  • Other embodiments of the CPE 160 include hardware and/or software modules adapted to optimize or enhance integration of the CPE 160 with the subscriber terminal 130 (e.g., shown as alternate CPE 160-b).
  • the alternate CPE 160-b is shown to include a CPE accelerator module 362, a CPE processor module 366, and a CPE cache module 364.
  • Embodiments of the CPE accelerator module 362 are configured to implement the same, similar, or complementary functionality as the terminal accelerator module 330.
  • the CPE accelerator module 362 may be a software client version of the terminal accelerator module 330.
  • the functionality of the data processing modules 315 is implemented by the CPE accelerator module 362 and/or the CPE processor module 366. In these embodiments, it may be possible to reduce the complexity of the subscriber terminal 130 by shifting functionality to the alternate CPE 160-b.
  • Embodiments of the CPE cache module 364 may include any type of data caching components in or in communication with the alternate CPE 160-b (e.g., a computer hard drive, a digital video recorder ("DVR”), etc.).
  • the CPE cache module 364 is in communication with the extended cache module 335-b, for example, via one or more ports 338-b.
  • the subscriber terminal 130 is configured to transmit data back to the gateway 115.
  • Embodiments of the data processing modules 315 and the terminal transceiver module 310 are configured to provide functionality for communicating information back through the satellite communication system 100 (e.g., for directing provision of services). For example, information about what is stored in the terminal cache module 335-a or the CPE cache module 364 may be sent back to the gateway 115 for limiting repetitious file transfers, as described more fully below.
  • the satellite communications system 100 may be used to provide different types of communication services to subscribers.
  • the satellite communications system 100 may provide content from the network 120 to a subscriber's CPE 160, including Internet content, broadcast television and radio content, on-demand content, voice-over-Internet-protocol ("VoIP”) content, and/or any other type of desired content.
  • this content may be communicated to subscribers in different ways, including through unicast, multicast, broadcast, and/or other communications.
  • FIG. 4 provides a schematic illustration of one embodiment of a computer system 400 that can perform the methods of the invention, as described herein, and/or can function as, for example, gateway 115, subscriber terminal 130, etc. It should be noted that FIG. 4 is meant only to provide a generalized illustration of various components, any or all of which may be utilized as appropriate. FIG. 4, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
  • the computer system 400 is shown comprising hardware elements that can be electrically coupled via a bus 405 (or may otherwise be in communication, as appropriate).
  • the hardware elements can include one or more processors 410, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration chips, and/or the like); one or more input devices 415, which can include without limitation a mouse, a keyboard and/or the like; and one or more output devices 420, which can include without limitation a display device, a printer and/or the like.
  • the computer system 400 may further include (and/or be in communication with) one or more storage devices 425, which can comprise, without limitation, local and/or network accessible storage and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like.
  • storage devices 425 can comprise, without limitation, local and/or network accessible storage and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like.
  • the computer system 400 might also include a communications subsystem 430, which can include without limitation a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, etc.), and/or the like.
  • the communications subsystem 430 may permit data to be exchanged with a network (such as the network described below, to name one example), and/or any other devices described herein.
  • the computer system 400 will further comprise a working memory 435, which can include a RAM or ROM device, as described above.
  • the computer system 400 also can comprise software elements, shown as being currently located within the working memory 435, including an operating system 440 and/or other code, such as one or more application programs 445, which may comprise computer programs of the invention, and/or may be designed to implement methods of the invention and/or configure systems of the invention, as described herein.
  • one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer).
  • a set of these instructions and/or codes might be stored on a computer-readable storage medium, such as the storage device(s) 425 described above.
  • the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program a general purpose computer with the instructions/code stored thereon.
  • These instructions might take the form of executable code, which is executable by the computer system 400 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 400 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
  • the invention employs a computer system (such as the computer system 400) to perform methods of the invention.
  • some or all of the procedures of such methods are performed by the computer system 400 in response to processor 410 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 440 and/or other code, such as an application program 445) contained in the working memory 435.
  • Such instructions may be read into the working memory 435 from another machine-readable medium, such as one or more of the storage device(s) 425.
  • execution of the sequences of instructions contained in the working memory 435 might cause the processor(s) 410 to perform one or more procedures of the methods described herein.
  • The terms "machine-readable medium" and "computer-readable medium", as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion.
  • various machine-readable media might be involved in providing instructions/code to processor(s) 410 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals).
  • a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
  • Non-volatile media includes, for example, optical or magnetic disks, such as the storage device(s) 425.
  • Volatile media includes, without limitation, dynamic memory, such as the working memory 435.
  • Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 405, as well as the various components of the communication subsystem 430 (and/or the media by which the communications subsystem 430 provides communication with other devices).
  • transmission media can also take the form of waves (including without limitation radio, acoustic and/or light waves, such as those generated during radio-wave and infra-red data communications).
  • Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
  • Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 410 for execution.
  • the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer.
  • a remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 400.
  • These signals which might be in the form of electromagnetic signals, acoustic signals, optical signals and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
  • the communications subsystem 430 (and/or components thereof) generally will receive the signals, and the bus 405 then might carry the signals (and/or the data, instructions, etc., carried by the signals) to the working memory 435, from which the processor(s) 410 retrieves and executes the instructions.
  • the instructions received by the working memory 435 may optionally be stored on a storage device 425 either before or after execution by the processor(s) 410.
  • FIG. 5 illustrates a schematic diagram of a system 500 that can be used in accordance with one set of embodiments.
  • the system 500 can include one or more user computers 505.
  • the user computers 505 can be general purpose personal computers (including, merely by way of example, personal computers and/or laptop computers running any appropriate flavor of Microsoft Corp.'s Windows™ (e.g., Vista™) and/or Apple Corp.'s Macintosh™ operating systems) and/or workstation computers running any of a variety of commercially available UNIX™ or UNIX-like operating systems.
  • These user computers 505 can also have any of a variety of applications, including one or more applications configured to perform methods of the invention, as well as one or more office applications, database client and/or server applications, and web browser applications.
  • the user computers 505 can be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant (PDA), capable of communicating via a network (e.g., the network 510 described below) and/or displaying and navigating web pages or other types of electronic documents.
  • while the exemplary system 500 is shown with three user computers 505, any number of user computers can be supported.
  • Certain embodiments of the invention operate in a networked environment, which can include a network 510.
  • the network 510 can be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially available protocols, including without limitation TCP/IP, SNA, IPX, AppleTalk, and the like.
  • the network 510 can be a local area network ("LAN"), including without limitation an Ethernet network, a Token-Ring network and/or the like; a wide-area network ("WAN"); a virtual network, including without limitation a virtual private network ("VPN"); the Internet; an intranet; an extranet; a public switched telephone network ("PSTN"); an infra-red network; a wireless network, including without limitation a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks.
  • Embodiments of the invention can include one or more server computers 515.
  • Each of the server computers 515 may be configured with an operating system, including without limitation any of those discussed above, as well as any commercially (or freely) available server operating systems.
  • Each of the servers 515 may also be running one or more applications, which can be configured to provide services to one or more clients 505 and/or other servers 515.
  • one of the servers 515 may be a web server, which can be used, merely by way of example, to process requests for web pages or other electronic documents from user computers 505.
  • the web server can also run a variety of server applications, including HTTP servers, FTP servers, CGI servers, database servers, Java servers, and the like.
  • the web server may be configured to serve web pages that can be operated within a web browser on one or more of the user computers 505 to perform methods of the invention.
  • the server computers 515 might include one or more application servers, which can include one or more applications accessible by a client running on one or more of the client computers 505 and/or other servers 515.
  • the server(s) 515 can be one or more general purpose computers capable of executing programs or scripts in response to the user computers 505 and/or other servers 515, including without limitation web applications (which might, in some cases, be configured to perform methods of the invention).
  • a web application can be implemented as one or more scripts or programs written in any suitable programming language, such as Java™, C, C#™ or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming/scripting languages.
  • the application server(s) can also include database servers, including without limitation those commercially available from Oracle™, Microsoft™, Sybase™, IBM™ and the like, which can process requests from clients (including, depending on the configuration, database clients, API clients, web browsers, etc.) running on a user computer 505 and/or another server 515.
  • an application server can create web pages dynamically for displaying the information in accordance with embodiments of the invention.
  • Data provided by an application server may be formatted as web pages (comprising HTML, Javascript, etc., for example) and/or may be forwarded to a user computer 505 via a web server (as described above, for example).
  • a web server might receive web page requests and/or input data from a user computer 505 and/or forward the web page requests and/or input data to an application server.
  • a web server may be integrated with an application server.
  • one or more servers 515 can function as a file server and/or can include one or more of the files (e.g., application code, data files, etc.) necessary to implement methods of the invention incorporated by an application running on a user computer 505 and/or another server 515.
  • a file server can include all necessary files, allowing such an application to be invoked remotely by a user computer 505 and/or server 515.
  • the system can include one or more databases 520.
  • the location of the database(s) 520 is discretionary: merely by way of example, a database 520a might reside on a storage medium local to (and/or resident in) a server 515a (and/or a user computer 505).
  • a database 520b can be remote from any or all of the computers 505, 515, so long as the database can be in communication (e.g., via the network 510) with one or more of these.
  • a database 520 can reside in a storage-area network ("SAN") familiar to those skilled in the art.
  • SAN storage-area network
  • the database 520 can be a relational database, such as an Oracle™ database, that is adapted to store, update, and retrieve data in response to SQL-formatted commands.
  • the database might be controlled and/or maintained by a database server, as described above, for example.
  • Embodiments include methods, systems, and devices that implement various techniques for optimizing web performance over satellite communication links. It will be appreciated that other components and systems may be used to provide functionality of the various embodiments described herein. As such, descriptions of various embodiments in the context of components and functionality of FIGS. 1-5 are intended only for clarity, and should not be construed as limiting the scope of the invention. [0091] For example, embodiments of the invention may be used to address certain cold access metrics. Cold access (e.g., a first visit to a website with a clear cache) to popular websites is a well-established metric for user experience on a public network, as it is the operation in which network performance is most clearly and frequently apparent to the end user. Consequently, improvements in this cold access metric can play a role in driving consumer purchasing decisions, such as in selecting network access providers or deciding whether to use an acceleration service. There are a number of factors that may contribute to the cold access metric.
  • Some factors that may contribute to the cold access metric relate to the number of round trip times ("RTTs") needed to communicate content between elements of the satellite systems (e.g., between the gateway 115 and the subscriber terminal 130 of the satellite communication system 100 of FIG. 1). Because of the large distance that must be traveled to and from the satellite 105, some data latency is inherent in any satellite communication system 100. This latency may be increased with each RTT needed to fulfill a request for data. As such, reducing the number of RTTs needed to communicate information over the satellite communication system 100 may significantly reduce the data transfer times (e.g., download times) over the communication links.
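As a rough sense of scale (the figures below are textbook approximations, not values from the patent), the propagation delay alone for a geostationary link can be sketched as:

```python
# Rough figures for a geostationary link; actual delay varies with
# ground-station location, satellite longitude, and processing overhead.
GEO_ALTITUDE_KM = 35_786          # approximate GEO orbital altitude
SPEED_OF_LIGHT_KM_S = 299_792.458

def satellite_rtt_ms(round_trips: int) -> float:
    """Propagation delay for `round_trips` request/response exchanges.

    One round trip crosses the satellite link twice (request up/down,
    response up/down): four hops of roughly GEO altitude each.
    """
    one_way_s = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
    return round_trips * 4 * one_way_s * 1000.0

# A single RTT costs roughly half a second of pure propagation delay,
# so a page load needing ten RTTs spends several seconds in transit
# alone -- which is why eliminating RTTs matters so much here.
print(round(satellite_rtt_ms(1)))   # roughly 477 ms per round trip
print(round(satellite_rtt_ms(10)))  # roughly 4.8 s for a 10-RTT load
```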
  • the gateway prefetcher module 254 and/or the terminal prefetcher module 334 may be capable of determining from a website request how to prefetch much of the content for the website (e.g., through intelligent script parsing). However, receipt of the prefetched content may be delayed while the gateway 115 (e.g., acting as a proxy server) waits for responses from origin servers serving the website content. These delays may substantially offset reductions in delay provided by the prefetching functionality of the gateway prefetcher module 254 and/or the terminal prefetcher module 334.
  • Embodiments of the invention implement various types of functionality to address these and other factors to optimize web access performance.
  • Some embodiments use acceleration functionality like advanced prefetching and compression (e.g., through the gateway accelerator module 250 and/or the terminal accelerator module 330) to reduce the number of RTTs.
  • Other embodiments use uniform resource locator ("URL") anti-aliasing and/or cycle caching functionality to enhance performance of the satellite communication system 100 without substantially interfering with the commercial objectives of the content providers.
  • Still other embodiments provide improved parsing functionality to optimize prefetching results.
  • a URL masking algorithm is provided to allow prefetchers and caches to work even when the URLs are constructed using scripts intended to block such behavior.
  • certain cache-busting techniques generate portions of the URL string, using Java scripts, to include unique values (e.g., random numbers, timestamps, etc.).
  • prefetchers may be fooled into thinking objects at the URL have not yet been prefetched, when in fact they have.
  • Embodiments mask these cache-busting portions of the URL string to allow the prefetcher to recognize the request as a previously prefetched URL.
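A minimal sketch of such masking, assuming hypothetical query-parameter names (`t`, `rnd`, `cb`, etc.) as stand-ins for whatever cache-busting pattern a real deployment would match:

```python
import re

# Illustrative only: the patterns below are assumptions about what a
# cache-busting segment might look like; the patent does not specify a
# particular matching rule.
CACHE_BUST_PATTERNS = [
    re.compile(r'([?&](?:t|ts|rnd|rand|cb)=)\d+'),  # e.g. t=1262304000000
]

def mask_url(url: str) -> str:
    """Replace volatile query values with a fixed token so that two
    requests for the same underlying object compare equal."""
    for pattern in CACHE_BUST_PATTERNS:
        url = pattern.sub(r'\g<1>*', url)
    return url

a = mask_url('http://example.com/ad.gif?rnd=482910476')
b = mask_url('http://example.com/ad.gif?rnd=193847265')
assert a == b  # the prefetcher now recognizes the second request
```

With the volatile segment masked, the prefetcher can index its store by masked URL and match a browser request against a previously prefetched object.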
  • cache cycling is used to issue a fresh request to the content provider for website content each time the proxy server serves a request from cached data.
  • URL masking may allow a prefetcher to operate in the context of a cache-busting algorithm. Using prefetched content may reduce the apparent number of times the URL is requested, which may reduce advertising revenue and other metrics based on the number of requests.
  • Cache cycling embodiments maintain the request metrics while allowing optimal prefetching in the face of cache-busting techniques.
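The cache-cycling behavior described above can be sketched as follows; `CyclingCache` and `fetch_from_origin` are hypothetical names, not from the patent:

```python
# Sketch of cache cycling: serve the cached copy immediately, then issue
# a fresh request so the content provider still counts one request per
# access. `fetch_from_origin` stands in for the proxy server's real
# fetch path.
class CyclingCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin
        self._store = {}

    def get(self, url):
        cached = self._store.get(url)
        if cached is not None:
            # Cache hit: serve the stored copy with no network wait, but
            # still fetch from the origin ("cycle") so request metrics
            # are preserved and the cache holds a fresh copy next time.
            self._store[url] = self._fetch(url)
            return cached
        # Cache miss: fetch, store, and serve.
        body = self._fetch(url)
        self._store[url] = body
        return body
```

In a real proxy the refresh would likely run asynchronously, so the hit is served without waiting on the origin response.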
  • an accumulator is provided for optimizing performance of an accelerator abort system. Chunked content (e.g., in HTTP chunked mode) is accumulated until enough data is available to make an abort decision.
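One way such an accumulator might be structured; the 32 KB threshold and the size-based abort policy are illustrative assumptions, not figures from the patent:

```python
# Accumulate chunked response data until an abort decision can be made.
ABORT_DECISION_BYTES = 32 * 1024  # assumed threshold

class ChunkAccumulator:
    def __init__(self):
        self._chunks = []
        self._size = 0
        self.decision = None  # None until enough data has arrived

    def feed(self, chunk: bytes):
        """Add one HTTP chunk; returns the decision once one exists."""
        self._chunks.append(chunk)
        self._size += len(chunk)
        if self.decision is None and self._size >= ABORT_DECISION_BYTES:
            self.decision = self._decide()
        return self.decision

    def _decide(self):
        # Placeholder policy: abort acceleration of very large objects.
        # A real system would also inspect content type, compressibility,
        # and similar signals before deciding.
        return "abort" if self._size > 4 * ABORT_DECISION_BYTES else "forward"
```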
  • socket mapping architectures are adjusted to allow prefetching of content copies for URLs requested multiple times on the same page.
  • persistent storage is adapted to cache prefetched, but unused data, and to provide access to the data to avoid subsequent redundant prefetching.
  • DNS transparent proxy and prefetch are integrated with HTTP transparent proxy and prefetch, so as to piggyback DNS information with HTTP frames.
  • prefetching is provided for the DNS associated with all hostnames called in Java scripts to reduce the number of requests needed to the DNS server.
  • delivery of objects is prioritized according to browser rendering characteristics. For example, data is serialized back to a subscriber's browser so as to prioritize objects needing further parsing or having valuable information with respect to rendering.
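A simple sketch of such prioritization; the content-type ranking below is an assumption for illustration, not an ordering specified by the patent:

```python
# Objects that unblock parsing and rendering (HTML, CSS, scripts) are
# serialized back to the browser first; leaf content such as images last.
RENDER_PRIORITY = {
    'text/html': 0,                 # needs further parsing immediately
    'text/css': 1,
    'application/javascript': 1,    # may trigger more requests when parsed
    'image/gif': 3,
    'image/jpeg': 3,
}

def serialize_order(objects):
    """objects: list of (url, content_type); returns URLs in send order.
    Unknown types get a middle priority of 2."""
    return [url for url, ctype in
            sorted(objects, key=lambda o: RENDER_PRIORITY.get(o[1], 2))]
```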
  • Public web sites may deploy Java scripts that make each request for an object appear with a unique URL. For example, this technique allows cycling of ad content and also prevents caches from interfering with the accounting of site accesses.
  • These so-called "cache-busting" techniques may limit prefetching functionality (e.g., functionality of the gateway prefetcher module 254 and/or the terminal prefetcher module 334), as the URL prefetched on the proxy server will often not match the one requested by the browser. For example, to protect their commercial interests with respect to delivery and accounting of advertising content, commercial websites employ a number of cache-busting techniques.
  • One illustrative cache-busting technique uses functions, such as random number generators and millisecond timestamps, to produce unique values each time they are executed. These unique values may then be used as part of a URL to generate unique URLs with each subsequent request for the same website.
  • the time string appended to the URL is an integer with millisecond precision, so that no two calls to this routine may ever result in the same URL string.
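In Python terms (real sites typically do this in page JavaScript), the kind of URL generation described might look like the following sketch:

```python
import random
import time

# Illustrative cache-busting URL generation: a millisecond timestamp and
# a random nonce make each generated URL effectively unique.
def cache_busted_url(base: str) -> str:
    millis = int(time.time() * 1000)   # millisecond-precision timestamp
    nonce = random.randint(0, 10**9)   # random component
    return f"{base}?t={millis}&rnd={nonce}"

u1 = cache_busted_url('http://example.com/banner.gif')
u2 = cache_busted_url('http://example.com/banner.gif')
# The two URLs differ even when the underlying object is identical, so a
# naive prefetcher or cache treats them as distinct objects.
```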
  • the system may include a user system 602, a proxy client 612 and a proxy server 632.
  • the user system may include a client graphical user interface (GUI) 610.
  • GUI 610 may allow a user to configure performance aspects of system 600. For example, the user may adjust the compression parameters and/or algorithms, content filters (e.g., blocking illicit websites), and enable or disable various features used by system 600.
  • some of the features may include network diagnostics, error reporting, as well as controlling, for example, prefetch response abort 642.
  • Such control may include adding and/or removing pages (i.e., URLs) to or from whitelist 648 and/or blacklist 649.
  • the user selects a uniform resource locator (URL) address which directs web browser 606 (e.g., Internet Explorer®, Firefox®, Netscape Navigator®, etc.) to a website (e.g., cnn.com, google.com, yahoo.com, etc.).
  • web browser 606 may check browser cache 604 to determine whether the website associated with the selected URL is located within browser cache 604. If the website is located within browser cache 604, the amount of time the website has been in the cache is checked to determine if the cached website is "fresh" (i.e., new) enough to use. For example, the amount of time that a website may be considered fresh may be 5 minutes; however, other time limits may be used.
  • If the cached website is fresh, web browser 606 renders the cached page. However, if the website has either not been cached or the cached webpage is not fresh, web browser 606 sends a request to the Internet for the website.
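The freshness check described above can be sketched as follows; the dict-based cache and function names are illustrative, while the 5-minute window is the example from the text:

```python
import time

FRESHNESS_WINDOW_S = 5 * 60  # the 5-minute example window

def lookup(browser_cache: dict, url: str):
    """Return a cached page if present and fresh, else None (meaning the
    browser must request the page from the network).

    browser_cache maps url -> (body, fetched_at_epoch_seconds)."""
    entry = browser_cache.get(url)
    if entry is None:
        return None          # never cached
    body, fetched_at = entry
    if time.time() - fetched_at > FRESHNESS_WINDOW_S:
        return None          # stale: too old to use
    return body              # fresh: render from cache
```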
  • redirector 608 intercepts the request sent from web browser 606.
  • Redirector 608 instead sends the request through a local bus 605 to proxy client 612.
  • proxy client 612 may be implemented as a software application running on user system 602.
  • proxy client 612 may be implemented on a separate computer system and is connected to user system 602 via a high speed/low latency link (e.g., a branch office LAN subnet, etc.).
  • proxy client 612 includes a request parser 616.
  • Request parser 616 may check cache optimizer 614 to determine if a cached copy of the requested website may still be able to be used.
  • Cache optimizer 614 is in communication with browser cache 604 in order to have access to cached websites.
  • Cache optimizer 614 is able to access browser cache 604 without creating a redundant copy of the cached websites, thus requiring less storage space.
  • cache optimizer 614 implements more effective algorithms to determine whether a cached website is fresh.
  • cache optimizer 614 may implement the cache expiration algorithms from HTTP v1.1 (i.e., RFC 2616), which may not be natively supported in browser 606.
  • browser cache 604 may inappropriately consider a cached website as too old to use; however, cache optimizer 614 may still be able to use the cached website. More efficient use of cached websites can improve browsing efficiency by reducing the number of Internet accesses.
  • request parser 616 checks prefetch manager 620 to determine if the requested website has been prefetched. Prefetching occurs when the accelerator requests an item from the website prior to receiving a request from web browser 606. Prefetching can potentially save round trips of data access from user system 602 to the Internet.
  • request parser 616 forwards the request to a request encoder 618.
  • Request encoder 618 encodes the request into a compressed version of the request using one of many possible data compression algorithms. For example, these algorithms may employ a coding dictionary 622 to store strings so that data from previous web objects can be used to compress data from new pages. Accordingly, where the request for the website is 550 bytes in total, the encoded request may be as small as 50 bytes. This level of compression can save bandwidth on a connection, such as high latency link 630.
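The size reduction described above (e.g., a 550-byte request encoded into roughly 50 bytes) depends on a coding dictionary built from previous web objects. The toy C sketch below illustrates the idea only, assuming a tiny hard-coded dictionary and one-byte tokens; a real encoder using something like coding dictionary 622 would be far more elaborate and would also escape raw input bytes that collide with token values:

```c
#include <assert.h>
#include <string.h>

/* Toy coding dictionary: strings from previous requests map to one-byte
 * tokens starting at 0x80. Contents are invented for illustration. */
static const char *dict[] = {
    "GET http://www.example.com/",   /* token 0x80 */
    "Accept-Encoding: gzip\r\n",     /* token 0x81 */
};

/* Encode: emit one token byte per dictionary hit, raw bytes otherwise.
 * Returns the encoded length. */
static size_t encode(const char *in, unsigned char *out)
{
    size_t o = 0;
    while (*in) {
        int matched = 0;
        for (size_t d = 0; d < sizeof dict / sizeof dict[0]; d++) {
            size_t len = strlen(dict[d]);
            if (strncmp(in, dict[d], len) == 0) {
                out[o++] = (unsigned char)(0x80 + d);  /* dictionary hit */
                in += len;
                matched = 1;
                break;
            }
        }
        if (!matched)
            out[o++] = (unsigned char)*in++;  /* pass byte through */
    }
    return o;
}
```

Here a 37-byte request line collapses to 11 bytes because its first 27 bytes match a dictionary entry, which is the kind of saving that matters on high latency link 630.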
  • high latency link 630 may be a wireless link, a cellular link, a satellite link, a dial-up link, etc.
  • protocol 628 is Intelligent Compression Technology's® (ICT) transport protocol (ITP). Nonetheless, other protocols may be used, such as the standard transmission control protocol (TCP).
  • ITP maintains a persistent connection with proxy server 632. The persistent connection between proxy client 612 and proxy server 632 enables system 600 to eliminate the inefficiencies and overhead costs associated with creating a new connection for each request.
  • the encoded request is forwarded from protocol 628 to request decoder 636.
  • Request decoder 636 applies a decoding algorithm appropriate for the encoding performed by request encoder 618. In one embodiment, this process utilizes a coding dictionary 638 in order to translate the encoded request back into a standard format which can be accessed by the destination website.
  • if the HTTP request includes a cookie (or other special instructions), such as a "referred by" header or the type of encoding accepted, information about the cookie or instructions may be stored in a cookie cache 655.
  • Request decoder 636 then transmits the decoded request to the destination website over a low latency link 656.
  • Low latency link 656 may be, for example, a cable modem connection, a digital subscriber line (DSL) connection, a T1 connection, a fiber optic connection, etc.
  • a response parser 644 receives a response from the requested website.
  • this response may include an attachment, such as an image and/or text file.
  • Some types of attachments such as HTML, XML, CSS, or Java Scripts, may include references to other "in-line" objects that may be needed to render a requested web page.
  • response parser 644 may forward the objects to a prefetch scanner 646.
  • prefetch scanner 646 scans the attached file and identifies URLs of in-line objects that may be candidates for prefetching.
  • objects that may be needed for the web page may also be specified in Java scripts that appear within the HTML or CSS page or within a separate Java script file.
  • the identified candidates are added to a candidate list.
  • prefetch scanner 646 may notify prefetch abort 642 of the context in which the object was identified, such as the type of object in which it was found and/or the syntax in which the URL occurred. This information may be used by prefetch abort 642 to determine the probability that the URL will actually be requested by browser 606.
  • the candidate list is forwarded to whitelist 648 and blacklist 649.
  • Whitelist 648 and blacklist 649 may be used to track which URLs should be allowed to be prefetched, based on, for example, the host (i.e., the server that is supplying the URL), the file type (e.g., application service provider (ASP) files should not be prefetched), etc.
  • whitelist 648 and blacklist 649 control prefetching behavior by indicating which URLs on the candidate list should or should not be prefetched. In many instances, prefetching may not work with certain webpages/file types. In addition to ASP files, webpages which include fields or cookies may have problems with prefetching.
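A minimal C sketch of how whitelist 648 and blacklist 649 might filter the candidate list follows; the list contents and the plain substring matching are assumptions made purely for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative list contents; real entries would be configurable (e.g.,
 * via the client GUI described above). */
static const char *whitelist[] = { "images.example.com", NULL };
static const char *blacklist[] = { ".asp", "?", NULL };  /* ASP files, fields */

/* A candidate survives if its host is whitelisted, or if it matches no
 * blacklist pattern. Matching is a simple substring test in this sketch. */
static bool may_prefetch(const char *url)
{
    for (int i = 0; whitelist[i]; i++)   /* whitelisted hosts always pass */
        if (strstr(url, whitelist[i]))
            return true;
    for (int i = 0; blacklist[i]; i++)   /* blacklisted patterns never do */
        if (strstr(url, blacklist[i]))
            return false;
    return true;
}
```

Note the ordering: the whitelist is consulted first, so an operator can override a blanket blacklist rule for a trusted host.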
  • a modified candidate list is generated, and then the list is forwarded to a client cache model 650.
  • the client cache model 650 attempts to model which items from the list will be included in browser cache 604. As such, those items are removed from the modified candidate list.
  • the updated modified candidate list is forwarded to a request synthesizer 654 which creates an HTTP request in order to prefetch each item in the updated modified candidate list.
  • the HTTP request header may include cookies and/or other instructions appropriate to the web site and/or to browser 606's preferences using information obtained from cookie model 652.
  • the prefetch HTTP requests may then be transmitted through low latency link 656 to the corresponding website.
  • response parser 644 receives a prefetch response from the website and accesses a prefetch response abort 642.
  • Prefetch response abort 642 is configured to determine whether the prefetched item is worth sending to user system 602.
  • Prefetch response abort 642 bases its decision whether to abort a prefetch on a variety of factors, which are discussed below in more detail.
  • response parser 644 forwards the response to response encoder 640.
  • Response encoder 640 accesses coding dictionary 638 in order to encode the prefetched response.
  • Response encoder 640 then forwards the encoded response through protocol 628 over high latency link 630 and then to response decoder 626.
  • Response decoder 626 decodes the response and forwards it to response manager 624.
  • response manager 624 creates a prefetch socket to receive the prefetched item as it is downloaded.
  • Response manager 624 transmits the response over local bus 605 to redirector 608. Redirector 608 then forwards the response to web browser 606 which renders the content of the response.
  • the terminal accelerator module 330 includes a terminal masker module 340 and/or the gateway accelerator module 250 includes a gateway masker module 246, adapted to implement URL masking functionality.
  • Using URL masking functionality may allow the gateway prefetcher module 254 and/or the terminal prefetcher module 334 to operate in the context of some cache-busting techniques.
  • parser module 252 may identify an embedded URL string within a webpage, Java Script, etc. Further, parser module 252 may then analyze the URL string to determine if a cache-busting portion (or random portion) exists in the URL string. However, it should be noted that the random portion may not have anything to do with cache busting, and is placed in the URL string for utility value.
  • an advertisement server may embed or append a string of random characters in the URL string. Such a random string of characters may be used to cycle through ads to be presented to the browser. For example, random number 1 may produce an ad for company 1, random number 2 may produce an ad for company 2, and so forth.
  • the "random number" (or embedded string) may be generated in a variety of ways. For example, a rand() method may be called to generate a binary number. Then an ASCII string may be generated from the binary number, which is then appended or embedded in the URL. Alternatively, a timestamp may be used to produce the "random" portion of the URL string. For example, the timestamp may be extended out several digits and converted into an ASCII string and appended or embedded within the URL string.
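Both cache-busting styles described above (a rand() value and an extended timestamp, each converted to an ASCII string) can be sketched in C; the host name and query parameter names are invented for the example:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Build two cache-busting URLs in the styles described above. The URL
 * and the "rnd"/"ts" parameter names are illustrative assumptions. */
static void build_urls(char *rand_url, char *ts_url, size_t cap)
{
    /* Style 1: a random binary number converted to an ASCII string. */
    snprintf(rand_url, cap, "http://ads.example.com/banner?rnd=%d", rand());

    /* Style 2: a timestamp extended out several digits (milliseconds). */
    long long ms = (long long)time(NULL) * 1000LL;
    snprintf(ts_url, cap, "http://ads.example.com/banner?ts=%lld", ms);
}
```

Either way, each generated URL differs from the last only in its trailing characters, which is exactly the portion the masker is designed to exclude.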
  • the URL string may be passed to masker module 256.
  • the masker produces a mask that identifies which bytes in the URL string are effectively random. This may be implemented, for example, as a string of the same length as the URL where a byte is 0 if it is a normal byte and 1 if it is random.
  • the mask can be used to exclude the random bytes in deciding whether two URLs match, such as in the C-language method: bool isMatch(int urlLength, char* requestUrl, char* prefetchedUrl, char* mask)
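A possible implementation of the isMatch() signature given above, assuming the mask bytes are stored as the characters '0' (normal) and '1' (random) rather than the numeric values 0 and 1, might look like:

```c
#include <assert.h>
#include <stdbool.h>

/* Compare two URLs byte-for-byte, skipping positions the mask marks as
 * random ('1'). Whether the mask stores '0'/'1' characters or raw 0/1
 * values is an assumption of this sketch. */
bool isMatch(int urlLength, char *requestUrl, char *prefetchedUrl, char *mask)
{
    for (int i = 0; i < urlLength; i++) {
        if (mask[i] == '1')                      /* random byte: ignore */
            continue;
        if (requestUrl[i] != prefetchedUrl[i])   /* normal byte: must match */
            return false;
    }
    return true;
}
```

With this comparison, two URLs that differ only in their random suffixes match, while a difference in any non-masked byte (e.g., the host) does not.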
  • This mask can be sent to the client along with the URL string for the item that has been prefetched.
  • prefetcher module 254 may compare the masked URL string with URL strings of objects that have already been prefetched by prefetcher module 254. If a match is found, then prefetcher module 254 may notify prefetcher module 334 in terminal accelerator module 330 (FIG. 7B) that the object has already been prefetched, and not to prefetch it again, thus preventing unnecessary bytes from being sent across the link. Accordingly, the prefetched version of the object from the masked URL string is rendered in the browser instead of prefetching a new object.
  • FIG. 8 shows an illustrative flow diagram of a method 800 for implementing URL masking functionality, according to various embodiments of the invention.
  • the method 800 begins at block 804 by identifying a URL to be prefetched.
  • a portion of the URL string is identified as employing a cache-busting technique.
  • a mask is then set, at block 812, to mask the cache-busting portion of the URL string.
  • the URL string may be sent at block 816 from a proxy server to a proxy client. Further, at block 820, the mask may be sent from the proxy server to the proxy client.
  • the proxy server is implemented in the gateway 115 (e.g., the proxy server 255 of FIG.
  • the gateway 115 sends a list of URLs being prefetched to the subscriber terminal 130, where prefetched content may be cached (e.g., in the terminal cache module 335).
  • the proxy client may compare intercepted browser requests with the list of URLs to decide whether a request can be served via a prefetched object. As part of this comparison in block 824, the proxy client applies the mask to the requested URL and/or the prefetched URL list. In this way, the proxy client is able to determine in block 828 whether the requested content is, in fact, from a non-prefetched URL; or if it is actually from the same URL employing a cache-busting technique.
  • the requested object(s) may be served in block 832 using prefetched (e.g., locally cached) content. Otherwise, the requested object(s) may be served in block 836 by retrieving the objects from other locations. For example, the requested object(s) may be retrieved from the gateway cache module 220, from a content server over the network 120, etc.
  • a URL is identified by the gateway parser module 252 by parsing a Java script embedded in a web object with certain file extensions (e.g., HTML, XML, CSS, JS, or other formats used within HTTP). Identifying the URL may involve constructing the string using various Java functions which may be defined in the web object or may be part of a library known to the parser. When constructing the string, embodiments of the gateway parser module 252 look for calls to library functions that may be used to make URLs unique each time they are constructed so as to prevent caches from fulfilling the request from copies of previously downloaded objects (a technique known as "cache-busting").
  • cache-busting functions include random number generators or timers with millisecond resolution. If the parser determines that part of the URL is being constructed with characters derived from these cache- busting functions, embodiments of the gateway masker module 256 generate a mask as a function of the URL string to mask the millisecond timestamp portion of the URL string. The prefetcher issues a request to the web server for the URL that it constructs, and the URL string and mask information are sent from the gateway 115 (e.g., proxy server 255) to the subscriber terminal 130 (e.g., proxy client 332).
  • the subscriber terminal 130 receives the URL and mask at the same time as it receives the object that it was embedded in, such as the HTML page.
  • the terminal accelerator module 330 places the URL string and mask onto a "prefetch list" of objects that are in process of being prefetched.
  • the parser module 342 identifies the URL being requested and asks the prefetch list 336 whether this URL is being prefetched.
  • the prefetch list 336 iterates through all entries to see if the request is a match.
  • calls are made to the masker module 340, supplying the request URL, the prefetched URL being tested, and the mask associated with the prefetched URL.
  • the masker module 340 may perform a string comparison, excluding characters as a function of the mask.
  • Embodiments return a Boolean value indicating whether the masked versions of the requested and prefetched URLs are a match. If so, the response to the CPE 160 may be filled using the prefetched object. Otherwise, the subscriber terminal 130 may request the objects from the gateway 115 (e.g., as proxy server 255) over the satellite communication system 100.
  • embodiments of the URL masking functionality may be applied both to prefetched content (e.g., to see if a prefetched object matches a client request) and to the use of cached content on the gateway cache module 220 and/or the terminal cache module 335. Further, it will be appreciated that URL masking functionality may allow prefetchers and caches to work even when the URLs are constructed using scripts intended to block such behavior. By facilitating the use of prefetching (e.g., by the gateway prefetcher module 254 and/or the terminal prefetcher module 334) and local caching (e.g., at the terminal cache module 335), the number of RTTs may be reduced. Local caching may also reduce some server response delays that affect communications over the satellite communication system 100.
  • FIG. 9 illustrates a method 900 for implementing URL masking according to embodiments of the present invention.
  • Java script included in a requested page may be parsed.
  • the parsing of the requested page URL string within the Java script may be identified and assembled (process block 908).
  • the process of generating the identified URL string may be analyzed (process block 912).
  • a determination may be made as to whether portions of the URL string were randomly generated so as to have a meaningless value (decision block 916).
  • the portion of the URL string may be a randomly generated number, a timestamp, etc. If no random portion of the URL is found, then parsing of the Java script continues. Otherwise, at process block 920, the random or meaningless portion of the URL string is masked out of the URL string. Then, at process block 924, the masked version of the URL may be checked against prefetched URL strings and/or cached URL strings to determine a match.
  • a cached or prefetched object is able to be used where it otherwise would have been classified as a cache miss or a non-prefetched object.
  • FIG. 10 illustrates one embodiment of a system 1000 according to aspects of the present invention.
  • system 1000 may include a client 1005.
  • Client 1005 may be configured to use a web browser to access various Internet and/or intranet web pages, or to access files, emails, etc. from various types of content servers.
  • client 1005 may include a proxy client 1010 which may intercept the traffic from the browser.
  • Client 1005 may be configured to communicate over a high latency link 1015 with proxy server 1020 using an optimized transport protocol.
  • proxy server 1020 may identify, based on a request received from proxy client 1010 via client 1005's browser, objects that may be able to be prefetched. Furthermore, proxy server 1020 may store all of the caching instructions for all objects downloaded by proxy server 1020 on behalf of client 1005.
  • proxy server 1020 may send a request over a low latency link 1025 to a content server 1030.
  • low latency link 1025 may be a satellite link, a broadband link, a cable link, etc.
  • the request may request the caching instructions for the object that may potentially be prefetched from the web server.
  • Proxy server 1020 may then analyze the caching instructions for the object to determine if the object has been modified since it was last prefetched. Accordingly, if the object has been modified, then proxy server 1020 would download the updated version of the object from content server 1030. Otherwise, if the previously prefetched object is still valid, no prefetching is needed. Thus, proxy server 1020 can simply use the previously prefetched object.
  • content server 1030 may be a file server, an FTP server, etc. and various web browsers may be used by client 1005.
  • the cache model may alternatively be stored, for example, at proxy client 1010.
  • proxy client 1010 may be configured to maintain the caching instructions associated with each prefetched object.
  • proxy client 1010 may store cached (or prefetched) objects for future access by client 1005, or in an alternative embodiment, to be accessed by other clients and/or servers connected with client 1005. Consequently, any component in FIG. 10 may be configured to store prefetched (or cached) objects and/or caching instructions.
  • the cache model may be implemented at a separate location from client 1005 and/or client proxy 1010.
  • the cache model may be located at a remote server, database, storage device, remote network, etc.
  • cached objects may be stored remotely from client 1005 and retrieved from the remote location upon request of the object.
  • prefetching and caching may improve a subscriber's experience (e.g., through reduced download times for web objects), content providers may lose some control over content delivery and accounting. This may be undesirable for a number of reasons.
  • URL masking may compromise commercial interests of content providers. For example, advertising companies may rely on getting fresh requests to URLs to cycle different content, as well as to account for the number of site hits.
  • Using cached information may limit content cycling and may make request and hit tracking more difficult. Another reason is that providing subscribers with cached data may result in presenting the subscribers with different web experiences than if normal cycling of content were allowed.
  • FIG. 11 illustrates a system of implementing a prior art HTTP cache.
  • the cache 1102 receives a URL 1101, and initially uses an index 1104 of its contents to determine (process block 1103) whether a fresh copy of the item is available. Freshness in this case is typically established using the standard HTTP rules, such as defined in RFC 2616, although HTTP caches can also be tuned to be more aggressive with respect to returning content that may not be fresh according to such rules. If a fresh copy is available, the cache retrieves (process block 1109) the cached copy from a storage 1108 and returns the retrieved object as a response (process block 1110). No further action is needed in this case.
  • a request 1105 is uploaded to a web content server 1106.
  • When the response is received (process block 1107) and returned (process block 1110), a copy is added to storage 1108, and index 1104 is updated.
  • cache cycling is used to issue a fresh request to the content provider for website content each time a proxy server serves a request from cached data.
  • cache cycling allows fresh content to be supplied for each request when URL masking is used, as described above.
  • URL masking removes random elements from URL strings which are used to cycle through different content. Removing these random elements allows prefetching optimizations as well as caches to work effectively, but could interfere with the normal cycling of different content items for advertisements or other web elements.
  • Cache cycling allows fresh content to be presented for each request while still allowing the performance benefits of caches and prefetching to be achieved. Furthermore, since using cached content reduces the apparent number of times a URL is requested, URL masking could interfere with the accounting of advertising revenue and other metrics based on the number of requests. Cache cycling maintains the request metrics while allowing the performance benefits of caching to be achieved.
  • Some embodiments of cache cycling are implemented using a satellite communications system (e.g., the satellite communications system 100 of FIG. 1, above), for example, including functionality of gateways and/or subscriber terminals (e.g., the gateway 215 and/or subscriber terminal 230 of FIGS. 2 and 3, respectively).
  • Those and/or other embodiments may exploit functionality described with reference to the computer system 400 of FIG. 4 and/or the computer network system 500 of FIG. 5. It will be appreciated that other types of systems and/or components may be used to implement functionality of various embodiments, without departing from the scope of the invention.
  • FIG. 12 illustrates how a cache with cache cycling is used in conjunction with URL masking, according to various embodiments.
  • the input to the cache may include both a normal unmasked URL 1201 and a masked URL 1202 (e.g., using the techniques described above).
  • the masked bytes in the URL string can be filled with default placeholders, such as the character '0', or the like.
  • the impact of random values in the URL string has been removed so that all URLs that differ only by the random elements will present the same masked URL at process block 1202.
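Filling the masked bytes with the default placeholder '0', as described above, can be sketched as a small C routine; the function name is illustrative:

```c
#include <assert.h>
#include <string.h>

/* Replace each random byte (mask '1') with the placeholder '0' so that
 * all URLs differing only in their random elements canonicalize to the
 * same masked URL string. */
static void mask_url(const char *url, const char *mask, char *out, int len)
{
    for (int i = 0; i < len; i++)
        out[i] = (mask[i] == '1') ? '0' : url[i];
    out[len] = '\0';
}
```

Two requests whose URLs differ only in a random suffix thus produce identical cache keys, which is what lets the cycling cache index them together.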
  • a cache 1203 then checks an index 1205 to determine whether an object that was retrieved in response to a request with the same masked URL is in cache 1203. If a response is in cache and sufficiently fresh, it is retrieved (process block 1210) from a storage 1209 and returned (process block 1211) to the user (e.g., client browser, etc.). In this case, freshness may be determined by special rules rather than by RFC 2616, as the expiration times provided in the HTTP header may not support caching. If a cached copy can be used, the user obtains the performance benefits of avoiding the wait for a response from, for example, a web content server.
  • an unmasked URL 1201 is also supplied.
  • the unmasked URL includes the random elements created in, for example, the original Java script, and each of these URLs would be unique.
  • a fresh request 1206 for the unmasked URL 1201 is then sent to a web content server 1207, regardless of whether a cached copy of the masked URL exists.
  • the response is received (process block 1208), the response is added to cache storage 1209 as the new entry for the masked URL 1202, and index 1205 is updated. If a sufficiently fresh cache entry is not found at process block 1204, the cache waits for the response at process block 1208, and then returns a copy to the user at process block 1211.
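The FIG. 12 flow (serve a cached copy keyed on the masked URL, but always issue a fresh request for the unmasked URL) might be sketched as follows; the one-entry table and the stubbed-out fetch are simplifications made for illustration only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* A one-entry stand-in for cache 1203; real storage would index many
 * masked URLs. */
typedef struct {
    char masked_url[128];
    char body[256];
    bool valid;
} CycleEntry;

static CycleEntry cache_slot;

/* Stub for the fresh request 1206 to web content server 1207. */
static void fetch_fresh(const char *unmasked_url, char *body_out, size_t cap)
{
    snprintf(body_out, cap, "<content for %s>", unmasked_url);
}

/* Serve a request: return the cached copy if one exists for the masked
 * URL, but ALWAYS issue a fresh request and store the response as the
 * new cache entry for that masked URL. */
static const char *serve(const char *unmasked, const char *masked,
                         char *response, size_t cap)
{
    bool hit = cache_slot.valid &&
               strcmp(cache_slot.masked_url, masked) == 0;
    if (hit)  /* blocks 1210/1211: answer immediately from cache */
        snprintf(response, cap, "%s", cache_slot.body);

    /* Cycle the cache: fresh request regardless of a hit (block 1206). */
    fetch_fresh(unmasked, cache_slot.body, sizeof cache_slot.body);
    snprintf(cache_slot.masked_url, sizeof cache_slot.masked_url, "%s", masked);
    cache_slot.valid = true;

    if (!hit)  /* miss: wait for the response, then return it (block 1208) */
        snprintf(response, cap, "%s", cache_slot.body);
    return response;
}
```

Repeated calls show the cycling behavior described below: after the first request, each response is one cycle old, yet the content server still sees one fresh request per client request.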
  • cycling cache 1203 may be implemented in either Terminal Cache Module 435-A or Gateway Cache Module 220-A.
  • cycling cache 1203 allows for a response to be sent immediately to CPE 260 without waiting for a copy to be fetched or prefetched from content server 1207. If a cached response was provided at process block 1210, then the fresh copy received at process block 1208 may not be considered time-critical, in that the customer has already received a response. In this case the transfer of this data can be done at a low priority so as not to interfere with time-sensitive transfers.
  • Masked URL 1202 can be generated from unmasked URL 1201 at the same time that the mask is used to check for matches with prefetched objects.
  • cycling cache 1203 When used cycling cache 1203 is implemented on the gateway side, cycling cache 1203 may be used to provide fast responses to prefetch requests, as it avoids the need to wait for a response from content server 1207.
  • the URL masks are generated at the same time that the embedded URLs are identified in, for example, the Java scripts within the HTML or other web objects, so that masked URL 1202 can be presented along with unmasked URL 1201 to cycling cache 1203.
  • each time a cached object is used a fresh copy of the content may be requested.
  • the cache is cycled, and the client receives content that is one cycle old, but the same number of external "hits" is recorded.
  • the client is not required to wait for the fresh copy of the content because the client is able to quickly render the cached copy; the next time the content is requested, the previously fresh copy will be rendered to the client, and another fresh copy will be retrieved, and so forth.
  • system 1300 may include elements from Figs. 1 - 3, as well as a content provider 1305.
  • proxy server 155 in the gateway 115 may issue a fresh request to content provider 1305 for the cached content.
  • new objects may replace the cached copies of those objects, for use in serving the next request for that URL to CPE 160.
  • content provider 1305 may receive the same number of requests and may cycle through the same content, while providing CPE 160 with benefits of prefetched/cached content.
  • a request may be made by a web browser for a URL at CPE 160.
  • Proxy client 332 implemented in subscriber terminal 130 may determine (e.g., as a result of cache- busting techniques discussed above) that cached copies of the requested objects are available in terminal cache module 335.
  • the proxy client then issues a fresh request to proxy server 155 in gateway 115 according to the requested content (e.g., with or without masking cache-busting portions of URL strings). While the request is being processed and new objects are being retrieved, locally cached copies of the objects are then passed to CPE 160's browser for rendering.
  • the web browser may immediately begin to render objects out of terminal cache module 335 without waiting for requests to be fulfilled over satellite 105; while in the meantime, cached objects are replaced with new versions as the requests are fulfilled.
  • embodiments include accumulator functionality for accumulating object data prior to making an abort determination. Certain embodiments also compress the accumulated data to more accurately reflect the cost of pushing the data to the client as part of the prefetch operation. Accumulation and/or compression of the data may provide sufficient data relating to the size of the object to make a useful abort determination, even where the size of the object cannot be otherwise determined (e.g., from the object data header).
  • the accumulated data (e.g., in compressed or uncompressed form) may be stored so that, if the client subsequently requests the object, the object may be pushed to the client from server-side storage, rather than retrieving (e.g., and compressing) the object from the content server redundantly.
  • some embodiments exploit the accumulated data to implement additional (e.g., byte-level) data processing functionality.
  • prefetch response abort 642 receives a prefetched object from the Internet through low latency link 656 (Fig. 6) (process block 1405). Even though the object has initially been prefetched, it does not necessarily mean that it is efficient to forward the object to the client (e.g., proxy client 612 (Fig. 6)). Due to bandwidth and other constraints of the link, objects sent over high latency link 630 (Fig. 6) between proxy server 632 (Fig. 6) and proxy client 612 (Fig. 6) should be carefully selected. Accordingly, a variety of factors should be considered before forwarding a prefetched object to the client.
  • the size of the received object is checked.
  • the size of the object may be significant in determining whether to forward the object to the client.
  • one benefit of forwarding the prefetched object to the client may be the elimination of a round trip: if the prefetched item is eventually used by user system 602 (Fig. 6), the request out to the Internet and the response back from the requested website (i.e., one round trip) are avoided.
  • one potential negative effect of forwarding a prefetched object is that the prefetched object unnecessarily uses the link's bandwidth. As such, if a prefetched object is forwarded to the client but never used by the client, the bandwidth used to forward the object may be wasted. Accordingly, larger prefetched objects may decrease optimization because the gained round trip may not outweigh the bandwidth consumption.
  • a point system may be assigned to the prefetched object where, for example, a 10 kilobyte object is given a higher point value than a 10 megabyte object. Consequently, if the point value associated with the object reaches or exceeds a threshold, then the object is forwarded to the client.
  • Another factor in determining whether an object should be forwarded to the client is the probability of use of the object (process block 1415).
  • the user may, for example, "click off" a web page before objects within the page are requested. Whether some objects may be requested may depend on browser settings and/or on external events, such as mouse position.
  • objects referenced on a CSS (e.g., a style sheet for the entire website) may or may not be requested.
  • when URLs are identified within Java scripts, the scripts themselves may determine, based on a variety of factors, whether to request an object.
  • a general model can be built sampling many different clients in many sessions going to many websites. Subsequently, a more specific model can be developed for a specific website and/or for a particular user. In one embodiment, this may be accomplished by recording the frequency of page use in a specific context for a specific web page by a specific user.
  • the object may be assigned a point value associated with its probability of use.
  • the probability of use may be assigned a percentage value.
  • the bandwidth of high latency link 630 may be determined (i.e., the speed of the link between proxy server 632 (Fig. 6) and proxy client 612 (Fig. 6)).
  • the bandwidth of this link can be a factor in determining whether to forward the prefetched object. For example, with a higher link bandwidth, more objects and larger objects could be forwarded to the client. However, in contrast, if the bandwidth of the link is lower, then prefetch response abort 642 (Fig. 6) may be more selective when deciding whether to forward the prefetched object.
  • the bandwidth of the link is assigned a point value which may be factored into the determination of whether to forward the object.
  • the latency of the link between proxy server 632 (Fig. 6) and proxy client 612 (Fig. 6) is determined.
  • the latency of the link is based on the current round trip time (RTT) of the link. Accordingly, if the RTT is high, then it may be more beneficial to forward the prefetched object to the client because of the round trip savings gained by forwarding the object. However, if the RTT is low, then the saved round trip may be of less value for optimization purposes.
  • the latency of the link is assigned a point value which may be factored into the determination of whether to forward the object.
  • the initial prefetch time is determined (i.e., how long the object took to be retrieved from the Internet). If the object took a long time to retrieve from the Internet, then it may be optimal to forward the object to the client in order to avoid re- downloading the object in the future. Furthermore, if the object was downloaded quickly, then less optimization may be gained from forwarding the object to the client. Hence, in one embodiment, the download time of the object may be assigned a point value which may be factored into determining whether to forward the object to the client. In an alternative embodiment, the aborted objects may be stored on proxy server 632 (Fig. 6) in case they are subsequently requested. Accordingly, if these objects are stored and then requested, the download will not need to be repeated. If this approach is implemented, then process block 1430 may not be used.
  • a cost/benefit analysis may be performed to determine whether to forward the prefetched object.
  • the above-mentioned point values may be calculated to determine if the object meets a predetermined threshold.
  • the cost of forwarding the object may be determined using the following equation:
  • Cost = ObjectSize * (1.0 - ProbabilityofUse) / Bandwidth
  • the benefit of forwarding the prefetched object may be determined using the following equation: Benefit = ProbabilityofUse * (RTT + PrefetchTime)
  • if the cost of forwarding the object is greater than the benefit, the prefetched object is aborted and the object is not forwarded to the client (process block 1445). Conversely, if the benefit is greater than the cost, then the prefetched object is forwarded to the client (process block 1450).
  • objects that have been aborted may be cached at, for example, proxy server 632 (Fig. 6), in the event that the client subsequently requests the object. Hence, the above referenced equation may be reduced to: Benefit = ProbabilityofUse * RTT
  • the equation is reduced in this manner because, since the object has already been downloaded, it would not need to be re-downloaded from the originating server.
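  • For illustration only, the cost/benefit test above can be sketched in code. This is an editor's sketch under assumed units (object sizes in bytes, bandwidth in bytes per second, times in seconds), not the disclosed implementation, and all function and parameter names are invented:

```python
# Sketch of the cost/benefit forwarding test described above. Units and
# names are assumptions: sizes in bytes, bandwidth in bytes/sec, times in sec.

def forward_cost(object_size, probability_of_use, bandwidth):
    """Cost = ObjectSize * (1.0 - ProbabilityofUse) / Bandwidth"""
    return object_size * (1.0 - probability_of_use) / bandwidth

def forward_benefit(probability_of_use, rtt, prefetch_time, cached_at_proxy=False):
    """Benefit = ProbabilityofUse * (RTT + PrefetchTime).

    When the aborted object is cached at the proxy server, the download
    would not be repeated, so the benefit reduces to ProbabilityofUse * RTT.
    """
    saved_time = rtt if cached_at_proxy else rtt + prefetch_time
    return probability_of_use * saved_time

def should_forward(object_size, probability_of_use, bandwidth, rtt,
                   prefetch_time, cached_at_proxy=False):
    """Forward the prefetched object only if the benefit exceeds the cost."""
    benefit = forward_benefit(probability_of_use, rtt, prefetch_time,
                              cached_at_proxy)
    cost = forward_cost(object_size, probability_of_use, bandwidth)
    return benefit > cost
```

For example, a 100 KB object with a high probability of use over a long-RTT link would be forwarded, while a very large object with a low probability of use would be aborted.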
  • the factors used to determine whether to forward a prefetched object may be used outside the website and/or Internet context.
  • the prefetching technique may be used to determine which terminals to download an object from in a peer-to-peer network environment.
  • the prefetching technique may be used on various network types, for example, a satellite network, a mobile device network, etc.
  • prefetching systems may seek to request objects that will subsequently be requested when a web page is rendered.
  • a server-side prefetcher may not have full knowledge of a client-side browser configuration or the contents of a client-side browser cache. Even further, the prefetcher may not be able to fully parse scripts in which object references are found, thereby further limiting certainty as to whether an object will be requested. As such, effectiveness and efficiency of a prefetching system may hinge on establishing appropriate cost-benefit analysis techniques.
  • the prefetching cost-benefit may be analyzed as a function of a number of factors, including probability of use, round-trip time (RTT), prefetch time, available bandwidth, object size, etc. Illustrative equations to this effect are described above with reference to Fig. 14. These various factors may be weighed to determine whether prefetching one or more objects is efficient, for example, as compared to downloading the object only when it is actually requested by the client.
  • a user requests an object from a content server (e.g., via a web browser on a user system).
  • the request is processed by an intermediate proxy client and proxy server (e.g., as described with reference to Fig. 10, above).
  • the proxy server requests the object from the content server, receives the object from the content server in response to the request, and forwards the object to the user system via the proxy client.
  • delays are introduced through communications with the content server (e.g., from delays in getting a response from the content server) and from communications with the client (e.g., from latency in the communication link between the proxy server and the proxy client).
  • delays resulting from communications with the client may largely be due to the round-trip time over the communication link.
  • Embodiments of prefetching systems may be used to address one or both of these types of delay.
  • a requested object is not prefetched.
  • a first delay may be introduced while the proxy server waits to receive the requested object, and a second delay may be introduced according to the RTT between the server and client sides of the system.
  • an object is prefetched.
  • the associated delay (the first delay in the preceding example) may be substantially eliminated.
  • link usage in response to the request may be substantially minimized. For instance, rather than sending a large object over the link in response to the request, only a small message may be sent indicating to the client that a cached version of the object should be used.
  • the link may be unnecessarily congested. This may delay the downloading of objects that are actually requested.
  • the decision whether it is efficient to prefetch an object may essentially become a function of object size.
  • the prefetching system determines that it is efficient to prefetch the object if the object size is less than some threshold value, and determines that it is inefficient to prefetch the object if the object size is larger than the threshold value.
  • objects may be downloaded speculatively to the optimizing proxy server to determine the size. If the object size is less than the threshold for efficient prefetching, then the object may be prefetched. If it is larger, the object is not prefetched.
  • This approach may be limited in a number of ways. One limitation is that the size of an object may not be specified in the header. If the prefetcher aborts the transfer and the file was small, the benefits of prefetching are lost. If the prefetcher starts to download the file and the file is large, then unnecessary link congestion occurs. Another limitation is that, if the prefetcher decides to abort an object because it is too large and the browser subsequently requests the object, then the object must be requested again from the content server.
  • Embodiments of prefetch abort systems include an accumulator configured to address one or more of these limitations. During a prefetch operation, the accumulator accumulates file data until sufficient data is available to make an effective abort decision. In some embodiments, the accumulator is implemented by the prefetch response abort 642 block of Fig. 6. For example, the prefetch response abort 642 of Fig. 6 may use the accumulator to help determine the object size in block 1410 of the method 1400 of Fig. 14.
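  • The accumulator behavior described above can be sketched as follows. This is a hypothetical illustration, not the disclosed implementation: downloaded (and optionally compressed) bytes are buffered until the accumulated size is sufficient to make an abort decision. The chunked-read interface and all names are assumptions:

```python
import zlib

# Sketch of an accumulator for a prefetch abort system: buffer downloaded
# data (optionally compressed) and abort once the accumulated size exceeds
# the maximum efficient object size. Names and interfaces are assumptions.

def accumulate_and_decide(chunks, max_efficient_size, compress=True):
    """Accumulate prefetched data and decide whether to forward or abort.

    `chunks` is any iterable of byte strings (e.g., reads from a socket).
    Returns (decision, accumulated_bytes), where decision is "forward"
    or "abort". Aborted data could still be cached at the proxy server.
    """
    compressor = zlib.compressobj() if compress else None
    buffer = bytearray()  # plays the role of the accumulator buffer 1610
    for chunk in chunks:
        data = compressor.compress(chunk) if compressor else chunk
        buffer.extend(data)
        if len(buffer) > max_efficient_size:
            return "abort", bytes(buffer)
    if compressor:
        buffer.extend(compressor.flush())
    return "forward", bytes(buffer)
```

Note how compressing before accumulating changes the outcome: a highly compressible object can stay under the threshold even when its uncompressed size would exceed it, which mirrors the rationale given below for compressing prior to accumulation.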
  • FIG. 15 shows a flow diagram of an illustrative method 1500 for prefetching using an accumulator, according to various embodiments.
  • FIG. 16 shows relevant portions of an illustrative communications system 1600, including an accumulator for a prefetch abort system, according to various embodiments.
  • the system 1600 includes a proxy server 632 in communication with a user system 602 (e.g., via a proxy client, over a high-latency link 630) and a content server 1630 (e.g., over a relatively low latency link 656).
  • the proxy server 632 may be the proxy server 632 of FIG. 6, including prefetch response abort 642.
  • prefetch response abort 642 is in communication with prefetch object compressor 1602 and prefetch accumulator 1604.
  • the prefetch accumulator 1604 may be further in communication with an accumulator buffer 1610 and/or an output data store 1620.
  • the components of the system 1600 of FIG. 16 will be discussed in parallel with associated portions of the method 1500 of FIG. 15.
  • the method 1500 begins at block 1504 by determining an appropriate size threshold for efficient prefetching of an object.
  • the size threshold is determined by prefetch response abort 642, as discussed with reference to Fig. 2 above. If transfer time is a primary concern, the size threshold may be determined as a function of the cost benefit equations described above, as follows. The cost of prefetching may be calculated in some embodiments according to the equation:
  • Cost = ObjectSize * (1.0 - ProbabilityofUse) / Bandwidth
  • “ObjectSize” is the size of the object being evaluated for prefetching
  • “ProbabilityofUse” is the probability the object will be ultimately requested by the user system 602 (e.g., requested by the client browser)
  • “Bandwidth” is the bandwidth usage on the high-latency link 630 from pushing the object to the user system 602. The benefit of prefetching may be calculated according to the equation: Benefit = ProbabilityofUse * (RTT + PrefetchTime), where:
  • RTT is the round-trip time for communications over the high-latency link 630
  • PrefetchTime is the time it takes to download (e.g., and possibly to compress) the object from the content server 1630.
  • the maximum efficient object size may be considered in some embodiments as the object size where the cost of prefetching is equal to the benefit of prefetching (i.e., if the object size increases further, the cost will increase without increasing the benefit, causing the cost to exceed the benefit). Setting the cost and benefit equations equal to each other and solving for the object size may yield the following equation for the maximum efficient object size for prefetching:
  • MaximumSize = ProbabilityofUse * (RTT + PrefetchTime) * Bandwidth / (1.0 - ProbabilityofUse)
  • It is worth noting that, while this equation may maximize performance experienced by one user, this threshold object size may be adjusted in response to other issues or for other reasons. For example, if a link is congested, the maximum size may be reduced to reflect the impact of the bandwidth consumption on other users. It is further worth noting that other types of metrics or thresholds may be used.
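  • As an illustration, the maximum-size computation above can be written directly in code. This is an editor's sketch; the congestion adjustment is an invented parameter reflecting the note about congested links, not a quoted formula:

```python
# Sketch of the maximum efficient object size for prefetching, obtained by
# setting the cost and benefit equations equal and solving for object size.
# The congestion_factor parameter is an assumption illustrating how the
# threshold might be reduced on a congested link.

def maximum_efficient_size(probability_of_use, rtt, prefetch_time, bandwidth,
                           congestion_factor=1.0):
    """MaximumSize = P * (RTT + PrefetchTime) * Bandwidth / (1.0 - P)."""
    if probability_of_use >= 1.0:
        return float("inf")  # a certain request is always worth prefetching
    size = (probability_of_use * (rtt + prefetch_time) * bandwidth
            / (1.0 - probability_of_use))
    return size * congestion_factor
```

For instance, with a 50% probability of use, a 1.0 s combined RTT and prefetch time, and 1 MB/s of bandwidth, the threshold works out to 1 MB.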
  • downloading of the prefetched object may begin.
  • the object is retrieved from the content server 1630 by the proxy server 632, but may not be sent over the high-latency link 630 to the user system 602.
  • the downloaded data is accumulated in the prefetch accumulator 1604 (e.g., in an accumulator buffer 1610).
  • the size of the accumulated data may be evaluated to determine whether the maximum size threshold has been reached.
  • the data may be compressed by the prefetch object compressor 1602 prior to being sent (or as it is sent) to the prefetch accumulator 1604.
  • compressing the data with the prefetch object compressor 1602 prior to accumulation in the accumulator buffer 1610 may allow the calculations to more accurately reflect the ultimate cost to the high-latency link 630 of sending the object to the user system 602. For example, if a file is highly compressible, the bandwidth cost to the high-latency link 630 may be reduced. As such, it may still be efficient to push the object to the user system 602 in its compressed form, even if its uncompressed object size would exceed the size threshold.
  • the size of the accumulated data is evaluated according to the compressed size in block 1520 when determining whether the maximum size threshold has been reached.
  • the determination at block 1524 may account for additional factors not evaluated as part of the size threshold equations. For example, the determination of whether it is efficient to push the object to the user system 602 may be affected by communications with other users (e.g., multicasting opportunities, link congestion, etc.) or other factors.
  • if so, the object (e.g., compressed or uncompressed) may be forwarded to the user system 602.
  • the prefetch operation may be aborted at block 1532.
  • the accumulated data (e.g., the compressed data in the accumulator buffer 1610) may be moved to an output data store 1620 (e.g., an output buffer).
  • various additional functions may be performed. For example, the data may be parsed, indexed, logged, further compressed, etc.
  • data may continue to accumulate past the threshold size value (e.g., in the accumulator buffer 1610 or the output data store 1620) up to a second threshold.
  • the accumulated data can be sent (e.g., from the output data store 1620) while the remainder of the file is downloaded and compressed by the prefetcher (e.g., and/or accumulated by the prefetch accumulator 1604).
  • the second threshold may be set so that the time needed to transfer the accumulated data may be long enough to compensate for the delay in reestablishing a connection over the low latency link 656 to the content server 1630. In this way, savings in the prefetch time may be achieved without excessive consumption of memory and processing resources on the proxy server 632.
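  • The sizing rationale for the second threshold can be illustrated as follows. This is an editor's illustration consistent with the text, not a quoted equation; it assumes the accumulated bytes are sent to the client over the client-side link while the content-server connection is re-established:

```python
# Size the second accumulation threshold so that transferring the accumulated
# bytes to the client takes at least as long as re-establishing the
# connection to the content server, hiding the reconnection delay.

def second_threshold_bytes(reconnect_delay_seconds, client_link_bandwidth):
    """reconnect_delay_seconds: time to reopen the low latency link 656.
    client_link_bandwidth: bytes/second available toward the client."""
    return reconnect_delay_seconds * client_link_bandwidth
```

For example, a 0.5 s reconnection delay over a 2 MB/s client link suggests accumulating about 1 MB before stopping.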
  • the data may effectively be immediately available; no further delays may be created by fetching another copy from the content server 1630 or by compressing a new data object. As such, savings in prefetch time may be achieved even though the data was not pushed across the high-latency link 630.
  • FIG. 17 shows an illustrative method 1700 for exploiting accumulated data to further optimize prefetch abort operations, according to various embodiments.
  • the method 1700 is performed by the system 1600 of FIG. 16.
  • the method 1700 begins at block 1705 by receiving a prefetched object from a content server 1630. It will be appreciated from the discussion of FIGS. 15 and 16 that, even though the object has already been prefetched, it may still be inefficient to forward the object to the user system 602. For example, bandwidth and/or other constraints may affect a determination of whether it is efficient to communicate objects over the high-latency link 630 between the proxy server 632 and the user system 602.
  • accumulated prefetched objects may be analyzed (e.g., in addition to other factors, such as link conditions) to gather and/or generate various additional cost-benefit data for optimizing the prefetch operation.
  • the byte sequence of the prefetched object is analyzed at block 1710.
  • a determination may then be made in block 1715 of whether the bytes are the same as an object that was previously prefetched from a different URL.
  • data in the accumulator buffer 1610 may be processed using delta coding and/or other techniques to generate a fingerprint of the accumulated data. The fingerprint may be compared against fingerprints of previously accumulated data (e.g., data stored in the output data store 1620).
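  • The duplicate-detection step above can be sketched as follows. The text mentions delta coding and/or other techniques; a plain content hash is used here as a stand-in fingerprint, and all names are the editor's assumptions:

```python
import hashlib

# Sketch of fingerprint-based duplicate detection: identical objects served
# from different URLs are recognized by hashing the accumulated bytes and
# comparing against fingerprints of previously accumulated data.

class FingerprintStore:
    """Maps content fingerprints to the URL under which the bytes were
    first prefetched."""

    def __init__(self):
        self._by_fingerprint = {}

    @staticmethod
    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def check_and_add(self, url: str, data: bytes):
        """Return the URL of a previously seen identical object, or None
        if the content is new (in which case it is recorded)."""
        fp = self.fingerprint(data)
        previous = self._by_fingerprint.get(fp)
        if previous is None:
            self._by_fingerprint[fp] = url
        return previous
```

If the lookup returns a prior URL, the object need not be sent again; the client can be told to reuse the previously delivered copy.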
  • the object size is determined at block 1720. As discussed above, the object size may be significant in determining whether it is efficient to forward the object to the client. In some embodiments, if the object is large (e.g., or in all cases), a determination is made as to whether the object content is compressible, scalable, etc. at block 1725. For example, the determination may be made by the prefetch object compressor 1602. In certain embodiments, this determination is made as a function of analyzing the byte sequence in block 1710. If the content is compressible, scalable, etc., embodiments of the method 1700 revise the effective object size at block 1730. For example, the method 1700 may estimate the compressed size of the object and use that compressed size as the effective size of the object when making prefetching determinations.
  • the communication link between the proxy client and the proxy server may be evaluated to determine various link conditions. This type of information may be received from external sources, estimated, measured, etc. For example, the links may be analyzed to determine bandwidth, traffic, latency, packet errors, etc.
  • latency of the high-latency link 630 may be estimated in block 1740 based on a current round-trip time (“RTT”) measure. Accordingly, if the RTT is high, then it may be more beneficial to forward the prefetched object to the client because of the round trip savings gained by forwarding the object. However, if the RTT is low, then the saved round trip may be of less value for optimization purposes.
  • the method 1700 determines the probability that an object will be used at block 1750.
  • URL parsing and/or other techniques can be used to help make that determination. For example, certain popular website content may be more likely requested and/or accessed by users of a communication network.
  • the determination at block 1750 may be made as a function of analyzing the byte sequence in block 1710. For example, fingerprinting may be used, as described above, to determine if the object has been requested before by that user or by some threshold number of other users.
  • the various types of data collected and/or generated in the various blocks may be used to estimate prefetch time in block 1760. For example, current link conditions and object size may drive an estimation of the time it will take to prefetch a particular object. Other data may also be used. For example, if the object was previously prefetched (e.g., or a similar object), that data can be used to make predictions. Particularly, if the object took a long time to retrieve from the Internet a previous time, then it may be optimal to forward the object to the client in order to avoid re-downloading the object in the future. Furthermore, if the object was downloaded quickly, then less optimization may be gained from forwarding the object to the client. Hence, in one embodiment, the download time of the object may be assigned a point value which may be factored into determining whether to forward the object to the client.
  • some or all of these data can be used to perform a cost-benefit analysis on the prefetched object at block 1765.
  • the result of the cost-benefit analysis performed at block 1765 can be evaluated at decision block 1770 to determine whether the benefits of prefetching the object outweigh the costs of prefetching the object. If the benefits outweigh the costs, the object is prefetched at block 1780. If the benefits fail to outweigh the costs, the object is not prefetched, or the prefetch operation is aborted at block 1775.
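  • The end-to-end decision of method 1700 can be condensed into a short sketch. The compressibility ratio and all names are the editor's assumptions, and the cost/benefit terms follow the equations given earlier in this description:

```python
# Consolidated sketch of the method-1700 decision: revise the effective size
# for compressible content (block 1730), then weigh benefit against cost and
# forward or abort (blocks 1765-1780). Names and units are assumptions.

def prefetch_decision(object_bytes, compressible_ratio, probability_of_use,
                      rtt, bandwidth, prefetch_time):
    """Return "forward" if the benefit of prefetching outweighs the cost,
    otherwise "abort". compressible_ratio is the estimated compressed size
    as a fraction of the original (1.0 = incompressible)."""
    effective_size = object_bytes * compressible_ratio
    benefit = probability_of_use * (rtt + prefetch_time)
    cost = effective_size * (1.0 - probability_of_use) / bandwidth
    return "forward" if benefit > cost else "abort"
```

A highly compressible, likely-to-be-used object on a long-RTT link is forwarded; a large, incompressible object with a low probability of use is aborted.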
  • factors used to determine whether to forward a prefetched object may be used outside the website and/or Internet context.
  • the prefetching technique may be used to determine which terminals to download an object from in a peer-to-peer network environment.
  • the prefetching technique may be used on various network types, for example, a satellite network, a mobile device network, etc.
  • URL requests may be made to retrieve embedded content objects for use in rendering the webpage, including images, videos, sounds, etc.
  • Each of the URLs may be associated with an internet protocol (IP) address, as designated by a domain name server (DNS).
  • DNS lookups may require that additional requests be made to the network, which may cause certain inefficiencies. For example, in a satellite communications system, DNS lookups may involve additional round trips between the client user terminal and the server gateway sides of the communications system. Since each round trip over the satellite link takes time, these DNS lookups may cause undesirable system performance.
  • Some systems may configure user web browsers to use a hyper-text transfer protocol (HTTP) proxy at the server (e.g., gateway) side of the communications system for all DNS lookups.
  • the client browser may forward all requests to the server-side proxy, so all the DNS lookups can be performed at the server side.
  • DNS lookups may not use additional round trips.
  • this implementation may require a particular type of browser configuration at the client side. This may be undesirable, as certain clients may not desire or know to configure their browsers in this way.
  • Embodiments implement prefetching of DNS entries, sometimes piggybacking on the prefetching of associated web objects.
  • prefetching of an object continues according to other prefetching techniques, until the point where the HTML response may be parsed.
  • a DNS lookup is performed to find the IP address for the request.
  • the IP address is then pushed to the client as part of the prefetch data package (e.g., including the URL, the prefetched object, etc.).
  • the client when the HTML response is received by the client, the client opens a prefetch socket.
  • the client may use the prefetch socket to begin receiving the prefetch data, for example, including the DNS lookup results.
  • the client is aware of what data is being prefetched and can make further requests accordingly. For example, when a DNS request is made by the client, the request may be intercepted to determine whether the request can be handled using a local DNS entry. If so, the DNS response is handled locally and a round trip may be avoided.
  • the client may wait to handle the request locally, even where the local DNS entry has not yet been fully received.
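  • The client-side interception described above can be sketched as follows. This is a hypothetical illustration: a DNS request is answered from a locally prefetched entry when one is available, waits briefly when the entry is still in flight, and otherwise falls back to a normal lookup. All names are the editor's assumptions:

```python
import threading

# Sketch of a client-side cache of prefetched DNS entries, with support for
# waiting on an entry that is being prefetched but has not yet arrived.

class PrefetchedDnsCache:
    def __init__(self):
        self._entries = {}   # hostname -> IP address (prefetched entries)
        self._pending = {}   # hostname -> Event set when the entry arrives
        self._lock = threading.Lock()

    def expect(self, hostname):
        """Mark an entry as being prefetched (not yet received)."""
        with self._lock:
            self._pending.setdefault(hostname, threading.Event())

    def deliver(self, hostname, ip_address):
        """Called when a prefetched DNS entry arrives from the server side."""
        with self._lock:
            self._entries[hostname] = ip_address
            event = self._pending.pop(hostname, None)
        if event:
            event.set()

    def resolve(self, hostname, timeout=1.0):
        """Answer locally if possible; wait for an in-flight entry; return
        None to signal that a normal (remote) lookup is needed."""
        with self._lock:
            if hostname in self._entries:
                return self._entries[hostname]
            event = self._pending.get(hostname)
        if event and event.wait(timeout):
            return self._entries.get(hostname)
        return None
```

A redirector intercepting DNS requests could call `resolve()` first; only when it returns None would the request go out over the high-latency link, avoiding a round trip in the common case.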
  • Embodiments may be implemented in the context of various types of systems and components. For example, some embodiments exploit functionality of, or operate within the context of, systems, such as those described above with reference to FIGS. 1 - 5 and 10. For the sake of clarity, embodiments are described in the context of a client-server communications system, like the system 600 discussed above with reference to FIG. 6.
  • a system 600 including a user system 602, a proxy client 612, and a proxy server 632.
  • the user system 602 may include a client graphical user interface (GUI) 610.
  • GUI 610 may allow a user to configure performance aspects of the system 600. For example, the user may adjust the compression parameters and/or algorithms, content filters (e.g., blocking illicit websites), and enable or disable various features used by the system 600.
  • some of the features may include network diagnostics, error reporting, as well as controlling, for example, functionality of the proxy server 632.
  • Such control may include adding and/or removing pages (i.e., URLs) to or from whitelist 648 and/or blacklist 649, etc.
  • a user accesses a website through the web browser 606, for example, by providing the URL of the website to web browser 606.
  • Rendering and/or using the website may typically include making a number of calls to other URLs.
  • the website may include embedded objects (e.g., advertisements, movies, sounds, images, etc.), links, etc.
  • Each of these URLs may represent a location on the Internet defined by an IP address.
  • To retrieve the objects associated with the URLs, each URL may first have to be resolved to its corresponding IP address. This may typically be accomplished by issuing a lookup request to a DNS.
  • One traditional implementation may include issuing the DNS lookup requests from the client side of the system 600 (e.g., from the proxy client 612 or another component of the user system 602).
  • a client-side component may issue a request to a DNS to resolve the IP address of the object's URL, after which, the same or another client-side component may request the object using its resolved IP address.
  • This may involve two requests to the Internet, which may result in two round trips.
  • each round trip is costly (e.g., where the round trip time is very long, as in a satellite communications system), client-side DNS lookup requests may be undesirable.
  • Another traditional implementation may include shifting the DNS lookup request role to the server side of the system 600 (e.g., to the proxy server 632).
  • user web browsers may be configured to use a server-side HTTP proxy for performing the DNS lookups. While this may avoid extra round trips incurred by performing the DNS lookups, the implementation may not be transparent to the user. For example, the configuration may involve affecting particular browser settings, running a client-side application, etc. Certain clients may not desire to configure their systems in this way for various reasons.
  • DNS lookups are implemented in such a way as to be relatively transparent to the user, while still avoiding extra round trips.
  • methods and systems may be substantially agnostic to the user's browser configuration, whether the user is running a particular application (e.g., a client-side optimization application), etc.
  • Embodiments implement prefetching of the DNS entries along with or separate from the prefetching of associated web objects.
  • prefetching of an object begins according to other prefetching techniques, like those described in U.S. Patent Application No. 12/172,913, filed on July 14, 2008, entitled “METHODS AND SYSTEMS FOR PERFORMING A PREFETCH ABORT OPERATION,” which is hereby incorporated by reference herein in its entirety for all purposes.
  • web browser 606 may check browser cache 604 to determine whether the website associated with the selected URL is located within browser cache 604. If the website is located within browser cache 604, the amount of time the website has been in the cache is checked to determine if the cached website is "fresh" (i.e., new) enough to use.
  • web browser 606 renders the cached page. However, if the website has either not been cached or the cached webpage is not fresh, web browser 606 sends a request to the Internet for the website.
  • redirector 608 intercepts the request sent from web browser 606.
  • Redirector 608 instead sends the request through a local bus 605 to proxy client 612.
  • proxy client 612 may be implemented as a software application running on user system 602.
  • proxy client 612 may be implemented on a separate computer system and is connected to user system 602 via a high speed/low latency link (e.g., a branch office LAN subnet, etc.).
  • proxy client 612 includes a request parser 616.
  • Request parser 616 may check cache optimizer 614 to determine if a cached copy of the requested website may still be able to be used.
  • Cache optimizer 614 is in communication with browser cache 604 in order to have access to cached websites.
  • Cache optimizer 614 is able to access browser cache 604 without creating a redundant copy of the cached websites, thus requiring less storage space.
  • cache optimizer 614 implements more effective algorithms to determine whether a cached website is fresh.
  • cache optimizer 614 may implement the cache expiration algorithms from HTTP v1.1 (i.e., RFC 2616), which may not be natively supported in web browser 606.
  • browser cache 604 may inappropriately consider a cached website as too old to use; however, cache optimizer 614 may still be able to use the cached website. More efficient use of cached websites can improve browsing efficiency by reducing the number of Internet accesses.
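  • A freshness computation of the kind the cache optimizer is described as applying can be sketched as follows. This is a minimal, simplified illustration of RFC 2616-style expiration (max-age, Expires, and the heuristic of 10% of the document's age at download time); header parsing is reduced to pre-parsed numeric values and all names are assumptions:

```python
import time

# Minimal sketch of an RFC 2616-style freshness check. `headers` holds
# pre-parsed values: 'cache-control' as a string, and 'expires', 'date',
# 'last-modified' as epoch-second floats.

def is_fresh(headers, stored_at, now=None):
    """Return True if a response stored at `stored_at` is still fresh."""
    now = time.time() if now is None else now
    age = now - stored_at
    for directive in headers.get("cache-control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return age < float(directive.split("=", 1)[1])
    if "expires" in headers and "date" in headers:
        return age < headers["expires"] - headers["date"]
    if "last-modified" in headers and "date" in headers:
        # heuristic: fresh for 10% of the document's age at download time
        return age < 0.1 * (headers["date"] - headers["last-modified"])
    return False
```

A browser applying only a crude age cutoff might discard a response that this computation would still treat as fresh, which is the inefficiency the cache optimizer is described as avoiding.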
  • request parser 616 checks prefetch manager 620 to determine if the requested website has been prefetched. Prefetching a website is when content from the website is accessed, downloaded, and stored before a request to the website is made by web browser 606. Prefetching can potentially save round-trips of data access from user system 602 to the Internet.
  • request parser 616 forwards the request to a request encoder 618.
  • Request encoder 618 encodes the request into a compressed version of the request using one of many possible data compression algorithms. For example, these algorithms may employ a coding dictionary 622 which stores strings so that data from previous web objects can be used to compress data from new pages. Accordingly, where the request for the website is 550 bytes in total, the encoded request may be as small as 50 bytes. This level of compression can save bandwidth on a connection, such as high latency link 630.
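  • Dictionary-aided request compression in the spirit of the coding dictionary above can be illustrated with zlib preset dictionaries. The sample dictionary contents and request are invented for the example; the disclosed system may use a different algorithm entirely:

```python
import zlib

# Illustration of request encoding with a shared coding dictionary: strings
# from previous requests seed a zlib preset dictionary so that common header
# boilerplate compresses to back-references. Dictionary contents are invented.

CODING_DICTIONARY = (
    b"GET / HTTP/1.1\r\nHost: \r\nUser-Agent: Mozilla/5.0\r\n"
    b"Accept-Encoding: gzip, deflate\r\nConnection: keep-alive\r\n"
)

def encode_request(request: bytes) -> bytes:
    comp = zlib.compressobj(zdict=CODING_DICTIONARY)
    return comp.compress(request) + comp.flush()

def decode_request(encoded: bytes) -> bytes:
    decomp = zlib.decompressobj(zdict=CODING_DICTIONARY)
    return decomp.decompress(encoded) + decomp.flush()
```

Both ends must hold the same dictionary, which mirrors the paired coding dictionaries 622 and 638 on the client and server sides.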
  • high latency link 630 may be a wireless link, a cellular link, a satellite link, a dial-up link, etc.
  • protocol 628 is Intelligent Compression Technology's® (ICT) transport protocol (ITP). Nonetheless, other protocols may be used, such as the standard transmission control protocol (TCP).
  • ITP maintains a persistent connection with proxy server 632. The persistent connection between proxy client 612 and proxy server 632 enables system 600 to eliminate the inefficiencies and overhead costs associated with creating a new connection for each request.
  • the encoded request is forwarded from protocol 628 to request decoder 636.
  • Request decoder 636 uses a decoder which is appropriate for the encoding performed by request encoder 618. In one embodiment, this process utilizes a coding dictionary 638 in order to translate the encoded request back into a standard format which can be accessed by the destination website.
  • the HTTP request includes a cookie (or other special instructions), such as a "referred by" or type of encoding accepted, information about the cookie or instructions may be stored in a cookie model 652.
  • Request decoder 636 then transmits the decoded request to the destination website over a low latency link 656.
  • Low latency link 656 may be, for example, a cable modem connection, a digital subscriber line (DSL) connection, a T1 connection, a fiber optic connection, etc.
  • a response parser 644 receives a response from the requested website.
  • this response may include an attachment, such as an image and/or text file.
  • Some types of attachments such as HTML, XML, CSS, or Java Scripts, may include references to other "in-line" objects that may be needed to render a requested web page.
  • response parser 644 may forward the objects to a prefetch scanner 646.
  • prefetch scanner 646 scans the attached file and identifies URLs of in-line objects that may be candidates for prefetching.
  • objects that may be needed for the web page may also be specified in Java scripts that appear within the HTML or CSS page or within a separate Java script file. Methods for identifying candidates within Java scripts may be found in a co-pending U.S. Patent Application No. 12/172,917, entitled “METHODS AND SYSTEMS FOR JAVA SCRIPT PARSING” (Attorney Docket No. 026841- 00021 OUS), filed July 14, 2008, which is incorporated by reference for all purposes.
  • prefetch scanner 646 may notify prefetch response abort 642 of the context in which the object was identified, such as the type of object in which it was found and/or the syntax in which the URL occurred. This information may be used by prefetch response abort 642 to determine the probability that the URL will actually be requested by web browser 606.
  • the candidate list is forwarded to whitelist 648 and blacklist 649.
  • Whitelist 648 and blacklist 649 may be used to track which URLs should be allowed to be prefetched based on the host (i.e., the server that is supplying the URL), the file type (e.g., application service provider (ASP) files should not be prefetched), etc. Accordingly, whitelist 648 and blacklist 649 control prefetching behavior by indicating which URLs on the candidate list should or should not be prefetched. In many instances with certain webpages/file types, prefetching may not work. In addition to ASP files, webpages which include fields or cookies may have problems with prefetching.
  • a modified candidate list is generated and then the list is forwarded to a client cache model 650.
  • the client cache model 650 attempts to model which items from the list will be included in browser cache 604. As such, those items are removed from the modified candidate list.
  • the updated modified candidate list is forwarded to a request synthesizer 654 which creates an HTTP request in order to prefetch each item in the updated modified candidate list.
  • the HTTP request header may include cookies and/or other instructions appropriate to the website and/or to web browser 606's preferences using information obtained from cookie model 652.
  • the prefetch HTTP requests may then be transmitted through low latency link 656 to the corresponding website.
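  • The candidate-filtering flow described above (scanner candidates passing through the whitelist/blacklist and then the client cache model before requests are synthesized) can be sketched as follows. All names are illustrative, and the list predicates are stand-ins for whatever matching the real components perform:

```python
# Sketch of the prefetch candidate pipeline: drop blacklisted URLs (e.g.,
# ASP pages), keep only whitelisted ones, and remove URLs the client cache
# model believes the browser already holds. Names are assumptions.

def filter_candidates(candidates, whitelist, blacklist, client_cache_model):
    """candidates: iterable of URLs from the prefetch scanner.
    whitelist/blacklist: callables returning True when the URL matches.
    client_cache_model: set of URLs believed cached on the client side.
    Returns the updated modified candidate list."""
    allowed = []
    for url in candidates:
        if blacklist(url):
            continue          # e.g., ASP pages, cookie-bearing URLs
        if not whitelist(url):
            continue
        if url in client_cache_model:
            continue          # the browser likely has this object already
        allowed.append(url)
    return allowed
```

Each surviving URL would then have an HTTP request synthesized for it, as the request synthesizer 654 is described as doing.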
  • response parser 644 receives a prefetch response from the website and accesses a prefetch response abort 642.
  • Prefetch response abort 642 is configured to determine whether the prefetched item is worth sending to user system 602.
  • Prefetch response abort 642 bases its decision whether to abort a prefetch on a variety of factors, which are discussed above in more detail.
  • response parser 644 forwards the response to response encoder 640.
  • Response encoder 640 accesses coding dictionary 638 in order to encode the prefetched response.
  • Response encoder 640 then forwards the encoded response through protocol 628 over high latency link 630 and then to response decoder 626.
  • Response decoder 626 decodes the response and forwards the response to response manager 624.
  • response manager 624 creates a prefetch socket to receive the prefetched item as it is downloaded.
  • URLs associated with the requested (e.g., prefetch) objects may have to be resolved to determine corresponding IP addresses.
  • a DNS lookup may be performed to resolve the URLs for each prefetch object.
  • the DNS lookup result may be added to the prefetch data pushed to the client.
  • the response encoder 640 forwards the encoded response through protocol 628 over high latency link 630 to response decoder 626, the response includes the DNS lookup results (e.g., the IP address associated with the URL).
  • the DNS lookup results may be received at the client as they are downloaded via the prefetch socket created by response manager 624.
  • when response decoder 626 decodes the response, it strips certain data relating to the DNS lookup and creates a DNS prefetch entry.
  • the DNS prefetch entry may include the URL and its associated IP address.
  • Certain embodiments may store the DNS entry locally for future use. Other embodiments temporarily store the DNS entry (e.g., in a scratch pad) in anticipation of an impending request. For example, when the DNS information is received, it may be assumed that a request for that DNS will be made shortly thereafter, if at all.
  • Response manager 624 transmits the response over local bus 605 to redirector 608.
  • Redirector 608 then forwards the response to web browser 606 which renders the content of the response.
  • rendering the content may involve requesting one or more content objects from the web (e.g., videos, images, sounds, etc.).
  • Each content object may be located at a URL, and each URL may have to be resolved to a valid host IP address prior to requesting the content object.
  • resolving the URLs may typically involve querying a DNS to find the associated IP address.
  • the DNS lookup request may be intercepted by the redirector 608.
  • Redirector 608 may instead send the request through a local bus 605 to proxy client 612.
  • DNS entries may have been prefetched and stored locally, or may be in the process of being prefetched.
  • the request parser 616 in the proxy client may check prefetch manager 620 to determine if the requested DNS lookup has been, or is in the process of being, prefetched.
  • the DNS request may be handled locally. For example, if it is determined that the DNS request can be handled locally, response manager 624 may transmit the DNS response over local bus 605 to redirector 608. The object request can then proceed without first making a round trip (e.g., across high latency link 630) to the DNS. If it is determined that the DNS request cannot be handled locally, it may be passed along for normal processing (e.g., over high latency link 630 to the DNS).
  • the HTML response may be received at the client, and web browser 606 may begin requesting DNS lookups, before the respective DNS prefetch entries have been created (e.g., before they have finished downloading).
  • the client may be made aware of what will be prefetched as part of the HTML response.
  • embodiments of the client receive the DNS lookup results via a prefetch socket (e.g., acting as a DNS proxy) configured by the client to receive particular prefetch objects.
  • the client may be aware that the DNS information is in the process of being prefetched. Consequently, the client may decide to wait for the local DNS entry to be completely prefetched to allow local handling of the DNS request.
  • certain objects may not be prefetched, even where the URL is embedded or otherwise part of an HTTP request. For example, it may be determined that it would be inefficient to prefetch the object because of its size, because there is a very low probability that the object will ultimately be requested by the user, because the URL represents a link (e.g., HREF) or other web item that is not a prefetch candidate, because the object is on the blacklist 649, etc. Certain embodiments still prefetch the DNS and create a local DNS entry, even when it is determined not to prefetch the associated object. In fact, some embodiments prefetch all DNS information, whenever practical.
  • piggybacking the DNS lookup result (the IP address) onto the URL when an object is prefetched may only add a few bytes (e.g., 4 or 16 bytes to describe typical IPv4 or IPv6 addresses) to the prefetch data package. Even if the DNS lookup is pushed without an associated object (e.g., along with other data), the total additional prefetch data may still be minimal. As such, the cost of prefetching the DNS entry may be very small compared to the cost of the round trip, particularly where round trip times are large (e.g., in a satellite communications system).
  • Some embodiments perform a cost-benefit analysis for prefetching DNS entries that is similar to that described above with reference to FIG. 14. For example, referring to the equations described above with reference to FIG. 14, object size factors primarily (e.g., or only) into the cost, and RTT factors primarily (e.g., or only) into the benefit.
  • the object size is very small (e.g., the DNS lookup result involves only a small number of bytes)
  • the prefetch cost may be very small.
  • the RTT is relatively constant (e.g., the time to traverse the satellite link may not change much)
  • the prefetch benefit may be relatively constant.
  • the benefits may clearly outweigh the costs predicted for prefetching the DNS entries. Moreover, the costs become even further outweighed as the RTT increases.
  • the size of the DNS prefetch data may be very small, regardless of whether the DNS data is prefetched on its own (e.g., with the associated URL) or as a piggyback operation along with prefetching an object. Consequently, it may be efficient to prefetch DNS data, even where no associated objects are prefetched. For example, even where the method 200 of FIG. 2 results in a determination to abort the prefetch operation, it may still be efficient to push associated DNS data to the client.
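The cost-benefit reasoning in the items above can be sketched as a simple decision function. This is a minimal illustration under stated assumptions, not the document's actual algorithm: the function name, the link rate, and the hit-probability parameter are all introduced for the example.

```python
def should_prefetch_dns(result_size_bytes, rtt_ms, hit_probability,
                        link_rate_bps=500_000):
    """Decide whether to push a DNS lookup result to the client.

    Per the discussion above: object size factors primarily into the
    cost (transfer time of the pushed bytes), while RTT factors
    primarily into the benefit (the round trip saved if the client
    would otherwise query the DNS over the high-latency link).
    """
    cost_ms = result_size_bytes * 8 / link_rate_bps * 1000   # transfer time
    benefit_ms = hit_probability * rtt_ms                    # expected RTT saved
    return benefit_ms > cost_ms

# A few bytes of DNS data over a satellite link (RTT ~ 600 ms) are nearly
# free to push, so prefetching wins even at low hit probabilities.
```

Because a DNS result is only a handful of bytes, the cost term stays tiny and roughly constant, matching the observation that the benefit clearly outweighs the cost, and does so increasingly as RTT grows.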

Abstract

Methods, systems, devices, and software are provided for improving performance of a communications system, particularly in the context of web communications. Some embodiments provide techniques for URL masking, for example, to allow prefetchers and caches to work even when the URLs are constructed using scripts intended to block such behavior. Other embodiments implement cache cycling techniques, for example, to issue a fresh request to the content provider for website content each time the proxy server serves a request from cached data. Still other embodiments provide accumulation and/or caching techniques for optimizing performance of an accelerator abort system. And in other embodiments, DNS entries are prefetched to reduce DNS lookup times. For example, DNS prefetch functionality may be used to begin locally satisfying DNS lookup requests at the client, even when the DNS lookup request is made before the DNS prefetch is complete.

Description

WEB OPTIMIZATION
CROSS-REFERENCES
[0001] This application is a non-provisional application which claims priority to U.S. Provisional Application No. 61/143,933, entitled "WEB OPTIMIZATION OVER SATELLITE LINKS" (Attorney Docket No. 017018-019600US), filed on January 12, 2009, which is incorporated by reference in its entirety for any and all purposes.
[0002] This application also claims priority to U.S. Patent Application No. 12/571,281, entitled "METHODS AND SYSTEMS FOR IMPLEMENTING URL MASKING" (Attorney Docket No. 017018-019610US), filed on September 30, 2009; U.S. Patent Application No. 12/571,288, entitled "CACHE CYCLING" (Attorney Docket No. 017018-019620US), filed on September 30, 2009; and U.S. Patent Application No. 12/619,095, entitled "ACCUMULATOR FOR PREFETCH ABORT" (Attorney Docket No. 017018-019630US), filed on November 16, 2009; all of which are incorporated by reference in their entirety for any and all purposes.
[0003] This application also relates to U.S. Patent Application No. 12/685,691 , entitled "DNS PREFETCH" (Attorney Docket No. 017018-019640US), filed on January 12, 2010, which is incorporated by reference in its entirety for any and all purposes.
FIELD
[0004] The present invention relates, in general, to network acceleration, and more particularly, to URL masking, cache cycling, prefetch accumulation, DNS prefetching, and/or other types of network acceleration functionality.
BACKGROUND
[0005] Presently, cold access (first visit on clear cache) to popular sites is a well-established metric for user experience on a public network, as it is the operation in which network performance is most clearly and frequently apparent to the end user. Consequently, improvements in this metric can play a significant role in driving consumer purchasing decisions, such as in selecting network access providers or deciding whether to use an acceleration service.
[0006] Performance for satellite access to commercial web sites can be significantly improved. Currently, effective solutions exist for many of the issues affecting satellite performance. For example, optimal transport protocols and compression are effective at reducing the number of bytes downloaded. However, another aspect of network acceleration involves the number of RTTs. At present, a majority of the objects for cold access to public sites are prefetched, and many of the non-prefetched requests occur sequentially because URL references within Java scripts have to be resolved before subsequent HTML data can be parsed. Over a broadband satellite link, these accumulated RTTs may be a large contributor to download times.
[0007] These and other delays and aspects of web communications over various communications systems cause sub-optimal performance. Hence, improvements in the art are needed.
BRIEF SUMMARY
[0008] Among other things, methods, systems, devices, and software are provided for improving performance of a communications system, particularly in the context of web communications.
[0009] According to some embodiments, a URL masking algorithm is provided to allow prefetchers and caches to work even when the URLs are constructed using scripts intended to block such behavior. For example, certain cache-busting techniques generate portions of the URL string, using Java scripts, to include unique values (e.g., random numbers, timestamps, etc.). As such, prefetchers may be fooled into thinking objects at the URL have not yet been prefetched, when in fact they have. Embodiments mask these cache-busting portions of the URL string to allow the prefetcher to recognize the request as a previously prefetched URL.
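The masking idea can be illustrated with a short sketch. The parameter names in CACHE_BUSTER_PARAMS are hypothetical examples of script-generated unique values; an actual implementation would derive its masking rules from parsing the scripts themselves.

```python
# Query parameters commonly used as cache-busters (illustrative list only).
CACHE_BUSTER_PARAMS = {"rand", "random", "ts", "timestamp", "cb", "nocache", "_"}

def mask_url(url: str) -> str:
    """Return a masked key for prefetcher/cache lookup by removing query
    parameters whose values are unique per request, so two script-generated
    requests for the same object map to the same key."""
    if "?" not in url:
        return url
    base, query = url.split("?", 1)
    kept = [p for p in query.split("&")
            if p.split("=", 1)[0].lower() not in CACHE_BUSTER_PARAMS]
    return base + ("?" + "&".join(kept) if kept else "")

mask_url("http://example.com/ad.js?ts=1263280000&slot=3")
mask_url("http://example.com/ad.js?ts=1263280123&slot=3")
# both map to "http://example.com/ad.js?slot=3", so the prefetcher
# recognizes the second request as a previously prefetched URL.
```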
[0010] According to other embodiments, cache cycling is used to issue a fresh request to the content provider for website content each time the proxy server serves a request from cached data. For example, URL masking may allow a prefetcher to operate in the context of a cache-busting algorithm. Using prefetched content may reduce the apparent number of times the URL is requested, which may reduce advertising revenue and other metrics based on the number of requests. Cache cycling embodiments maintain the request metrics while allowing optimal prefetching in the face of cache-busting techniques.
[0011] According to other embodiments, a number of techniques are provided for optimizing prefetcher functionality. In one embodiment, an accumulator is provided for optimizing performance of an accelerator abort system. Chunked content (e.g., in HTTP chunked mode) is accumulated until enough data is available to make an abort decision. In another embodiment, socket mapping architectures are adjusted to allow prefetching of content copies for URLs requested multiple times on the same page. In yet another embodiment, persistent storage is adapted to cache prefetched, but unused data, and to provide access to the data to avoid subsequent redundant prefetching.
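One way such an accumulator might be organized is sketched below, using a size-only abort policy for illustration; the actual abort decision weighs the additional factors discussed in this document, and the class name and threshold value are assumptions.

```python
class PrefetchAccumulator:
    """Accumulate chunked (HTTP chunked-mode) response data until enough
    is buffered to make an abort decision."""

    def __init__(self, abort_threshold=256 * 1024):
        self.abort_threshold = abort_threshold  # illustrative size cap
        self.buffer = bytearray()
        self.decision = None  # None until a decision is reached

    def feed(self, chunk):
        """Accumulate one chunk; return 'abort', 'send', or None (undecided)."""
        if self.decision is not None:
            return self.decision
        self.buffer.extend(chunk)
        if len(self.buffer) > self.abort_threshold:
            # Object has grown too costly to push over the high-latency link.
            self.decision = "abort"
        return self.decision

    def finish(self):
        """Final (zero-length) chunk seen: anything not aborted is sent."""
        if self.decision is None:
            self.decision = "send"
        return self.decision
```

Because chunked responses carry no up-front content length, deferring the decision until data has accumulated lets the abort system act on actual object size rather than guessing at it.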
[0012] In still another embodiment, DNS transparent proxy and prefetch is integrated with HTTP transparent proxy and prefetch, so as to piggyback DNS information with HTTP frames. Prefetching may be provided for the DNS associated with all host names called in Java scripts to reduce the number of requests needed to the DNS server. The DNS prefetch functionality may be used to begin locally satisfying DNS lookup requests at the client, even when the DNS lookup request is made before the DNS prefetch is complete.
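The piggybacking of DNS information with HTTP prefetch frames might look like the following hypothetical wire format (not the document's actual protocol). It illustrates how carrying the resolved address adds only four bytes per IPv4 host:

```python
import socket
import struct

def pack_prefetch_frame(url: str, ip: str, body: bytes) -> bytes:
    """Hypothetical frame: piggyback the resolved IPv4 address onto a
    prefetched object so the client can satisfy DNS lookups locally."""
    url_b = url.encode()
    ip_b = socket.inet_aton(ip)  # 4 bytes for an IPv4 address
    return (struct.pack("!H", len(url_b)) + url_b + ip_b +
            struct.pack("!I", len(body)) + body)

def unpack_prefetch_frame(frame: bytes):
    """Client side: recover the URL, the DNS prefetch entry, and the body."""
    (url_len,) = struct.unpack_from("!H", frame, 0)
    url = frame[2:2 + url_len].decode()
    ip = socket.inet_ntoa(frame[2 + url_len:6 + url_len])
    (body_len,) = struct.unpack_from("!I", frame, 6 + url_len)
    body = frame[10 + url_len:10 + url_len + body_len]
    return url, ip, body
```

On the client, the (url, ip) pair would populate a local DNS prefetch entry, allowing a subsequent lookup for that host to be answered without a round trip over the high-latency link.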
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings wherein like reference numerals are used throughout the several drawings to refer to similar components. In some instances, a sublabel is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sublabel, it is intended to refer to all such multiple similar components.
[0014] FIG. 1 is a block diagram illustrating satellite communications, according to one embodiment of the present invention.
[0015] FIG. 2 is a block diagram illustrating a gateway, according to one embodiment of the present invention.
[0016] FIG. 3 is a block diagram illustrating a subscriber terminal, according to one embodiment of the present invention.
[0017] FIG. 4 is a generalized schematic diagram illustrating a computer system, in accordance with various embodiments of the invention.
[0018] FIG. 5 is a block diagram illustrating a networked system of computers, which can be used in accordance with various embodiments of the invention.
[0019] FIG. 6 is a block diagram illustrating a system for implementing prefetching, according to one embodiment of the present invention.
[0020] FIGS. 7A and 7B are block diagrams illustrating a network acceleration module, according to one embodiment of the present invention.
[0021] FIG. 8 is a flow diagram illustrating a method for implementing URL masking, according to one embodiment of the present invention.
[0022] FIG. 9 is a flow diagram illustrating a method for further implementing URL masking, according to one embodiment of the present invention.
[0023] FIG. 10 is a block diagram illustrating a system for implementing URL masking, according to one embodiment of the present invention.
[0024] FIG. 11 illustrates a system of implementing a prior art HTTP cache.
[0025] FIG. 12 illustrates how a cache with cache cycling is used in conjunction with URL masking, according to various embodiments.
[0026] FIG. 13 illustrates a system for implementing cache cycling, in accordance with aspects of various embodiments.
[0027] FIG. 14 illustrates an embodiment of a method performed by a prefetch response abort.
[0028] FIG. 15 shows a flow diagram of an illustrative method for prefetching using an accumulator, according to various embodiments.
[0029] FIG. 16 shows relevant portions of an illustrative communications system, including an accumulator for a prefetch abort system, according to various embodiments.
[0030] FIG. 17 shows an illustrative method for exploiting accumulated data to further optimize prefetch abort operations, according to various embodiments.
DETAILED DESCRIPTION
[0031] The ensuing description provides exemplary embodiment(s) only, and is not intended to limit the scope, applicability or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiment(s) will provide those skilled in the art with an enabling description for implementing an exemplary embodiment, it being understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope as set forth in the appended claims. Some of the various exemplary embodiments may be summarized as follows.
[0032] Referring first to FIG. 1, a block diagram is shown of an embodiment of a satellite communications system 100 for use with various embodiments. The satellite communications system 100 includes a network 120, such as the Internet, interfaced with a gateway 115 that is configured to communicate with one or more subscriber terminals 130, via a satellite 105. A gateway 115 is sometimes referred to as a hub or ground station. Subscriber terminals 130 are sometimes called modems, satellite modems, or user terminals. As noted above, although the communications system 100 is illustrated as a geostationary satellite 105 based communication system, it should be noted that various embodiments described herein are not limited to use in geostationary satellite based systems; for example, some embodiments could be low earth orbit ("LEO") satellite based systems or aerial payloads not in orbit and held aloft by planes, blimps, weather balloons, etc. Other embodiments could have a number of satellites instead of just one.
[0033] The network 120 may be any type of network and can include, for example, the Internet, an Internet protocol ("IP") network, an intranet, a wide-area network ("WAN"), a local-area network ("LAN"), a virtual private network ("VPN"), the Public Switched Telephone Network ("PSTN"), and/or any other type of network supporting data communication between devices described herein, in different embodiments. A network 120 may include both wired and wireless connections, including optical links. As illustrated in a number of embodiments, the network 120 may connect the gateway 115 with other gateways (not shown), which are also in communication with the satellite 105.
[0034] The gateway 115 provides an interface between the network 120 and the satellite 105. The gateway 115 may be configured to receive data and information directed to one or more subscriber terminals 130, and can format the data and information for delivery to the respective destination device via the satellite 105. Similarly, the gateway 115 may be configured to receive signals from the satellite 105 (e.g., from one or more subscriber terminals 130) directed to a destination in the network 120, and can process the received signals for transmission along the network 120.
[0035] A device (not shown) connected to the network 120 may communicate with one or more subscriber terminals 130. Data and information, for example IP datagrams, may be sent from a device in the network 120 to the gateway 115. It will be appreciated that the network 120 may be in further communication with a number of different types of providers, including content providers, application providers, service providers, etc. Further, in various embodiments, the providers may communicate content with the satellite communication system 100 through the network 120, or through other components of the system (e.g., directly through the gateway 115).
[0036] The gateway 115 may format frames in accordance with a physical layer definition for transmission to the satellite 105. A variety of physical layer transmission modulation and coding techniques may be used with certain embodiments, including those defined with the DVB-S2 standard. The link 135 from the gateway 115 to the satellite 105 may be referred to hereinafter as the downstream uplink 135. The gateway 115 uses the antenna 110 to transmit the content to the satellite 105. In one embodiment, the antenna 110 comprises a parabolic reflector with high directivity in the direction of the satellite and low directivity in other directions.
[0037] In one embodiment, a geostationary satellite 105 is configured to receive the signals from the location of antenna 110 and within the frequency band and specific polarization transmitted. The satellite 105 may, for example, use a reflector antenna, lens antenna, array antenna, active antenna, or other mechanism for reception of such signals. The satellite 105 may process the signals received from the gateway 115 and forward the signal from the gateway 115 containing the MAC frame to one or more subscriber terminals 130. In one embodiment, the satellite 105 operates in a multi-beam mode, transmitting a number of narrow beams each directed at a different region of the earth.
[0038] With such a multibeam satellite 105, there may be any number of different signal switching configurations on the satellite 105, allowing signals from a single gateway 115 to be switched between different spot beams. In one embodiment, the satellite 105 may be configured as a "bent pipe" satellite, wherein the satellite may frequency convert the received carrier signals before retransmitting these signals to their destination, but otherwise perform little or no other processing on the contents of the signals. There could be a single carrier signal for each service spot beam or multiple carriers in different embodiments. Similarly, single or multiple carrier signals could be used for the feeder spot beams. A variety of physical layer transmission modulation and coding techniques may be used by the satellite 105 in accordance with certain embodiments, including those defined with the DVB-S2 standard. For other embodiments, a number of configurations are possible (e.g., using LEO satellites, or using a mesh network instead of a star network).
[0039] The service signals transmitted from the satellite 105 may be received by one or more subscriber terminals 130, via the respective subscriber antenna 125. In one embodiment, the subscriber antenna 125 and terminal 130 together comprise a very small aperture terminal ("VSAT"), with the antenna 125 measuring approximately 0.6 meters in diameter and having approximately 2 watts of power. In other embodiments, a variety of other types of subscriber antennae 125 may be used at the subscriber terminal 130 to receive the signal from the satellite 105. The link 150 from the satellite 105 to the subscriber terminals 130 may be referred to hereinafter as the downstream downlink 150. Each of the subscriber terminals 130 may include a hub or router (not pictured) that is coupled to multiple subscriber terminals 130.
[0040] In some embodiments, some or all of the subscriber terminals 130 are connected to consumer premises equipment ("CPE") 160. CPE may include, for example, computers, local area networks, Internet appliances, wireless networks, etc. A subscriber terminal 130, for example 130-a, may transmit data and information to a network 120 destination via the satellite 105. The subscriber terminal 130 transmits the signals via the upstream uplink 145-a to the satellite 105 using the subscriber antenna 125-a. The link from the satellite 105 to the gateway 115 may be referred to hereinafter as the upstream downlink 140.
[0041] In various embodiments, one or more of the satellite links 135, 140, 145, 150 are capable of communicating using one or more communication schemes. In various embodiments, the communication schemes may be the same or different for different links. The communication schemes may include different types of coding and modulation combinations. For example, various satellite links may communicate using physical layer transmission modulation and coding techniques using adaptive coding and modulation schemes, etc. The communication schemes may also use one or more different types of multiplexing schemes, including Multi-Frequency Time-Division Multiple Access ("MF-TDMA"), Time Division Multiple Access ("TDMA"), Frequency Division Multiple Access ("FDMA"), Orthogonal Frequency Division Multiple Access ("OFDMA"), Code Division Multiple Access ("CDMA"), or any number of other schemes.
[0042] In a given satellite spot beam, all customers serviced by the spot beam may be capable of receiving all the content traversing the spot beam by virtue of the fact that the satellite communications system 100 employs wireless communications via various antennae (e.g., 110 and 125). However, some of the content may not be intended for receipt by certain customers. As such, the satellite communications system 100 may use various techniques to "direct" content to a subscriber or group of subscribers. For example, the content may be tagged (e.g., using packet header information according to a transmission protocol) with a certain destination identifier (e.g., an IP address) or use different modcode points. Each subscriber terminal 130 may then be adapted to handle the received data according to the tags. For example, content destined for a particular subscriber terminal 130 may be passed on to its respective CPE 160, while content not destined for the subscriber terminal 130 may be ignored. In some cases, the subscriber terminal 130 caches information not destined for the associated CPE 160 for use if the information is later found to be useful in avoiding traffic over the satellite link.
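The tag-based handling described above can be sketched as follows; the packet fields and the opportunistic caching policy are illustrative assumptions, not the document's exact mechanism:

```python
def handle_packet(packet: dict, my_addresses: set, cache: dict) -> str:
    """Spot-beam filtering at a subscriber terminal 130: deliver content
    tagged for this terminal's CPE; otherwise, opportunistically cache it
    in case it later helps avoid traffic over the satellite link."""
    if packet["dest"] in my_addresses:
        return "deliver"  # pass on to the associated CPE 160
    # Not ours: cache the payload keyed by a content identifier.
    cache[packet["content_id"]] = packet["payload"]
    return "ignore"
```

Because every terminal in the spot beam physically receives the same transmission, this kind of destination-tag check is what turns a broadcast medium into logically "directed" delivery.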
[0043] FIG. 2 shows a simplified block diagram 200 illustrating an embodiment of a gateway 115 coupled between the network 120 and an antenna 110, according to various embodiments. The gateway 115 has a number of components, including a network interface module 210, a satellite modem termination system ("SMTS") 230, and a gateway transceiver module 260. Components of the gateway 115 may be implemented, in whole or in part, in hardware. Thus, they may comprise one or more Application Specific Integrated Circuits ("ASICs") adapted to perform a subset of the applicable functions in hardware. Alternatively, the functions may be performed by one or more other processing units (or cores), on one or more integrated circuits. In other embodiments, other types of integrated circuits may be used (e.g., Structured/Platform ASICs, Field Programmable Gate Arrays ("FPGAs") and other Semi-Custom ICs), which may be programmed. Each may also be implemented, in whole or in part, with instructions embodied in a computer-readable medium, formatted to be executed by one or more general or application specific controllers.
[0044] Embodiments of the gateway 115 receive data from the network 120 (e.g., the network 120 of FIG. 1), including data originating from one or more origin servers 205 (e.g., content servers) and destined for one or more subscribers in a spot beam. The data is received at the network interface module 210, which includes one or more components for interfacing with the network 120. For example, the network interface module 210 includes a network switch and a router.
[0045] In some embodiments, the network interface module 210 interfaces with other modules, including a third-party edge server 212 and/or a traffic shaper module 214. The third-party edge server 212 may be adapted to mirror content (e.g., implementing transparent mirroring, like would be performed in a point of presence ("POP") of a content delivery network ("CDN")) to the gateway 115. For example, the third-party edge server 212 may facilitate contractual relationships between content providers and service providers to move content closer to subscribers in the satellite communication network 100. The traffic shaper module 214 controls traffic from the network 120 through the gateway 115, for example, to help optimize performance of the satellite communication system 100 (e.g., by reducing latency, increasing effective bandwidth, etc.). In one embodiment, the traffic shaper module 214 delays packets in a traffic stream to conform to a predetermined traffic profile.
[0046] Traffic is passed from the network interface module 210 to the SMTS 230 to be handled by one or more of its component modules. In some embodiments, the SMTS 230 includes a gateway accelerator module 250, a scheduler module 235, and support modules 246. In some embodiments, all traffic from the network interface module 210 is passed to the gateway accelerator module 250 for handling, as described more fully below. In other embodiments, some or all of the traffic from the gateway accelerator module 250 is passed to the support modules 246. For example, in one embodiment, real-time types of data (e.g., User Datagram Protocol ("UDP") data traffic, like Internet-protocol television ("IPTV") programming) bypass the gateway accelerator module 250, while non-real-time types of data (e.g., Transmission Control Protocol ("TCP") data traffic, like web video) are routed through the gateway accelerator module 250 for processing.
[0047] Embodiments of the gateway accelerator module 250 provide various types of application, WAN/LAN, and/or other acceleration functionality. In one embodiment, the gateway accelerator module 250 implements functionality of AcceleNet applications from Intelligent Compression Technologies, Inc. ("ICT"), a division of ViaSat, Inc. This functionality may be used to exploit information from application layers of the protocol stack (e.g., layers 4 - 7 of the IP stack) through use of software or firmware operating in the subscriber terminal 130 and/or CPE 160.
[0048] Embodiments of the gateway accelerator module 250 also include a gateway parser module 252, a gateway prefetcher module 254, and/or a gateway masker module 246. The gateway parser module 252 provides various script parsing functions for supporting functionality of the gateway accelerator module 250. For example, the gateway parser module 252 may be configured to implement advanced parsing of Java scripts to interpret web requests for use in prefetching.
[0049] Prefetching functionality may be implemented through the gateway prefetcher module 254 in the gateway accelerator module 250. Embodiments of the gateway prefetcher module 254 handle one or more of various prefetching functions, including receiving and interpreting instructions from other components of the gateway accelerator module 250 as to what objects to prefetch, receiving and interpreting instructions from components of the subscriber terminal 130, generating and/or sending instructions to one or more content servers to retrieve prefetch objects, keeping track of prefetched and/or cached content, directing objects to be cached (e.g., in the gateway cache module 220), etc.
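The bookkeeping responsibilities listed above (tracking which objects have been prefetched or are under way, so redundant fetches are avoided) might be sketched as follows; the class and method names are illustrative, not from this document:

```python
class GatewayPrefetcher:
    """Illustrative tracker for a gateway prefetcher's bookkeeping:
    which URLs are already prefetched and which are still in flight."""

    def __init__(self):
        self.in_flight = set()   # prefetches started but not finished
        self.completed = set()   # prefetched (and possibly cached) URLs

    def request(self, url: str) -> bool:
        """Start a prefetch unless it is already done or under way.
        Returns True only when a new fetch should be issued."""
        if url in self.completed or url in self.in_flight:
            return False
        self.in_flight.add(url)
        return True

    def complete(self, url: str) -> None:
        """Mark a prefetch as finished (e.g., object now in the cache)."""
        self.in_flight.discard(url)
        self.completed.add(url)
```

Distinguishing "in flight" from "completed" matters on a high-latency link: a second request arriving mid-fetch should wait on the pending transfer rather than trigger a duplicate one.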
[0050] In some embodiments, functionality of the gateway prefetcher module 254 and/or the gateway parser module 252 is optimized by other components of the gateway accelerator module 250. For example, requested URLs embedded in Java script may be parsed by the gateway parser module 252, and related objects may be prefetched by the gateway prefetcher module 254. However, certain cache-busting techniques may limit the effectiveness of the gateway prefetcher module 254 (e.g., by fooling the gateway parser module 252). Embodiments of the gateway masker module 246 are configured to implement URL masking to counter these cache-busting techniques, as discussed more fully below.
[0051] In some embodiments, the gateway accelerator module 250 is adapted to provide high payload compression. For example, the gateway accelerator module 250 may compress the payload such that, in some cases, over 70% of upload traffic when browsing the web is used by transport management and items other than the compressed payload data. In other embodiments, functionality of the gateway accelerator module 250 is closely integrated with the satellite link through components of the SMTS 230 to reduce upload bandwidth requirements and/or to more efficiently schedule the satellite link (e.g., by communicating with the scheduler module 235). For example, the link layer may be used to determine whether packets are successfully delivered, and those packets can be tied more closely with the content they supported through application layer information. In certain embodiments, these and/or other functions of the gateway accelerator module 250 are provided by a proxy server 255 resident on (e.g., or in communication with) the gateway accelerator module 250.
[0052] In some embodiments, the proxy server 255 is implemented with multiple servers. Each of the multiple servers may be configured to handle a portion of the traffic passing through the gateway accelerator module 250. It is worth noting that functionality of various embodiments described herein use data which, at times, may be processed across multiple servers. As such, one or more server management modules may be provided for processing (e.g., tracking, routing, partitioning, etc.) data across the multiple servers. For example, when one server within the proxy server 255 receives a request from a subscriber terminal 130 on the spot beam, the server management module may process that request in the context of other similar requests received at other servers in the proxy server 255.
[0053] Data processed by the gateway accelerator module 250 may pass through the support modules 246 to the scheduler 235. Embodiments of the support modules 246 include one or more types of modules for supporting the functionality of the SMTS 230, for example, including a multicaster module 240, a fair access policy ("FAP") module 242, and an adaptive coding and modulation ("ACM") module 244. In certain embodiments, some or all of the support modules 246 include off-the-shelf types of components.

[0054] Embodiments of the multicaster module 240 provide various functions relating to multicasting of data over the links of the satellite communication system 100. Certain embodiments of the multicaster module 240 use data generated by other components of the SMTS 230 (e.g., the gateway accelerator module 250) to prepare traffic for multicasting. For example, the multicaster module 240 may prepare datagrams as a multicast stream. Other embodiments of the multicaster module 240 perform more complex multicasting-related functionality. For example, the multicaster module 240 may contribute to determinations of whether data is unicast or multicast to one or more subscribers (e.g., using information generated by the gateway accelerator module 250), what modcodes to use, whether data should or should not be sent as a function of data cached at destination subscriber terminals 130, how to handle certain types of encryption, etc.
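The unicast-versus-multicast determination described above can be sketched as a threshold decision over the set of terminals that still lack the object. The threshold value, function name, and terminal identifiers are hypothetical; the patent leaves the decision policy open.

```python
def choose_delivery(requesting_terminals, terminals_with_cached_copy,
                    multicast_threshold=3):
    """Multicast only when enough destination terminals still need the object."""
    need_copy = set(requesting_terminals) - set(terminals_with_cached_copy)
    if len(need_copy) >= multicast_threshold:
        return ("multicast", sorted(need_copy))
    return ("unicast", sorted(need_copy))

# t4 already holds a cached copy, so only t1-t3 need delivery.
print(choose_delivery(["t1", "t2", "t3", "t4"], ["t4"]))
# → ('multicast', ['t1', 't2', 't3'])
```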
[0055] Embodiments of the FAP module 242 implement various FAP-related functions. In one embodiment, the FAP module 242 collects data from multiple components to determine how much network usage to attribute to a particular subscriber. For example, the FAP module 242 may determine how to count upload or download traffic against a subscriber's FAP. In another embodiment, the FAP module 242 dynamically adjusts FAPs according to various network link and/or usage conditions. For example, the FAP module 242 may adjust FAPs to encourage network usage during lower traffic times. In yet another embodiment, the FAP module 242 affects the operation of other components of the SMTS 230 as a function of certain FAP conditions. For example, the FAP module 242 may direct the multicaster module 240 to multicast certain types of data or to prevent certain subscribers from joining certain multicast streams as a function of FAP considerations.
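The dynamic FAP adjustment described above (encouraging usage during lower-traffic times) can be illustrated with a simple accounting rule. The off-peak window and discount factor are assumed policy numbers for the sketch, not values from the patent.

```python
def fap_charge(bytes_used, hour_of_day, off_peak=(0, 6), discount=0.5):
    """Bytes to count against a subscriber's FAP, discounted off-peak."""
    start, end = off_peak
    if start <= hour_of_day < end:
        return int(bytes_used * discount)
    return bytes_used

print(fap_charge(1_000_000, 3))   # 3 a.m., off-peak → 500000
print(fap_charge(1_000_000, 20))  # 8 p.m., peak     → 1000000
```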
[0056] Embodiments of the ACM module 244 implement various ACM functions. For example, the ACM module 244 may track link conditions for certain spot beams, subscribers, etc., for use in dynamically adjusting modulation and/or coding schemes. In some embodiments, the ACM module 244 may help determine which subscribers should be included in which customer groupings or multicast streams as a function of optimizing resources through modcode settings. In certain embodiments, the ACM module 244 implements ACM-aware encoding of data adapted for progressive encoding. For example, MPEG-4 video data may be adapted for progressive encoding in layers (e.g., a base layer and enhancement layers). The ACM module 244 may be configured to set an appropriate modcode separately for each layer to optimize video delivery.
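The per-layer modcode selection for progressively encoded video can be sketched as follows. The modcode table and SNR thresholds are assumed for illustration (loosely modeled on DVB-S2-style modcodes); the key idea is that the base layer targets the worst receiver in the group, while enhancement layers may target better receivers, who then get higher quality while weaker receivers simply drop those layers.

```python
MODCODES = [  # (name, minimum usable SNR in dB), most robust first — assumed values
    ("QPSK-1/2", 1.0),
    ("8PSK-2/3", 6.6),
    ("16APSK-3/4", 10.2),
]

def modcode_for(snr_db):
    """Most spectrally efficient modcode the given link SNR supports."""
    usable = [name for name, min_snr in MODCODES if snr_db >= min_snr]
    return usable[-1] if usable else None

group_snrs = [4.0, 8.0, 12.0]            # per-receiver link conditions
print(modcode_for(min(group_snrs)))      # base layer → 'QPSK-1/2'
print(modcode_for(max(group_snrs)))      # top enhancement layer → '16APSK-3/4'
```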
[0057] When traffic has been processed by the gateway accelerator module 250 and/or the support modules 246, the traffic is passed to the scheduler module 235. Embodiments of the scheduler module 235 are configured to provide various functions relating to scheduling the links of the satellite communication system 100 handled by the gateway 115. For example, the scheduler module 235 may manage link bandwidth by scheduling license grants within a spot beam.
[0058] In some embodiments, functionality of the SMTS 230 involves communication and interaction with a storage area network 222 ("SAN"). Embodiments of the SAN 222 include a gateway cache module 220, which may include any useful type of memory store for various types of functionality of the gateway 115. For example, the gateway cache module 220 may include volatile or non-volatile storage, servers, files, queues, etc. In certain embodiments, the SAN 222 further includes a captive edge server 225, which may be in communication with the gateway cache module 220. In some embodiments, the captive edge server 225 provides functionality similar to that of the third-party edge server 212, including content mirroring. For example, the captive edge server 225 may facilitate different contractual relationships from those of the third-party edge server 212 (e.g., between the gateway 115 provider and various content providers).
[0059] It will be appreciated that the SMTS 230 provides many different types of functionality. For example, embodiments of the SMTS 230 oversee a variety of decoding, interleaving, decryption, and unscrambling techniques. The SMTS 230 may also manage functions applicable to the communication of content downstream through the satellite 105 to one or more subscriber terminals 130. As described more fully below with reference to various embodiments, the SMTS 230 may handle different types of traffic in different ways (e.g., for different use cases of the satellite communication network 100). For example, some use cases involve contractual relationships and/or obligations with third-party content providers to interface with their edge servers (e.g., through the third-party edge server 212), while other use cases involve locally "re-hosting" certain content (e.g., through the captive edge server 225). Further, some use cases handle real-time types of data (e.g., UDP data) differently from non-real-time types of data (e.g., TCP data). Many other types of use cases are possible.
[0060] In certain embodiments, some or all of these downstream communication functions are handled by the gateway transceiver module 260. Embodiments of the gateway transceiver module 260 encode and/or modulate data, using one or more error correction techniques, adaptive encoding techniques, baseband encapsulation, frame creation, etc. (e.g., using various modcodes, lookup tables, etc.). Other functions may also be performed by these components (e.g., by the SMTS 230), including upconverting, amplifying, filtering, tuning, tracking, etc. The gateway transceiver module 260 communicates data to one or more antennae 110 for transmission via the satellite 105 to the subscriber terminals 130.
[0061] FIG. 3 shows a simplified block diagram 300 illustrating an embodiment of a subscriber terminal 130 coupled between the respective subscriber antenna 125 and the CPE 160, according to various embodiments. The subscriber terminal 130 includes a terminal transceiver module 310, data processing modules 315, and a terminal cache module 335-a. Embodiments of the data processing modules 315 include a MAC module 350, a terminal accelerator module 330, and a routing module 320.
[0062] The components may be implemented, in whole or in part, in hardware. Thus, they may comprise one or more Application Specific Integrated Circuits ("ASICs") adapted to perform a subset of the applicable functions in hardware. Alternatively, the functions may be performed by one or more other processing modules (or cores), on one or more integrated circuits. In other embodiments, other types of integrated circuits may be used (e.g., Structured/Platform ASICs, Field Programmable Gate Arrays ("FPGAs"), and other Semi-Custom ICs), which may be programmed. Each may also be implemented, in whole or in part, with instructions embodied in a computer-readable medium, formatted to be executed by one or more general or application specific processors.
[0063] A signal from the subscriber antenna 125 is received by the subscriber terminal 130 at the terminal transceiver module 310. Embodiments of the terminal transceiver module 310 may amplify the signal, acquire the carrier, and/or downconvert the signal. In some embodiments, this functionality is performed by other components (either inside or outside the subscriber terminal 130).

[0064] In some embodiments, data from the terminal transceiver module 310 (e.g., the downconverted signal) is communicated to the data processing modules 315 for processing. For example, data is communicated to the MAC module 350. Embodiments of the MAC module 350 prepare data for communication to other components of, or in communication with, the subscriber terminal 130, including the terminal accelerator module 330, the routing module 320, and/or the CPE 160. For example, the MAC module 350 may modulate, encode, filter, decrypt, and/or otherwise process the data to be compatible with the CPE 160.
[0065] In some embodiments, the MAC module 350 includes a pre-processing module 352. The pre-processing module 352 implements certain functionality for optimizing the other components of the data processing modules 315. In some embodiments, the pre-processing module 352 processes the signal received from the terminal transceiver module by interpreting (e.g., and decoding) modulation and/or coding schemes, interpreting multiplexed data streams, filtering the digitized signal, parsing the digitized signal into various types of information (e.g., by extracting the physical layer header), etc. In other embodiments, the pre-processing module 352 pre-filters traffic to determine which data to route directly to the routing module 320, and which data to route through the terminal accelerator module 330 for further processing.
[0066] Embodiments of the terminal accelerator module 330 provide substantially the same functionality as the gateway accelerator module 250, including various types of applications, WAN/LAN, and/or other acceleration functionality. In one embodiment, the terminal accelerator module 330 implements functionality of AcceleNet™ applications, like interpreting data communicated by the gateway 115 using high payload compression, handling various prefetching functions, parsing scripts to interpret requests, etc. In certain embodiments, these and/or other functions of the terminal accelerator module 330 are provided by a proxy client 332 resident on (e.g., or in communication with) the terminal accelerator module 330. Data from the MAC module 350 and/or the terminal accelerator module 330 may then be routed to one or more CPEs 160 by the routing module 320.
[0067] In some embodiments, the terminal accelerator module 330 includes a terminal prefetcher module 334, a terminal parser module 342, and/or a terminal masker module 340. In various embodiments, the terminal parser module 342, the terminal prefetcher module 334, and the terminal masker module 340 provide the same or similar functionality as the gateway parser module 252, the gateway prefetcher module 254, and the gateway masker module 246, respectively. For example, similar modules in the terminal accelerator module 330 and the gateway accelerator module 250 may work together to implement their respective functions. In other embodiments, the components of the subscriber terminal 130 and the gateway 115 provide different functionality. For example, functionality of the gateway parser module 252 may be asymmetric, such that it would not be desirable or possible to provide the same functionality in the terminal parser module 342. In some embodiments, the terminal accelerator module 330 further includes a prefetch list 336.
[0068] In some embodiments, output from the data processing module 320 and/or the terminal accelerator module 330 is stored in the terminal cache module 335-a. Further, the data processing module 320 and/or the terminal accelerator module 330 may be configured to determine what data should be stored in the terminal cache module 335-a and which data should not (e.g., which data should be passed to the CPE 160). It will be appreciated that the terminal cache module 335-a may include any useful type of memory store for various types of functionality of the subscriber terminal 130. For example, the terminal cache module 335-a may include volatile or non-volatile storage, servers, files, queues, etc.
[0069] In certain embodiments, storage functionality and/or capacity is shared between an integrated (e.g., on-board) terminal cache module 335-a and an extended (e.g., off-board) cache module 335-b. For example, the extended cache module 335-b may be implemented in various ways, including as an attached peripheral device (e.g., a thumb drive, USB hard drive, etc.), a wireless peripheral device (e.g., a wireless hard drive), a networked peripheral device (e.g., a networked server), etc. In some embodiments, the subscriber terminal 130 interfaces with the extended cache module 335-b through one or more ports 338. In one embodiment, functionality of the terminal cache module 335-a is implemented as storage integrated into or in communication with the CPE 160 of FIG. 1.
[0070] Some embodiments of the CPE 160 are standard CPE 160 devices or systems with no specifically tailored hardware or software (e.g., shown as CPE 160-a). Other embodiments of the CPE 160, however, include hardware and/or software modules adapted to optimize or enhance integration of the CPE 160 with the subscriber terminal 130 (e.g., shown as alternate CPE 160-b). For example, the alternate CPE 160-b is shown to include a CPE accelerator module 362, a CPE processor module 366, and a CPE cache module 364. Embodiments of the CPE accelerator module 362 are configured to implement the same, similar, or complementary functionality as the terminal accelerator module 330. For example, the CPE accelerator module 362 may be a software client version of the terminal accelerator module 330. In some embodiments, some or all of the functionality of the data processing modules 315 is implemented by the CPE accelerator module 362 and/or the CPE processor module 366. In these embodiments, it may be possible to reduce the complexity of the subscriber terminal by shifting functionality to the alternate CPE 160-b. Embodiments of the CPE cache module 364 may include any type of data caching components in or in communication with the alternate CPE 160-b (e.g., a computer hard drive, a digital video recorder ("DVR"), etc.). In some embodiments, the CPE cache module 364 is in communication with the extended cache module 335-b, for example, via one or more ports 338-b.
[0071] In certain embodiments, the subscriber terminal 130 is configured to transmit data back to the gateway 115. Embodiments of the data processing modules 315 and the terminal transceiver module 310 are configured to provide functionality for communicating information back through the satellite communication system 100 (e.g., for directing provision of services). For example, information about what is stored in the terminal cache module 335-a or the CPE cache module 364 may be sent back to the gateway 115 for limiting repetitious file transfers, as described more fully below.
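One way the cache-contents reporting described above might work is for the terminal to send the gateway a manifest of content hashes, which the gateway consults before re-sending an object body. This is a hypothetical sketch: the function names, hash choice, and manifest format are assumptions, not details from the patent.

```python
import hashlib

def manifest(cache):
    """Set of content hashes for everything the terminal currently caches."""
    return {hashlib.sha256(body).hexdigest() for body in cache.values()}

def must_send(body, terminal_manifest):
    """Gateway-side check: skip bodies the terminal already holds."""
    return hashlib.sha256(body).hexdigest() not in terminal_manifest

terminal_cache = {"/logo.png": b"PNG-bytes"}
m = manifest(terminal_cache)
print(must_send(b"PNG-bytes", m))   # already cached → False
print(must_send(b"new-bytes", m))   # → True
```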
[0072] It will be appreciated that the satellite communications system 100 may be used to provide different types of communication services to subscribers. For example, the satellite communications system 100 may provide content from the network 120 to a subscriber's CPE 160, including Internet content, broadcast television and radio content, on-demand content, voice-over-Internet-protocol ("VoIP") content, and/or any other type of desired content. It will be further appreciated that this content may be communicated to subscribers in different ways, including through unicast, multicast, broadcast, and/or other communications.
[0073] FIG. 4 provides a schematic illustration of one embodiment of a computer system 400 that can perform the methods of the invention, as described herein, and/or can function as, for example, gateway 115, subscriber terminal 130, etc. It should be noted that Fig. 4 is meant only to provide a generalized illustration of various components, any or all of which may be utilized as appropriate. Fig. 4, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
[0074] The computer system 400 is shown comprising hardware elements that can be electrically coupled via a bus 405 (or may otherwise be in communication, as appropriate). The hardware elements can include one or more processors 410, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration chips, and/or the like); one or more input devices 415, which can include without limitation a mouse, a keyboard and/or the like; and one or more output devices 420, which can include without limitation a display device, a printer and/or the like.
[0075] The computer system 400 may further include (and/or be in communication with) one or more storage devices 425, which can comprise, without limitation, local and/or network accessible storage and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable and/or the like. The computer system 400 might also include a communications subsystem 430, which can include without limitation a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, etc.), and/or the like. The communications subsystem 430 may permit data to be exchanged with a network (such as the network described below, to name one example), and/or any other devices described herein. In many embodiments, the computer system 400 will further comprise a working memory 435, which can include a RAM or ROM device, as described above.
[0076] The computer system 400 also can comprise software elements, shown as being currently located within the working memory 435, including an operating system 440 and/or other code, such as one or more application programs 445, which may comprise computer programs of the invention, and/or may be designed to implement methods of the invention and/or configure systems of the invention, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer). A set of these instructions and/or codes might be stored on a computer-readable storage medium, such as the storage device(s) 425 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 400. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer system 400 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 400 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
[0077] It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.
[0078] In one aspect, the invention employs a computer system (such as the computer system 400) to perform methods of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer system 400 in response to processor 410 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 440 and/or other code, such as an application program 445) contained in the working memory 435. Such instructions may be read into the working memory 435 from another machine-readable medium, such as one or more of the storage device(s) 425. Merely by way of example, execution of the sequences of instructions contained in the working memory 435 might cause the processor(s) 410 to perform one or more procedures of the methods described herein.
[0079] The terms "machine-readable medium" and "computer readable medium", as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer system 400, various machine-readable media might be involved in providing instructions/code to processor(s) 410 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as the storage device(s) 425. Volatile media includes, without limitation, dynamic memory, such as the working memory 435. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 405, as well as the various components of the communication subsystem 430 (and/or the media by which the communications subsystem 430 provides communication with other devices). Hence, transmission media can also take the form of waves (including without limitation radio, acoustic and/or light waves, such as those generated during radio-wave and infra-red data communications).
[0080] Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
[0081] Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 410 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 400. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
[0082] The communications subsystem 430 (and/or components thereof) generally will receive the signals, and the bus 405 then might carry the signals (and/or the data, instructions, etc., carried by the signals) to the working memory 435, from which the processor(s) 410 retrieves and executes the instructions. The instructions received by the working memory 435 may optionally be stored on a storage device 425 either before or after execution by the processor(s) 410.
[0083] A set of embodiments comprises systems for implementing the web optimization techniques described herein. Merely by way of example, FIG. 5 illustrates a schematic diagram of a system 500 that can be used in accordance with one set of embodiments. The system 500 can include one or more user computers 505. The user computers 505 can be general purpose personal computers (including, merely by way of example, personal computers and/or laptop computers running any appropriate flavor of Microsoft Corp.'s Windows™ (e.g., Vista™) and/or Apple Corp.'s Macintosh™ operating systems) and/or workstation computers running any of a variety of commercially available UNIX™ or UNIX-like operating systems. These user computers 505 can also have any of a variety of applications, including one or more applications configured to perform methods of the invention, as well as one or more office applications, database client and/or server applications, and web browser applications. Alternatively, the user computers 505 can be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant (PDA), capable of communicating via a network (e.g., the network 510 described below) and/or displaying and navigating web pages or other types of electronic documents. Although the exemplary system 500 is shown with three user computers 505, any number of user computers can be supported.
[0084] Certain embodiments of the invention operate in a networked environment, which can include a network 510. The network 510 can be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially available protocols, including without limitation TCP/IP, SNA, IPX, AppleTalk, and the like. Merely by way of example, the network 510 can be a local area network ("LAN"), including without limitation an Ethernet network, a Token-Ring network and/or the like; a wide-area network (WAN); a virtual network, including without limitation a virtual private network ("VPN"); the Internet; an intranet; an extranet; a public switched telephone network ("PSTN"); an infra-red network; a wireless network, including without limitation a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks.

[0085] Embodiments of the invention can include one or more server computers 515. Each of the server computers 515 may be configured with an operating system, including without limitation any of those discussed above, as well as any commercially (or freely) available server operating systems. Each of the servers 515 may also be running one or more applications, which can be configured to provide services to one or more clients 505 and/or other servers 515.
[0086] Merely by way of example, one of the servers 515 may be a web server, which can be used, merely by way of example, to process requests for web pages or other electronic documents from user computers 505. The web server can also run a variety of server applications, including HTTP servers, FTP servers, CGI servers, database servers, Java servers, and the like. In some embodiments of the invention, the web server may be configured to serve web pages that can be operated within a web browser on one or more of the user computers 505 to perform methods of the invention.
[0087] The server computers 515, in some embodiments, might include one or more application servers, which can include one or more applications accessible by a client running on one or more of the client computers 505 and/or other servers 515. Merely by way of example, the server(s) 515 can be one or more general purpose computers capable of executing programs or scripts in response to the user computers 505 and/or other servers 515, including without limitation web applications (which might, in some cases, be configured to perform methods of the invention). Merely by way of example, a web application can be implemented as one or more scripts or programs written in any suitable programming language, such as Java™, C, C#™, or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming/scripting languages. The application server(s) can also include database servers, including without limitation those commercially available from Oracle™, Microsoft™, Sybase™, IBM™ and the like, which can process requests from clients (including, depending on the configuration, database clients, API clients, web browsers, etc.) running on a user computer 505 and/or another server 515. In some embodiments, an application server can create web pages dynamically for displaying the information in accordance with embodiments of the invention. Data provided by an application server may be formatted as web pages (comprising HTML, Javascript, etc., for example) and/or may be forwarded to a user computer 505 via a web server (as described above, for example). Similarly, a web server might receive web page requests and/or input data from a user computer 505 and/or forward the web page requests and/or input data to an application server. In some cases, a web server may be integrated with an application server.
[0088] In accordance with further embodiments, one or more servers 515 can function as a file server and/or can include one or more of the files (e.g., application code, data files, etc.) necessary to implement methods of the invention incorporated by an application running on a user computer 505 and/or another server 515. Alternatively, as those skilled in the art will appreciate, a file server can include all necessary files, allowing such an application to be invoked remotely by a user computer 505 and/or server 515. It should be noted that the functions described with respect to various servers herein (e.g., application server, database server, web server, file server, etc.) can be performed by a single server and/or a plurality of specialized servers, depending on implementation-specific needs and parameters.
[0089] In certain embodiments, the system can include one or more databases 520. The location of the database(s) 520 is discretionary: merely by way of example, a database 520a might reside on a storage medium local to (and/or resident in) a server 515a (and/or a user computer 505). Alternatively, a database 520b can be remote from any or all of the computers 505, 515, so long as the database can be in communication (e.g., via the network 510) with one or more of these. In a particular set of embodiments, a database 520 can reside in a storage-area network ("SAN") familiar to those skilled in the art. (Likewise, any necessary files for performing the functions attributed to the computers 505, 515 can be stored locally on the respective computer and/or remotely, as appropriate.) In one set of embodiments, the database 520 can be a relational database, such as an Oracle™ database, that is adapted to store, update, and retrieve data in response to SQL-formatted commands. The database might be controlled and/or maintained by a database server, as described above, for example.
[0090] Embodiments include methods, systems, and devices that implement various techniques for optimizing web performance over satellite communication links. It will be appreciated that other components and systems may be used to provide functionality of the various embodiments described herein. As such, descriptions of various embodiments in the context of components and functionality of Figs. 1-5 are intended only for clarity, and should not be construed as limiting the scope of the invention.

[0091] For example, embodiments of the invention may be used to address certain cold access metrics. Cold access (e.g., a first visit to a website with a clear cache) to popular websites is a well-established metric for user experience on a public network, as it is the operation in which network performance is most clearly and frequently apparent to the end user. Consequently, improvements in this cold access metric can play a role in driving consumer purchasing decisions, such as in selecting network access providers or deciding whether to use an acceleration service. There are a number of factors that may contribute to the cold access metric.
[0092] Some factors that may contribute to the cold access metric relate to the number of round trip times ("RTTs") needed to communicate content between elements of the satellite systems (e.g., between the gateway 115 and the subscriber terminal 130 of the satellite communication system 100 of FIG. 1). Because of the large distance that must be traveled to and from the satellite 105, some data latency is inherent in any satellite communication system 100. This latency may be increased with each RTT needed to fulfill a request for data. As such, reducing the number of RTTs needed to communicate information over the satellite communication system 100 may significantly reduce the data transfer times (e.g., download times) over the communication links.
[0093] Other factors that may contribute to the cold access metric relate to delays caused by waiting for content from upstream servers. For example, the gateway prefetcher module 254 and/or the terminal prefetcher module 334 may be capable of determining from a website request how to prefetch much of the content for the website (e.g., through intelligent script parsing). However, receipt of the prefetched content may be delayed while the gateway 115 (e.g., acting as a proxy server) waits for responses from origin servers serving the website content. These delays may substantially offset reductions in delay provided by the prefetching functionality of the gateway prefetcher module 254 and/or the terminal prefetcher module 334.
[0094] Embodiments of the invention implement various types of functionality to address these and other factors to optimize web access performance. Some embodiments use acceleration functionality like advanced prefetching and compression (e.g., through the gateway accelerator module 250 and/or the terminal accelerator module 330) to reduce the number of RTTs. Other embodiments use uniform resource locator ("URL") anti-aliasing and/or cycle caching functionality to enhance performance of the satellite communication system 100 without substantially interfering with the commercial objectives of the content providers. Still other embodiments provide improved parsing functionality to optimize prefetching results.
[0095] URL Masking Embodiments
[0096] According to some embodiments, a URL masking algorithm is provided to allow prefetchers and caches to work even when the URLs are constructed using scripts intended to block such behavior. For example, certain cache-busting techniques generate portions of the URL string, using Java scripts, to include unique values (e.g., random numbers, timestamps, etc.). As such, prefetchers may be fooled into thinking objects at the URL have not yet been prefetched, when in fact they have. Embodiments mask these cache-busting portions of the URL string to allow the prefetcher to recognize the request as a previously prefetched URL.
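As a rough illustration of the masking idea, the sketch below builds a per-byte mask for a URL by marking every byte after a known cache-busting query parameter as random. The parameter name "rand=", the function name, and the mask representation (one byte per URL byte, 0 for a normal byte, 1 for a random byte) are assumptions for illustration; a real parser would derive the masked span from analysis of the script that constructs the URL.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: build a per-byte mask for a URL, where mask[i] == 0
 * marks a normal byte and mask[i] == 1 marks an effectively random byte.
 * The "random" span here is simply everything after an assumed cache-busting
 * parameter name; real embodiments would derive it from script parsing. */
void buildMask(const char* url, const char* bustParam, char* mask)
{
    size_t len = strlen(url);
    memset(mask, 0, len);
    const char* p = strstr(url, bustParam);
    if (p != NULL)
        for (size_t i = (size_t)(p - url) + strlen(bustParam); i < len; ++i)
            mask[i] = 1;   /* bytes after the parameter vary per request */
}
```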
[0097] According to other embodiments, cache cycling is used to issue a fresh request to the content provider for website content each time the proxy server serves a request from cached data. For example, URL masking may allow a prefetcher to operate in the context of a cache-busting algorithm. Using prefetched content may reduce the apparent number of times the URL is requested, which may reduce advertising revenue and other metrics based on the number of requests. Cache cycling embodiments maintain the request metrics while allowing optimal prefetching in the face of cache-busting techniques.
[0098] According to other embodiments, a number of techniques are provided for optimizing prefetcher functionality. In one embodiment, an accumulator is provided for optimizing performance of an accelerator abort system. Chunked content (e.g., in HTTP chunked mode) is accumulated until enough data is available to make an abort decision. In another embodiment, socket mapping architectures are adjusted to allow prefetching of content copies for URLs requested multiple times on the same page. In yet another embodiment, persistent storage is adapted to cache prefetched, but unused data, and to provide access to the data to avoid subsequent redundant prefetching. In still another embodiment, DNS transparent proxy and prefetch are integrated with HTTP transparent proxy and prefetch, so as to piggyback DNS information with HTTP frames. In even another embodiment, prefetching is provided for the DNS associated with all hostnames called in Java scripts to reduce the number of requests needed to the DNS server. And in another embodiment, delivery of objects is prioritized according to browser rendering characteristics. For example, data is serialized back to a subscriber's browser so as to prioritize objects needing further parsing or having valuable information with respect to rendering.
[0099] Public web sites may deploy Java scripts that make each request for an object appear with a unique URL. For example, this technique allows cycling of ad content and also prevents caches from interfering with the accounting of site accesses. These so-called "cache-busting" techniques may limit prefetching functionality (e.g., functionality of the gateway prefetcher module 254 and/or the terminal prefetcher module 334), as the URL prefetched on the proxy server will often not match the one from the browser. For example, to protect their commercial interests with respect to delivery and accounting of advertising content, commercial websites employ a number of cache-busting techniques.
[0100] One illustrative cache-busting technique uses functions, such as random number generators and millisecond timestamps, to produce unique values each time they are executed. These unique values may then be used as part of a URL to generate unique URLs with each subsequent request for the same website. For example, an illustrative Java script for generating a URL is as follows:

if (cacheBust)
{
    var cacheStamp = new Date();
    var cacheBuster = cacheStamp.getTime();
    xmlURL = 'http://sports.myNetwork.com/login/loggedIn?rand=' + cacheBuster;
}
[0101] The time string appended to the URL is an integer with millisecond precision, so that no two calls to this routine may ever result in the same URL string. As such, with each subsequent call to the URL, a parser (e.g., the terminal parser module 342) may parse the request as looking for a new (i.e., not cached) set of content, causing the terminal prefetcher module 334 to direct multiple sequential accesses from content servers (e.g., via the gateway prefetcher module 254). It will be appreciated that each subsequent request for the same content may necessitate additional RTTs, adding latency to data transfers.

[0102] Turning now to FIG. 6, which illustrates a system for optimizing transfer of content from the Internet to a web browser. In one embodiment, the system may include a user system 602, a proxy client 612 and a proxy server 632. The user system may include a client graphical user interface (GUI) 610. Client GUI 610 may allow a user to configure performance aspects of system 600. For example, the user may adjust the compression parameters and/or algorithms, content filters (e.g., blocking illicit websites), and enable or disable various features used by system 600. In one embodiment, some of the features may include network diagnostics, error reporting, as well as controlling, for example, prefetch response abort 642. Such control may include adding and/or removing pages (i.e., URLs) to or from whitelist 648 and/or blacklist 649.
[0103] In one embodiment, the user selects a uniform resource locator (URL) address which directs web browser 606 (e.g., Internet Explorer®, Firefox®, Netscape Navigator®, etc.) to a website (e.g., cnn.com, google.com, yahoo.com, etc.). In a further embodiment, web browser 606 may check browser cache 604 to determine whether the website associated with the selected URL is located within browser cache 604. If the website is located within browser cache 604, the amount of time the website has been in the cache is checked to determine if the cached website is "fresh" (i.e., new) enough to use. For example, the amount of time that a website may be considered fresh may be 5 minutes; however, other time limits may be used. Consequently, if the website has been cached and the website is considered fresh, then web browser 606 renders the cached page. However, if the website has either not been cached or the cached webpage is not fresh, web browser 606 sends a request to the Internet for the website.
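The freshness test described above amounts to a simple age comparison, sketched below. The five-minute threshold follows the example in the text; the function name is hypothetical.

```c
#include <stdbool.h>
#include <time.h>

/* Minimal sketch of the browser-cache freshness rule: a cached page is
 * usable only while its age is at or below a configurable limit (300
 * seconds, i.e. five minutes, in the text's example). */
bool isFresh(time_t cachedAt, time_t now, long maxAgeSeconds)
{
    return difftime(now, cachedAt) <= (double)maxAgeSeconds;
}
```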
[0104] In one embodiment, redirector 608 intercepts the request sent from web browser 606. Redirector 608 instead sends the request through a local bus 605 to proxy client 612. In some embodiments, proxy client 612 may be implemented as a software application running on user system 602. In an alternative embodiment, proxy client 612 may be implemented on a separate computer system and is connected to user system 602 via a high speed/low latency link (e.g., a branch office LAN subnet, etc.). In one embodiment, proxy client 612 includes a request parser 616. Request parser 616 may check cache optimizer 614 to determine if a cached copy of the requested website may still be able to be used. Cache optimizer 614 is in communication with browser cache 604 in order to have access to cached websites. Cache optimizer 614 is able to access browser cache 604 without creating a redundant copy of the cached websites, thus requiring less storage space.
[0105] According to one embodiment, cache optimizer 614 implements more effective algorithms to determine whether a cached website is fresh. In one embodiment, cache optimizer 614 may implement the cache expiration algorithms from HTTP v1.1 (i.e., RFC 2616), which may not be natively supported in browser 606. For example, browser cache 604 may inappropriately consider a cached website as too old to use; however, cache optimizer 614 may still be able to use the cached website. More efficient use of cached websites can improve browsing efficiency by reducing the number of Internet accesses.
[0106] In one embodiment, if the requested website is not able to be accessed from the cached websites, request parser 616 checks prefetch manager 620 to determine if the requested website has been prefetched. Prefetching occurs when the accelerator requests an item from the website prior to receiving a request from web browser 606. Prefetching can potentially save round-trips of data access from user system 602 to the Internet.
[0107] In a further embodiment, if the requested website has not been prefetched, then request parser 616 forwards the request to a request encoder 618. Request encoder 618 encodes the request into a compressed version of the request using one of many possible data compression algorithms. For example, these algorithms may employ a coding dictionary 622 to store strings so that data from previous web objects can be used to compress data from new pages. Accordingly, where the request for the website is 550 bytes in total, the encoded request may be as small as 50 bytes. This level of compression can save bandwidth on a connection, such as high latency link 630. In one embodiment, high latency link 630 may be a wireless link, a cellular link, a satellite link, a dial-up link, etc.
[0108] In one embodiment, after request encoder 618 generates an encoded version of the request, the encoded request is forwarded to a protocol 628. In one embodiment, protocol 628 is Intelligent Compression Technology's® (ICT) transport protocol (ITP). Nonetheless, other protocols may be used, such as the standard transmission control protocol (TCP). In one embodiment, ITP maintains a persistent connection with proxy server 632. The persistent connection between proxy client 612 and proxy server 632 enables system 600 to eliminate the inefficiencies and overhead costs associated with creating a new connection for each request.
[0109] In one embodiment, the encoded request is forwarded from protocol 628 to request decoder 636. Request decoder 636 applies decoding appropriate for the encoding performed by request encoder 618. In one embodiment, this process utilizes a coding dictionary 638 in order to translate the encoded request back into a standard format which can be accessed by the destination website. Furthermore, if the HTTP request includes a cookie (or other special instructions), such as a "referred by" or type of encoding accepted, information about the cookie or instructions may be stored in a cookie cache 655. Request decoder 636 then transmits the decoded request to the destination website over a low latency link 656. Low latency link 656 may be, for example, a cable modem connection, a digital subscriber line (DSL) connection, a T1 connection, a fiber optic connection, etc.
[0110] In response to the request, a response parser 644 receives a response from the requested website. In one embodiment, this response may include an attachment, such as an image and/or text file. Some types of attachments, such as HTML, XML, CSS, or Java Scripts, may include references to other "in-line" objects that may be needed to render a requested web page. In one embodiment, when response parser 644 detects an attachment type that may contain such references to "in-line" objects, response parser 644 may forward the objects to a prefetch scanner 646.
[0111] In one embodiment, prefetch scanner 646 scans the attached file and identifies URLs of in-line objects that may be candidates for prefetching. For example, candidates may be identified by HTML syntax, such as the token "img src=". In addition, objects that may be needed for the web page may also be specified in Java scripts that appear within the HTML or CSS page or within a separate Java script file. In one embodiment, the identified candidates are added to a candidate list.
[0112] In one embodiment, for the candidate URLs prefetch scanner 646 may notify prefetch abort 642 of the context in which the object was identified, such as the type of object in which it was found and/or the syntax in which the URL occurred. This information may be used by prefetch abort 642 to determine the probability that the URL will actually be requested by browser 606.

[0113] According to a further embodiment, the candidate list is forwarded to whitelist 648 and blacklist 649. Whitelist 648 and blacklist 649 may be used to track which URLs should be allowed to be prefetched. For example, based on the host (i.e., the server that is supplying the URL) or the file type (e.g., active server page (ASP) files), a URL may be excluded from prefetching. Accordingly, whitelist 648 and blacklist 649 control prefetching behavior by indicating which URLs on the candidate list should or should not be prefetched. In many instances, prefetching may not work with certain webpages/file types. In addition to ASP files, webpages which include fields or cookies may have problems with prefetching.
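A minimal sketch of the whitelist/blacklist filtering step might look as follows. The precedence (whitelist first, then blacklist, then a file-type check), the substring matching, and the function name are assumptions for illustration, not the patent's specified algorithm.

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of candidate filtering: whitelisted URLs are always prefetched,
 * blacklisted URLs never are, and otherwise problematic file types (ASP
 * files, per the text) are skipped. Precedence here is an assumption. */
bool shouldPrefetch(const char* url,
                    const char** whitelist, int nWhite,
                    const char** blacklist, int nBlack)
{
    for (int i = 0; i < nWhite; ++i)
        if (strstr(url, whitelist[i]) != NULL)
            return true;                       /* whitelist: allow */
    for (int i = 0; i < nBlack; ++i)
        if (strstr(url, blacklist[i]) != NULL)
            return false;                      /* blacklist: block */
    return strstr(url, ".asp") == NULL;        /* skip ASP files by default */
}
```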
[0114] In one embodiment, once the candidate list has been passed through whitelist 648 and blacklist 649, a modified candidate list is generated, and then the list is forwarded to a client cache model 650. The client cache model 650 attempts to model which items from the list will be included in browser cache 604. As such, those items are removed from the modified candidate list. Subsequently, the updated modified candidate list is forwarded to a request synthesizer 654 which creates an HTTP request in order to prefetch each item in the updated modified candidate list. The HTTP request header may include cookies and/or other instructions appropriate to the web site and/or to browser 606's preferences using information obtained from cookie model 652. The prefetch HTTP requests may then be transmitted through low latency link 656 to the corresponding website.
[0115] In one embodiment, response parser 644 receives a prefetch response from the website and accesses a prefetch response abort 642. Prefetch response abort 642 is configured to determine whether the prefetched item is worth sending to user system 602. Prefetch response abort 642 bases its decision whether to abort a prefetch on a variety of factors, which are discussed below in more detail.
[0116] If the prefetch is not aborted, response parser 644 forwards the response to response encoder 640. Response encoder 640 accesses coding dictionary 638 in order to encode the prefetched response. Response encoder 640 then forwards the encoded response through protocol 628 over high latency link 630 and then to response decoder 626. Response decoder 626 decodes the response and forwards it to response manager 624. In one embodiment, if the response is a prefetched response then response manager 624 creates a prefetch socket to receive the prefetched item as it is downloaded.

[0117] Response manager 624 transmits the response over local bus 605 to redirector 608. Redirector 608 then forwards the response to web browser 606 which renders the content of the response.
[0118] In some embodiments (e.g., as shown in FIGS. 2 and 3), the terminal accelerator module 330 includes a terminal masker module 340 and/or the gateway accelerator module 250 includes a gateway masker module 246, adapted to implement URL masking functionality. Using URL masking functionality may allow the gateway prefetcher module 254 and/or the terminal prefetcher module 334 to operate in the context of some cache-busting techniques.
[0119] Turning now to FIG. 7A, which illustrates one embodiment of gateway accelerator module 250. In one embodiment, parser module 252 may identify an embedded URL string within a webpage, Java Script, etc. Further, parser module 252 may then analyze the URL string to determine if a cache-busting portion (or random portion) exists in the URL string. However, it should be noted that the random portion may not have anything to do with cache busting, and may be placed in the URL string for utility value. For example, an advertisement server may embed or append a string of random characters in the URL string. Such a random string of characters may be used to cycle through ads to be presented to the browser. For example, random number 1 may produce an ad for company 1, random number 2 may produce an ad for company 2, and so forth.
[0120] The "random number" (or embedded string) may be generated in a variety of ways. For example, a rand() method may be called to generate a binary number. Then an ASCII string may be generated from the binary number, which is then appended to or embedded in the URL. Alternatively, a timestamp may be used to produce the "random" portion of the URL string. For example, the timestamp may be extended out several digits, converted into an ASCII string, and appended to or embedded within the URL string.
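The construction just described can be sketched in a few lines; the "rand=" parameter name and the function name are assumptions for illustration, and the unique value stands in for either a random number or an extended timestamp.

```c
#include <stdio.h>

/* Sketch of cache-buster construction: a unique numeric value (random number
 * or millisecond-style timestamp) is rendered as an ASCII string and appended
 * to the URL as a query parameter. */
void appendCacheBuster(char* out, size_t outSize,
                       const char* baseUrl, long long uniqueValue)
{
    snprintf(out, outSize, "%s?rand=%lld", baseUrl, uniqueValue);
}
```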
[0121] Once the cache-busting portion of the URL string has been identified, the URL string may be passed to masker module 256. In one embodiment, the masker produces a mask that identifies which bytes in the URL string are effectively random. This may be implemented, for example, as a string of the same length as the URL where a byte is 0 if it is a normal byte and 1 if it is random. In this case, the mask can be used to exclude the random bytes in deciding whether two URLs match, such as in the C-language method:

bool isMatch(int urlLength, char* requestUrl, char* prefetchedUrl, char* mask)
{
    for (int i = 0; i < urlLength; ++i)
        if ((requestUrl[i] != prefetchedUrl[i]) && !mask[i])
            return false;
    return true;
}
This mask can be sent to the client along with the URL string for the item that has been prefetched.
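For a concrete sense of how the mask is applied, the sketch below repeats the isMatch() routine from the text (so the example is self-contained) and compares URLs that differ only in their masked random digits. The sample URLs and mask contents are hypothetical.

```c
#include <stdbool.h>

/* isMatch() as given in the text: mask[i] != 0 excludes byte i from the
 * comparison, so two URLs that differ only in masked bytes are treated as
 * the same URL. */
bool isMatch(int urlLength, char* requestUrl, char* prefetchedUrl, char* mask)
{
    for (int i = 0; i < urlLength; ++i)
        if ((requestUrl[i] != prefetchedUrl[i]) && !mask[i])
            return false;
    return true;
}
```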
[0122] After masker module 256 has masked out the random portion of the URL string, the masked URL string is passed to prefetcher module 254. In one embodiment, prefetcher module 254 may compare the masked URL string with URL strings of objects that have already been prefetched by prefetcher module 254. If a match is found, then prefetcher module 254 may notify prefetcher module 334 in terminal accelerator module 330 (FIG. 7B) that the object has already been prefetched, and not to prefetch it again, thus preventing sending unnecessary bytes across the link. Accordingly, the previously prefetched version of the object associated with the masked URL string is rendered in the browser instead of prefetching a new object.
[0123] FIG. 8 shows an illustrative flow diagram of a method 800 for implementing URL masking functionality, according to various embodiments of the invention. The method 800 begins at block 804 by identifying a URL to be prefetched. At block 808, a portion of the URL string is identified as employing a cache-busting technique. A mask is then set, at block 812, to mask the cache-busting portion of the URL string. The URL string may be sent at block 816 from a proxy server to a proxy client. Further, at block 820, the mask may be sent from the proxy server to the proxy client. In certain embodiments, the proxy server is implemented in the gateway 115 (e.g., the proxy server 255 of FIG. 2) and the proxy client is implemented in the subscriber terminal 130 (e.g., the proxy client 332 of FIG. 3). The gateway 115 sends a list of URLs being prefetched to the subscriber terminal 130, where prefetched content may be cached (e.g., in the terminal cache module 335).
[0124] At block 824, the proxy client may compare intercepted browser requests with the list of URLs to decide whether a request can be served via a prefetched object. As part of this comparison in block 824, the proxy client applies the mask to the requested URL and/or the prefetched URL list. In this way, the proxy client is able to determine in block 828 whether the requested content is, in fact, from a non-prefetched URL; or if it is actually from the same URL employing a cache-busting technique.
[0125] If the only difference is in the masked portion of the request (e.g., the masked URL request matches the masked prefetched URL), the requested object(s) may be served in block 832 using prefetched (e.g., locally cached) content. Otherwise, the requested object(s) may be served in block 836 by retrieving the objects from other locations. For example, the requested object(s) may be retrieved from the gateway cache module 220, from a content server over the network 120, etc.
[0126] For example, a URL is identified by the gateway parser module 252 by means of parsing a Java script embedded in a web object with certain file extensions (e.g., HTML, XML, CSS, JS, or other protocols used within HTTP). Identifying the URL may involve constructing the string using various Java functions which may be defined in the web object or may be part of a library known to the parser. When constructing the string, embodiments of the gateway parser module 252 look for calls to library functions that may be used to make URLs unique each time they are constructed so as to prevent caches from fulfilling the request from copies of previously downloaded objects (e.g., known as "cache-busting"). Examples of cache-busting functions include random number generators or timers with millisecond resolution. If the parser determines that part of the URL is being constructed with characters derived from these cache-busting functions, embodiments of the gateway masker module 256 generate a mask as a function of the URL string to mask the millisecond timestamp portion of the URL string. The prefetcher issues a request to the web server for the URL that it constructs, and the URL string and mask information are sent from the gateway 115 (e.g., proxy server 255) to the subscriber terminal 130 (e.g., proxy client 332).
[0127] In some embodiments, the subscriber terminal 130 receives the URL and mask at the same time as it receives the object that it was embedded in, such as the HTML page. The terminal accelerator module 330 places the URL string and mask onto a "prefetch list" of objects that are in process of being prefetched. When the accelerator receives a subsequent HTTP GET request, the parser module 342 identifies the URL being requested and asks the prefetch list 336 whether this URL is being prefetched. The prefetch list 336 iterates through all entries to see if the request is a match. In order to determine if it is a match, calls are made to the masker module 340, supplying the request URL, the prefetched URL being tested, and the mask associated with the prefetched URL. The masker module 340 may perform a string comparison, excluding characters as a function of the mask. Embodiments return a Boolean value indicating whether the masked versions of the requested and prefetched URLs are a match. If so, the response to the CPE 160 may be filled using the prefetched object. Otherwise, the subscriber terminal 130 may request the objects from the gateway 115 (e.g., as proxy server 255) over the satellite communication system 100.
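The prefetch-list lookup just described might be sketched as a linear scan over entries, where each entry carries the prefetched URL and its mask. The entry layout and function name are assumptions for illustration.

```c
#include <stdbool.h>

/* Hypothetical prefetch-list entry: the prefetched URL, its byte mask
 * (nonzero marks a random byte), and the string length. */
struct PrefetchEntry { const char* url; const char* mask; int len; };

/* Return the index of the matching entry, or -1 if the request is not being
 * prefetched. Two URLs match when they agree on every non-masked byte. */
int findPrefetched(const char* requestUrl, int reqLen,
                   const struct PrefetchEntry* list, int count)
{
    for (int e = 0; e < count; ++e) {
        if (list[e].len != reqLen)
            continue;
        bool match = true;
        for (int i = 0; i < reqLen; ++i)
            if (requestUrl[i] != list[e].url[i] && !list[e].mask[i]) {
                match = false;
                break;
            }
        if (match)
            return e;          /* serve from prefetched object e */
    }
    return -1;                 /* not prefetched; request upstream */
}
```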
[0128] It will be appreciated that embodiments of the URL masking functionality may be applied both to prefetched content (e.g., to see if a prefetched object matches a client request) and to the use of cached content on the gateway cache module 220 and/or the terminal cache module 335. Further, it will be appreciated that URL masking functionality may allow prefetchers and caches to work even when the URLs are constructed using scripts intended to block such behavior. By facilitating the use of prefetching (e.g., by the gateway prefetcher module 254 and/or the terminal prefetcher module 334) and local caching (e.g., at the terminal cache module 335), the number of RTTs may be reduced. Local caching may also reduce some server response delays that affect communications over the satellite communication system 100.
[0129] Referring next to FIG. 9, which illustrates a method 900 for implementing URL masking according to embodiments of the present invention. At process block 904, Java script included in a requested page may be parsed. During the parsing of the requested page, a URL string within the Java script may be identified and assembled (process block 908). Furthermore, the process of generating the identified URL string may be analyzed (process block 912).
[0130] In one embodiment, a determination may be made as to whether portions of the URL string were randomly generated so as to have a meaningless value (decision block 916). For example, the portion of the URL string may be a randomly generated number, a timestamp, etc. If no random portion of the URL is found, then parsing of the Java script continues. Otherwise, at process block 920, the random or meaningless portion of the URL string is masked out of the URL string.

[0131] Then, at process block 924, the masked version of the URL may be checked against prefetched URL strings and/or cached URL strings to determine a match. At decision block 928, it is determined if there is a match, and at process block 932, the matching prefetched or cached object associated with the determined URL string is presented to the terminal. Accordingly, a cached or prefetched object is able to be used where it otherwise would have been classified as a cache miss or a non-prefetched object.
[0132] FIG. 10 illustrates one embodiment of a system 1000 according to aspects of the present invention. In one embodiment, system 1000 may include a client 1005. Client 1005 may be configured to use a web browser to access various Internet and/or intranet web pages, or to access files, emails, etc. from various types of content servers. In one embodiment, client 1005 may include a proxy client 1010 which may intercept the traffic from the browser. Client 1005 may be configured to communicate over a high latency link 1015 with proxy server 1020 using an optimized transport protocol.
[0133] In one embodiment, proxy server 1020 may identify, based on a request received from proxy client 1010 via client 1005's browser, objects that may be able to be prefetched. Furthermore, proxy server 1020 may store all of the caching instructions for all objects downloaded by proxy server 1020 on behalf of client 1005.
[0134] In one embodiment, proxy server 1020 may send a request over a low latency link 1025 to a content server 1030. In one embodiment, low latency link 1025 may be a satellite link, a broadband link, a cable link, etc. In a further embodiment, the request may request the caching instructions for the object that may potentially be prefetched from the web server. Proxy server 1020 may then analyze the caching instructions for the object to determine if the object has been modified since it was last prefetched. Accordingly, if the object has been modified, then proxy server 1020 would download the updated version of the object from content server 1030. Otherwise, if the previously prefetched object is still valid, no prefetching is needed. Thus, proxy server 1020 can simply use the previously prefetched object.
[0135] A number of variations and modifications of the disclosed embodiments can also be used. For example, content server 1030 may be a file server, an FTP server, etc. and various web browsers may be used by client 1005. Furthermore, the cache model may be modified to be stored, for example, at proxy client 1010. As such, proxy client 1010 may be configured to maintain the caching instructions associated with each prefetched object. In a further embodiment, proxy client 1010 may store cached (or prefetched) objects for future access by client 1005, or in an alternative embodiment, to be accessed by other clients and/or servers connected with client 1005. Consequently, any component in FIG. 10 may be configured to store prefetched (or cached) objects and/or caching instructions.
[0136] In an additional embodiment, the cache model may be implemented at a separate location from client 1005 and/or client proxy 1010. For example, the cache model may be located at a remote server, database, storage device, remote network, etc. In one embodiment, cached objects may be stored remotely from client 1005 and retrieved from the remote location upon request of the object.
[0137] Cache Cycling Embodiments
[0138] While using prefetching and caching may improve a subscriber's experience (e.g., through reduced download times for web objects), content providers may lose some control over content delivery and accounting. This may be undesirable for a number of reasons. One reason is that URL masking may compromise commercial interests of content providers. For example, advertising companies may rely on getting fresh requests to URLs to cycle different content, as well as to account for the number of site hits. Using cached information may limit content cycling and may make request and hit tracking more difficult. Another reason is that providing subscribers with cached data may result in presenting the subscribers with different web experiences than if normal cycling of content was allowed.
[0139] FIG. 11 illustrates a system of implementing a prior art HTTP cache. The cache 1102 receives a URL 1101, and initially uses an index of URL 1101's contents 1104 to determine (process block 1103) whether a fresh copy of the item is available. Freshness in this case is typically established using the standard HTTP rules, such as defined in RFC 2616, although HTTP caches can also be tuned to be more aggressive with respect to returning content that may not be fresh according to such rules. If the fresh copy is available, the cache retrieves (process block 1109) the cached copy from a storage 1108 and returns the retrieved object as a response (process block 1110). No further action is needed in this case. Alternatively, if the item is not in cache, a request 1105 is uploaded to a web content server 1106. When the response is received (process block 1107) and returned (process block 1110), a copy is added to storage 1108, and index 1104 is updated. There are significant shortcomings with this system; hence, improvements in the art are needed.
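Under stated assumptions (a fixed-size in-memory store, freshness already decided, and a fakeOrigin function standing in for the web content server), the prior-art flow of FIG. 11 reduces to the following sketch: a hit returns the stored copy with no upstream request; a miss fetches upstream, stores a copy, and updates the index.

```c
#include <string.h>

#define MAX_ENTRIES 16

/* Minimal in-memory model of the FIG. 11 cache; layout is an assumption. */
struct CacheEntry { char url[128]; char body[256]; };
static struct CacheEntry cacheStore[MAX_ENTRIES];
static int cacheCount = 0;

/* Stand-in for the web content server (1105/1106); counts upstream requests. */
static int upstreamFetches = 0;
static const char* fakeOrigin(const char* url)
{
    (void)url;
    ++upstreamFetches;
    return "content";
}

const char* cacheGet(const char* url, const char* (*fetchFn)(const char*))
{
    for (int i = 0; i < cacheCount; ++i)
        if (strcmp(cacheStore[i].url, url) == 0)
            return cacheStore[i].body;           /* hit: no further action */
    const char* body = fetchFn(url);             /* miss: request upstream */
    if (cacheCount < MAX_ENTRIES) {              /* store copy, update index */
        strncpy(cacheStore[cacheCount].url, url, sizeof cacheStore[0].url - 1);
        strncpy(cacheStore[cacheCount].body, body, sizeof cacheStore[0].body - 1);
        ++cacheCount;
    }
    return body;
}
```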
[0140] Aspects of the following embodiments relate to cache cycling, which is used to issue fresh requests to content providers for website content each time a proxy server serves a request from cached data. For example, cache cycling allows fresh content to be supplied for each request when URL masking is used, as described above. URL masking removes random elements from URL strings which are used to cycle through different content. Removing these random elements allows prefetching optimizations as well as caches to work effectively, but could interfere with the normal cycling of different content items for advertisements or other web elements.
[0141] Cache cycling allows fresh content to be presented for each request while still allowing the performance benefits of caches and prefetching to be achieved. Furthermore, since using cached content reduces the apparent number of times a URL is requested, URL masking could interfere with the accounting of advertising revenue and other metrics based on the number of requests. Cache cycling maintains the request metrics while allowing the performance benefits of caching to be achieved.
[0142] Some embodiments of cache cycling are implemented using a satellite communications system (e.g., the satellite communications system 100 of FIG. 1, above), for example, including functionality of gateways and/or subscriber terminals (e.g., the gateway 215 and/or subscriber terminal 230 of FIGS. 2 and 3, respectively). Those and/or other embodiments may exploit functionality described with reference to the computer system 400 of FIG. 4 and/or the computer network system 500 of FIG. 5. It will be appreciated that other types of systems and/or components may be used to implement functionality of various embodiments, without departing from the scope of the invention.
[0143] FIG. 12 illustrates how a cache with cache cycling is used in conjunction with URL masking, according to various embodiments. The input to the cache may include both a normal unmasked URL 1201 and a masked URL 1202 (e.g., generated using the techniques described above). For these purposes, the masked bytes in the URL string can be filled with default placeholders, such as the character '0', or the like. As a result, the impact of random values in the URL string has been removed, so that all URLs that differ only by the random elements will present the same masked URL at process block 1202.
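As an illustration of the masking step, the sketch below collapses URLs that differ only by numeric random elements, filling the masked bytes with '0' placeholders. The function name and the digit-run heuristic are assumptions; in the embodiments the mask positions are derived from the scripts that generate the URLs:

```python
import re

def mask_url(url):
    """Illustrative masking: replace digit runs in the query string with a
    '0' placeholder so that URLs differing only by random numeric elements
    present the same masked URL (process block 1202)."""
    if "?" not in url:
        return url
    path, query = url.split("?", 1)
    return path + "?" + re.sub(r"\d+", "0", query)
```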
[0144] A cache 1203 then checks an index 1205 to determine whether an object retrieved in response to an earlier request for a URL with the same masked URL is in cache 1203. If a response is in cache and sufficiently fresh, it is retrieved (process block 1210) from a storage 1209 and returned (process block 1211) to the user (e.g., client browser, etc.). In this case, freshness may be determined by special rules rather than by RFC 2616, as the expiration times provided in the HTTP header may not support caching. If a cached copy can be used, the user obtains the performance benefits of avoiding the wait for a response from, for example, a web content server.
[0145] For each masked URL 1202 that is received, an unmasked URL 1201 is also supplied. The unmasked URL includes the random elements created in, for example, the original JavaScript, and each of these URLs would be unique. A fresh request 1206 for the unmasked URL 1201 is then sent to a web content server 1207, regardless of whether a cached copy of the masked URL exists. When the response is received (process block 1208), the response is added to cache storage 1209 as the new entry for the masked URL 1202, and index 1205 is updated. If a sufficiently fresh cache entry is not found at process block 1204, the cache waits for the response at process block 1208, and then returns a copy to the user at process block 1211.
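The FIG. 12 behavior can be sketched as follows. This is a simplified, hypothetical model: the fetch callable stands in for the request to the web content server, the synchronous call stands in for the background transfer, and freshness checking is omitted:

```python
class CyclingCache:
    """Sketch of the cycling cache of FIG. 12. A cached response for the
    masked URL is returned immediately when available, but a fresh request
    for the unmasked URL is always issued, so hit counts and content
    cycling at the content provider are preserved."""

    def __init__(self, fetch):
        self.fetch = fetch      # callable: unmasked_url -> response body
        self.storage = {}       # masked_url -> most recent response

    def get(self, unmasked_url, masked_url):
        cached = self.storage.get(masked_url)
        # Fresh request goes out regardless of a cache hit (request 1206).
        fresh = self.fetch(unmasked_url)
        self.storage[masked_url] = fresh        # new entry for the masked URL
        # Hit: the user gets the (one-cycle-old) cached copy immediately.
        # Miss: wait for the fresh response and return it (block 1208).
        return cached if cached is not None else fresh
```

The usage below shows the one-cycle-old behavior: each request is counted at the content server, and each served response lags the freshest copy by exactly one cycle.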
[0146] In a further embodiment, cycling cache 1203 may be implemented in either Terminal Cache Module 435-A or Gateway Cache Module 220-A. When used on the terminal side, cycling cache 1203 allows a response to be sent immediately to CPE 260 without waiting for a copy to be fetched or prefetched from content server 1207. If a cached response was provided at process block 1210, then the fresh copy received at process block 1208 may not be considered time-critical, in that the customer has already received a response. In this case, the transfer of this data can be done at a low priority so as not to interfere with time-sensitive transfers. Masked URL 1202 can be generated from unmasked URL 1201 at the same time that the mask is used to check for matches with prefetched objects.
[0147] When cycling cache 1203 is implemented on the gateway side, cycling cache 1203 may be used to provide fast responses to prefetch requests, as it avoids the need to wait for a response from content server 1207. The URL masks are generated at the same time that the embedded URLs are identified in, for example, the JavaScript within the HTML or other web objects, so that masked URL 1202 can be presented along with unmasked URL 1201 to cycling cache 1203.
[0148] In a further embodiment, each time a cached object is used, a fresh copy of the content may be requested. As such, the cache is cycled, and the client receives content that is one cycle old, but the same number of external "hits" is accounted. Furthermore, the client is not required to wait for the fresh copy of the content because the client is able to quickly render the cached copy; the next time the content is requested, the previously fresh copy will be rendered to the client, another fresh copy will be retrieved, and so forth.
[0149] FIG. 13 illustrates a system 1300 for implementing cache cycling, in accordance with aspects of various embodiments. In one embodiment, system 1300 may include elements from Figs. 1 - 3, as well as a content provider 1305. According to embodiments of the present invention, each time system 1300 serves a request from cached data stored in terminal cache module 335, proxy server 155 in the gateway 115 may issue a fresh request to content provider 1305 for the cached content. When the response to the request arrives, new objects may replace the cached copies of those objects, for use in serving the next request for that URL to CPE 160. In this way, content provider 1305 may receive the same number of requests and may cycle through the same content, while providing CPE 160 with the benefits of prefetched/cached content.
[0150] For example, a request may be made by a web browser for a URL at CPE 160. Proxy client 332 implemented in subscriber terminal 130 may determine (e.g., as a result of cache-busting techniques discussed above) that cached copies of the requested objects are available in terminal cache module 335. The proxy client then issues a fresh request to proxy server 155 in gateway 115 according to the requested content (e.g., with or without masking cache-busting portions of URL strings). While the request is being processed and new objects are being retrieved, locally cached copies of the objects are passed to the browser of CPE 160 for rendering. As such, the web browser may immediately begin to render objects out of terminal cache module 335 without waiting for requests to be fulfilled over satellite 105; in the meantime, cached objects are replaced with new versions as the requests are fulfilled. [0151] Accumulator for Prefetch Abort Embodiments
[0152] Many present prefetchers blindly download content which may or may not be utilized by a client system in the future. Such an operation is performed without any consideration for the probability that the object may actually be used, nor does prefetching take into consideration the size of the object, the bandwidth of the link between the client and the content server, etc. As a result, a considerable amount of bandwidth, server capacity, and storage space is wasted on prefetched content which is never actually used. Hence, improvements in the art are needed.
[0153] Among other things, systems and methods are described for determining whether to abort a prefetch operation. Some prefetch abort determinations are made according to the size of the object being prefetched. However, in some cases, it is difficult or impossible to determine the size of the object prior to downloading the object from a content server. As such, embodiments include accumulator functionality for accumulating object data prior to making an abort determination. Certain embodiments also compress the accumulated data to more accurately reflect the cost of pushing the data to the client as part of the prefetch operation. Accumulation and/or compression of the data may provide sufficient data relating to the size of the object to make a useful abort determination, even where the size of the object cannot be otherwise determined (e.g., from the object data header). Other embodiments store accumulated data (e.g., in compressed or uncompressed form) for use in further optimizing prefetch operations. For example, if an accumulated prefetch is aborted before the object is forwarded to the client, and the client later requests the object, the object may be pushed to the client from server-side storage, rather than retrieving (e.g., and compressing) the object from the content server redundantly. Still other embodiments exploit the accumulated data to implement additional (e.g., byte-level) data processing functionality.
[0154] Turning now to FIG. 14, which illustrates method 1400, one embodiment of the operations performed by prefetch response abort 642 (Fig. 6) is shown. As discussed above, prefetch response abort 642 (Fig. 6) receives a prefetched object from the Internet through low latency link 656 (Fig. 6) (process block 1405). Even though the object has initially been prefetched, it does not necessarily mean that it is efficient to forward the object to the client (e.g., proxy client 612 (Fig. 6)). Due to bandwidth and other constraints of the link, objects sent over high latency link 630 (Fig. 6) between proxy server 632 (Fig. 6) and proxy client 612 (Fig. 6) should be carefully selected. Accordingly, a variety of factors should be considered before forwarding a prefetched object to the client.
[0155] At process block 1410, the size of the received object is checked. In one embodiment, the size of the object may be significant in determining whether to forward the object to the client. For example, one benefit of forwarding the prefetched object to the client may be the elimination of a round trip. In other words, if a prefetched item is eventually used by user system 602 (Fig. 6), the request out to the Internet and the response back from the requested website (i.e., one round trip) can be eliminated. Hence, in some instances, the smaller the prefetched object is, the more beneficial the prefetch is for optimization purposes.
[0156] Furthermore, one potential negative effect of forwarding a prefetched object is that the prefetched object unnecessarily uses the link's bandwidth. As such, if a prefetched object is forwarded to the client but never used by the client, the bandwidth used to forward the object may be wasted. Accordingly, larger prefetched objects may decrease optimization because the gained round trip may not outweigh the bandwidth consumption. In one embodiment, a point system may be assigned to the prefetched object where, for example, a 10 kilobyte object is given a higher point value than a 10 megabyte object. Consequently, if the point value associated with the object reaches or exceeds a threshold, then the object is forwarded to the client.
[0157] Another factor in determining whether an object should be forwarded to the client is the probability of use of the object (process block 1415). As a user browses the Internet, not all URLs that are prefetched will actually be requested by web browser 606. The user may, for example, "click off" a web page before objects within the page are requested. Whether some objects may be requested may depend on browser settings and/or on external events, such as mouse position. Furthermore, objects referenced in a CSS (e.g., a style sheet for the entire website) may not be used on each individual web page. In addition, if URLs are identified within JavaScript, the scripts themselves, based on a variety of factors, may determine whether to request an object.
[0158] In one embodiment, the probability that an object will actually be requested by web browser 606 may be estimated as a function of the context in which the reference was identified. For example, this context may depend on the type of the object (e.g., HTML, CSS, JS, etc.), the surrounding syntax (e.g., "img src=", JavaScript, etc.), and the level of recursion (e.g., whether the reference was on the main HTML or on an object that was itself prefetched). In one embodiment, if the object was referenced in JavaScript, the probability of use may depend on information collected while parsing the script. The probability that an object in a specific context will be requested can be estimated in several ways. For example, a general model can be built by sampling many different clients in many sessions going to many websites. Subsequently, a more specific model can be developed for a specific website and/or for a particular user. In one embodiment, this may be accomplished by recording the frequency of use in a specific context for a specific web page by a specific user.
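One hypothetical way to realize such a frequency-based model is sketched below; the class and the context-key structure (object type, surrounding syntax, recursion level) are assumptions, not taken from the specification:

```python
from collections import defaultdict

class UseProbabilityModel:
    """Sketch of estimating probability of use from the context in which a
    reference was identified. Counts prefetches and actual browser requests
    per context key, falling back to a default for unseen contexts."""

    def __init__(self):
        self.prefetched = defaultdict(int)   # context -> times prefetched
        self.used = defaultdict(int)         # context -> times actually requested

    def record_prefetch(self, context):
        self.prefetched[context] += 1

    def record_use(self, context):
        self.used[context] += 1

    def probability_of_use(self, context, default=0.5):
        n = self.prefetched[context]
        return self.used[context] / n if n else default
```

A general model would populate the counts across many clients and sites; a per-site or per-user model simply keys the same counts more specifically.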
[0159] Collectively, based on the above-mentioned probability factors, the object may be assigned a point value associated with its probability of use. In an alternative embodiment, the probability of use may be assigned a percentage value.
[0160] At process block 1420, the bandwidth of high latency link 630 (Fig. 6) may be determined (i.e., the speed of the link between proxy server 632 (Fig. 6) and proxy client 612 (Fig. 6)). The bandwidth of this link can be a factor in determining whether to forward the prefetched object. For example, with a higher link bandwidth, more objects and larger objects could be forwarded to the client. However, in contrast, if the bandwidth of the link is lower, then prefetch response abort 642 (Fig. 6) may be more selective when deciding whether to forward the prefetched object. In one embodiment, the bandwidth of the link is assigned a point value which may be factored into the determination of whether to forward the object.
[0161] At process block 1425, the latency of the link between proxy server 632 (Fig. 6) and proxy client 612 (Fig. 6) is determined. In one embodiment, the latency of the link is based on the current round trip time (RTT) of the link. Accordingly, if the RTT is high, then it may be more beneficial to forward the prefetched object to the client because of the round trip savings gained by forwarding the object. However, if the RTT is low, then the saved round trip may be of less value for optimization purposes. In one embodiment, the latency of the link is assigned a point value which may be factored into the determination of whether to forward the object.
[0162] In process block 1430, the initial prefetch time is determined (i.e., how long the object took to be retrieved from the Internet). If the object took a long time to retrieve from the Internet, then it may be optimal to forward the object to the client in order to avoid re-downloading the object in the future. Conversely, if the object was downloaded quickly, then less optimization may be gained from forwarding the object to the client. Hence, in one embodiment, the download time of the object may be assigned a point value which may be factored into determining whether to forward the object to the client. In an alternative embodiment, the aborted objects may be stored on proxy server 632 (Fig. 6) in case they are subsequently requested. Accordingly, if these objects are stored and then requested, the download will not need to be repeated. If this approach is implemented, then process block 1430 may not be used.
[0163] At process block 1435, a cost/benefit analysis may be performed to determine whether to forward the prefetched object. In one embodiment, the above-mentioned point values may be combined to determine whether the object meets a predetermined threshold. In an alternative embodiment, the cost of forwarding the object may be determined using the following equation:
Cost = ObjectSize * (1.0 - ProbabilityofUse)/Bandwidth
[0164] Furthermore, in one embodiment, the benefit of forwarding the prefetched object may be determined using the following equation:
Benefit = ProbabilityofUse * (RTT + PrefetchTime)
[0165] Accordingly, by using these or other equations, at decision block 1440, if the cost value is greater than the benefit value, then the prefetched object is aborted and the object is not forwarded to the client (process block 1445). Conversely, if the benefit is greater than the cost, then the prefetched object is forwarded to the client (process block 1450). In an alternative embodiment, objects that have been aborted may be cached at, for example, proxy server 632 (Fig. 6), in the event that the client subsequently requests the object. Hence, the above referenced equation may be reduced to:
Benefit = ProbabilityofUse * RTT
[0166] The equation is reduced in this manner because, since the object has already been downloaded, it would not need to be re-downloaded from the originating server. [0167] A number of variations and modifications of the disclosed embodiments can also be used. For example, the factors used to determine whether to forward a prefetched object may be used outside the website and/or Internet context. For example, the prefetching technique may be used to determine which terminals to download an object from in a peer-to-peer network environment. In addition, the prefetching technique may be used on various network types, for example, a satellite network, a mobile device network, etc.
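The cost/benefit comparison of process blocks 1435 through 1450, including the reduced benefit form used when aborted objects are cached at the proxy server, can be sketched as follows (function name and units are illustrative assumptions: sizes in bytes, times in seconds, bandwidth in bytes per second):

```python
def should_forward(object_size, probability_of_use, bandwidth, rtt,
                   prefetch_time=None):
    """Cost/benefit test of blocks 1435-1450.

    Cost    = ObjectSize * (1.0 - ProbabilityofUse) / Bandwidth
    Benefit = ProbabilityofUse * (RTT + PrefetchTime)

    If aborted objects are cached at the proxy server, pass
    prefetch_time=None to use the reduced Benefit = ProbabilityofUse * RTT.
    """
    cost = object_size * (1.0 - probability_of_use) / bandwidth
    if prefetch_time is None:
        benefit = probability_of_use * rtt                    # reduced form
    else:
        benefit = probability_of_use * (rtt + prefetch_time)
    return benefit > cost    # True -> forward (block 1450); False -> abort (1445)
```

For example, under these equations a small, likely-used object (10 KB, 80% probability of use) is forwarded over a 1 Mbps link with 600 ms RTT, while a large, unlikely object (10 MB, 20% probability) is aborted.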
[0168] It will be appreciated that, while functions of embodiments are described above with reference to components of FIG. 6, other systems and components may be used without departing from the scope of the invention. For example, some embodiments may use components of the systems shown in FIGS. 4, 5, and/or 10, described above. Further, as will be appreciated from the above description, prefetching systems may seek to request objects that will subsequently be requested when a web page is rendered.
[0169] Notably, much of the information about those objects may be unknown. For example, the exact set of objects to be ultimately requested by a client may not be known for certain in advance. Further, a server-side prefetcher may not have full knowledge of a client-side browser configuration or the contents of a client-side browser cache. Even further, the prefetcher may not be able to fully parse scripts in which object references are found, thereby further limiting certainty as to whether an object will be requested. As such, effectiveness and efficiency of a prefetching system may hinge on establishing appropriate cost-benefit analysis techniques.
[0170] As discussed above, the prefetching cost-benefit may be analyzed as a function of a number of factors, including probability of use, round-trip time (RTT), prefetch time, available bandwidth, object size, etc. Illustrative equations to this effect are described above with reference to Fig. 14. These various factors may be weighed to determine whether prefetching one or more objects is efficient, for example, as compared to downloading the object only when it is actually requested by the client.
[0171] Typically, satisfying requests for objects over a communications system (e.g., a satellite communications system) involves multiple sources of delay. In one embodiment, a user requests an object from a content server (e.g., via a web browser on a user system). The request is processed by an intermediate proxy client and proxy server (e.g., as described with reference to Fig. 10, above). The proxy server requests the object from the content server, receives the object from the content server in response to the request, and forwards the object to the user system via the proxy client. From the perspective of the proxy server, delays are introduced through communications with the content server (e.g., from delays in getting a response from the content server) and from communications with the client (e.g., from latency in the communication link between the proxy server and the proxy client). For example, in many high-latency communication systems (e.g., a satellite communications system), delays resulting from communications with the client may largely be due to the round-trip time over the communication link.
[0172] Embodiments of prefetching systems may be used to address one or both of these types of delay. In one example, a requested object is not prefetched. A first delay may be introduced while the proxy server waits to receive the requested object, and a second delay may be introduced according to the RTT between the server and client sides of the system. In another example, an object is prefetched. As there may be no need to send a request to the content server, the associated delay (the first delay in the preceding example) may be substantially eliminated. Further, while the RTT of the system may be unaffected by the prefetching (e.g., it is a physical property of the system), link usage in response to the request may be substantially minimized. For instance, rather than sending a large object over the link in response to the request, only a small message may be sent indicating to the client that a cached version of the object should be used.
[0173] It is worth noting that some system benefits of prefetching may tend to be relatively unrelated to the size of the object being prefetched, as the benefits may be most apparent when the object is ultimately requested. For example, even though a user may experience an apparent increase in speed by having the object sent to the user prior to a request, the amount of bandwidth used by the system is substantially the same regardless of when the object is sent over the link (e.g., whether it is sent as a prefetched object or sent in response to an actual request). On the contrary, however, some system costs associated with prefetching may be highly dependent on object size, as the costs may be most apparent when the object is not ultimately requested. For example, if a large object is prefetched and is not ultimately requested by the user, the link may be unnecessarily congested. This may delay the downloading of objects that are actually requested. [0174] Thus, once values of certain variables have been established (e.g., once link conditions and other environmental variables are established and the probability of use is estimated using various techniques), the decision whether it is efficient to prefetch an object may essentially become a function of object size. In some embodiments, the prefetching system determines that it is efficient to prefetch the object if the object size is less than some threshold value, while the prefetching system determines that it is inefficient to prefetch the object if the object size is larger than the threshold value.
[0175] At times, however, determining the object size prior to prefetching the object may be difficult or impossible. In some embodiments of prefetch abort systems, objects may be downloaded speculatively to the optimizing proxy server to determine the size. If the object size is less than the threshold for efficient prefetching, then the object may be prefetched. If it is larger, the object is not prefetched. This approach, however, may be limited in a number of ways. One limitation is that the size of an object may not be specified in the header. If the prefetcher aborts the transfer and the file was small, the benefits of prefetching are lost. If the prefetcher starts to download the file and the file is large, then unnecessary link congestion occurs. Another limitation is that, if the prefetcher decides to abort an object because it is too large and the browser subsequently requests the object, then the object must be requested again from the content server.
[0176] Embodiments of prefetch abort systems include an accumulator configured to address one or more of these limitations. During a prefetch operation, the accumulator accumulates file data until sufficient data is available to make an effective abort decision. In some embodiments, the accumulator is implemented by the prefetch response abort 642 block of Fig. 6. For example, the prefetch response abort 642 of Fig. 6 may use the accumulator to help determine the object size in block 1410 of the method 1400 of Fig. 14.
[0177] FIG. 15 shows a flow diagram of an illustrative method 1500 for prefetching using an accumulator, according to various embodiments. FIG. 16 shows relevant portions of an illustrative communications system 1600, including an accumulator for a prefetch abort system, according to various embodiments. The system 1600 includes a proxy server 632 in communication with a user system 602 (e.g., via a proxy client, over a high-latency link 630) and a content server 1630 (e.g., over a relatively low latency link 656). The proxy server 632 may be the proxy server 632 of FIG. 6, including prefetch response abort 642. As discussed below, prefetch response abort 642 is in communication with prefetch object compressor 1602 and prefetch accumulator 1604. The prefetch accumulator 1604 may be further in communication with an accumulator buffer 1610 and/or an output data store 1620. For the sake of clarity, the components of the system 1600 of FIG. 16 will be discussed in parallel with associated portions of the method 1500 of FIG. 15.
[0178] The method 1500 begins at block 1504 by determining an appropriate size threshold for efficient prefetching of an object. In some embodiments, the size threshold is determined by prefetch response abort 642, as discussed with reference to Fig. 14 above. If transfer time is a primary concern, the size threshold may be determined as a function of the cost-benefit equations described above, as follows. The cost of prefetching may be calculated in some embodiments according to the equation:
Cost = ObjectSize * (1.0 - ProbabilityofUse)/Bandwidth
[0179] For example, "ObjectSize" is the size of the object being evaluated for prefetching, "ProbabilityofUse" is the probability the object will be ultimately requested by the user system 602 (e.g., requested by the client browser), and "Bandwidth" is the bandwidth usage on the high-latency link 630 from pushing the object to the user system 602. The benefit of prefetching may be calculated in some embodiments according to the equation:
Benefit = ProbabilityofUse * (RTT + PrefetchTime)
[0180] For example, "RTT" is the round-trip time for communications over the high-latency link 630 and "PrefetchTime" is the time it takes to download (e.g., and possibly to compress) the object from the content server 1630. The maximum efficient object size may be considered in some embodiments as the object size where the cost of prefetching is equal to the benefit of prefetching (i.e., if the object size increases further, the cost will increase without increasing the benefit, causing the cost to exceed the benefit). Setting the cost and benefit equations equal to each other and solving for the object size may yield the following equation for the maximum efficient object size for prefetching:
MaximumSize = ProbabilityofUse * (RTT + PrefetchTime) * Bandwidth / (1.0 - ProbabilityofUse) [0181] It is worth noting that, while this equation may maximize performance experienced by one user, this threshold object size may be adjusted in response to other issues or for other reasons. For example, if a link is congested, the maximum size may be reduced to reflect the impact of the bandwidth consumption on other users. It is further worth noting that other types of metrics or thresholds may be used.
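The threshold of paragraph [0180] can be expressed directly; the sketch below also lets one verify numerically that cost equals benefit at the computed size (function name and units are illustrative):

```python
def maximum_prefetch_size(probability_of_use, rtt, prefetch_time, bandwidth):
    """MaximumSize from paragraph [0180]: the object size at which the cost
    of prefetching equals the benefit; larger objects are not worth
    prefetching under the cost/benefit equations above."""
    return (probability_of_use * (rtt + prefetch_time) * bandwidth
            / (1.0 - probability_of_use))
```

With a 50% probability of use, 600 ms RTT, 200 ms prefetch time, and 125,000 bytes/s of bandwidth, the threshold works out to 100,000 bytes, and substituting that size back into the cost equation reproduces the benefit value exactly.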
[0182] At block 1508, downloading of the prefetched object may begin. For example, the object is retrieved from the content server 1630 by the proxy server 632, but may not be sent over the high-latency link 630 to the user system 602. At block 1512, the downloaded data is accumulated in the prefetch accumulator 1604 (e.g., in an accumulator buffer 1610). At block 1520, the size of the accumulated data may be evaluated to determine whether the maximum size threshold has been reached.
[0183] In some embodiments, at block 1516, the data may be compressed by the prefetch object compressor 1602 prior to being sent (or as it is sent) to the prefetch accumulator 1604. Notably, compressing the data with the prefetch object compressor 1602 prior to accumulation in the accumulator buffer 1610 may allow the calculations to more accurately reflect the ultimate cost to the high-latency link 630 of sending the object to the user system 602. For example, if a file is highly compressible, the bandwidth cost to the high-latency link 630 may be reduced. As such, it may still be efficient to push the object to the user system 602 in its compressed form, even if its uncompressed object size would exceed the size threshold. In some embodiments, the size of the accumulated data is evaluated according to the compressed size in block 1520 when determining whether the maximum size threshold has been reached.
[0184] A determination is then made at block 1524 as to whether it is efficient to push the object to the user system 602 over the high-latency link 630. In some embodiments, this determination is made as a function of the determination in block 1520. For example, the equation for determining the size threshold may include all the relevant factors for making the ultimate cost benefit determination. If the end of the object file is reached before reaching the size threshold, it may be determined that pushing the object to the user system 602 is efficient; if the object size threshold is reached before the end of file, then it may not be efficient to push the object to the user system 602. In this way, embodiments may make the correct decision whether to prefetch the file data even where the file size is not specified in the header. In other embodiments, the determination at block 1524 may account for additional factors not evaluated as part of the size threshold equations. For example, the determination of whether it is efficient to push the object to the user system 602 may be affected by communications with other users (e.g., multicasting opportunities, link congestion, etc.) or other factors.
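Blocks 1508 through 1524 (download, compress, accumulate, and test against the size threshold) can be sketched as follows. This is a simplified, hypothetical model: zlib stands in for whatever compressor an embodiment uses, and the chunk iterator stands in for the stream arriving from the content server:

```python
import zlib

def accumulate_and_decide(chunks, max_size):
    """Accumulate compressed data until end-of-file or the size threshold.

    Returns ('push', data) if the compressed object fits under max_size
    (end of file reached first), or ('abort', accumulated) if the threshold
    is reached first; the accumulated data could then go to an output store."""
    compressor = zlib.compressobj()
    accumulated = b""
    for chunk in chunks:                      # data arriving from the content server
        accumulated += compressor.compress(chunk)
        if len(accumulated) > max_size:       # threshold reached before EOF
            return "abort", accumulated
    accumulated += compressor.flush()         # EOF: finalize the compressed object
    if len(accumulated) > max_size:
        return "abort", accumulated
    return "push", accumulated
```

Because the test is applied to the compressed size, a highly compressible object whose uncompressed size exceeds the threshold can still be pushed, reflecting its true cost to the link.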
[0185] If the decision is made to push the prefetched data to the user system 602 at block 1524, the object (e.g., compressed or uncompressed) may be pushed to the user system 602 at block 1528. If the decision is made not to push the prefetched data to the user system 602 at block 1524, the prefetch operation may be aborted at block 1532. In some embodiments, the accumulated data (e.g., the compressed data in the accumulator buffer 1610) can be pushed to an output data store 1620 (e.g., an output buffer) at block 1540. As part of storing the data at the output data store 1620 in block 1540, various additional functions may be performed. For example, the data may be parsed, indexed, logged, further compressed, etc.
[0186] In certain embodiments, when the decision is made not to push the prefetched data to the client at block 1524, a further determination is made as to whether the data should be accumulated past the threshold size value (e.g., in the accumulator buffer 1610 or the output data store 1620) at block 1536. For example, it may be more efficient to compress a larger amount of data prior to storage in block 1540. Notably, continued accumulation may typically increase performance for the user, as the prefetch time is saved without sacrificing bandwidth on the high-latency link 630. However, continued downloading, compression, and storage may use excessive amounts of resources on the proxy server 632. In one embodiment, the accumulation continues only until a second threshold is reached, after which the proxy server 632 closes the connection to the content server 1630. If the client (e.g., browser) subsequently requests the object, the accumulated data can be sent (e.g., from the output data store 1620) while the remainder of the file is downloaded and compressed by the prefetcher (e.g., and/or accumulated by the prefetch accumulator 1604). The second threshold may be set so that the time needed to transfer the accumulated data may be long enough to compensate for the delay in reestablishing a connection over the low latency link 656 to the content server 1630. In this way, savings in the prefetch time may be achieved without excessive consumption of memory and processing resources on the proxy server 632. [0187] It is worth noting that, if the client browser subsequently requests a stored object, the data may effectively be immediately available; no further delays may be created by fetching another copy from the content server 1630 or by compressing a new data object. As such, savings in prefetch time may be achieved even though the data was not pushed across the high-latency link 630.
[0188] It is further worth noting that the accumulated data (e.g., stored in the accumulator buffer 1610 and/or the output data store 1620) can be further exploited to provide additional functionality. FIG. 17 shows an illustrative method 1700 for exploiting accumulated data to further optimize prefetch abort operations, according to various embodiments. In some embodiments, the method 1700 is performed by the system 1600 of FIG. 16.
[0189] The method 1700 begins at block 1705 by receiving a prefetched object from a content server 1630. It will be appreciated from the discussion of FIGS. 15 and 16 that, even though the object has already been prefetched, it may still be inefficient to forward the object to the user system 602. For example, bandwidth and/or other constraints may affect a determination of whether it is efficient to communicate objects over the high-latency link 630 between the proxy server 632 and the user system 602.
[0190] As such, accumulated prefetched objects may be analyzed (e.g., in addition to other factors, such as link conditions) to gather and/or generate various additional cost-benefit data for optimizing the prefetch operation. In some embodiments, the byte sequence of the prefetched object is analyzed at block 1710. A determination may then be made in block 1715 of whether the bytes are the same as an object that was previously prefetched from a different URL. For example, data in the accumulator buffer 1610 may be processed using delta coding and/or other techniques to generate a fingerprint of the accumulated data. The fingerprint may be compared against fingerprints of previously accumulated data (e.g., data stored in the output data store 1620).
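The fingerprint comparison at blocks 1710 and 1715 can be sketched as follows. The specification mentions delta coding and/or other techniques; a plain SHA-256 digest stands in here, and the store layout (fingerprint-to-URL mapping) is an assumption for the example.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in fingerprint; the actual technique (e.g., delta coding)
    # is an implementation detail of the prefetch accumulator.
    return hashlib.sha256(data).hexdigest()

def matches_previous_prefetch(data: bytes, store: dict) -> bool:
    """True if the same bytes were already prefetched, possibly under a
    different URL. `store` maps fingerprints of previously accumulated
    data (e.g., data in the output data store) to the URLs they came from."""
    return fingerprint(data) in store
```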
[0191] In other embodiments, the object size is determined at block 1720. As discussed above, the object size may be significant in determining whether it is efficient to forward the object to the client. In some embodiments, if the object is large (e.g., or in all cases), a determination is made as to whether the object content is compressible, scalable, etc. at block 1725. For example, the determination may be made by the prefetch object compressor 1602. In certain embodiments, this determination is made as a function of analyzing the byte sequence in block 1710. If the content is compressible, scalable, etc., embodiments of the method 1700 revise the effective object size at block 1730. For example, the method 1700 may estimate the compressed size of the object and use that compressed size as the effective size of the object when making prefetching determinations.
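The effective-size revision at block 1730 can be sketched as below. This is a minimal sketch assuming a zlib-style compressor; the actual compressor used by the prefetch object compressor 1602 is an implementation detail.

```python
import zlib

def effective_size(obj: bytes) -> int:
    """Estimate the size the object would occupy on the high-latency link
    if compressed before transmission, and use the smaller of the raw and
    compressed sizes as the effective size for the prefetch decision."""
    compressed_len = len(zlib.compress(obj))
    return min(compressed_len, len(obj))
```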
[0192] In still other embodiments, other cost-benefit data is gathered and/or generated. In one embodiment, at block 1740, the communication link between the proxy client and the proxy server (e.g., or between the proxy server and the network) may be evaluated to determine various link conditions. This type of information may be received from external sources, estimated, measured, etc. For example, the links may be analyzed to determine bandwidth, traffic, latency, packet errors, etc. Similarly, latency of the high-latency link 630 may be estimated in block 1740 based on a current round-trip time ("RTT") measure. Accordingly, if the RTT is high, then it may be more beneficial to forward the prefetched object to the client because of the round trip savings gained by forwarding the object. However, if the RTT is low, then the saved round trip may be of less value for optimization purposes.
[0193] In another embodiment, the method 1700 determines the probability that an object will be used at block 1750. In some embodiments described above (e.g., those without accumulator functionality), URL parsing and/or other techniques can be used to help make that determination. For example, certain popular website content may be more likely requested and/or accessed by users of a communication network. In embodiments of prefetching that include accumulator functionality, the determination at block 1750 may be made as a function of analyzing the byte sequence in block 1710. For example, fingerprinting may be used, as described above, to determine if the object has been requested before by that user or by some threshold number of other users.
[0194] The various types of data collected and/or generated in the various blocks (e.g., blocks 1730, 1715, 1740, and 1750) may be used to estimate prefetch time in block 1760. For example, current link conditions and object size may drive an estimation of the time it will take to prefetch a particular object. Other data may also be used. For example, if the object was previously prefetched (e.g., or a similar object), that data can be used to make predictions. Particularly, if the object took a long time to retrieve from the Internet a previous time, then it may be optimal to forward the object to the client in order to avoid re-downloading the object in the future. Furthermore, if the object was downloaded quickly, then less optimization may be gained from forwarding the object to the client. Hence, in one embodiment, the download time of the object may be assigned a point value which may be factored into determining whether to forward the object to the client.
[0195] In some embodiments, some or all of these data can be used to perform a cost-benefit analysis on the prefetched object at block 1765. The result of the cost-benefit analysis performed at block 1765 can be evaluated at decision block 1770 to determine whether the benefits of prefetching the object outweigh the costs of prefetching the object. If the benefits outweigh the costs, the object is prefetched at block 1780. If the benefits fail to outweigh the costs, the object is not prefetched, or the prefetch operation is aborted at block 1775.
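The prefetch-time estimate of block 1760 and the cost-benefit decision of blocks 1765-1770 can be sketched together as below. The model, function names, and weighting are illustrative assumptions, not the specification's formulas: benefit is taken as the round trip saved weighted by the probability the object is used, and cost as the link time spent pushing it.

```python
def estimate_prefetch_time(size_bytes, bandwidth_bps, rtt_s):
    # Simplified model: one round trip plus serialization time on the link.
    return rtt_s + (size_bytes * 8) / bandwidth_bps

def should_push(size_bytes, bandwidth_bps, rtt_s, p_use):
    """Hedged cost-benefit sketch. Benefit: the round trip saved,
    weighted by the probability the object is actually requested.
    Cost: the high-latency-link time consumed by pushing the object."""
    benefit = p_use * rtt_s
    cost = (size_bytes * 8) / bandwidth_bps
    return benefit > cost
```

Consistent with the discussion above, a high RTT makes pushing more attractive (larger benefit), while a large object on a slow link makes aborting more attractive (larger cost).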
[0196] A number of variations and modifications of the disclosed embodiments can also be used. For example, factors used to determine whether to forward a prefetched object may be used outside the website and/or Internet context. For example, the prefetching technique may be used to determine which terminals to download an object from in a peer-to-peer network environment. In addition, the prefetching technique may be used on various network types, for example, a satellite network, a mobile device network, etc.
[0197] Domain Name Server (DNS) Prefetch Embodiments
[0198] When requesting a webpage, a number of requests may be made to various uniform resource locators (URLs). For example, URL requests may be made to retrieve embedded content objects for use in rendering the webpage, including images, videos, sounds, etc. Each of the URLs may be associated with an internet protocol (IP) address, as designated by a domain name server (DNS). As such, before retrieving a web object, a request may have to be made to the DNS to find the IP address associated with the object's URL.
[0199] The DNS lookups may require that additional requests be made to the network, which may cause certain inefficiencies. For example, in a satellite communications system, DNS lookups may involve additional round trips between the client user terminal and the server gateway sides of the communications system. Since each round trip over the satellite link takes time, these DNS lookups may cause undesirable system performance.
[0200] Some systems may configure user web browsers to use a hyper-text transfer protocol (HTTP) proxy at the server (e.g., gateway) side of the communications system for all DNS lookups. The client browser may forward all requests to the server-side proxy, so all the DNS lookups can be performed at the server side. In this way, DNS lookups may not use additional round trips. However, this implementation may require a particular type of browser configuration at the client side. This may be undesirable, as certain clients may not desire or know to configure their browsers in this way.
[0201] As such, it may be desirable to provide a different approach that is more transparent to the user, while still reducing round trips associated with DNS lookups. Among other things, methods, systems, devices, and software are provided for reducing round trips associated with DNS lookups in ways that are substantially transparent to the user. Embodiments implement prefetching of DNS entries, sometimes piggybacking on the prefetching of associated web objects. In one embodiment, prefetching of an object continues according to other prefetching techniques, until the point where the HTML response may be parsed. When an embedded object request is identified, a DNS lookup is performed to find the IP address for the request. The IP address is then pushed to the client as part of the prefetch data package (e.g., including the URL, the prefetched object, etc.).
[0202] In some embodiments, when the HTML response is received by the client, the client opens a prefetch socket. The client may use the prefetch socket to begin receiving the prefetch data, for example, including the DNS lookup results. As such, the client is aware of what data is being prefetched and can make further requests accordingly. For example, when a DNS request is made by the client, the request may be intercepted to determine whether the request can be handled using a local DNS entry. If so, the DNS response is handled locally and a round trip may be avoided. Notably, because of the awareness of what is being received via the prefetch socket, the client may wait to handle the request locally, even where the local DNS entry has not yet been fully received. As such, the round trip may be at least partially avoided even when the DNS request is made by the browser prior to completing receipt of the prefetched DNS entry.

[0203] Embodiments may be implemented in the context of various types of systems and components. For example, some embodiments exploit functionality of, or operate within the context of, systems, such as those described above with reference to FIGS. 1-5 and 10. For the sake of clarity, embodiments are described in the context of a client-server communications system, like the system 600 discussed above with reference to FIG. 6.
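The server-side piggybacking of a DNS lookup result onto a prefetched object can be sketched as below. The hostname parsing, the injectable resolver, and the dictionary layout of the prefetch data package are assumptions for illustration, not the patented wire format.

```python
import re
import socket

def build_prefetch_entry(url, body, resolve=socket.gethostbyname):
    """Bundle a prefetched object with its DNS lookup result so the client
    can later resolve the host locally, avoiding a round trip over the
    high latency link. `resolve` defaults to a real DNS lookup but can be
    replaced (e.g., for testing)."""
    host = re.match(r"https?://([^/:]+)", url).group(1)
    try:
        ip = resolve(host)  # the server-side DNS lookup
    except OSError:
        ip = None           # push the object without DNS data
    return {"url": url, "ip": ip, "object": body}
```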
[0204] Turning to FIG. 6, a system 600 is illustrated including a user system 602, a proxy client 612, and a proxy server 632. The user system 602 may include a client graphical user interface (GUI) 610. Client GUI 610 may allow a user to configure performance aspects of the system 600. For example, the user may adjust the compression parameters and/or algorithms, content filters (e.g., blocking illicit websites), and enable or disable various features used by the system 600. In one embodiment, some of the features may include network diagnostics and error reporting, as well as controlling, for example, functionality of the proxy server 632. Such control may include adding and/or removing pages (i.e., URLs) to or from whitelist 648 and/or blacklist 649, etc.
[0205] In one embodiment, a user accesses a website through the web browser 606, for example, by providing the URL of the website to web browser 606. Rendering and/or using the website may typically include making a number of calls to other URLs. For example, the website may include embedded objects (e.g., advertisements, movies, sounds, images, etc.), links, etc. Each of these URLs may represent a location on the Internet defined by an IP address. To retrieve the objects associated with the URLs, each URL may first have to be resolved to its corresponding IP address. This may typically be accomplished by issuing a lookup request to a DNS.
[0206] One traditional implementation may include issuing the DNS lookup requests from the client side of the system 600 (e.g., from the proxy client 612 or another component of the user system 602). When an object is requested, a client-side component may issue a request to a DNS to resolve the IP address of the object's URL, after which, the same or another client-side component may request the object using its resolved IP address. This may involve two requests to the Internet, which may result in two round trips. Particularly where each round trip is costly (e.g., where the round trip time is very long, as in a satellite communications system), client-side DNS lookup requests may be undesirable.

[0207] Another traditional implementation may include shifting the DNS lookup request role to the server side of the system 600 (e.g., to the proxy server 632). For example, user web browsers may be configured to use a server-side HTTP proxy for performing the DNS lookups. While this may avoid extra round trips incurred by performing the DNS lookups, the implementation may not be transparent to the user. For example, the configuration may involve affecting particular browser settings, running a client-side application, etc. Certain clients may not desire to configure their systems in this way for various reasons.
[0208] According to embodiments described herein, DNS lookups are implemented in such a way as to be relatively transparent to the user, while still avoiding extra round trips. For example, methods and systems may be substantially agnostic to the user's browser configuration, whether the user is running a particular application (e.g., a client-side optimization application), etc. Embodiments implement prefetching of the DNS entries along with or separate from the prefetching of associated web objects.
[0209] In some embodiments, prefetching of an object begins according to other prefetching techniques, like those described in U.S. Patent Application No. 12/172,913, filed on July 14, 2008, entitled "METHODS AND SYSTEMS FOR PERFORMING A PREFETCH ABORT OPERATION," which is hereby incorporated by reference herein in its entirety for all purposes. For example, after a user requests a website, web browser 606 may check browser cache 604 to determine whether the website associated with the selected URL is located within browser cache 604. If the website is located within browser cache 604, the amount of time the website has been in the cache is checked to determine if the cached website is "fresh" (i.e., new) enough to use. Consequently, if the website has been cached and the website is considered fresh, then web browser 606 renders the cached page. However, if the website has either not been cached or the cached webpage is not fresh, web browser 606 sends a request to the Internet for the website.
[0210] In one embodiment, redirector 608 intercepts the request sent from web browser 606. Redirector 608 instead sends the request through a local bus 605 to proxy client 612. In some embodiments, proxy client 612 may be implemented as a software application running on user system 602. In an alternative embodiment, proxy client 612 may be implemented on a separate computer system and is connected to user system 602 via a high speed/low latency link (e.g., a branch office LAN subnet, etc.). In one embodiment, proxy client 612 includes a request parser 616. Request parser 616 may check cache optimizer 614 to determine if a cached copy of the requested website may still be able to be used. Cache optimizer 614 is in communication with browser cache 604 in order to have access to cached websites. Cache optimizer 614 is able to access browser cache 604 without creating a redundant copy of the cached websites, thus requiring less storage space.
[0211] According to one embodiment, cache optimizer 614 implements more effective algorithms to determine whether a cached website is fresh. In one embodiment, cache optimizer 614 may implement the cache expiration algorithms from HTTP v1.1 (i.e., RFC 2616), which may not be natively supported in web browser 606. For example, browser cache 604 may inappropriately consider a cached website as too old to use; however, cache optimizer 614 may still be able to use the cached website. More efficient use of cached websites can improve browsing efficiency by reducing the number of Internet accesses.
[0212] In one embodiment, if the requested website is not able to be accessed from the cached websites, request parser 616 checks prefetch manager 620 to determine if the requested website has been prefetched. Prefetching a website is when content from the website is accessed, downloaded, and stored before a request to the website is made by web browser 606. Prefetching can potentially save round-trips of data access from user system 602 to the Internet.
[0213] In a further embodiment, if the requested website has not been prefetched, then request parser 616 forwards the request to a request encoder 618. Request encoder 618 encodes the request into a compressed version of the request using one of many possible data compression algorithms. For example, these algorithms may employ a coding dictionary 622 which stores strings so that data from previous web objects can be used to compress data from new pages. Accordingly, where the request for the website is 550 bytes in total, the encoded request may be as small as 50 bytes. This level of compression can save bandwidth on a connection, such as high latency link 630. In one embodiment, high latency link 630 may be a wireless link, a cellular link, a satellite link, a dial-up link, etc.
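The dictionary-based request encoding described above can be sketched with zlib's preset-dictionary support standing in for the coding dictionary 622. The dictionary contents and the specific compression algorithm are assumptions; the specification only requires that both sides share the same dictionary of strings from previous web objects.

```python
import zlib

# Hypothetical shared coding dictionary holding strings seen in previous
# requests; the proxy client and proxy server hold identical copies.
CODING_DICT = b"GET http://www.example.com/ HTTP/1.1\r\nAccept-Encoding: gzip\r\n"

def encode_request(request: bytes) -> bytes:
    # Compress against the shared dictionary (proxy-client side).
    comp = zlib.compressobj(zdict=CODING_DICT)
    return comp.compress(request) + comp.flush()

def decode_request(encoded: bytes) -> bytes:
    # Decompress with the same dictionary (proxy-server side).
    decomp = zlib.decompressobj(zdict=CODING_DICT)
    return decomp.decompress(encoded) + decomp.flush()
```

Because most of a typical request already appears in the dictionary, the encoded form can be far smaller than the original, saving bandwidth on the high latency link 630.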
[0214] In one embodiment, after request encoder 618 generates an encoded version of the request, the encoded request is forwarded to a protocol 628. In one embodiment, protocol 628 is Intelligent Compression Technology's® (ICT) transport protocol (ITP). Nonetheless, other protocols may be used, such as the standard transmission control protocol (TCP). In one embodiment, ITP maintains a persistent connection with proxy server 632. The persistent connection between proxy client 612 and proxy server 632 enables system 600 to eliminate the inefficiencies and overhead costs associated with creating a new connection for each request.
[0215] In one embodiment, the encoded request is forwarded from protocol 628 to request decoder 636. Request decoder 636 uses a decoder which is appropriate for the encoding performed by request encoder 618. In one embodiment, this process utilizes a coding dictionary 638 in order to translate the encoded request back into a standard format which can be accessed by the destination website. Furthermore, if the HTTP request includes a cookie (or other special instructions), such as a "referred by" or type of encoding accepted, information about the cookie or instructions may be stored in a cookie model 652. Request decoder 636 then transmits the decoded request to the destination website over a low latency link 656. Low latency link 656 may be, for example, a cable modem connection, a digital subscriber line (DSL) connection, a T1 connection, a fiber optic connection, etc.
[0216] In response to the request, a response parser 644 receives a response from the requested website. In one embodiment, this response may include an attachment, such as an image and/or text file. Some types of attachments, such as HTML, XML, CSS, or JavaScript, may include references to other "in-line" objects that may be needed to render a requested web page. In one embodiment, when response parser 644 detects an attachment type that may contain such references to "in-line" objects, response parser 644 may forward the objects to a prefetch scanner 646.
[0217] In one embodiment, prefetch scanner 646 scans the attached file and identifies URLs of in-line objects that may be candidates for prefetching. For example, candidates may be identified by HTML syntax, such as the token "img src=". In addition, objects that may be needed for the web page may also be specified in JavaScript that appears within the HTML or CSS page or within a separate JavaScript file. Methods for identifying candidates within JavaScript may be found in co-pending U.S. Patent Application No. 12/172,917, entitled "METHODS AND SYSTEMS FOR JAVA SCRIPT PARSING" (Attorney Docket No. 026841-00021 OUS), filed July 14, 2008, which is incorporated by reference for all purposes. In one embodiment, the identified candidates are added to a candidate list.

[0218] In one embodiment, for the candidate URLs, prefetch scanner 646 may notify prefetch response abort 642 of the context in which the object was identified, such as the type of object in which it was found and/or the syntax in which the URL occurred. This information may be used by prefetch response abort 642 to determine the probability that the URL will actually be requested by web browser 606.
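The scan for the "img src=" token can be sketched as below. This regex-based sketch is an assumption for illustration; as noted above, a full scanner would also handle CSS and script-constructed URLs.

```python
import re

# Matches the 'img src=' token mentioned above, capturing the quoted URL.
IMG_SRC = re.compile(r"""<img[^>]+src=["']([^"']+)["']""", re.IGNORECASE)

def scan_candidates(html):
    """Return in-line object URLs found in an HTML attachment as
    prefetch candidates (illustrative only)."""
    return IMG_SRC.findall(html)
```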
[0219] According to a further embodiment, the candidate list is forwarded to whitelist 648 and blacklist 649. Whitelist 648 and blacklist 649 may be used to track which URLs should be allowed to be prefetched, based on the host (i.e., the server that is supplying the URL), the file type (e.g., active server page (ASP) files should not be prefetched), etc. Accordingly, whitelist 648 and blacklist 649 control prefetching behavior by indicating which URLs on the candidate list should or should not be prefetched. In many instances with certain webpages/file types, prefetching may not work. In addition to ASP files, webpages which include fields or cookies may have problems with prefetching.
[0220] In one embodiment, once the candidate list has been passed through whitelist 648 and blacklist 649, a modified candidate list is generated and then the list is forwarded to a client cache model 650. The client cache model 650 attempts to model which items from the list will be included in browser cache 604. As such, those items are removed from the modified candidate list. Subsequently, the updated modified candidate list is forwarded to a request synthesizer 654 which creates an HTTP request in order to prefetch each item in the updated modified candidate list. The HTTP request header may include cookies and/or other instructions appropriate to the website and/or to web browser 606's preferences using information obtained from cookie model 652. The prefetch HTTP requests may then be transmitted through low latency link 656 to the corresponding website.
[0221] In one embodiment, response parser 644 receives a prefetch response from the website and accesses a prefetch response abort 642. Prefetch response abort 642 is configured to determine whether the prefetched item is worth sending to user system 602. Prefetch response abort 642 bases its decision whether to abort a prefetch on a variety of factors, which are discussed above in more detail.
[0222] If the prefetch is not aborted, response parser 644 forwards the response to response encoder 640. Response encoder 640 accesses coding dictionary 638 in order to encode the prefetched response. Response encoder 640 then forwards the encoded response through protocol 628 over high latency link 630 and then to response decoder 626. Response decoder 626 decodes the response and forwards the response to response manager 624. In one embodiment, if the response is a prefetched response, then response manager 624 creates a prefetch socket to receive the prefetched item as it is downloaded.
[0223] It will be appreciated that, in making HTTP requests, URLs associated with the requested (e.g., prefetch) objects may have to be resolved to determine corresponding IP addresses. For example, a DNS lookup may be performed to resolve the URLs for each prefetch object. Rather than discarding the results of the DNS lookup after the HTTP request is made, the DNS lookup result may be added to the prefetch data pushed to the client. As such, in some embodiments, when the response encoder 640 forwards the encoded response through protocol 628 over high latency link 630 to response decoder 626, the response includes the DNS lookup results (e.g., the IP address associated with the URL). Further, when the response is a prefetch response, the DNS lookup results may be received at the client as they are downloaded via the prefetch socket created by response manager 624.
[0224] In some embodiments, when response decoder 626 decodes the response, it is stripped of certain data relating to the DNS lookup and creates a DNS prefetch entry. For example, the DNS prefetch entry may include the URL and its associated IP address. Certain embodiments may store the DNS entry locally for future use. Other embodiments temporarily store the DNS entry (e.g., in a scratch pad) in anticipation of an impending request. For example, when the DNS information is received, it may be assumed that a request for that DNS will be made shortly thereafter, if at all.
[0225] Response manager 624 transmits the response over local bus 605 to redirector 608. Redirector 608 then forwards the response to web browser 606 which renders the content of the response. After web browser 606 receives the response, rendering the content may involve requesting one or more content objects from the web (e.g., videos, images, sounds, etc.). Each content object may be located at a URL, and each URL may have to be resolved to a valid host IP address prior to requesting the content object. As discussed above, resolving the URLs may typically involve querying a DNS to find the associated IP address.

[0226] In some embodiments, the DNS lookup request may be intercepted by the redirector 608. Redirector 608 may instead send the request through a local bus 605 to proxy client 612. As discussed above, some DNS entries may have been prefetched and stored locally, or may be in the process of being prefetched. As such, the request parser 616 in the proxy client may check prefetch manager 620 to determine if the requested DNS lookup has been, or is in the process of being, prefetched.
[0227] If the requested DNS lookup has been, or is being, prefetched, the DNS request may be handled locally. For example, if it is determined that the DNS request can be handled locally, response manager 624 may transmit the DNS response over local bus 605 to redirector 608. The object request can then proceed without first making a round trip (e.g., across high latency link 630) to the DNS. If it is determined that the DNS request cannot be handled locally, it may be passed along for normal processing (e.g., over high latency link 630 to the DNS).
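The local-versus-remote handling of a DNS request can be sketched as below. The entry store and function names are assumptions for the example; in the system described above the entries would be filled in as prefetch data arrives via the prefetch socket.

```python
# Sketch of client-side DNS interception: prefetched DNS entries are
# stored locally; a DNS request is answered locally when an entry exists
# and is otherwise forwarded over the high latency link.
dns_prefetch_entries = {}  # hostname -> IP address, filled as data arrives

def handle_dns_request(hostname, forward_remote):
    """Return (ip_address, handled_locally). `forward_remote` is the
    fallback that performs the normal lookup over the high latency link
    when no local prefetch entry is available."""
    if hostname in dns_prefetch_entries:
        return dns_prefetch_entries[hostname], True
    return forward_remote(hostname), False
```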
[0228] It is worth noting that the HTML response may be received at the client, and web browser 606 may begin requesting DNS lookups, before the respective DNS prefetch entries have been created (e.g., before they have finished downloading). However, as discussed above, the client may be made aware of what will be prefetched as part of the HTML response. For example, embodiments of the client receive the DNS lookup results via a prefetch socket (e.g., acting as a DNS proxy) configured by the client to receive particular prefetch objects. As such, even when the DNS prefetch entry has not been completed, the client may be aware that the DNS information is in the process of being prefetched. Consequently, the client may decide to wait for the local DNS entry to be completely prefetched to allow local handling of the DNS request.
[0229] It is further worth noting that, as discussed above, certain objects may not be prefetched, even where the URL is embedded or otherwise part of an HTTP request. For example, it may be determined that it would be inefficient to prefetch the object because of its size, because there is a very low probability that the object will ultimately be requested by the user, because the URL represents a link (e.g., HREF) or other web item that is not a prefetch candidate, because the object is on the blacklist 649, etc. Certain embodiments still prefetch the DNS and create a local DNS entry, even when it is determined not to prefetch the associated object. In fact, some embodiments prefetch all DNS information, whenever practical.

[0230] For example, piggybacking the DNS lookup result (the IP address) onto the URL when an object is prefetched may only add a few bytes (e.g., 4 bytes for an IPv4 address, 16 for an IPv6 address) to the prefetch data package. Even if the DNS lookup is pushed without an associated object (e.g., along with other data), the total additional prefetch data may still be minimal. As such, the cost of prefetching the DNS entry may be very small compared to the cost of the round trip, particularly where round trip times are large (e.g., in a satellite communications system).
[0231] Some embodiments perform a cost-benefit analysis for prefetching DNS entries that is similar to that described above with reference to FIG. 14. For example, referring to the equations described above with reference to FIG. 14, object size factors primarily (e.g., or only) into the cost, and RTT factors primarily (e.g., or only) into the benefit. When the object size is very small (e.g., the DNS lookup result involves only a small number of bytes), the prefetch cost may be very small. When the RTT is relatively constant (e.g., the time to traverse the satellite link may not change much), the prefetch cost may be relatively constant. As such, the benefits may clearly outweigh the costs predicted for prefetching the DNS entries. Moreover, the costs become even further outweighed as the RTT increases.
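The asymmetry described above, a few-byte cost against a round-trip benefit, can be put in the same simplified form. The function and its parameters are illustrative stand-ins for the FIG. 14 analysis, not the specification's formulas.

```python
def dns_prefetch_worthwhile(entry_bytes, bandwidth_bps, rtt_s):
    """Cost: link time spent pushing the tiny DNS entry. Benefit: the
    round trip avoided. For an entry of a few bytes, the benefit
    dominates for any realistic satellite RTT (illustrative only)."""
    cost_s = (entry_bytes * 8) / bandwidth_bps
    return rtt_s > cost_s
```

For example, a 6-byte entry on a 1 Mbps link costs well under a millisecond of link time, while a satellite round trip is on the order of hundreds of milliseconds, so the decision is almost always to prefetch the DNS entry.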
[0232] It is further worth noting, as discussed above, that the size of the DNS prefetch data may be very small, regardless of whether the DNS data is prefetched on its own (e.g., with the associated URL) or as a piggyback operation along with prefetching an object. Consequently, it may be efficient to prefetch DNS data, even where no associated objects are prefetched. For example, even where the method 200 of FIG. 2 results in a determination to abort the prefetch operation, it may still be efficient to push associated DNS data to the client.
[0233] While the principles of the disclosure have been described above in connection with specific apparatuses and methods, it is to be clearly understood that this description is made only by way of example and not as limitation on the scope of the disclosure. Further, embodiments described with reference to functionality of the subscriber terminal 130 may be implemented by or at the gateway 115, and vice versa.
[0234] While the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the methods and processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Further, while various methods and processes described herein may be described with respect to particular structural and/or functional components for ease of description, methods of the invention are not limited to any particular structural and/or functional architecture but instead can be implemented on any suitable hardware, firmware and/or software configuration. Similarly, while various functionality is ascribed to certain system components, unless the context dictates otherwise, this functionality can be distributed among various other system components in accordance with different embodiments of the invention.
[0235] Moreover, while the procedures comprised in the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments of the invention. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments are described with — or without — certain features for ease of description and to illustrate exemplary features, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise. Consequently, although the invention has been described with respect to exemplary embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method of implementing URL masking, the method comprising: receiving, at a terminal, a web content request including a URL string for locating the web content; comparing, at a parser module on the terminal, the URL string to a list of URLs for which prefetched responses are available to determine if the request can be fulfilled from the prefetched responses; using a mask that excludes portions of the URL string that are not relevant to finding or selecting web content when comparing the request to the list of prefetched URLs; if the masked URL string matches the URL of one of the prefetched responses, supplying the prefetched response to be used as a response to the incoming request; parsing scripts in a web response to search for URLs that are rendered on a web page; analyzing the scripts to identify bytes in the URL that generate random values; and generating a mask which indicates bytes that are random and that are to be excluded from a comparison in order to determine whether the prefetched response can be used to respond to the web content request.
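The masked comparison recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`apply_mask`, `find_prefetched`), the boolean-list mask representation, and the example URLs are all assumptions introduced for clarity.

```python
def apply_mask(url, mask):
    # Keep only the byte positions the mask marks as stable; positions
    # flagged True are random (e.g. a cache-busting token) and are ignored
    # when comparing the request against prefetched URLs.
    return "".join(c for c, is_random in zip(url, mask) if not is_random)

def find_prefetched(request_url, prefetched, mask):
    # Compare the masked request URL against each masked prefetched URL;
    # on a match, the stored response can satisfy the incoming request.
    masked = apply_mask(request_url, mask)
    for stored_url, response in prefetched.items():
        if apply_mask(stored_url, mask) == masked:
            return response
    return None
```

For example, a mask flagging the final four digits of `...?cb=1234` as random lets a request for `...?cb=9876` match the prefetched copy even though the cache-busting digits differ.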
2. A method of implementing URL masking according to claim 1, wherein the random values in the URL string comprise a cache busting string.
3. A method of implementing URL masking according to claim 1, wherein the web content request is a JavaScript.
4. A method of implementing URL masking according to claim 3, further comprising parsing, at the parser module, the JavaScript to identify an embedded URL.
5. A method of implementing URL masking according to claim 4, further comprising, in response to identifying an embedded URL, determining the process by which the embedded URL was constructed.
6. A method of implementing URL masking according to claim 5, wherein the process comprises: executing a random number generator to produce a binary string; converting the binary string into an ASCII string; and inserting the ASCII string into the embedded URL.
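The three-step construction in claim 6 can be sketched directly. This is an illustrative guess at one concrete realization: the hex encoding, the four-byte token length, and the `cb=` parameter name are assumptions, not specified by the claim.

```python
import random

def add_cache_buster(url):
    # Step 1 of claim 6: a random number generator produces a binary string.
    raw = random.getrandbits(32).to_bytes(4, "big")
    # Step 2: the binary string is converted into an ASCII string (hex here;
    # the exact encoding is an assumption, not stated by the claim).
    token = raw.hex()
    # Step 3: the ASCII string is inserted into the embedded URL as a
    # query parameter, producing the random portion the mask must exclude.
    sep = "&" if "?" in url else "?"
    return url + sep + "cb=" + token
```

Because the token changes on every call, two requests for the same resource carry different URLs; the mask of claim 1 is what lets a cache recognize them as equivalent.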
7. A method of implementing URL masking according to claim 6, wherein the embedded URL is the URL string and the ASCII string is the unrelated portion of the URL string.
8. A method of implementing URL masking according to claim 1, wherein the unrelated portion of the URL string is generated using a timestamp value.
9. A method of implementing URL masking according to claim 8, further comprising removing at least a portion of the timestamp value from the URL string to mask the URL string.
10. A method of implementing URL masking according to claim 1, further comprising comparing the masked URL with cached URLs in the terminal's cache.
11. A method of implementing URL masking according to claim 10, further comprising in response to determining that the masked URL matches one of the cached URLs, rendering the web content associated with the cached URL at the terminal.
12. A method of implementing URL masking according to claim 11, wherein the cache is a Squid web proxy cache.
13. A method of implementing URL masking according to claim 11, wherein the cache is a browser cache.
14. A method of implementing URL masking according to claim 1, wherein the terminal is a satellite terminal.
15. A system for implementing URL masking, the system comprising: a gateway configured to receive a web content request including a URL string for locating the web content; and a terminal in communication with the gateway, the terminal configured to receive the web content request from the gateway, compare the URL string to a list of URLs for which prefetched responses are available to determine if the request can be fulfilled from the prefetched responses, use a mask that excludes portions of the URL string that are not relevant to finding or selecting web content when comparing the request to the list of prefetched URLs, if the masked URL string matches the URL of one of the prefetched responses, supply the prefetched response to be used as a response to the incoming request, parse scripts in a web response to search for URLs that are rendered on a web page, analyze the scripts to identify bytes in the URL that generate random values, and generate a mask which indicates bytes that are random and that are to be excluded from a comparison in order to determine whether the prefetched response can be used to respond to the web content request.
16. The system for implementing URL masking according to claim 15, wherein the gateway is a satellite gateway and the terminal is a subscriber terminal.
17. The system for implementing URL masking according to claim 16, wherein the satellite gateway and the subscriber terminal are in communication via a satellite link.
18. A gateway configured to implement URL masking, the gateway comprising: an accelerator module configured to receive a web content request including a URL string for locating the web content, wherein the accelerator module includes: a parser module configured to analyze the URL string to determine if the URL string includes a portion within the string that is unrelated to locating the web content; a masker module coupled with the parser module, the masker module configured to, in response to determining that the URL string includes a portion that is unrelated to determining the location of the web content, create a mask that indicates which bytes are to be excluded from the URL string when determining whether a request matches a prefetched or cached response; and a prefetcher module coupled with the masker module, the prefetcher module configured to compare the masked URL string with prefetched URL strings stored by the prefetcher module, and in response to the masked URL matching one of the prefetched URL strings, retrieve a prefetched object associated with the one of the prefetched URL strings; and a gateway transceiver module in communication with the accelerator module, the gateway transceiver module configured to receive the prefetched object and transmit the prefetched object to a terminal.
19. A gateway configured to implement URL masking according to claim 18, wherein the unrelated portion of the URL string is a cache busting string.
20. A machine-readable medium for implementing URL masking, which includes sets of instructions which, when executed by a machine, cause the machine to: receive, at a terminal, a web content request including a URL string for locating the web content; compare, at a parser module on the terminal, the URL string to a list of URLs for which prefetched responses are available to determine if the request can be fulfilled from the prefetched responses; use a mask that excludes portions of the URL string that are not relevant to finding or selecting web content when comparing the request to the list of prefetched URLs; if the masked URL string matches the URL of one of the prefetched responses, supply the prefetched response to be used as a response to the incoming request; parse scripts in a web response to search for URLs that are rendered on a web page; analyze the scripts to identify bytes in the URL that generate random values; and generate a mask which indicates bytes that are random and that are to be excluded from a comparison in order to determine whether the prefetched response can be used to respond to the web content request.
21. A system for implementing cache cycling, the system comprising: a client configured to generate a content request; a subscriber terminal including a terminal cache module and a terminal accelerator module which includes a proxy client, wherein the proxy client is configured to intercept the content request, access the terminal cache module, and determine that the requested content is stored in the terminal cache module, issue a request for a new copy of the requested content, and transmit the requested content stored in the terminal cache module to the client; a satellite in communication with the subscriber terminal, the satellite configured to transmit data; a gateway in communication with the satellite, the gateway including a gateway accelerator module which includes a proxy server, the proxy server configured to receive the request for the new copy of the requested content and forward the request; and a content provider in communication with the gateway, the content provider configured to receive the content request and transmit the new copy of the requested content to the gateway, wherein the gateway is configured to transmit the new copy of the content to the subscriber terminal via the satellite; and wherein the subscriber terminal is further configured to replace the requested content stored in the terminal cache module with the new copy of the requested content, such that the content stored in the terminal cache module is updated for subsequent requests.
22. A system for implementing cache cycling as in claim 21, wherein the content request includes a URL string for locating the requested content.
23. A system for implementing cache cycling as in claim 22, wherein the URL string comprises a masked URL string.
24. A system for implementing cache cycling as in claim 23, wherein the masked URL string comprises the URL string with at least a randomly generated portion removed and/or replaced.
25. A system for implementing cache cycling as in claim 22, wherein the terminal accelerator module is further configured to determine portions of the URL string that are randomly generated for each request according to a particular random generation policy.
26. A system for implementing cache cycling as in claim 25, wherein the terminal accelerator module is further configured to generate a random string according to the random generation policy and insert the random string into the URL string in order to request the new copy of the requested content.
27. A system for implementing cache cycling as in claim 21, wherein the requested content comprises advertising content.
28. A system for implementing cache cycling as in claim 27, wherein the request for the advertising content includes cookies, client targeting parameters, and localization information associated with the request.
29. A system for implementing cache cycling as in claim 27, wherein the advertising content includes advertising accounting requirements, and as such, the requesting of the new copy of the requested content maintains the advertising accounting requirements.
30. A system for implementing cache cycling as in claim 21, wherein the satellite comprises a bent pipe satellite.
31. A system for implementing cache cycling as in claim 21, wherein the new copy of the requested content comprises a fresh copy of the requested content.
32. A system for implementing cache cycling as in claim 21, wherein the client is further configured to render the requested content.
33. A system for implementing cache cycling as in claim 21, wherein the terminal cache module is further configured to store the new copy of the requested content.
34. A method of implementing cache cycling, the method comprising: generating, at a client, a content request; intercepting, at a subscriber terminal, the content request; accessing a terminal cache module to determine that the requested content is stored in the terminal cache module; issuing a request for a new copy of the requested content; transmitting the requested content stored in the terminal cache module to the client; receiving, at a gateway, the request for the new copy of the requested content; receiving, at a content provider, the content request and transmitting the new copy of the requested content to the gateway; transmitting the new copy of the content to the subscriber terminal; and replacing the requested content stored in the terminal cache module with the new copy of the requested content, such that the content stored in the terminal cache module is updated for subsequent requests.
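The core of the cache-cycling method in claim 34 — serve the stored copy to the client immediately while fetching a new copy to replace it — can be sketched as below. This is a simplified single-process sketch; the names (`handle_request`, `fetch_fresh`) and the plain-dict cache are assumptions, and the real system performs the refresh over the satellite link rather than inline.

```python
def handle_request(url, cache, fetch_fresh):
    # Cache cycling: on a hit, the client is answered from the terminal
    # cache at once, and a request for a new copy is issued so that the
    # cached entry is replaced for subsequent requests.
    if url in cache:
        stale = cache[url]             # serve the stored content now
        cache[url] = fetch_fresh(url)  # cycle: replace with a new copy
        return stale
    fresh = fetch_fresh(url)           # miss: fetch and cache normally
    cache[url] = fresh
    return fresh
```

The effect is that each request is answered with zero perceived latency, while every request still reaches the content provider, which (per claims 27–29) preserves advertising accounting.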
35. A method of implementing cache cycling as in claim 34, further comprising rendering the requested content in the client's browser.
36. A method of implementing cache cycling as in claim 34, wherein the content request includes a URL string for locating the requested content.
37. A method of implementing cache cycling as in claim 34, further comprising determining portions of the URL string that are randomly generated for each request according to a particular random generation policy.
38. A method of implementing cache cycling as in claim 37, further comprising generating a random string according to the random generation policy.
39. A method of implementing cache cycling as in claim 38, further comprising inserting the random string into the URL string in order to request the new copy of the requested content.
40. A method of implementing cache cycling as in claim 34, wherein the content request comprises an HTTP request.
41. A machine-readable medium having sets of instructions which, when executed by a machine, cause the machine to: generate, at a client, a content request; intercept, at a subscriber terminal, the content request; access a terminal cache module to determine that the requested content is stored in the terminal cache module; issue a request for a new copy of the requested content; transmit the requested content stored in the terminal cache module to the client; receive, at a gateway, the request for the new copy of the requested content; receive, at a content provider, the content request and transmitting the new copy of the requested content to the gateway; transmit the new copy of the content to the subscriber terminal; and replace the requested content stored in the terminal cache module with the new copy of the requested content, such that the content stored in the terminal cache module is updated for subsequent requests.
42. The machine-readable medium as in claim 41, wherein the sets of instructions, when further executed by the machine, cause the machine to store the new copy of the requested content at the terminal cache module.
43. A method for handling object data as part of a prefetch operation in a communications system, the method comprising: commencing downloading of object data of a prefetch object from a content server using a prefetch server communicatively coupled with the content server; accumulating the object data in an accumulator at the prefetch server; checking an accumulated size of the accumulated object data; determining, at the prefetch server and as a function of the accumulated size of the accumulated object data, whether the cost of pushing the object data to a client over a communication link of the communications system exceeds the benefit of pushing the object data to the client over the communication link of the communications system; and when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client, aborting the prefetch operation.
44. The method of claim 43, further comprising: when the cost of pushing the object data to the client does not exceed the benefit of pushing the object data to the client, pushing the object data to the client.
45. The method of claim 43, further comprising: when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client, pushing the object data to an output data store.
46. The method of claim 45, further comprising: receiving a request for the prefetch object from the client at the prefetch server subsequent to pushing the object data to the output data store; and satisfying the request by pushing the object data from the output data store to the client.
47. The method of claim 43, further comprising: compressing the object data into compressed object data at the accumulator.
48. The method of claim 47, wherein checking the accumulated size of the accumulated object data comprises checking the accumulated size of the compressed object data.
49. The method of claim 47, further comprising: when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client, pushing the compressed object data to an output data store.
50. The method of claim 47, further comprising: when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client: accumulating additional object data in the accumulator at the prefetch server, wherein compressing the object data comprises compressing the additional object data into the compressed object data at the accumulator; and pushing the compressed object data to an output data store.
51. The method of claim 43, further comprising: determining, at the prefetch server, a threshold value indicating a threshold size of the prefetch object, such that the cost of pushing the object data to the client is substantially equal to the benefit of pushing the object data to the client when the accumulated size of the accumulated data is substantially equal to the threshold size.
52. The method of claim 51, wherein determining whether the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client comprises: determining whether the accumulated size of the accumulated object data exceeds the threshold value.
53. The method of claim 51, wherein the threshold value is determined as a function of at least one of: a probability of use of the prefetch object by the client; a communication time relating to communication of the prefetch object between the content server and the prefetch server; or a communication time relating to communication of the prefetch object between the prefetch server and the client.
54. The method of claim 51, wherein the threshold value is determined such that:
S = [P * (TRT + TPF) * B] / (1 - P),
wherein S indicates the threshold value, P indicates a probability of use of the prefetch object by the client, B indicates a bandwidth of the communication link of the communications system between the prefetch server and the client, TRT indicates a round-trip time for communicating the prefetch object between the prefetch server and the client, and TPF indicates a prefetch time for downloading the prefetch object from the content server to the prefetch server.
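The threshold of claim 54 can be evaluated numerically. The function name and the example figures below are illustrative assumptions, not values from the patent.

```python
def prefetch_size_threshold(p_use, t_rt, t_pf, bandwidth_bytes_per_s):
    # S = [P * (TRT + TPF) * B] / (1 - P): the accumulated object size at
    # which the cost of pushing the prefetched object over the link equals
    # its expected benefit; larger objects would trigger a prefetch abort.
    return p_use * (t_rt + t_pf) * bandwidth_bytes_per_s / (1.0 - p_use)

# Illustrative numbers: a 50% probability of use, a 0.6 s round-trip time,
# a 0.2 s prefetch time, and a 125,000 byte/s link give
#   S = 0.5 * 0.8 * 125000 / 0.5 = 100,000 bytes.
```

Intuitively, a high probability of use P or a long round trip TRT raises the threshold (pushing speculatively is worth more), while as P approaches zero the threshold shrinks toward zero and almost any accumulated object is aborted.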
55. The method of claim 43, wherein: the proxy server is in communication with the content server over a first communication link; the proxy server is in communication with the client over a second communication link; and latency of the first communication link is substantially low relative to latency of the second communication link.
56. A system for handling object data as part of a prefetch operation in a communications system, the system comprising: a proxy server located at a server-side node of the communications system, the proxy server being communicatively coupled with a content server and with a client located at a client-side node of the communications system, the proxy server comprising: an accumulator configured to accumulate object data of a prefetch object; and a prefetch server, communicatively coupled with the accumulator and configured to: download the object data from the content server to the accumulator; check an accumulated size of the accumulated object data; determine, as a function of the accumulated size of the accumulated object data, whether the cost of pushing the object data to the client over a communication link of the communications system exceeds the benefit of pushing the object data to the client over the communication link of the communications system; and when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client, abort the prefetch operation.
57. The system of claim 56, the prefetch server being further configured to: push the object data to the client when the cost of pushing the object data to the client does not exceed the benefit of pushing the object data to the client.
58. The system of claim 56, further comprising: an output data store, communicatively coupled with the proxy server, wherein the prefetch server is further configured to push the object data to the output data store when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client.
59. The system of claim 58, the prefetch server being further configured to: receive a request for the prefetch object from the client subsequent to pushing the object data to the output data store; retrieve the object data from the output data store; and push the object data from the output data store to the client in response to the request.
60. The system of claim 56, further comprising: a prefetch compressor, communicatively coupled with the accumulator and configured to compress the object data into compressed object data at the accumulator.
61. The system of claim 56, the prefetch server being further configured to: determine a threshold value indicating a threshold size of the prefetch object, such that the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client when the accumulated size of the accumulated data exceeds the threshold size.
62. A machine-readable medium for handling object data as part of a prefetch operation in a communications system, the machine-readable medium having instructions stored thereon which, when executed by a machine, cause the machine to perform steps comprising: commencing downloading of object data of the prefetch object from a content server communicatively coupled with the machine; accumulating the object data in an accumulator; checking an accumulated size of the accumulated object data; determining, as a function of the accumulated size of the accumulated object data, whether the cost of pushing the object data to a client communicatively coupled with the machine over a communication link of the communications system exceeds the benefit of pushing the object data to the client over the communication link of the communications system; and when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client, aborting the prefetch operation.
63. The machine-readable medium of claim 62, the instructions stored thereon, when executed by the machine, causing the machine to perform steps further comprising: when the cost of pushing the object data to the client does not exceed the benefit of pushing the object data to the client, pushing the object data to the client.
64. The machine-readable medium of claim 62, the instructions stored thereon, when executed by the machine, causing the machine to perform steps further comprising: when the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client, pushing the object data to an output data store communicatively coupled with the machine.
65. The machine-readable medium of claim 62, the instructions stored thereon, when executed by the machine, causing the machine to perform steps further comprising: compressing the object data into compressed object data at the accumulator.
66. The machine-readable medium of claim 62, the instructions stored thereon, when executed by the machine, causing the machine to perform steps further comprising: determining a threshold value indicating a threshold size of the prefetch object, such that the cost of pushing the object data to the client exceeds the benefit of pushing the object data to the client when the accumulated size of the accumulated data exceeds the threshold size.
67. A method for prefetching domain name server (DNS) entries in a communications system, the method comprising: receiving response data in response to a request for a content set, the content set comprising a plurality of content objects, each content object associated with a network location that is remote over the communications system; determining when the response data comprises a DNS prefetch response indicating prefetching of DNS information corresponding to the network location associated with a prefetch object, the prefetch object being one of the plurality of content objects; when the response data comprises the DNS prefetch response, establishing a prefetch channel configured to receive the DNS information over the communications system as indicated by the DNS prefetch response; intercepting a DNS lookup request associated with the prefetch object; and locally satisfying the DNS lookup request using the DNS information.
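The client-side flow of claim 67 — notice a DNS prefetch indication in the response data, receive the pushed entries over a prefetch channel, and satisfy later lookups locally — can be sketched as follows. The names (`handle_response`, `resolve`, `open_channel`) and the dict-based local store are illustrative assumptions.

```python
def handle_response(response, dns_store, open_channel):
    # If the response data carries a DNS prefetch indication, establish the
    # prefetch channel and store each pushed host -> IP entry locally.
    if "dns_prefetch" in response:
        for host, ip in open_channel(response["dns_prefetch"]):
            dns_store[host] = ip

def resolve(host, dns_store, remote_lookup):
    # Intercept the DNS lookup: satisfy it locally from prefetched entries,
    # falling back to the remote DNS only on a miss.
    return dns_store[host] if host in dns_store else remote_lookup(host)
```

This saves a round trip per hostname on the high-latency link: by the time the browser asks for the IP of an embedded object's host, the answer has (ideally) already arrived over the prefetch channel.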
68. The method of claim 67, further comprising: after intercepting the DNS lookup request, determining that the DNS lookup request can be satisfied with the DNS information still being received over the prefetch channel; and waiting for the DNS information to be fully received prior to locally satisfying the DNS lookup request using the DNS information.
69. The method of claim 67, wherein at least the receiving step and the intercepting step are implemented using a proxy client.
70. The method of claim 67, further comprising: storing the DNS information received over the prefetch channel in a local data store.
71. The method of claim 67, wherein the DNS lookup request is issued to a remote DNS.
72. The method of claim 67, wherein the prefetch channel is a network socket.
73. The method of claim 67, wherein the request for the content set is received in response to selection of a link using a web browser.
74. The method of claim 67, wherein the DNS information comprises an Internet protocol (IP) address corresponding to the network location associated with the prefetch object.
75. The method of claim 67, wherein the request for the content set is received in response to selection of a link using a web browser.
76. The method of claim 67, wherein the response data is an HTML response or an HTTP response.
77. The method of claim 67, wherein the content set is a webpage.
78. A system for handling prefetching of domain name server (DNS) entries at a client side of a communications system, the system comprising: a response processing module, communicatively coupled with and local to a client machine, and configured to: receive response data in response to a request for a content set from the client machine, the content set comprising a plurality of content objects, each content object associated with a network location that is remote over the communications system; and determine when the response data comprises a DNS prefetch response indicating prefetching of DNS information corresponding to the network location associated with a prefetch object, the prefetch object being one of the plurality of content objects; and a DNS prefetch module, communicatively coupled with the response processing module and the client machine, and configured to: when the response data comprises the DNS prefetch response, establish a prefetch channel configured to receive the DNS information from a server side of the communications network as indicated by the DNS prefetch response; intercept a DNS lookup request associated with the prefetch object from the client machine; and return a DNS lookup response to the client machine in satisfaction of the DNS lookup request using the DNS information.
79. The system of claim 78, wherein the DNS prefetch module is further configured to: after intercepting the DNS lookup request, determine that the DNS lookup request can be satisfied with the DNS information still being received over the prefetch channel; and wait for the DNS information to be fully received prior to returning the DNS lookup response.
80. The system of claim 78, further comprising: a data store, communicatively coupled with the response processing module, and configured to store the DNS information received over the prefetch channel.
81. The system of claim 78, wherein the DNS lookup request is issued by the client machine to a remote DNS.
82. The system of claim 78, wherein the DNS information comprises an Internet protocol (IP) address corresponding to the network location associated with the prefetch object.
83. A machine-readable medium for handling prefetching of domain name server (DNS) entries in a communications system, the machine-readable medium having instructions stored thereon which, when executed by a machine, cause the machine to perform steps comprising: receiving response data in response to a request for a content set, the content set comprising a plurality of content objects, each content object associated with a network location that is remote over the communications system; determining when the response data comprises a DNS prefetch response indicating prefetching of DNS information corresponding to the network location associated with a prefetch object, the prefetch object being one of the plurality of content objects; when the response data comprises the DNS prefetch response, establishing a prefetch channel configured to receive the DNS information over the communications network as indicated by the DNS prefetch response; intercepting a DNS lookup request associated with the prefetch object; and locally satisfying the DNS lookup request using the DNS information.
84. The machine-readable medium of claim 83, the instructions stored thereon, when executed by the machine, causing the machine to perform steps further comprising: after intercepting the DNS lookup request, determining that the DNS lookup request can be satisfied with the DNS information still being received over the prefetch channel; and waiting for the DNS information to be fully received prior to locally satisfying the DNS lookup request using the DNS information.
85. The machine-readable medium of claim 83, wherein the machine is configured as a proxy client.
86. The machine-readable medium of claim 83, the instructions stored thereon, when executed by the machine, causing the machine to perform steps further comprising: storing the DNS information received over the prefetch channel in a data store communicatively coupled with the machine.
EP10700649A 2009-01-12 2010-01-12 Web optimization Withdrawn EP2386164A2 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US14393309P 2009-01-12 2009-01-12
US12/571,288 US20100180005A1 (en) 2009-01-12 2009-09-30 Cache cycling
US12/571,281 US20100180082A1 (en) 2009-01-12 2009-09-30 Methods and systems for implementing url masking
US12/619,095 US8171135B2 (en) 2007-07-12 2009-11-16 Accumulator for prefetch abort
PCT/US2010/020795 WO2010081160A2 (en) 2009-01-12 2010-01-12 Web optimization

Publications (1)

Publication Number Publication Date
EP2386164A2 true EP2386164A2 (en) 2011-11-16

Family

ID=44773876

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10700649A Withdrawn EP2386164A2 (en) 2009-01-12 2010-01-12 Web optimization

Country Status (3)

Country Link
EP (1) EP2386164A2 (en)
AU (1) AU2010203401B2 (en)
WO (1) WO2010081160A2 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8438312B2 (en) 2009-10-23 2013-05-07 Moov Corporation Dynamically rehosting web content
US7970940B1 (en) * 2009-12-22 2011-06-28 Intel Corporation Domain name system lookup latency reduction
EP2495670B1 (en) 2010-11-29 2019-08-07 Hughes Network Systems, LLC Computer networking system and method with javascript execution for pre-fetching content from dynamically-generated url and javascript injection to modify date or random number calculation
EP2552082B1 (en) 2011-07-29 2018-10-31 Deutsche Telekom AG Favourite web site acceleration method and system
RU2689439C2 (en) 2014-05-13 2019-05-28 Опера Софтвэар Ас Improved performance of web access
FR3027173B1 (en) * 2014-10-14 2017-11-03 Thales Sa ARCHITECTURE OF A TELECOMMUNICATION NETWORK
US10574631B2 (en) 2015-05-11 2020-02-25 Finjan Mobile, Inc. Secure and private mobile web browser
EP3968181A1 (en) * 2015-08-28 2022-03-16 Viasat, Inc. Systems and methods for prefetching dynamic urls
WO2018080819A1 (en) * 2016-10-24 2018-05-03 Finjan Mobile, Inc. Secure and private mobile web browser
US11232168B1 (en) * 2018-11-13 2022-01-25 Introspective Analytics Inc. Digital advertising optimization
US11734381B2 (en) * 2021-12-07 2023-08-22 Servicenow, Inc. Efficient downloading of related documents
US11729292B1 (en) 2022-09-02 2023-08-15 International Business Machines Corporation Automated caching and cache busting

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6553411B1 (en) * 1999-05-18 2003-04-22 International Business Machines Corporation System and method for cache acceleration
US7103714B1 (en) * 2001-08-04 2006-09-05 Oracle International Corp. System and method for serving one set of cached data for differing data requests
US7437438B2 (en) * 2001-12-27 2008-10-14 Hewlett-Packard Development Company, L.P. System and method for energy efficient data prefetching
US7130890B1 (en) * 2002-09-04 2006-10-31 Hewlett-Packard Development Company, L.P. Method and system for adaptively prefetching objects from a network
US7953820B2 (en) * 2002-09-11 2011-05-31 Hughes Network Systems, Llc Method and system for providing enhanced performance of web browsing
US20050210121A1 (en) * 2004-03-22 2005-09-22 Qualcomm Incorporated Satellite anticipatory bandwidth acceleration
GB2425194A (en) * 2005-04-15 2006-10-18 Exponetic Ltd Tracking user network activity using a client identifier
US20060294223A1 (en) * 2005-06-24 2006-12-28 Microsoft Corporation Pre-fetching and DNS resolution of hyperlinked content
US7584294B2 (en) * 2007-03-12 2009-09-01 Citrix Systems, Inc. Systems and methods for prefetching objects for caching using QOS

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2010081160A2 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8811952B2 (en) 2002-01-08 2014-08-19 Seven Networks, Inc. Mobile device power management in data synchronization over a mobile network with or without a trigger notification
US8839412B1 (en) 2005-04-21 2014-09-16 Seven Networks, Inc. Flexible real-time inbox access
US8761756B2 (en) 2005-06-21 2014-06-24 Seven Networks International Oy Maintaining an IP connection in a mobile network
US8805425B2 (en) 2007-06-01 2014-08-12 Seven Networks, Inc. Integrated messaging
US8966053B2 (en) 2007-07-12 2015-02-24 Viasat, Inc. Methods and systems for performing a prefetch abort operation for network acceleration
US11095494B2 (en) 2007-10-15 2021-08-17 Viasat, Inc. Methods and systems for implementing a cache model in a prefetching system
US9002828B2 (en) 2007-12-13 2015-04-07 Seven Networks, Inc. Predictive content delivery
US8862657B2 (en) 2008-01-25 2014-10-14 Seven Networks, Inc. Policy based content service
US8799410B2 (en) 2008-01-28 2014-08-05 Seven Networks, Inc. System and method of a relay server for managing communications and notification between a mobile device and a web access server
US9049179B2 (en) 2010-07-26 2015-06-02 Seven Networks, Inc. Mobile network traffic coordination across multiple applications
US8838783B2 (en) 2010-07-26 2014-09-16 Seven Networks, Inc. Distributed caching for resource and mobile network traffic management
GB2500333B (en) * 2010-07-26 2014-10-08 Seven Networks Inc Mobile application traffic optimization
US9043433B2 (en) 2010-07-26 2015-05-26 Seven Networks, Inc. Mobile network traffic coordination across multiple applications
US8843153B2 (en) 2010-11-01 2014-09-23 Seven Networks, Inc. Mobile traffic categorization and policy for network use optimization while preserving user experience
US8782222B2 (en) 2010-11-01 2014-07-15 Seven Networks Timing of keep-alive messages used in a system for mobile network resource conservation and optimization
US8934414B2 (en) 2011-12-06 2015-01-13 Seven Networks, Inc. Cellular or WiFi mobile traffic optimization based on public or private network destination
US8868753B2 (en) 2011-12-06 2014-10-21 Seven Networks, Inc. System of redundantly clustered machines to provide failover mechanisms for mobile traffic management and network resource conservation
US9009250B2 (en) 2011-12-07 2015-04-14 Seven Networks, Inc. Flexible and dynamic integration schemas of a traffic management system with various network operators for network traffic alleviation
US8812695B2 (en) 2012-04-09 2014-08-19 Seven Networks, Inc. Method and system for management of a virtual network connection without heartbeat messages
US8775631B2 (en) 2012-07-13 2014-07-08 Seven Networks, Inc. Dynamic bandwidth adjustment for browsing or streaming activity in a wireless network based on prediction of user behavior when interacting with mobile applications
US8874761B2 (en) 2013-01-25 2014-10-28 Seven Networks, Inc. Signaling optimization in a wireless network for traffic utilizing proprietary and non-proprietary protocols
US8750123B1 (en) 2013-03-11 2014-06-10 Seven Networks, Inc. Mobile device equipped with mobile network congestion recognition to make intelligent decisions regarding connecting to an operator network
US9065765B2 (en) 2013-07-22 2015-06-23 Seven Networks, Inc. Proxy server associated with a mobile carrier for enhancing mobile traffic management in a mobile network

Also Published As

Publication number Publication date
AU2010203401B2 (en) 2014-04-17
WO2010081160A3 (en) 2010-12-16
WO2010081160A2 (en) 2010-07-15
AU2010203401A1 (en) 2011-07-28

Similar Documents

Publication Publication Date Title
AU2010203401B2 (en) Web optimization
US20100180082A1 (en) Methods and systems for implementing url masking
US11916990B2 (en) Content set based deltacasting
US11777654B2 (en) Transport protocol for anticipatory content
US10972573B1 (en) Browser optimization through user history analysis
US9456050B1 (en) Browser optimization through user history analysis
US8010705B1 (en) Methods and systems for utilizing delta coding in acceleration proxy servers
US9613158B1 (en) Cache hinting systems
Armstrong Just-In-Time Push Prefetching: Accelerating the Mobile Web

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20110812

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

DAX Request for extension of the european patent (deleted)

17Q First examination report despatched

Effective date: 20120514

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20121126