US20150019678A1 - Methods and Systems for Caching Content at Multiple Levels - Google Patents

Methods and Systems for Caching Content at Multiple Levels

Info

Publication number
US20150019678A1
Authority
US
United States
Prior art keywords
cache
content
byte
cache layer
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/464,638
Inventor
Chris King
Steve Mullaney
Jamshid Mahdavi
Ravikumar Venkata Duvvuri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CA Inc
Original Assignee
Blue Coat Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Blue Coat Systems, Inc. filed Critical Blue Coat Systems, Inc.
Priority to US14/464,638 priority Critical patent/US20150019678A1/en
Publication of US20150019678A1 publication Critical patent/US20150019678A1/en
Assigned to JEFFERIES FINANCE LLC, AS THE COLLATERAL AGENT reassignment JEFFERIES FINANCE LLC, AS THE COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BLUE COAT SYSTEMS, INC.
Assigned to BLUE COAT SYSTEMS, INC. reassignment BLUE COAT SYSTEMS, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: JEFFERIES FINANCE LLC
Assigned to SYMANTEC CORPORATION reassignment SYMANTEC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BLUE COAT SYSTEMS, INC.
Assigned to CA, INC. reassignment CA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SYMANTEC CORPORATION
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/2842
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0813Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure
    • G06F12/0897Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682Policies or rules for updating, deleting or replacing the stored data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0875Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/15Use in a specific computing environment
    • G06F2212/154Networked environment

Definitions

  • the present invention relates to systems and methods for caching within a network and, more specifically, to techniques for combining caching operations at multiple levels (e.g., object levels and byte levels) within a single appliance.
  • object is used to refer to logical entities such as images or other multimedia data, such as animation, audio (such as streaming audio), movies, video (such as streaming video), program fragments, such as Java, Javascript, or ActiveX, or Web documents.
  • objects are relatively large logical entities.
  • “object caching” discussed above is not the only form of caching available today.
  • “Byte caching” or “stream caching” is an optimization technique in which information at a level below that of entire objects is cached. These cached bytes or streams are then associated with tokens so that when identical byte/stream patterns are observed in newly requested content, the byte/stream information is replaced by the token. Hence, if the byte/stream patterns repeat often enough, significant bandwidth savings can be achieved using these transmission optimizations.
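The substitution described in this bullet can be sketched roughly as follows (the fixed chunk size, integer tokens, and in-memory dictionary are illustrative assumptions, not the patent's actual encoding):

```python
# Hypothetical sketch of byte/stream caching: segments seen before are
# replaced by short integer tokens; new segments are sent raw and recorded.
CHUNK = 8  # assumed fixed segment size, purely for illustration

def tokenize(data: bytes, dictionary: dict) -> list:
    """Replace previously seen CHUNK-sized segments with integer tokens."""
    out = []
    for i in range(0, len(data), CHUNK):
        segment = data[i:i + CHUNK]
        if segment in dictionary:
            out.append(("TOKEN", dictionary[segment]))  # repeated pattern: send token
        else:
            dictionary[segment] = len(dictionary)       # first sighting: remember it
            out.append(("RAW", segment))                # ...and send the bytes
    return out
```

If the same byte patterns recur often enough, most of a later transmission collapses into tokens, which is where the bandwidth savings come from.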
  • An embodiment of the present invention provides a cache having an object cache layer and a byte cache layer, each configured to store information to storage devices included in the cache. Further, an application proxy layer configured to identify content that should not be cached by either (or both) of the object cache layer and/or the byte cache layer may also be included. For example, the application proxy layer may be configured to pass content not cacheable at the object cache layer to the byte cache layer.
  • a further embodiment of the invention involves receiving content from a content source and caching said content first at an object cache layer of a cache and next at a byte cache layer of the cache so as to eliminate repeated strings present within the content after caching at the object cache layer.
  • Prior to caching the content at the object cache layer, the content may be transformed from a first format to a second format, for example from an encrypted data format to a decrypted data format.
  • the content may be examined for compliance with one or more policies, for example policy checks performed remotely from the cache.
  • Still another embodiment of the present invention provides a system that includes a first object cache communicatively coupled to a second object cache via a transport and signaling channel made up of reciprocal byte cache layers.
  • An application proxy layer may be distributed between platforms supporting the first and second object cache.
  • the first object cache may be instantiated in a cache appliance, while in other cases the first object cache may be instantiated as a thin client executing on a computer platform.
  • policy-based decisions are implemented at an application proxy layer associated with one or more of the first object cache and the second object cache. These policy-based decisions may be based on configured policy information, heuristics or other algorithms, and may include decisions concerning what information is or is not cached at the reciprocal byte cache level. For example, the policy-based decisions may include decisions concerning personally identifying information of a user.
  • the byte cache layers may be configured to compress and decompress contents of at least one of the object caches and may include a thinning mechanism whereby less popular data stored in the byte cache layers are removed over time.
  • the byte cache layers may also be configured to store only byte patterns smaller than a threshold size.
  • Yet a further embodiment of the invention provides a cache made up of a multi-level caching architecture in which content received from a content source is cached at multiple protocol stack levels according to its cacheability at each such layer.
  • the byte cache layers of each of the first and second cache may store common strings with their respective tokens.
  • FIG. 1 illustrates a cache configured with an application proxy layer, an object cache layer and a byte cache layer in accordance with an embodiment of the present invention
  • FIGS. 2A-2D illustrate various computer systems having pairs of caches each configured with object cache layers and byte cache layers in accordance with an embodiment of the present invention
  • FIG. 3 illustrates in further detail operations performed at the various caching layers of the caches illustrated in FIGS. 2A-2D ;
  • FIG. 4 illustrates a split tunnel deployment of caches configured in accordance with embodiments of the present invention.
  • a single cache combines one or more application proxies, an object cache layer and a byte cache layer.
  • the application proxies are logical entities that understand the protocols over which the application objects are communicated or delivered (e.g., HTTP, HTTPS, CIFS, FTP, RTSP/RTP, etc.). Consequently, these application proxies can identify application object boundaries and make use of the object cache accordingly.
  • the application proxy can still take advantage of the byte cache (e.g., through custom or socket pair application programming interfaces) so that content which cannot or should not be cached at the object level may instead be cached at the byte or stream level.
  • the present invention provides the benefits of application-level object caching, including the ability to offload demand on content sources and minimizing latency for cache hits, as well as the benefits of byte caching, to reduce the amount of data which must be transferred over a communication path.
  • Byte caching can also offer benefits with respect to certain types of otherwise non-cacheable content, which an application-level cache can usually do little to accelerate.
  • the appliance may incorporate further optimization techniques, such as intra-stream compression, predictive caching and policy-based content filtering.
  • various embodiments of the present invention may be implemented with the aid of computer-implemented processes or methods (a.k.a. programs or routines) that may be rendered in any computer software language including, without limitation, C#, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ and the like.
  • the present invention can be implemented with an apparatus to perform the operations described herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer, selectively activated or reconfigured by a computer program stored (permanently or temporarily, e.g., in the case of a client downloaded on-demand) in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
  • Cache 2 may be embodied within a stand-alone appliance or, in some cases, may be instantiated (at least in part) as a client (or thin client) hosted on a personal computer or a server.
  • Cache 2 includes one or more application proxies 4 a - 4 n , an object cache layer 6 and a byte cache layer 8 .
  • Object cache layer 6 provides for storage of application objects.
  • In response to a request by a client for an object (e.g., a file, document, image, etc.), cache 2 intercepts the request and checks to see if it has a fresh copy of the requested object. If so, cache 2 responds by sending the cached object to the client; otherwise it relays the request to another content source (e.g., a server identified by or in response to the client's request). The response from the server is then cached for future requests from clients.
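The hit/miss flow just described can be sketched as follows (the TTL-based freshness check and `fetch_fn`, which stands in for relaying the request to the origin, are assumptions for illustration):

```python
import time

class ObjectCache:
    """Minimal sketch of an object cache's request flow."""
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (object_bytes, stored_at)

    def get(self, key, fetch_fn):
        entry = self.store.get(key)
        if entry is not None:
            obj, stored_at = entry
            if time.time() - stored_at < self.ttl:  # fresh copy: serve from cache
                return obj
        obj = fetch_fn(key)                          # miss/stale: relay to content source
        self.store[key] = (obj, time.time())         # cache response for future requests
        return obj
```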
  • Such application-level or object caches are generally used to offload the burden on origin servers and other content sources and to improve the time required to deliver requested content to clients.
  • Object caches are sometimes referred to as application-level caches because they terminate application-level communications. In the present invention, however, the termination of the application-level communications is preferably handled by the application proxies 4 a - 4 n . This allows for protocol-by-protocol optimization to remove application-specific protocol deficiencies, such as the chattiness of the protocol (e.g., as occurs with MAPI and CIFS), sequencing of messages, the frequency of short messages, etc. Moreover, in cases where application-level objects may not be cacheable an application proxy may still be able to offer other services to enhance the delivery of data through the network.
  • Byte cache (or stream cache) 8 operates at a much lower communication level (e.g., typically the Internet protocol (IP) or transmission control protocol (TCP) level) to store individual segments of data in a process called “dictionary compression”.
  • byte or stream dictionaries can be used to replace the actual data with representative tokens, thus reducing the size of information block (more specifically the number of bytes) to be transmitted.
  • Each time data needs to be sent over the WAN link, it is scanned for duplicate segments in the cache. If any duplicates are found, the duplicate data (e.g., in some embodiments up to 64 KB) is removed from the byte sequence, and a token and a reference to the cache location and length are inserted (e.g., in one embodiment this is a 14-byte package).
  • the token and the reference are removed from the byte sequence and the original data is inserted by reading from the byte cache, thus creating a byte sequence identical to the original byte sequence.
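The receiving side's reconstruction might look like the following sketch (the `(offset, length)` token format is an assumption; the text only says the reference names a cache location and length):

```python
def detokenize(sequence, byte_cache: bytes) -> bytes:
    """Rebuild the original stream: RAW items pass through; TOKEN items are
    resolved by reading (offset, length) out of the local byte cache."""
    out = bytearray()
    for kind, value in sequence:
        if kind == "RAW":
            out += value
        else:                       # ("TOKEN", (offset, length))
            offset, length = value
            out += byte_cache[offset:offset + length]
    return bytes(out)
```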
  • the object cache layer 6 enables or disables compression at the byte cache layer 8 based on whether content is known to be compressible or not compressible. Such determinations may be made at the application proxy layer.
  • the application proxy layer may be distributed between two different platforms supporting reciprocal object cache layers. This allows for improved efficiency of the application proxy while still obtaining the benefits of the byte caching channel between the split proxies.
  • the present invention is able to combine the benefits of each, resulting in acceleration of enterprise applications and reduction of WAN bandwidth requirements.
  • While object caching offers a number of benefits (such as server offload and reduced latency), it fails to accelerate content delivery on cache misses and when dealing with non-cacheable content.
  • Because object caching is application specific, it cannot be used with all communication protocols. Combining object caching with byte caching ameliorates these shortcomings.
  • byte caching operates with any communication protocol, is able to handle caching of dynamic and other types of otherwise non-cacheable content, and can be effective across different protocols (e.g., byte caching will operate even if the same file were first downloaded via the Common Internet File System (CIFS) and later via the Hypertext Transfer Protocol (HTTP)).
  • an object cache allows for true server offload and is able to cache decisions of external processes, which can later eliminate the need to repeat these time-consuming operations.
  • Such external processes may include virus scanning, examining content for compliance with network or business policies, and so on. The results of such policy evaluations may also be cached for later evaluation or use.
  • object caches can be used to enforce various types of content policies, for example allowing some content to pass to a requesting client while denying other requests.
  • caches configured in accordance with embodiments of the present invention may also provide “intra-stream compression” using conventional data compression technologies (e.g., Gzip). That is, within a single stream short histories thereof may be cached and used to eliminate redundant data. Typically, such technologies cannot operate across stream boundaries and so are distinct from byte caching as that term is used in connection with the present invention.
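A sketch of intra-stream compression using a per-stream context (zlib stands in here for the Gzip-style compressor mentioned above; the chunked interface is an assumption):

```python
import zlib

def compress_stream(chunks):
    """Compress one stream with its own context: redundancy is eliminated
    only within this stream's short history, never across stream boundaries."""
    ctx = zlib.compressobj()
    out = [ctx.compress(chunk) for chunk in chunks]
    out.append(ctx.flush())
    return b"".join(out)

def decompress_stream(blob: bytes) -> bytes:
    return zlib.decompress(blob)
```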
  • Caches (whether appliances or otherwise) configured in accordance with embodiments of the present invention may also provide “predictive caching”. Caches configured in this manner examine content (e.g., HTML content as found in Web pages) being returned in response to a client request and identify embedded documents included therein. These embedded documents are then fetched and cached before the client's browser actually requests them (i.e., on the prediction that they will be needed). Techniques for performing such actions are discussed in U.S. Pat. No. 6,442,651, assigned to the assignee of the present invention and incorporated herein by reference. Such actions reduce the overall latency of a request.
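The first step of this predictive scheme, identifying embedded documents in returned HTML, might be sketched as follows (the regex is a crude illustration; a real implementation would parse the markup properly):

```python
import re

def embedded_urls(html: str):
    """Find src/href attributes naming embedded documents to prefetch."""
    return re.findall(r'(?:src|href)="([^"]+)"', html)
```

Each returned URL would then be fetched and cached before the client's browser actually requests it.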
  • Other predictive caching operations may make use of a content delivery network (CDN) for distributing content to caches on a schedule, ahead of points in time when a client will request such content.
  • the administrator or end user may have more specific control over how and when a cache is pre-populated with content in order to most effectively speed up later accesses.
  • CDN techniques may be used in conjunction with an object cache, byte cache, or both.
  • Still another technique for predictive caching is “read ahead”.
  • the cache may be configured to “read ahead” to request later blocks in a file/message before the client requests same. All of these forms of predictive caching can be combined with object caching and byte caching in order to improve the performance of these techniques in accordance with embodiments of the present invention.
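A read-ahead sketch (the block indexing and fixed prefetch window are assumptions, not the patent's mechanism):

```python
def read_with_read_ahead(file_blocks, requested_index, window=2, cache=None):
    """Serve the requested block and prefetch the next `window` blocks into
    the cache before the client asks for them."""
    if cache is None:
        cache = {}
    last = min(requested_index + 1 + window, len(file_blocks))
    for i in range(requested_index, last):
        if i not in cache:
            cache[i] = file_blocks[i]   # later blocks fetched ahead of demand
    return cache[requested_index], cache
```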
  • predictive caching may be used to store objects in the object-level cache, while in the case of MAPI, the read ahead may be used to populate the byte-level cache.
  • a system 10 includes a client 12 communicatively coupled via a network 14 to a cache 16 that is configured in accordance with the present invention.
  • Network 14 may be a local area network or, in some cases, may be a point-to-point connection, such as an Ethernet connection.
  • network 14 may be omitted altogether and functionality performed by cache 16 ′ may be instantiated within client 12 , for example as a proxy cache of a Web browser application or another application (e.g., a thin client) executing on client 12 .
  • For clarity, however, it is easier to treat cache 16 as a separate appliance in the remaining discussion.
  • cache 16 is communicatively coupled through a wide area network 18 to a second cache 20 .
  • Network 18 may be a single network or a network of networks, such as the Internet.
  • network 18 may be a private network (including, for example, a virtual private network established within a public network).
  • cache 16 may be communicatively coupled to cache 20 through a multitude of routers, switches, and/or other network communication devices. The details of such communication paths are not critical to the present invention and so they are represented simply as network 18 .
  • the caches 16 and 20 may be communicatively coupled through one or more gateways and/or firewalls, but these details are not critical to the present invention and so are not shown in detail.
  • Cache 20 is communicatively coupled via network 22 to a server (or other content source) 24 .
  • network 22 may be a local area network or, in some cases, may be a point-to-point connection, such as an Ethernet connection.
  • network 22 may be omitted altogether and functionality performed by cache 20 ′ may be instantiated within server 24 , for example as a proxy cache of a Web server application or other application executing on server 24 .
  • FIG. 2D shows an example where each cache component is instantiated as an application (e.g. a thin client), with one executing on the client 12 and the other on server 24 .
  • having a cache application 20 ′ executing on server 24 may, in some cases, be somewhat redundant but in other cases it provides significant advantages. For example, where the server itself needs to consult other data sources in order to respond to client requests, having such a cache may be particularly advantageous.
  • client and server refer to relationships between various computer-based devices, not necessarily to particular physical devices.
  • a “client” or “server” can include any of the following: (a) a single physical device capable of executing software which bears a client or server relationship with respect to a cache; (b) a portion of a physical device, such as a software process or set of software processes capable of executing on a physical device, which portion of the physical device bears a client or server relationship to a cache; or (c) a plurality of physical devices, or portions thereof, capable of cooperating to form a logical entity which bears a client or server relationship to a cache.
  • the phrases “client” and “server” therefore refer to such logical entities and not necessarily to particular physical devices.
  • networks 14 and 22 could be local area networks, wide area networks, or other more complicated networks, such as the Internet.
  • network 18 could be a wide area network or a local area network.
  • network 18 being a corporate intranet and network 22 being the Internet.
  • For purposes of understanding functions performed by caches 16 and 20 (or 16′ and/or 20′), the example below assumes that client 12 has made a request for content from server 24.
  • the reverse process where the client is sending data to the server would implicate somewhat reverse functionality (i.e., cache #1 and cache #2 may perform similar operations in the reverse data flow direction).
  • One difference that would be apparent in the reverse data flow direction is that one could cache “write” operations (say, for CIFS).
  • the caching of a write operation is somewhat different than a read. Data stored on a cached read is generally used by subsequent read operations. For a cached write, the data is not used by subsequent writes but rather by subsequent reads.
  • One significant benefit of the present invention is the ability to cache CIFS writes.
  • the requested content arrives first at cache 20 .
  • any one or more of the following operations, illustrated in FIG. 3, may be performed.
  • the output data stream from cache 20 is transmitted across network 18 to cache 16 , where some or all of the following operations may be performed:
  • socketpairs may (but need not) be used for communications between object cache and byte cache layers of a single cache appliance. Sockets typically define the communication paths to/from a network but in this case a pair of sockets are used to define communications between the various cache layers of a cache appliance. Socketpairs provide a convenient method to optionally insert processing layers without changing the input/output structure of application proxy code, and thus are beneficial for fast and error-free implementation of the techniques described herein.
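In Python terms, the idea looks like this (the layer names are illustrative; `socket.socketpair` is a standard facility):

```python
import socket

# A socketpair yields two connected endpoints, so a processing layer can sit
# between the application proxy and the byte cache layer while each side
# keeps its ordinary socket read/write code unchanged.
proxy_side, byte_cache_side = socket.socketpair()
proxy_side.sendall(b"request bytes")    # proxy writes as if to the network
received = byte_cache_side.recv(1024)   # the next layer reads the same bytes
proxy_side.close()
byte_cache_side.close()
```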
  • the application layer cache may be configured to identify portions of a stream which are “uninteresting” for byte caching.
  • a hinting mechanism may be used to allow the application layer cache to identify “protocol metadata” which should not be stored in the byte cache. This metadata is not likely to be repeated in a subsequent transmission of the same file, because it is protocol specific and may even be specific to a single transmission/connection. By using this hinting mechanism to identify the material which should not be cached, the overall operation of the byte cache layer is improved.
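One plausible shape for such a hint is a list of byte ranges flagged as protocol metadata (the range-based interface is an assumption; the text does not specify the hint format):

```python
def split_by_hints(data: bytes, metadata_ranges):
    """Separate bytes flagged as protocol metadata (not worth byte caching)
    from the payload that should flow into the byte cache."""
    cacheable, skipped = [], []
    pos = 0
    for start, end in sorted(metadata_ranges):
        if pos < start:
            cacheable.append(data[pos:start])
        skipped.append(data[start:end])   # connection-specific metadata
        pos = end
    if pos < len(data):
        cacheable.append(data[pos:])
    return b"".join(cacheable), b"".join(skipped)
```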
  • policy-based decisions e.g., decisions taken at the application proxy level based on configured policy information, heuristics or other algorithms, such as string matches, executed on the received content
  • the object level cache may be configured to mark such non-cacheable content so that it is not cached at the byte cache level.
  • Such operations are not feasible with conventional byte caches because such devices have no ability to determine the nature of the information being transmitted thereto.
  • a cache configured in accordance with the present invention may include both short-term storage (typically in the form of read/write memory) and longer-term storage (typically in the form of one or more hard disk drives which may be read from/written to).
  • Information received at the cache is usually first stored to memory and later transferred to disk (assuming it is to be preserved for a longer period).
  • One reason for this division of storage is that it typically takes longer to read from/write to disk than to/from memory and so in order to avoid losses of data due to read/write latencies, this two level storage technique is employed.
  • the optional intra-stream compression layer also requires memory resources if it is used.
  • content may be stored on disk both within the object cache layer and within the byte cache layer. This may (and often will) mean that the same information is stored twice. This is not necessarily a problem inasmuch as disks tend to be large (in terms of storage space) and relatively inexpensive. Nevertheless, the situation can be improved by using the byte cache to “compress” the contents of the object cache. This would allow information stored in the object cache layer to be much reduced in size when stored to disk.
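Using the byte cache to deduplicate the object cache's on-disk contents could be sketched as a chunk store keyed by fingerprint (the tiny chunk size and SHA-256 fingerprints are illustrative choices, not the patent's scheme):

```python
import hashlib

class DedupStore:
    """Objects are stored as lists of chunk fingerprints; identical chunks
    shared across objects are kept on disk only once."""
    def __init__(self, chunk_size: int = 4):
        self.chunk_size = chunk_size
        self.chunks = {}  # fingerprint -> chunk bytes, stored once

    def put(self, data: bytes):
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)
            refs.append(fp)
        return refs

    def get(self, refs):
        return b"".join(self.chunks[fp] for fp in refs)
```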
  • intra-stream compression requires that large amounts of memory be allocated to so-called “stream contexts”: sets of parameters and stream-specific options that modify or enhance the behavior of a stream.
  • one embodiment of the present invention stores a limited number of these contexts and re-uses them across multiple streams by migrating the contexts from one stream to the next.
  • a compression operation may be omitted where the data is determined to be poorly compressible (either by the application proxy determining same, or because the cache has computed the compression rate or factor during compression operations and determined that it does not meet a previously established threshold). This can not only save memory, but also improve CPU performance. Also, one may choose to migrate the compression contexts to disk during periods when they are not in use in order to save memory.
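A sketch of omitting compression for poorly compressible data, using a measured ratio against a previously established threshold (the 1.2 cutoff and zlib are assumptions):

```python
import zlib

def maybe_compress(data: bytes, min_ratio: float = 1.2):
    """Compress only when the achieved ratio meets the threshold; otherwise
    pass the data through untouched, saving memory and CPU."""
    compressed = zlib.compress(data)
    if len(data) / max(len(compressed), 1) >= min_ratio:
        return ("deflate", compressed)
    return ("identity", data)  # poorly compressible: skip compression
```

A production version would likely sample or rely on the application proxy's determination rather than compress everything before deciding, but the decision rule is the same.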
  • memory is also needed by the object cache layer (e.g., to store an object hash table and memory cache) and the byte cache layer (e.g., to store a fingerprint table that acts as an index into the byte cache).
  • the actual byte cache data is stored as a set of objects within the object cache, so the memory cache provided by the object cache layer is effectively used for both object cache data and byte cache data.
  • a “thinning” mechanism may be employed in order to further optimize the memory space allocated to the data cache's fingerprint table. For example, for less popular data stored by the byte cache, the associated entries in the fingerprint table may be thinned (i.e., removed) over time. The consequence, of course, is that more fingerprint entries are kept for popular byte streams and, therefore, searches of the byte cache are more likely to find matches. The net result is improved compression ratios overall. Similar techniques may be used at the object cache level, for example by employing a “least recently used” or other form of cache clean-up mechanism.
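Thinning the fingerprint table by popularity might look like the following (per-fingerprint hit counts and the keep fraction are assumptions for illustration):

```python
def thin_fingerprint_table(table, keep_fraction: float = 0.5):
    """Drop fingerprint entries for the least-popular byte patterns, so the
    table keeps more entries for popular streams and finds more matches."""
    keep = max(1, int(len(table) * keep_fraction))
    # table maps fingerprint -> (hit_count, cache_offset)
    ranked = sorted(table.items(), key=lambda kv: kv[1][0], reverse=True)
    return dict(ranked[:keep])
```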
  • a “split proxy” may be implemented.
  • the object cache layers at each of cache 16 and cache 20 may store all cacheable objects returned by the content source. For example, this may be required where the caches need to operate independently of one another when servicing some client requests. However, it is also possible to configure the two object cache layers to operate as two halves of the same object cache that just happen to be executing on different devices.
  • the split proxy concept allows some objects to be stored at cache 20 and other objects to be stored at cache 16 .
  • the byte caching layer is then used as a transport and signaling channel between the two halves. The most basic signaling done on this byte caching channel would be to detect that there is an “other half” and agree to operate in split proxy mode. Thereafter the two object caches may communicate with one another to decide which will store what objects as they are returned from content sources and also to determine whether one of the halves has a copy of a requested object that can be used to satisfy a current request.
  • a split proxy also allows for the processing, rather than just the storage, to be split.
  • the cache closest to the server may be tasked with all of the read ahead operations and all of the data may be sent in a more efficient form to the cache closest to the client, where protocol-based processing and other optimizations are performed.
  • This may include the most efficient form for sending the data without certain overhead that would otherwise be imposed by application layer protocols.
  • the data is still subject to the benefits of byte caching and, indeed, may be more “byte cacheable”.
  • a further optimization of the present invention concerns the sizes of various byte caching parameters. That is, in order to keep the size of the byte cache (i.e., the amount of memory and disk space that it consumes) to a manageable level, it is not feasible to cache every possible byte pattern observed during a communication session. At the same time, if the cached streams are too fragmented, long matches are prohibited and the efficiency of the cache is reduced. To balance these competing interests, one embodiment of the present invention provides for a threshold. Hits shorter than the threshold are included in the byte cache for future use (avoiding fragmenting the stream); hits longer than the threshold, however, are not included, in order to avoid the byte cache becoming too large. In some cases, application-level information may be used/evaluated in order to set the appropriate threshold.
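A minimal sketch of this threshold rule follows; the segment labels ("miss"/"hit") and the 4 KB default are illustrative assumptions, not values from the text.

```python
def bytes_appended_to_cache(segments, threshold=4096):
    """segments is a list of ("miss", length) or ("hit", length) spans
    seen while scanning a stream.  New ("miss") data is always cached;
    matched ("hit") data is re-cached only when the match is shorter
    than the threshold, keeping the stored stream contiguous without
    letting long duplicates inflate the cache."""
    total = 0
    for kind, length in segments:
        if kind == "miss" or length < threshold:
            total += length
    return total
```

Under this rule a stream consisting of a 100-byte miss, a 50-byte hit, and a 10,000-byte hit adds only 150 bytes to the cache: the short hit is re-stored to keep the stream unfragmented, while the long hit is referenced rather than duplicated.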
  • Different applications may have very different data characteristics. Even within a particular application, different files or request types may have different characteristics. For example, some applications and even particular types of files may have common byte patterns which are very long, while others may have much shorter common byte patterns. For this reason, individual application proxies may wish to control certain parameters related to byte caching in order to optimally store and find content in the byte cache. For applications or file types where repeated byte patterns are long, the application may wish to increase the threshold described in the previous paragraph. For applications or file types where the repeated byte patterns are always expected to be short, it may be desirable to decrease or even eliminate the threshold described in the previous paragraph. In addition, such applications or file types may also wish to have the byte cache produce more frequent index data in order to increase the likelihood of finding smaller repeated patterns.
  • This indexing is normally done by performing a computation on a small set of bytes (sometimes called a “shingle”); the length of the shingle is a lower bound on the size of repeated data which can be detected. For applications or file types which are expected to have very short repeated patterns, it may also be necessary to decrease the shingle size used when storing data of this type into the byte cache.
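The shingle computation above can be sketched as follows, assuming a CRC32 fingerprint and a low-bit mask to control index density; both are illustrative choices, as the text does not specify a fingerprint function.

```python
import zlib

def shingle_index(data, shingle_bytes=8, sample_mask=0x3):
    """Slide a fixed-size window (the "shingle") over the stream and
    fingerprint each position; the shingle length is the lower bound on
    the size of repeated data that can be detected.  Only fingerprints
    whose low bits clear sample_mask are kept, so a larger mask yields
    a sparser (smaller) index, while a mask of 0 indexes every position
    -- the denser indexing suggested for short-pattern data."""
    index = {}
    for i in range(len(data) - shingle_bytes + 1):
        fp = zlib.crc32(data[i:i + shingle_bytes])
        if fp & sample_mask == 0:
            index.setdefault(fp, i)   # remember first offset of shingle
    return index
```

Decreasing `shingle_bytes` lets shorter repeats be found, at the cost of more fingerprint computations; decreasing `sample_mask` produces more frequent index data, as discussed above for short-pattern applications.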
  • application proxies consider the protocol, file type, and other characteristics of the data, and choose optimal parameters for storing and retrieving that data within the byte cache.
  • a cache configured in accordance with the present invention may act as a read-write cache for byte caching and also for some application level proxies such as CIFS, etc.
  • the byte cache layer may in fact be a multi-level byte cache, in which different parameters are used for indexing each level. For example, a very large cache may have relatively sparse indexing information, while smaller caches may include more dense indexing information.
  • a cache 30 configured in accordance with the present invention may be deployed in a “split tunnel” configuration.
  • client 12 connects to cache 30 , which is situated at an Internet gateway of a corporate LAN, via network 14 .
  • one port of cache 30 may be coupled to further elements of the corporate network 32 (e.g., including a reciprocal cache 34 that provides object and byte cache layers, servers, other clients, etc.).
  • Another port of cache 30 is directly connected to the Internet (or other external network) 36 .
  • for traffic sent directly to the external network, only the object cache layer and perhaps the application proxies of cache 30 would provide benefit, inasmuch as there is no reciprocal byte cache layer to peer with.
  • nevertheless, certain application accelerations (i.e., those that benefit from object caching) could still be realized via this object cache layer, and applications accessed over the corporate network 32 could still make use of both the object cache and byte cache layers.

Abstract

A cache includes an object cache layer and a byte cache layer, each configured to store information to storage devices included in the cache appliance. An application proxy layer may also be included. In addition, the object cache layer may be configured to identify content that should not be cached by the byte cache layer, which itself may be configured to compress contents of the object cache layer. In some cases the contents of the byte cache layer may be stored as objects within the object cache.

Description

    RELATED APPLICATION
  • This application is a nonprovisional of, claims priority to and incorporates by reference U.S. Provisional Patent Application 60/743,750 filed 24 Mar. 2006.
  • FIELD OF THE INVENTION
  • The present invention relates to systems and methods for caching within a network and, more specifically, to techniques for combining caching operations at multiple levels (e.g., object levels and byte levels) within a single appliance.
  • BACKGROUND
  • In the context of desktop applications (e.g., office software or web browsers), a cache is a device located logically between a content source (typically an application server or Web server, though sometimes another cache) and one or more clients. Web pages, documents, images, movies, etc. (collectively known as “content”) stored by these content sources may be downloaded and displayed by the clients. The content can be displayed in the context of a Web browser executing on the client platform, or in the context of other application programs (e.g., audio/video players, document viewers, image viewers, etc.).
  • The content distributed by the various content sources may contain a variety of “objects”. In this context, the term object is used to refer to logical entities such as images or other multimedia data, such as animation, audio (such as streaming audio), movies, video (such as streaming video), program fragments, such as Java, Javascript, or ActiveX, or Web documents. Generally speaking, objects are relatively large logical entities.
  • As indicated above, a cache typically sits between the client and the content source and monitors transmissions therebetween. For example, if the client requests a Web page, the cache will see the request and check whether it stores a local copy thereof. If so, the cache will return that copy to the client. Otherwise, the cache will forward the request to the content source. As the content source returns the requested objects to the client, the cache keeps a copy for itself, which copy may then be used to service later requests for the object. Application caches thus reduce latency (it takes less time for a client to get an object from a nearby cache than from the original content source) and reduce network traffic (because each object is only retrieved from the content source once, or periodically if the object is subject to changes over time).
  • The “object caching” discussed above is not the only form of caching available today. “Byte caching” or “stream caching” is an optimization technique in which information at a level below that of entire objects is cached. These cached bytes or streams are then associated with tokens so that when identical byte/stream patterns are observed in newly requested content, the byte/stream information is replaced by the token. Hence, if the byte/stream patterns repeat often enough, significant bandwidth savings can be achieved using these transmission optimizations.
  • SUMMARY OF THE INVENTION
  • An embodiment of the present invention provides a cache having an object cache layer and a byte cache layer, each configured to store information to storage devices included in the cache. Further, an application proxy layer configured to identify content that should not be cached by either (or both) of the object cache layer and/or the byte cache layer may also be included. For example, the application proxy layer may be configured to pass content not cacheable at the object cache layer to the byte cache layer.
  • The byte cache layer may be configured to compress contents of the object cache layer, and the object cache layer may be configured to enable or disable compression at the byte cache layer based on whether content is known to be compressible or not compressible (e.g., as determined by the application proxy layer). The contents of the byte cache layer may be stored as objects within the object cache.
  • A further embodiment of the invention involves receiving content from a content source and caching said content first at an object cache layer of a cache and next at a byte cache layer of the cache so as to eliminate repeated strings present within the content after caching at the object cache layer. Prior to caching the content at the object cache layer, the content may be transformed from a first format to a second format, for example from an encrypted data format to a decrypted data format. Alternatively, or in addition, prior to caching the content at the object cache layer, the content may be examined for compliance with one or more policies, for example policy checks performed remotely from the cache.
  • Prior to compressing the content at the byte cache layer, the content may be transformed from a first format to a second format. Further, intra-stream compression of the output of the byte cache layer may be employed. Indeed, the intra-stream compressed output of the byte cache layer may also be transformed from one data format to another data format, for example from an unencrypted data format to an encrypted data format.
  • Another embodiment of the present invention involves receiving content from a content source, decompressing the content at a byte cache layer to produce expanded content, and transmitting the expanded content to a client along with previously cached objects from an object cache layer. The byte cache layer and the object cache layer are preferably included in a common cache. Moreover, the expanded content may be cached at the object cache layer. Prior to decompressing the content at the byte cache layer, the content may be transformed from a first data format to a second data format, for example from an encrypted (and/or compressed) data format to a decrypted (and/or decompressed) data format.
  • Still another embodiment of the present invention provides a system that includes a first object cache communicatively coupled to a second object cache via a transport and signaling channel made up of reciprocal byte cache layers. An application proxy layer may be distributed between platforms supporting the first and second object cache. In some cases, the first object cache may be instantiated in a cache appliance, while in other cases the first object cache may be instantiated as a thin client executing on a computer platform.
  • In various embodiments of the invention, policy-based decisions are implemented at an application proxy layer associated with one or more of the first object cache and the second object cache. These policy-based decisions may be based on configured policy information, heuristics or other algorithms, and may include decisions concerning what information is or is not cached at the reciprocal byte cache level. For example, the policy-based decisions may include decisions concerning personally identifying information of a user.
  • The byte cache layers may be configured to compress and decompress contents of at least one of the object caches and may include a thinning mechanism whereby less popular data stored in the byte cache layers are removed over time. The byte cache layers may also be configured to store byte patterns of only less than a threshold size.
  • Yet a further embodiment of the invention provides a cache made up of a multi-level caching architecture in which content received from a content source is cached at multiple protocol stack levels according to its cacheability at each such layer. The byte cache layers of each of the first and second caches may store common strings with their respective tokens.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which:
  • FIG. 1 illustrates a cache configured with an application proxy layer, an object cache layer and a byte cache layer in accordance with an embodiment of the present invention;
  • FIGS. 2A-2D illustrate various computer systems having pairs of caches each configured with object cache layers and byte cache layers in accordance with an embodiment of the present invention;
  • FIG. 3 illustrates in further detail operations performed at the various caching layers of the caches illustrated in FIGS. 2A-2D; and
  • FIG. 4 illustrates a split tunnel deployment of caches configured in accordance with embodiments of the present invention.
  • DETAILED DESCRIPTION
  • Described herein are methods and systems for caching content at multiple levels. Such techniques are useful in a variety of contexts, for example in accelerating traffic over bandwidth-constrained communication links. Inasmuch as many software applications are used in locations remote from where the applications are hosted, such acceleration can greatly improve application performance by reducing or eliminating latency issues.
  • In one embodiment of the present invention, a single cache combines one or more application proxies, an object cache layer and a byte cache layer. In this context, the application proxies are logical entities that understand the protocols over which the application objects are communicated or delivered (e.g., HTTP, HTTPS, CIFS, FTP, RTSP/RTP, etc.). Consequently, these application proxies can identify application object boundaries and make use of the object cache accordingly. Where the application objects are not cacheable, the application proxy can still take advantage of the byte cache (e.g., through custom or socket pair application programming interfaces) so that content which cannot or should not be cached at the object level may instead be cached at the byte or stream level. By doing so the present invention provides the benefits of application-level object caching, including the ability to offload demand on content sources and minimizing latency for cache hits, as well as the benefits of byte caching, to reduce the amount of data which must be transferred over a communication path. Byte caching can also offer benefits with respect to certain types of otherwise non-cacheable content, which an application-level cache can usually do little to accelerate. In addition, the appliance may incorporate further optimization techniques, such as intra-stream compression, predictive caching and policy-based content filtering.
  • Although discussed with reference to several illustrated embodiments, it is important to remember that the present invention should not be restricted thereby. That is, the scope of the invention is not intended to be limited to the examples presented below. Instead, the invention should only be measured in terms of the claims, which follow this description.
  • Moreover, various embodiments of the present invention may be implemented with the aid of computer-implemented processes or methods (a.k.a. programs or routines) that may be rendered in any computer software language including, without limitation, C#, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ and the like. In general, however, all of the aforementioned terms as used herein are meant to encompass any series of logical steps performed in a sequence to accomplish a given purpose.
  • In view of the above, it should be appreciated that some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computer science arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, it will be appreciated that throughout the description of the present invention, use of terms such as “processing”, “computing”, “calculating”, “determining”, “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • The present invention can be implemented with an apparatus to perform the operations described herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer, selectively activated or reconfigured by a computer program stored (permanently or temporarily, e.g., in the case of a client downloaded on-demand) in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
  • The algorithms and processes presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method. For example, any of the methods according to the present invention can be implemented in hard-wired circuitry, by programming a general-purpose processor or by any combination of hardware and software. One of ordinary skill in the art will immediately appreciate that the invention can be practiced with computer system configurations other than those described below, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics (e.g., mobile phones and the like), DSP devices, network PCs, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. The required structure for a variety of these systems will appear from the description below.
  • Referring now to FIG. 1, a hierarchical view of a cache 2 configured according to an embodiment of the present invention is illustrated. Cache 2 may be embodied within a stand-alone appliance or, in some cases, may be instantiated (at least in part) as a client (or thin client) hosted on a personal computer or a server. Cache 2 includes one or more application proxies 4 a-4 n, an object cache layer 6 and a byte cache layer 8.
  • Object cache layer 6 provides for storage of application objects. In response to a request by a client for an object (e.g., a file, document, image, etc.) cache 2 intercepts the request and checks to see if it has a fresh copy of the requested object. If so, cache 2 responds by sending the cached object to the client, otherwise it relays the request to another content source (e.g., a server identified by or in response to the client's request). The response from the server is then cached for future requests from clients.
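The request flow just described can be sketched as follows. The class name, the TTL-based freshness rule, and the `fetch` callback standing in for the upstream request are all illustrative assumptions.

```python
import time

class ObjectCache:
    """Serve a fresh local copy when one exists; otherwise relay the
    request to the content source via `fetch` and keep the response
    for future requests."""

    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self.store = {}            # url -> (object bytes, time cached)

    def get(self, url):
        hit = self.store.get(url)
        if hit is not None and time.time() - hit[1] < self.ttl:
            return hit[0]          # fresh copy: no trip to the source
        obj = self.fetch(url)      # miss or stale: ask the content source
        self.store[url] = (obj, time.time())
        return obj
```

On a second request for the same object the upstream fetch is skipped entirely, which is the server-offload and latency benefit described above.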
  • Such application-level or object caches are generally used to offload the burden on origin servers and other content sources and to improve the time required to deliver requested content to clients. Object caches are sometimes referred to as application-level caches because they terminate application-level communications. In the present invention, however, the termination of the application-level communications is preferably handled by the application proxies 4 a-4 n. This allows for protocol-by-protocol optimization to remove application-specific protocol deficiencies, such as the chattiness of the protocol (e.g., as occurs with MAPI and CIFS), sequencing of messages, the frequency of short messages, etc. Moreover, in cases where application-level objects may not be cacheable an application proxy may still be able to offer other services to enhance the delivery of data through the network.
  • Byte cache (or stream cache) 8, on the other hand, operates at a much lower communication level (e.g., typically the Internet protocol (IP) or transmission control protocol (TCP) level) to store individual segments of data in a process called “dictionary compression”. When data segments are repeated on a communication link, these byte or stream dictionaries can be used to replace the actual data with representative tokens, thus reducing the size of the information block (more specifically, the number of bytes) to be transmitted. A more detailed discussion of byte caching can be found in Neil T. Spring and David Wetherall, “A Protocol-Independent Technique for Eliminating Redundant Network Traffic”, Proc. ACM SIGCOMM (August 2000), incorporated herein by reference.
  • Byte caching caches traffic irrespective of application-level protocol, port, or IP address, on both ends of a WAN link. Each time data needs to be sent over the WAN link, it is scanned for duplicate segments in the cache. If any duplicates are found, the duplicate data (e.g., in some embodiments up to 64 KB) is removed from the byte sequence, and a token and a reference to the cache location and length are inserted (e.g., in one embodiment this is a 14-byte package). On the receiving end, the token and the reference are removed from the byte sequence and the original data is inserted by reading from the byte cache, thus creating a byte sequence identical to the original byte sequence.
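A toy model of this exchange follows, with two simplifying assumptions: both ends mirror an append-only cache, and matching is done on whole messages rather than arbitrary sub-segments. The on-the-wire token format (such as the 14-byte package mentioned above) is not modelled.

```python
class ByteCacheEnd:
    """One end of a byte-caching link.  Sender and receiver each hold
    an instance and keep their caches identical by applying the same
    updates on encode and decode."""

    def __init__(self):
        self.cache = bytearray()

    def encode(self, data):
        off = bytes(self.cache).find(data) if data else -1
        if off != -1:
            # Duplicate segment: send a token referencing the cache
            # location and length instead of the data itself.
            return ("ref", off, len(data))
        self.cache += data                  # new data: cache it, send literally
        return ("lit", data)

    def decode(self, msg):
        if msg[0] == "ref":
            _, off, length = msg
            return bytes(self.cache[off:off + length])
        self.cache += msg[1]                # mirror the sender's cache
        return msg[1]
```

On the second transmission of an already-seen segment, only the small reference token crosses the link, and the receiver reconstructs an identical byte sequence from its own copy of the cache.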
  • In some cases, the object cache layer 6 enables or disables compression at the byte cache layer 8 based on whether content is known to be compressible or not compressible. Such determinations may be made at the application proxy layer. In some cases, for example a split proxy configuration, the application proxy layer may be distributed between two different platforms supporting reciprocal object cache layers. This allows for improved efficiency of the application proxy while still obtaining the benefits of the byte caching channel between the split proxies.
  • By combining both object and byte caching techniques in a single cache, the present invention is able to combine the benefits of each, resulting in acceleration of enterprise applications and reduction of WAN bandwidth requirements. For example, while object caching alone offers a number of benefits (such as server offload and reduced latency), it fails to accelerate content delivery on cache misses and when dealing with non-cacheable content. Moreover, because object caching is application specific it cannot be used with all communication protocols. Combining object caching with byte caching ameliorates these shortcomings. For example, byte caching operates with any communication protocol, is able to handle caching of dynamic and other types of otherwise non-cacheable content, and can be effective across different protocols (e.g., byte caching will operate even if the same file were first downloaded via the Common Internet File System (CIFS) and later via the Hypertext Transfer Protocol (HTTP)).
  • Likewise, shortcomings of byte caching (for example, its inability to offload demand on a content source or cache application-level decisions of other, associated routines) can be solved (at least in part) through combination with an object cache. For example, an object cache allows for true server offload and is able to cache decisions of external processes, which can later eliminate the need to repeat these time-consuming operations. Such external processes may include virus scanning, examining content for compliance with network or business policies, and so on. The results of such policy evaluations may also be cached for later evaluation or use. In this way, object caches can be used to enforce various types of content policies, for example allowing some content to pass to a requesting client while denying other requests.
  • In addition to providing object-level and byte-level caching, caches configured in accordance with embodiments of the present invention may also provide “intra-stream compression” using conventional data compression technologies (e.g., Gzip). That is, within a single stream short histories thereof may be cached and used to eliminate redundant data. Typically, such technologies cannot operate across stream boundaries and so are distinct from byte caching as that term is used in connection with the present invention.
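For example, intra-stream compression can be sketched with zlib's streaming interface. The history window lives only for the lifetime of one stream, which is precisely what distinguishes this technique from byte caching as used here.

```python
import zlib

def compress_stream(chunks):
    """Compress successive chunks of a single stream; zlib's history
    window removes redundancy within the stream but is discarded when
    the stream ends, so it cannot operate across stream boundaries."""
    co = zlib.compressobj()
    parts = [co.compress(chunk) for chunk in chunks]
    parts.append(co.flush())
    return b"".join(parts)

def decompress_stream(payload):
    return zlib.decompress(payload)
```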
  • Caches (whether appliances or otherwise) configured in accordance with embodiments of the present invention may also provide “predictive caching”. Caches configured in this manner examine content (e.g., HTML content as found in Web pages) being returned in response to a client request and identify embedded documents included therein. These embedded documents are then fetched and cached before the client's browser actually requests them (i.e., on the prediction that they will be needed). Techniques for performing such actions are discussed in U.S. Pat. No. 6,442,651, assigned to the assignee of the present invention and incorporated herein by reference. Such actions reduce the overall latency of a request.
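A sketch of the identification step, using Python's standard HTML parser; the set of tags treated as embedded documents is an illustrative assumption.

```python
from html.parser import HTMLParser

class EmbeddedLinkFinder(HTMLParser):
    """Scan an HTML response for embedded documents (images, scripts,
    stylesheets) so a cache can fetch them before the browser asks."""
    FETCHABLE = {("img", "src"), ("script", "src"), ("link", "href")}

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in self.FETCHABLE and value:
                self.links.append(value)

def predict_fetches(html):
    """Return the embedded-document URLs a predictive cache might
    prefetch from a returned HTML page."""
    finder = EmbeddedLinkFinder()
    finder.feed(html)
    return finder.links
```

Each returned URL would then be fetched and cached ahead of the browser's request, reducing the overall latency as described above.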
  • Other predictive caching operations may make use of a content delivery network (CDN) for distributing content to caches on a schedule, ahead of points in time when a client will request such content. In a CDN, the administrator or end user may have more specific control over how and when a cache is pre-populated with content in order to most effectively speed up later accesses. CDN techniques may be used in conjunction with an object cache, byte cache, or both.
  • Still another technique for predictive caching is “read ahead”. For CIFS and MAPI (messaging application programming interface) downloads the cache may be configured to “read ahead” to request later blocks in a file/message before the client requests same. All of these forms of predictive caching can be combined with object caching and byte caching in order to improve the performance of these techniques in accordance with embodiments of the present invention. For example, in the case of HTTP, streaming content and CIFS, predictive caching may be used to store objects in the object-level cache, while in the case of MAPI, the read ahead may be used to populate the byte-level cache.
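The read-ahead form of predictive caching can be sketched as follows; the block size, lookahead depth, and the `read_block` callback standing in for the upstream file read are illustrative assumptions.

```python
def read_with_readahead(read_block, offset, block_size, cache, lookahead=2):
    """When the client asks for one block, also request the next few
    so that later sequential reads (as in CIFS/MAPI downloads) are
    served from the cache instead of the remote server."""
    for i in range(lookahead + 1):
        o = offset + i * block_size
        if o not in cache:
            cache[o] = read_block(o, block_size)
    return cache[offset]
```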
  • Having thus described some of the advantages offered by the present invention, we turn now to a description of the operation of various embodiments thereof. Referring to FIG. 2A, a system 10 includes a client 12 communicatively coupled via a network 14 to a cache 16 that is configured in accordance with the present invention. Network 14 may be a local area network or, in some cases, may be a point-to-point connection, such as an Ethernet connection. In other cases, as shown in FIG. 2B, network 14 may be omitted altogether and functionality performed by cache 16′ may be instantiated within client 12, for example as a proxy cache of a Web browser application or another application (e.g., a thin client) executing on client 12. For clarity, however, it is easier to treat cache 16 as a separate appliance in the remaining discussion.
  • Returning to FIG. 2A, cache 16 is communicatively coupled through a wide area network 18 to a second cache 20. Network 18 may be a single network or a network of networks, such as the Internet. In some cases, network 18 may be a private network (including, for example, a virtual private network established within a public network). Although not shown in this illustration, cache 16 may be communicatively coupled to cache 20 through a multitude of routers, switches, and/or other network communication devices. The details of such communication paths are not critical to the present invention and so they are represented simply as network 18. Likewise, in some cases the caches 16 and 20 may be communicatively coupled through one or more gateways and/or firewalls, but these details are not critical to the present invention and so are not shown in detail.
  • Cache 20 is communicatively coupled via network 22 to a server (or other content source) 24. As was the case with network 14, network 22 may be a local area network or, in some cases, may be a point-to-point connection, such as an Ethernet connection. In other cases, for example as shown in FIG. 2C, network 22 may be omitted altogether and functionality performed by cache 20′ may be instantiated within server 24, for example as a proxy cache of a Web server application or other application executing on server 24. For the sake of completeness, FIG. 2D shows an example where each cache component is instantiated as an application (e.g., a thin client), with one executing on the client 12 and the other on server 24. Of course, having a cache application 20′ executing on server 24 may, in some cases, be somewhat redundant, but in other cases it provides significant advantages. For example, where the server itself needs to consult other data sources in order to respond to client requests, having such a cache may be particularly advantageous.
  • As used herein, the terms “client” and “server” refer to relationships between various computer-based devices, not necessarily to particular physical devices. A “client” or “server” can include any of the following: (a) a single physical device capable of executing software which bears a client or server relationship with respect to a cache; (b) a portion of a physical device, such as a software process or set of software processes capable of executing on a physical device, which portion of the physical device bears a client or server relationship to a cache; or (c) a plurality of physical devices, or portions thereof, capable of cooperating to form a logical entity which bears a client or server relationship to a cache. The phrases “client” and “server” therefore refer to such logical entities and not necessarily to particular physical devices. Further, in any of the embodiments described herein, either or both of networks 14 and 22 could be local area networks, wide area networks, or other more complicated networks, such as the Internet. Likewise, network 18 could be a wide area network or a local area network. One common scenario would have network 18 being a corporate intranet and network 22 being the Internet.
  • For purposes of understanding functions performed by caches 16 and 20 (or 16′ and/or 20′), the example below assumes that client 12 has made a request for content from server 24. Of course, the reverse process, where the client is sending data to the server, would implicate somewhat reverse functionality (i.e., cache #1 and cache #2 may perform similar operations in the reverse data flow direction). One difference that would be apparent in the reverse data flow direction is that one could cache “write” operations (say, for CIFS). The caching of a write operation is somewhat different from that of a read. Data stored on a cached read is generally used by subsequent read operations. For a cached write, the data is not used by subsequent writes but rather by subsequent reads. One significant benefit of the present invention is the ability to cache CIFS writes.
  • Returning to the read example, as the requested content is returned, it arrives first at cache 20. Depending on the type of session between client 12 and server 24 and the type and nature of the content being transferred, any one or more of the following operations, illustrated in FIG. 3, may be performed.
      • a. Transform (1)/Application Proxy layer: This is a data transformation process in which encrypted content (e.g., as part of an SSL session) may be decrypted for further processing within cache 20. Alternatively, or in addition, content that arrives in a compressed form may be decompressed for further processing. Other types of application-level transforms may also be employed (e.g., at the application proxy level). For example, HTTP content may arrive with “chunked encoding”. A transformation may be performed (e.g., by the HTTP application proxy associated with cache 20) to restore the data to its original form before further processing is done. Application-level proxies are especially well suited for such roles where examining and reassembling content into its correct form is required before further caching/compression operations may be undertaken. Likewise, the server could deliver the content in an encoded fashion (e.g., GZip encoding). An associated application proxy may be used to decode the content prior to further processing.
      • b. Policy-based operations (1): Prior to any actual caching of content at cache 20, the content may be scrutinized for compliance with one or more policies enforced by cache 20. For example, virus scanning may be performed at this time to ensure that the content is virus free before it is cached. Such scanning may be done by cache 20 or may be done by other resources not illustrated in the drawing and the results reported to cache 20.
      • c. Object-level caching: This is optional and in some embodiments may not be performed at the first cache. That is, in some embodiments the content will be subject to only the application proxy processing and byte caching. To the extent application (or object)-level caching is performed on the content, cacheable objects are copied into the object cache portion of cache 20. Also, to the extent there were any previously cached objects that satisfy the client's original request (which objects were not already present in the cache closest to the client), those objects may be served out of the object cache portion of cache 20 (note that the request for these objects may not have been forwarded to server 24, although a check may be made to determine whether or not the objects had been updated since last being cached and, if so, new copies thereof may have been requested).
      • d. Policy and transformation operations (2): Again, this is an optional procedure for the first cache in the content delivery path and in some cases is performed only at the cache closest to the client. Further policy-based operations and/or data transformation operations may be performed on the data not cached at the object cache level. Examples of such operations may include rewriting of Web pages to conform to a network operator's policies, stripping Java-based or other active content, etc. Even if the object is served from the cache, one may choose to have policy-based operations and data transformations performed.
      • e. Byte cache compression: At this point byte/stream caching may be employed to replace any previously cached strings with their respective tokens. Previously uncached strings may also be added to the byte cache layer of cache 20 at this time. That is, the strings may be cached in the byte cache layer and the byte cache table updated to reflect their availability along with their newly assigned, respective tokens.
      • f. Intra-stream compression: To the extent possible (and if desired), data redundancies may be reduced or eliminated using commercially available intra-stream compression routines. Examples of such operations include run length encoding, or more complex operations as performed by commercially available software tools such as Zlib, zip (including Gzip, PKZip, etc.), LZ adaptive dictionary-based algorithms, etc. This may be applied to some or all of the data resulting from the previous step.
      • g. Transformation (3): This final data transformation stage is optional. In cases where the client-server communications are to be encrypted, this transformation may involve encrypting the new data stream (produced after the caching and compression operations described above) according to the selected encryption algorithm or, in some cases, a new encryption scheme known only to caches 16 and 20.
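The byte cache compression of step (e) above can be sketched as follows. This is a simplified illustration rather than the patent's implementation: the `ByteCache` class, the fixed `CHUNK` boundary, and the integer tokens are assumptions chosen for clarity, whereas production byte caches typically use content-defined chunk boundaries and rolling fingerprints.

```python
import hashlib

# Illustrative sketch of byte cache compression: previously seen byte
# strings are replaced by short integer tokens; newly seen strings are
# cached (and assigned tokens) while being passed through as literals.

CHUNK = 8  # illustrative fixed segment size (an assumption, not the patent's scheme)

class ByteCache:
    def __init__(self):
        self.table = {}       # fingerprint -> (token, cached bytes)
        self.next_token = 0

    def compress(self, data: bytes) -> list:
        """Return a mixed list of literal byte chunks and integer tokens."""
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            fp = hashlib.sha1(chunk).digest()
            if fp in self.table:
                out.append(self.table[fp][0])    # previously cached: emit token only
            else:
                token = self.next_token
                self.next_token += 1
                self.table[fp] = (token, chunk)  # newly cached: store, emit literal
                out.append(chunk)
        return out

cache = ByteCache()
first = cache.compress(b"ABCDEFGH" * 2)  # second chunk repeats the first
second = cache.compress(b"ABCDEFGH")     # entire payload already cached
```

On the first pass the repeated chunk collapses to a token; on the second pass the whole payload is replaced by tokens, which is the source of the bandwidth savings across network 18.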
  • The output data stream from cache 20 is transmitted across network 18 to cache 16, where some or all of the following operations may be performed:
      • a. Transformation (4): This data transformation operation will, in many cases, be an inverse of the Transformation (3) operation performed by cache 20. For example, if cache 20 performed an encryption operation, cache 16 will need to decrypt the data prior to processing it further. Other transformation operations that may be performed at this stage may not be perfect inverse operations of procedures applied at cache 20.
      • b. Intra-stream decompression: If intra-stream compression was performed by cache 20, cache 16 will need to perform inverse operations at this stage to expand the compressed data before it can be further processed. Generally, the intra-stream compression operation as a whole must be lossless. That is, the entirety of the data stream must be recoverable by cache 16.
      • c. Byte cache decompression: At this stage of processing, the tokens representing data streams replaced during the byte cache operations of cache 20 are themselves replaced by the corresponding data streams out of the byte cache layer of cache 16. In addition, any new streams cached in the byte cache layer of cache 20 will also be cached to the byte cache layer of cache 16 (and corresponding tokens assigned) so that the byte cache layers of each of the caches 16 and 20 remain identical. This ensures that the newly cached information can be properly tokenized for transfer between the caches in the future.
      • d. Policy and transformation operations (5): At this stage, policy-based operations not performed at cache 20 may be performed by cache 16 in order to reflect local network policies. Layering of policies (some by cache 20 and others by cache 16) in this fashion may include such things as cache 20 performing rewrites based on access methods (say, converting HTTP URLs to HTTPS URLs) and cache 16 performing additional rewrites based on local policies (say, translating to a local language). In addition, inverse data transformation operations to those applied at transformation (2) in cache 20 may be performed, if such operations have an inverse.
      • e. Object-level caching: At this stage the object cache of cache 16 may supply any previously cached objects implicated by the client's original request (such requests would not necessarily have been passed upstream to the server 24). In addition, new objects may be stored to the object cache layer so as to be available to service later requests from client 12 or other requestors.
      • f. Transform (6)/Application Proxy layer: Again, this process may involve additive policy operations to apply local content or network policies at cache 16. In some cases, a data transformation process which is an inverse of the Transformation (1) process performed by cache 20 may also be applied. For example, the data stream may be encrypted (e.g., as part of an SSL session) prior to transmission to client 12. Alternatively, or in addition, the data stream may be compressed prior to such transmission. Of course, the transformation need not be an inverse of the Transformation (1) process and instead may be an encryption and/or compression operation (or other data transformation process) that had no equivalent operation at cache 20.
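The byte cache decompression of step (c) above is the mirror image of the compression sketch: the receiving cache expands tokens back into byte strings and mirrors any new literals into its own table, so the two byte cache layers stay identical. The `ByteCachePeer` class below is an illustrative assumption; it consumes the mixed literal/token stream produced by the compression sketch, assigning tokens in the same arrival order.

```python
# Illustrative sketch of byte cache decompression at the downstream cache.
# Tokens (ints) are looked up in the local table; literals (bytes) are
# cached locally so both ends assign identical tokens for future traffic.

class ByteCachePeer:
    def __init__(self):
        self.by_token = []  # token -> bytes; tokens assigned in arrival order

    def decompress(self, stream: list) -> bytes:
        out = bytearray()
        for item in stream:
            if isinstance(item, int):     # token: expand from local byte cache
                out += self.by_token[item]
            else:                         # new literal: mirror it, then pass through
                self.by_token.append(item)
                out += item
        return bytes(out)

peer = ByteCachePeer()
roundtrip = peer.decompress([b"ABCDEFGH", 0])  # literal then token for same chunk
again = peer.decompress([0])                   # a later, fully tokenized transfer
```

Because the literal is cached before the token is resolved, a stream that references data introduced earlier in the same transfer still expands correctly.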
  • Note that at the various policy/transformation stages complex, remote operations such as virus scanning and URL filtering may be performed. That is, some of these network policies may be implemented by transferring the content (or portions thereof) to other devices responsible for such things as virus scanning, filtering or other operations and returning the results and/or filtered content to the cache. The precise nature of such operations is not critical to the present invention, but the present invention does accommodate the use thereof. Also, socketpairs may (but need not) be used for communications between object cache and byte cache layers of a single cache appliance. Sockets typically define the communication paths to/from a network, but in this case a pair of sockets is used to define communications between the various cache layers of a cache appliance. Socketpairs provide a convenient method to optionally insert processing layers without changing the input/output structure of application proxy code, and thus are beneficial for fast and error-free implementation of the techniques described herein.
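The socketpair arrangement described above can be illustrated with Python's standard `socket.socketpair()`: the two endpoints behave like a connected network socket, so a cache layer written against socket I/O needs no changes when its peer is another in-process layer rather than the network. The payload string is illustrative.

```python
import socket

# One end stands in for the object cache layer, the other for the byte
# cache layer; data written to one endpoint is read from the other, just
# as it would be over a network socket.
object_side, byte_side = socket.socketpair()

object_side.sendall(b"object-cache payload")
received = byte_side.recv(1024)  # byte cache layer reads what the object layer wrote

object_side.close()
byte_side.close()
```

Inserting or removing a processing layer then amounts to splicing another socketpair into the chain, leaving the application proxy code's input/output structure untouched.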
  • It is not necessary that all of the data transforms or policy operations performed at the first cache be reversed at the second cache. In many cases, especially where policy operations are applied at both caches, the operations will be additive. That is, different policy operations will be performed. Also, at the second cache (that is, the one closest to the requestor), previously cached objects may be added to the output of the byte cache layer. Indeed, the output of the byte cache layer may itself be cached at the object cache layer of the cache closest to the requestor (if application level objects can be identified in that output) so as to have such objects available for later requests.
  • Achieving the best possible performance from a byte cache can be difficult from an implementation standpoint. In the context of the present invention, however, the task is made easier because of the presence of the application layer cache functionality. Specifically, the application layer cache may be configured to identify portions of a stream which are “uninteresting” for byte caching. In one embodiment, for example, a hinting mechanism may be used to allow the application layer cache to identify “protocol metadata” which should not be stored in the byte cache. This metadata is not likely to be repeated in a subsequent transmission of the same file, because it is protocol specific and may even be specific to a single transmission/connection. By using this hinting mechanism to identify the material which should not be cached, the overall operation of the byte cache layer is improved.
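One way such a hinting mechanism could work is for the application proxy to pass byte ranges that contain protocol metadata down to the byte cache layer, which then excludes those ranges from caching. The `split_by_hints` helper and its range format below are assumptions for illustration, not the patent's interface.

```python
from typing import List, Tuple

# Illustrative hinting sketch: the application proxy supplies (start, end)
# ranges that hold protocol metadata; the stream is split into pieces
# flagged cacheable (payload) or not (metadata the byte cache should skip).
def split_by_hints(data: bytes, skip_ranges: List[Tuple[int, int]]):
    pieces, pos = [], 0
    for start, end in sorted(skip_ranges):
        if pos < start:
            pieces.append((True, data[pos:start]))   # cacheable payload
        pieces.append((False, data[start:end]))      # protocol metadata: do not cache
        pos = end
    if pos < len(data):
        pieces.append((True, data[pos:]))
    return pieces

# Hypothetical message: a 7-byte header and 3-byte trailer frame the payload.
msg = b"HDR1234payloadpayloadTRL"
pieces = split_by_hints(msg, [(0, 7), (21, 24)])
```

Only the payload piece would be fed to the byte cache's fingerprinting, so connection-specific headers never pollute the cache or its index.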
  • Other “policy-based” decisions (e.g., decisions taken at the application proxy level based on configured policy information, heuristics or other algorithms, such as string matches, executed on the received content) that determine what information is or is not cached at the byte cache level may concern “sensitive” information. For example, personally identifying information of a user may be determined to be non-cacheable (even at the byte cache level) for policy reasons. The object level cache may be configured to mark such non-cacheable content so that it is not cached at the byte cache level. Such operations are not feasible with conventional byte caches because such devices have no ability to determine the nature of the information being transmitted thereto.
  • The performance of the various cache layers is also affected by contention for limited resources in the single cache appliance. For example, both the object cache and the byte cache require and utilize memory and disk resources. That is, a cache configured in accordance with the present invention may include both short-term storage (typically in the form of read/write memory) and longer-term storage (typically in the form of one or more hard disk drives which may be read from/written to). Information received at the cache is usually first stored to memory and later transferred to disk (assuming it is to be preserved for a longer period). One reason for this division of storage is that it typically takes longer to read from/write to disk than to/from memory, and so, in order to avoid losses of data due to read/write latencies, this two-level storage technique is employed. Of course, the optional intra-stream compression layer also requires memory resources if it is used. These contention issues may be addressed in a variety of ways.
  • For example, in one embodiment of the present invention, content (i.e., data bytes) may be stored on disk both within the object cache layer and within the byte cache layer. This may (and often will) mean that the same information is stored twice. This is not necessarily a problem inasmuch as disks tend to be large (in terms of storage space) and relatively inexpensive. Nevertheless, the situation can be improved by using the byte cache to “compress” the contents of the object cache. This would allow information stored in the object cache layer to be much reduced in size when stored to disk.
  • Unlike disk space, however, memory remains relatively expensive per unit volume and so memory within the cache appliance is a precious resource. At the same time, intra-stream compression requires that large amounts of memory be allocated to so-called “stream contexts”: sets of parameters and stream specific options that modify or enhance the behavior of a stream. In order to optimize the use of memory for such stream contexts, one embodiment of the present invention stores a limited number of these contexts and re-uses them across multiple streams by migrating the contexts from one stream to the next. Further, in some cases a compression operation may be omitted where the data is determined to be poorly compressible (either by the application proxy determining same, or because the cache has computed the compression rate or factor during compression operations and determined that it does not meet a previously established threshold). This can not only save memory, but also improve CPU performance. Also, one may choose to migrate the compression contexts to disk during periods when they are not in use in order to save memory.
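The compressibility check described above can be sketched with the standard `zlib` module: compress the data, compare the result against a ratio threshold, and fall back to sending the original bytes when compression barely helps. The 0.9 threshold is an assumption for illustration; an appliance would tune it (or take the hint from the application proxy instead).

```python
import os
import zlib

# Illustrative sketch: skip intra-stream compression when the measured
# compression factor misses a configured threshold, saving CPU and memory.
def maybe_compress(data: bytes, threshold: float = 0.9):
    """Return (compressed_flag, payload)."""
    candidate = zlib.compress(data)
    if len(candidate) < threshold * len(data):
        return True, candidate   # worthwhile: ship the compressed form
    return False, data           # poorly compressible: ship as-is

flag_text, _ = maybe_compress(b"abc" * 1000)      # highly repetitive: compresses well
flag_rand, _ = maybe_compress(os.urandom(1000))   # random bytes: left uncompressed
```

Note this sketch pays the compression cost before deciding; a production path would more likely sample a prefix or rely on the application proxy's knowledge of the content type, as the text suggests.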
  • Of course, memory is also needed by the object cache layer (e.g., to store an object hash table and memory cache) and the byte cache layer (e.g., to store a fingerprint table that acts as an index into the byte cache). To accommodate these needs, in one embodiment of the present invention the actual byte cache data is stored as a set of objects within the object cache, so the memory cache provided by the object cache layer is effectively used for both object cache data and byte cache data. In some cases it may be necessary to store the object hash table (or a portion thereof) on disk in order to free up more memory space for the byte cache fingerprint table.
  • In some cases a “thinning” mechanism may be employed in order to further optimize the memory space allocated to the byte cache's fingerprint table. For example, for less popular data stored by the byte cache, the associated entries in the fingerprint table may be thinned (i.e., removed) over time. The consequence, of course, is that more fingerprint entries are kept for popular byte streams and, therefore, searches of the byte cache are more likely to find matches. The net result is improved compression ratios overall. Similar techniques may be used at the object cache level, for example by employing a “least recently used” or other form of cache clean-up mechanism.
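A least-recently-used policy, one concrete way to realize the thinning described above, can be sketched with `collections.OrderedDict`: popular fingerprints are refreshed on each hit, and when the table exceeds its budget the stalest entry is dropped. The `FingerprintTable` class and its tiny capacity are illustrative assumptions.

```python
from collections import OrderedDict

# Illustrative sketch of fingerprint-table thinning via LRU eviction:
# entries for popular byte streams survive; unpopular ones are thinned.
class FingerprintTable:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # fingerprint -> byte-cache offset

    def touch(self, fingerprint, offset):
        if fingerprint in self.entries:
            self.entries.move_to_end(fingerprint)  # hit: mark most recently used
        self.entries[fingerprint] = offset
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)       # thin the least recently used entry

table = FingerprintTable(capacity=2)
table.touch("fp1", 0)
table.touch("fp2", 100)
table.touch("fp1", 0)    # fp1 is popular and stays fresh
table.touch("fp3", 200)  # capacity exceeded: fp2, the coldest entry, is thinned
```

The memory budget thus buys hit rate where it matters, which is the improved-compression effect the text describes.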
  • In yet a further aspect of the present invention, a “split proxy” may be implemented. In some embodiments of the present invention the object cache layers at each of cache 16 and cache 20 may store all cacheable objects returned by the content source. For example, this may be required where the caches need to operate independently of one another when servicing some client requests. However, it is also possible to configure the two object cache layers to operate as two halves of the same object cache that just happen to be executing on different devices.
  • The split proxy concept allows some objects to be stored at cache 20 and other objects to be stored at cache 16. The byte caching layer is then used as a transport and signaling channel between the two halves. The most basic signaling done on this byte caching channel would be to detect that there is an “other half” and agree to operate in split proxy mode. Thereafter the two object caches may communicate with one another to decide which will store what objects as they are returned from content sources and also to determine whether one of the halves has a copy of a requested object that can be used to satisfy a current request.
  • A split proxy also allows for the processing, rather than just the storage, to be split. For some application protocols it may be advantageous to perform certain parts of the processing closest to the server, even though all of the data is ultimately cached at the object level at the cache closest to the client. For example, in the case of read ahead operations, the cache closest to the server may be tasked with all of the read ahead operations, and all of the data may be sent in a more efficient form to the cache closest to the client, where protocol-based processing and other optimizations are performed. That is, the data may be sent in its most efficient form, without certain overhead that would otherwise be imposed by application layer protocols. In this form, the data is still subject to the benefits of byte caching and, indeed, may be more “byte cacheable”.
  • A further optimization of the present invention concerns the sizes for various byte caching parameters. That is, in order to keep the size of the byte cache (i.e., the amount of memory and disk space that it consumes) to a manageable level, it is not feasible to cache every possible byte pattern observed during a communication session. At the same time, if the cached streams are too fragmented, long matches are prohibited and the efficiency of the cache will be reduced. To balance these competing interests, one embodiment of the present invention provides for a threshold: hits of length below the threshold are included in the byte cache for future use (avoiding fragmenting the stream), while hits longer than the threshold are not included, in order to avoid the byte cache becoming too large. In some cases, application-level information may be used/evaluated in order to set the appropriate threshold.
  • Different applications may have very different data characteristics. Even within a particular application, different files or request types may have different characteristics. For example, some applications and even particular types of files may have common byte patterns which are very long, while others may have much shorter common byte patterns. For this reason, individual application proxies may wish to control certain parameters related to byte caching in order to optimally store and find content in the byte cache. For applications or file types where repeated byte patterns are long, the application may wish to increase the threshold described in the previous paragraph. For applications or file types where the repeated byte patterns are always expected to be short, it may be desirable to decrease or even eliminate the threshold described in the previous paragraph. In addition, such applications or file types may also wish to have the byte cache produce more frequent index data in order to increase the likelihood of finding smaller repeated patterns. This indexing is normally done by performing a computation on a small set of bytes (sometimes called a “shingle”); the length of the shingle is a lower bound on the size of repeated data which can be detected. For applications or file types which are expected to have very short repeated patterns, it may also be necessary to decrease the shingle size used when storing data of this type into the byte cache. In one embodiment of the invention, application proxies consider the protocol, file type, and other characteristics of the data, and choose optimal parameters for storing and retrieving that data within the byte cache.
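The shingle-based indexing described above can be sketched as follows: a fingerprint is computed over every window of `shingle` bytes, and a sampling mask decides which positions actually enter the index. A smaller shingle or a denser mask (as an application proxy might request for short-repeat file types) finds shorter repeats at the cost of a larger index. The function name, SHA-1-based fingerprint, and mask values are illustrative assumptions, not the patent's algorithm.

```python
import hashlib

# Illustrative sketch of shingle indexing for a byte cache: fingerprint
# each `shingle`-byte window, keep only positions whose fingerprint
# passes the sampling mask, and record the first offset per fingerprint.
def index_shingles(data: bytes, shingle: int, mask: int):
    index = {}
    for i in range(len(data) - shingle + 1):
        fp = int.from_bytes(hashlib.sha1(data[i:i + shingle]).digest()[:4], "big")
        if fp & mask == 0:             # sampling: mask bits control index density
            index.setdefault(fp, i)    # remember where this shingle first occurred
    return index

sample = b"the quick brown fox"
sparse = index_shingles(sample, shingle=8, mask=0x3)  # long shingle, ~1/4 sampling
dense = index_shingles(sample, shingle=4, mask=0x0)   # short shingle, every position
```

An application proxy expecting long repeated patterns would choose the sparse configuration to keep the fingerprint table small; one expecting short repeats would choose the dense configuration, exactly the per-application tuning the text describes.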
  • Thus, methods and systems for caching content at multiple levels have been described. In the foregoing description reference was made to various illustrated embodiments of the invention, but the invention should not be limited thereby. For example, notwithstanding anything described above the present invention is applicable to caching at any or all of a variety of layers, including but not limited to an IP layer, a TCP layer, an application layer, and/or layer 2. Moreover, a cache configured in accordance with the present invention may act as a read-write cache for byte caching and also for some application level proxies such as CIFS, etc. In some cases, the byte cache layer may in fact be a multi-level byte cache, in which different parameters are used for indexing each level. For example, a very large cache may have relatively sparse indexing information, while smaller caches may include more dense indexing information. By combining the two the byte caching portion of a cache appliance configured in accordance with the present invention may be optimized.
  • In yet another embodiment, illustrated in FIG. 4, a cache 30 configured in accordance with the present invention may be deployed in a “split tunnel” configuration. Here, client 12 connects to cache 30, which is situated at an Internet gateway of a corporate LAN, via network 14. As before, one port of cache 30 may be coupled to further elements of the corporate network 32 (e.g., including a reciprocal cache 34 that provides object and byte cache layers, servers, other clients, etc.). Another port of cache 30 is directly connected to the Internet (or other external network) 36. As such, on that communication link only the object cache layer (and perhaps the application proxies) of cache 30 would provide benefit inasmuch as there is no reciprocal byte cache layer to peer with. Nevertheless, certain application accelerations (i.e., those that benefit from object caching) could still benefit from this object cache layer and applications accessed over the corporate network 32 could still make use of both the object cache and byte cache layers.
  • Thus, in light of these and other variations which may be implemented, the present invention should be measured only in terms of the claims, which follow.

Claims (22)

1. A cache, comprising:
an object cache layer and a byte cache layer, each configured to store information to storage devices included in the cache, the object cache layer and the byte cache layer being configured to communicate with one another through a socketpair and wherein contents of the byte cache layer are stored as objects within the object cache; and
an application proxy layer configured to identify content that should not be cached by one or more of the object cache layer and the byte cache layer and to pass content not cacheable at the object cache layer to the byte cache layer.
2-4. (canceled)
5. The cache of claim 1, wherein the application proxy layer is configured to determine whether the content is compressible or not compressible and the byte cache layer is configured to compress contents of the object cache layer.
6. The cache of claim 5, wherein the object cache layer enables or disables compression at the byte cache layer based on whether the content is known to be compressible or not compressible.
7-8. (canceled)
9. A method, comprising receiving content from a content source at a cache having an object cache layer and a byte cache layer, the object cache layer and the byte cache layer being configured to communicate with one another through a socketpair, and caching said content first at the object cache layer of the cache and next at the byte cache layer of the cache so as to eliminate repeated strings present within the content after caching at the object cache layer, said contents of the byte cache layer being stored as objects within the object cache.
10. The method of claim 9, further comprising prior to caching said content at the object cache layer, transforming the content from a first format to a second format.
11. The method of claim 10, wherein said first format comprises an encrypted data format and said second format comprises a decrypted data format.
12. The method of claim 9 further comprising prior to caching said content at the object cache layer, examining said content for compliance with one or more policies.
13. The method of claim 12 wherein said one or more policies include policy checking performed remotely from the cache.
14. The method of claim 9, further comprising, prior to compressing said content at the byte cache layer, transforming said content from a first format to a second format.
15. The method of claim 9, further comprising intra-stream compressing an output of the byte cache layer.
16. The method of claim 15, further comprising transforming intra-stream compressed output of the byte cache layer from a first data format to a second data format.
17. The method of claim 16, wherein the first data format comprises an unencrypted data format and the second data format comprises an encrypted data format.
18. A method, comprising receiving content from a content source at a cache having an object cache layer and a byte cache layer, the object cache layer and the byte cache layer being configured to communicate with one another through a socketpair, decompressing said content at the byte cache layer to produce expanded content, and transmitting said expanded content to a client along with previously cached objects from the object cache layer.
19. The method of claim 18, further comprising caching the expanded content at the object cache layer.
20. The method of claim 18, further comprising prior to decompressing said content at the byte cache layer transforming the content from a first data format to a second data format.
21. The method of claim 20, wherein the first data format comprises an encrypted data format and the second data format comprises a decrypted data format.
22. The method of claim 20, wherein the first data format comprises an encrypted and compressed data format and the second data format comprises a decrypted and decompressed data format.
23. The method of claim 19, further comprising transforming the expanded content and previously cached objects from a first data format to a second data format.
24. The method of claim 23, wherein the first data format comprises a decrypted data format and the second data format comprises an encrypted data format.
25-40. (canceled)
US14/464,638 2006-03-24 2014-08-20 Methods and Systems for Caching Content at Multiple Levels Abandoned US20150019678A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/464,638 US20150019678A1 (en) 2006-03-24 2014-08-20 Methods and Systems for Caching Content at Multiple Levels

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US74375006P 2006-03-24 2006-03-24
US11/690,669 US8832247B2 (en) 2006-03-24 2007-03-23 Methods and systems for caching content at multiple levels
US14/464,638 US20150019678A1 (en) 2006-03-24 2014-08-20 Methods and Systems for Caching Content at Multiple Levels

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/690,669 Division US8832247B2 (en) 2006-03-24 2007-03-23 Methods and systems for caching content at multiple levels

Publications (1)

Publication Number Publication Date
US20150019678A1 true US20150019678A1 (en) 2015-01-15

Family

ID=38606191

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/690,669 Active 2030-12-09 US8832247B2 (en) 2006-03-24 2007-03-23 Methods and systems for caching content at multiple levels
US14/464,638 Abandoned US20150019678A1 (en) 2006-03-24 2014-08-20 Methods and Systems for Caching Content at Multiple Levels

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/690,669 Active 2030-12-09 US8832247B2 (en) 2006-03-24 2007-03-23 Methods and systems for caching content at multiple levels

Country Status (1)

Country Link
US (2) US8832247B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110295965A1 (en) * 2009-02-05 2011-12-01 Hyeon-Sang Eom Method for sending and receiving session history in a communication system

Families Citing this family (202)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7675854B2 (en) 2006-02-21 2010-03-09 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
US20090182955A1 (en) * 2006-09-08 2009-07-16 Rao Cherukuri Application configuration across client devices of a local system
US8312507B2 (en) 2006-10-17 2012-11-13 A10 Networks, Inc. System and method to apply network traffic policy to an application session
US8584199B1 (en) 2006-10-17 2013-11-12 A10 Networks, Inc. System and method to apply a packet routing policy to an application session
US9576254B2 (en) * 2007-03-01 2017-02-21 Zipcar, Inc. Multi-tiered fleet management cache
US7756130B1 (en) 2007-05-22 2010-07-13 At&T Mobility Ii Llc Content engine for mobile communications systems
US7444596B1 (en) * 2007-11-29 2008-10-28 International Business Machines Corporation Use of template messages to optimize a software messaging system
US8505038B2 (en) * 2008-01-28 2013-08-06 Blue Coat Systems, Inc. Method and system for enhancing MS exchange (MAPI) end user experiences in a split proxy environment
US20090271569A1 (en) * 2008-04-28 2009-10-29 Kannan Shivkumar Partitioned management data cache
US8407619B2 (en) * 2008-07-30 2013-03-26 Autodesk, Inc. Method and apparatus for selecting and highlighting objects in a client browser
US8793307B2 (en) * 2009-01-28 2014-07-29 Blue Coat Systems, Inc. Content associative caching method for web applications
US8785760B2 (en) 2009-06-01 2014-07-22 Music Mastermind, Inc. System and method for applying a chain of effects to a musical composition
EP2438589A4 (en) * 2009-06-01 2016-06-01 Music Mastermind Inc System and method of receiving, analyzing and editing audio to create musical compositions
US8779268B2 (en) 2009-06-01 2014-07-15 Music Mastermind, Inc. System and method for producing a more harmonious musical accompaniment
US9251776B2 (en) 2009-06-01 2016-02-02 Zya, Inc. System and method creating harmonizing tracks for an audio input
US9310959B2 (en) 2009-06-01 2016-04-12 Zya, Inc. System and method for enhancing audio
US9257053B2 (en) 2009-06-01 2016-02-09 Zya, Inc. System and method for providing audio for a requested note using a render cache
US9177540B2 (en) 2009-06-01 2015-11-03 Music Mastermind, Inc. System and method for conforming an audio input to a musical key
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
US8677009B2 (en) * 2010-01-22 2014-03-18 Microsoft Corporation Massive structured data transfer optimizations for high-latency, low-reliability networks
US20110185136A1 (en) * 2010-01-22 2011-07-28 Microsoft Corporation Moving large dynamic datasets via incremental change synchronization
US8726147B1 (en) * 2010-03-12 2014-05-13 Symantec Corporation Systems and methods for restoring web parts in content management systems
US9043385B1 (en) * 2010-04-18 2015-05-26 Viasat, Inc. Static tracker
US9253548B2 (en) 2010-05-27 2016-02-02 Adobe Systems Incorporated Optimizing caches for media streaming
US8839215B2 (en) 2010-07-19 2014-09-16 International Business Machines Corporation String cache file for optimizing memory usage in a java virtual machine
US8732426B2 (en) 2010-09-15 2014-05-20 Pure Storage, Inc. Scheduling of reactive I/O operations in a storage environment
US11275509B1 (en) 2010-09-15 2022-03-15 Pure Storage, Inc. Intelligently sizing high latency I/O requests in a storage environment
US8589655B2 (en) 2010-09-15 2013-11-19 Pure Storage, Inc. Scheduling of I/O in an SSD environment
US8589625B2 (en) 2010-09-15 2013-11-19 Pure Storage, Inc. Scheduling of reconstructive I/O read operations in a storage environment
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
US8468318B2 (en) 2010-09-15 2013-06-18 Pure Storage, Inc. Scheduling of I/O writes in a storage environment
US8775868B2 (en) 2010-09-28 2014-07-08 Pure Storage, Inc. Adaptive RAID for an SSD environment
US9244769B2 (en) 2010-09-28 2016-01-26 Pure Storage, Inc. Offset protection data in a RAID array
US9215275B2 (en) 2010-09-30 2015-12-15 A10 Networks, Inc. System and method to balance servers based on server load status
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US8839404B2 (en) * 2011-05-26 2014-09-16 Blue Coat Systems, Inc. System and method for building intelligent and distributed L2-L7 unified threat management infrastructure for IPv4 and IPv6 environments
US11636031B2 (en) 2011-08-11 2023-04-25 Pure Storage, Inc. Optimized inline deduplication
US8589640B2 (en) 2011-10-14 2013-11-19 Pure Storage, Inc. Method for maintaining multiple fingerprint tables in a deduplicating storage system
US8897154B2 (en) 2011-10-24 2014-11-25 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9386088B2 (en) 2011-11-29 2016-07-05 A10 Networks, Inc. Accelerating service processing using fast path TCP
EP2791819B1 (en) 2011-12-14 2017-11-01 Level 3 Communications, LLC Content delivery network
US9094364B2 (en) 2011-12-23 2015-07-28 A10 Networks, Inc. Methods to manage services over a service gateway
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
GB2500374A (en) * 2012-03-13 2013-09-25 Ibm Optimisation of mobile data communication using byte caching
GB2500373A (en) * 2012-03-13 2013-09-25 Ibm Object caching for mobile data communication with mobility management
US8719540B1 (en) 2012-03-15 2014-05-06 Pure Storage, Inc. Fractal layout of data blocks across multiple devices
US8856445B2 (en) * 2012-05-24 2014-10-07 International Business Machines Corporation Byte caching with chunk sizes based on data type
US8832375B2 (en) * 2012-05-24 2014-09-09 International Business Machines Corporation Object type aware byte caching
US8782221B2 (en) 2012-07-05 2014-07-15 A10 Networks, Inc. Method to allocate buffer for TCP proxy session based on dynamic network conditions
US9351196B2 (en) 2012-08-31 2016-05-24 International Business Machines Corporation Byte caching in wireless communication networks
WO2014052099A2 (en) 2012-09-25 2014-04-03 A10 Networks, Inc. Load distribution in data networks
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US8745415B2 (en) 2012-09-26 2014-06-03 Pure Storage, Inc. Multi-drive cooperation to generate an encryption key
US11032259B1 (en) 2012-09-26 2021-06-08 Pure Storage, Inc. Data protection in a storage system
US10623386B1 (en) 2012-09-26 2020-04-14 Pure Storage, Inc. Secret sharing data protection in a storage system
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9847917B2 (en) 2012-12-13 2017-12-19 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback
US9634918B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Invalidation sequencing in a content delivery framework
US10652087B2 (en) 2012-12-13 2020-05-12 Level 3 Communications, Llc Content delivery framework having fill services
US10701148B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Content delivery framework having storage services
US10791050B2 (en) 2012-12-13 2020-09-29 Level 3 Communications, Llc Geographic location determination in a content delivery framework
US20140337472A1 (en) 2012-12-13 2014-11-13 Level 3 Communications, Llc Beacon Services in a Content Delivery Framework
US10701149B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Content delivery framework having origin services
CN105074688B (en) * 2012-12-27 2018-04-17 Akamai Technologies, Inc. Stream-based data deduplication using peer node graphs
US9699231B2 (en) 2012-12-27 2017-07-04 Akamai Technologies, Inc. Stream-based data deduplication using directed cyclic graphs to facilitate on-the-wire compression
US9420058B2 (en) 2012-12-27 2016-08-16 Akamai Technologies, Inc. Stream-based data deduplication with peer node prediction
US11768623B2 (en) 2013-01-10 2023-09-26 Pure Storage, Inc. Optimizing generalized transfers between storage systems
US9589008B2 (en) 2013-01-10 2017-03-07 Pure Storage, Inc. Deduplication of volume regions
US11733908B2 (en) 2013-01-10 2023-08-22 Pure Storage, Inc. Delaying deletion of a dataset
US10908835B1 (en) 2013-01-10 2021-02-02 Pure Storage, Inc. Reversing deletion of a virtual machine
US9037791B2 (en) 2013-01-22 2015-05-19 International Business Machines Corporation Tiered caching and migration in differing granularities
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9900396B2 (en) * 2013-02-14 2018-02-20 Comcast Cable Communications, Llc Predictive content caching
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
WO2014144837A1 (en) 2013-03-15 2014-09-18 A10 Networks, Inc. Processing data packets using a policy based network path
US10027761B2 (en) 2013-05-03 2018-07-17 A10 Networks, Inc. Facilitating a secure 3 party network session by a network device
WO2014179753A2 (en) 2013-05-03 2014-11-06 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
JP6131170B2 (en) * 2013-10-29 2017-05-17 Hitachi, Ltd. Computer system and data arrangement control method
US10365858B2 (en) 2013-11-06 2019-07-30 Pure Storage, Inc. Thin provisioning in a storage device
US11128448B1 (en) 2013-11-06 2021-09-21 Pure Storage, Inc. Quorum-aware secret sharing
US10263770B2 (en) 2013-11-06 2019-04-16 Pure Storage, Inc. Data protection in a storage system using external secrets
US10230770B2 (en) 2013-12-02 2019-03-12 A10 Networks, Inc. Network proxy layer for policy-based application proxies
US9208086B1 (en) 2014-01-09 2015-12-08 Pure Storage, Inc. Using frequency domain to prioritize storage of metadata in a cache
US10656864B2 (en) 2014-03-20 2020-05-19 Pure Storage, Inc. Data replication within a flash storage array
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US10020979B1 (en) 2014-03-25 2018-07-10 A10 Networks, Inc. Allocating resources in multi-core computing environments
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US9806943B2 (en) 2014-04-24 2017-10-31 A10 Networks, Inc. Enabling planned upgrade/downgrade of network devices without impacting network sessions
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US9779268B1 (en) 2014-06-03 2017-10-03 Pure Storage, Inc. Utilizing a non-repeating identifier to encrypt data
US9218244B1 (en) 2014-06-04 2015-12-22 Pure Storage, Inc. Rebuilding data across storage nodes
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US9218407B1 (en) 2014-06-25 2015-12-22 Pure Storage, Inc. Replication and intermediate read-write state for mediums
US10496556B1 (en) 2014-06-25 2019-12-03 Pure Storage, Inc. Dynamic data protection within a flash storage system
US10296469B1 (en) 2014-07-24 2019-05-21 Pure Storage, Inc. Access control in a flash storage system
US9495255B2 (en) 2014-08-07 2016-11-15 Pure Storage, Inc. Error recovery in a storage cluster
US9558069B2 (en) 2014-08-07 2017-01-31 Pure Storage, Inc. Failure mapping in a storage array
US9864761B1 (en) 2014-08-08 2018-01-09 Pure Storage, Inc. Read optimization operations in a storage system
US10430079B2 (en) 2014-09-08 2019-10-01 Pure Storage, Inc. Adjusting storage capacity in a computing system
US10164841B2 (en) 2014-10-02 2018-12-25 Pure Storage, Inc. Cloud assist for storage systems
US10430282B2 (en) 2014-10-07 2019-10-01 Pure Storage, Inc. Optimizing replication by distinguishing user and system write activity
US9489132B2 (en) 2014-10-07 2016-11-08 Pure Storage, Inc. Utilizing unmapped and unknown states in a replicated storage system
US9727485B1 (en) 2014-11-24 2017-08-08 Pure Storage, Inc. Metadata rewrite and flatten optimization
US9773007B1 (en) 2014-12-01 2017-09-26 Pure Storage, Inc. Performance improvements in a storage system
US9552248B2 (en) 2014-12-11 2017-01-24 Pure Storage, Inc. Cloud alert to replica
US9588842B1 (en) 2014-12-11 2017-03-07 Pure Storage, Inc. Drive rebuild
US9864769B2 (en) * 2014-12-12 2018-01-09 Pure Storage, Inc. Storing data utilizing repeating pattern detection
US10545987B2 (en) 2014-12-19 2020-01-28 Pure Storage, Inc. Replication to the cloud
US11947968B2 (en) 2015-01-21 2024-04-02 Pure Storage, Inc. Efficient use of zone in a storage device
US10296354B1 (en) 2015-01-21 2019-05-21 Pure Storage, Inc. Optimized boot operations within a flash storage array
US9710165B1 (en) 2015-02-18 2017-07-18 Pure Storage, Inc. Identifying volume candidates for space reclamation
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10178169B2 (en) 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US9547441B1 (en) 2015-06-23 2017-01-17 Pure Storage, Inc. Exposing a geometry of a storage device
US10310740B2 (en) 2015-06-23 2019-06-04 Pure Storage, Inc. Aligning memory access operations to a geometry of a storage device
US10581976B2 (en) 2015-08-12 2020-03-03 A10 Networks, Inc. Transmission control of protocol state exchange for dynamic stateful service insertion
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies
KR20170028825A (en) 2015-09-04 Pure Storage, Inc. Memory-efficient storage and searching in hash tables using compressed indexes
US11341136B2 (en) 2015-09-04 2022-05-24 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
US11269884B2 (en) 2015-09-04 2022-03-08 Pure Storage, Inc. Dynamically resizable structures for approximate membership queries
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US10365798B1 (en) 2016-01-08 2019-07-30 Microsoft Technology Licensing, Llc Feedback manager for integration with an application
US10318288B2 (en) 2016-01-13 2019-06-11 A10 Networks, Inc. System and method to process a chain of network applications
US10452297B1 (en) 2016-05-02 2019-10-22 Pure Storage, Inc. Generating and optimizing summary index levels in a deduplication storage system
US10133503B1 (en) 2016-05-02 2018-11-20 Pure Storage, Inc. Selecting a deduplication process based on a difference between performance metrics
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US10756816B1 (en) 2016-10-04 2020-08-25 Pure Storage, Inc. Optimized fibre channel and non-volatile memory express access
US10162523B2 (en) 2016-10-04 2018-12-25 Pure Storage, Inc. Migrating data between volumes using virtual copy operation
US10191662B2 (en) 2016-10-04 2019-01-29 Pure Storage, Inc. Dynamic allocation of segments in a flash storage system
US10545861B2 (en) 2016-10-04 2020-01-28 Pure Storage, Inc. Distributed integrated high-speed solid-state non-volatile random-access memory
US10481798B2 (en) 2016-10-28 2019-11-19 Pure Storage, Inc. Efficient flash management for multiple controllers
US10185505B1 (en) 2016-10-28 2019-01-22 Pure Storage, Inc. Reading a portion of data to replicate a volume based on sequence numbers
US10359942B2 (en) 2016-10-31 2019-07-23 Pure Storage, Inc. Deduplication aware scalable content placement
US10452290B2 (en) 2016-12-19 2019-10-22 Pure Storage, Inc. Block consolidation in a direct-mapped flash storage system
US11550481B2 (en) 2016-12-19 2023-01-10 Pure Storage, Inc. Efficiently writing data in a zoned drive storage system
US10389835B2 (en) 2017-01-10 2019-08-20 A10 Networks, Inc. Application aware systems and methods to process user loadable network applications
US11093146B2 (en) 2017-01-12 2021-08-17 Pure Storage, Inc. Automatic load rebalancing of a write group
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US11403019B2 (en) 2017-04-21 2022-08-02 Pure Storage, Inc. Deduplication-aware per-tenant encryption
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US10402266B1 (en) 2017-07-31 2019-09-03 Pure Storage, Inc. Redundant array of independent disks in a direct-mapped flash storage system
US10831935B2 (en) 2017-08-31 2020-11-10 Pure Storage, Inc. Encryption management with host-side data reduction
US10776202B1 (en) 2017-09-22 2020-09-15 Pure Storage, Inc. Drive, blade, or data shard decommission via RAID geometry shrinkage
US10789211B1 (en) 2017-10-04 2020-09-29 Pure Storage, Inc. Feature-based deduplication
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
US11144638B1 (en) 2018-01-18 2021-10-12 Pure Storage, Inc. Method for storage system detection and alerting on potential malicious action
US10970395B1 (en) 2018-01-18 2021-04-06 Pure Storage, Inc. Security threat monitoring for a storage system
US11010233B1 (en) 2018-01-18 2021-05-18 Pure Storage, Inc. Hardware-based system monitoring
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US11036596B1 (en) 2018-02-18 2021-06-15 Pure Storage, Inc. System for delaying acknowledgements on open NAND locations until durability has been confirmed
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US11934322B1 (en) 2018-04-05 2024-03-19 Pure Storage, Inc. Multiple encryption keys on storage drives
US10678433B1 (en) 2018-04-27 2020-06-09 Pure Storage, Inc. Resource-preserving system upgrade
US11385792B2 (en) 2018-04-27 2022-07-12 Pure Storage, Inc. High availability controller pair transitioning
US10678436B1 (en) 2018-05-29 2020-06-09 Pure Storage, Inc. Using a PID controller to opportunistically compress more data during garbage collection
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US10776046B1 (en) 2018-06-08 2020-09-15 Pure Storage, Inc. Optimized non-uniform memory access
US11281577B1 (en) 2018-06-19 2022-03-22 Pure Storage, Inc. Garbage collection tuning for low drive wear
US11869586B2 (en) 2018-07-11 2024-01-09 Pure Storage, Inc. Increased data protection by recovering data from partially-failed solid-state devices
US11194759B2 (en) 2018-09-06 2021-12-07 Pure Storage, Inc. Optimizing local data relocation operations of a storage device of a storage system
US11133076B2 (en) 2018-09-06 2021-09-28 Pure Storage, Inc. Efficient relocation of data between storage devices of a storage system
CN109246238A (en) * 2018-10-15 2019-01-18 China United Network Communications Group Co., Ltd. Content caching acceleration method and network device
US10846216B2 (en) 2018-10-25 2020-11-24 Pure Storage, Inc. Scalable garbage collection
US11113409B2 (en) 2018-10-26 2021-09-07 Pure Storage, Inc. Efficient rekey in a transparent decrypting storage array
US11194473B1 (en) 2019-01-23 2021-12-07 Pure Storage, Inc. Programming frequently read data to low latency portions of a solid-state storage array
US11588633B1 (en) 2019-03-15 2023-02-21 Pure Storage, Inc. Decommissioning keys in a decryption storage system
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US11397674B1 (en) 2019-04-03 2022-07-26 Pure Storage, Inc. Optimizing garbage collection across heterogeneous flash devices
US10990480B1 (en) 2019-04-05 2021-04-27 Pure Storage, Inc. Performance of RAID rebuild operations by a storage group controller of a storage system
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US11113302B2 (en) * 2019-04-23 2021-09-07 Salesforce.Com, Inc. Updating one or more databases based on dataflow events
US11487665B2 (en) 2019-06-05 2022-11-01 Pure Storage, Inc. Tiered caching of data in a storage system
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US10929046B2 (en) 2019-07-09 2021-02-23 Pure Storage, Inc. Identifying and relocating hot data to a cache determined with read velocity based on a threshold stored at a storage device
US11422751B2 (en) 2019-07-18 2022-08-23 Pure Storage, Inc. Creating a virtual storage system
US11086713B1 (en) 2019-07-23 2021-08-10 Pure Storage, Inc. Optimized end-to-end integrity storage system
US11403043B2 (en) 2019-10-15 2022-08-02 Pure Storage, Inc. Efficient data compression by grouping similar data within a data segment
US11675898B2 (en) 2019-11-22 2023-06-13 Pure Storage, Inc. Recovery dataset management for security threat monitoring
US11341236B2 (en) 2019-11-22 2022-05-24 Pure Storage, Inc. Traffic-based detection of a security threat to a storage system
US11941116B2 (en) 2019-11-22 2024-03-26 Pure Storage, Inc. Ransomware-based data protection parameter modification
US11625481B2 (en) 2019-11-22 2023-04-11 Pure Storage, Inc. Selective throttling of operations potentially related to a security threat to a storage system
US11520907B1 (en) 2019-11-22 2022-12-06 Pure Storage, Inc. Storage system snapshot retention based on encrypted data
US11651075B2 (en) 2019-11-22 2023-05-16 Pure Storage, Inc. Extensible attack monitoring by a storage system
US11687418B2 (en) 2019-11-22 2023-06-27 Pure Storage, Inc. Automatic generation of recovery plans specific to individual storage elements
US11645162B2 (en) 2019-11-22 2023-05-09 Pure Storage, Inc. Recovery point determination for data restoration in a storage system
US11657155B2 (en) 2019-11-22 2023-05-23 Pure Storage, Inc. Snapshot delta metric based determination of a possible ransomware attack against data maintained by a storage system
US11720714B2 (en) 2019-11-22 2023-08-08 Pure Storage, Inc. Inter-I/O relationship based detection of a security threat to a storage system
US11615185B2 (en) 2019-11-22 2023-03-28 Pure Storage, Inc. Multi-layer security threat detection for a storage system
US11755751B2 (en) 2019-11-22 2023-09-12 Pure Storage, Inc. Modify access restrictions in response to a possible attack against data stored by a storage system
US11500788B2 (en) 2019-11-22 2022-11-15 Pure Storage, Inc. Logical address based authorization of operations with respect to a storage system
US11720692B2 (en) 2019-11-22 2023-08-08 Pure Storage, Inc. Hardware token based management of recovery datasets for a storage system
CN113821173B (en) * 2021-09-17 2023-12-22 Jinan Inspur Data Technology Co., Ltd. Data storage method, apparatus, device, and computer-readable storage medium
US11843682B1 (en) * 2022-08-31 2023-12-12 Adobe Inc. Prepopulating an edge server cache

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010054020A1 (en) * 2000-03-22 2001-12-20 Barth Brian E. Method and apparatus for dynamic information connection engine
US20050066063A1 (en) * 2003-08-01 2005-03-24 Microsoft Corporation Sparse caching for streaming media
US20050160137A1 (en) * 2002-04-02 2005-07-21 Edison Ishikawa Collapsed distributed cooperative memory for interactive and scalable media-on-demand systems
US20060190308A1 (en) * 2000-03-22 2006-08-24 Janssens Marcel D Method and apparatus for dynamic information connection engine
US20060248547A1 (en) * 2005-04-14 2006-11-02 International Business Machines Corporation Multi-level cache apparatus and method for enhanced remote invocation performance
US20060288119A1 (en) * 2005-06-15 2006-12-21 Hostway Corporation Multi-level redirection system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6427187B2 (en) * 1998-07-31 2002-07-30 Cache Flow, Inc. Multiple cache communication
US6970939B2 (en) * 2000-10-26 2005-11-29 Intel Corporation Method and apparatus for large payload distribution in a network
CA2390954C (en) * 2001-06-19 2010-05-18 Foedero Technologies, Inc. Dynamic multi-level cache manager
US7171469B2 (en) * 2002-09-16 2007-01-30 Network Appliance, Inc. Apparatus and method for storing data in a proxy cache in a network
US7565425B2 (en) * 2003-07-02 2009-07-21 Amazon Technologies, Inc. Server architecture and methods for persistently storing and serving event data
US7308683B2 (en) * 2003-10-30 2007-12-11 International Business Machines Corporation Ordering of high use program code segments using simulated annealing
US20050120134A1 (en) * 2003-11-14 2005-06-02 Walter Hubis Methods and structures for a caching to router in iSCSI storage systems
US7543146B1 (en) * 2004-06-18 2009-06-02 Blue Coat Systems, Inc. Using digital certificates to request client consent prior to decrypting SSL communications
US7409600B2 (en) * 2004-07-12 2008-08-05 International Business Machines Corporation Self-healing cache system
US7516277B2 (en) * 2005-04-28 2009-04-07 Sap Ag Cache monitoring using shared memory
US20060294555A1 (en) * 2005-06-23 2006-12-28 Jianhua Xie Method and system for video on demand (VOD) servers to cache content
US8301839B2 (en) * 2005-12-30 2012-10-30 Citrix Systems, Inc. System and method for performing granular invalidation of cached dynamically generated objects in a data communication network
US7502888B2 (en) * 2006-02-07 2009-03-10 Hewlett-Packard Development Company, L.P. Symmetric multiprocessor system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110295965A1 (en) * 2009-02-05 2011-12-01 Hyeon-Sang Eom Method for sending and receiving session history in a communication system
US9444649B2 (en) * 2009-02-05 2016-09-13 Samsung Electronics Co., Ltd Method for sending and receiving session history in a communications system

Also Published As

Publication number Publication date
US8832247B2 (en) 2014-09-09
US20070245090A1 (en) 2007-10-18

Similar Documents

Publication Publication Date Title
US8832247B2 (en) Methods and systems for caching content at multiple levels
US10951739B2 (en) Data differencing across peers in an overlay network
US10911520B2 (en) Systems and methods of using the refresh button to determine freshness policy
US8112477B2 (en) Content identification for peer-to-peer content retrieval
US8615583B2 (en) Systems and methods of revalidating cached objects in parallel with request for object
US8364785B2 (en) Systems and methods for domain name resolution interception caching
US7809818B2 (en) Systems and method of using HTTP head command for prefetching
US7584294B2 (en) Systems and methods for prefetching objects for caching using QOS
US8504775B2 (en) Systems and methods of prefreshening cached objects based on user's current web page
US8037126B2 (en) Systems and methods of dynamically checking freshness of cached objects based on link status
US8103783B2 (en) Systems and methods of providing security and reliability to proxy caches
US8352605B2 (en) Systems and methods for providing dynamic ad hoc proxy-cache hierarchies
US7716306B2 (en) Data caching based on data contents
US20130018942A1 (en) System and method for bandwidth optimization in a network storage environment
US20080228864A1 (en) Systems and methods for prefetching non-cacheable content for compression history
US20080229020A1 (en) Systems and Methods of Providing A Multi-Tier Cache
JP2004535713A (en) System and method for increasing the effective bandwidth of a communication network
WO2004036362A2 (en) Compression of secure content
EP2795864B1 (en) Host/path-based data differencing in an overlay network using a compression and differencing engine
Khan, "Tree Delta Transcoding for Efficient Dynamic Resource Delivery on Asymmetric Internet."

Legal Events

Date Code Title Description
AS Assignment

Owner name: JEFFERIES FINANCE LLC, AS THE COLLATERAL AGENT, NE

Free format text: SECURITY INTEREST;ASSIGNOR:BLUE COAT SYSTEMS, INC.;REEL/FRAME:035751/0348

Effective date: 20150522

AS Assignment

Owner name: BLUE COAT SYSTEMS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JEFFERIES FINANCE LLC;REEL/FRAME:039516/0929

Effective date: 20160801

AS Assignment

Owner name: SYMANTEC CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BLUE COAT SYSTEMS, INC.;REEL/FRAME:039851/0044

Effective date: 20160801

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: CA, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYMANTEC CORPORATION;REEL/FRAME:052700/0638

Effective date: 20191104