Browser Caches
Browsers and other user agents benefit from having a built-in cache. When you press the Back button on your browser, it reads the previous page from its cache. Nongraphical agents, such as web crawlers, cache objects as temporary files on disk rather than keeping them in memory.
Netscape Navigator lets you control exactly how much memory and disk space to use for caching, and it also allows you to flush the cache. Microsoft Internet Explorer lets you control the size of your local disk cache, but less flexibly. Both have controls for how often cached responses should be validated. People generally devote 10–100 MB of disk space to their browser cache.
A browser cache is limited to just one user, or at least one user agent. Thus, it gets hits only when the user revisits a page. As we’ll see later, browser caches can store “private” responses, but shared caches cannot.
Caching Proxies
Caching proxies, unlike browser caches, service many different users at once. Since many different users visit the same popular web sites, caching proxies usually have higher hit ratios than browser caches. As the number of users increases, so does the hit ratio [Duska, Marwood and Feeley, 1997].
Caching proxies are essential services for many organizations, including ISPs, corporations, and schools. They usually run on dedicated hardware, which may be an appliance or a general-purpose server, such as a Unix or Windows NT system. Many organizations use inexpensive PC hardware that costs less than $1,000. At the other end of the spectrum, some organizations pay hundreds of thousands of dollars, or more, for high-performance solutions from one of the many caching vendors. We’ll talk more about equipment in Chapter 10 and performance in Chapter 12.
Caching proxies are normally located near network gateways (i.e., routers) on the organization’s side of its Internet connection. In other words, a cache should be located to maximize the number of clients that can use it, but it should not be on the far side of a slow, congested network link.
As I’ve already mentioned, a proxy sits between clients and servers. Unlike browser caches, a caching proxy alters the path of packets flowing through the network. A proxy splits a web request into two separate TCP connections, one to the client and the other to the server. Since the proxy forwards requests to origin servers, it hides the client’s network address. This characteristic raises a number of interesting issues that we’ll explore in later chapters.
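The two-connection split also changes the request itself: a client sends the proxy a request with a full URL, and the proxy rewrites it into the form an origin server expects before opening its own connection. The following toy sketch illustrates that rewriting step; the function name and structure are my own, not taken from any real proxy product.

```python
# Toy sketch of the request rewriting a caching proxy performs before
# forwarding a client's request over a second TCP connection to the
# origin server. Illustrative only; not code from any real proxy.
from urllib.parse import urlsplit

def rewrite_proxy_request(request_line: str) -> tuple[str, str, int]:
    """Turn a proxy-form request line ("GET http://host/path HTTP/1.1")
    into the request line the proxy sends upstream, plus the host and
    port for the proxy's server-side connection."""
    method, absolute_uri, version = request_line.split()
    parts = urlsplit(absolute_uri)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    origin_line = f"{method} {path} {version}"
    return origin_line, parts.hostname, parts.port or 80

line, host, port = rewrite_proxy_request(
    "GET http://www.example.com/index.html HTTP/1.1")
# The proxy now connects to (host, port) itself, so the origin server
# sees the proxy's network address rather than the client's.
```

Because the proxy originates the second connection, the origin server logs the proxy's address, which is exactly the address-hiding behavior described above.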
One of the most difficult aspects of operating a caching proxy is getting clients to use the service. As we’ll see in Chapter 4, configuring browsers to use a proxy is a little complicated. Users might not configure their browsers correctly, and they can simply stop using the caching proxy if they feel like it. Many organizations use interception caching to divert their network’s HTTP traffic to a cache. Network administrators like interception caching because it reduces their administrative burdens and increases the number of clients using the cache. However, the technique is controversial because it breaks protocols such as TCP and HTTP in subtle ways. I’ll cover interception caching extensively in Chapter 5.
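Explicit configuration is not limited to browsers: any HTTP client program can be pointed at a proxy. Here's a minimal sketch using Python's standard urllib; the proxy address `cache.example.com:3128` is a made-up placeholder.

```python
# Sketch: explicitly configuring a client program to send its HTTP
# requests through a caching proxy. The proxy hostname and port here
# are placeholders, not a real service.
import urllib.request

proxy = urllib.request.ProxyHandler(
    {"http": "http://cache.example.com:3128"})
opener = urllib.request.build_opener(proxy)
# Every request made through this opener goes to the proxy, which
# decides whether to answer from its cache or forward the request.
# opener.open("http://www.example.com/")  # would travel via the proxy
```

Interception caching removes exactly this step: the network diverts the traffic whether or not the client was configured, which is why administrators like it and protocol purists do not.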
Surrogates
Until recently, we didn’t have a good name for reverse proxies, server accelerators, and other devices that pretend to be origin servers. RFC 3040 defines a surrogate:
A gateway co-located with an origin server, or at a different point in the network, delegated the authority to operate on behalf of, and typically working in close co-operation with, one or more origin servers. Responses are typically delivered from an internal cache.
Surrogates are useful in a number of situations. Content distribution networks use them to replicate information at many different locations. Typically, clients are directed to the nearest surrogate that has a given resource. In this manner, it seems like all users are closer to the origin server.
Another common use for surrogates is to “accelerate” slow web servers. Of course, the acceleration is accomplished simply by caching the server’s responses. Some web servers are slow because they generate pages dynamically. For example, they may use Java to assemble an HTML page from different components stored in a relational database. If the same page is delivered to every client, a surrogate accelerates the server; if pages are customized for each client, a surrogate does not speed things up significantly.
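The acceleration effect can be sketched in a few lines: the expensive page generation runs once, and every later request for the same page is answered from the surrogate's cache. All the names below are illustrative, and the unbounded dictionary stands in for a real cache with expiration and size limits.

```python
# Toy sketch of why a surrogate accelerates a server that builds the
# same page for everyone. Names and structure are illustrative only.

generations = 0

def generate_page(url: str) -> str:
    """Stand-in for a slow dynamic backend (e.g., database queries
    plus templating)."""
    global generations
    generations += 1
    return f"<html>page for {url}</html>"

cache: dict[str, str] = {}

def surrogate_fetch(url: str) -> str:
    if url not in cache:                 # miss: do the expensive work
        cache[url] = generate_page(url)
    return cache[url]                    # hit: serve the stored copy

first = surrogate_fetch("/index.html")
second = surrogate_fetch("/index.html")
# The origin's slow generation ran only once; generations is still 1.
```

If each client received a customized page, every request would be a miss and `generate_page` would run every time, which is why surrogates don't help much with per-client content.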
Surrogates are also often used to decrypt HTTP/TLS connections. Such decryption requires a fair amount of processing power. Rather than put that burden on the origin server itself, a surrogate encrypts and decrypts the traffic. Although communication between the surrogate and the origin server is unencrypted, there is little risk of eavesdropping because the two devices are usually right next to each other.
Surrogates that cache origin server responses are not much different from caching proxies. It’s likely that any product sold as a client-side caching proxy can also function as an origin server surrogate. It may not work automatically, however. You’ll probably have to configure it specifically for surrogate operation.
The workload for a surrogate is generally much different from that of a caching proxy. In particular, a surrogate receives requests for a small number of origin servers, while client-side proxies typically forward requests to more than 100,000 different servers. Since the traffic arriving at the surrogate is focused on a small number of servers, the hit ratio is significantly higher. In many cases, surrogates achieve hit ratios of 90% or more.
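The effect of traffic concentration on hit ratio can be illustrated with a toy simulation. The numbers below are made up, and the unbounded cache is a simplification (real caches evict objects), but the shape of the result matches the point above: requests focused on few objects are reused far more often than requests spread across many.

```python
# Illustrative only: hit ratio of an idealized, unbounded cache over
# two synthetic request streams. The workloads are invented numbers,
# not measurements.

def hit_ratio(requests: list[str]) -> float:
    """Fraction of requests answered from cache, assuming every
    response is cachable and nothing is ever evicted."""
    seen: set[str] = set()
    hits = 0
    for url in requests:
        if url in seen:
            hits += 1
        else:
            seen.add(url)
    return hits / len(requests)

# Surrogate-like traffic: 100 requests focused on just 10 pages.
focused = [f"/page{i % 10}" for i in range(100)]
# Proxy-like traffic: 100 requests spread over 90 distinct pages.
dispersed = [f"/page{i % 90}" for i in range(100)]
# hit_ratio(focused) is 0.9; hit_ratio(dispersed) is only 0.1.
```

Real hit ratios also depend on cachability, object lifetimes, and cache size, but concentration of requests is the dominant reason surrogates do so well.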