There are several ways to maintain the consistency of files placed in a cache: time-to-live fields, active invalidation protocols, and client polling. Time-to-live fields are simple to implement, but they are inaccurate: it is hard to tell in advance how long an object will remain valid, so the TTL field must be set to a relatively short interval and the object reloaded frequently to avoid returning stale data. Client polling, as we saw with the Alex system, relies on the observation that the older a file is, the less likely it is to change, and therefore the less often a client needs to check whether its copy is stale.
The Alex protocol adjusts its polling interval dynamically to minimize the amount of stale data returned, but any weak-consistency protocol inevitably results in clients receiving some stale data. We study its potential for cache consistency on the Web in a later chapter.
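The adaptive scheme fits in a few lines. The following sketch (in Python; the 10% update threshold and all names are illustrative assumptions, not Alex's actual parameters) shows the core idea: a cached copy's time-to-live is a fixed fraction of how old the object was when it was fetched.

\begin{verbatim}
import time

UPDATE_THRESHOLD = 0.1   # fraction of the object's age; an assumed tuning knob

class CacheEntry:
    def __init__(self, data, last_modified):
        self.data = data
        self.last_modified = last_modified   # server-reported modification time
        self.fetched_at = time.time()        # when this copy was retrieved

    def is_fresh(self):
        # Adaptive TTL: the older the object was when fetched, the
        # longer we trust our copy before checking back with the server.
        age_at_fetch = self.fetched_at - self.last_modified
        ttl = UPDATE_THRESHOLD * age_at_fetch
        return (time.time() - self.fetched_at) < ttl
\end{verbatim}

A copy that fails this check is not necessarily wrong; it is merely one the cache can no longer trust without re-validating, which is exactly the trade-off the update threshold tunes.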
Invalidation protocols are required when loose consistency is not sufficient; many distributed file systems rely on invalidation protocols to ensure that cached copies do not become stale. Invalidation protocols depend on the server keeping track of the cached copies of its data; when a file changes, the server notifies the caches that their copies are no longer valid.
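The bookkeeping this requires is easy to see in miniature. The sketch below (Python; all class and method names are illustrative, not drawn from any particular system) records which caches hold each object and notifies them when it changes:

\begin{verbatim}
from collections import defaultdict

class InvalidationServer:
    def __init__(self):
        self.copies = defaultdict(set)    # object id -> caches holding a copy

    def register(self, object_id, cache):
        # Called whenever a cache fetches the object from this server.
        self.copies[object_id].add(cache)

    def on_change(self, object_id):
        # The object was modified: tell every tracked cache that its
        # copy is no longer valid, then forget the now-stale set.
        for cache in self.copies.pop(object_id, set()):
            cache.invalidate(object_id)

class Cache:
    def __init__(self):
        self.store = {}

    def invalidate(self, object_id):
        self.store.pop(object_id, None)
\end{verbatim}

Even this toy version makes the costs visible: the per-object set of caches is state the server must maintain indefinitely, and in this sketch delivery is a synchronous call; a real protocol must also cope with unreachable caches, a failure mode we return to below.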
Kurt Worrell, as part of his master's thesis, examined the use of TTL fields on the World Wide Web and determined through simulation that they are not as effective as an invalidation protocol that allows the primary host to inform caches when the objects they hold become stale. We challenge this result in a later chapter.
One flaw in this work is its focus on invalidation to the exclusion of weaker consistency protocols such as Alex-style polling. Invalidation protocols may be efficient in terms of network bandwidth, but they have several non-trivial costs. For one thing, servers must be rewritten to support server-directed invalidation; client-directed protocols such as the Alex polling protocol or TTL fields require no server modification. For another, servers must keep track of where their objects are currently cached. This could be a prohibitive limit on a system's scalability unless caches are organized hierarchically, so that the server only has to keep track of the highest-level caches. Worrell assumes this model to make his invalidation protocols feasible, but in doing so introduces another potential obstacle to scalability: he also assumes that the inclusion principle holds. The inclusion principle states that any time an object resides in a lower-level cache, it must also reside in every higher-level cache above it in the hierarchy.
This makes sense for an invalidation protocol, since invalidation messages then only need to be delivered to the highest-level caches, but it can lead to an inefficient use of disk space. Every object cached at a lower level must also be cached at the levels above it, and because of the fan-in of a hierarchy, a single high-level cache might need to hold the contents of hundreds of low-level caches. Otherwise objects may be evicted from a low-level cache simply because a high-level cache is full, and space available in the low-level caches may go unused for lack of space higher up.
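To make the coupling concrete, here is a toy hierarchy (Python; the capacities, the stand-in victim choice, and all names are assumptions for illustration) that preserves inclusion by filling upward on every insert and cascading every higher-level eviction downward:

\begin{verbatim}
class HierarchicalCache:
    def __init__(self, capacity, parent=None):
        self.capacity = capacity
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)
        self.store = {}

    def insert(self, object_id, data):
        if self.parent is not None:
            self.parent.insert(object_id, data)  # inclusion: fill upward first
        if object_id not in self.store and len(self.store) >= self.capacity:
            self.evict(next(iter(self.store)))   # stand-in for a real policy
        self.store[object_id] = data

    def evict(self, object_id):
        # An eviction at this level must cascade downward, or inclusion
        # breaks: a full high-level cache pushes objects out of every
        # low-level cache beneath it.
        self.store.pop(object_id, None)
        for child in self.children:
            child.evict(object_id)
\end{verbatim}

With, say, a parent of capacity 100 serving a hundred children of capacity 100 each, the children can together hold at most 100 distinct objects, which is precisely the under-utilization described above.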
Once again, a client-directed cache consistency mechanism removes this requirement, since there is no need to organize the caches hierarchically. This also means that caches can be created and taken down easily, since there is no need to fit them into a hierarchical tree. Client-directed consistency also removes one of the most serious failure modes of invalidation protocols: if a cache is temporarily unreachable, the server must keep trying to contact it, because with no other invalidation mechanism available the cache will never learn that its copy is invalid unless it hears from the server.
The Harvest system was designed to solve the problem of resource location, but it also includes a hierarchical caching subsystem. This subsystem functions like a Web proxy: clients request Web objects through the local Harvest cache. On a cache miss, the local cache queries each of its neighbors, its parent, and the object's primary host. Each neighbor and parent replies with a ``hit'' or a ``miss'', and if the home site answers a UDP echo probe it counts as a ``hit'' as well. The object is then requested from whichever site returned a ``hit'' fastest. If all caches missed and the primary host responded more slowly than the parent cache, the local cache requests the object through the parent; otherwise it requests the object directly from the primary host.
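A rough sketch of this resolution step might look like the following (Python; the probe/fetch interface on each site, the thread-pool fan-out, and the timeout are all assumptions made for illustration, whereas the real system sends small UDP probe packets):

\begin{verbatim}
import concurrent.futures

def resolve_miss(object_id, neighbors, parent, home_site, timeout=2.0):
    sites = list(neighbors) + [parent, home_site]
    replies = []   # (site, hit?) pairs in the order the answers arrived
    pool = concurrent.futures.ThreadPoolExecutor(len(sites))
    futures = {pool.submit(s.probe, object_id): s for s in sites}
    try:
        for fut in concurrent.futures.as_completed(futures, timeout=timeout):
            replies.append((futures[fut], fut.result()))
    except concurrent.futures.TimeoutError:
        pass   # sites that never answered are treated as misses
    pool.shutdown(wait=False)   # do not wait for stragglers

    for site, hit in replies:              # replies are fastest-first
        if hit:
            return site.fetch(object_id)   # the first 'hit' wins

    # Everyone missed: go through the parent only if the home site was
    # the slower of the two to answer (or did not answer at all).
    answered = [site for site, _ in replies]
    if parent in answered and (home_site not in answered or
            answered.index(parent) < answered.index(home_site)):
        return parent.fetch(object_id)
    return home_site.fetch(object_id)
\end{verbatim}

The design uses the probe round-trip times as a cheap proxy for which site can deliver the object fastest, which is why the order of the replies, not just their contents, drives the final choice.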
This system is still primitive: caches must be configured manually, with information about parents and neighbors set in a configuration file. On the other hand, the Harvest results show that for most requests their cache returns data twice as fast as other widely available Web proxies.