One of our initial assumptions was that weak consistency would be acceptable on the World Wide Web. This means that our protocols are optimized to reduce bandwidth and decrease latency, at the expense of occasionally returning stale data to the user. The weak-consistency assumption can be removed by using server-initiated invalidation protocols, at the cost of increased server load; this burden may become an obstacle to wide-area scalability.
As we saw in section , Worrell showed that it is often impossible to set TTL fields accurately ahead of time in a wide-area distributed system such as the World Wide Web. In an attempt to determine a context-independent TTL, he simulated the performance of a TTL-based cache. He also simulated an invalidation-protocol-based consistency scheme, and showed that the two schemes are essentially equivalent in their network bandwidth requirements when the default TTL is set to a little over a week. At that point, however, one out of every five cache requests is returned with stale data.
We hypothesized that a consistency mechanism originally designed for caching FTP objects might also be useful on the World Wide Web. The Alex cache consistency scheme is based on the assumption that the older an object is, the less likely it is to change. This makes intuitive sense: an object that was just changed may change again soon, while an object that has not changed in several months clearly changes infrequently. Specifically, the Alex scheme uses an update threshold to decide when an object should be considered stale: if

    time since last checked > update threshold × total age

then the file is considered invalid. The example we provided in section was for an update threshold of 10%. In that case, if a file is one month old, Alex will serve the file for up to three days (10% × 30 days = 3 days) before checking to see whether it has become invalid.
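To make the threshold check concrete, the following sketch shows one way the rule above might be implemented. It is an illustration under our reading of the scheme, not the actual Alex code; the names CachedObject, is_stale, and UPDATE_THRESHOLD are hypothetical.

    import time

    UPDATE_THRESHOLD = 0.10  # fraction of the object's age (the 10% example above)

    class CachedObject:
        def __init__(self, last_modified, last_checked):
            self.last_modified = last_modified  # when the origin server last changed the object
            self.last_checked = last_checked    # when the cache last validated the object

    def is_stale(obj, now=None):
        # Alex-style check: the object is stale once the time since the cache
        # last validated it exceeds a fixed fraction of the object's total age.
        if now is None:
            now = time.time()
        total_age = now - obj.last_modified
        since_check = now - obj.last_checked
        return since_check > UPDATE_THRESHOLD * total_age

    # The one-month-old file from the example above, last validated three days ago:
    DAY = 86400
    now = time.time()
    one_month_old = CachedObject(last_modified=now - 30 * DAY,
                                 last_checked=now - 3 * DAY)
    print(is_stale(one_month_old))  # False: 3 days is exactly 10% of 30 days,
                                    # so the file is still served without a check

Note that a successful validation would reset last_checked, so an object that survives many checks unchanged is probed less and less often relative to its age, which is exactly the behavior the assumption above calls for.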