We have shown that push-caching is an efficient way to replicate documents across the Internet in order to reduce server load and network traffic. The last question we must address is whether push-caching scales to handle the millions of servers that will soon be using the Web. Push-caching depends on servers voluntarily accepting replicated documents, but no server will accept replicated documents if doing so leads to a higher overall load.
An analysis of the NCSA push-caching simulations shows that this is not the case. As we saw in Figure , the primary host's traffic was reduced from 5563 Mb to 2604 Mb when using miles to predict distance, a savings of 2959 Mb. Data was pushed to 34 servers, and the average traffic pushed onto each server was 87 Mb. If we assume that every server that replicates pages is also willing to accept replicated pages, that every server that replicates pages is as popular as the NCSA servers, and that servers are selected uniformly, then since each server replicates to 34 other servers, each server will also accept pages from 34 other servers. We may therefore calculate the increased load to be $34 \times 87~\mbox{Mb} = 2958~\mbox{Mb}$, which is the same amount by which each server's load was decreased initially.
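Stated generally (a one-line check, with $n$ denoting the number of peer servers and $t$ the average traffic pushed onto each peer; these symbols are introduced here for illustration and do not appear in the simulation itself): under the symmetry assumptions above, a server that saves $n t$ Mb by replicating its pages also gains $n t$ Mb by accepting replicas in return, so

\[
\mbox{load gained} \;=\; n \times t \;=\; \mbox{load saved} ,
\]

and the exchange is load-neutral for every participant.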
Push-caching has therefore decreased the amount of network traffic without significantly affecting each primary server's load. If only primary servers could store replicated files, push-caching would be of dubious value. The virtue of push-caching, however, is that it is very easy to add additional servers. Proxy-servers, for example, are ideal candidates for accepting replicated objects: they already run web caching software, are frequently attached to large disks, and are usually not hidden behind firewalls. Push-caching can distribute the load from overloaded primary servers onto proxy-servers and other servers without imposing an unacceptable load, because any server caching replicated objects may refuse additional objects at any time.
We may also remove several of our assumptions to make the scalability argument more general. Not every server that replicates pages must be willing to accept pages in return, as long as a sufficient number of push-cache servers exists. In an interactive television network, for example, it might make sense to deploy many push-cache servers to hold replicated objects and only a few digital libraries, each of which stores every available film.
We may also remove the assumption that servers replicating pages are equally popular, although in that case a server could gain more load than it saved. Such a server would need to protect itself by refusing to accept new objects once its load rises too high.
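A minimal sketch of such an admission check appears below, assuming a server measures its load as the fraction of cache capacity in use; the class and method names (PushCacheServer, offer_object) and the threshold value are illustrative, not taken from the actual system.

\begin{verbatim}
# Sketch of the refusal policy described above: a push-cache server
# accepts a replicated object only while its load stays below a
# threshold. Refusing is always safe, because the primary server
# simply continues to serve the object itself.

LOAD_THRESHOLD = 0.75  # assumed: refuse new objects above 75% utilization

class PushCacheServer:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.stored_mb = 0.0

    def current_load(self):
        """Fraction of cache capacity currently in use."""
        return self.stored_mb / self.capacity_mb

    def offer_object(self, size_mb):
        """Accept a replicated object, or refuse if load is too high."""
        if self.current_load() >= LOAD_THRESHOLD:
            return False  # protect against gaining more load than we save
        self.stored_mb += size_mb
        return True
\end{verbatim}

A server built this way can never be forced to gain more load than it saves: in the worst case it refuses every offered object and behaves exactly as it did before joining the system.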
Distributing the load created by popular items helps the Web scale as its population grows, but it creates a potential bottleneck at the primary server for two reasons: clients must currently contact the primary server to locate nearby replicas, and replicas must contact the primary server to maintain consistency. We will discuss cache consistency in the next chapter; resource location places far less of a burden on the primary server than serving files does, since a single redirect message can prevent all future references from a given client from ever reaching the primary.
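The redirect mechanism is not specified in detail here, so the sketch below is only one plausible rendering of it: the primary server answers a client's first request with the address of a nearby replica, and the client caches that answer so that later references bypass the primary entirely. The replica table, region keys, and function names are assumptions.

\begin{verbatim}
# Sketch of redirect-based resource location: the primary server pays
# for one small redirect message per client per document, after which
# the client fetches the document directly from a nearby replica.

replica_for_region = {            # hypothetical replica table
    "eu": "replica-eu.example.org",
    "us-west": "replica-west.example.org",
}

def primary_handle_request(client_region, url):
    """Return (status, location); 302 points the client at a replica."""
    replica = replica_for_region.get(client_region)
    if replica is None:
        return (200, url)         # no replica nearby: serve it ourselves
    return (302, "http://" + replica + url)

class Client:
    def __init__(self, region):
        self.region = region
        self.redirect_cache = {}  # url -> replica location

    def fetch(self, url):
        # All future references use the cached mapping, so the primary
        # server is contacted at most once per document per client.
        if url in self.redirect_cache:
            return self.redirect_cache[url]
        status, location = primary_handle_request(self.region, url)
        if status == 302:
            self.redirect_cache[url] = location
        return location
\end{verbatim}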
The last potential bottleneck in push-caching is the registry. The registry is essentially a database of every available push-cache server, and it must handle requests arriving constantly from every push-cache server on the Internet. The registry must therefore scale freely, up to millions of hosts, and must be very reliable. This need for scalability and reliability implies that the registry must be a widely-replicated distributed database. Luckily, however, the registry does not need tight consistency. Because the registry selects lists of servers at random, a given instance of the database does not need to know about all the available servers in order to provide a representative sample; knowing most of them suffices.
As long as updates propagate quickly enough among the instances of the registry that they do not frequently include inactive servers in their lists of available servers, the system will function smoothly. The refdbms distributed bibliographic database system serves as a good model for such a distributed database.
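To make the loose-consistency argument concrete, the sketch below models a single registry instance that answers queries with a uniformly random sample of the servers it knows about and merges updates lazily from its peers. The interface shown is an assumption for illustration, not the refdbms interface.

\begin{verbatim}
# Sketch of one loosely-consistent registry instance. Each instance
# holds a possibly stale view of the available push-cache servers;
# because queries are answered with a random sample, missing a few
# recent additions still yields a representative list.

import random
import time

STALE_AFTER = 3600.0  # assumed: drop servers not heard from in an hour

class RegistryInstance:
    def __init__(self):
        self.servers = {}  # host -> timestamp of last registration

    def register(self, host):
        """Called by a push-cache server announcing its availability."""
        self.servers[host] = time.time()

    def merge_from_peer(self, peer_servers):
        """Lazy update propagation: keep the newest timestamp per host."""
        for host, ts in peer_servers.items():
            if ts > self.servers.get(host, 0.0):
                self.servers[host] = ts

    def sample(self, k):
        """Return up to k active servers chosen uniformly at random."""
        now = time.time()
        active = [h for h, ts in self.servers.items()
                  if now - ts < STALE_AFTER]
        return random.sample(active, min(k, len(active)))
\end{verbatim}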