The primary function of an inter-cache protocol is to reduce object duplication between caches, increasing your hit rate. This is not always what you want, though: if you have a large network with widely separated caches, you may wish to store objects in each cache even if one of your other caches already holds a copy. By keeping objects close to your users, you reduce their network latency (even if you end up "wasting" disk space in the process).
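As an illustration, here is a minimal squid.conf sketch of that trade-off; the hostname and ports are placeholders, not taken from any real setup:

    # Listen for ICP queries from peer caches (3130 is the
    # conventional ICP port).
    icp_port 3130

    # Ask the cache at the other site (via ICP) whether it
    # already holds an object before fetching it ourselves.
    cache_peer cache2.example.com sibling 3128 3130

    # Adding the proxy-only option would stop this cache from
    # keeping its own copy of objects fetched from the sibling,
    # saving disk space but giving up the "local copy" latency win:
    #cache_peer cache2.example.com sibling 3128 3130 proxy-only

Without proxy-only, each cache keeps its own copy of whatever it serves, which is exactly the "wasted" disk space mentioned above.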
Inter-branch traffic can be reduced by placing a cache at each branch. Since the caches avoid duplicating objects between them, each disk you add to a cache adds space to the hierarchy as a whole, increasing your overall hit rate. This is far better than having caches at branches that do not communicate with one another, since with that setup you end up with multiple copies of each cached object: one per server. Clients can also be configured to query another branch's cache if their local one goes down, adding redundancy (a sketch of this follows below).
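Client-side failover of this kind is usually arranged with a proxy auto-configuration (PAC) file, since browsers work through the returned proxy list in order. A minimal sketch, assuming two hypothetical branch cache hostnames:

    function FindProxyForURL(url, host) {
        // Try the local branch cache first; if it is down, the
        // browser falls back to the other branch's cache, and
        // finally goes direct if both are unreachable.
        return "PROXY cache-branch1.example.com:3128; " +
               "PROXY cache-branch2.example.com:3128; DIRECT";
    }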
If overloaded, a central cache machine can become a network bottleneck. Unlike a single cache machine, the caches in a hierarchy can be placed close to all parts of the network, and together they can handle a much larger load, with a near-linear increase in performance for each machine added. A loaded central cache can thus be replaced with a cluster of low-load caches without wasting disk space.
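Here is a sketch of how one member of such a cluster might be configured (the hostnames are placeholders; client load would be spread across the members by DNS round-robin or a similar mechanism):

    # squid.conf on cache1; cache2 and cache3 are the other
    # members of the cluster.
    icp_port 3130
    cache_peer cache2.example.com sibling 3128 3130 proxy-only
    cache_peer cache3.example.com sibling 3128 3130 proxy-only

    # proxy-only keeps this machine from storing objects it
    # fetches from a sibling, so each object is held on only
    # one of the cluster's disks.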
Integrating your caches into a public cache hierarchy can increase your hit rate, since you increase your effective disk space by accessing other machines' object stores. By choosing peers carefully, you can reduce latency, or reduce costs by saving Internet bandwidth (if communicating with your peers is cheaper than going direct to the source). On the other hand, communicating with peers over a loaded or high-latency line can slow your cache down. It's best to measure your peer response times periodically to check that the peering arrangement is still beneficial. You can use the client program to check cache response times, and the cache manager (discussed in Chapter 12) to look at Squid's view of its peers.
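For example, you could time a request through a peer with the client program, and then ask your own Squid for its per-peer statistics through the cache manager's server_list page. The hostnames and test URL below are placeholders, and option details vary between Squid versions, so treat this as a sketch:

    # Time a fetch through the peer to gauge its response time.
    time client -h cache-peer.example.com -p 3128 \
        http://www.example.com/ > /dev/null

    # Retrieve Squid's own view of each peer (round-trip times,
    # hit ratios) from the cache manager.
    client -h localhost -p 3128 cache_object://localhost/server_list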