The list of accelerator options is short, and setup is fairly simple. Once you have a working accelerator cache, you will have to create the appropriate access-list rules. (Since you probably want people outside your local network to be able to access your server, you cannot simply use source-IP address rulesets anymore.)
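As a rough sketch of the kind of rule this implies (the site name is only a placeholder), you could match on the destination site rather than on the client's address:

    acl accel_sites dstdomain www.example.com
    http_access allow accel_sites
    http_access deny all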
You will need to set the hostname of the accelerated server here. It's only possible to have one destination server, so you can only have one occurrence of this line. If you are going to accelerate more than one server, or transparently cache traffic (as described in the next chapter), you will have to use the word virtual instead of a hostname here.
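For example, assuming the backend server is called www.example.com (a placeholder name), the line might read as follows; the commented-out alternative shows the virtual form used for multiple servers or transparent caching:

    httpd_accel_host www.example.com
    # httpd_accel_host virtual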
Accelerated requests can only be forwarded to one port: there is no table that associates accelerated hosts with destination ports. Squid connects to whatever port you set with the httpd_accel_port option. When acting as a front-end for a web server on the local machine, you will set up the web server to listen for connections on a different port (8000, for example), and set this squid.conf option to the same value. If, on the other hand, you are forwarding requests to a set of slow backend servers, they will almost certainly be listening on port 80 (the default web-server port), and this option will need to be set to 80.
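Both cases described above might look something like this (pick one; the port numbers are only examples):

    # web server on the same machine, moved off to port 8000
    httpd_accel_port 8000

    # slow backend servers on the standard web port
    # httpd_accel_port 80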
If you use the httpd_accel_host option, Squid will stop recognizing normal proxy (cache) requests. So that your cache can function both as an accelerator and as a web cache, you will need to set the httpd_accel_with_proxy option to on.
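The option takes a simple on/off value:

    httpd_accel_with_proxy on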
A normal HTTP request consists of three values: the type of transfer (normally a GET, which is used for downloads); the path and filename to be retrieved (or executed, in the case of a CGI program); and the HTTP version.
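A request for a hypothetical page would therefore look something like this on the wire:

    GET /index.html HTTP/1.0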
This layout is fine if you only have one web site on a machine. On systems where you have more than one site, though, it makes life difficult: the request does not contain enough information, since it says nothing about the destination domain. Most operating systems allow you to have IP aliases, where you have more than one IP address per network card. By allocating one IP per hosted site, you could run one web server per IP address. Once the programs were made more efficient, one running program could act as a server for many sites: the only requirement was that you had one IP address per domain. Server programs would find out which of the IP addresses clients were connected to, and would serve data from a different directory for each IP.
There are a limited number of IP addresses, and they are fast running out. Some systems also have a limited number of IP aliases, which means that you cannot host more than a (fairly arbitrary) number of web sites on a machine. If the client were to pass the destination host name along with the path and filename, the web server could listen on only one IP address, and would find the right destination directories by looking in a simple hostname table.
From version 1.1 on, the HTTP standard supports a special Host header, which is passed along with every outgoing request. This header also makes transparent caching and acceleration easier: by pulling the host value out of the headers, Squid can translate a standard HTTP request to a cache-specific HTTP request, which can then be handled by the standard Squid code. Turning on the httpd_accel_uses_host_header option enables this translation. You will need to use this option when doing transparent caching.
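With the header, the same request also carries the destination site (the host name is, again, only a placeholder), and Squid can be told to make use of it:

    GET /index.html HTTP/1.0
    Host: www.example.com

The corresponding squid.conf line is simply:

    httpd_accel_uses_host_header on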
It's important to note that acls are checked before this translation takes place, so access rules see the original request rather than the rewritten one. You must combine this option with strict source-address checks, and so you cannot use this option to accelerate multiple backend servers (this is certain to change in a later version of Squid).
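A sketch of the sort of source-address check meant here, assuming a made-up internal network range:

    acl local_clients src 10.0.0.0/255.0.0.0
    http_access allow local_clients
    http_access deny all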