When upgrading Slony-I, the installation on all nodes in a cluster must be upgraded at once, using the slonik command UPDATE FUNCTIONS.
While this requires temporarily stopping replication, it does not necessarily require an outage for applications that submit updates.
The proper upgrade procedure is thus:
Stop the slon processes on all nodes (that is, the old version of slon).
Install the new version of slon software on all nodes.
Execute a slonik script containing the command update functions (id = [whatever]); for each node in the cluster (see the sample script after this list).
Start all slon processes.
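As an illustrative sketch only, a slonik script for the update functions step on a hypothetical two-node cluster might look like the following; the cluster name, connection strings, and node IDs are assumptions, not values from this document:

    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
    node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

    # load the new version of the stored functions onto each node
    update functions (id = 1);
    update functions (id = 2);

Running this through slonik while the slon processes are stopped brings the stored procedures on both nodes up to the newly installed version.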
The overall operation is relatively safe: if there is any mismatch between component versions, slon will refuse to start up, which protects against corruption.
You need to be sure that the C library containing SPI trigger functions has been copied into place in the PostgreSQL build. There are multiple possible approaches to this:
The easiest and safest way to handle this is to have two separate PostgreSQL builds, one for each Slony-I version; the postmaster is shut down and then restarted against the "new" build. That approach requires a brief database outage on each node.
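A minimal sketch of that switch, assuming the old and new builds are of the same PostgreSQL major version, live under /opt/pgsql-old and /opt/pgsql-new, and share the data directory /var/lib/pgsql/data (all of these paths are illustrative assumptions):

    # stop the postmaster that was started from the old build
    /opt/pgsql-old/bin/pg_ctl stop -D /var/lib/pgsql/data -m fast

    # start a postmaster from the new build, which carries the new Slony-I library
    /opt/pgsql-new/bin/pg_ctl start -D /var/lib/pgsql/data -l /var/log/pgsql/postmaster.log

The window between the stop and the start is the brief database outage mentioned above.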
While that approach has been found to be easier and safer, nothing prevents one from carefully copying the new version's Slony-I components into place to overwrite the old version as the "install" step. That might not work on Windows™ if it locks library files that are in use.
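Purely as an illustration of what "copying into place" can involve, and assuming the file names below match the Slony-I version in use (they vary between releases), the new components might be copied into the directories reported by pg_config:

    # directories of the running PostgreSQL installation
    PKGLIBDIR=$(pg_config --pkglibdir)
    SHAREDIR=$(pg_config --sharedir)
    BINDIR=$(pg_config --bindir)

    # overwrite the old shared library, SQL scripts, and client programs
    cp slony1_funcs.so                  "$PKGLIBDIR/"
    cp slony1_base.sql slony1_funcs.sql "$SHAREDIR/"
    cp slon slonik                      "$BINDIR/"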
If you build Slony-I from sources on the same system on which it is to be deployed, overwriting the old version with the new is as easy as make install. There is no need to restart a database backend; you only need to stop the slon processes, run the UPDATE FUNCTIONS script, and start new slon processes.
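A sketch of that in-place rebuild, assuming an unpacked Slony-I source tree and that /usr/local/pgsql/bin is the directory holding pg_config for the deployed PostgreSQL build (both are assumptions; check your version's own installation notes for the exact configure options):

    # in the Slony-I source directory
    ./configure --with-pgconfigdir=/usr/local/pgsql/bin
    make
    make install    # overwrites the old library, SQL scripts, slon, and slonik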
Unfortunately, this approach requires having a build environment on the same host as the deployment. That may not be consistent with efforts to use common PostgreSQL and Slony-I binaries across a set of nodes.
With the two-builds approach, the old PostgreSQL build, with its old Slony-I components, remains on disk alongside the new one, and the postmaster keeps using it until restarted. In order to switch to the new Slony-I build, you need to restart the PostgreSQL postmaster against the new build, thereby interrupting applications, so that it becomes aware of the location of the new components.