Not all of these are, strictly speaking, "frequently asked;" some represent trouble found that seemed worth documenting.
1.1. I am using Frotznik Freenix 4.5, with its FFPM (Frotznik Freenix Package Manager) package management system. It comes with FFPM packages for PostgreSQL 7.4.7, which are what I am using for my databases, but they don't include Slony-I in the packaging. How do I add Slony-I to this?
Frotznik Freenix is new to me, so it's a bit dangerous to give really hard-and-fast definitive answers.
The answers differ somewhat between the various combinations of PostgreSQL and Slony-I versions; the newer versions are generally somewhat easier to cope with than the older ones. In general, you almost certainly need to compile Slony-I from sources; depending on the versions of both Slony-I and PostgreSQL, you may need to compile PostgreSQL from scratch. (Whether you need to use that PostgreSQL compile is another matter; you probably don't...)
Slony-I version 1.0.5 and earlier require having a fully configured copy of PostgreSQL sources available when you compile Slony-I.
Hopefully you can make this configuration closely match the configuration in use by the packaged version of PostgreSQL; check that configuration using the command pg_config --configure.
Slony-I version 1.1 simplifies this considerably; it does not require the full copy of PostgreSQL sources, but can, instead, refer to the various locations where PostgreSQL libraries, binaries, configuration, and #include files are located.
PostgreSQL 8.0 and higher is generally easier to deal with in that a "default" installation includes all of the #include files.
If you are using an earlier version of PostgreSQL, you may find it necessary to resort to a source installation if the packaged version did not install the "server #include" files, which are installed by the command make install-all-headers.
In effect, the "worst case" scenario takes place if you are using a version of Slony-I earlier than 1.1 with an "elderly" version of PostgreSQL, in which case you can expect to need to compile PostgreSQL from scratch in order to have everything that the Slony-I compile needs even though you are using a "packaged" version of PostgreSQL.
If you are running a recent PostgreSQL and a recent Slony-I, then the codependencies can be fairly small, and you may not need extra PostgreSQL sources. These improvements should ease the production of Slony-I packages so that you might soon even be able to hope to avoid compiling Slony-I.
1.2. I tried building Slony-I 1.1 and got the following error message:
configure: error: Headers for libpqserver are not found in the includeserverdir. This is the path to postgres.h. Please specify the includeserverdir with --with-pgincludeserverdir=<dir>
You are almost certainly running PostgreSQL version 7.4 or earlier, where server headers are not installed by default if you just do a make install of PostgreSQL.
You need to install server headers when you install PostgreSQL via the command make install-all-headers.
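As a quick check, something along these lines may confirm where the server headers ended up and how to point the Slony-I configure at them; the paths are illustrative, and the --includedir-server option requires a reasonably recent pg_config:

$ pg_config --includedir-server
/usr/local/pgsql/include/server
$ ls /usr/local/pgsql/include/server/postgres.h
/usr/local/pgsql/include/server/postgres.h
$ ./configure --with-pgincludeserverdir=/usr/local/pgsql/include/server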
1.3. Slony-I seemed to compile fine; now, when I run a slon, some events are moving around, but no replication is taking place.
Slony logs might look like the following:
DEBUG1 remoteListenThread_1: connected to 'host=host004 dbname=pgbenchrep user=postgres port=5432' ERROR remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3, ev_data4, ev_data5, ev_data6, ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno > '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress
Alternatively, it may appear like...
ERROR remoteListenThread_2: "select ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3, ev_data4, ev_data5, ev_data6, ev_data7, ev_data8 from "_sl_p2t2".sl_event e where (e.ev_origin = '2' and e.ev_seqno > '0') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Error 0
On AIX and Solaris (and possibly elsewhere), both Slony-I and PostgreSQL must be compiled with the --enable-thread-safety option. The above results when PostgreSQL isn't so compiled.
What breaks here is that libc (threadsafe) and libpq (non-threadsafe) use different memory locations for errno, which leads to the request failing.
Problems like this crop up with disadmirable regularity on AIX and Solaris; it may take something of an "object code audit" to make sure that ALL of the necessary components have been compiled and linked with --enable-thread-safety.
For instance, I ran into the problem once, on Solaris, where LD_LIBRARY_PATH had been set to point to libraries from an old PostgreSQL compile. That meant that even though the database had been compiled with --enable-thread-safety, and slon had been compiled against that, slon was being dynamically linked to the "bad old thread-unsafe version," so slon didn't work. It wasn't clear that this was the case until I ran ldd against slon.
Note that with libpq version 7.4.2, on Solaris, a further thread patch was required; similar is also required for PostgreSQL version 8.0.
If the DSNs are wrong, then slon instances can't connect to the nodes.
This will generally lead to nodes remaining entirely untouched.
Recheck the connection configuration. By the way, since slon links to libpq, password information may also be coming from $HOME/.pgpass; check whether the right or wrong authentication information is being filled in from there.
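For reference, each .pgpass line has the form hostname:port:database:username:password; a hypothetical entry matching the conninfo shown earlier might look like:

host004:5432:pgbenchrep:postgres:somepassword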
2.2. I created a "superuser" account, slony, to run replication activities. As suggested, I set it up as a superuser, via the following query: update pg_shadow set usesuper = 't' where usename in ('slony', 'molly', 'dumpy'); (that command also deals with other users I set up to run vacuums and backups).
Unfortunately, I ran into a problem the next time I subscribed to a new set.
DEBUG1 copy_set 28661 DEBUG1 remoteWorkerThread_1: connected to provider DB DEBUG2 remoteWorkerThread_78: forward confirm 1,594436 received by 78 DEBUG2 remoteWorkerThread_1: copy table public.billing_discount ERROR remoteWorkerThread_1: "select "_mycluster".setAddTable_int(28661, 51, 'public.billing_discount', 'billing_discount_pkey', 'Table public.billing_discount with candidate primary key billing_discount_pkey'); " PGRES_FATAL_ERROR ERROR: permission denied for relation pg_class CONTEXT: PL/pgSQL function "altertableforreplication" line 23 at select into variables PL/pgSQL function "setaddtable_int" line 76 at perform WARN remoteWorkerThread_1: data copy for set 28661 failed - sleep 60 seconds
This continued to fail, over and over, until I restarted the slon to connect as postgres instead.
The problem is fairly self-evident; permission is being denied on the system table, pg_class.
The "fix" is thus:
update pg_shadow set usesuper = 't', usecatupd='t' where usename = 'slony';
In version 8.1 and higher, you may also need the following:
update pg_authid set rolcatupdate = 't', rolsuper='t' where rolname = 'slony';
2.3. I'm trying to get a slave subscribed, and get the following messages in the logs:
DEBUG1 copy_set 1 DEBUG1 remoteWorkerThread_1: connected to provider DB WARN remoteWorkerThread_1: transactions earlier than XID 127314958 are still in progress WARN remoteWorkerThread_1: data copy for set 1 failed - sleep 60 seconds
There is evidently some reasonably old outstanding transaction blocking Slony-I from processing the sync. You might want to take a look at pg_locks to see what's up:
sampledb=# select * from pg_locks where transaction is not null order by transaction;
 relation | database | transaction |   pid   |     mode      | granted
----------+----------+-------------+---------+---------------+---------
          |          |   127314921 | 2605100 | ExclusiveLock | t
          |          |   127326504 | 5660904 | ExclusiveLock | t
(2 rows)
See? 127314921 is indeed older than 127314958, and it's still running.
A long running G/L report, a runaway RT3 query, or a pg_dump will all open up transactions that may run for substantial periods of time. Until they complete or are interrupted, you will continue to see the message "data copy for set 1 failed - sleep 60 seconds".
By the way, if there is more than one database on the PostgreSQL cluster, and activity is taking place on the OTHER database, that will lead to there being "transactions earlier than XID whatever" being found to be still in progress. The fact that it's a separate database on the cluster is irrelevant; Slony-I will wait until those old transactions terminate.
2.4. Same as the above. What I forgot to mention, as well, was that I was trying to add TWO subscribers, concurrently.
That doesn't work out: Slony-I can't work on the COPY commands concurrently. See src/slon/remote_worker.c, function copy_set(). Checking the pid reported in the pg_locks output above shows what that first transaction is doing:
$ ps -aef | egrep '[2]605100'
postgres 2605100  205018   0 18:53:43  pts/3  3:13 postgres: postgres sampledb localhost COPY
This happens to be a COPY transaction involved in setting up the subscription for one of the nodes. All is well; the system is busy setting up the first subscriber; it won't start on the second one until the first one has completed subscribing. That represents one possible cause.
This has the (perhaps unfortunate) implication that you cannot populate two slaves concurrently from a single provider. You have to subscribe one to the set, and only once it has completed setting up the subscription (copying table contents and such) can the second subscriber start setting up the subscription.
2.5. We got bitten by something we didn't foresee when completely uninstalling a slony replication cluster from the master and slave...
Warning: MAKE SURE YOU STOP YOUR APPLICATION RUNNING AGAINST YOUR MASTER DATABASE WHEN REMOVING THE WHOLE SLONY CLUSTER, or at least re-cycle all your open connections after the event!
The connections "remember" or refer to OIDs which are removed by the uninstall node script. And you will get lots of errors as a result...
There are two notable areas of PostgreSQL that cache query plans and OIDs:
Prepared statements
pl/pgSQL functions
The problem isn't particularly a Slony-I one; it would occur any time such significant changes are made to the database schema. It shouldn't be expected to lead to data loss, but you'll see a wide range of OID-related errors.
The problem occurs when you are using some sort of "connection pool" that keeps recycling old connections. If you restart the application, or if the pool drops those connections and creates new ones, the new connections will build new query plans, and the errors will go away.
In our code we drop the connection on any error we cannot map to an expected condition, which eventually recycles all connections after just one such unexpected error per connection. Of course, if the error surfaces as a constraint violation, which is a recognized condition, this won't help. And if the problem is persistent, the connections will keep recycling, which defeats the benefit of the pooling; in that case the pooling code could also alert an administrator to take a look...
2.6. I upgraded my cluster to Slony-I version 1.2. I'm now getting the following notice in the logs:
NOTICE: Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated
Both sl_log_1 and sl_log_2 are continuing to grow, and sl_log_1 is never getting truncated. What's wrong?
This is symptomatic of the same issue as above with dropping replication: if there are still old connections lingering that are using old query plans referencing the old stored functions, inserts keep going to sl_log_1, so the log switch can never finish and sl_log_1 cannot be truncated.
Closing those connections and opening new ones will resolve the issue.
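To see which backends have been hanging around since before the upgrade (and are therefore candidates for recycling), a query along these lines may help; it assumes PostgreSQL 8.1 or later, where pg_stat_activity includes backend_start:

select procpid, usename, backend_start, current_query
    from pg_stat_activity
    order by backend_start;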
In the longer term, there is an item on the PostgreSQL TODO list to implement dependency checking that would flush cached query plans when dependent objects change.
We noticed this happening when we wanted to re-initialize a node, where we had configuration thus:
Node 1 - provider
Node 2 - subscriber to node 1 - the node we're reinitializing
Node 3 - subscriber to node 2 - node that should keep replicating
The subscription for node 3 was changed to have node 1 as provider, and we did DROP SET / SUBSCRIBE SET for node 2 to get it repopulating.
Unfortunately, replication suddenly stopped to node 3.
The problem was that there was not a suitable set of "listener paths" in sl_listen to allow the events from node 1 to propagate to node 3. The events were going through node 2, and blocking behind the SUBSCRIBE SET event that node 2 was working on.
The following slonik script dropped out the listen paths where node 3 had to go through node 2, and added in direct listens between nodes 1 and 3.
cluster name = oxrslive;
node 1 admin conninfo='host=32.85.68.220 dbname=oxrslive user=postgres port=5432';
node 2 admin conninfo='host=32.85.68.216 dbname=oxrslive user=postgres port=5432';
node 3 admin conninfo='host=32.85.68.244 dbname=oxrslive user=postgres port=5432';
node 4 admin conninfo='host=10.28.103.132 dbname=oxrslive user=postgres port=5432';
try {
    store listen (origin = 1, receiver = 3, provider = 1);
    store listen (origin = 3, receiver = 1, provider = 3);
    drop listen (origin = 1, receiver = 3, provider = 2);
    drop listen (origin = 3, receiver = 1, provider = 2);
}
Immediately after this script was run, SYNC events started propagating again to node 3. This points out two principles:
If you have multiple nodes, and cascaded subscribers, you need to be quite careful in populating the STORE LISTEN entries, and in modifying them if the structure of the replication "tree" changes.
Version 1.1 provides better tools to help manage this.
The issues of "listener paths" are discussed further in Section 9.
2.8. I was starting a slon, and got the following "FATAL" messages in its logs. What's up???
2006-03-29 16:01:34 UTC CONFIG main: slon version 1.2.0 starting up
2006-03-29 16:01:34 UTC DEBUG2 slon: watchdog process started
2006-03-29 16:01:34 UTC DEBUG2 slon: watchdog ready - pid = 28326
2006-03-29 16:01:34 UTC DEBUG2 slon: worker process created - pid = 28327
2006-03-29 16:01:34 UTC CONFIG main: local node id = 1
2006-03-29 16:01:34 UTC DEBUG2 main: main process started
2006-03-29 16:01:34 UTC CONFIG main: launching sched_start_mainloop
2006-03-29 16:01:34 UTC CONFIG main: loading current cluster configuration
2006-03-29 16:01:34 UTC CONFIG storeSet: set_id=1 set_origin=1 set_comment='test set'
2006-03-29 16:01:34 UTC DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
2006-03-29 16:01:34 UTC DEBUG2 main: last local event sequence = 7
2006-03-29 16:01:34 UTC CONFIG main: configuration complete - starting threads
2006-03-29 16:01:34 UTC DEBUG1 localListenThread: thread starts
2006-03-29 16:01:34 UTC FATAL localListenThread: "select "_test1538".cleanupNodelock(); insert into "_test1538".sl_nodelock values ( 1, 0, "pg_catalog".pg_backend_pid()); " - ERROR: duplicate key violates unique constraint "sl_nodelock-pkey"
2006-03-29 16:01:34 UTC FATAL Do you already have a slon running against this node?
2006-03-29 16:01:34 UTC FATAL Or perhaps a residual idle backend connection from a dead slon?
The table sl_nodelock is used as an "interlock" to prevent two slon processes from trying to manage the same node at the same time. The slon tries inserting a record into the table; it can only succeed if it is the only node manager.
This error message is typically a sign that you have started up a second slon process for a given node. The slon asks the obvious question: "Do you already have a slon running against this node?"
Supposing you experience some sort of network outage, the connection between slon and database may fail, and the slon may figure this out long before the PostgreSQL instance it was connected to does. The result is that there will be some number of idle connections left on the database server, which won't be closed out until TCP/IP timeouts complete, which seems to normally take about two hours. For that two hour period, the slon will try to connect, over and over, and will get the above fatal message, over and over.
An administrator may clean this out by logging onto the server and issuing kill -2 to any of the offending connections. Unfortunately, since the problem took place within the networking layer, neither PostgreSQL nor Slony-I has a direct way of detecting this.
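A sketch of what that manual cleanup might look like from the shell; the process ID, database name, and client address shown are purely illustrative:

$ ps -ef | grep 'postgres:' | grep idle
postgres  27093      1  0 11:02 ?  00:00:00 postgres: slony payroll 10.1.2.3 idle
$ kill -2 27093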
You can mostly avoid this by making sure that slon processes always run somewhere nearby the server that each one manages. If the slon runs on the same server as the database it manages, any "networking failure" that could interrupt local connections would be likely to be serious enough to threaten the entire server.
2.9. When can I shut down slon processes?
Generally, it's no big deal to shut down a slon process. Each one is "merely" a PostgreSQL client, managing one node, which spawns threads to manage receiving events from other nodes.
The "event listening" threads are no big deal; they are doing nothing fancier than periodically checking remote nodes to see if they have work to be done on this node. If you kill off the slon these threads will be closed, which should have little or no impact on much of anything. Events generated while the slon is down will be picked up when it is restarted.
The "node managing" thread is a bit more interesting; most of the time, you can expect, on a subscriber, for this thread to be processing SYNC events. If you shut off the slon during an event, the transaction will fail, and be rolled back, so that when the slon restarts, it will have to go back and reprocess the event.
The only situation where this will cause particular "heartburn" is if the event being processed was one which takes a long time to process, such as COPY_SET for a large replication set.
The other thing that might cause trouble is if the slon runs fairly distant from nodes that it connects to; you could discover that database connections are left idle in transaction. This would normally only occur if the network connection is destroyed without either slon or database being made aware of it. In that case, you may discover that "zombied" connections are left around for as long as two hours if you don't go in by hand and kill off the PostgreSQL backends.
There is one other case that could cause trouble; when the slon managing the origin node is not running, no SYNC events run against that node. If the slon stays down for an extended period of time, and something like Section 6.2 isn't running, you could be left with one big SYNC to process when it comes back up. But that is only a concern if that slon is down for an extended period of time; shutting it down for a few seconds shouldn't cause any great problem.
3.1. Slonik fails - cannot load PostgreSQL library - PGRES_FATAL_ERROR load '$libdir/xxid';
When I run the sample setup script I get an error message similar to: stdin:64: PGRES_FATAL_ERROR load '$libdir/xxid'; - ERROR: LOAD: could not open file '$libdir/xxid': No such file or directory
Evidently, you haven't got the xxid.so library in the $libdir directory that the PostgreSQL instance is using. Note that the Slony-I components need to be installed in the PostgreSQL software installation for each and every one of the nodes, not just on the origin node.
This may also point to there being some other mismatch between the PostgreSQL binary instance and the Slony-I instance. If you compiled Slony-I yourself, on a machine that may have multiple PostgreSQL builds "lying around," it's possible that the slon or slonik binaries are asking to load something that isn't actually in the library directory for the PostgreSQL database cluster that it's hitting.
Long and short: This points to a need to "audit" what installations of PostgreSQL and Slony-I you have in place on the machine(s). Unfortunately, just about any mismatch will cause things not to link up quite right. See also thread safety concerning threading issues on Solaris ...
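One way to start that audit, assuming the pg_config belonging to the PostgreSQL build in question is on your PATH (the path shown is illustrative):

$ pg_config --pkglibdir
/usr/local/pgsql/lib
$ ls /usr/local/pgsql/lib | egrep 'slony|xxid'
slony1_funcs.so
xxid.so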
Life is simplest if you only have one set of PostgreSQL binaries on a given server; in that case, there isn't a "wrong place" in which Slony-I components might get installed. If you have several software installs, you'll have to verify that the right versions of Slony-I components are associated with the right PostgreSQL binaries.
Slony-I uses the same rules for unquoted identifiers as the PostgreSQL main parser, so no, you probably shouldn't put a "-" in your identifier name.
You may be able to defeat this by putting "quotes" around identifier names, but it's still liable to bite you some, so this is something that is probably not worth working around.
3.3. ps finds passwords on command line
If I run a ps command, I, and everyone else, can see passwords on the command line.
Take the passwords out of the Slony configuration, and put them into $(HOME)/.pgpass.
3.4. Table indexes with FQ namespace names
set add table (set id = 1, origin = 1, id = 27, full qualified name = 'nspace.some_table', key = 'key_on_whatever', comment = 'Table some_table in namespace nspace with a candidate primary key');
The table name must be fully qualified with its namespace, but the key (index) name must not be; if you have key = 'nspace.key_on_whatever', the request will FAIL.
3.5. Replication has fallen behind, and it appears that the queries to draw data from sl_log_1/sl_log_2 are taking a long time to pull just a few SYNCs.
Until version 1.1.1, there was only one index on sl_log_1/sl_log_2, and if there were multiple replication sets, some of the columns on the index would not provide meaningful selectivity. If there is no index on column log_xid, consider adding it. See slony1_base.sql for an example of how to create the index.
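A minimal sketch of such an index, assuming the cluster namespace is _mycluster; the index name is illustrative, and slony1_base.sql in your version shows the exact definition to copy:

create index sl_log_1_idx2 on "_mycluster".sl_log_1 (log_xid);
-- repeat for sl_log_2 if your version has it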
3.6. I need to rename a column that is in the primary key for one of my replicated tables. That seems pretty dangerous, doesn't it? I have to drop the table out of replication and recreate it, right?
Actually, this is a scenario which works out remarkably cleanly. Slony-I does indeed make intense use of the primary key columns, but actually does so in a manner that allows this sort of change to be made very nearly transparently.
Suppose you revise a column name, as with the SQL DDL alter table accounts rename column aid to cid; This renames the column in the table and, simultaneously, the corresponding column in the primary key index. The result is that, in the normal course of things, altering a column name affects both aspects simultaneously on a given node.
The ideal and proper handling of this change would involve using EXECUTE SCRIPT to deploy the alteration, which ensures it is applied at exactly the right point in the transaction stream on each node.
Interestingly, that isn't forcibly necessary. As long as the alteration is applied on the replication set's origin before application on subscribers, things won't break irreparably. Some SYNC events that do not include changes to the altered table can make it through without any difficulty... At the point that the first update to the table is drawn in by a subscriber, that is the point at which SYNC events will start to fail, as the provider will indicate the "new" set of columns whilst the subscriber still has the "old" ones. If you then apply the alteration to the subscriber, it can retry the SYNC, at which point it will, finding the "new" column names, work just fine.
3.7. I have a PostgreSQL 7.2-based system that I really, really want to use Slony-I to help me upgrade it to 8.0. What is involved in getting Slony-I to work for that?
Rod Taylor has reported the following...
This is approximately what you need to do:
Take the 7.3 templates and copy them to 7.2 -- or otherwise hardcode the version you're using to pick up the 7.3 templates.
Remove all traces of schemas from the code and sql templates. I basically changed the "." to an "_".
Bunch of work related to the XID datatype and functions. For example, Slony creates CASTs for the xid to xxid and back -- but 7.2 cannot create new casts that way so you need to edit system tables by hand. I recall creating an Operator Class and editing several functions as well.
sl_log_1 will have severe performance problems with any kind of data volume. This required a number of index and query changes to optimize for 7.2. 7.3 and above are quite a bit smarter in terms of optimizations they can apply.
Don't bother trying to make sequences work. Do them by hand after the upgrade using pg_dump and grep.
Of course, now that you have done all of the above, the result is not compatible with standard Slony-I. So you either need to implement 7.2 support in a less hackish way, or hack up Slony-I to work without schemas on newer versions of PostgreSQL so that the two can talk to each other.
Almost immediately after getting the DB upgraded from 7.2 to 7.4, we deinstalled the hacked up Slony (by hand for the most part), and started a migration from 7.4 to 7.4 on a different machine using the regular Slony. This was primarily to ensure we didn't keep our system catalogues which had been manually fiddled with.
All that said, we upgraded a few hundred GB from 7.2 to 7.4 with about 30 minutes actual downtime (versus 48 hours for a dump / restore cycle) and no data loss.
That represents a sufficiently ugly set of "hackery" that the developers are exceedingly reluctant to let it anywhere near to the production code. If someone were interested in "productionizing" this, it would probably make sense to do so based on the Slony-I 1.0 branch, with the express plan of not trying to keep much in the way of forwards compatibility or long term maintainability of replicas.
You should only head down this road if you are sufficiently comfortable with PostgreSQL and Slony-I that you are prepared to hack pretty heavily with the code.
3.8. I had a network "glitch" that led to my using FAILOVER to fail over to an alternate node. The failure wasn't a disk problem that would corrupt databases; why do I need to rebuild the failed node from scratch?
The action of FAILOVER is to abandon the failed node so that no more Slony-I activity goes to or from that node. As soon as that takes place, the failed node will progressively fall further and further out of sync.
The big problem with trying to recover the failed node is that it may contain updates that never made it out of the origin. If they get retried, on the new origin, you may find that you have conflicting updates. In any case, you do have a sort of "logical" corruption of the data even if there never was a disk failure making it "physical."
3.9. After notification of a subscription on another node, replication falls over on one of the subscribers, with the following error message:
ERROR remoteWorkerThread_1: "begin transaction; set transaction isolation level serializable; lock table "_livesystem".sl_config_lock; select "_livesystem".enableSubscription(25506, 1, 501); notify "_livesystem_Event"; notify "_livesystem_Confirm"; insert into "_livesystem".sl_event (ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type , ev_data1, ev_data2, ev_data3, ev_data4 ) values ('1', '4896546', '2005-01-23 16:08:55.037395', '1745281261', '1745281262', '', 'ENABLE_SUBSCRIPTION', '25506', '1', '501', 't'); insert into "_livesystem".sl_confirm (con_origin, con_received, con_seqno, con_timestamp) values (1, 4, '4896546', CURRENT_TIMESTAMP); commit transaction;" PGRES_FATAL_ERROR ERROR: insert or update on table "sl_subscribe" violates foreign key constraint "sl_subscribe-sl_path-ref" DETAIL: Key (sub_provider,sub_receiver)=(1,501) is not present in table "sl_path".
This is then followed by a series of failed syncs as the slon shuts down:
DEBUG2 remoteListenThread_1: queue event 1,4897517 SYNC
DEBUG2 remoteListenThread_1: queue event 1,4897518 SYNC
DEBUG2 remoteListenThread_1: queue event 1,4897519 SYNC
DEBUG2 remoteListenThread_1: queue event 1,4897520 SYNC
DEBUG2 remoteWorker_event: ignore new events due to shutdown
DEBUG2 remoteListenThread_1: queue event 1,4897521 SYNC
DEBUG2 remoteWorker_event: ignore new events due to shutdown
DEBUG2 remoteListenThread_1: queue event 1,4897522 SYNC
DEBUG2 remoteWorker_event: ignore new events due to shutdown
DEBUG2 remoteListenThread_1: queue event 1,4897523 SYNC
If you see a slon shutting down with ignore new events due to shutdown log entries, you typically need to step back in the log to before they started failing to see indication of the root cause of the problem.
In this particular case, the problem was that some of the STORE PATH commands had not yet made it to node 4 before the SUBSCRIBE SET command propagated.
This demonstrates yet another example of the need to not do things in a rush; you need to be sure things are working right before making further configuration changes.
3.10. I just used MOVE SET to move the origin to a new node. Unfortunately, some subscribers are still pointing to the former origin node, so I can't take it out of service for maintenance without stopping them from getting updates. What do I do?
You need to use SUBSCRIBE SET to alter the subscriptions for those nodes to have them subscribe to a provider that will be sticking around during the maintenance.
Warning: What you don't do is to UNSUBSCRIBE SET; that would require reloading all data for the nodes from scratch later.
Most of the time, the order in which tables are added to a replication set doesn't matter. You might imagine it of some value to order the tables in some particular way so that "parent" entries would make it in before their "children" in some foreign key relationship; that isn't the case, since foreign key constraint triggers are turned off on subscriber nodes.
(Jan Wieck comments:) The order of table ID's is only significant during a LOCK SET in preparation of switchover. If that order is different from the order in which an application is acquiring its locks, it can lead to deadlocks that abort either the application or slon.
(David Parker) I ran into one other case where the ordering of tables in the set was significant: in the presence of inherited tables. If a child table appears before its parent in a set, then the initial subscription will end up deleting that child table after it has possibly already received data, because the copy_set logic does a delete, not a delete only, so the delete of the parent will delete the new rows in the child as well.
3.13. If you have a slonik script something like this, it will hang on you and never complete, because you can't have wait for event inside a try block. A try block is executed as one transaction, and the event that you are waiting for can never arrive inside the scope of the transaction.
try {
    echo 'Moving set 1 to node 3';
    lock set (id=1, origin=1);
    echo 'Set locked';
    wait for event (origin = 1, confirmed = 3);
    echo 'Moving set';
    move set (id=1, old origin=1, new origin=3);
    echo 'Set moved - waiting for event to be confirmed by node 3';
    wait for event (origin = 1, confirmed = 3);
    echo 'Confirmed';
} on error {
    echo 'Could not move set for cluster foo';
    unlock set (id=1, origin=1);
    exit -1;
}
You must not invoke WAIT FOR EVENT inside a "try" block.
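One way the script above might be restructured, keeping LOCK SET and MOVE SET inside try blocks while doing all of the waiting outside them (a sketch; cluster and node numbers carried over from the example):

echo 'Moving set 1 to node 3';
try {
    lock set (id=1, origin=1);
} on error {
    echo 'Could not lock set 1 on node 1';
    exit -1;
}
echo 'Set locked';
wait for event (origin = 1, confirmed = 3);
echo 'Moving set';
try {
    move set (id=1, old origin=1, new origin=3);
} on error {
    echo 'Could not move set for cluster foo';
    unlock set (id=1, origin=1);
    exit -1;
}
echo 'Set moved - waiting for event to be confirmed by node 3';
wait for event (origin = 1, confirmed = 3);
echo 'Confirmed';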
3.14. Slony-I: cannot add table to currently subscribed set 1
I tried to add a table to a set, and got the following message:
Slony-I: cannot add table to currently subscribed set 1
You cannot add tables to sets that already have subscribers.
The workaround to this is to create ANOTHER set, add the new tables to that new set, subscribe the same nodes subscribing to "set 1" to the new set, and then merge the sets together.
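A hedged slonik sketch of that workaround, where set 999, table ID 100, and public.newtable are illustrative (the table ID must be unique across all sets, and node 2 stands in for the existing subscriber):

create set (id = 999, origin = 1, comment = 'temporary set for newly added tables');
set add table (set id = 999, origin = 1, id = 100, full qualified name = 'public.newtable', comment = 'newly added table');
subscribe set (id = 999, provider = 1, receiver = 2, forward = yes);

Then, once the subscription of set 999 has finished copying, a second script folds the sets together (check the MERGE SET documentation for your version):

merge set (id = 1, add id = 999, origin = 1);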
3.15. ERROR: duplicate key violates unique constraint "sl_table-pkey"
I tried setting up a second replication set, and got the following error:
stdin:9: Could not create subscription set 2 for oxrslive!
stdin:11: PGRES_FATAL_ERROR select "_oxrslive".setAddTable(2, 1, 'public.replic_test', 'replic_test__Slony-I_oxrslive_rowID_key', 'Table public.replic_test without primary key'); - ERROR: duplicate key violates unique constraint "sl_table-pkey" CONTEXT: PL/pgSQL function "setaddtable_int" line 71 at SQL statement
The table IDs used in SET ADD TABLE are required to be unique ACROSS ALL SETS. Thus, you can't restart numbering at 1 for a second set; if you are numbering them consecutively, a subsequent set has to start with IDs after where the previous set(s) left off.
3.16. One of my nodes fell over (slon / postmaster was down) and nobody noticed for several days. Now, when the slon for that node starts up, it runs for about five minutes, then terminates, with the error message: ERROR: remoteListenThread_%d: timeout for event selection What's wrong, and what do I do?
The problem is that the listener thread (in src/slon/remote_listener.c) timed out when trying to determine what events were outstanding for that node. By default, the query will run for five minutes; if there were many days worth of outstanding events, this might take too long.
On versions of Slony-I before 1.1.7, 1.2.7, and 1.3, one answer would be to increase the timeout in src/slon/remote_listener.c, recompile slon, and retry.
Another would be to treat the node as having failed, and use the slonik command DROP NODE to drop the node, and recreate it. If the database is heavily updated, it may well be cheaper to do this than it is to find a way to let it catch up.
In newer versions of Slony-I, there is a new configuration parameter called remote_listen_timeout; you'd alter the config file to increase the timeout, and try again. Of course, as mentioned above, it could be faster to drop the node and recreate it than to let it catch up across a week's worth of updates...
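In those newer versions, the change is just a line in the slon runtime config file; assuming the parameter is spelled remote_listen_timeout, as in recent slon.conf samples, and takes a value in seconds, something like:

remote_listen_timeout=1200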
4.1. Replication has been slowing down, I'm seeing FETCH 100 FROM LOG queries running for a long time, sl_log_1 is growing, and performance is, well, generally getting steadily worse.
There are actually a number of possible causes for this sort of thing. There is a question involving similar pathology where the problem is that pg_listener grows because it is not vacuumed.
Another " proximate cause " for this growth is for there to be a connection connected to the node that sits IDLE IN TRANSACTION for a very long time.
That open transaction will have multiple negative effects, all of which will adversely affect performance:
You can monitor for this condition inside the database only if the PostgreSQL postgresql.conf parameter stats_command_string is set to true. If that is set, then you may submit the query select * from pg_stat_activity where current_query like '%IDLE% in transaction'; which will find relevant activity.
You should also be able to search for "idle in transaction" in the process table to find processes that are thus holding on to an ancient transaction.
It is also possible (though rarer) for the problem to be a transaction that is, for some other reason, being held open for a very long time. The query_start time in pg_stat_activity may show you some query that has been running way too long.
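For instance, assuming stats_command_string is enabled as above, the following (using the PostgreSQL 8.x pg_stat_activity columns) lists the longest-running statements first:

select procpid, usename, query_start, current_query
    from pg_stat_activity
    where query_start is not null
    order by query_start
    limit 10;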
Buggy connection pool logic is a common culprit for this sort of thing: you may have a more or less buggy connection pool in your Java or PHP application that leaves connections idle in transaction for hours at a time. There are plans for PostgreSQL to have a timeout parameter, open_idle_transaction_timeout, which would cause old transactions to time out after some period of disuse. There are also plans for pgpool to provide a better alternative, eventually, where connections would be shared inside a connection pool implemented in C; if a small set of real connections is held in pgpool, the fact that the application imagines numerous of them are left idle in transaction for hours at a time would be hidden from the database.
4.2. After dropping a node, sl_log_1 isn't getting purged out anymore.
This is a common scenario in versions before 1.0.5, as the "clean up" that takes place when purging the node does not include purging out old entries from the Slony-I table, sl_confirm, for the recently departed node.
The node is no longer around to update confirmations of what syncs have been applied on it, and therefore the cleanup thread that purges log entries thinks that it can't safely delete entries newer than the final sl_confirm entry, which rather curtails the ability to purge out old logs.
Diagnosis: Run the following query to see if there are any "phantom/obsolete/blocking" sl_confirm entries:
oxrsbar=# select * from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
 con_origin | con_received | con_seqno |       con_timestamp
------------+--------------+-----------+----------------------------
          4 |          501 |     83999 | 2004-11-09 19:57:08.195969
          1 |            2 |   3345790 | 2004-11-14 10:33:43.850265
          2 |          501 |    102718 | 2004-11-14 10:33:47.702086
        501 |            2 |      6577 | 2004-11-14 10:34:45.717003
          4 |            5 |     83999 | 2004-11-14 21:11:11.111686
          4 |            3 |     83999 | 2004-11-24 16:32:39.020194
(6 rows)
In version 1.0.5, the DROP NODE function purges out entries in sl_confirm for the departing node. In earlier versions, this needs to be done manually. Supposing the node number is 3, then the query would be:
delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;
Alternatively, to go after "all phantoms," you could use
oxrsbar=# delete from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
DELETE 6
General "due diligence" dictates starting with a BEGIN, looking at the contents of sl_confirm before, ensuring that only the expected records are purged, and then, only after that, confirming the change with a COMMIT. If you delete confirm entries for the wrong node, that could ruin your whole day.
You'll need to run this on each node that remains...
Note that as of 1.0.5, this is no longer an issue at all, as unneeded entries are purged from sl_confirm in two places: when a node is dropped, and as part of the regular cleanup processing that purges old log data.
4.3. The slon spent the weekend out of commission [for some reason], and it's taking a long time to get a sync through.
You might want to take a look at the sl_log_1/sl_log_2 tables, and do a summary to see if there are any really enormous Slony-I transactions in there. Up until at least 1.0.2, there needs to be a slon connected to the origin in order for SYNC events to be generated.
If none are being generated, then all of the updates until the next one is generated will collect into one rather enormous Slony-I transaction.
Conclusion: Even if there is not going to be a subscriber around, you really want to have a slon running to service the origin node.
Slony-I 1.1 provides a stored procedure that allows SYNC counts to be updated on the origin based on a cron job even if there is no slon daemon running.
4.4. Some nodes start consistently falling behind
I have been running Slony-I on a node for a while, and am seeing system performance suffering.
I'm seeing long running queries of the form:
fetch 100 from LOG;
This can be characteristic of pg_listener (which is the table containing NOTIFY data) having plenty of dead tuples in it. That makes NOTIFY events take a long time, and causes the affected node to gradually fall further and further behind.
You quite likely need to do a VACUUM FULL on pg_listener, to vigorously clean it out, and need to vacuum pg_listener really frequently. Once every five minutes would likely be AOK.
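A crontab sketch of such a five-minute vacuum; the database name is illustrative, and the job needs to run as a user permitted to vacuum the system catalog:

*/5 * * * *  psql -d mydatabase -c 'vacuum pg_listener;'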
Slon daemons already vacuum a bunch of tables, and cleanup_thread.c contains a list of tables that are frequently vacuumed automatically. In Slony-I 1.0.2, pg_listener is not included. In 1.0.5 and later, it is regularly vacuumed, so this should cease to be a direct issue. In version 1.2, pg_listener will only be used when a node is only receiving events periodically, which means that the issue should mostly go away even in the presence of evil long running transactions...
There is, however, still a scenario where this will still "bite." Under MVCC, vacuums cannot delete tuples that were made "obsolete" at any time after the start time of the eldest transaction that is still open. Long running transactions will cause trouble, and should be avoided, even on subscriber nodes.
4.5. I have submitted a MOVE SET / EXECUTE SCRIPT request, and it seems to be stuck on one of my nodes. Slony-I logs aren't displaying any errors or warnings
Is it possible that you are running pg_autovacuum, and it has taken out locks on some tables in the replication set? That would somewhat-invisibly block Slony-I from performing operations that require acquisition of exclusive locks.
You might check for these sorts of locks using the following query: select l.*, c.relname from pg_locks l, pg_class c where c.oid = l.relation ; A ShareUpdateExclusiveLock lock will block the Slony-I operations that need their own exclusive locks, which are likely queued up, marked as not being granted.
5.1. The slon processes servicing my subscribers are growing to enormous size, challenging system resources both in terms of swap space as well as moving towards breaking past the 2GB maximum process size on my system.
By the way, the data that I am replicating includes some rather large records. We have records that are tens of megabytes in size. Perhaps that is somehow relevant?
Yes, those very large records are at the root of the problem. The problem is that slon normally draws in about 100 records at a time when a subscriber is processing the query which loads data from the provider. Thus, if the average record size is 10MB, this will draw in 1000MB of data which is then transformed into INSERT or UPDATE statements, in the slon process' memory.
That obviously leads to slon growing to a fairly tremendous size.
The number of records that are fetched is controlled by the value SLON_DATA_FETCH_SIZE, which is defined in the file src/slon/slon.h. The relevant extract is shown below.
#ifdef SLON_CHECK_CMDTUPLES
#define SLON_COMMANDS_PER_LINE 1
#define SLON_DATA_FETCH_SIZE 100
#define SLON_WORKLINES_PER_HELPER (SLON_DATA_FETCH_SIZE * 4)
#else
#define SLON_COMMANDS_PER_LINE 10
#define SLON_DATA_FETCH_SIZE 10
#define SLON_WORKLINES_PER_HELPER (SLON_DATA_FETCH_SIZE * 50)
#endif
If you are experiencing this problem, you might modify the definition of SLON_DATA_FETCH_SIZE, perhaps reducing it by a factor of 10, and recompile slon. There are two definitions, as SLON_CHECK_CMDTUPLES allows doing some extra monitoring to ensure that subscribers have not fallen out of SYNC with the provider. By default, this option is turned off, so the default modification to make is to change the second definition of SLON_DATA_FETCH_SIZE from 10 to 1.
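For instance, after such an edit, the non-CMDTUPLES branch of the block above might read (a local modification, not a shipped default):

#else
#define SLON_COMMANDS_PER_LINE 10
#define SLON_DATA_FETCH_SIZE 1	/* reduced from 10 to limit memory used for very large rows */
#define SLON_WORKLINES_PER_HELPER (SLON_DATA_FETCH_SIZE * 50)
#endif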
In version 1.2, configuration values sync_max_rowsize and sync_max_largemem are associated with a new algorithm that changes the logic as follows. Rather than fetching 100 rows worth of data at a time:
The fetch from LOG query will draw in 500 rows at a time where the size of the attributes does not exceed sync_max_rowsize. With default values, this restricts this aspect of memory consumption to about 8MB.
Tuples with larger attributes are loaded until aggregate size exceeds the parameter sync_max_largemem. By default, this restricts consumption of this sort to about 5MB. This value is not a strict upper bound; if you have a tuple with attributes 50MB in size, it forcibly must be loaded into memory. There is no way around that. But slon at least won't be trying to load in 100 such records at a time, chewing up 10GB of memory by the time it's done.
This should alleviate problems people have been experiencing when they sporadically have series of very large tuples.
5.2. I am trying to replicate UNICODE data from PostgreSQL 8.0 to PostgreSQL 8.1, and am experiencing problems.
PostgreSQL 8.1 is quite a lot more strict about what UTF-8 mappings of Unicode characters it accepts as compared to version 8.0.
If you intend to use Slony-I to update an older database to 8.1, and might have invalid UTF-8 values, you may be in for an unpleasant surprise.
Let us suppose we have a database running 8.0, encoding in UTF-8. That database will accept the sequence '\060\242' as UTF-8 compliant, even though it is really not.
If you replicate into a PostgreSQL 8.1 instance, it will complain about this either at subscribe time, when Slony-I will complain about detecting an invalid Unicode sequence during the COPY of the data (which will prevent the subscription from proceeding), or later, when such data is added, at which point replication will hang up fairly much irretrievably. (You could hack on the contents of sl_log_1, but that quickly gets really unattractive...)
There have been discussions as to what might be done about this. No compelling strategy has yet emerged, as all are unattractive.
If you are using Unicode with PostgreSQL 8.0, you run a considerable risk of corrupting data.
If you use replication for a one-time conversion, there is a risk of failure due to the issues mentioned earlier; if that happens, it appears likely that the best answer is to fix the data on the 8.0 system, and retry.
In view of the risks, running replication between these versions is not something you should keep going any longer than is necessary to migrate to 8.1.
For more details, see the discussion on the postgresql-hackers mailing list.
5.3. I am running Slony-I 1.1 and have a 4+ node setup where there are two subscription sets, 1 and 2, that do not share any nodes. I am discovering that confirmations for set 1 never get to the nodes subscribing to set 2, and that confirmations for set 2 never get to nodes subscribing to set 1. As a result, sl_log_1 grows and grows and is never purged. This was reported as Slony-I bug 1485 .
Apparently the code for RebuildListenEntries() does not suffice for this case. RebuildListenEntries() will be replaced in Slony-I version 1.2 with an algorithm that covers this case. In the interim, you'll want to manually add some sl_listen entries using STORE LISTEN or storeListen(), based on the (apparently not as obsolete as we thought) principles described in Section 9.
5.4. I am finding some multibyte columns (Unicode, Big5) are being truncated a bit, clipping off the last character. Why?
This was a bug present until a little after Slony-I version 1.1.0; the way in which columns were being captured by the logtrigger() function could clip off the last byte of a column represented in a multibyte format. Check to see that your version of src/backend/slony1_funcs.c is 1.34 or better; the patch was introduced in CVS version 1.34 of that file.
5.5. Bug #1226 indicates an error condition that can come up if you have a replication set that consists solely of sequences.
The short answer is that having a replication set consisting only of sequences is not a best practice.
The problem with a sequence-only set comes up only if you have a case where the only subscriptions that are active for a particular subscriber to a particular provider are for "sequence-only" sets. If a node gets into that state, replication will fail, as the query that looks for data from sl_log_1 has no tables to find, and the query will be malformed, and fail. If a replication set with tables is added back to the mix, everything will work out fine; it just seems scary.
This problem should be resolved some time after Slony-I 1.1.0.
Removing a table from replication can be accomplished several ways, not all equally desirable ;-).
You could drop the whole replication set, and recreate it with just the tables that you need. Alas, that means recopying a whole lot of data, and kills the usability of the cluster on the rest of the set while that's happening.
If you are running 1.0.5 or later, there is the command SET DROP TABLE, which will "do the trick."
If you are still using 1.0.1 or 1.0.2, the essential functionality of SET DROP TABLE involves the functionality in droptable_int(). You can fiddle this by hand by finding the table ID for the table you want to get rid of, which you can find in sl_table, and then run the following three queries, on each host:
select _slonyschema.alterTableRestore(40);
select _slonyschema.tableDropKey(40);
delete from _slonyschema.sl_table where tab_id = 40;
The schema will obviously depend on how you defined the Slony-I cluster. The table ID, in this case, 40, will need to change to the ID of the table you want to have go away.
You'll have to run these three queries on all of the nodes, preferably firstly on the origin node, so that the dropping of this propagates properly. Implementing this via a slonik statement with a new Slony-I event would do that. Submitting the three queries using EXECUTE SCRIPT could do that. Also possible would be to connect to each database and submit the queries by hand.
If you are running 1.0.5 or later, there is a SET DROP SEQUENCE command in Slonik to allow you to do this, parallelling SET DROP TABLE.
If you are running 1.0.2 or earlier, the process is a bit more manual.
Supposing I want to get rid of the two sequences listed below, whois_cachemgmt_seq and epp_whoi_cach_seq_, we start by needing the seq_id values.
oxrsorg=# select * from _oxrsorg.sl_sequence where seq_id in (93,59);
 seq_id | seq_reloid | seq_set |             seq_comment
--------+------------+---------+-------------------------------------
     93 |  107451516 |       1 | Sequence public.whois_cachemgmt_seq
     59 |  107451860 |       1 | Sequence public.epp_whoi_cach_seq_
(2 rows)
The data that needs to be deleted to stop Slony from continuing to replicate these are thus:
delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
delete from _oxrsorg.sl_sequence where seq_id in (93,59);
Those two queries could be submitted to all of the nodes via ddlscript_complete(integer, text, integer) / EXECUTE SCRIPT, thus eliminating the sequence everywhere "at once." Or they may be applied by hand to each of the nodes.
Similarly to SET DROP TABLE, this is implemented in Slony-I version 1.0.5 as SET DROP SEQUENCE.
6.1. slon does not restart after crash
After an immediate stop of PostgreSQL (simulation of system crash) in pg_listener a tuple with relname='_${cluster_name}_Restart' exists. slon doesn't start because it thinks another process is serving the cluster on this node. What can I do? The tuples can't be dropped from this relation.
Another slon daemon is serving this node already
The problem is that the system table pg_listener, used by PostgreSQL to manage event notifications, contains some entries that are pointing to backends that no longer exist. The new slon instance connects to the database, and is convinced, by the presence of these entries, that an old slon is still servicing this Slony-I node.
The "trash" in that table needs to be thrown away.
It's handy to keep a slonik script similar to the following to run in such cases:
twcsds004[/opt/twcsds004/OXRS/slony-scripts]$ cat restart_org.slonik
cluster name = oxrsorg ;
node 1 admin conninfo = 'host=32.85.68.220 dbname=oxrsorg user=postgres port=5532';
node 2 admin conninfo = 'host=32.85.68.216 dbname=oxrsorg user=postgres port=5532';
node 3 admin conninfo = 'host=32.85.68.244 dbname=oxrsorg user=postgres port=5532';
node 4 admin conninfo = 'host=10.28.103.132 dbname=oxrsorg user=postgres port=5532';
restart node 1;
restart node 2;
restart node 3;
restart node 4;
RESTART NODE cleans up dead notifications so that you can restart the node.
As of version 1.0.5, the startup process of slon looks for this condition, and automatically cleans it up.
As of version 8.1 of PostgreSQL, the functions that manipulate pg_listener do not support this usage, so for Slony-I versions after 1.1.2 (e.g. - 1.1.5), this "interlock" behaviour is handled via a new table, and the issue should be transparently "gone."
6.2. I tried the following query which did not work:
sdb=# explain select query_start, current_query from pg_locks join pg_stat_activity on pid = procpid where granted = true and transaction in (select transaction from pg_locks where granted = false);
ERROR: could not find hash function for hash operator 716373
It appears the Slony-I xxid functions are claiming to be capable of hashing, but cannot actually do so. What's up?
Slony-I defined an XXID data type and operators on that type in order to allow manipulation of transaction IDs that are used to group together updates that are associated with the same transaction.
Operators were not available for PostgreSQL 7.3 and earlier versions; in order to support version 7.3, custom functions had to be added. The = operator was marked as supporting hashing, but for that to work properly, the join operator must appear in a hash index operator class. That was not defined, and as a result, queries (like the one above) that decide to use hash joins will fail.
This has not been considered a "release-critical" bug, as Slony-I does not internally generate queries likely to use hash joins. This problem shouldn't injure Slony-I's ability to continue replicating.
Future releases of Slony-I (e.g. 1.0.6, 1.1) will omit the HASHES indicator, so that the problem does not arise in newly installed instances.
Supposing you wish to repair an existing instance, so that your own queries will not run afoul of this problem, you may do so as follows:
/* cbbrowne@[local]/dba2 slony_test1=*/ \x
Expanded display is on.
/* cbbrowne@[local]/dba2 slony_test1=*/ select * from pg_operator where oprname = '=' and oprnamespace = (select oid from pg_namespace where nspname = 'public');
-[ RECORD 1 ]+-------------
oprname      | =
oprnamespace | 2200
oprowner     | 1
oprkind      | b
oprcanhash   | t
oprleft      | 82122344
oprright     | 82122344
oprresult    | 16
oprcom       | 82122365
oprnegate    | 82122363
oprlsortop   | 82122362
oprrsortop   | 82122362
oprltcmpop   | 82122362
oprgtcmpop   | 82122360
oprcode      | "_T1".xxideq
oprrest      | eqsel
oprjoin      | eqjoinsel

/* cbbrowne@[local]/dba2 slony_test1=*/ update pg_operator set oprcanhash = 'f' where oprname = '=' and oprnamespace = 2200 ;
UPDATE 1
6.3. I can do a pg_dump and load the data back in much faster than the SUBSCRIBE SET runs. Why is that?
Slony-I depends on there being an already existing index on the primary key, and leaves all indexes alone whilst using the PostgreSQL COPY command to load the data. Further hurting performance, the COPY SET event starts by deleting the contents of tables, which potentially leaves a lot of dead tuples.
When you use pg_dump to dump the contents of a database, and then load that, creation of indexes is deferred until the very end. It is much more efficient to create indexes against the entire table, at the end, than it is to build up the index incrementally as each row is added to the table.
If you can drop unnecessary indices while the COPY takes place, that will improve performance quite a bit. If you can TRUNCATE tables that contain data that is about to be eliminated, that will improve performance a lot.
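For instance, on a subscriber with a particularly large table, you might bracket the subscription roughly as follows; the table and index names here are purely illustrative:
-- Before SUBSCRIBE SET: drop a non-essential index on the subscriber.
-- (Leave the primary key index alone; Slony-I requires it.)
drop index sometable_bigcolumn_idx;
-- Optionally clear out data that the COPY is about to replace anyway.
truncate sometable;
-- ... let the COPY of the set complete, then rebuild the index in one pass ...
create index sometable_bigcolumn_idx on sometable (bigcolumn);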
Slony-I version 1.1.5 and later versions should handle this automatically; it "thumps" on the indexes in the PostgreSQL catalog to hide them, in much the same way triggers are hidden, and then "fixes" the index pointers and reindexes the table.
6.4. Replication Fails - Unique Constraint Violation
Replication has been running for a while, successfully, when a node encounters a "glitch," and replication logs are filled with repetitions of the following:
DEBUG2 remoteWorkerThread_1: syncing set 2 with 5 table(s) from provider 1 DEBUG2 remoteWorkerThread_1: syncing set 1 with 41 table(s) from provider 1 DEBUG2 remoteWorkerThread_1: syncing set 5 with 1 table(s) from provider 1 DEBUG2 remoteWorkerThread_1: syncing set 3 with 1 table(s) from provider 1 DEBUG2 remoteHelperThread_1_1: 0.135 seconds delay for first row DEBUG2 remoteHelperThread_1_1: 0.343 seconds until close cursor ERROR remoteWorkerThread_1: "insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090538', 'D', '_rserv_ts=''9275244'''); delete from only public.epp_domain_host where _rserv_ts='9275244';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '34', '35090539', 'D', '_rserv_ts=''9275245'''); delete from only public.epp_domain_host where _rserv_ts='9275245';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090540', 'D', '_rserv_ts=''24240590'''); delete from only public.epp_domain_contact where _rserv_ts='24240590';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090541', 'D', '_rserv_ts=''24240591'''); delete from only public.epp_domain_contact where _rserv_ts='24240591';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '26', '35090542', 'D', '_rserv_ts=''24240589'''); delete from only public.epp_domain_contact where _rserv_ts='24240589';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090543', 'D', '_rserv_ts=''36968002'''); delete from only public.epp_domain_status where _rserv_ts='36968002';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '11', '35090544', 'D', '_rserv_ts=''36968003'''); delete from only public.epp_domain_status where _rserv_ts='36968003';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090549', 'I', '(contact_id,status,reason,_rserv_ts) values (''6972897'',''64'','''',''31044208'')'); insert into public.contact_status (contact_id,status,reason,_rserv_ts) values ('6972897','64','','31044208');insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090550', 'D', '_rserv_ts=''18139332'''); delete from only public.contact_status where _rserv_ts='18139332';insert into "_oxrsapp".sl_log_1 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtype, log_cmddata) values ('1', '919151224', '24', '35090551', 'D', '_rserv_ts=''18139333'''); delete from only public.contact_status where _rserv_ts='18139333';" ERROR: duplicate key violates unique constraint "contact_status_pkey" - qualification was: ERROR remoteWorkerThread_1: SYNC aborted
The transaction rolls back, and Slony-I tries again, and again, and again. The problem is with one of the last SQL statements, the one with log_cmdtype = 'I'. That isn't quite obvious; what takes place is that Slony-I groups 10 update queries together to diminish the number of network round trips.
A definitive cause for this has been difficult to pin down.
By the time we notice that there is a problem, the seemingly missed delete transaction has been cleaned out of sl_log_1, so there appears to be no recovery possible. What has seemed necessary, at this point, is to drop the replication set (or even the node), and restart replication from scratch on that node.
In Slony-I 1.0.5, the handling of purges of sl_log_1 became more conservative, refusing to purge entries that haven't been successfully synced for at least 10 minutes on all nodes. It was not certain that that would prevent the "glitch" from taking place, but it seemed plausible that it might leave enough sl_log_1 data to be able to do something about recovering from the condition or at least diagnosing it more exactly. And perhaps the problem was that sl_log_1 was being purged too aggressively, and this would resolve the issue completely.
It is a shame to have to reconstruct a large replication node for this; if you discover that this problem recurs, it may be an idea to break replication down into multiple sets in order to diminish the work involved in restarting replication. If only one set has broken, you may only need to unsubscribe/drop and resubscribe the one set.
In one case we found two lines in the SQL error message in the log file that contained identical insertions into sl_log_1. This ought to be impossible, as there is a primary key on sl_log_1. The latest (somewhat punctured) theory arising from that was that perhaps this PK index had been corrupted (representing a PostgreSQL bug), and that the problem might be alleviated by running the query:
# reindex table _slonyschema.sl_log_1;
On at least one occasion, this has resolved the problem, so it is worth trying this.
This problem has been found to represent a PostgreSQL bug as opposed to one in Slony-I. Version 7.4.8 was released with two resolutions to race conditions that should resolve the issue. Thus, if you are running a version of PostgreSQL earlier than 7.4.8, you should consider upgrading to resolve this.
6.5. I started a backup using pg_dump, and suddenly Slony-I stopped processing events. What is going on?
Ouch. What happens here is a conflict between:
pg_dump, which has taken out an AccessShareLock on all of the tables in the database, including the Slony-I ones, and
A Slony-I sync event, which wants to grab an AccessExclusiveLock on the table sl_event.
The initial query that will be blocked is thus:
select "_slonyschema".createEvent('_slonyschema, 'SYNC', NULL);
(You can see this in pg_stat_activity, if you have query display turned on in postgresql.conf)
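For instance, a query along these lines should show the stuck createEvent() call and anything queued up behind it; it assumes the 7.4/8.0-era pg_stat_activity columns (procpid, current_query) and that stats_command_string is enabled:
select procpid, usename, query_start, current_query
    from pg_stat_activity
    where current_query like '%createEvent%'
       or current_query like '%sl_event%';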
The actual query combination that is causing the lock is from the function Slony_I_ClusterStatus(), found in slony1_funcs.c, and is localized in the code that does:
LOCK TABLE %s.sl_event;
INSERT INTO %s.sl_event (...stuff...)
SELECT currval('%s.sl_event_seq');
The LOCK statement will sit there and wait until pg_dump (or whatever else has pretty much any kind of access lock on sl_event) completes.
Every subsequent query submitted that touches sl_event will block behind the createEvent call.
There are a number of possible answers to this:
Have pg_dump specify the schema dumped using --schema=whatever, and don't try dumping the cluster's schema (see the example after this list).
It would be nice to add an --exclude-schema option to pg_dump to exclude the Slony-I cluster schema. Maybe in 8.2...
Note that 1.0.5 uses a more precise lock that is less exclusive, which alleviates this problem.
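As an illustration of the first suggestion, the application schema can be dumped on its own, leaving the Slony-I cluster schema (and hence sl_event) untouched; the host, database, and schema names below are placeholders:
% pg_dump -h originnode.example.info -p 5432 --schema=public ourdb > public_schema_backup.sql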
7.1. What happens to the rules and triggers on replicated tables on a subscriber node? When I ran pg_dump against a subscriber, they seemed to have gone missing.
Firstly, let's look at how this is handled absent the special handling of the STORE TRIGGER Slonik command.
The function alterTableForReplication(integer) prepares each table for replication.
On the origin node, this involves adding a trigger that uses the logTrigger() function to the table.
That trigger initiates the action of logging all updates to the table to Slony-I sl_log_1 tables.
On a subscriber node, this involves disabling triggers and rules, then adding to the replicated tables a trigger that denies write access, using the denyAccess() function.
Up until 1.1 (and perhaps onwards), the "disabling" is done by modifying pg_trigger.tgrelid (or, for rules, the corresponding pg_rewrite entry) to point to the OID of the "primary key" index on the table rather than to the table itself.
A somewhat unfortunate side-effect is that this handling of the rules and triggers somewhat "tramples" on them. The rules and triggers are still there, but are no longer properly tied to their tables. If you do a pg_dump on the "subscriber" node, it won't find the rules and triggers because it does not expect them to be associated with an index.
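If you are curious whether a given node has triggers "hidden" in this fashion, a catalog query such as the following can reveal them; this is just a diagnostic sketch, not something Slony-I provides itself:
-- Triggers whose tgrelid points at an index (relkind 'i') rather than at a
-- table ('r') have been "hidden" on this node.
select t.tgname, c.relname as pointed_at, c.relkind
    from pg_trigger t
    join pg_class c on c.oid = t.tgrelid
    where c.relkind = 'i';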
Now, consider how STORE TRIGGER enters into things.
Simply put, this command causes Slony-I to restore the trigger using alterTableRestore(table id), which restores the table's OID into the pg_trigger.tgrelid column (or the corresponding pg_rewrite entry) on the affected node.
This implies that if you plan to draw backups from a subscriber node, you will need to draw the schema from the origin node. It is straightforward to do this:
% pg_dump -h originnode.example.info -p 5432 --schema-only --schema=public ourdb > schema_backup.sql
% pg_dump -h subscribernode.example.info -p 5432 --data-only --schema=public ourdb > data_backup.sql
7.2. I was trying to request EXECUTE SCRIPT or MOVE SET, and found messages as follows on one of the subscribers:
NOTICE: Slony-I: multiple instances of trigger defrazzle on table frobozz
NOTICE: Slony-I: multiple instances of trigger derez on table tron
ERROR: Slony-I: Unable to disable triggers
The trouble would seem to be that you have added triggers on tables whose names conflict with triggers that were hidden by Slony-I.
Slony-I hides triggers (save for those "unhidden" via STORE TRIGGER) by repointing them to the primary key of the table. In the case of foreign key triggers, or other triggers used to do data validation, it should be quite unnecessary to run them on a subscriber, as equivalent triggers should have been invoked on the origin node. In contrast, triggers that do some form of "cache invalidation" are ones you might want to have run on a subscriber.
The Right Way to handle such triggers is normally to use STORE TRIGGER, which tells Slony-I that a trigger should not get deactivated.
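A slonik fragment for that might look like the following; the cluster name, conninfo, table id, and trigger name here are examples rather than values from any real configuration:
cluster name = oxrsorg;
node 1 admin conninfo = 'host=originnode.example.info dbname=ourdb user=postgres port=5532';
store trigger (table id = 14, trigger name = 'cache_invalidation_trg');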
But some intrepid DBA might take matters into their own hands and install a trigger by hand on a subscriber; that is generally the cause of the above condition. What to do? What to do?
The answer is normally fairly simple: Drop out the "extra" trigger on the subscriber before the event that tries to restore them runs. Ideally, if the DBA is particularly intrepid, and aware of this issue, that should take place before there is ever a chance for the error message to appear.
If the DBA is not that intrepid, the answer is to connect to the offending node and drop the "visible" version of the trigger using the SQL DROP TRIGGER command. That should allow the event to proceed. If the event was EXECUTE SCRIPT, then the "not-so-intrepid" DBA may need to add the trigger back, by hand, or, if they are wise, they should consider activating it using STORE TRIGGER.
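Using the trigger and table names from the error message above, that would look something like this when connected to the offending subscriber:
drop trigger defrazzle on frobozz;
drop trigger derez on tron;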
7.3. Behaviour - all the subscriber nodes start to fall behind the origin, and all the logs on the subscriber nodes have the following error message repeating in them (when I encountered it, there was a nice long SQL statement above each entry):
ERROR remoteWorkerThread_1: helper 1 finished with error
ERROR remoteWorkerThread_1: SYNC aborted
Cause: you have likely issued alter table statements directly on the databases instead of using the slonik EXECUTE SCRIPT command.
The solution is to rebuild the trigger on the affected table and fix the entries in sl_log_1 by hand.
You'll need to identify, from either the slon logs or the PostgreSQL database logs, exactly which statement is causing the error.
You need to fix the Slony-defined triggers on the table in question. This is done with the following procedure.
BEGIN;
LOCK TABLE table_name;
SELECT _oxrsorg.altertablerestore(tab_id);          -- tab_id is _slony_schema.sl_table.tab_id
SELECT _oxrsorg.altertableforreplication(tab_id);   -- tab_id is _slony_schema.sl_table.tab_id
COMMIT;
You then need to find the rows in sl_log_1 that have bad entries and fix them. You may want to take down the slon daemons for all nodes except the master; that way, if you make a mistake, it won't immediately propagate through to the subscribers.
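A query along these lines helps locate the offending entries; the schema name and table id are taken from the example below, and the filter would be adjusted to match whatever the failing statement in the logs points at:
select log_actionseq, log_cmdtype, log_cmddata
    from _app1.sl_log_1
    where log_tableid = 31
    order by log_actionseq;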
Here is an example:
BEGIN;
LOCK TABLE customer_account;
SELECT _app1.altertablerestore(31);
SELECT _app1.altertableforreplication(31);
COMMIT;

BEGIN;
LOCK TABLE txn_log;
SELECT _app1.altertablerestore(41);
SELECT _app1.altertableforreplication(41);
COMMIT;

--fixing customer_account, which had an attempt to insert a "" into a timestamp with timezone.
BEGIN;
update _app1.sl_log_1 SET log_cmddata = 'balance=''60684.00'' where pkey=''49''' where log_actionseq = '67796036';
update _app1.sl_log_1 SET log_cmddata = 'balance=''60690.00'' where pkey=''49''' where log_actionseq = '67796194';
update _app1.sl_log_1 SET log_cmddata = 'balance=''60684.00'' where pkey=''49''' where log_actionseq = '67795881';
update _app1.sl_log_1 SET log_cmddata = 'balance=''1852.00'' where pkey=''57''' where log_actionseq = '67796403';
update _app1.sl_log_1 SET log_cmddata = 'balance=''87906.00'' where pkey=''8''' where log_actionseq = '68352967';
update _app1.sl_log_1 SET log_cmddata = 'balance=''125180.00'' where pkey=''60''' where log_actionseq = '68386951';
update _app1.sl_log_1 SET log_cmddata = 'balance=''125198.00'' where pkey=''60''' where log_actionseq = '68387055';
update _app1.sl_log_1 SET log_cmddata = 'balance=''125174.00'' where pkey=''60''' where log_actionseq = '68386682';
update _app1.sl_log_1 SET log_cmddata = 'balance=''125186.00'' where pkey=''60''' where log_actionseq = '68386992';
update _app1.sl_log_1 SET log_cmddata = 'balance=''125192.00'' where pkey=''60''' where log_actionseq = '68387029';
7.4. Node #1 was dropped via DROP NODE, and the slon for one of the other nodes is repeatedly failing with the error message:
ERROR remoteWorkerThread_3: "begin transaction; set transaction isolation level serializable;
lock table "_mailermailer".sl_config_lock;
select "_mailermailer".storeListen_int(2, 1, 3);
notify "_mailermailer_Event"; notify "_mailermailer_Confirm";
insert into "_mailermailer".sl_event (ev_origin, ev_seqno, ev_timestamp,
    ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3)
    values ('3', '2215', '2005-02-18 10:30:42.529048', '3286814', '3286815', '',
    'STORE_LISTEN', '2', '1', '3');
insert into "_mailermailer".sl_confirm (con_origin, con_received, con_seqno, con_timestamp)
    values (3, 2, '2215', CURRENT_TIMESTAMP); commit transaction;"
PGRES_FATAL_ERROR ERROR: insert or update on table "sl_listen" violates foreign key constraint "sl_listen-sl_path-ref"
DETAIL: Key (li_provider,li_receiver)=(1,3) is not present in table "sl_path".
DEBUG1 syncThread: thread done
Evidently, a STORE LISTEN request hadn't propagated yet before node 1 was dropped.
This points to a case where you'll need to do "event surgery" on one or more of the nodes. A STORE_LISTEN event remains outstanding that wants to add a listen path that cannot be created because node 1 and all paths pointing to node 1 have gone away.
Let's assume, for exposition purposes, that the remaining nodes are #2 and #3, and that the above error is being reported on node #3.
That implies that the event is stored on node #2, as it wouldn't be on node #3 if it had not already been processed successfully. The easiest way to cope with this situation is to delete the offending sl_event entry on node #2. You'll connect to node #2's database, and search for the STORE_LISTEN event:
select * from sl_event where ev_type = 'STORE_LISTEN';
There may be several entries, only some of which need to be purged.
-# begin;    -- Don't straight delete them; open a transaction so you can respond to OOPS
BEGIN
-# delete from sl_event where ev_type = 'STORE_LISTEN' and
-#        (ev_data1 = '1' or ev_data2 = '1' or ev_data3 = '1');
DELETE 3
-# -- Seems OK...
-# commit;
COMMIT
The next time the slon for node 3 starts up, it will no longer find the "offensive" STORE_LISTEN events, and replication can continue. (You may then run into some other problem where an old stored event refers to no-longer-existent configuration...)
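If you want to check in advance for other such remnants, a query like the following, run against the surviving nodes using the cluster schema from the example above, shows whether any stored events still reference the dropped node:
select ev_origin, ev_seqno, ev_type, ev_data1, ev_data2, ev_data3
    from "_mailermailer".sl_event
    where ev_data1 = '1' or ev_data2 = '1' or ev_data3 = '1';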