/*-------------------------------------------------------------------------
 *
 * async.c
 *    Asynchronous notification: NOTIFY, LISTEN, UNLISTEN
 *
 * Portions Copyright (c) 1996-2013, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *    src/backend/commands/async.c
 *
 *-------------------------------------------------------------------------
 */

/*-------------------------------------------------------------------------
 * Async Notification Model as of 9.0:
 *
 * 1. Multiple backends on same machine. Multiple backends listening on
 *    several channels. (Channels are also called "conditions" in other
 *    parts of the code.)
 *
 * 2. There is one central queue in disk-based storage (directory pg_notify/),
 *    with actively-used pages mapped into shared memory by the slru.c module.
 *    All notification messages are placed in the queue and later read out
 *    by listening backends.
 *
 *    There is no central knowledge of which backend listens on which channel;
 *    every backend has its own list of interesting channels.
 *
 *    Although there is only one queue, notifications are treated as being
 *    database-local; this is done by including the sender's database OID
 *    in each notification message.  Listening backends ignore messages
 *    that don't match their database OID.  This is important because it
 *    ensures senders and receivers have the same database encoding and won't
 *    misinterpret non-ASCII text in the channel name or payload string.
 *
 *    Since notifications are not expected to survive database crashes,
 *    we can simply clean out the pg_notify data at any reboot, and there
 *    is no need for WAL support or fsync'ing.
 *
 * 3. Every backend that is listening on at least one channel registers by
 *    entering its PID into the array in AsyncQueueControl. It then scans all
 *    incoming notifications in the central queue and first compares the
 *    database OID of the notification with its own database OID and then
 *    compares the notified channel with the list of channels that it listens
 *    to. In case there is a match it delivers the notification event to its
 *    frontend.  Non-matching events are simply skipped.
 *
 * 4. The NOTIFY statement (routine Async_Notify) stores the notification in
 *    a backend-local list which will not be processed until transaction end.
 *
 *    Duplicate notifications from the same transaction are sent out as one
 *    notification only. This is done to save work when for example a trigger
 *    on a 2 million row table fires a notification for each row that has been
 *    changed. If the application needs to receive every single notification
 *    that has been sent, it can easily add some unique string into the extra
 *    payload parameter.
 *
 *    When the transaction is ready to commit, PreCommit_Notify() adds the
 *    pending notifications to the head of the queue. The head pointer of the
 *    queue always points to the next free position and a position is just a
 *    page number and the offset in that page. This is done before marking the
 *    transaction as committed in clog. If we run into problems writing the
 *    notifications, we can still call elog(ERROR, ...) and the transaction
 *    will roll back.
 *
 *    Once we have put all of the notifications into the queue, we return to
 *    CommitTransaction() which will then do the actual transaction commit.
 *
 *    After commit we are called another time (AtCommit_Notify()). Here we
 *    make the actual updates to the effective listen state (listenChannels).
 *
 *    Finally, after we are out of the transaction altogether, we check if
 *    we need to signal listening backends.  In SignalBackends() we scan the
 *    list of listening backends and send a PROCSIG_NOTIFY_INTERRUPT signal
 *    to every listening backend (we don't know which backend is listening on
 *    which channel so we must signal them all). We can exclude backends that
 *    are already up to date, though.  We don't bother with a self-signal
 *    either, but just process the queue directly.
 *
 * 5. Upon receipt of a PROCSIG_NOTIFY_INTERRUPT signal, the signal handler
 *    can call inbound-notify processing immediately if this backend is idle
 *    (ie, it is waiting for a frontend command and is not within a transaction
 *    block).  Otherwise the handler may only set a flag, which will cause the
 *    processing to occur just before we next go idle.
 *
 *    Inbound-notify processing consists of reading all of the notifications
 *    that have arrived since scanning last time. We read every notification
 *    until we reach either a notification from an uncommitted transaction or
 *    the head pointer's position. Then we check if we were the laziest
 *    backend: if our pointer is set to the same position as the global tail
 *    pointer is set, then we move the global tail pointer ahead to where the
 *    second-laziest backend is (in general, we take the MIN of the current
 *    head position and all active backends' new tail pointers). Whenever we
 *    move the global tail pointer we also truncate now-unused pages (i.e.,
 *    delete files in pg_notify/ that are no longer used).
 *
 * An application that listens on the same channel it notifies will get
 * NOTIFY messages for its own NOTIFYs.  These can be ignored, if not useful,
 * by comparing be_pid in the NOTIFY message to the application's own backend's
 * PID.  (As of FE/BE protocol 2.0, the backend's PID is provided to the
 * frontend during startup.)  The above design guarantees that notifies from
 * other backends will never be missed by ignoring self-notifies.
 *
 * The amount of shared memory used for notify management (NUM_ASYNC_BUFFERS)
 * can be varied without affecting anything but performance.  The maximum
 * amount of notification data that can be queued at one time is determined
 * by slru.c's wraparound limit; see QUEUE_MAX_PAGE below.
 *-------------------------------------------------------------------------
 */
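
/*
 * Illustrative sketch (not part of this file, and never compiled into the
 * backend): a minimal libpq client that LISTENs on a channel and discards
 * its own self-notifies by comparing be_pid against PQbackendPID(), as the
 * model notes above describe.  The channel name "my_channel" is made up
 * for the example.
 */
#ifdef ASYNC_USAGE_EXAMPLE
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include "libpq-fe.h"

static void
listen_example(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");
    PGnotify   *note;

    if (PQstatus(conn) != CONNECTION_OK)
        exit(1);
    PQclear(PQexec(conn, "LISTEN my_channel"));
    for (;;)
    {
        int         sock = PQsocket(conn);
        fd_set      mask;

        /* sleep until something happens on the connection */
        FD_ZERO(&mask);
        FD_SET(sock, &mask);
        select(sock + 1, &mask, NULL, NULL, NULL);

        PQconsumeInput(conn);
        while ((note = PQnotifies(conn)) != NULL)
        {
            /* skip self-notifies; report everything else */
            if (note->be_pid != PQbackendPID(conn))
                printf("channel \"%s\": %s\n", note->relname, note->extra);
            PQfreemem(note);
        }
    }
}
#endif   /* ASYNC_USAGE_EXAMPLE */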

#include "postgres.h"

#include <limits.h>
#include <unistd.h>
#include <signal.h>

#include "access/slru.h"
#include "access/transam.h"
#include "access/xact.h"
#include "catalog/pg_database.h"
#include "commands/async.h"
#include "funcapi.h"
#include "libpq/libpq.h"
#include "libpq/pqformat.h"
#include "miscadmin.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "storage/procsignal.h"
#include "storage/sinval.h"
#include "tcop/tcopprot.h"
#include "utils/builtins.h"
#include "utils/memutils.h"
#include "utils/ps_status.h"
#include "utils/timestamp.h"


/*
 * Maximum size of a NOTIFY payload, including terminating NULL.  This
 * must be kept small enough so that a notification message fits on one
 * SLRU page.  The magic fudge factor here is noncritical as long as it's
 * more than AsyncQueueEntryEmptySize --- we make it significantly bigger
 * than that, so changes in that data structure won't affect user-visible
 * restrictions.
 */
#define NOTIFY_PAYLOAD_MAX_LENGTH   (BLCKSZ - NAMEDATALEN - 128)
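
/*
 * Illustrative arithmetic: with the default BLCKSZ of 8192 and NAMEDATALEN
 * of 64, this allows payloads of up to 8000 bytes, including the
 * terminating NULL.
 */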

/*
 * Struct representing an entry in the global notify queue
 *
 * This struct declaration has the maximal length, but in a real queue entry
 * the data area is only big enough for the actual channel and payload strings
 * (each null-terminated).  AsyncQueueEntryEmptySize is the minimum possible
 * entry size, if both channel and payload strings are empty (but note it
 * doesn't include alignment padding).
 *
 * The "length" field should always be rounded up to the next QUEUEALIGN
 * multiple so that all fields are properly aligned.
 */
typedef struct AsyncQueueEntry
{
    int         length;         /* total allocated length of entry */
    Oid         dboid;          /* sender's database OID */
    TransactionId xid;          /* sender's XID */
    int32       srcPid;         /* sender's PID */
    char        data[NAMEDATALEN + NOTIFY_PAYLOAD_MAX_LENGTH];
} AsyncQueueEntry;

/* Currently, no field of AsyncQueueEntry requires more than int alignment */
#define QUEUEALIGN(len)     INTALIGN(len)

#define AsyncQueueEntryEmptySize    (offsetof(AsyncQueueEntry, data) + 2)
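
/*
 * Illustrative arithmetic: with 4-byte int, Oid, TransactionId and int32,
 * the fixed header above is 16 bytes, so AsyncQueueEntryEmptySize is 18 ---
 * the header plus one NUL apiece for an empty channel and an empty payload.
 */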

/*
 * Struct describing a queue position, and assorted macros for working with it
 */
typedef struct QueuePosition
{
    int         page;           /* SLRU page number */
    int         offset;         /* byte offset within page */
} QueuePosition;

#define QUEUE_POS_PAGE(x)       ((x).page)
#define QUEUE_POS_OFFSET(x)     ((x).offset)

#define SET_QUEUE_POS(x,y,z) \
    do { \
        (x).page = (y); \
        (x).offset = (z); \
    } while (0)

#define QUEUE_POS_EQUAL(x,y) \
     ((x).page == (y).page && (x).offset == (y).offset)

/* choose logically smaller QueuePosition */
#define QUEUE_POS_MIN(x,y) \
    (asyncQueuePagePrecedes((x).page, (y).page) ? (x) : \
     (x).page != (y).page ? (y) : \
     (x).offset < (y).offset ? (x) : (y))

/*
 * Struct describing a listening backend's status
 */
typedef struct QueueBackendStatus
{
    int32       pid;            /* either a PID or InvalidPid */
    QueuePosition pos;          /* backend has read queue up to here */
} QueueBackendStatus;

#define InvalidPid              (-1)

/*
 * Shared memory state for LISTEN/NOTIFY (excluding its SLRU stuff)
 *
 * The AsyncQueueControl structure is protected by the AsyncQueueLock.
 *
 * When holding the lock in SHARED mode, backends may only inspect their own
 * entries as well as the head and tail pointers. Consequently we can allow a
 * backend to update its own record while holding only SHARED lock (since no
 * other backend will inspect it).
 *
 * When holding the lock in EXCLUSIVE mode, backends can inspect the entries
 * of other backends and also change the head and tail pointers.
 *
 * In order to avoid deadlocks, whenever we need both locks, we always first
 * get AsyncQueueLock and then AsyncCtlLock.
 *
 * Each backend uses the backend[] array entry with index equal to its
 * BackendId (which can range from 1 to MaxBackends).  We rely on this to make
 * SendProcSignal fast.
 */
typedef struct AsyncQueueControl
{
    QueuePosition head;         /* head points to the next free location */
    QueuePosition tail;         /* the global tail is equivalent to the tail
                                 * of the "slowest" backend */
    TimestampTz lastQueueFillWarn;      /* time of last queue-full msg */
    QueueBackendStatus backend[1];      /* actually of length MaxBackends+1 */
    /* DO NOT ADD FURTHER STRUCT MEMBERS HERE */
} AsyncQueueControl;

static AsyncQueueControl *asyncQueueControl;

#define QUEUE_HEAD                  (asyncQueueControl->head)
#define QUEUE_TAIL                  (asyncQueueControl->tail)
#define QUEUE_BACKEND_PID(i)        (asyncQueueControl->backend[i].pid)
#define QUEUE_BACKEND_POS(i)        (asyncQueueControl->backend[i].pos)

/*
 * The SLRU buffer area through which we access the notification queue
 */
static SlruCtlData AsyncCtlData;

#define AsyncCtl                    (&AsyncCtlData)
#define QUEUE_PAGESIZE              BLCKSZ
#define QUEUE_FULL_WARN_INTERVAL    5000        /* warn at most once every 5s */

/*
 * slru.c currently assumes that all filenames are four characters of hex
 * digits. That means that we can use segments 0000 through FFFF.
 * Each segment contains SLRU_PAGES_PER_SEGMENT pages which gives us
 * the pages from 0 to SLRU_PAGES_PER_SEGMENT * 0x10000 - 1.
 *
 * It's of course possible to enhance slru.c, but this gives us so much
 * space already that it doesn't seem worth the trouble.
 *
 * The most data we can have in the queue at a time is QUEUE_MAX_PAGE/2
 * pages, because more than that would confuse slru.c into thinking there
 * was a wraparound condition.  With the default BLCKSZ this means there
 * can be up to 8GB of queued-and-not-read data.
 *
 * Note: it's possible to redefine QUEUE_MAX_PAGE with a smaller multiple of
 * SLRU_PAGES_PER_SEGMENT, for easier testing of queue-full behaviour.
 */
#define QUEUE_MAX_PAGE          (SLRU_PAGES_PER_SEGMENT * 0x10000 - 1)
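
/*
 * Illustrative arithmetic: with SLRU_PAGES_PER_SEGMENT = 32 this makes
 * QUEUE_MAX_PAGE = 32 * 0x10000 - 1 = 2097151, so QUEUE_MAX_PAGE/2 pages
 * of default 8kB BLCKSZ is indeed about 8GB of queued data.
 */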

/*
 * listenChannels identifies the channels we are actually listening to
 * (ie, have committed a LISTEN on).  It is a simple list of channel names,
 * allocated in TopMemoryContext.
 */
static List *listenChannels = NIL;      /* list of C strings */

/*
 * State for pending LISTEN/UNLISTEN actions consists of an ordered list of
 * all actions requested in the current transaction.  As explained above,
 * we don't actually change listenChannels until we reach transaction commit.
 *
 * The list is kept in CurTransactionContext.  In subtransactions, each
 * subtransaction has its own list in its own CurTransactionContext, but
 * successful subtransactions attach their lists to their parent's list.
 * Failed subtransactions simply discard their lists.
 */
typedef enum
{
    LISTEN_LISTEN,
    LISTEN_UNLISTEN,
    LISTEN_UNLISTEN_ALL
} ListenActionKind;

typedef struct
{
    ListenActionKind action;
    char        channel[1];     /* actually, as long as needed */
} ListenAction;

static List *pendingActions = NIL;      /* list of ListenAction */

static List *upperPendingActions = NIL; /* list of upper-xact lists */

/*
 * State for outbound notifies consists of a list of all channels+payloads
 * NOTIFYed in the current transaction. We do not actually perform a NOTIFY
 * until and unless the transaction commits.  pendingNotifies is NIL if no
 * NOTIFYs have been done in the current transaction.
 *
 * The list is kept in CurTransactionContext.  In subtransactions, each
 * subtransaction has its own list in its own CurTransactionContext, but
 * successful subtransactions attach their lists to their parent's list.
 * Failed subtransactions simply discard their lists.
 *
 * Note: the action and notify lists do not interact within a transaction.
 * In particular, if a transaction does NOTIFY and then LISTEN on the same
 * condition name, it will get a self-notify at commit.  This is a bit odd
 * but is consistent with our historical behavior.
 */
typedef struct Notification
{
    char       *channel;        /* channel name */
    char       *payload;        /* payload string (can be empty) */
} Notification;

static List *pendingNotifies = NIL;     /* list of Notifications */

static List *upperPendingNotifies = NIL;        /* list of upper-xact lists */

/*
 * State for inbound notifications consists of two flags: one saying whether
 * the signal handler is currently allowed to call ProcessIncomingNotify
 * directly, and one saying whether the signal has occurred but the handler
 * was not allowed to call ProcessIncomingNotify at the time.
 *
 * NB: the "volatile" on these declarations is critical!  If your compiler
 * does not grok "volatile", you'd be best advised to compile this file
 * with all optimization turned off.
 */
static volatile sig_atomic_t notifyInterruptEnabled = 0;
static volatile sig_atomic_t notifyInterruptOccurred = 0;

/* True if we've registered an on_shmem_exit cleanup */
static bool unlistenExitRegistered = false;

/* True if we're currently registered as a listener in asyncQueueControl */
static bool amRegisteredListener = false;

/* has this backend sent notifications in the current transaction? */
static bool backendHasSentNotifications = false;

/* GUC parameter */
bool        Trace_notify = false;

/* local function prototypes */
static bool asyncQueuePagePrecedes(int p, int q);
static void queue_listen(ListenActionKind action, const char *channel);
static void Async_UnlistenOnExit(int code, Datum arg);
static void Exec_ListenPreCommit(void);
static void Exec_ListenCommit(const char *channel);
static void Exec_UnlistenCommit(const char *channel);
static void Exec_UnlistenAllCommit(void);
static bool IsListeningOn(const char *channel);
static void asyncQueueUnregister(void);
static bool asyncQueueIsFull(void);
static bool asyncQueueAdvance(QueuePosition *position, int entryLength);
static void asyncQueueNotificationToEntry(Notification *n, AsyncQueueEntry *qe);
static ListCell *asyncQueueAddEntries(ListCell *nextNotify);
static void asyncQueueFillWarning(void);
static bool SignalBackends(void);
static void asyncQueueReadAllNotifications(void);
static bool asyncQueueProcessPageEntries(QueuePosition *current,
                             QueuePosition stop,
                             char *page_buffer);
static void asyncQueueAdvanceTail(void);
static void ProcessIncomingNotify(void);
static void NotifyMyFrontEnd(const char *channel,
                 const char *payload,
                 int32 srcPid);
static bool AsyncExistsPendingNotify(const char *channel, const char *payload);
static void ClearPendingActionsAndNotifies(void);

/*
 * We will work on the page range of 0..QUEUE_MAX_PAGE.
 */
static bool
asyncQueuePagePrecedes(int p, int q)
{
    int         diff;

    /*
     * We have to compare modulo (QUEUE_MAX_PAGE+1)/2.  Both inputs should be
     * in the range 0..QUEUE_MAX_PAGE.
     */
    Assert(p >= 0 && p <= QUEUE_MAX_PAGE);
    Assert(q >= 0 && q <= QUEUE_MAX_PAGE);

    diff = p - q;
    if (diff >= ((QUEUE_MAX_PAGE + 1) / 2))
        diff -= QUEUE_MAX_PAGE + 1;
    else if (diff < -((QUEUE_MAX_PAGE + 1) / 2))
        diff += QUEUE_MAX_PAGE + 1;
    return diff < 0;
}
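
/*
 * Worked example (illustrative): with QUEUE_MAX_PAGE + 1 = 2097152,
 * asyncQueuePagePrecedes(2097000, 10) computes diff = 2096990, which is
 * >= 1048576, so diff becomes -162 and the result is true: once the head
 * wraps around past page 0, page 2097000 logically precedes page 10.
 */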

/*
 * Report space needed for our shared memory area
 */
Size
AsyncShmemSize(void)
{
    Size        size;

    /* This had better match AsyncShmemInit */
    size = mul_size(MaxBackends, sizeof(QueueBackendStatus));
    size = add_size(size, sizeof(AsyncQueueControl));

    size = add_size(size, SimpleLruShmemSize(NUM_ASYNC_BUFFERS, 0));

    return size;
}
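
/*
 * Illustrative arithmetic: with MaxBackends = 100 and a 12-byte
 * QueueBackendStatus, the control structure needs only a bit over 1kB;
 * the SLRU buffers (NUM_ASYNC_BUFFERS pages of BLCKSZ bytes each, plus
 * bookkeeping) dominate the total.
 */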

/*
 * Initialize our shared memory area
 */
void
AsyncShmemInit(void)
{
    bool        found;
    int         slotno;
    Size        size;

    /*
     * Create or attach to the AsyncQueueControl structure.
     *
     * The used entries in the backend[] array run from 1 to MaxBackends.
     * sizeof(AsyncQueueControl) already includes space for the unused zero'th
     * entry, but we need to add on space for the used entries.
     */
    size = mul_size(MaxBackends, sizeof(QueueBackendStatus));
    size = add_size(size, sizeof(AsyncQueueControl));

    asyncQueueControl = (AsyncQueueControl *)
        ShmemInitStruct("Async Queue Control", size, &found);

    if (!found)
    {
        /* First time through, so initialize it */
        int         i;

        SET_QUEUE_POS(QUEUE_HEAD, 0, 0);
        SET_QUEUE_POS(QUEUE_TAIL, 0, 0);
        asyncQueueControl->lastQueueFillWarn = 0;
        /* zero'th entry won't be used, but let's initialize it anyway */
        for (i = 0; i <= MaxBackends; i++)
        {
            QUEUE_BACKEND_PID(i) = InvalidPid;
            SET_QUEUE_POS(QUEUE_BACKEND_POS(i), 0, 0);
        }
    }

    /*
     * Set up SLRU management of the pg_notify data.
     */
    AsyncCtl->PagePrecedes = asyncQueuePagePrecedes;
    SimpleLruInit(AsyncCtl, "Async Ctl", NUM_ASYNC_BUFFERS, 0,
                  AsyncCtlLock, "pg_notify");
    /* Override default assumption that writes should be fsync'd */
    AsyncCtl->do_fsync = false;

    if (!found)
    {
        /*
         * During start or reboot, clean out the pg_notify directory.
         */
        (void) SlruScanDirectory(AsyncCtl, SlruScanDirCbDeleteAll, NULL);

        /* Now initialize page zero to empty */
        LWLockAcquire(AsyncCtlLock, LW_EXCLUSIVE);
        slotno = SimpleLruZeroPage(AsyncCtl, QUEUE_POS_PAGE(QUEUE_HEAD));
        /* This write is just to verify that pg_notify/ is writable */
        SimpleLruWritePage(AsyncCtl, slotno);
        LWLockRelease(AsyncCtlLock);
    }
}


/*
 * pg_notify -
 *    SQL function to send a notification event
 */
Datum
pg_notify(PG_FUNCTION_ARGS)
{
    const char *channel;
    const char *payload;

    if (PG_ARGISNULL(0))
        channel = "";
    else
        channel = text_to_cstring(PG_GETARG_TEXT_PP(0));

    if (PG_ARGISNULL(1))
        payload = "";
    else
        payload = text_to_cstring(PG_GETARG_TEXT_PP(1));

    /* For NOTIFY as a statement, this is checked in ProcessUtility */
    PreventCommandDuringRecovery("NOTIFY");

    Async_Notify(channel, payload);

    PG_RETURN_VOID();
}
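
/*
 * Typical SQL-level usage (illustrative):
 *      SELECT pg_notify('my_channel', 'my_payload');
 * which is equivalent to NOTIFY my_channel, 'my_payload' but can take
 * non-constant arguments.
 */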


/*
 * Async_Notify
 *
 *      This is executed by the SQL notify command.
 *
 *      Adds the message to the list of pending notifies.
 *      Actual notification happens during transaction commit.
 *      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 */
void
Async_Notify(const char *channel, const char *payload)
{
    Notification *n;
    MemoryContext oldcontext;

    if (Trace_notify)
        elog(DEBUG1, "Async_Notify(%s)", channel);

    /* a channel name must be specified */
    if (!channel || !strlen(channel))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("channel name cannot be empty")));

    if (strlen(channel) >= NAMEDATALEN)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("channel name too long")));

    if (payload)
    {
        if (strlen(payload) >= NOTIFY_PAYLOAD_MAX_LENGTH)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("payload string too long")));
    }

    /* no point in making duplicate entries in the list ... */
    if (AsyncExistsPendingNotify(channel, payload))
        return;

    /*
     * The notification list needs to live until end of transaction, so store
     * it in the transaction context.
     */
    oldcontext = MemoryContextSwitchTo(CurTransactionContext);

    n = (Notification *) palloc(sizeof(Notification));
    n->channel = pstrdup(channel);
    if (payload)
        n->payload = pstrdup(payload);
    else
        n->payload = "";

    /*
     * We want to preserve the order so we need to append every notification.
     * See comments at AsyncExistsPendingNotify().
     */
    pendingNotifies = lappend(pendingNotifies, n);

    MemoryContextSwitchTo(oldcontext);
}
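
/*
 * For example (illustrative): within one transaction,
 *      NOTIFY foo, 'x';  NOTIFY foo, 'x';  NOTIFY foo, 'y';
 * queues only two notifications at commit --- the duplicate (foo,'x') is
 * collapsed by AsyncExistsPendingNotify, per the model notes above.
 */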

/*
 * queue_listen
 *      Common code for listen, unlisten, unlisten all commands.
 *
 *      Adds the request to the list of pending actions.
 *      Actual update of the listenChannels list happens during transaction
 *      commit.
 */
static void
queue_listen(ListenActionKind action, const char *channel)
{
    MemoryContext oldcontext;
    ListenAction *actrec;

    /*
     * Unlike Async_Notify, we don't try to collapse out duplicates. It would
     * be too complicated to ensure we get the right interactions of
     * conflicting LISTEN/UNLISTEN/UNLISTEN_ALL, and it's unlikely that there
     * would be any performance benefit anyway in sane applications.
     */
    oldcontext = MemoryContextSwitchTo(CurTransactionContext);

    /* space for terminating null is included in sizeof(ListenAction) */
    actrec = (ListenAction *) palloc(sizeof(ListenAction) + strlen(channel));
    actrec->action = action;
    strcpy(actrec->channel, channel);

    pendingActions = lappend(pendingActions, actrec);

    MemoryContextSwitchTo(oldcontext);
}

/*
 * Async_Listen
 *
 *      This is executed by the SQL listen command.
 */
void
Async_Listen(const char *channel)
{
    if (Trace_notify)
        elog(DEBUG1, "Async_Listen(%s,%d)", channel, MyProcPid);

    queue_listen(LISTEN_LISTEN, channel);
}

/*
 * Async_Unlisten
 *
 *      This is executed by the SQL unlisten command.
 */
void
Async_Unlisten(const char *channel)
{
    if (Trace_notify)
        elog(DEBUG1, "Async_Unlisten(%s,%d)", channel, MyProcPid);

    /* If we couldn't possibly be listening, no need to queue anything */
    if (pendingActions == NIL && !unlistenExitRegistered)
        return;

    queue_listen(LISTEN_UNLISTEN, channel);
}

/*
 * Async_UnlistenAll
 *
 *      This is invoked by UNLISTEN * command, and also at backend exit.
 */
void
Async_UnlistenAll(void)
{
    if (Trace_notify)
        elog(DEBUG1, "Async_UnlistenAll(%d)", MyProcPid);

    /* If we couldn't possibly be listening, no need to queue anything */
    if (pendingActions == NIL && !unlistenExitRegistered)
        return;

    queue_listen(LISTEN_UNLISTEN_ALL, "");
}

/*
 * SQL function: return a set of the channel names this backend is actively
 * listening to.
 *
 * Note: this coding relies on the fact that the listenChannels list cannot
 * change within a transaction.
 */
Datum
pg_listening_channels(PG_FUNCTION_ARGS)
{
    FuncCallContext *funcctx;
    ListCell  **lcp;

    /* stuff done only on the first call of the function */
    if (SRF_IS_FIRSTCALL())
    {
        MemoryContext oldcontext;

        /* create a function context for cross-call persistence */
        funcctx = SRF_FIRSTCALL_INIT();

        /* switch to memory context appropriate for multiple function calls */
        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

        /* allocate memory for user context */
        lcp = (ListCell **) palloc(sizeof(ListCell *));
        *lcp = list_head(listenChannels);
        funcctx->user_fctx = (void *) lcp;

        MemoryContextSwitchTo(oldcontext);
    }

    /* stuff done on every call of the function */
    funcctx = SRF_PERCALL_SETUP();
    lcp = (ListCell **) funcctx->user_fctx;

    while (*lcp != NULL)
    {
        char       *channel = (char *) lfirst(*lcp);

        *lcp = lnext(*lcp);
        SRF_RETURN_NEXT(funcctx, CStringGetTextDatum(channel));
    }

    SRF_RETURN_DONE(funcctx);
}
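
/*
 * Typical SQL-level usage (illustrative):
 *      LISTEN my_channel;
 *      SELECT pg_listening_channels();    -- returns "my_channel"
 */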

/*
 * Async_UnlistenOnExit
 *
 * This is executed at backend exit if we have done any LISTENs in this
 * backend.  It might not be necessary anymore, if the user UNLISTENed
 * everything, but we don't try to detect that case.
 */
static void
Async_UnlistenOnExit(int code, Datum arg)
{
    Exec_UnlistenAllCommit();
    asyncQueueUnregister();
}

/*
 * AtPrepare_Notify
 *
 *      This is called at the prepare phase of a two-phase
 *      transaction.  Save the state for possible commit later.
 */
void
AtPrepare_Notify(void)
{
    /* It's not allowed to have any pending LISTEN/UNLISTEN/NOTIFY actions */
    if (pendingActions || pendingNotifies)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot PREPARE a transaction that has executed LISTEN, UNLISTEN, or NOTIFY")));
}

/*
 * PreCommit_Notify
 *
 *      This is called at transaction commit, before actually committing to
 *      clog.
 *
 *      If there are pending LISTEN actions, make sure we are listed in the
 *      shared-memory listener array.  This must happen before commit to
 *      ensure we don't miss any notifies from transactions that commit
 *      just after ours.
 *
 *      If there are outbound notify requests in the pendingNotifies list,
 *      add them to the global queue.  We do that before commit so that
 *      we can still throw error if we run out of queue space.
 */
void
PreCommit_Notify(void)
{
    ListCell   *p;

    if (pendingActions == NIL && pendingNotifies == NIL)
        return;                 /* no relevant statements in this xact */

    if (Trace_notify)
        elog(DEBUG1, "PreCommit_Notify");

    /* Preflight for any pending listen/unlisten actions */
    foreach(p, pendingActions)
    {
        ListenAction *actrec = (ListenAction *) lfirst(p);

        switch (actrec->action)
        {
            case LISTEN_LISTEN:
                Exec_ListenPreCommit();
                break;
            case LISTEN_UNLISTEN:
                /* there is no Exec_UnlistenPreCommit() */
                break;
            case LISTEN_UNLISTEN_ALL:
                /* there is no Exec_UnlistenAllPreCommit() */
                break;
        }
    }

    /* Queue any pending notifies */
    if (pendingNotifies)
    {
        ListCell   *nextNotify;

        /*
         * Make sure that we have an XID assigned to the current transaction.
         * GetCurrentTransactionId is cheap if we already have an XID, but not
         * so cheap if we don't, and we'd prefer not to do that work while
         * holding AsyncQueueLock.
         */
        (void) GetCurrentTransactionId();

        /*
         * Serialize writers by acquiring a special lock that we hold till
         * after commit.  This ensures that queue entries appear in commit
         * order, and in particular that there are never uncommitted queue
         * entries ahead of committed ones, so an uncommitted transaction
         * can't block delivery of deliverable notifications.
         *
         * We use a heavyweight lock so that it'll automatically be released
         * after either commit or abort.  This also allows deadlocks to be
         * detected, though really a deadlock shouldn't be possible here.
         *
         * The lock is on "database 0", which is pretty ugly but it doesn't
         * seem worth inventing a special locktag category just for this.
         * (Historical note: before PG 9.0, a similar lock on "database 0" was
         * used by the flatfiles mechanism.)
         */
        LockSharedObject(DatabaseRelationId, InvalidOid, 0,
                         AccessExclusiveLock);

        /* Now push the notifications into the queue */
        backendHasSentNotifications = true;

        nextNotify = list_head(pendingNotifies);
        while (nextNotify != NULL)
        {
            /*
             * Add the pending notifications to the queue.  We acquire and
             * release AsyncQueueLock once per page, which might be overkill
             * but it does allow readers to get in while we're doing this.
             *
             * A full queue is very uncommon and should really not happen,
             * given that we have so much space available in the SLRU pages.
             * Nevertheless we need to deal with this possibility. Note that
             * when we get here we are in the process of committing our
             * transaction, but we have not yet committed to clog, so at this
             * point in time we can still roll the transaction back.
             */
            LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
            asyncQueueFillWarning();
            if (asyncQueueIsFull())
                ereport(ERROR,
                        (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                      errmsg("too many notifications in the NOTIFY queue")));
            nextNotify = asyncQueueAddEntries(nextNotify);
            LWLockRelease(AsyncQueueLock);
        }
    }
}

/*
 * AtCommit_Notify
 *
 *      This is called at transaction commit, after committing to clog.
 *
 *      Update listenChannels and clear transaction-local state.
 */
void
AtCommit_Notify(void)
{
    ListCell   *p;

    /*
     * Allow transactions that have not executed LISTEN/UNLISTEN/NOTIFY to
     * return as soon as possible
     */
    if (!pendingActions && !pendingNotifies)
        return;

    if (Trace_notify)
        elog(DEBUG1, "AtCommit_Notify");

    /* Perform any pending listen/unlisten actions */
    foreach(p, pendingActions)
    {
        ListenAction *actrec = (ListenAction *) lfirst(p);

        switch (actrec->action)
        {
            case LISTEN_LISTEN:
                Exec_ListenCommit(actrec->channel);
                break;
            case LISTEN_UNLISTEN:
                Exec_UnlistenCommit(actrec->channel);
                break;
            case LISTEN_UNLISTEN_ALL:
                Exec_UnlistenAllCommit();
                break;
        }
    }

    /* If no longer listening to anything, get out of listener array */
    if (amRegisteredListener && listenChannels == NIL)
        asyncQueueUnregister();

    /* And clean up */
    ClearPendingActionsAndNotifies();
}

/*
 * Exec_ListenPreCommit --- subroutine for PreCommit_Notify
 *
 * This function must make sure we are ready to catch any incoming messages.
 */
static void
Exec_ListenPreCommit(void)
{
    /*
     * Nothing to do if we are already listening to something, nor if we
     * already ran this routine in this transaction.
     */
    if (amRegisteredListener)
        return;

    if (Trace_notify)
        elog(DEBUG1, "Exec_ListenPreCommit(%d)", MyProcPid);

    /*
     * Before registering, make sure we will unlisten before dying. (Note:
     * this action does not get undone if we abort later.)
     */
    if (!unlistenExitRegistered)
    {
        on_shmem_exit(Async_UnlistenOnExit, 0);
        unlistenExitRegistered = true;
    }

    /*
     * This is our first LISTEN, so establish our pointer.
     *
     * We set our pointer to the global tail pointer and then move it forward
     * over already-committed notifications.  This ensures we cannot miss any
     * not-yet-committed notifications.  We might get a few more but that
     * doesn't hurt.
     */
    LWLockAcquire(AsyncQueueLock, LW_SHARED);
    QUEUE_BACKEND_POS(MyBackendId) = QUEUE_TAIL;
    QUEUE_BACKEND_PID(MyBackendId) = MyProcPid;
    LWLockRelease(AsyncQueueLock);

    /* Now we are listed in the global array, so remember we're listening */
    amRegisteredListener = true;

    /*
     * Try to move our pointer forward as far as possible. This will skip over
     * already-committed notifications. Still, we could get notifications that
     * have already committed before we started to LISTEN.
     *
     * Note that we are not yet listening on anything, so we won't deliver any
     * notification to the frontend.
     *
     * This will also advance the global tail pointer if possible.
     */
    asyncQueueReadAllNotifications();
}

/*
 * Exec_ListenCommit --- subroutine for AtCommit_Notify
 *
 * Add the channel to the list of channels we are listening on.
 */
static void
Exec_ListenCommit(const char *channel)
{
    MemoryContext oldcontext;

    /* Do nothing if we are already listening on this channel */
    if (IsListeningOn(channel))
        return;

    /*
     * Add the new channel name to listenChannels.
     *
     * XXX It is theoretically possible to get an out-of-memory failure here,
     * which would be bad because we already committed.  For the moment it
     * doesn't seem worth trying to guard against that, but maybe improve this
     * later.
     */
    oldcontext = MemoryContextSwitchTo(TopMemoryContext);
    listenChannels = lappend(listenChannels, pstrdup(channel));
    MemoryContextSwitchTo(oldcontext);
}

/*
 * Exec_UnlistenCommit --- subroutine for AtCommit_Notify
 *
 * Remove the specified channel name from listenChannels.
 */
static void
Exec_UnlistenCommit(const char *channel)
{
    ListCell   *q;
    ListCell   *prev;

    if (Trace_notify)
        elog(DEBUG1, "Exec_UnlistenCommit(%s,%d)", channel, MyProcPid);

    prev = NULL;
    foreach(q, listenChannels)
    {
        char       *lchan = (char *) lfirst(q);

        if (strcmp(lchan, channel) == 0)
        {
            listenChannels = list_delete_cell(listenChannels, q, prev);
            pfree(lchan);
            break;
        }
        prev = q;
    }

    /*
     * We do not complain about unlistening something not being listened to;
     * should we?
     */
}

/*
 * Exec_UnlistenAllCommit --- subroutine for AtCommit_Notify
 *
 *      Unlisten on all channels for this backend.
 */
static void
Exec_UnlistenAllCommit(void)
{
    if (Trace_notify)
        elog(DEBUG1, "Exec_UnlistenAllCommit(%d)", MyProcPid);

    list_free_deep(listenChannels);
    listenChannels = NIL;
}

/*
 * ProcessCompletedNotifies --- send out signals and self-notifies
 *
 * This is called from postgres.c just before going idle at the completion
 * of a transaction.  If we issued any notifications in the just-completed
 * transaction, send signals to other backends to process them, and also
 * process the queue ourselves to send messages to our own frontend.
 *
 * The reason that this is not done in AtCommit_Notify is that there is
 * a nonzero chance of errors here (for example, encoding conversion errors
 * while trying to format messages to our frontend).  An error during
 * AtCommit_Notify would be a PANIC condition.  The timing is also arranged
 * to ensure that a transaction's self-notifies are delivered to the frontend
 * before it gets the terminating ReadyForQuery message.
 *
 * Note that we send signals and process the queue even if the transaction
 * eventually aborted.  This is because we need to clean out whatever got
 * added to the queue.
 *
 * NOTE: we are outside of any transaction here.
 */
void
ProcessCompletedNotifies(void)
{
    MemoryContext caller_context;
    bool        signalled;

    /* Nothing to do if we didn't send any notifications */
    if (!backendHasSentNotifications)
        return;

    /*
     * We reset the flag immediately; otherwise, if any sort of error occurs
     * below, we'd be locked up in an infinite loop, because control will come
     * right back here after error cleanup.
     */
    backendHasSentNotifications = false;

    /*
     * We must preserve the caller's memory context (probably MessageContext)
     * across the transaction we do here.
     */
    caller_context = CurrentMemoryContext;

    if (Trace_notify)
        elog(DEBUG1, "ProcessCompletedNotifies");

    /*
     * We must run asyncQueueReadAllNotifications inside a transaction, else
     * bad things happen if it gets an error.
     */
    StartTransactionCommand();

    /* Send signals to other backends */
    signalled = SignalBackends();

    if (listenChannels != NIL)
    {
        /* Read the queue ourselves, and send relevant stuff to the frontend */
        asyncQueueReadAllNotifications();
    }
    else if (!signalled)
    {
        /*
         * If we found no other listening backends, and we aren't listening
         * ourselves, then we must execute asyncQueueAdvanceTail to flush the
         * queue, because ain't nobody else gonna do it.  This prevents queue
         * overflow when we're sending useless notifies to nobody. (A new
         * listener could have joined since we looked, but if so this is
         * harmless.)
         */
        asyncQueueAdvanceTail();
    }

    CommitTransactionCommand();

    MemoryContextSwitchTo(caller_context);

    /* We don't need pq_flush() here since postgres.c will do one shortly */
}

/*
 * Test whether we are actively listening on the given channel name.
 *
 * Note: this function is executed for every notification found in the queue.
 * Perhaps it is worth further optimization, eg convert the list to a sorted
 * array so we can binary-search it.  In practice the list is likely to be
 * fairly short, though.
 */
static bool
IsListeningOn(const char *channel)
{
    ListCell   *p;

    foreach(p, listenChannels)
    {
        char       *lchan = (char *) lfirst(p);

        if (strcmp(lchan, channel) == 0)
            return true;
    }
    return false;
}

/*
 * Remove our entry from the listeners array when we are no longer listening
 * on any channel.  NB: must not fail if we're already not listening.
 */
static void
asyncQueueUnregister(void)
{
    bool        advanceTail;

    Assert(listenChannels == NIL);      /* else caller error */

    if (!amRegisteredListener)          /* nothing to do */
        return;

    LWLockAcquire(AsyncQueueLock, LW_SHARED);
    /* check if entry is valid and oldest ... */
    advanceTail = (MyProcPid == QUEUE_BACKEND_PID(MyBackendId)) &&
        QUEUE_POS_EQUAL(QUEUE_BACKEND_POS(MyBackendId), QUEUE_TAIL);
    /* ... then mark it invalid */
    QUEUE_BACKEND_PID(MyBackendId) = InvalidPid;
    LWLockRelease(AsyncQueueLock);

    /* mark ourselves as no longer listed in the global array */
    amRegisteredListener = false;

    /* If we were the laziest backend, try to advance the tail pointer */
    if (advanceTail)
        asyncQueueAdvanceTail();
}

/*
 * Test whether there is room to insert more notification messages.
 *
 * Caller must hold at least shared AsyncQueueLock.
 */
static bool
asyncQueueIsFull(void)
{
    int         nexthead;
    int         boundary;

    /*
     * The queue is full if creating a new head page would create a page that
     * logically precedes the current global tail pointer, ie, the head
     * pointer would wrap around compared to the tail.  We cannot create such
     * a head page for fear of confusing slru.c.  For safety we round the tail
     * pointer back to a segment boundary (compare the truncation logic in
     * asyncQueueAdvanceTail).
     *
     * Note that this test is *not* dependent on how much space there is on
     * the current head page.  This is necessary because asyncQueueAddEntries
     * might try to create the next head page in any case.
     */
    nexthead = QUEUE_POS_PAGE(QUEUE_HEAD) + 1;
    if (nexthead > QUEUE_MAX_PAGE)
        nexthead = 0;           /* wrap around */
    boundary = QUEUE_POS_PAGE(QUEUE_TAIL);
    boundary -= boundary % SLRU_PAGES_PER_SEGMENT;
    return asyncQueuePagePrecedes(nexthead, boundary);
}

/*
 * Advance the QueuePosition to the next entry, assuming that the current
 * entry is of length entryLength.  If we jump to a new page the function
 * returns true, else false.
 */
static bool
asyncQueueAdvance(QueuePosition *position, int entryLength)
{
    int         pageno = QUEUE_POS_PAGE(*position);
    int         offset = QUEUE_POS_OFFSET(*position);
    bool        pageJump = false;

    /*
     * Move to the next writing position: First jump over what we have just
     * written or read.
     */
    offset += entryLength;
    Assert(offset <= QUEUE_PAGESIZE);

    /*
     * In a second step check if another entry can possibly be written to the
     * page. If so, stay here, we have reached the next position. If not, then
     * we need to move on to the next page.
     */
    if (offset + QUEUEALIGN(AsyncQueueEntryEmptySize) > QUEUE_PAGESIZE)
    {
        pageno++;
        if (pageno > QUEUE_MAX_PAGE)
            pageno = 0;         /* wrap around */
        offset = 0;
        pageJump = true;
    }

    SET_QUEUE_POS(*position, pageno, offset);
    return pageJump;
}
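
/*
 * Worked example (illustrative, default 8192-byte pages): after reading a
 * 24-byte entry at offset 8152, offset becomes 8176; since 8176 plus the
 * 20-byte QUEUEALIGN'd minimum entry would exceed 8192, we advance to
 * offset 0 of the next page and return true.
 */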

/*
 * Fill the AsyncQueueEntry at *qe with an outbound notification message.
 */
static void
asyncQueueNotificationToEntry(Notification *n, AsyncQueueEntry *qe)
{
    size_t      channellen = strlen(n->channel);
    size_t      payloadlen = strlen(n->payload);
    int         entryLength;

    Assert(channellen < NAMEDATALEN);
    Assert(payloadlen < NOTIFY_PAYLOAD_MAX_LENGTH);

    /* The terminators are already included in AsyncQueueEntryEmptySize */
    entryLength = AsyncQueueEntryEmptySize + payloadlen + channellen;
    entryLength = QUEUEALIGN(entryLength);
    qe->length = entryLength;
    qe->dboid = MyDatabaseId;
    qe->xid = GetCurrentTransactionId();
    qe->srcPid = MyProcPid;
    memcpy(qe->data, n->channel, channellen + 1);
    memcpy(qe->data + channellen + 1, n->payload, payloadlen + 1);
}
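
/*
 * Worked example (illustrative): for channel "foo" and payload "hi", the
 * data area holds the bytes f o o \0 h i \0, and entryLength is
 * QUEUEALIGN(18 + 2 + 3) = 24 on a platform where the fixed header is 16
 * bytes.
 */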
01260 
01261 /*
01262  * Add pending notifications to the queue.
01263  *
01264  * We go page by page here, i.e. we stop once we have to go to a new page but
01265  * we will be called again and then fill that next page. If an entry does not
01266  * fit into the current page, we write a dummy entry with an InvalidOid as the
01267  * database OID in order to fill the page. So every page is always used up to
01268  * the last byte which simplifies reading the page later.
01269  *
01270  * We are passed the list cell containing the next notification to write
01271  * and return the first still-unwritten cell back.  Eventually we will return
01272  * NULL indicating all is done.
01273  *
01274  * We are holding AsyncQueueLock already from the caller and grab AsyncCtlLock
01275  * locally in this function.
01276  */
01277 static ListCell *
01278 asyncQueueAddEntries(ListCell *nextNotify)
01279 {
01280     AsyncQueueEntry qe;
01281     QueuePosition queue_head;
01282     int         pageno;
01283     int         offset;
01284     int         slotno;
01285 
01286     /* We hold both AsyncQueueLock and AsyncCtlLock during this operation */
01287     LWLockAcquire(AsyncCtlLock, LW_EXCLUSIVE);
01288 
01289     /*
01290      * We work with a local copy of QUEUE_HEAD, which we write back to shared
01291      * memory upon exiting.  The reason for this is that if we have to advance
01292      * to a new page, SimpleLruZeroPage might fail (out of disk space, for
01293      * instance), and we must not advance QUEUE_HEAD if it does.  (Otherwise,
01294      * subsequent insertions would try to put entries into a page that slru.c
01295      * thinks doesn't exist yet.)  So, use a local position variable.  Note
01296      * that if we do fail, any already-inserted queue entries are forgotten;
01297      * this is okay, since they'd be useless anyway after our transaction
01298      * rolls back.
01299      */
01300     queue_head = QUEUE_HEAD;
01301 
01302     /* Fetch the current page */
01303     pageno = QUEUE_POS_PAGE(queue_head);
01304     slotno = SimpleLruReadPage(AsyncCtl, pageno, true, InvalidTransactionId);
01305     /* Note we mark the page dirty before writing in it */
01306     AsyncCtl->shared->page_dirty[slotno] = true;
01307 
01308     while (nextNotify != NULL)
01309     {
01310         Notification *n = (Notification *) lfirst(nextNotify);
01311 
01312         /* Construct a valid queue entry in local variable qe */
01313         asyncQueueNotificationToEntry(n, &qe);
01314 
01315         offset = QUEUE_POS_OFFSET(queue_head);
01316 
01317         /* Check whether the entry really fits on the current page */
01318         if (offset + qe.length <= QUEUE_PAGESIZE)
01319         {
01320             /* OK, so advance nextNotify past this item */
01321             nextNotify = lnext(nextNotify);
01322         }
01323         else
01324         {
01325             /*
01326              * Write a dummy entry to fill up the page. Actually readers will
01327              * only check dboid and since it won't match any reader's database
01328              * OID, they will ignore this entry and move on.
01329              */
01330             qe.length = QUEUE_PAGESIZE - offset;
01331             qe.dboid = InvalidOid;
01332             qe.data[0] = '\0';  /* empty channel */
01333             qe.data[1] = '\0';  /* empty payload */
01334         }
01335 
01336         /* Now copy qe into the shared buffer page */
01337         memcpy(AsyncCtl->shared->page_buffer[slotno] + offset,
01338                &qe,
01339                qe.length);
01340 
01341         /* Advance queue_head appropriately, and detect if page is full */
01342         if (asyncQueueAdvance(&(queue_head), qe.length))
01343         {
01344             /*
01345              * Page is full, so we're done here, but first fill the next page
01346              * with zeroes.  The reason to do this is to ensure that slru.c's
01347              * idea of the head page is always the same as ours, which avoids
01348              * boundary problems in SimpleLruTruncate.  The test in
01349              * asyncQueueIsFull() ensured that there is room to create this
01350              * page without overrunning the queue.
01351              */
01352             slotno = SimpleLruZeroPage(AsyncCtl, QUEUE_POS_PAGE(queue_head));
01353             /* And exit the loop */
01354             break;
01355         }
01356     }
01357 
01358     /* Success, so update the global QUEUE_HEAD */
01359     QUEUE_HEAD = queue_head;
01360 
01361     LWLockRelease(AsyncCtlLock);
01362 
01363     return nextNotify;
01364 }
01365 
01366 /*
01367  * Check whether the queue is at least half full, and emit a warning if so.
01368  *
01369  * This is unlikely given the size of the queue, but possible.
01370  * The warnings show up at most once every QUEUE_FULL_WARN_INTERVAL.
01371  *
01372  * Caller must hold exclusive AsyncQueueLock.
01373  */
01374 static void
01375 asyncQueueFillWarning(void)
01376 {
01377     int         headPage = QUEUE_POS_PAGE(QUEUE_HEAD);
01378     int         tailPage = QUEUE_POS_PAGE(QUEUE_TAIL);
01379     int         occupied;
01380     double      fillDegree;
01381     TimestampTz t;
01382 
01383     occupied = headPage - tailPage;
01384 
01385     if (occupied == 0)
01386         return;                 /* fast exit for common case */
01387 
01388     if (occupied < 0)
01389     {
01390         /* head has wrapped around, tail not yet */
01391         occupied += QUEUE_MAX_PAGE + 1;
01392     }
01393 
01394     fillDegree = (double) occupied / (double) ((QUEUE_MAX_PAGE + 1) / 2);
01395 
01396     if (fillDegree < 0.5)
01397         return;
01398 
01399     t = GetCurrentTimestamp();
01400 
01401     if (TimestampDifferenceExceeds(asyncQueueControl->lastQueueFillWarn,
01402                                    t, QUEUE_FULL_WARN_INTERVAL))
01403     {
01404         QueuePosition min = QUEUE_HEAD;
01405         int32       minPid = InvalidPid;
01406         int         i;
01407 
01408         for (i = 1; i <= MaxBackends; i++)
01409         {
01410             if (QUEUE_BACKEND_PID(i) != InvalidPid)
01411             {
01412                 min = QUEUE_POS_MIN(min, QUEUE_BACKEND_POS(i));
01413                 if (QUEUE_POS_EQUAL(min, QUEUE_BACKEND_POS(i)))
01414                     minPid = QUEUE_BACKEND_PID(i);
01415             }
01416         }
01417 
01418         ereport(WARNING,
01419                 (errmsg("NOTIFY queue is %.0f%% full", fillDegree * 100),
01420                  (minPid != InvalidPid ?
01421                   errdetail("The server process with PID %d is among those with the oldest transactions.", minPid)
01422                   : 0),
01423                  (minPid != InvalidPid ?
01424                   errhint("The NOTIFY queue cannot be emptied until that process ends its current transaction.")
01425                   : 0)));
01426 
01427         asyncQueueControl->lastQueueFillWarn = t;
01428     }
01429 }
01430 
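/*
 * Worked example (illustration only; the real constants depend on the
 * build).  Suppose QUEUE_MAX_PAGE + 1 is 65536 pages.  Only half the page
 * range may be occupied at once, so that the circular page-number
 * comparisons stay unambiguous; hence the denominator of 32768 pages for
 * "100% full".  With headPage = 50000 and tailPage = 30000:
 *
 *	occupied   = 50000 - 30000 = 20000
 *	fillDegree = 20000 / 32768 ~= 0.61
 *
 * Since 0.61 >= 0.5, and subject to the QUEUE_FULL_WARN_INTERVAL rate
 * limit, we emit "NOTIFY queue is 61% full", naming the PID whose read
 * pointer is furthest behind.
 */
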
01431 /*
01432  * Send signals to all listening backends (except our own).
01433  *
01434  * Returns true if we sent at least one signal.
01435  *
01436  * Since we need the EXCLUSIVE lock anyway, we also check the position of the
01437  * other backends; if one is already up-to-date we don't signal it.
01438  * This can happen if concurrent notifying transactions have sent a signal and
01439  * the signaled backend has read the other notifications and ours in the same
01440  * step.
01441  *
01442  * Since we know the BackendId and the PID, the signalling is quite cheap.
01443  */
01444 static bool
01445 SignalBackends(void)
01446 {
01447     bool        signalled = false;
01448     int32      *pids;
01449     BackendId  *ids;
01450     int         count;
01451     int         i;
01452     int32       pid;
01453 
01454     /*
01455      * Identify all backends that are listening and not already up-to-date. We
01456      * don't want to send signals while holding the AsyncQueueLock, so we just
01457      * build a list of target PIDs.
01458      *
01459      * XXX in principle these pallocs could fail, which would be bad. Maybe
01460      * preallocate the arrays?  But in practice this is only run in trivial
01461      * transactions, so there should surely be space available.
01462      */
01463     pids = (int32 *) palloc(MaxBackends * sizeof(int32));
01464     ids = (BackendId *) palloc(MaxBackends * sizeof(BackendId));
01465     count = 0;
01466 
01467     LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
01468     for (i = 1; i <= MaxBackends; i++)
01469     {
01470         pid = QUEUE_BACKEND_PID(i);
01471         if (pid != InvalidPid && pid != MyProcPid)
01472         {
01473             QueuePosition pos = QUEUE_BACKEND_POS(i);
01474 
01475             if (!QUEUE_POS_EQUAL(pos, QUEUE_HEAD))
01476             {
01477                 pids[count] = pid;
01478                 ids[count] = i;
01479                 count++;
01480             }
01481         }
01482     }
01483     LWLockRelease(AsyncQueueLock);
01484 
01485     /* Now send signals */
01486     for (i = 0; i < count; i++)
01487     {
01488         pid = pids[i];
01489 
01490         /*
01491          * Note: assuming things aren't broken, a signal failure here could
01492          * only occur if the target backend exited since we released
01493          * AsyncQueueLock, which is unlikely but certainly possible.  So we
01494          * just log a low-level debug message if it happens.
01495          */
01496         if (SendProcSignal(pid, PROCSIG_NOTIFY_INTERRUPT, ids[i]) < 0)
01497             elog(DEBUG3, "could not signal backend with PID %d: %m", pid);
01498         else
01499             signalled = true;
01500     }
01501 
01502     pfree(pids);
01503     pfree(ids);
01504 
01505     return signalled;
01506 }
01507 
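/*
 * End-to-end illustration (not part of the original file), assuming two
 * sessions connected to the same database:
 *
 *	Session 1:                  Session 2:
 *	LISTEN foo;
 *	                            NOTIFY foo, 'hello';
 *	                            -- at commit, the entry is appended to the
 *	                            -- pg_notify queue and SignalBackends()
 *	                            -- sends PROCSIG_NOTIFY_INTERRUPT to
 *	                            -- session 1's backend
 *	Asynchronous notification
 *	"foo" with payload "hello"
 *	received from server process
 *	with PID nnnn.
 *
 * The last lines are how psql renders the 'A' protocol message built by
 * NotifyMyFrontEnd() below.
 */
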
01508 /*
01509  * AtAbort_Notify
01510  *
01511  *  This is called at transaction abort.
01512  *
01513  *  Gets rid of pending actions and outbound notifies that we would have
01514  *  executed if the transaction got committed.
01515  */
01516 void
01517 AtAbort_Notify(void)
01518 {
01519     /*
01520      * If we LISTEN but then roll back the transaction after PreCommit_Notify,
01521      * we have registered as a listener but have not made any entry in
01522      * listenChannels.  In that case, deregister again.
01523      */
01524     if (amRegisteredListener && listenChannels == NIL)
01525         asyncQueueUnregister();
01526 
01527     /* And clean up */
01528     ClearPendingActionsAndNotifies();
01529 }
01530 
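/*
 * Illustration (not part of the original file) of the corner case handled
 * above, reconstructed from the comment:
 *
 *	begin;
 *	LISTEN foo;
 *	commit;     -- during pre-commit we register in the shared listener
 *	            -- array; if the commit then errors out before
 *	            -- listenChannels is updated, AtAbort_Notify sees
 *	            -- amRegisteredListener set with listenChannels == NIL
 *	            -- and calls asyncQueueUnregister() to undo it.
 */
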
01531 /*
01532  * AtSubStart_Notify() --- Take care of subtransaction start.
01533  *
01534  * Push empty state for the new subtransaction.
01535  */
01536 void
01537 AtSubStart_Notify(void)
01538 {
01539     MemoryContext old_cxt;
01540 
01541     /* Keep the list-of-lists in TopTransactionContext for simplicity */
01542     old_cxt = MemoryContextSwitchTo(TopTransactionContext);
01543 
01544     upperPendingActions = lcons(pendingActions, upperPendingActions);
01545 
01546     Assert(list_length(upperPendingActions) ==
01547            GetCurrentTransactionNestLevel() - 1);
01548 
01549     pendingActions = NIL;
01550 
01551     upperPendingNotifies = lcons(pendingNotifies, upperPendingNotifies);
01552 
01553     Assert(list_length(upperPendingNotifies) ==
01554            GetCurrentTransactionNestLevel() - 1);
01555 
01556     pendingNotifies = NIL;
01557 
01558     MemoryContextSwitchTo(old_cxt);
01559 }
01560 
01561 /*
01562  * AtSubCommit_Notify() --- Take care of subtransaction commit.
01563  *
01564  * Reassign all items in the pending lists to the parent transaction.
01565  */
01566 void
01567 AtSubCommit_Notify(void)
01568 {
01569     List       *parentPendingActions;
01570     List       *parentPendingNotifies;
01571 
01572     parentPendingActions = (List *) linitial(upperPendingActions);
01573     upperPendingActions = list_delete_first(upperPendingActions);
01574 
01575     Assert(list_length(upperPendingActions) ==
01576            GetCurrentTransactionNestLevel() - 2);
01577 
01578     /*
01579      * Mustn't try to eliminate duplicates here --- see queue_listen()
01580      */
01581     pendingActions = list_concat(parentPendingActions, pendingActions);
01582 
01583     parentPendingNotifies = (List *) linitial(upperPendingNotifies);
01584     upperPendingNotifies = list_delete_first(upperPendingNotifies);
01585 
01586     Assert(list_length(upperPendingNotifies) ==
01587            GetCurrentTransactionNestLevel() - 2);
01588 
01589     /*
01590      * We could try to eliminate duplicates here, but it seems not worthwhile.
01591      */
01592     pendingNotifies = list_concat(parentPendingNotifies, pendingNotifies);
01593 }
01594 
01595 /*
01596  * AtSubAbort_Notify() --- Take care of subtransaction abort.
01597  */
01598 void
01599 AtSubAbort_Notify(void)
01600 {
01601     int         my_level = GetCurrentTransactionNestLevel();
01602 
01603     /*
01604      * All we have to do is pop the stack --- the actions/notifies made in
01605      * this subxact are no longer interesting, and the space will be freed
01606      * when CurTransactionContext is recycled.
01607      *
01608      * This routine could be called more than once at a given nesting level if
01609      * there is trouble during subxact abort.  Avoid dumping core by using
01610      * GetCurrentTransactionNestLevel as the indicator of how far we need to
01611      * prune the list.
01612      */
01613     while (list_length(upperPendingActions) > my_level - 2)
01614     {
01615         pendingActions = (List *) linitial(upperPendingActions);
01616         upperPendingActions = list_delete_first(upperPendingActions);
01617     }
01618 
01619     while (list_length(upperPendingNotifies) > my_level - 2)
01620     {
01621         pendingNotifies = (List *) linitial(upperPendingNotifies);
01622         upperPendingNotifies = list_delete_first(upperPendingNotifies);
01623     }
01624 }
01625 
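/*
 * Illustration (not part of the original file): how the three subxact
 * hooks above manage the pending-notification stack.
 *
 *	begin;
 *	notify a;           -- pendingNotifies = (a)
 *	savepoint s1;       -- AtSubStart_Notify pushes (a); list now NIL
 *	notify b;           -- pendingNotifies = (b)
 *	rollback to s1;     -- AtSubAbort_Notify pops the stack: b is discarded
 *	notify c;           -- pendingNotifies = (c)
 *	commit;             -- AtSubCommit_Notify has merged the surviving
 *	                    -- lists; listeners receive a and c, never b
 */
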
01626 /*
01627  * HandleNotifyInterrupt
01628  *
01629  *      This is called when PROCSIG_NOTIFY_INTERRUPT is received.
01630  *
01631  *      If we are idle (notifyInterruptEnabled is set), we can safely invoke
01632  *      ProcessIncomingNotify directly.  Otherwise, just set a flag
01633  *      to do it later.
01634  */
01635 void
01636 HandleNotifyInterrupt(void)
01637 {
01638     /*
01639      * Note: this is called by a SIGNAL HANDLER.  You must be very wary of
01640      * what you do here.  Some helpful soul had this routine sprinkled with
01641      * TPRINTFs, which would likely lead to corruption of the stdio buffers
01642      * if they were ever turned on.
01643      */
01644 
01645     /* Don't joggle the elbow of proc_exit */
01646     if (proc_exit_inprogress)
01647         return;
01648 
01649     if (notifyInterruptEnabled)
01650     {
01651         bool        save_ImmediateInterruptOK = ImmediateInterruptOK;
01652 
01653         /*
01654          * We may be called while ImmediateInterruptOK is true; turn it off
01655          * while messing with the NOTIFY state.  (We would have to save and
01656          * restore it anyway, because PGSemaphore operations inside
01657          * ProcessIncomingNotify() might reset it.)
01658          */
01659         ImmediateInterruptOK = false;
01660 
01661         /*
01662          * I'm not sure whether some flavors of Unix might allow another
01663          * SIGUSR1 occurrence to recursively interrupt this routine. To cope
01664          * with the possibility, we do the same sort of dance that
01665          * EnableNotifyInterrupt must do --- see that routine for comments.
01666          */
01667         notifyInterruptEnabled = 0;     /* disable any recursive signal */
01668         notifyInterruptOccurred = 1;    /* do at least one iteration */
01669         for (;;)
01670         {
01671             notifyInterruptEnabled = 1;
01672             if (!notifyInterruptOccurred)
01673                 break;
01674             notifyInterruptEnabled = 0;
01675             if (notifyInterruptOccurred)
01676             {
01677                 /* Here, it is finally safe to do stuff. */
01678                 if (Trace_notify)
01679                     elog(DEBUG1, "HandleNotifyInterrupt: perform async notify");
01680 
01681                 ProcessIncomingNotify();
01682 
01683                 if (Trace_notify)
01684                     elog(DEBUG1, "HandleNotifyInterrupt: done");
01685             }
01686         }
01687 
01688         /*
01689          * Restore ImmediateInterruptOK, and check for interrupts if needed.
01690          */
01691         ImmediateInterruptOK = save_ImmediateInterruptOK;
01692         if (save_ImmediateInterruptOK)
01693             CHECK_FOR_INTERRUPTS();
01694     }
01695     else
01696     {
01697         /*
01698          * In this path it is NOT SAFE to do much of anything, except this:
01699          */
01700         notifyInterruptOccurred = 1;
01701     }
01702 }
01703 
01704 /*
01705  * EnableNotifyInterrupt
01706  *
01707  *      This is called by the PostgresMain main loop just before waiting
01708  *      for a frontend command.  If we are truly idle (ie, *not* inside
01709  *      a transaction block), then process any pending inbound notifies,
01710  *      and enable the signal handler to process future notifies directly.
01711  *
01712  *      NOTE: the signal handler starts out disabled, and stays so until
01713  *      PostgresMain calls this the first time.
01714  */
01715 void
01716 EnableNotifyInterrupt(void)
01717 {
01718     if (IsTransactionOrTransactionBlock())
01719         return;                 /* not really idle */
01720 
01721     /*
01722      * This code is tricky because we are communicating with a signal handler
01723      * that could interrupt us at any point.  If we just checked
01724      * notifyInterruptOccurred and then set notifyInterruptEnabled, we could
01725      * fail to respond promptly to a signal that happens in between those two
01726      * steps.  (A very small time window, perhaps, but Murphy's Law says you
01727      * can hit it...)  Instead, we first set the enable flag, then test the
01728      * occurred flag.  If we see an unserviced interrupt has occurred, we
01729      * re-clear the enable flag before going off to do the service work. (That
01730      * prevents re-entrant invocation of ProcessIncomingNotify() if another
01731      * interrupt occurs.) If an interrupt comes in between the setting and
01732      * clearing of notifyInterruptEnabled, then it will have done the service
01733      * work and left notifyInterruptOccurred zero, so we have to check again
01734      * after clearing enable.  The whole thing has to be in a loop in case
01735      * another interrupt occurs while we're servicing the first. Once we get
01736      * out of the loop, enable is set and we know there is no unserviced
01737      * interrupt.
01738      *
01739      * NB: an overenthusiastic optimizing compiler could easily break this
01740      * code. Hopefully, they all understand what "volatile" means these days.
01741      */
01742     for (;;)
01743     {
01744         notifyInterruptEnabled = 1;
01745         if (!notifyInterruptOccurred)
01746             break;
01747         notifyInterruptEnabled = 0;
01748         if (notifyInterruptOccurred)
01749         {
01750             if (Trace_notify)
01751                 elog(DEBUG1, "EnableNotifyInterrupt: perform async notify");
01752 
01753             ProcessIncomingNotify();
01754 
01755             if (Trace_notify)
01756                 elog(DEBUG1, "EnableNotifyInterrupt: done");
01757         }
01758     }
01759 }
01760 
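/*
 * Illustration (not part of the original file): the window that the loop
 * above closes.  A naive "test, then enable" sequence could strand an
 * interrupt:
 *
 *	Main loop:                          Signal handler:
 *	if (!notifyInterruptOccurred) ...ok
 *	                                    notifyInterruptOccurred = 1;
 *	                                    (enable flag still 0, so the
 *	                                     handler only sets the flag)
 *	notifyInterruptEnabled = 1;
 *	...wait for a frontend command...   <- occurred flag never serviced
 *
 * Setting the enable flag first and re-testing the occurred flag after
 * each clearing guarantees that, on exit, enable is set and no unserviced
 * interrupt remains.
 */
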
01761 /*
01762  * DisableNotifyInterrupt
01763  *
01764  *      This is called by the PostgresMain main loop just after receiving
01765  *      a frontend command.  Signal handler execution of inbound notifies
01766  *      is disabled until the next EnableNotifyInterrupt call.
01767  *
01768  *      The PROCSIG_CATCHUP_INTERRUPT signal handler also needs to call this,
01769  *      so as to prevent conflicts if one signal interrupts the other.  So we
01770  *      must return the previous state of the flag.
01771  */
01772 bool
01773 DisableNotifyInterrupt(void)
01774 {
01775     bool        result = (notifyInterruptEnabled != 0);
01776 
01777     notifyInterruptEnabled = 0;
01778 
01779     return result;
01780 }
01781 
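/*
 * Illustrative call pattern (not part of the original file), reconstructed
 * from the comments above; the actual PostgresMain code differs in detail:
 *
 *	for (;;)
 *	{
 *		EnableNotifyInterrupt();	(idle: service pending, then enable)
 *		... wait for and read a frontend command ...
 *		DisableNotifyInterrupt();	(busy: handler may only set the flag)
 *		... execute the command ...
 *	}
 */
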
01782 /*
01783  * Read all pending notifications from the queue, and deliver appropriate
01784  * ones to my frontend.  Stop when we reach queue head or an uncommitted
01785  * notification.
01786  */
01787 static void
01788 asyncQueueReadAllNotifications(void)
01789 {
01790     QueuePosition pos;
01791     QueuePosition oldpos;
01792     QueuePosition head;
01793     bool        advanceTail;
01794 
01795     /* page_buffer must be adequately aligned, so use a union */
01796     union
01797     {
01798         char        buf[QUEUE_PAGESIZE];
01799         AsyncQueueEntry align;
01800     }           page_buffer;
01801 
01802     /* Fetch current state */
01803     LWLockAcquire(AsyncQueueLock, LW_SHARED);
01804     /* Assert checks that we have a valid state entry */
01805     Assert(MyProcPid == QUEUE_BACKEND_PID(MyBackendId));
01806     pos = oldpos = QUEUE_BACKEND_POS(MyBackendId);
01807     head = QUEUE_HEAD;
01808     LWLockRelease(AsyncQueueLock);
01809 
01810     if (QUEUE_POS_EQUAL(pos, head))
01811     {
01812         /* Nothing to do, we have read all notifications already. */
01813         return;
01814     }
01815 
01816     /*----------
01817      * Note that we deliver everything that we see in the queue and that
01818      * matches our _current_ listening state.
01819      * In particular, we do not take different commit times into account.
01820      * Consider the following example:
01821      *
01822      * Backend 1:                    Backend 2:
01823      *
01824      * transaction starts
01825      * NOTIFY foo;
01826      * commit starts
01827      *                               transaction starts
01828      *                               LISTEN foo;
01829      *                               commit starts
01830      * commit to clog
01831      *                               commit to clog
01832      *
01833      * It could happen that backend 2 sees the notification from backend 1 in
01834      * the queue.  Even though the notifying transaction committed before
01835      * the listening transaction, we still deliver the notification.
01836      *
01837      * The idea is that an additional notification does no harm; we just
01838      * need to make sure that we do not miss one.
01839      *
01840      * It is possible that we fail while trying to send a message to our
01841      * frontend (for example, because of encoding conversion failure).
01842      * If that happens it is critical that we not try to send the same
01843      * message over and over again.  Therefore, we place a PG_TRY block
01844      * here that will forcibly advance our backend position before we lose
01845      * control to an error.  (We could alternatively retake AsyncQueueLock
01846      * and move the position before handling each individual message, but
01847      * that seems like too much lock traffic.)
01848      *----------
01849      */
01850     PG_TRY();
01851     {
01852         bool        reachedStop;
01853 
01854         do
01855         {
01856             int         curpage = QUEUE_POS_PAGE(pos);
01857             int         curoffset = QUEUE_POS_OFFSET(pos);
01858             int         slotno;
01859             int         copysize;
01860 
01861             /*
01862              * We copy the data from SLRU into a local buffer, so as to avoid
01863              * holding the AsyncCtlLock while we are examining the entries and
01864              * possibly transmitting them to our frontend.  Copy only the part
01865              * of the page we will actually inspect.
01866              */
01867             slotno = SimpleLruReadPage_ReadOnly(AsyncCtl, curpage,
01868                                                 InvalidTransactionId);
01869             if (curpage == QUEUE_POS_PAGE(head))
01870             {
01871                 /* we only want to read as far as head */
01872                 copysize = QUEUE_POS_OFFSET(head) - curoffset;
01873                 if (copysize < 0)
01874                     copysize = 0;       /* just for safety */
01875             }
01876             else
01877             {
01878                 /* fetch all the rest of the page */
01879                 copysize = QUEUE_PAGESIZE - curoffset;
01880             }
01881             memcpy(page_buffer.buf + curoffset,
01882                    AsyncCtl->shared->page_buffer[slotno] + curoffset,
01883                    copysize);
01884             /* Release lock that we got from SimpleLruReadPage_ReadOnly() */
01885             LWLockRelease(AsyncCtlLock);
01886 
01887             /*
01888              * Process messages up to the stop position, end of page, or an
01889              * uncommitted message.
01890              *
01891              * Our stop position is what we found to be the head's position
01892              * when we entered this function. It might have changed already.
01893              * But if it has, we will receive (or have already received and
01894              * queued) another signal and come here again.
01895              *
01896              * We are not holding AsyncQueueLock here!  The queue can only
01897              * extend beyond the head pointer (see above), and we leave our
01898              * backend's pointer where it is so nobody will truncate or
01899              * rewrite pages under us.  In particular, we don't want to hold
01900              * a lock while sending the notifications to the frontend.
01901              */
01902             reachedStop = asyncQueueProcessPageEntries(&pos, head,
01903                                                        page_buffer.buf);
01904         } while (!reachedStop);
01905     }
01906     PG_CATCH();
01907     {
01908         /* Update shared state */
01909         LWLockAcquire(AsyncQueueLock, LW_SHARED);
01910         QUEUE_BACKEND_POS(MyBackendId) = pos;
01911         advanceTail = QUEUE_POS_EQUAL(oldpos, QUEUE_TAIL);
01912         LWLockRelease(AsyncQueueLock);
01913 
01914         /* If we were the laziest backend, try to advance the tail pointer */
01915         if (advanceTail)
01916             asyncQueueAdvanceTail();
01917 
01918         PG_RE_THROW();
01919     }
01920     PG_END_TRY();
01921 
01922     /* Update shared state */
01923     LWLockAcquire(AsyncQueueLock, LW_SHARED);
01924     QUEUE_BACKEND_POS(MyBackendId) = pos;
01925     advanceTail = QUEUE_POS_EQUAL(oldpos, QUEUE_TAIL);
01926     LWLockRelease(AsyncQueueLock);
01927 
01928     /* If we were the laziest backend, try to advance the tail pointer */
01929     if (advanceTail)
01930         asyncQueueAdvanceTail();
01931 }
01932 
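/*
 * Side note (illustration only): the union declared at the top of the
 * function above is a standard C idiom for obtaining an aligned byte
 * buffer.  A bare char array guarantees only byte alignment, but the
 * entries are later read through an AsyncQueueEntry pointer:
 *
 *	union
 *	{
 *		char			buf[QUEUE_PAGESIZE];	(bytes are memcpy'd here)
 *		AsyncQueueEntry align;					(forces the alignment)
 *	}			page_buffer;
 *
 * page_buffer.buf thus starts on a boundary suitable for AsyncQueueEntry,
 * making the casts in asyncQueueProcessPageEntries() safe.
 */
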
01933 /*
01934  * Fetch notifications from the shared queue, beginning at position current,
01935  * and deliver relevant ones to my frontend.
01936  *
01937  * The current page must have been fetched into page_buffer from shared
01938  * memory.  (We could access the page right in shared memory, but that
01939  * would imply holding the AsyncCtlLock throughout this routine.)
01940  *
01941  * We stop if we reach the "stop" position, or reach a notification from an
01942  * uncommitted transaction, or reach the end of the page.
01943  *
01944  * The function returns true once we have reached the stop position or an
01945  * uncommitted notification, and false if we have finished with the page.
01946  * In other words: once it returns true there is no need to look further.
01947  * The QueuePosition *current is advanced past all processed messages.
01948  */
01949 static bool
01950 asyncQueueProcessPageEntries(QueuePosition *current,
01951                              QueuePosition stop,
01952                              char *page_buffer)
01953 {
01954     bool        reachedStop = false;
01955     bool        reachedEndOfPage;
01956     AsyncQueueEntry *qe;
01957 
01958     do
01959     {
01960         QueuePosition thisentry = *current;
01961 
01962         if (QUEUE_POS_EQUAL(thisentry, stop))
01963             break;
01964 
01965         qe = (AsyncQueueEntry *) (page_buffer + QUEUE_POS_OFFSET(thisentry));
01966 
01967         /*
01968          * Advance *current over this message, possibly to the next page. As
01969          * noted in the comments for asyncQueueReadAllNotifications, we must
01970          * do this before possibly failing while processing the message.
01971          */
01972         reachedEndOfPage = asyncQueueAdvance(current, qe->length);
01973 
01974         /* Ignore messages destined for other databases */
01975         if (qe->dboid == MyDatabaseId)
01976         {
01977             if (TransactionIdDidCommit(qe->xid))
01978             {
01979                 /* qe->data is the null-terminated channel name */
01980                 char       *channel = qe->data;
01981 
01982                 if (IsListeningOn(channel))
01983                 {
01984                     /* payload follows channel name */
01985                     char       *payload = qe->data + strlen(channel) + 1;
01986 
01987                     NotifyMyFrontEnd(channel, payload, qe->srcPid);
01988                 }
01989             }
01990             else if (TransactionIdDidAbort(qe->xid))
01991             {
01992                 /*
01993                  * If the source transaction aborted, we just ignore its
01994                  * notifications.
01995                  */
01996             }
01997             else
01998             {
01999                 /*
02000                  * The transaction has neither committed nor aborted so far,
02001                  * so we can't process its message yet.  Break out of the
02002                  * loop, but first back up *current so we will reprocess the
02003                  * message next time.  (Note: it is unlikely but not
02004                  * impossible for TransactionIdDidCommit to fail, so we can't
02005                  * really avoid this advance-then-back-up behavior when
02006                  * dealing with an uncommitted message.)
02007                  */
02008                 *current = thisentry;
02009                 reachedStop = true;
02010                 break;
02011             }
02012         }
02013 
02014         /* Loop back if we're not at end of page */
02015     } while (!reachedEndOfPage);
02016 
02017     if (QUEUE_POS_EQUAL(*current, stop))
02018         reachedStop = true;
02019 
02020     return reachedStop;
02021 }
02022 
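/*
 * Illustration (not part of the original file): the layout of qe->data as
 * consumed above, for a hypothetical NOTIFY foo, 'bar':
 *
 *	qe->data:  'f' 'o' 'o' '\0' 'b' 'a' 'r' '\0'
 *	channel  = qe->data                          -> "foo"
 *	payload  = qe->data + strlen(channel) + 1    -> "bar"
 *
 * A NOTIFY issued without a payload is stored with an empty payload
 * string, so the second terminating NUL is always present.
 */
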
02023 /*
02024  * Advance the shared queue tail variable to the minimum of all the
02025  * per-backend tail pointers.  Truncate pg_notify space if possible.
02026  */
02027 static void
02028 asyncQueueAdvanceTail(void)
02029 {
02030     QueuePosition min;
02031     int         i;
02032     int         oldtailpage;
02033     int         newtailpage;
02034     int         boundary;
02035 
02036     LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
02037     min = QUEUE_HEAD;
02038     for (i = 1; i <= MaxBackends; i++)
02039     {
02040         if (QUEUE_BACKEND_PID(i) != InvalidPid)
02041             min = QUEUE_POS_MIN(min, QUEUE_BACKEND_POS(i));
02042     }
02043     oldtailpage = QUEUE_POS_PAGE(QUEUE_TAIL);
02044     QUEUE_TAIL = min;
02045     LWLockRelease(AsyncQueueLock);
02046 
02047     /*
02048      * We can truncate something if the global tail advanced across an SLRU
02049      * segment boundary.
02050      *
02051      * XXX it might be better to truncate only once every several segments, to
02052      * reduce the number of directory scans.
02053      */
02054     newtailpage = QUEUE_POS_PAGE(min);
02055     boundary = newtailpage - (newtailpage % SLRU_PAGES_PER_SEGMENT);
02056     if (asyncQueuePagePrecedes(oldtailpage, boundary))
02057     {
02058         /*
02059          * SimpleLruTruncate() will ask for AsyncCtlLock but will also release
02060          * the lock again.
02061          */
02062         SimpleLruTruncate(AsyncCtl, newtailpage);
02063     }
02064 }
02065 
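/*
 * Worked example (illustration only; SLRU_PAGES_PER_SEGMENT is assumed to
 * be 32, its usual value in slru.h).  If the new global tail lands on page
 * 100:
 *
 *	boundary = 100 - (100 % 32) = 96
 *
 * i.e. the first page of the segment containing the tail.  If the old
 * tail page was, say, 90 -- which precedes 96 -- then at least one whole
 * segment file in pg_notify/ is dead space and SimpleLruTruncate() can
 * remove it.
 */
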
02066 /*
02067  * ProcessIncomingNotify
02068  *
02069  *      Deal with arriving NOTIFYs from other backends.
02070  *      This is called either directly from the PROCSIG_NOTIFY_INTERRUPT
02071  *      signal handler, or the next time control reaches the outer idle loop.
02072  *      Scan the queue for arriving notifications and report them to my front
02073  *      end.
02074  *
02075  *      NOTE: since we are outside any transaction, we must create our own.
02076  */
02077 static void
02078 ProcessIncomingNotify(void)
02079 {
02080     bool        catchup_enabled;
02081 
02082     /* We *must* reset the flag */
02083     notifyInterruptOccurred = 0;
02084 
02085     /* Do nothing else if we aren't actively listening */
02086     if (listenChannels == NIL)
02087         return;
02088 
02089     /* Must prevent catchup interrupt while I am running */
02090     catchup_enabled = DisableCatchupInterrupt();
02091 
02092     if (Trace_notify)
02093         elog(DEBUG1, "ProcessIncomingNotify");
02094 
02095     set_ps_display("notify interrupt", false);
02096 
02097     /*
02098      * We must run asyncQueueReadAllNotifications inside a transaction, else
02099      * bad things happen if it gets an error.
02100      */
02101     StartTransactionCommand();
02102 
02103     asyncQueueReadAllNotifications();
02104 
02105     CommitTransactionCommand();
02106 
02107     /*
02108      * Must flush the notify messages to ensure frontend gets them promptly.
02109      */
02110     pq_flush();
02111 
02112     set_ps_display("idle", false);
02113 
02114     if (Trace_notify)
02115         elog(DEBUG1, "ProcessIncomingNotify: done");
02116 
02117     if (catchup_enabled)
02118         EnableCatchupInterrupt();
02119 }
02120 
02121 /*
02122  * Send NOTIFY message to my front end.
02123  */
02124 static void
02125 NotifyMyFrontEnd(const char *channel, const char *payload, int32 srcPid)
02126 {
02127     if (whereToSendOutput == DestRemote)
02128     {
02129         StringInfoData buf;
02130 
02131         pq_beginmessage(&buf, 'A');
02132         pq_sendint(&buf, srcPid, sizeof(int32));
02133         pq_sendstring(&buf, channel);
02134         if (PG_PROTOCOL_MAJOR(FrontendProtocol) >= 3)
02135             pq_sendstring(&buf, payload);
02136         pq_endmessage(&buf);
02137 
02138         /*
02139          * NOTE: we do not do pq_flush() here.  For a self-notify, it will
02140          * happen at the end of the transaction, and for incoming notifies
02141          * ProcessIncomingNotify will do it after finding all the notifies.
02142          */
02143     }
02144     else
02145         elog(INFO, "NOTIFY for \"%s\" payload \"%s\"", channel, payload);
02146 }
02147 
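/*
 * Illustration (not part of the original file): the NotificationResponse
 * ('A') message assembled above, as sent on the wire under protocol 3 for
 * channel "foo", payload "bar", sender PID 4242:
 *
 *	byte1   'A'             message type
 *	int32   <length>        filled in when the message is sent
 *	int32   4242            srcPid
 *	string  "foo\0"         channel name
 *	string  "bar\0"         payload (omitted for pre-3.0 protocols, per
 *	                        the PG_PROTOCOL_MAJOR test above)
 */
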
02148 /* Does pendingNotifies include the given channel/payload? */
02149 static bool
02150 AsyncExistsPendingNotify(const char *channel, const char *payload)
02151 {
02152     ListCell   *p;
02153     Notification *n;
02154 
02155     if (pendingNotifies == NIL)
02156         return false;
02157 
02158     if (payload == NULL)
02159         payload = "";
02160 
02161     /*----------
02162      * We need to append new elements to the end of the list in order to
02163      * preserve the order.  On the other hand, we'd like to check the list
02164      * backwards in order to make duplicate-elimination a tad faster when the
02165      * same condition is signaled many times in a row.  So as a compromise we
02166      * check the tail element first, which we can access directly.  If it
02167      * doesn't match, we check the whole list.
02168      *
02169      * As we are not checking our parents' lists, we can still get duplicates
02170      * in combination with subtransactions, like in:
02171      *
02172      * begin;
02173      * notify foo '1';
02174      * savepoint foo;
02175      * notify foo '1';
02176      * commit;
02177      *----------
02178      */
02179     n = (Notification *) llast(pendingNotifies);
02180     if (strcmp(n->channel, channel) == 0 &&
02181         strcmp(n->payload, payload) == 0)
02182         return true;
02183 
02184     foreach(p, pendingNotifies)
02185     {
02186         n = (Notification *) lfirst(p);
02187 
02188         if (strcmp(n->channel, channel) == 0 &&
02189             strcmp(n->payload, payload) == 0)
02190             return true;
02191     }
02192 
02193     return false;
02194 }
02195 
02196 /* Clear the pendingActions and pendingNotifies lists. */
02197 static void
02198 ClearPendingActionsAndNotifies(void)
02199 {
02200     /*
02201      * We used to have to explicitly deallocate the list members and nodes,
02202      * because they were malloc'd.  Now, since we know they are palloc'd in
02203      * CurTransactionContext, we need not do that --- they'll go away
02204      * automatically at transaction exit.  We need only reset the list head
02205      * pointers.
02206      */
02207     pendingActions = NIL;
02208     pendingNotifies = NIL;
02209 }