There are a few routine maintenance chores that must be performed on
a regular basis to keep an EnterpriseDB
server running smoothly. The tasks discussed here are repetitive
in nature and can easily be automated using standard Unix tools such
as cron scripts. But it is the database
administrator's responsibility to set up appropriate scripts, and to
check that they execute successfully.
One obvious maintenance task is creation of backup copies of the data on a
regular schedule. Without a recent backup, you have no chance of recovery
after a catastrophe (disk failure, fire, mistakenly dropping a critical
table, etc.). The backup and recovery mechanisms available in
EnterpriseDB are discussed at length in
Chapter 36.
Something else that might need periodic attention is log file management.
This is discussed in Section 35.3.
EnterpriseDB's VACUUM command
must be run on a regular basis for several reasons:
To recover disk space occupied by updated or deleted
rows.
To update data statistics used by the
EnterpriseDB query planner.
To protect against loss of very old data due to
transaction ID wraparound.
The frequency and scope of the VACUUM operations performed for each of
these reasons will vary depending on the needs of each site.
Therefore, database administrators must understand these issues and
develop an appropriate maintenance strategy. This section concentrates
on explaining the high-level issues; for details about command syntax
and so on, see the VACUUM command reference page.
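For illustration, the basic invocations look like this; mytable is a
placeholder name, not a table from any real schema:

VACUUM;                   -- vacuum every table in the current database
VACUUM mytable;           -- vacuum a single table
VACUUM ANALYZE mytable;   -- vacuum and update planner statistics in one pass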
In normal EnterpriseDB operation, an
UPDATE or DELETE of a row does not
immediately remove the old version of the row.
This approach is necessary to gain the benefits of multiversion
concurrency control (see MVCC): the row version
must not be deleted while it is still potentially visible to other
transactions. But eventually, an outdated or deleted row version is no
longer of interest to any transaction. The space it occupies must be
reclaimed for reuse by new rows, to avoid infinite growth of disk
space requirements. This is done by running VACUUM.
Clearly, a table that receives frequent updates or deletes will need
to be vacuumed more often than one that is seldom updated. It
may be useful to set up periodic cron tasks that
VACUUM only selected tables, skipping tables that are known not to
change often. This is only likely to be helpful if you have both
large heavily-updated tables and large seldom-updated tables - the
extra cost of vacuuming a small table isn't enough to be worth
worrying about.
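As a sketch, a nightly cron script for such a site might issue
per-table commands along these lines (the table names here are
purely hypothetical):

-- vacuum only the tables that receive heavy update/delete traffic
VACUUM orders;
VACUUM order_lines;
-- the large but rarely-modified history table is deliberately skipped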
There are two variants of the VACUUM
command. The first form, known as "lazy vacuum" or
just VACUUM, marks expired data in tables and
indexes for future reuse; it does not attempt
to reclaim the space used by this expired data
unless the space is at the end of the table and an exclusive table
lock can be easily obtained. Unused space at the start or middle
of the file does not result in the file being shortened and space
returned to the operating system. This variant of VACUUM
can be run concurrently with normal database operations.
The second form is the VACUUM FULL
command. This uses a more aggressive algorithm for reclaiming the
space consumed by expired row versions. Any space that is freed by
VACUUM FULL is immediately returned to the
operating system. Unfortunately, this variant of the
VACUUM command acquires an exclusive lock on
each table while VACUUM FULL is processing
it. Therefore, frequently using VACUUM FULL can
have an extremely negative effect on the performance of concurrent
database queries.
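To make the contrast concrete, the two variants are invoked almost
identically; only the locking and space-reclamation behavior differs
(mytable is again a placeholder):

VACUUM mytable;        -- lazy: runs alongside normal reads and writes
VACUUM FULL mytable;   -- compacts the table, but holds an exclusive lock meanwhile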
The standard form of VACUUM is best used with the goal
of maintaining a fairly level steady-state usage of disk space. If
you need to return disk space to the operating system you can use
VACUUM FULL - but what's the point of releasing disk
space that will only have to be allocated again soon? Moderately
frequent standard VACUUM runs are a better approach
than infrequent VACUUM FULL runs for maintaining
heavily-updated tables.
Recommended practice for most sites is to schedule a database-wide
VACUUM once a day at a low-usage time of day,
supplemented by more frequent vacuuming of heavily-updated tables
if necessary. In fact, some installations with an extremely high
rate of data modification VACUUM some tables as
often as once every five minutes. (If you have multiple databases
in a cluster, don't forget to VACUUM each one;
the program vacuumdb may be helpful.)
VACUUM FULL is recommended for cases where you know
you have deleted the majority of rows in a table, so that the
steady-state size of the table can be shrunk substantially with
VACUUM FULL's more aggressive approach. Use plain
VACUUM, not VACUUM FULL, for routine
vacuuming for space recovery.
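As an illustration of that exceptional case, suppose most rows of a
hypothetical log_archive table have just been removed:

DELETE FROM log_archive WHERE logged_at < '2003-01-01';
VACUUM FULL log_archive;   -- shrink the file and return the space to the operating system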
If you have a table whose contents are deleted on a periodic
basis, consider doing it with TRUNCATE rather
than using DELETE followed by
VACUUM. TRUNCATE removes the
entire content of the table immediately, without requiring a
subsequent VACUUM or VACUUM
FULL to reclaim the now-unused disk space.
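The difference looks like this for a hypothetical pending_jobs table
that is emptied out periodically:

-- slower: leaves dead rows that a later VACUUM must reclaim
DELETE FROM pending_jobs;
VACUUM pending_jobs;

-- faster: frees the disk space immediately
TRUNCATE pending_jobs;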
The EnterpriseDB query planner relies on
statistical information about the contents of tables in order to
generate good plans for queries. These statistics are gathered by
the ANALYZE command, which can be invoked by itself or
as an optional step in VACUUM. It is important to have
reasonably accurate statistics; otherwise, poor plan choices may
degrade database performance.
As with vacuuming for space recovery, frequent updates of statistics
are more useful for heavily-updated tables than for seldom-updated
ones. But even for a heavily-updated table, there may be no need for
statistics updates if the statistical distribution of the data is
not changing much. A simple rule of thumb is to think about how much
the minimum and maximum values of the columns in the table change.
For example, a timestamp column that contains the time
of row update will have a constantly-increasing maximum value as
rows are added and updated; such a column will probably need more
frequent statistics updates than, say, a column containing URLs for
pages accessed on a website. The URL column may receive changes just
as often, but the statistical distribution of its values probably
changes relatively slowly.
It is possible to run ANALYZE on specific tables and even
just specific columns of a table, so the flexibility exists to update some
statistics more frequently than others if your application requires it.
In practice, however, the usefulness of this feature is doubtful.
ANALYZE is a fairly fast operation even on large tables,
because it uses a statistical random sampling of the rows of a table
rather than reading every single row. So it's probably much simpler
to just run it over the whole database every so often.
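For reference, all of these granularities are valid; mytable and its
updated_at column are placeholders:

ANALYZE;                        -- analyze every table in the current database
ANALYZE mytable;                -- analyze a single table
ANALYZE mytable (updated_at);   -- analyze only the listed columns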
Tip: Although per-column tweaking of ANALYZE frequency may not be
very productive, you may well find it worthwhile to do per-column
adjustment of the level of detail of the statistics collected by
ANALYZE. Columns that are heavily used in WHERE clauses
and have highly irregular data distributions may require a finer-grained
data histogram than other columns. See ALTER TABLE SET
STATISTICS.
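As a sketch, with a hypothetical orders table whose status column is
heavily filtered on, you might raise that column's statistics target
above the default:

ALTER TABLE orders ALTER COLUMN status SET STATISTICS 200;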
Recommended practice for most sites is to schedule a database-wide
ANALYZE once a day at a low-usage time of day; this can
usefully be combined with a nightly VACUUM. However,
sites with relatively slowly changing table statistics may find that
this is overkill, and that less-frequent ANALYZE runs
are sufficient.
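The combined nightly run is a single command per database:

VACUUM ANALYZE;   -- vacuum and gather planner statistics for every table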
EnterpriseDB's MVCC transaction semantics
depend on being able to compare transaction ID (XID)
numbers: a row version with an insertion XID greater than the current
transaction's XID is "in the future" and should not be visible
to the current transaction. But since transaction IDs have limited size
(32 bits at this writing) a cluster that runs for a long time (more
than 4 billion transactions) will suffer transaction ID
wraparound: the XID counter wraps around to zero, and all of a sudden
transactions that were in the past appear to be in the future - which
means their outputs become invisible. In short, catastrophic data loss.
(Actually the data is still there, but that's cold comfort if you can't
get at it.)
The only defense against XID wraparound is maintenance;
every table in the database must
be vacuumed at least once every billion transactions.
In practice this isn't an onerous requirement, but since the
consequences of failing to meet it can be complete data loss (not
just wasted disk space or slow performance), some special provisions
have been made to help database administrators keep track of the
time since the last VACUUM. The remainder of this
section gives the details.
EnterpriseDB's approach to XID comparison distinguishes two special XIDs,
numbers 1 and 2 (BootstrapXID and
FrozenXID). These two XIDs are always considered older
than every normal XID. Normal XIDs (those greater than 2) are
compared using modulo-2^31 arithmetic. This means
that for every normal XID, there are two billion XIDs that are
"older" and two billion that are "newer"; another
way to say it is that the normal XID space is circular with no
endpoint. Therefore, once a row version has been created with a particular
normal XID, the row version will appear to be "in the past" for
the next two billion transactions, no matter which normal XID we are
talking about. If the row version still exists after more than two billion
transactions, it will suddenly appear to be in the future. To
prevent data loss, old row versions must be reassigned the XID
FrozenXID sometime before they reach the
two-billion-transactions-old mark. Once they are assigned this
special XID, they will appear to be "in the past" to all
normal transactions regardless of wraparound issues, and so such
row versions will be good until deleted, no matter how long that is. This
reassignment of XID is handled by VACUUM.
VACUUM's normal policy is to reassign FrozenXID
to any row version with a normal XID more than one billion transactions in the
past. This policy preserves the original insertion XID until it is not
likely to be of interest anymore. (In fact, most row versions will probably
live and die without ever being "frozen".) With this policy,
the maximum safe interval between VACUUM runs on any table
is exactly one billion transactions: if you wait longer, it's possible
that a row version that was not quite old enough to be reassigned last time
is now more than two billion transactions old and has wrapped around
into the future - i.e., is lost to you. (Of course, it'll reappear
after another two billion transactions, but that's no help.)
Since periodic VACUUM runs are needed anyway for the reasons
described earlier, it's unlikely that any table would not be vacuumed
for as long as a billion transactions. But to help administrators ensure
this constraint is met, VACUUM stores transaction ID
statistics in the system table pg_database. In particular,
the datfrozenxid column of a database's
pg_database row is updated at the completion of any
database-wide VACUUM operation (i.e., VACUUM that does not
name a specific table). The value stored in this field is the freeze
cutoff XID that was used by that VACUUM command. All normal
XIDs older than this cutoff XID are guaranteed to have been replaced by
FrozenXID within that database. A convenient way to
examine this information is to execute the query
SELECT datname, age(datfrozenxid) FROM pg_database;
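Its output might look like this (the values shown are purely
illustrative):

  datname  |    age
-----------+------------
 template1 | 1075880164
 template0 |  533713463
 mydb      | 1075880164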
The age column measures the number of transactions from the
cutoff XID to the current transaction's XID.
With the standard freezing policy, the age column will start
at one billion for a freshly-vacuumed database. When the age
approaches two billion, the database must be vacuumed again to avoid
risk of wraparound failures. Recommended practice is to VACUUM each
database at least once every half-a-billion (500 million) transactions,
so as to provide plenty of safety margin. To help meet this rule,
each database-wide VACUUM automatically delivers a warning
if there are any pg_database entries showing an
age of more than 1.5 billion transactions, for example:
play=# VACUUM;
WARNING: some databases have not been vacuumed in 1613770184 transactions
HINT: Better vacuum them within 533713463 transactions, or you may have a wraparound failure.
VACUUM
VACUUM with the FREEZE option uses a more
aggressive freezing policy: row versions are frozen if they are old enough
to be considered good by all open transactions. In particular, if a
VACUUM FREEZE is performed in an otherwise-idle
database, it is guaranteed that all row versions in that
database will be frozen. Hence, as long as the database is not
modified in any way, it will not need subsequent vacuuming to avoid
transaction ID wraparound problems. This technique is used by
initdb to prepare the template0 database.
It should also be used to prepare any user-created databases that
are to be marked datallowconn = false in
pg_database, since there isn't any convenient way to
VACUUM a database that you can't connect to. Note that
VACUUM's automatic warning message about
unvacuumed databases will ignore pg_database entries
with datallowconn = false, so as to avoid
giving false warnings about these databases; therefore it's up to
you to ensure that such databases are frozen correctly.
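A sketch of that preparation sequence, for a hypothetical database
named proto: first freeze it from within, then disallow connections
from another database:

-- while connected to proto itself, with no other sessions active:
VACUUM FREEZE;

-- then, connected to some other database, as a superuser:
UPDATE pg_database SET datallowconn = false WHERE datname = 'proto';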