Indexes

Indexes are commonly used to enhance database performance. An index allows the database server to find and retrieve specific rows much faster than it could do without an index. However, indexes also add overhead to the database system as a whole, so they should not be overused.

Introduction to Indexes

The classic example of the need for an index involves a table similar to this:
CREATE TABLE test1 (
    id integer,
    content varchar
);
and the application requires a lot of queries of the form
SELECT content FROM test1 WHERE id = constant;
Ordinarily, the system would have to scan the entire test1 table row by row to find all matching entries. If there are a lot of rows in test1 and only a few rows (possibly zero or one) returned by the query, then this is clearly an inefficient method. If the system were instructed to maintain an index on the id column, then it could use a more efficient method for locating matching rows. For instance, it might only have to walk a few levels deep into a search tree.

A similar approach is used in most books of non-fiction: Terms and concepts that are frequently referenced by readers are collected in an alphabetic index at the end of the book. The interested reader can scan the index relatively quickly and flip to the appropriate page, without having to read the entire book to find the desired information. Just as it is the task of the author to anticipate the items that the readers are most likely to look up, it is the task of the database programmer to foresee which indexes would be of advantage to users.

The following command would be used to create the index on the id column, as discussed:
CREATE INDEX test1_id_index ON test1 (id);
The name test1_id_index can be chosen freely, but you should pick something that enables you to remember later what the index was for.

To remove an index, use the DROP INDEX command. Indexes can be added and removed from tables at any time.
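For example, the index created above could be removed with:
DROP INDEX test1_id_index;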

Once the index is created, no further intervention is required: the system will use the index when it thinks it would be more efficient than a sequential table scan. But you may have to run the VACUUM ANALYZE command regularly to update statistics to allow the query planner to make educated decisions. Also read the Red Hat Database Administrator and User's Guide for information about how to find out whether an index is used and when and why the planner may choose to not use an index.

Indexes can also benefit UPDATE and DELETE commands with search conditions. Note that a query or data manipulation command can use at most one index per table. Indexes can also be used in table join methods; thus, an index defined on a column that is part of a join condition can significantly speed up queries with joins.

When an index is created, it has to be kept synchronized with the table. This adds overhead to data manipulation operations. Therefore indexes that are non-essential or do not get used at all should be removed.

Index Types

PostgreSQL provides several index types: B-tree, R-tree, and Hash. Each index type is more appropriate for a particular query type because of the algorithm it uses. By default, the CREATE INDEX command will create a B-tree index, which fits the most common situations. In particular, the PostgreSQL query optimizer will consider using a B-tree index whenever an indexed column is involved in a comparison using one of these operators: <, <=, =, >=, >

R-tree indexes are especially suited for spatial data. To create an R-tree index, use a command of the form
CREATE INDEX name ON table USING RTREE (column);
The PostgreSQL query optimizer will consider using an R-tree index whenever an indexed column is involved in a comparison using one of these operators: <<, &<, &>, >>, @, ~=, && (Refer to the Section called Geometric Functions and Operators in Chapter 3 for the meaning of these operators.)

The query optimizer will consider using a hash index whenever an indexed column is involved in a comparison using the = operator.

A sequential scan will be used for comparisons involving the <> operator, because such a comparison typically matches most of the rows of a table and an index scan would not pay off. For the rare cases where this is not true, you can use a combination of < and > instead, since those comparisons can use an index. For instance, instead of:

x <> 'foo'

you can write:

x < 'foo' or x > 'foo'

The following command is used to create a hash index:
CREATE INDEX name ON table USING HASH (column);

Note

Because of the limited utility of hash indexes, a B-tree index should generally be preferred over a hash index. We do not have sufficient evidence that hash indexes are actually faster than B-trees even for = comparisons. Moreover, hash indexes require coarser locks; see the Red Hat Database Administrator and User's Guide.

The B-tree index is an implementation of Lehman-Yao high-concurrency B-trees. The R-tree index method implements standard R-trees using Guttman's quadratic split algorithm. The hash index is an implementation of Litwin's linear hashing. We mention the algorithms used solely to indicate that all of these access methods are fully dynamic and do not have to be optimized periodically (as is the case with, for example, static hash access methods).

Multi-Column Indexes

An index can be defined on more than one column. For example, if you have a table of this form:
CREATE TABLE test2 (
  major int,
  minor int,
  name varchar
);
(Say, you keep your /dev directory in a database...) and you frequently make queries like
SELECT name FROM test2 WHERE major = constant AND minor = constant;
then it may be appropriate to define an index on the columns major and minor together, for example,
CREATE INDEX test2_mm_idx ON test2 (major, minor);

Currently, only the B-tree and GiST implementations support multi-column indexes. Up to 16 columns may be specified. (This limit can be altered when building PostgreSQL; see the file config.h.)

The query optimizer can use a multi-column index for queries that involve the first n consecutive columns in the index (when used with appropriate operators), up to the total number of columns specified in the index definition. For example, an index on (a, b, c) can be used in queries involving all of a, b, and c, or in queries involving both a and b, or in queries involving only a, but not in other combinations. (In a query involving a and c the optimizer might choose to use the index for a only and treat c like an ordinary unindexed column.)
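To illustrate the three-column case (the table t and its columns are hypothetical):
CREATE INDEX t_abc_idx ON t (a, b, c);

-- can use the index on the leading columns a and b:
SELECT * FROM t WHERE a = 5 AND b = 10;

-- can use the index for column a only; c is treated as unindexed:
SELECT * FROM t WHERE a = 5 AND c = 20;

-- cannot use the index, because leading column a is not constrained:
SELECT * FROM t WHERE b = 10 AND c = 20;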

Multi-column indexes can only be used if the clauses involving the indexed columns are joined with AND. For instance,
SELECT name FROM test2 WHERE major = constant OR minor = constant;
cannot make use of the index test2_mm_idx defined above to look up both columns. (It can be used to look up only the major column, however.)

Multi-column indexes should be used sparingly. Most of the time, an index on a single column is sufficient and saves space and time. Indexes with more than three columns are almost certainly inappropriate.

Unique Indexes

Indexes may also be used to enforce uniqueness of a column's value, or the uniqueness of the combined values of more than one column.
CREATE UNIQUE INDEX name ON table (column [, ...]);
Only B-tree indexes can be declared unique.

When an index is declared unique, multiple table rows with equal indexed values will not be allowed. NULL values are not considered equal.
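As a sketch, using the test1 table from earlier (the index name is arbitrary):
CREATE UNIQUE INDEX test1_id_key ON test1 (id);

INSERT INTO test1 VALUES (1, 'first');
INSERT INTO test1 VALUES (1, 'second');     -- rejected: duplicate value for id
INSERT INTO test1 VALUES (NULL, 'third');   -- allowed
INSERT INTO test1 VALUES (NULL, 'fourth');  -- also allowed: NULLs are not considered equal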

PostgreSQL automatically creates unique indexes when a table is declared with a unique constraint or a primary key, on the columns that make up the primary key or unique columns (a multi-column index, if appropriate), to enforce that constraint. A unique index can be added to a table at any later time to add a unique constraint.

Note

The preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT. The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly.
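For instance, a sketch of the constraint form on the test1 table (the constraint name is arbitrary):
ALTER TABLE test1 ADD CONSTRAINT test1_id_unique UNIQUE (id);
PostgreSQL then creates the unique index needed to enforce the constraint itself.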

Functional Indexes

For a functional index, an index is defined on the result of a function applied to one or more columns of a single table. Functional indexes can be used to obtain fast access to data based on the result of function calls.

For example, a common way to do case-insensitive comparisons is to use the lower() function:
SELECT * FROM test1 WHERE lower(col1) = 'value';
In order for that query to be able to use an index, it has to be defined on the result of the lower(column) operation:
CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));

The function in the index definition can take more than one argument, but the arguments must be table columns, not constants. Functional indexes are always single-column (the single column being the function result), even if the function uses more than one input column; there cannot be multi-column indexes that contain function calls.

Tip

The restrictions mentioned in the previous paragraph can easily be worked around by defining custom functions to use in the index definition that call the desired function(s) internally.
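As a sketch (the persons table, its columns, and the wrapper function are all hypothetical; the iscachable attribute tells the planner that the function's result depends only on its arguments):
CREATE FUNCTION full_name_lower(varchar, varchar) RETURNS varchar
    AS 'SELECT lower($1 || '' '' || $2);'
    LANGUAGE 'sql' WITH (iscachable);

CREATE INDEX persons_full_name_idx ON persons (full_name_lower(first_name, last_name));

-- a query that can use the index:
SELECT * FROM persons WHERE full_name_lower(first_name, last_name) = 'john doe';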

Operator Classes

The next table of interest is pg_opclass. This table defines operator class names and input data types for each of the operator classes supported by a given index access method. The same class name can be used for several different access methods (for example, both B-tree and hash access methods have operator classes named oid_ops), but a separate pg_opclass row must appear for each access method. The OID of the pg_opclass row is used as a foreign key in other tables to associate specific operators and support routines with the operator class.

You need to add a row with your operator class name (for example, complex_abs_ops) to pg_opclass:

INSERT INTO pg_opclass (opcamid, opcname, opcintype, opcdefault, opckeytype)
    VALUES (
        (SELECT oid FROM pg_am WHERE amname = 'btree'),
        'complex_abs_ops',
        (SELECT oid FROM pg_type WHERE typname = 'complex'),
        true,
        0);

SELECT oid, *
    FROM pg_opclass
    WHERE opcname = 'complex_abs_ops';

  oid   | opcamid |     opcname     | opcintype | opcdefault | opckeytype
--------+---------+-----------------+-----------+------------+------------
 277975 |     403 | complex_abs_ops |    277946 | t          |          0
(1 row)

Note that the OID for your pg_opclass row will be different! Do not worry about this though. We will get this number from the system later just like we got the OID of the type here.

The above example assumes that you want to make this new operator class the default B-tree operator class for the complex data type. If you do not, just set opcdefault to false instead. opckeytype is not described here; it should always be zero for B-tree operator classes.

Keys

Subject: Re: [QUESTIONS] PRIMARY KEY | UNIQUE

        What is the difference between:

              PRIMARY KEY(fields,...) and
              UNIQUE (fields,...)

       - Is this an alias?
       - If PRIMARY KEY is already unique, then why
         is there another kind of key named UNIQUE?

A primary key is the field or combination of fields used to identify a specific row, in the way that a Social Security number identifies a person.

A simple UNIQUE combination of fields has nothing to do with identifying the row; it is merely an integrity constraint. For example, consider collections of links. Each collection is identified by a unique number, which is the primary key. This key is used in relations.

However, the application requires that each collection also have a unique name, so that when you want to modify a collection the application can identify which one to modify. If you had two collections named "Life Science", you could not tell that the one tagged 24433 is the one you need to modify and the one tagged 29882 is not.

Because the user selects each collection by its name, PostgreSQL must make sure that names are unique within the table. However, no other table in the database relates to the collections table by the collection name; that would be very inefficient.

Despite being unique, the collection name does not actually define the collection. For example, if you decided to change the name of the collection from "Life Science" to "Biology", it will still be the same collection, only with a different name. As long as the name is still unique, the new name will be acceptable.

The following details the differences between primary keys and unique fields:

  • Primary key:

    • Is used for identifying the row and relating to it.

    • Is impossible (or hard) to update.

    • Should not allow NULL.

  • Unique field(s):

    • Are used as an alternative access to the row.

    • Are updatable, so long as they are kept unique.

    • NULLs are acceptable.

No non-unique keys are defined explicitly in standard SQL syntax because indexes are implementation-dependent. SQL does not define the implementation; it merely defines the relations between data in the database. PostgreSQL does allow non-unique indexes, but the indexes used to enforce SQL keys must always be unique.

Therefore, you may query a table by any combination of its columns, even if you do not have an index on these columns. The indexes are merely an implementation aid that each RDBMS offers you, in order to enable commonly-used queries to be executed more efficiently. Some RDBMS may provide you with additional measures, such as enabling a key to be stored in main memory. These measures will have special commands, such as:
CREATE MEMSTORE ON table COLUMNS cols
(MEMSTORE is not an existing command, it is just an example.)

Note that when you create a primary key or a unique combination of fields, the SQL specification does not actually inform you that an index is created, nor that the retrieval of data by the key is going to be more efficient than a sequential scan. Therefore, if you want to use a combination of non-unique fields as a secondary key, you do not have to specify anything. You can simply start retrieving by that combination. However, if you want to make the retrieval efficient, you will have to resort to the means your RDBMS provider gives you—be it an index, my imaginary MEMSTORE command, or an intelligent RDBMS that creates indexes without your knowledge, based on the fact that you have sent it many queries based on a specific combination of keys... (It learns from experience).

Partial Indexes

A partial index is an index built over a subset of a table; the subset is defined by a conditional expression (called the predicate of the partial index). The index contains entries for only those table rows that satisfy the predicate.

A major motivation for partial indexes is to avoid indexing common values. Since a query searching for a common value (one that accounts for more than a few percent of all the table rows) will not use the index anyway, there is no point in keeping those rows in the index at all. This reduces the size of the index, which will speed up queries that do use the index. It will also speed up many table update operations because the index does not need to be updated in all cases. Example 1-1 shows a possible application of this idea.
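As an illustrative sketch of this idea (the access_log table and the address range are hypothetical): suppose most rows in a web server log come from your own organization's IP range, and you query mainly for outside accesses. There is then no point in indexing the common, local addresses:
CREATE TABLE access_log (
    url varchar,
    client_ip inet
);

CREATE INDEX access_log_client_ip_idx ON access_log (client_ip)
    WHERE NOT (client_ip > inet '192.168.100.0' AND client_ip < inet '192.168.100.255');

-- a query that can use this partial index:
SELECT * FROM access_log WHERE url = '/index.html' AND client_ip = inet '212.78.10.32';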

Another possibility is to exclude values from the index that the typical query workload is not interested in; this is shown in Example 1-2. This results in the same advantages as listed above, but it prevents the "uninteresting" values from being accessed via that index at all, even if an index scan might be profitable in that case. Obviously, setting up partial indexes for this kind of scenario will require a lot of care and experimentation.
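A sketch of this scenario (the orders table is hypothetical): if billed orders are the common, uninteresting case, an index covering only the unbilled orders keeps the frequently run query fast while leaving billed orders out of the index entirely:
CREATE INDEX orders_unbilled_idx ON orders (order_nr)
    WHERE billed is not true;

-- a query whose WHERE condition implies the predicate and can use this index:
SELECT * FROM orders WHERE billed is not true AND order_nr < 10000;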

Example 1-2 also illustrates that the indexed column and the column used in the predicate do not need to match. PostgreSQL supports partial indexes with arbitrary predicates, so long as only columns of the table being indexed are involved. However, keep in mind that the predicate must match the conditions used in the queries that are supposed to benefit from the index. To be precise, a partial index can be used in a query only if the system can recognize that the query's WHERE condition mathematically implies the index's predicate. PostgreSQL does not have a sophisticated theorem prover that can recognize mathematically equivalent predicates that are written in different forms. (Not only is such a general theorem prover extremely difficult to create, it would probably be too slow to be of any real use.) The system can recognize simple inequality implications, for example "x < 1" implies "x < 2"; otherwise the predicate condition must exactly match the query's WHERE condition or the index will not be recognized to be usable.

A third possible use for partial indexes does not require the index to be used in queries at all. The idea here is to create a unique index over a subset of a table, as in Example 1-3. This enforces uniqueness among the rows that satisfy the index predicate, without constraining those that do not.
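A sketch of such a partial unique index (the tests table is hypothetical): to allow only one successful outcome per subject/target pair, while permitting any number of unsuccessful attempts:
CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target)
    WHERE success;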

Finally, a partial index can also be used to override the system's query plan choices. It may occur that data sets with peculiar distributions will cause the system to use an index when it really should not. In that case the index can be set up so that it is not available for the offending query. Normally, PostgreSQL makes reasonable choices about index usage (for example, it avoids them when retrieving common values, so the earlier example really only saves index size, it is not required to avoid index usage), and grossly incorrect plan choices are cause for a bug report.

Keep in mind that setting up a partial index indicates that you know at least as much as the query planner knows, in particular you know when an index might be profitable. Forming this knowledge requires experience and understanding of how indexes in PostgreSQL work. In most cases, the advantage of a partial index over a regular index will not be much.

Examining Index Usage

Although indexes in PostgreSQL do not need maintenance and tuning, it is still important to check which indexes are actually used by the real-life query workload. Examining index usage is done with the EXPLAIN command; its application for this purpose is illustrated in the Red Hat Database Administrator and User's Guide and in the Graphical Tools Guide.

It is difficult to formulate a general procedure for determining which indexes to set up. There are a number of typical cases that have been shown in the examples throughout the previous sections. A good deal of experimentation will be necessary in most cases. The rest of this section gives some tips for that.