Indices

Indices are commonly used to enhance database performance. An index allows the database server to find and retrieve specific rows much faster than it could do without an index. However, indexes also add overhead to the database system as a whole, so they should not be overused.

Introduction to Indices

The classic example of the need for an index is a table similar to this:
CREATE TABLE test1 (
    id integer,
    content varchar
);
and the application requires a lot of queries of the form
SELECT content FROM test1 WHERE id = constant;
Ordinarily, the system would have to scan the entire test1 table row by row to find all matching entries. If there are a lot of rows in test1 and only a few rows (possibly zero or one) returned by the query, then this is clearly an inefficient method. If the system were instructed to maintain an index on the id column, then it could use a more efficient method for locating matching rows. For instance, it might only have to walk a few levels deep into a search tree.

A similar approach is used in most books of non-fiction: Terms and concepts that are frequently referenced by readers are collected in an alphabetic index at the end of the book. The interested reader can scan the index relatively quickly and flip to the appropriate page, without having to read the entire book to find the desired information. Just as it is the task of the author to anticipate the items that the readers are most likely to look up, it is the task of the database programmer to foresee which indexes would be of advantage to users.

The following command would be used to create the index on the id column, as discussed:
CREATE INDEX test1_id_index ON test1 (id);
The name test1_id_index can be chosen freely, but you should pick something that enables you to remember later what the index was for.

To remove an index, use the DROP INDEX command. Indices can be added and removed from tables at any time.
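For example, to remove the index created above:
DROP INDEX test1_id_index;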

Once the index is created, no further intervention is required: the system will use the index when it thinks it would be more efficient than a sequential table scan. But you may have to run the VACUUM ANALYZE command regularly to update statistics to allow the query planner to make educated decisions. Also read the Red Hat Database Administrator and User's Guide for information about how to find out whether an index is used and when and why the planner may choose to not use an index.
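For instance, to refresh the statistics for the example table and then see whether the planner would choose the index (EXPLAIN displays the chosen plan; the exact output depends on your installation and data):
VACUUM ANALYZE test1;
EXPLAIN SELECT content FROM test1 WHERE id = 42;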

Indices can also benefit UPDATE and DELETE commands with search conditions. Note that a query or data manipulation command can use at most one index per table. Indices can also be used in table join methods. Thus, an index defined on a column that is part of a join condition can significantly speed up queries with joins.
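For illustration, each of the following statements could make use of the index on test1.id (the table other_table and its column test1_id are hypothetical, shown only to sketch a join):
UPDATE test1 SET content = 'new value' WHERE id = 42;
DELETE FROM test1 WHERE id = 42;
SELECT t.content FROM test1 t, other_table o WHERE t.id = o.test1_id;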

When an index is created, it has to be kept synchronized with the table. This adds overhead to data manipulation operations. Therefore indexes that are non-essential or do not get used at all should be removed.

Index Types

PostgreSQL provides several index types: B-tree, R-tree, and Hash. Each index type is more appropriate for a particular query type because of the algorithm it uses. By default, the CREATE INDEX command will create a B-tree index, which fits the most common situations. In particular, the PostgreSQL query optimizer will consider using a B-tree index whenever an indexed column is involved in a comparison using one of these operators: <, <=, =, >=, >
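For example, with the index on test1.id created earlier, a range query such as the following could be handled with a B-tree index scan:
SELECT content FROM test1 WHERE id >= 10 AND id <= 20;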

R-tree indexes are especially suited for spatial data. To create an R-tree index, use a command of the form
CREATE INDEX name ON table USING RTREE (column);
The PostgreSQL query optimizer will consider using an R-tree index whenever an indexed column is involved in a comparison using one of these operators: <<, &<, &>, >>, @, ~=, && (Refer to the Section called Geometric Functions and Operators in Chapter 3 about the meaning of these operators.)
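A minimal sketch, assuming a hypothetical table with a geometric column of type box:
CREATE TABLE areas (id integer, region box);
CREATE INDEX areas_region_idx ON areas USING RTREE (region);
SELECT id FROM areas WHERE region && box '((0,0),(1,1))';  -- && tests for overlap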

The query optimizer will consider using a hash index whenever an indexed column is involved in a comparison using the = operator.

A sequential scan will be used for comparisons involving the <> operator, because such a comparison generally matches most of the table and is therefore not selective enough to profit from an index. For the rare cases where a <> comparison would be selective, you can use a combination of < and > instead, which the optimizer can consider for index use. For instance, instead of:

x <> 'foo'

you can write:

x < 'foo' or x > 'foo'

The following command is used to create a hash index:
CREATE INDEX name ON table USING HASH (column);

Note

Because of the limited utility of hash indexes, a B-tree index should generally be preferred over a hash index. We do not have sufficient evidence that hash indexes are actually faster than B-trees even for = comparisons. Moreover, hash indexes require coarser locks; see the Red Hat Database Administrator and User's Guide.

The B-tree index is an implementation of Lehman-Yao high-concurrency B-trees. The R-tree index method implements standard R-trees using Guttman's quadratic split algorithm. The hash index is an implementation of Litwin's linear hashing. We mention the algorithms used solely to indicate that all of these access methods are fully dynamic and do not have to be optimized periodically (as is the case with, for example, static hash access methods).

Multi-Column Indices

An index can be defined on more than one column. For example, if you have a table of this form:
CREATE TABLE test2 (
  major int,
  minor int,
  name varchar
);
(Say, you keep your /dev directory in a database...) and you frequently make queries like
SELECT name FROM test2 WHERE major = constant AND minor = constant;
then it may be appropriate to define an index on the columns major and minor together, e.g.,
CREATE INDEX test2_mm_idx ON test2 (major, minor);

Currently, only the B-tree implementation supports multi-column indexes. Up to 16 columns may be specified. (This limit can be altered when building PostgreSQL; see the file config.h.)

The query optimizer can use a multi-column index for queries that involve the first n consecutive columns in the index (when used with appropriate operators), up to the total number of columns specified in the index definition. For example, an index on (a, b, c) can be used in queries involving all of a, b, and c, or in queries involving both a and b, or in queries involving only a, but not in other combinations. (In a query involving a and c the optimizer might choose to use the index for a only and treat c like an ordinary unindexed column.)
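For instance, with test2_mm_idx defined above (the constants are arbitrary):
SELECT name FROM test2 WHERE major = 5 AND minor = 3;  -- can use the index on both columns
SELECT name FROM test2 WHERE major = 5;                -- can use the index on the leading column
SELECT name FROM test2 WHERE minor = 3;                -- cannot use the index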

Multi-column indexes can only be used if the clauses involving the indexed columns are joined with AND. For instance,
SELECT name FROM test2 WHERE major = constant OR minor = constant;
cannot make use of the index test2_mm_idx defined above to look up both columns. (It can be used to look up only the major column, however.)

Multi-column indexes should be used sparingly. Most of the time, an index on a single column is sufficient and saves space and time. Indexes with more than three columns are almost certainly inappropriate.

Unique Indices

Indexes may also be used to enforce uniqueness of a column's value, or the uniqueness of the combined values of more than one column.
CREATE UNIQUE INDEX name ON table (column [, ...]);
Only B-tree indexes can be declared unique.

When an index is declared unique, multiple table rows with equal indexed values will not be allowed. NULL values are not considered equal.
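For example, to require that names in the test2 table above be distinct:
CREATE UNIQUE INDEX test2_name_idx ON test2 (name);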

PostgreSQL automatically creates unique indexes when a table is declared with a unique constraint or a primary key, on the columns that make up the primary key or unique columns (a multi-column index, if appropriate), to enforce that constraint. A unique index can be added to a table at any later time, to add a unique constraint. (But a primary key cannot be added after table creation.)
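For illustration, declaring constraints like these causes the corresponding unique indexes to be created automatically (the table test3 is hypothetical):
CREATE TABLE test3 (
    id integer PRIMARY KEY,   -- enforced through an automatically created unique index
    code varchar UNIQUE       -- likewise
);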

Functional Indices

For a functional index, an index is defined on the result of a function applied to one or more columns of a single table. Functional indexes can be used to obtain fast access to data based on the result of function calls.

For example, a common way to do case-insensitive comparisons is to use the lower function:
SELECT * FROM test1 WHERE lower(col1) = 'value';
In order for that query to be able to use an index, the index has to be defined on the result of the lower(column) operation:
CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));

The function in the index definition can take more than one argument, but they must be table columns, not constants. Functional indexes are always single-column (namely, the function result) even if the function uses more than one input field; there cannot be multi-column indexes that contain function calls.

Tip

The restrictions mentioned in the previous paragraph can easily be worked around by defining custom functions to use in the index definition that call the desired function(s) internally.
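A minimal sketch of that workaround, assuming a hypothetical table people with columns first_name and last_name (on some releases the helper function may additionally need to be marked cachable; check the CREATE FUNCTION reference):
CREATE FUNCTION lower_concat(varchar, varchar) RETURNS text
    AS 'SELECT lower($1 || $2)' LANGUAGE 'sql';
CREATE INDEX people_lc_name_idx ON people (lower_concat(first_name, last_name));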

Operator Classes

An index definition may specify an operator class for each column of an index.
CREATE INDEX name ON table (column opclass [, ...]);
The operator class identifies the operators to be used by the index for that column. For example, a B-tree index on four-byte integers would use the int4_ops class; this operator class includes comparison functions for four-byte integers. In practice the default operator class for the column's data type is usually sufficient. The main point of having operator classes is that for some data types, there could be more than one meaningful ordering. For example, we might want to sort a complex-number data type either by absolute value or by real part. We could do this by defining two operator classes for the data type and then selecting the proper class when making an index. There are also some operator classes with special purposes.
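For example, to spell out the class explicitly for the integer column of the test1 table (here it is the same class that would be chosen by default):
CREATE INDEX test1_id_ops_idx ON test1 (id int4_ops);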

The following query shows all defined operator classes:
SELECT am.amname AS acc_name,
       opc.opcname AS ops_name,
       opr.oprname AS ops_comp
    FROM pg_am am, pg_amop amop,
         pg_opclass opc, pg_operator opr
    WHERE amop.amopid = am.oid AND
          amop.amopclaid = opc.oid AND
          amop.amopopr = opr.oid
    ORDER BY acc_name, ops_name, ops_comp;

Keys

Subject: Re: [QUESTIONS] PRIMARY KEY | UNIQUE

        What is the difference between:

              PRIMARY KEY(fields,...) and
              UNIQUE (fields,...)

       - Is this an alias?
       - If PRIMARY KEY is already unique, then why
         is there another kind of key named UNIQUE?

A primary key refers to the field(s) used to identify a specific row; for example, a Social Security number identifying a person.

A merely UNIQUE combination of fields has nothing to do with identifying the row; it is simply an integrity constraint. For example, consider collections of links. Each collection is identified by a unique number, which is the primary key. This key is used in relations to other tables.

However, the application requires that each collection also have a unique name, so that when you want to modify a collection the application can identify which one to modify. If you had two collections named "Life Science", it would be difficult to know that the one tagged 24433 is the one you need to modify and that the one tagged 29882 is not.

Because the user selects each collection by its name, PostgreSQL makes sure that the names are unique within the table. However, no other table in the database relates to the collections table by the collection name; that would be very inefficient.

Despite being unique, the collection name does not actually define the collection. For example, if you decided to change the name of the collection from "Life Science" to "Biology", it will still be the same collection, only with a different name. As long as the name is still unique, the new name will be acceptable.
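A minimal sketch of the schema being described (the table and column names are invented for illustration):
CREATE TABLE collections (
    collection_id integer PRIMARY KEY,  -- identifies the collection; other tables refer to this
    name varchar UNIQUE                 -- must stay unique, but is not used for relations
);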

To summarize, the primary key is the combination of fields that identifies a row and that other tables relate to, whereas a merely unique combination of fields is only an integrity constraint. The remaining question is how keys relate to indexes:

No non-unique keys are defined explicitly in standard SQL syntax because indexes are implementation-dependent. SQL does not define the implementation; it merely defines the relations between data in the database. PostgreSQL does allow non-unique indexes, but the indexes used to enforce SQL keys must always be unique.

Therefore, you may query a table by any combination of its columns, even if you do not have an index on these columns. The indexes are merely an implementation aid that each RDBMS offers you, in order to enable commonly-used queries to be executed more efficiently. Some RDBMS may provide you with additional measures, such as enabling a key to be stored in main memory. These measures will have special commands, such as:
CREATE MEMSTORE ON table COLUMNS cols
(This is not an existing command, just an example.)

Note that when you create a primary key or a unique combination of fields, the SQL specification does not actually inform you that an index is created, nor that the retrieval of data by the key is going to be more efficient than a sequential scan. Therefore, if you want to use a combination of non-unique fields as a secondary key, you do not have to specify anything. You can simply start retrieving by that combination. However, if you want to make the retrieval efficient, you will have to resort to the means your RDBMS provider gives you - be it an index, my imaginary MEMSTORE command, or an intelligent RDBMS that creates indexes without your knowledge, based on the fact that you have sent it many queries based on a specific combination of keys... (It learns from experience).

Partial Indices

Note

Partial indexes are not currently supported by PostgreSQL, but they were once supported by its predecessor, Postgres, and much of the code is still there. We hope to revive support for this feature someday.

A partial index is an index built over a subset of a table; the subset is defined by a predicate. Postgres supported partial indexes with arbitrary predicates. I believe IBM's DB2 for AS/400 supports partial indexes using single-clause predicates.

The main motivation for partial indexes is this: if all of the queries that could profitably use an index fall into a certain range of the table, why build an index over the whole table and suffer the associated space/time costs?

The machinery to build, update and query partial indexes is not too bad. The hairy parts are index selection (which indexes do I build?) and query optimization (which indexes do I use?); i.e., the parts that involve deciding what predicate(s) match the workload/query in some useful way. For those who are into database theory, the problems are basically analogous to the corresponding materialized view problems, albeit with different cost parameters and formulae. These are, in the general case, hard problems for the standard ordinal SQL types; they are super-hard problems with black-box extension types, because the selectivity estimation technology is so crude.