Optimization is a complex task because ultimately it requires understanding of the entire system to be optimized. Although it may be possible to perform some local optimizations with little knowledge of your system or application, the more optimal you want your system to become, the more you will have to know about it.
This chapter tries to explain and give some examples of different ways to optimize MySQL. Remember, however, that there are always additional ways to make the system even faster, although they may require increasing effort to achieve.
The most important factor in making a system fast is its basic design. You also need to know what kinds of things your system will be doing, and what your bottlenecks are.
The most common system bottlenecks are:

- Disk seeks. It takes time for the disk to find a piece of data. The way to optimize around this is to distribute data onto more than one disk.
- Disk reading and writing. Once the disk is at the correct position, the data must be read or written. This, too, is easier to optimize with multiple disks.
- CPU cycles. When the data is in main memory, it must be processed to get a result.
- Memory bandwidth. When the CPU needs more data than fits in its cache, main memory bandwidth becomes a bottleneck.
When using the MyISAM storage engine, MySQL uses extremely fast table locking that allows multiple readers or a single writer. The biggest problem with this storage engine occurs when you have a steady stream of mixed updates and slow selects on a single table. If this is a problem for certain tables, you can use another storage engine for them. See section 14 MySQL Storage Engines and Table Types.
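For example, if mixed updates and slow selects on one table are the bottleneck, one option is to move just that table to a storage engine with row-level locking, such as InnoDB. A minimal sketch (the table name hot_table is hypothetical; ENGINE is a synonym for TYPE in later 4.x versions):

mysql> ALTER TABLE hot_table TYPE=InnoDB;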
MySQL can work with both transactional and non-transactional tables. To be able to work smoothly with non-transactional tables (which can't roll back if something goes wrong), MySQL has the following rules (when not running in strict mode or if you use the IGNORE specifier to INSERT or UPDATE):

- All columns have default values.
- If you insert an inappropriate or out-of-range value into a column, MySQL sets the column to the ``best possible value'' instead of reporting an error.
- If you calculate an expression whose result is illegal or out of range (for example, division by zero), the result is NULL. (This behavior can be changed by using the ERROR_FOR_DIVISION_BY_ZERO SQL mode.)
If you are using non-transactional tables, you should not use MySQL to check column content. In general, the safest (and often fastest) way is to let the application ensure that it passes only legal values to the database.
For more information about this, see section 1.5.6 How MySQL Deals with Constraints, section 13.1.4 INSERT Syntax, and section 5.2.2 The Server SQL Mode.
Because all SQL servers implement different parts of standard SQL, it takes work to write portable SQL applications. It is very easy to achieve portability for very simple selects and inserts, but this becomes more difficult the more capabilities you require. If you want an application that is fast with many database systems, it becomes even harder!
To make a complex application portable, you need to determine which SQL servers it must work with, then determine what features those servers support.
All database systems have some weak points. That is, they have different design compromises that lead to different behavior.
You can use the MySQL crash-me program to find functions, types, and limits that you can use with a selection of database servers. crash-me does not check for every possible feature, but it is still reasonably comprehensive, performing about 450 tests. An example of the type of information crash-me can provide is that you shouldn't have column names longer than 18 characters if you want to be able to use Informix or DB2.
The crash-me program and the MySQL benchmarks are all very database independent. By taking a look at how they are written, you can get a feeling for what you have to do to make your own applications database independent. The programs can be found in the `sql-bench' directory of MySQL source distributions. They are written in Perl and use the DBI database interface. Use of DBI in itself solves part of the portability problem because it provides database-independent access methods. For crash-me results, visit http://dev.mysql.com/tech-resources/crash-me.php. See http://dev.mysql.com/tech-resources/benchmarks/ for the results from the benchmarks.
If you strive for database independence, you need to get a good feeling for each SQL server's bottlenecks. For example, MySQL is very fast in retrieving and updating records for MyISAM tables, but has a problem mixing slow readers and writers on the same table. Oracle, on the other hand, has a big problem when you try to access rows that you have recently updated (until they are flushed to disk). Transactional databases in general are not very good at generating summary tables from log tables, because in this case row locking is almost useless.
To make your application really database independent, you need to define an easily extendable interface through which you manipulate your data. As C++ is available on most systems, it makes sense to use a C++ class-based interface to the databases.
If you use some feature that is specific to a given database system (such as the REPLACE statement, which is specific to MySQL), you should implement the same feature for other SQL servers by coding an alternative method. Although the alternative may be slower, it allows the other servers to perform the same tasks.
With MySQL, you can use the /*! */ syntax to add MySQL-specific keywords to a query. The code inside /**/ is treated as a comment (and ignored) by most other SQL servers.
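For example, the MySQL-specific STRAIGHT_JOIN keyword can be hidden inside such a comment, so the same query remains valid for other servers (table and column names here are hypothetical):

SELECT /*! STRAIGHT_JOIN */ t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;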
If high performance is more important than exactness, as in some Web applications, it is possible to create an application layer that caches all results to give you even higher performance. By letting old results ``expire'' after a while, you can keep the cache reasonably fresh. This provides a method to handle high load spikes, in which case you can dynamically increase the cache and set the expiration timeout higher until things get back to normal.
In this case, the table creation information should contain information about the initial size of the cache and how often the table should normally be refreshed.
An alternative to implementing an application cache is to use the MySQL query cache. By enabling the query cache, the server handles the details of determining whether a query result can be reused. This simplifies your application. See section 5.11 The MySQL Query Cache.
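As a rough sketch (the cache size shown is only an illustration; SET GLOBAL syntax is available in MySQL 4.0.3 and up), you can enable the query cache at runtime and verify its settings:

mysql> SET GLOBAL query_cache_size = 16777216;
mysql> SHOW VARIABLES LIKE 'query_cache%';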
This section describes an early application for MySQL. During MySQL's initial development, the features of MySQL were made to fit our largest customer, which handled data warehousing for a couple of the largest retailers in Sweden.
From all stores, we got weekly summaries of all bonus card transactions, and were expected to provide useful information for the store owners to help them find how their advertising campaigns were affecting their own customers.
The volume of data was quite huge (about seven million summary transactions per month), and we had data for 4-10 years that we needed to present to the users. We got weekly requests from our customers, who wanted to get ``instant'' access to new reports from this data.
We solved this problem by storing all information per month in compressed ``transaction'' tables. We had a set of simple macros that generated summary tables grouped by different criteria (product group, customer id, store, and so on) from the tables in which the transactions were stored. The reports were Web pages that were dynamically generated by a small Perl script. This script parsed a Web page, executed the SQL statements in it, and inserted the results. We would have used PHP or mod_perl instead, but they were not available at the time.
For graphical data, we wrote a simple tool in C that could process SQL query results and produce GIF images based on those results. This tool also was dynamically executed from the Perl script that parses the Web pages.
In most cases, a new report could be created simply by copying an existing script and modifying the SQL query in it. In some cases, we needed to add more columns to an existing summary table or generate a new one. This also was quite simple because we kept all transaction-storage tables on disk. (This amounted to about 50GB of transaction tables and 200GB of other customer data.)
We also let our customers access the summary tables directly with ODBC so that the advanced users could experiment with the data themselves.
This system worked well and we had no problems handling the data with quite modest Sun Ultra SPARCstation hardware (2x200MHz). Eventually the system was migrated to Linux.
This section should contain a technical description of the MySQL benchmark suite (and crash-me), but that description has not yet been written. Currently, you can get a good idea of the benchmarks by looking at the code and results in the `sql-bench' directory in any MySQL source distribution.
This benchmark suite is meant to tell any user what operations a given SQL implementation performs well or poorly.
Note that this benchmark is single-threaded, so it measures the minimum time for the operations performed. We plan to add multi-threaded tests to the benchmark suite in the future.
To use the benchmark suite, the following requirements must be satisfied:

- The benchmark scripts are written in Perl, so you must have Perl installed, along with the Perl DBI database interface module.
- You need the server-specific DBD driver for each server you want to test; for example, the DBD::mysql, DBD::Pg, and DBD::DB2 modules.

See section 2.13 Perl Installation Notes.
After you obtain a MySQL source distribution, you will find the benchmark suite located in its `sql-bench' directory. To run the benchmark tests, build MySQL, change location into the `sql-bench' directory, and execute the run-all-tests script:

shell> cd sql-bench
shell> perl run-all-tests --server=server_name
server_name is one of the supported servers. To get a list of all options and supported servers, invoke this command:
shell> perl run-all-tests --help
The crash-me script also is located in the `sql-bench' directory. crash-me tries to determine what features a database supports and what its capabilities and limitations are by actually running queries. For example, it determines:

- What column types are supported
- How many indexes are supported
- What functions are supported
- How big a query can be
- How big a VARCHAR column can be

You can find the results from crash-me for many different database servers at http://dev.mysql.com/tech-resources/crash-me.php. For more information about benchmark results, visit http://dev.mysql.com/tech-resources/benchmarks/.
You should definitely benchmark your application and database to find out where the bottlenecks are. By fixing a bottleneck (or by replacing it with a ``dummy module''), you can then easily identify the next bottleneck. Even if the overall performance for your application currently is acceptable, you should at least make a plan for each bottleneck, and decide how to solve it if someday you really need the extra performance.
For an example of portable benchmark programs, look at the MySQL benchmark suite. See section 7.1.4 The MySQL Benchmark Suite. You can take any program from this suite and modify it for your needs. By doing this, you can try different solutions to your problem and test which really is fastest for you.
Another free benchmark suite is the Open Source Database Benchmark, available at http://osdb.sourceforge.net/.
It is very common for a problem to occur only when the system is very heavily loaded. We have had many customers who contact us when they have a (tested) system in production and have encountered load problems. In most cases, performance problems turn out to be due to issues of basic database design (for example, table scans are not good at high load) or problems with the operating system or libraries. Most of the time, these problems would be a lot easier to fix if the systems were not already in production.
To avoid problems like this, you should put some effort into benchmarking your whole application under the worst possible load! You can use Super Smack for this. It is available at http://jeremy.zawodny.com/mysql/super-smack/. As the name suggests, it can bring a system to its knees if you ask it, so make sure to use it only on your development systems.
Optimizing SELECT Statements and Other Queries

First, one factor affects all statements: The more complex your permission setup is, the more overhead you will have.
Using simpler permissions when you issue GRANT statements enables MySQL to reduce permission-checking overhead when clients execute statements. For example, if you don't grant any table-level or column-level privileges, the server need not ever check the contents of the tables_priv and columns_priv tables. Similarly, if you place no resource limits on any accounts, the server does not have to perform resource counting. If you have a very high query volume, it may be worth the time to use a simplified grant structure to reduce permission-checking overhead.
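For example, the first grant below is a database-level privilege that never requires the server to consult the tables_priv or columns_priv tables, whereas the column-level grant in the second forces those per-statement lookups (account and object names here are hypothetical):

mysql> GRANT SELECT, INSERT ON mydb.* TO 'appuser'@'localhost';
mysql> GRANT SELECT (col1) ON mydb.t1 TO 'appuser'@'localhost';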
If your problem is with some specific MySQL expression or function, you can use the BENCHMARK() function from the mysql client program to perform a timing test. Its syntax is BENCHMARK(loop_count,expression). For example:
mysql> SELECT BENCHMARK(1000000,1+1);
+------------------------+
| BENCHMARK(1000000,1+1) |
+------------------------+
|                      0 |
+------------------------+
1 row in set (0.32 sec)
This result was obtained on a Pentium II 400MHz system. It shows that MySQL can execute 1,000,000 simple addition expressions in 0.32 seconds on that system.
All MySQL functions should be highly optimized, but there may be some exceptions. BENCHMARK() is a great tool for finding out whether this is a problem with your query.
EXPLAIN Syntax (Get Information About a SELECT)

EXPLAIN tbl_name

Or:

EXPLAIN SELECT select_options
The EXPLAIN statement can be used either as a synonym for DESCRIBE or as a way to obtain information about how MySQL will execute a SELECT statement:

- EXPLAIN tbl_name syntax is synonymous with DESCRIBE tbl_name or SHOW COLUMNS FROM tbl_name.
- When you precede a SELECT statement with the keyword EXPLAIN, MySQL explains how it would process the SELECT, providing information about how tables are joined and in which order.

This section provides information about the second use of EXPLAIN.
With the help of EXPLAIN, you can see when you must add indexes to tables to get a faster SELECT that uses indexes to find records. If you have a problem with incorrect index usage, you should run ANALYZE TABLE to update table statistics such as cardinality of keys, which can affect the choices the optimizer makes. See section 13.5.2.1 ANALYZE TABLE Syntax.
You can also see whether the optimizer joins the tables in an optimal order. To force the optimizer to use a join order corresponding to the order in which the tables are named in the SELECT statement, begin the statement with SELECT STRAIGHT_JOIN rather than just SELECT.
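For example (table names hypothetical), beginning the statement this way fixes the read order to t1 followed by t2, which you can then verify with EXPLAIN:

SELECT STRAIGHT_JOIN t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;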
EXPLAIN returns a row of information for each table used in the SELECT statement. The tables are listed in the output in the order that MySQL would read them while processing the query. MySQL resolves all joins using a single-sweep multi-join method. This means that MySQL reads a row from the first table, then finds a matching row in the second table, then in the third table, and so on. When all tables are processed, MySQL outputs the selected columns and backtracks through the table list until a table is found for which there are more matching rows. The next row is read from this table and the process continues with the next table.
In MySQL 4.1, the EXPLAIN output format was changed to work better with constructs such as UNION statements, subqueries, and derived tables. Most notable is the addition of two new columns: id and select_type. You will not see these columns when using servers older than MySQL 4.1.
Each output row from EXPLAIN provides information about one table, and each row consists of the following columns:
id: The SELECT identifier. This is the sequential number of the SELECT within the query.

select_type: The type of SELECT, which can be any of the following:

SIMPLE: Simple SELECT (not using UNION or subqueries)
PRIMARY: Outermost SELECT
UNION: Second or later SELECT statement in a UNION
DEPENDENT UNION: Second or later SELECT statement in a UNION, dependent on outer query
UNION RESULT: Result of a UNION
SUBQUERY: First SELECT in subquery
DEPENDENT SUBQUERY: First SELECT in subquery, dependent on outer query
DERIVED: Derived table SELECT (subquery in FROM clause)
table: The table to which the row of output refers.

type: The join type. The different join types are listed here, ordered from the best type to the worst:

system: The table has only one row (= system table). This is a special case of the const join type.

const: The table has at most one matching row, which is read at the start of the query. Because there is only one row, values from the columns in this row can be regarded as constants by the rest of the optimizer. const tables are very fast because they are read only once! const is used when you compare all parts of a PRIMARY KEY or UNIQUE index with constant values. In the following queries, tbl_name can be used as a const table:
SELECT * FROM tbl_name WHERE primary_key=1;
SELECT * FROM tbl_name
  WHERE primary_key_part1=1 AND primary_key_part2=2;
eq_ref: One row is read from this table for each combination of rows from the previous tables. Other than the system and const types, this is the best possible join type. It is used when all parts of an index are used by the join and the index is a PRIMARY KEY or UNIQUE index. eq_ref can be used for indexed columns that are compared using the = operator. The comparison value can be a constant or an expression that uses columns from tables that are read before this table. In the following examples, MySQL can use an eq_ref join to process ref_table:
SELECT * FROM ref_table,other_table
  WHERE ref_table.key_column=other_table.column;
SELECT * FROM ref_table,other_table
  WHERE ref_table.key_column_part1=other_table.column
  AND ref_table.key_column_part2=1;
ref: All rows with matching index values are read from this table for each combination of rows from the previous tables. ref is used if the join uses only a leftmost prefix of the key or if the key is not a PRIMARY KEY or UNIQUE index (in other words, if the join cannot select a single row based on the key value). If the key that is used matches only a few rows, this is a good join type. ref can be used for indexed columns that are compared using the = operator. In the following examples, MySQL can use a ref join to process ref_table:
SELECT * FROM ref_table WHERE key_column=expr;
SELECT * FROM ref_table,other_table
  WHERE ref_table.key_column=other_table.column;
SELECT * FROM ref_table,other_table
  WHERE ref_table.key_column_part1=other_table.column
  AND ref_table.key_column_part2=1;
ref_or_null: This join type is like ref, but with the addition that MySQL does an extra search for rows that contain NULL values. This join type optimization is new for MySQL 4.1.1 and is mostly used when resolving subqueries. In the following example, MySQL can use a ref_or_null join to process ref_table:

SELECT * FROM ref_table
  WHERE key_column=expr OR key_column IS NULL;

See section 7.2.7 How MySQL Optimizes IS NULL.
index_merge: This join type indicates that the Index Merge optimization is used. In this case, the key column contains a list of indexes used, and key_len contains a list of the longest key parts for the indexes used. For more information, see section 7.2.6 Index Merge Optimization.
unique_subquery: This type replaces ref for some IN subqueries of the following form:

value IN (SELECT primary_key FROM single_table WHERE some_expr)

unique_subquery is just an index lookup function that replaces the subquery completely for better efficiency.
index_subquery: This join type is similar to unique_subquery. It replaces IN subqueries, but it works for non-unique indexes in subqueries of the following form:

value IN (SELECT key_column FROM single_table WHERE some_expr)
range: Only rows that are in a given range are retrieved, using an index to select the rows. The key column indicates which index is used. The key_len contains the longest key part that was used. The ref column will be NULL for this type. range can be used when a key column is compared to a constant using any of the =, <>, >, >=, <, <=, IS NULL, <=>, BETWEEN, or IN operators:
SELECT * FROM tbl_name WHERE key_column = 10;
SELECT * FROM tbl_name WHERE key_column BETWEEN 10 and 20;
SELECT * FROM tbl_name WHERE key_column IN (10,20,30);
SELECT * FROM tbl_name
  WHERE key_part1 = 10 AND key_part2 IN (10,20,30);
index: This join type is the same as ALL, except that only the index tree is scanned. This usually is faster than ALL, because the index file usually is smaller than the data file. MySQL can use this join type when the query uses only columns that are part of a single index.
ALL: A full table scan is done for each combination of rows from the previous tables. This is normally not good if the table is the first table not marked const, and usually very bad in all other cases. Normally, you can avoid ALL by adding indexes that allow row retrieval from the table based on constant values or column values from earlier tables.
possible_keys: The possible_keys column indicates which indexes MySQL could use to find the rows in this table. Note that this column is totally independent of the order of the tables as displayed in the output from EXPLAIN. That means that some of the keys in possible_keys might not be usable in practice with the generated table order.

If this column is NULL, there are no relevant indexes. In this case, you may be able to improve the performance of your query by examining the WHERE clause to see whether it refers to some column or columns that would be suitable for indexing. If so, create an appropriate index and check the query with EXPLAIN again. See section 13.2.2 ALTER TABLE Syntax.

To see what indexes a table has, use SHOW INDEX FROM tbl_name.
key: The key column indicates the key (index) that MySQL actually decided to use. The key is NULL if no index was chosen. To force MySQL to use or ignore an index listed in the possible_keys column, use FORCE INDEX, USE INDEX, or IGNORE INDEX in your query. See section 13.1.7 SELECT Syntax.

For MyISAM and BDB tables, running ANALYZE TABLE helps the optimizer choose better indexes. For MyISAM tables, myisamchk --analyze does the same. See section 13.5.2.1 ANALYZE TABLE Syntax and section 5.7.2 Table Maintenance and Crash Recovery.
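For example, either of the following updates the index statistics for a MyISAM table (the data directory path shown is hypothetical):

mysql> ANALYZE TABLE tbl_name;
shell> myisamchk --analyze /usr/local/mysql/data/db_name/tbl_name.MYI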
key_len: The key_len column indicates the length of the key that MySQL decided to use. The length is NULL if the key column says NULL. Note that the value of key_len allows you to determine how many parts of a multiple-part key MySQL actually uses.
ref: The ref column shows which columns or constants are used with the key to select rows from the table.
rows: The rows column indicates the number of rows MySQL believes it must examine to execute the query.
Extra: This column contains additional information about how MySQL resolves the query. Here is an explanation of the values that can appear in this column:

Distinct: MySQL stops searching for more rows for the current row combination after it has found the first matching row.

Not exists: MySQL was able to do a LEFT JOIN optimization on the query and does not examine more rows in this table for the previous row combination after it finds one row that matches the LEFT JOIN criteria.
Here is an example of the type of query that can be optimized this way:
SELECT * FROM t1 LEFT JOIN t2 ON t1.id=t2.id
  WHERE t2.id IS NULL;

Assume that t2.id is defined as NOT NULL. In this case, MySQL scans t1 and looks up the rows in t2 using the values of t1.id. If MySQL finds a matching row in t2, it knows that t2.id can never be NULL, and does not scan through the rest of the rows in t2 that have the same id value. In other words, for each row in t1, MySQL needs to do only a single lookup in t2, regardless of how many rows actually match in t2.
range checked for each record (index map: #): MySQL found no good index to use, but found that some indexes might be usable once column values from preceding tables are known. For each row combination in the preceding tables, MySQL checks whether it can use a range or index_merge access method to retrieve rows. The applicability criteria are as described in section 7.2.5 Range Optimization and section 7.2.6 Index Merge Optimization, with the exception that all column values for the preceding table are known and considered to be constants. This is not very fast, but is faster than performing a join with no index at all.
Using filesort: MySQL needs an extra pass to find out how to retrieve the rows in sorted order. The sort is done by going through all rows according to the join type and storing the sort key and pointer to the row for all rows that match the WHERE clause. The keys then are sorted and the rows are retrieved in sorted order. See section 7.2.10 How MySQL Optimizes ORDER BY.
Using index: The column information is retrieved from the table using only information in the index tree without having to do an additional seek to read the actual row. This strategy can be used when the query uses only columns that are part of a single index.
Using temporary: To resolve the query, MySQL needs to create a temporary table to hold the result. This typically happens if the query contains GROUP BY and ORDER BY clauses that list columns differently.
Using where: A WHERE clause is used to restrict which rows to match against the next table or send to the client. Unless you specifically intend to fetch or examine all rows from the table, you may have something wrong in your query if the Extra value is not Using where and the table join type is ALL or index.
If you want to make your queries as fast as possible, you should look out for Extra values of Using filesort and Using temporary.
Using sort_union(...), Using union(...), Using intersect(...): These indicate how index scans are merged for the index_merge join type. See section 7.2.6 Index Merge Optimization for more information.
Using index for group-by: Similar to the Using index way of accessing a table, Using index for group-by indicates that MySQL found an index that can be used to retrieve all columns of a GROUP BY or DISTINCT query without any extra disk access to the actual table. Additionally, the index is used in the most efficient way so that for each group, only a few index entries are read. For details, see section 7.2.11 How MySQL Optimizes GROUP BY.
You can get a good indication of how good a join is by taking the product of the values in the rows column of the EXPLAIN output. This should tell you roughly how many rows MySQL must examine to execute the query. If you restrict queries with the max_join_size system variable, this product also is used to determine which multiple-table SELECT statements to execute. See section 7.5.2 Tuning Server Parameters.
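For example (the limit shown is only an illustration; SET SESSION syntax is available in MySQL 4.0.3 and up), the following makes the server refuse any multiple-table SELECT whose estimated row product exceeds one million:

mysql> SET SESSION max_join_size=1000000;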
The following example shows how a multiple-table join can be optimized progressively based on the information provided by EXPLAIN. Suppose that you have the SELECT statement shown here and you plan to examine it using EXPLAIN:
EXPLAIN SELECT tt.TicketNumber, tt.TimeIn,
               tt.ProjectReference, tt.EstimatedShipDate,
               tt.ActualShipDate, tt.ClientID,
               tt.ServiceCodes, tt.RepetitiveID,
               tt.CurrentProcess, tt.CurrentDPPerson,
               tt.RecordVolume, tt.DPPrinted, et.COUNTRY,
               et_1.COUNTRY, do.CUSTNAME
        FROM tt, et, et AS et_1, do
        WHERE tt.SubmitTime IS NULL
          AND tt.ActualPC = et.EMPLOYID
          AND tt.AssignedPC = et_1.EMPLOYID
          AND tt.ClientID = do.CUSTNMBR;
For this example, make the following assumptions:

The columns being compared have been declared as follows:

Table  Column      Column Type
tt     ActualPC    CHAR(10)
tt     AssignedPC  CHAR(10)
tt     ClientID    CHAR(10)
et     EMPLOYID    CHAR(15)
do     CUSTNMBR    CHAR(15)

The tables have the following indexes:

Table  Index
tt     ActualPC
tt     AssignedPC
tt     ClientID
et     EMPLOYID (primary key)
do     CUSTNMBR (primary key)

The tt.ActualPC values are not evenly distributed.
Initially, before any optimizations have been performed, the EXPLAIN statement produces the following information:

table type possible_keys                key  key_len ref  rows Extra
et    ALL  PRIMARY                      NULL NULL    NULL 74
do    ALL  PRIMARY                      NULL NULL    NULL 2135
et_1  ALL  PRIMARY                      NULL NULL    NULL 74
tt    ALL  AssignedPC,ClientID,ActualPC NULL NULL    NULL 3872
           range checked for each record (key map: 35)
Because type is ALL for each table, this output indicates that MySQL is generating a Cartesian product of all the tables; that is, every combination of rows. This takes quite a long time, because the product of the number of rows in each table must be examined. For the case at hand, this product is 74 * 2135 * 74 * 3872 = 45,268,558,720 rows. If the tables were bigger, you can only imagine how long it would take.
One problem here is that MySQL can use indexes on columns more efficiently if they are declared the same. (For ISAM tables, indexes may not be used at all unless the columns are declared the same.) In this context, VARCHAR and CHAR are the same unless they are declared as different lengths. Because tt.ActualPC is declared as CHAR(10) and et.EMPLOYID is declared as CHAR(15), there is a length mismatch.
To fix this disparity between column lengths, use ALTER TABLE to lengthen ActualPC from 10 characters to 15 characters:
mysql> ALTER TABLE tt MODIFY ActualPC VARCHAR(15);
Now tt.ActualPC and et.EMPLOYID are both VARCHAR(15). Executing the EXPLAIN statement again produces this result:
table type   possible_keys                key     key_len ref         rows Extra
tt    ALL    AssignedPC,ClientID,ActualPC NULL    NULL    NULL        3872 Using where
do    ALL    PRIMARY                      NULL    NULL    NULL        2135
             range checked for each record (key map: 1)
et_1  ALL    PRIMARY                      NULL    NULL    NULL        74
             range checked for each record (key map: 1)
et    eq_ref PRIMARY                      PRIMARY 15      tt.ActualPC 1
This is not perfect, but is much better: The product of the rows values is now less by a factor of 74. This version executes in a couple of seconds.
A second alteration can be made to eliminate the column length mismatches for the tt.AssignedPC = et_1.EMPLOYID and tt.ClientID = do.CUSTNMBR comparisons:
mysql> ALTER TABLE tt MODIFY AssignedPC VARCHAR(15),
    ->                MODIFY ClientID   VARCHAR(15);
Now EXPLAIN produces the output shown here:
table type   possible_keys                key      key_len ref           rows Extra
et    ALL    PRIMARY                      NULL     NULL    NULL          74
tt    ref    AssignedPC,ClientID,ActualPC ActualPC 15      et.EMPLOYID   52   Using where
et_1  eq_ref PRIMARY                      PRIMARY  15      tt.AssignedPC 1
do    eq_ref PRIMARY                      PRIMARY  15      tt.ClientID   1
This is almost as good as it can get.
The remaining problem is that, by default, MySQL assumes that values in the tt.ActualPC column are evenly distributed, and that is not the case for the tt table. Fortunately, it is easy to tell MySQL to analyze the key distribution:
mysql> ANALYZE TABLE tt;
Now the join is perfect, and EXPLAIN produces this result:
table type   possible_keys                key      key_len ref           rows Extra
tt    ALL    AssignedPC,ClientID,ActualPC NULL     NULL    NULL          3872 Using where
et    eq_ref PRIMARY                      PRIMARY  15      tt.ActualPC   1
et_1  eq_ref PRIMARY                      PRIMARY  15      tt.AssignedPC 1
do    eq_ref PRIMARY                      PRIMARY  15      tt.ClientID   1
Note that the rows column in the output from EXPLAIN is an educated guess from the MySQL join optimizer. You should check whether the numbers are even close to the truth. If not, you may get better performance by using STRAIGHT_JOIN in your SELECT statement and trying to list the tables in a different order in the FROM clause.
In most cases, you can estimate the performance by counting disk seeks. For small tables, you can usually find a row in one disk seek (because the index is probably cached). For bigger tables, you can estimate that, using B-tree indexes, you will need this many seeks to find a row: log(row_count) / log(index_block_length / 3 * 2 / (index_length + data_pointer_length)) + 1.

In MySQL, an index block is usually 1024 bytes and the data pointer is usually 4 bytes. For a 500,000-row table with an index length of 3 bytes (medium integer), the formula indicates log(500,000)/log(1024/3*2/(3+4)) + 1 = 4 seeks.
This index would require storage of about 500,000 * 7 * 3/2 = 5.2MB (assuming a typical index buffer fill ratio of 2/3), so you will probably have much of the index in memory and you will probably need only one or two calls to read data to find the row.
For writes, however, you will need four seek requests (as above) to find where to place the new index value and normally two seeks to update the index and write the row.
Note that the preceding discussion doesn't mean that your application performance will slowly degenerate by log N! As long as everything is cached by the OS or SQL server, things become only marginally slower as the table gets bigger. After the data gets too big to be cached, things start to go much slower until your application is bound only by disk seeks (which increase by log N). To avoid this, increase the key cache size as the data grows. For MyISAM tables, the key cache size is controlled by the key_buffer_size system variable. See section 7.5.2 Tuning Server Parameters.
Speed of SELECT Queries
In general, when you want to make a slow SELECT ... WHERE query faster, the first thing to check is whether you can add an index. All references between different tables should usually be done with indexes. You can use the EXPLAIN statement to determine which indexes are used for a SELECT. See section 7.4.5 How MySQL Uses Indexes and section 7.2.1 EXPLAIN Syntax (Get Information About a SELECT).
Some general tips for speeding up queries on MyISAM tables:

- To help MySQL better optimize queries, use ANALYZE TABLE or run myisamchk --analyze on a table after it has been loaded with data. This updates a value for each index part that indicates the average number of rows that have the same value. (For unique indexes, this is always 1.) MySQL uses this to decide which index to choose when you join two tables based on a non-constant expression. You can check the result from the table analysis by using SHOW INDEX FROM tbl_name and examining the Cardinality value. myisamchk --description --verbose shows index distribution information.

- To sort an index and data according to an index, use myisamchk --sort-index --sort-records=1 (if you want to sort on index 1). This is a good way to make queries faster if you have a unique index from which you want to read all records in order according to the index. Note that the first time you sort a large table this way, it may take a long time. A sample invocation appears after this list.
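A minimal sketch of such an invocation (the data directory path is hypothetical; the server must not be accessing the table while myisamchk runs):

shell> myisamchk --sort-index --sort-records=1 /usr/local/mysql/data/db_name/tbl_name.MYI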
How MySQL Optimizes WHERE Clauses
This section discusses optimizations that can be made for processing WHERE clauses. The examples use SELECT statements, but the same optimizations apply for WHERE clauses in DELETE and UPDATE statements.
Note that work on the MySQL optimizer is ongoing, so this section is incomplete. MySQL does many optimizations, not all of which are documented here.
Some of the optimizations performed by MySQL are listed here:

- Removal of unnecessary parentheses:
  ((a AND b) AND c OR (((a AND b) AND (c AND d))))
  -> (a AND b AND c) OR (a AND b AND c AND d)

- Constant folding:
  (a<b AND b=c) AND a=5
  -> b>5 AND b=c AND a=5

- Constant condition removal (needed because of constant folding):
  (B>=5 AND B=5) OR (B=6 AND 5=5) OR (B=7 AND 5=6)
  -> B=5 OR B=6
- COUNT(*) on a single table without a WHERE is retrieved directly from the table information for MyISAM and HEAP tables. This is also done for any NOT NULL expression when used with only one table.
- Early detection of invalid constant expressions. MySQL quickly detects that some SELECT statements are impossible and returns no rows.
- HAVING is merged with WHERE if you don't use GROUP BY or group functions (COUNT(), MIN(), and so on).
- For each table in a join, a simpler WHERE is constructed to get a fast WHERE evaluation for the table and also to skip records as soon as possible.
- All constant tables are read first before any other tables in the query. A constant table is an empty table, a table with one row, or a table that is used with a WHERE clause on a PRIMARY KEY or a UNIQUE index, where all index parts are compared to constant expressions and are defined as NOT NULL. All of the following tables are used as constant tables:

  SELECT * FROM t WHERE primary_key=1;
  SELECT * FROM t1,t2
    WHERE t1.primary_key=1 AND t2.primary_key=t1.id;
- The best join combination for joining the tables is found by trying all possibilities. If all columns in the ORDER BY and GROUP BY clauses come from the same table, that table is preferred first when joining.
- If there is an ORDER BY clause and a different GROUP BY clause, or if the ORDER BY or GROUP BY contains columns from tables other than the first table in the join queue, a temporary table is created.
- If you use SQL_SMALL_RESULT, MySQL uses an in-memory temporary table.
- Before each record is output, those that do not match the HAVING clause are skipped.
Some examples of queries that are very fast:
SELECT COUNT(*) FROM tbl_name;
SELECT MIN(key_part1),MAX(key_part1) FROM tbl_name;
SELECT MAX(key_part2) FROM tbl_name WHERE key_part1=constant;
SELECT ... FROM tbl_name ORDER BY key_part1,key_part2,... LIMIT 10;
SELECT ... FROM tbl_name ORDER BY key_part1 DESC, key_part2 DESC, ... LIMIT 10;
The following queries are resolved using only the index tree, assuming that the indexed columns are numeric:
SELECT key_part1,key_part2 FROM tbl_name WHERE key_part1=val;
SELECT COUNT(*) FROM tbl_name WHERE key_part1=val1 AND key_part2=val2;
SELECT key_part2 FROM tbl_name GROUP BY key_part1;
The following queries use indexing to retrieve the rows in sorted order without a separate sorting pass:
SELECT ... FROM tbl_name ORDER BY key_part1,key_part2,... ;
SELECT ... FROM tbl_name ORDER BY key_part1 DESC, key_part2 DESC, ... ;
The range access method uses a single index to retrieve a subset of table records that are contained within one or several index value intervals. It can be used for a single-part or multiple-part index. A detailed description of how intervals are extracted from the WHERE clause is given in the following sections.

For a single-part index, index value intervals can be conveniently represented by corresponding conditions in the WHERE clause, so we'll talk about ``range conditions'' instead of intervals.
The definition of a range condition for a single-part index is as follows:

- For both BTREE and HASH indexes, comparison of a key part with a constant value is a range condition when using the =, <=>, IN, IS NULL, or IS NOT NULL operators.

- For BTREE indexes, comparison of a key part with a constant value is a range condition when using the >, <, >=, <=, BETWEEN, !=, or <> operators, or LIKE 'pattern' (where 'pattern' doesn't start with a wildcard).

- For all types of indexes, multiple range conditions combined with OR or AND form a range condition.
``Constant value'' in the preceding descriptions means one of the following:

- A constant from the query string
- A column of a const or system table from the same join
- The result of an uncorrelated subquery
- Any expression composed entirely from subexpressions of the preceding types
Here are some examples of queries with range conditions in the WHERE clause:
SELECT * FROM t1 WHERE key_col > 1 AND key_col < 10;
SELECT * FROM t1 WHERE key_col = 1 OR key_col IN (15,18,20);
SELECT * FROM t1
  WHERE key_col LIKE 'ab%' OR key_col BETWEEN 'bar' AND 'foo';
Note that some non-constant values may be converted to constants during the constant propagation phase.
MySQL tries to extract range conditions from the WHERE clause for each of the possible indexes. During the extraction process, conditions that can't be used for constructing the range condition are dropped, conditions that produce overlapping ranges are combined, and conditions that produce empty ranges are removed.
For example, consider the following statement, where key1 is an indexed column and nonkey is not indexed:
SELECT * FROM t1 WHERE
  (key1 < 'abc' AND (key1 LIKE 'abcde%' OR key1 LIKE '%b')) OR
  (key1 < 'bar' AND nonkey = 4) OR
  (key1 < 'uux' AND key1 > 'z');
The extraction process for key key1 is as follows:

1. Start with the original WHERE clause:

   (key1 < 'abc' AND (key1 LIKE 'abcde%' OR key1 LIKE '%b')) OR
   (key1 < 'bar' AND nonkey = 4) OR
   (key1 < 'uux' AND key1 > 'z')

2. Remove nonkey = 4 and key1 LIKE '%b' because they cannot be used for a range scan. The right way to remove them is to replace them with TRUE, so that we don't miss any matching records when doing the range scan. Having replaced them with TRUE, we get:

   (key1 < 'abc' AND (key1 LIKE 'abcde%' OR TRUE)) OR
   (key1 < 'bar' AND TRUE) OR
   (key1 < 'uux' AND key1 > 'z')

3. Collapse conditions that are always true or false:

   (key1 LIKE 'abcde%' OR TRUE) is always true
   (key1 < 'uux' AND key1 > 'z') is always false

   Replacing these conditions with constants, we get:

   (key1 < 'abc' AND TRUE) OR (key1 < 'bar' AND TRUE) OR (FALSE)

   Removing unnecessary TRUE and FALSE constants, we obtain:

   (key1 < 'abc') OR (key1 < 'bar')

4. Combining overlapping intervals into one yields the condition used for the range scan:

   (key1 < 'bar')
In general (and as demonstrated in the example), the condition used for a range scan is less restrictive than the WHERE clause. MySQL performs an additional check to filter out rows that satisfy the range condition but not the full WHERE clause.

The range condition extraction algorithm can handle nested AND/OR constructs of arbitrary depth, and its output doesn't depend on the order in which conditions appear in the WHERE clause.
Range conditions on a multiple-part index are an extension of range conditions for a single-part index. A range condition on a multiple-part index restricts index records to lie within one or several key tuple intervals. Key tuple intervals are defined over a set of key tuples, using ordering from the index.
For example, consider a multiple-part index defined as key1(key_part1, key_part2, key_part3), and the following set of key tuples listed in key order:
key_part1  key_part2  key_part3
NULL       1          'abc'
NULL       1          'xyz'
NULL       2          'foo'
1          1          'abc'
1          1          'xyz'
1          2          'abc'
2          1          'aaa'
The condition key_part1 = 1 defines this interval:
(1, -inf, -inf) <= (key_part1, key_part2, key_part3) < (1, +inf, +inf)
The interval covers the 4th, 5th, and 6th tuples in the preceding data set and can be used by the range access method.
By contrast, the condition key_part3 = 'abc' does not define a single interval and cannot be used by the range access method.
The following descriptions indicate how range conditions work for multiple-part indexes in greater detail.
- For HASH indexes, each interval containing identical values can be used. This means that the interval can be produced only for conditions in the following form:

      key_part1 cmp const1
  AND key_part2 cmp const2
  AND ...
  AND key_partN cmp constN;

  Here, const1, const2, ... are constants, cmp is one of the =, <=>, or IS NULL comparison operators, and the conditions cover all index parts. (That is, there are N conditions, one for each part of an N-part index.) See section 7.2.5.1 Range Access Method for Single-Part Indexes for the definition of what is considered to be a constant. For example, the following is a range condition for a three-part HASH index:

  key_part1 = 1 AND key_part2 IS NULL AND key_part3 = 'foo'
- For a BTREE index, an interval might be usable for conditions combined with AND, where each condition compares a key part with a constant value using =, <=>, IS NULL, >, <, >=, <=, !=, <>, BETWEEN, or LIKE 'pattern' (where 'pattern' doesn't start with a wildcard). An interval can be used as long as it is possible to determine a single key tuple containing all records that match the condition (or two intervals if <> or != is used). For example, for this condition:

  key_part1 = 'foo' AND key_part2 >= 10 AND key_part3 > 10

  The single interval will be:

  ('foo', 10, 10) < (key_part1, key_part2, key_part3) < ('foo', +inf, +inf)

  It is possible that the created interval will contain more records than the initial condition. For example, the preceding interval includes the value ('foo', 11, 0), which does not satisfy the original condition.
- If conditions that cover sets of records contained within intervals are combined with OR, they form a condition that covers a set of records contained within the union of their intervals. If the conditions are combined with AND, they form a condition that covers a set of records contained within the intersection of their intervals. For example, for this condition on a two-part index:

  (key_part1 = 1 AND key_part2 < 2) OR (key_part1 > 5)

  The intervals will be:

  (1, -inf) < (key_part1, key_part2) < (1, 2)
  (5, -inf) < (key_part1, key_part2)

  In this example, the interval on the first line uses one key part for the left bound and two key parts for the right bound. The interval on the second line uses only one key part. The key_len column in the EXPLAIN output indicates the maximum length of the key prefix used.
In some cases, key_len may indicate that a key part was used, but that might be not what you would expect. Suppose that key_part1 and key_part2 can be NULL. Then the key_len column displays two key part lengths for the following condition:

key_part1 >= 1 AND key_part2 < 2

But in fact, the condition is converted to this:

key_part1 >= 1 AND key_part2 IS NOT NULL
Section 7.2.5.1 Range Access Method for Single-Part Indexes describes how optimizations are performed to combine or eliminate intervals for range conditions on a single-part index. Analogous steps are performed for range conditions on multiple-part keys.
The Index Merge (index_merge) method is used to retrieve rows with several ref, ref_or_null, or range scans and merge the results into one. This method is employed when the table condition is a disjunction of conditions for which ref, ref_or_null, or range could be used with different keys.
This ``join'' type optimization is new in MySQL 5.0.0, and represents a significant change in behavior with regard to indexes, because the old rule was that the server is only ever able to use at most one index for each referenced table.
In EXPLAIN output, this method appears as index_merge in the type column. In this case, the key column contains a list of indexes used, and key_len contains a list of the longest key parts for those indexes.
Examples:
SELECT * FROM tbl_name WHERE key_part1 = 10 OR key_part2 = 20;
SELECT * FROM tbl_name
  WHERE (key_part1 = 10 OR key_part2 = 20) AND non_key_part=30;
SELECT * FROM t1, t2
  WHERE (t1.key1 IN (1,2) OR t1.key2 LIKE 'value%')
  AND t2.key1=t1.some_col;
SELECT * FROM t1, t2
  WHERE t1.key1=1
  AND (t2.key1=t1.some_col OR t2.key2=t1.some_col2);
The Index Merge method has several access algorithms (seen in the Extra field of EXPLAIN output):

- intersection
- union
- sort-union

The following sections describe these algorithms in greater detail.
Note: The Index Merge optimization algorithm has the following known deficiencies:

- If a range scan is possible on some key, an Index Merge will not be considered. For example, consider this query:

  SELECT * FROM t1
    WHERE (goodkey1 < 10 OR goodkey2 < 20) AND badkey < 30;

  For this query, two plans are possible:

  1. An Index Merge scan using the (goodkey1 < 10 OR goodkey2 < 20) condition.
  2. A range scan using the badkey < 30 condition.

  However, the optimizer considers only the second plan. If that is not what you want, you can make the optimizer consider index_merge by using IGNORE INDEX or FORCE INDEX. The following queries will be executed using Index Merge:

  SELECT * FROM t1 FORCE INDEX(goodkey1,goodkey2)
    WHERE (goodkey1 < 10 OR goodkey2 < 20) AND badkey < 30;
  SELECT * FROM t1 IGNORE INDEX(badkey)
    WHERE (goodkey1 < 10 OR goodkey2 < 20) AND badkey < 30;
- If your query has a complex WHERE clause with deep AND/OR nesting and MySQL doesn't choose the optimal plan, try distributing terms using the following identity laws:

  (x AND y) OR z = (x OR z) AND (y OR z)
  (x OR y) AND z = (x AND z) OR (y AND z)
The choice between different possible variants of the index_merge access method and other access methods is based on cost estimates of various available options.
This access algorithm can be employed when a WHERE clause was converted to several range conditions on different keys combined with AND, and each condition is one of the following:

- An N-part condition of this form:

  key_part1=const1 AND key_part2=const2 ... AND key_partN=constN

- Any range condition over a primary key of an InnoDB or BDB table.

Here are some examples:
SELECT * FROM innodb_table WHERE primary_key < 10 AND key_col1=20;
SELECT * FROM tbl_name
  WHERE (key1_part1=1 AND key1_part2=2) AND key2=2;
The Index Merge intersection algorithm performs simultaneous scans on all used indexes and produces the intersection of row sequences that it receives from the merged index scans.
If all columns used in the query are covered by the used indexes, full table records are not retrieved (EXPLAIN output will contain Using index in the Extra field in this case). Here is an example of such a query:

SELECT COUNT(*) FROM t1 WHERE key1=1 AND key2=1;
If the used indexes don't cover all columns used in the query, full records will be retrieved only when the range conditions for all used keys are satisfied.
If one of the merged conditions is a condition over a primary key of an InnoDB or BDB table, it is not used for record retrieval, but is used to filter out records retrieved using other conditions.
The applicability criteria for this algorithm are similar to those of the Index Merge intersection algorithm. The algorithm can be employed when the table's WHERE clause was converted to several range conditions on different keys combined with OR, and each condition is one of the following:

- An N-part condition of this form:

  key_part1=const1 AND key_part2=const2 ... AND key_partN=constN

- Any range condition over a primary key of an InnoDB or BDB table.

- A condition for which the Index Merge intersection algorithm is applicable.

Here are some examples:
SELECT * FROM t1 WHERE key1=1 OR key2=2 OR key3=3;
SELECT * FROM innodb_table
  WHERE (key1=1 AND key2=2)
  OR (key3='foo' AND key4='bar') AND key5=5;
This access algorithm is employed when the WHERE clause was converted to several range conditions combined by OR, but for which the Index Merge method union algorithm is not applicable. Here are some examples:
SELECT * FROM tbl_name WHERE key_col1 < 10 OR key_col2 < 20;
SELECT * FROM tbl_name
  WHERE (key_col1 > 10 OR key_col2 = 20) AND nonkey_col=30;
The difference between the sort-union algorithm and the union algorithm is that the sort-union algorithm must first fetch row IDs for all records and sort them before returning any records.
How MySQL Optimizes IS NULL

MySQL can do the same optimization on col_name IS NULL that it can do with col_name = constant_value. For example, MySQL can use indexes and ranges to search for NULL with IS NULL:
SELECT * FROM tbl_name WHERE key_col IS NULL;
SELECT * FROM tbl_name WHERE key_col <=> NULL;
SELECT * FROM tbl_name
  WHERE key_col=const1 OR key_col=const2 OR key_col IS NULL;
If a WHERE clause includes a col_name IS NULL condition for a column that is declared as NOT NULL, that expression is optimized away. This optimization does not occur in cases when the column might produce NULL anyway; for example, if it comes from a table on the right side of a LEFT JOIN.
MySQL 4.1.1 and up can additionally optimize the combination col_name = expr AND col_name IS NULL, a form that is common in resolved subqueries. EXPLAIN shows ref_or_null when this optimization is used. This optimization can handle one IS NULL for any key part.
Some examples of queries that are optimized, assuming that there is an index on columns a and b of table t2:
SELECT * FROM t1 WHERE t1.a=expr OR t1.a IS NULL;
SELECT * FROM t1, t2 WHERE t1.a=t2.a OR t2.a IS NULL;
SELECT * FROM t1, t2
  WHERE (t1.a=t2.a OR t2.a IS NULL) AND t2.b=t1.b;
SELECT * FROM t1, t2
  WHERE t1.a=t2.a AND (t2.b=t1.b OR t2.b IS NULL);
SELECT * FROM t1, t2
  WHERE (t1.a=t2.a AND t2.a IS NULL AND ...)
  OR (t1.a=t2.a AND t2.a IS NULL AND ...);
ref_or_null works by first doing a read on the reference key, and then a separate search for rows with a NULL key value. Note that the optimization can handle only one IS NULL level. In the following query, MySQL uses key lookups only on the expression (t1.a=t2.a AND t2.a IS NULL) and is not able to use the key part on b:
SELECT * FROM t1, t2
  WHERE (t1.a=t2.a AND t2.a IS NULL)
  OR (t1.b=t2.b AND t2.b IS NULL);
How MySQL Optimizes DISTINCT

DISTINCT combined with ORDER BY needs a temporary table in many cases.
Note that because DISTINCT may use GROUP BY, you should be aware of how MySQL works with columns in ORDER BY or HAVING clauses that are not part of the selected columns. See section 12.9.3 GROUP BY with Hidden Fields.
In most cases, a DISTINCT clause can be considered as a special case of GROUP BY. For example, the following two queries are equivalent:
SELECT DISTINCT c1, c2, c3 FROM t1 WHERE c1 > const;
SELECT c1, c2, c3 FROM t1 WHERE c1 > const GROUP BY c1, c2, c3;
Due to this equivalence, the optimizations applicable to GROUP BY queries can be also applied to queries with a DISTINCT clause. Thus, for more details on the optimization possibilities for DISTINCT queries, see section 7.2.11 How MySQL Optimizes GROUP BY.
When combining LIMIT row_count with DISTINCT, MySQL stops as soon as it finds row_count unique rows.
If you don't use columns from all tables named in a query, MySQL stops scanning the unused tables as soon as it finds the first match. In the following case, assuming that t1 is used before t2 (which you can check with EXPLAIN), MySQL stops reading from t2 (for any particular row in t1) when the first row in t2 is found:
SELECT DISTINCT t1.a FROM t1, t2 WHERE t1.a=t2.a;
How MySQL Optimizes LEFT JOIN and RIGHT JOIN

A LEFT JOIN B join_condition is implemented in MySQL as follows:
- Table B is set to depend on table A and all tables on which A depends.
- Table A is set to depend on all tables (except B) that are used in the LEFT JOIN condition.
- The LEFT JOIN condition is used to decide how to retrieve rows from table B. (In other words, any condition in the WHERE clause is not used.)
- All standard join optimizations are done, with the exception that a table is always read after all tables on which it depends.
- All standard WHERE optimizations are done.
- If there is a row in A that matches the WHERE clause, but there is no row in B that matches the ON condition, an extra B row is generated with all columns set to NULL.
- If you use LEFT JOIN to find rows that don't exist in some table and you have the following test: col_name IS NULL in the WHERE part, where col_name is a column that is declared as NOT NULL, MySQL stops searching for more rows (for a particular key combination) after it has found one row that matches the LEFT JOIN condition.
RIGHT JOIN is implemented analogously to LEFT JOIN, with the roles of the tables reversed.
The join optimizer calculates the order in which tables should be joined. The table read order forced by LEFT JOIN and STRAIGHT_JOIN helps the join optimizer do its work much more quickly, because there are fewer table permutations to check.
Note that this means that if you do a query of the following type, MySQL does a full scan on b because the LEFT JOIN forces it to be read before d:
SELECT * FROM a,b
  LEFT JOIN c ON (c.key=a.key)
  LEFT JOIN d ON (d.key=a.key)
  WHERE b.key=d.key;
The fix in this case is to rewrite the query as follows:
SELECT * FROM b,a
  LEFT JOIN c ON (c.key=a.key)
  LEFT JOIN d ON (d.key=a.key)
  WHERE b.key=d.key;
Starting from MySQL 4.0.14, MySQL does the following LEFT JOIN optimization: If the WHERE condition is always false for the generated NULL row, the LEFT JOIN is changed to a normal join.
For example, the WHERE clause would be false in the following query if t2.column1 were NULL:
SELECT * FROM t1 LEFT JOIN t2 ON (column1) WHERE t2.column2=5;
Therefore, it's safe to convert the query to a normal join:
SELECT * FROM t1, t2 WHERE t2.column2=5 AND t1.column1=t2.column1;
This can be made faster because MySQL can now use table t2 before table t1 if this would result in a better query plan. To force a specific table order, use STRAIGHT_JOIN.
How MySQL Optimizes ORDER BY

In some cases, MySQL can use an index to satisfy an ORDER BY clause without doing any extra sorting.
The index can also be used even if the ORDER BY doesn't match the index exactly, as long as all the unused index parts and all the extra ORDER BY columns are constants in the WHERE clause. The following queries use the index to resolve the ORDER BY part:
SELECT * FROM t1 ORDER BY key_part1,key_part2,... ;
SELECT * FROM t1 WHERE key_part1=constant ORDER BY key_part2;
SELECT * FROM t1 ORDER BY key_part1 DESC, key_part2 DESC;
SELECT * FROM t1
  WHERE key_part1=1 ORDER BY key_part1 DESC, key_part2 DESC;
In some cases, MySQL cannot use indexes to resolve the ORDER BY, although it still uses indexes to find the rows that match the WHERE clause. These cases include the following:
- You use ORDER BY on different keys:
  SELECT * FROM t1 ORDER BY key1, key2;

- You use ORDER BY on non-consecutive key parts:
  SELECT * FROM t1 WHERE key2=constant ORDER BY key_part2;

- You mix ASC and DESC:
  SELECT * FROM t1 ORDER BY key_part1 DESC, key_part2 ASC;

- The key used to fetch the rows is not the same as the one used in the ORDER BY:
  SELECT * FROM t1 WHERE key2=constant ORDER BY key1;

- You are joining many tables, and the columns in the ORDER BY are not all from the first non-constant table that is used to retrieve rows. (This is the first table in the EXPLAIN output that doesn't have a const join type.)

- You have different ORDER BY and GROUP BY expressions.

- The type of table index used doesn't store rows in order. For example, this is true for a HASH index in a HEAP table.
With EXPLAIN SELECT ... ORDER BY, you can check whether MySQL can use indexes to resolve the query. It cannot if you see Using filesort in the Extra column. See section 7.2.1 EXPLAIN Syntax (Get Information About a SELECT).
In those cases where MySQL must sort the result, it uses the following filesort algorithm before MySQL 4.1:

1. Read all rows according to key or by table scanning. Rows that don't match the WHERE clause are skipped.
2. For each row, store a pair of values in a buffer (the sort key and the row pointer). The size of the buffer is the value of the sort_buffer_size system variable.
3. When the buffer gets full, run a qsort (quicksort) on it and store the result in a temporary file. Save a pointer to the sorted block. (If all pairs fit into the sort buffer, no temporary file is created.)
4. Repeat the preceding steps until all rows have been read.
5. Do a multi-merge of up to MERGEBUFF (7) regions to one block in another temporary file. Repeat until all blocks from the first file are in the second file.
6. Repeat the merging until there are fewer than MERGEBUFF2 (15) blocks left.
7. On the last multi-merge, only the pointer to the row (the last part of the sort key) is written to a result file.
8. Read the rows in sorted order by using the row pointers in the result file. To optimize this, we read in a big block of row pointers, sort them, and use them to read the rows in sorted order into a row buffer. The size of the buffer is the value of the read_rnd_buffer_size system variable. The code for this step is in the `sql/records.cc' source file.
One problem with this approach is that it reads rows twice: one time when evaluating the WHERE clause, and again after sorting the pair values. And even if the rows were accessed successively the first time (for example, if a table scan is done), the second time they are accessed randomly. (The sort keys are ordered, but the row positions are not.)
In MySQL 4.1 and up, a filesort optimization is used that records not only the sort key value and row position, but also the columns required for the query. This avoids reading the rows twice. The modified filesort algorithm works like this:

1. Read the rows that match the WHERE clause, as before.
2. For each row, record a tuple of values consisting of the sort key value and row position, plus the columns required for the query.
3. Sort the tuples by sort key value.
4. Retrieve the rows in sorted order, but read the required columns directly from the sorted tuples rather than by accessing the table a second time.
Using the modified filesort algorithm, the tuples are longer than the pairs used in the original method, and fewer of them fit in the sort buffer (the size of which is given by sort_buffer_size). As a result, it is possible for the extra I/O to make the modified approach slower, not faster. To avoid a slowdown, the optimization is used only if the total size of the extra columns in the sort tuple does not exceed the value of the max_length_for_sort_data system variable. (A symptom of setting the value of this variable too high is that you will see high disk activity and low CPU activity.)
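For example, if you see that symptom, you can lower the threshold for the current session so that the original algorithm is used for wide sort tuples (the value shown is only an illustration):

mysql> SET SESSION max_length_for_sort_data=512;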
If you want to increase ORDER BY speed, first see whether you can get MySQL to use indexes rather than an extra sorting phase. If this is not possible, you can try the following strategies:

- Increase the size of the sort_buffer_size variable.

- Increase the size of the read_rnd_buffer_size variable.

- Change tmpdir to point to a dedicated filesystem with lots of empty space. If you use MySQL 4.1 or later, this option accepts several paths that are used in round-robin fashion. Paths should be separated by colon characters (`:') on Unix and semicolon characters (`;') on Windows, NetWare, and OS/2. You can use this feature to spread the load across several directories. Note: The paths should be for directories in filesystems that are located on different physical disks, not different partitions of the same disk. A sample configuration appears after this list.
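A sketch of the corresponding my.cnf entry on Unix (the paths shown are hypothetical):

[mysqld]
tmpdir = /disk1/mysqltmp:/disk2/mysqltmp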
By default, MySQL sorts all GROUP BY col1, col2, ... queries as if you had also specified ORDER BY col1, col2, ... in the query. If you include an ORDER BY clause explicitly that contains the same column list, MySQL optimizes it away without any speed penalty, although the sorting still occurs. If a query includes GROUP BY but you want to avoid the overhead of sorting the result, you can suppress sorting by specifying ORDER BY NULL. For example:

INSERT INTO foo
SELECT a, COUNT(*) FROM bar GROUP BY a ORDER BY NULL;
How MySQL Optimizes GROUP BY

The most general way to satisfy a GROUP BY clause is to scan the whole table and create a new temporary table where all rows from each group are consecutive, and then use this temporary table to discover groups and apply aggregate functions (if any). In some cases, MySQL is able to do much better than that and avoid creation of temporary tables by using index access.
The most important preconditions for using indexes for GROUP BY are that all GROUP BY columns reference attributes from the same index, and that the index stores its keys in order (for example, this is a B-Tree index, and not a HASH index). Whether use of temporary tables can be replaced by index access also depends on which parts of an index are used in a query, the conditions specified for these parts, and the selected aggregate functions.
There are two ways to execute a GROUP BY query via index access, as detailed in the following sections. In the first method, the grouping operation is applied together with all range predicates (if any). The second method first performs a range scan, and then groups the resulting tuples.
The most efficient way is when the index is used to directly retrieve the group fields. With this access method, MySQL uses the property of some index types (for example, B-Trees) that the keys are ordered. This property allows lookup of groups in an index without having to consider all keys in the index that satisfy all WHERE conditions. Since this access method considers only a fraction of the keys in an index, it is called a ``loose index scan.'' When there is no WHERE clause, a loose index scan reads as many keys as the number of groups, which may be a much smaller number than all keys. If the WHERE clause contains range predicates (described in section 7.2.1 EXPLAIN Syntax (Get Information About a SELECT), under the range join type), a loose index scan looks up the first key of each group that satisfies the range conditions, and again reads the least possible number of keys. This is possible under the following conditions:
- The GROUP BY includes the first consecutive parts of the index
(if, instead of GROUP BY, the query has a DISTINCT clause,
then all distinct attributes refer to the beginning of the index).
- The only aggregate functions used in the select list (if any) are
MIN() and MAX(), and all of them refer to the same column.
- Any parts of the index other than those from the GROUP BY referenced in the
query must be constants (that is, they must be referenced in equalities
with constants), except for the argument of MIN() or MAX() functions.
The EXPLAIN
output for such queries shows Using index for
group-by
in the Extra
column.
The following queries provide several examples that fall into this
category, assuming there is an index idx(c1, c2, c3)
on table
t1(c1,c2,c3,c4)
:
SELECT c1, c2 FROM t1 GROUP BY c1, c2;
SELECT DISTINCT c1, c2 FROM t1;
SELECT c1, MIN(c2) FROM t1 GROUP BY c1;
SELECT c1, c2 FROM t1 WHERE c1 < const GROUP BY c1, c2;
SELECT MAX(c3), MIN(c3), c1, c2 FROM t1 WHERE c2 > const GROUP BY c1, c2;
SELECT c2 FROM t1 WHERE c1 < const GROUP BY c1, c2;
SELECT c1, c2 FROM t1 WHERE c3 = const GROUP BY c1, c2;
The following queries cannot be executed with this quick select method, for the reasons given:
- There are aggregate functions other than MIN() or MAX():
SELECT c1, SUM(c2) FROM t1 GROUP BY c1;
- The fields in the GROUP BY clause do not refer to the beginning of the index:
SELECT c1, c2 FROM t1 GROUP BY c2, c3;
- The query refers to a part of a key that comes after the GROUP BY parts,
and for which there is no equality with a constant:
SELECT c1, c3 FROM t1 GROUP BY c1, c2;
A tight index scan may be either a full index scan or a range index scan, depending on the query conditions.
When the conditions for a loose index scan are not met, it is still
possible to avoid creation of temporary tables for GROUP BY
queries. If there are range conditions in the WHERE
clause, this
method will read only the keys that satisfy these conditions. Otherwise,
it performs an index scan. Since this method reads all keys in each range
defined by the WHERE
clause, or scans the whole index if there are
no range conditions, we term it a ``tight index scan.'' Notice that with a
tight index scan, the grouping operation is performed after all keys that
satisfy the range conditions have been found.
For this method to work, it is sufficient that for all columns in a query
referring to key parts before or in between the GROUP BY
key parts,
there is a constant equality condition. The constants from the equality
conditions fill in the ``gaps'' in the search keys so that it is possible
to form complete prefixes of the index. Then these index prefixes can
be used for index lookups. If we require sorting of the GROUP BY
result, and it is possible to form search keys that are prefixes of the
index, MySQL also will avoid sorting because searching with prefixes in
an ordered index already retrieves all keys in order.
The following queries will not work with the first method above, but will
still work with the second index access method (assuming we have the
aforementioned index idx
on table t1
):
- There is a gap between the GROUP BY parts, but it is covered by the
condition c2 = 'a':
SELECT c1, c2, c3 FROM t1 WHERE c2 = 'a' GROUP BY c1, c3;
- The GROUP BY does not begin with the first key part, but there is a
condition that provides a constant for that key part:
SELECT c1, c2, c3 FROM t1 WHERE c1 = 'a' GROUP BY c2, c3;
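To check which strategy the optimizer picked for a given query, you can run
EXPLAIN on it; as noted earlier, a loose index scan reports Using index for
group-by in the Extra column. A minimal sketch, using the t1/idx schema
assumed above:
mysql> EXPLAIN SELECT c1, MIN(c2) FROM t1 GROUP BY c1;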
How MySQL Optimizes LIMIT
In some cases, MySQL will handle a query differently when you are
using LIMIT row_count
and not using HAVING
:
- If you are selecting only a few rows with LIMIT, MySQL
uses indexes in some cases when normally it would prefer to do a
full table scan.
- If you use LIMIT row_count with ORDER BY, MySQL ends the
sorting as soon as it has found the first row_count rows rather
than sorting the whole table.
- When you combine LIMIT row_count with DISTINCT, MySQL stops
as soon as it finds row_count unique rows.
- In some cases, a GROUP BY can be resolved by reading the key in order
(or doing a sort on the key) and then calculating summaries until the
key value changes. In this case, LIMIT row_count will not calculate any
unnecessary GROUP BY values.
- As soon as MySQL has sent the required number of rows to the client,
it aborts the query unless you are using SQL_CALC_FOUND_ROWS.
- LIMIT 0 always quickly returns an empty set. This is useful
to check the query or to get the column types of the result columns.
- When the server uses temporary tables to resolve a query,
LIMIT row_count is used to calculate how much space is required.
The output from EXPLAIN will show ALL in the type
column when MySQL uses a table scan to resolve a query. This usually happens
under the following conditions:
- There are no usable restrictions in the ON or WHERE clause
for indexed columns.
- You are comparing indexed columns with constant values and MySQL has
calculated that the constants cover too large a part of the table, so that
a table scan would be faster. See section 7.2.4 How MySQL Optimizes
WHERE Clauses.
For small tables, a table scan often is appropriate. For large tables, try the following techniques to avoid having the optimizer incorrectly choose a table scan:
- Use ANALYZE TABLE tbl_name to update the key distributions for the
scanned table. See section 13.5.2.1 ANALYZE TABLE Syntax.
- Use FORCE INDEX for the scanned table to tell MySQL that table
scans are very expensive compared to using the given index.
See section 13.1.7 SELECT Syntax.
SELECT * FROM t1, t2 FORCE INDEX (index_for_column) WHERE t1.col_name=t2.col_name;
- Start mysqld with the --max-seeks-for-key=1000 option or use
SET max_seeks_for_key=1000 to tell the optimizer to assume that no
key scan will cause more than 1,000 key seeks.
See section 5.2.3 Server System Variables.
Speed of INSERT Statements
The time to insert a record is determined by the following factors, where the numbers indicate approximate proportions:
- Connecting: (3)
- Sending query to server: (2)
- Parsing query: (2)
- Inserting record: (1 x size of record)
- Inserting indexes: (1 x number of indexes)
- Closing: (1)
This does not take into consideration the initial overhead to open tables, which is done once for each concurrently running query.
The size of the table slows down the insertion of indexes by log N, assuming B-tree indexes.
You can use the following methods to speed up inserts:
- Use INSERT statements with multiple VALUES lists to insert several
rows at a time. This is much faster (many times faster in some cases) than
using separate single-row INSERT statements. If you are adding data
to a non-empty table, you may tune the bulk_insert_buffer_size
variable to make it even faster.
See section 5.2.3 Server System Variables.
- If you are inserting many rows from different clients, you can get
higher speed by using the INSERT DELAYED statement. See section 13.1.4 INSERT
Syntax.
- With MyISAM tables you can insert rows at the same time that
SELECT statements are running if there are no deleted rows in the
tables.
- When loading a table from a text file, use LOAD DATA INFILE. This
is usually 20 times faster than using a lot of INSERT statements.
See section 13.1.5 LOAD DATA INFILE Syntax.
- With some extra work, it is possible to make LOAD DATA INFILE run even
faster when the table has many indexes. Use the following procedure:
1. Optionally create the table with CREATE TABLE.
2. Execute a FLUSH TABLES statement or a mysqladmin flush-tables
command.
3. Use myisamchk --keys-used=0 -rq /path/to/db/tbl_name. This will
remove all use of all indexes for the table.
4. Insert data into the table with LOAD DATA INFILE. This will not
update any indexes and will therefore be very fast.
5. If you intend only to read from the table in the future, use
myisampack to make it smaller. See section 14.1.3.3 Compressed Table Characteristics.
6. Re-create the indexes with myisamchk -r -q
/path/to/db/tbl_name. This will create the index tree in memory before
writing it to disk, which is much faster because it avoids lots of disk
seeks. The resulting index tree is also perfectly balanced.
7. Execute a FLUSH TABLES statement or a mysqladmin flush-tables
command.
LOAD DATA INFILE
also performs the preceding optimization
if you insert into an empty MyISAM
table; the main difference is that you can let
myisamchk
allocate much more temporary memory for the index creation
than you might want the server to allocate for index re-creation when it
executes the LOAD DATA INFILE
statement.
As of MySQL 4.0, you can also use
ALTER TABLE tbl_name DISABLE KEYS
instead of
myisamchk --keys-used=0 -rq /path/to/db/tbl_name
and
ALTER TABLE tbl_name ENABLE KEYS
instead of
myisamchk -r -q /path/to/db/tbl_name
. This way you can also skip the
FLUSH TABLES
steps.
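As a minimal sketch of this SQL-only approach (the table and file names
are hypothetical), the bulk-load sequence looks like this:
mysql> ALTER TABLE t1 DISABLE KEYS;
mysql> LOAD DATA INFILE '/tmp/t1.txt' INTO TABLE t1;
mysql> ALTER TABLE t1 ENABLE KEYS;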
- You can speed up INSERT operations that are done
with multiple statements by locking your tables:
LOCK TABLES a WRITE;
INSERT INTO a VALUES (1,23),(2,34),(4,33);
INSERT INTO a VALUES (8,26),(6,29);
UNLOCK TABLES;
A performance benefit occurs because the index buffer is flushed to disk only
once, after all INSERT statements have completed. Normally there would
be as many index buffer flushes as there are different INSERT
statements. Explicit locking statements are not needed if you can insert
all rows with a single statement.
For transactional tables, you should use BEGIN/COMMIT
instead of
LOCK TABLES
to get a speedup.
Locking also lowers the total time of multiple-connection tests, although the
maximum wait time for individual connections might go up because they wait for
locks. For example:
Connection 1 does 1000 inserts
Connections 2, 3, and 4 do 1 insert
Connection 5 does 1000 inserts
If you don't use locking, connections 2, 3, and 4 will finish before 1 and 5. If you use locking, connections 2, 3, and 4 probably will not finish before 1 or 5, but the total time should be about 40% faster.
INSERT
, UPDATE
, and DELETE
operations are very
fast in MySQL, but you will obtain better overall performance by
adding locks around everything that does more than about five inserts or
updates in a row. If you do very many inserts in a row, you could do a
LOCK TABLES
followed by an UNLOCK TABLES
once in a while
(about each 1,000 rows) to allow other threads access to the table. This
would still result in a nice performance gain.
INSERT
is still much slower for loading data than LOAD DATA
INFILE
, even when using the strategies just outlined.
To increase performance for MyISAM tables, for both LOAD DATA
INFILE and INSERT, enlarge the key cache by increasing the
key_buffer_size system variable.
See section 7.5.2 Tuning Server Parameters.
Speed of UPDATE Statements
Update statements are optimized as a SELECT
query with the additional
overhead of a write. The speed of the write depends on the amount of
data being updated and the number of indexes that are updated. Indexes that
are not changed will not be updated.
Another way to get fast updates is to delay updates and then do many updates in a row later. Doing many updates in a row is much quicker than doing one at a time if you lock the table.
Note that for a MyISAM
table that uses dynamic record format,
updating a record to a longer total length may split the record. If you do
this often, it is very important to use OPTIMIZE TABLE
occasionally.
See section 13.5.2.5 OPTIMIZE TABLE
Syntax.
Speed of DELETE Statements
The time to delete individual records is exactly proportional to the number of indexes. To delete records more quickly, you can increase the size of the key cache. See section 7.5.2 Tuning Server Parameters.
StatementsThe time to delete individual records is exactly proportional to the number of indexes. To delete records more quickly, you can increase the size of the key cache. See section 7.5.2 Tuning Server Parameters.
If you want to delete all rows in the table, use TRUNCATE TABLE
tbl_name
rather than DELETE FROM tbl_name
.
See section 13.1.9 TRUNCATE
Syntax.
This section lists a number of miscellaneous tips for improving query processing speed:
- Use persistent connections to the database to avoid connection overhead.
If you can't use persistent connections, you may want to change the value
of the thread_cache_size variable. See section 7.5.2 Tuning Server Parameters.
- Always check whether all your queries really use the indexes you have
created in the tables. In MySQL, you can do this with the EXPLAIN
statement. See section 7.2.1 EXPLAIN Syntax (Get Information About a SELECT).
- Try to avoid complex SELECT queries on MyISAM tables that are
updated frequently, to avoid problems with table locking that occur
due to contention between readers and writers.
- With MyISAM tables that have no deleted rows, you can insert rows at
the end at the same time that another query is reading from the table. If this
is important for you, you should consider using the table in ways that avoid
deleting rows. Another possibility is to run OPTIMIZE TABLE after you
have deleted a lot of rows.
- Use ALTER TABLE ... ORDER BY expr1, expr2, ... if you mostly
retrieve rows in expr1, expr2, ... order. By using this option after
extensive changes to the table, you may be able to get higher performance.
- In some cases, it may make sense to introduce a column that is ``hashed''
based on information from other columns. If this column is short and
reasonably unique, it may be much faster than a big index on many columns:
SELECT * FROM tbl_name WHERE hash_col=MD5(CONCAT(col1,col2)) AND col1='constant' AND col2='constant';
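A minimal sketch of how such a hash column might be set up and populated
(the table and column definitions here are illustrative assumptions):
CREATE TABLE tbl_name (
    col1 CHAR(30) NOT NULL,
    col2 CHAR(30) NOT NULL,
    hash_col CHAR(32) NOT NULL,
    INDEX (hash_col)
);
INSERT INTO tbl_name VALUES ('a','b',MD5(CONCAT('a','b')));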
- For MyISAM tables that change a lot, you should try to avoid all
variable-length columns (VARCHAR, BLOB, and TEXT). The
table will use dynamic record format if it includes even a single
variable-length column.
See section 14 MySQL Storage Engines and Table Types.
- It is normally not useful to split a table into different tables just
because the rows become large. Splitting makes sense only if it is a
MyISAM table with dynamic record format (see above) that you can
change to a fixed record size,
or if you very often need to scan the table but do not need
most of the columns. See section 14 MySQL Storage Engines and Table Types.
- To get extra speed for frequently updated counters, update them in a
single statement:
UPDATE tbl_name SET count_col=count_col+1 WHERE key_col=constant;
This is really important when you use MySQL storage engines such as
MyISAM and ISAM that have only table-level locking (multiple
readers / single writers). This will also give better performance with most
databases, because the row locking manager in this case will have less to do.
- In some cases, it is convenient to pack and store data into a BLOB
column. In this case, you must add some extra code in your application to
pack and unpack information in the BLOB values, but this may save a
lot of accesses at some stage. This is practical when you have data that
doesn't conform to a rows-and-columns table structure.
- Use INSERT DELAYED when you do not need to know when your
data is written. This speeds things up because many records can be written
with a single disk write.
- Use INSERT LOW_PRIORITY when you want to give SELECT
statements higher priority than your inserts.
- Use SELECT HIGH_PRIORITY to get retrievals that jump the
queue. That is, the SELECT is done even if there is another client
waiting to do a write.
- Use multiple-row INSERT statements to store many rows with one
SQL statement (many SQL servers support this).
- Use LOAD DATA INFILE to load large amounts of data. This is
faster than using INSERT statements.
- Use AUTO_INCREMENT columns to generate unique values.
- Use OPTIMIZE TABLE once in a while to avoid fragmentation
with MyISAM tables when using a dynamic table format.
See section 14.1.3 MyISAM Table Storage Formats.
- Use HEAP tables when possible to get more speed.
See section 14 MySQL Storage Engines and Table Types.
- Name columns for what they contain rather than for the table. For example,
in a table named customer,
use a column name of name instead of customer_name. To make
your names portable to other SQL servers, you should keep them shorter than
18 characters.
- If you need really high speed, you should take a look at the low-level
interfaces for data storage that different SQL servers support. For
example, by accessing the MySQL MyISAM storage engine directly, you could
get a speed increase of two to five times compared to using the SQL interface.
To be able to do this, the data must be on the same server as
the application, and usually it should only be accessed by one process
(because external file locking is really slow). One could eliminate these
problems by introducing low-level MyISAM commands in the
MySQL server (this could be one easy way to get more
performance if needed). By carefully designing the database interface,
it should be quite easy to support this type of optimization.
- Creating a MyISAM table with the DELAY_KEY_WRITE=1 table
option makes index updates faster because they are not flushed to disk
until the table is closed. The downside is that if something kills the
server while such a table is open, you should ensure that the table is okay by
running the server with the --myisam-recover option, or by
running myisamchk before restarting the server. (However, even in
this case, you should not lose anything by using DELAY_KEY_WRITE,
because the key information can always be generated from the data rows.)
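A minimal sketch of creating such a table (the table definition itself is a
hypothetical example):
CREATE TABLE log_tbl (
    id INT NOT NULL,
    msg CHAR(80) NOT NULL,
    INDEX (id)
) ENGINE=MyISAM DELAY_KEY_WRITE=1;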
Currently, MySQL supports table-level locking for ISAM
,
MyISAM
, and MEMORY
(HEAP
) tables, page-level locking
for BDB
tables, and row-level locking for InnoDB
tables.
In many cases, you can make an educated guess about which locking type is best for an application, but generally it's very hard to say that a given lock type is better than another. Everything depends on the application and different parts of an application may require different lock types.
To decide whether you want to use a storage engine with row-level locking,
you will want to look at what your application does and what mix of select
and update statements it uses. For example, most Web applications do lots
of selects, very few deletes, updates based mainly on key values, and
inserts into some specific tables. The base MySQL MyISAM
setup is
very well tuned for this.
Table locking in MySQL is deadlock-free for storage engines that use table-level locking. Deadlock avoidance is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.
The table-locking method MySQL uses for WRITE locks works as follows:
- If there are no locks on the table, put a write lock on it.
- Otherwise, put the lock request in the write lock queue.
The table-locking method MySQL uses for READ locks works as follows:
- If there are no write locks on the table, put a read lock on it.
- Otherwise, put the lock request in the read lock queue.
When a lock is released, the lock is made available to the threads in the write lock queue, then to the threads in the read lock queue.
This means that if you have many updates for a table, SELECT
statements will wait until there are no more updates.
Starting in MySQL 3.23.33, you can analyze the table lock contention
on your system by checking the Table_locks_waited
and
Table_locks_immediate
status variables:
mysql> SHOW STATUS LIKE 'Table%';
+-----------------------+---------+
| Variable_name         | Value   |
+-----------------------+---------+
| Table_locks_immediate | 1151552 |
| Table_locks_waited    | 15324   |
+-----------------------+---------+
As of MySQL 3.23.7 (3.23.25 for Windows), you can freely mix concurrent
INSERT
and SELECT
statements for a MyISAM
table without
locks if the INSERT
statements are non-conflicting. That is, you can
insert rows into a MyISAM
table at the same time other clients are
reading from it. No conflict occurs if the data file contains no free
blocks in the middle, because in that case, records always are inserted at
the end of the data file. (Holes can result from rows having been deleted
from or updated in the middle of the table.) If there are holes, concurrent
inserts are re-enabled automatically when all holes have been filled with
new data.
If you want to do many INSERT
and SELECT
operations on a table
when concurrent inserts are not possible, you can insert rows in a temporary
table and update the real table with the records from the temporary table
once in a while. This can be done with the following code:
mysql> LOCK TABLES real_table WRITE, insert_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM insert_table;
mysql> TRUNCATE TABLE insert_table;
mysql> UNLOCK TABLES;
InnoDB
uses row locks and BDB
uses page locks. For the
InnoDB
and BDB
storage engines, deadlock is possible. This is
because InnoDB
automatically acquires row locks and BDB
acquires page locks during the processing of SQL statements, not at the
start of the transaction.
Advantages of row-level locking:
- Fewer lock conflicts when accessing different rows in many threads.
- Fewer changes for rollbacks.
- Makes it possible to lock a single row for a long time.
Disadvantages of row-level locking:
- Takes more memory than page-level or table-level locks.
- Is slower than page-level or table-level locks when used on a large part
of the table, because you must acquire many more locks.
- Is definitely much worse than other locks if you often do GROUP
BY operations on a large part of the data or if you often must scan the
entire table.
Table locks are superior to page-level or row-level locks in the following cases:
- Most statements for the table are reads.
- Statements for the table are a mix of reads and writes, where writes are
updates or deletes for a single row that can be fetched with one key read:
UPDATE tbl_name SET column=value WHERE unique_key_col=key_value;
DELETE FROM tbl_name WHERE unique_key_col=key_value;
- SELECT combined with concurrent INSERT statements, and very
few UPDATE and DELETE statements.
- Many scans or GROUP BY operations on the entire table without any writers.
Options other than row-level or page-level locking:
Versioning (such as we use in MySQL for concurrent inserts) where you can have one writer at the same time as many readers. This means that the database/table supports different views for the data depending on when you started to access it. Other names for this are time travel, copy on write, or copy on demand.
Copy on demand is in many cases much better than page-level or row-level locking. However, the worst case does use much more memory than when using normal locks.
Instead of using row-level locks, you can use application-level locks,
such as GET_LOCK()
and RELEASE_LOCK()
in MySQL. These are
advisory locks, so they work only in well-behaved applications.
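For example, a minimal sketch of cooperative application-level locking
(the lock name and the 10-second timeout are arbitrary assumptions); the
application does its protected work between the two calls:
mysql> SELECT GET_LOCK('app_critical_section', 10);
mysql> SELECT RELEASE_LOCK('app_critical_section');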
To achieve a very high lock speed, MySQL uses table locking (instead of
page, row, or column locking) for all storage engines except InnoDB
and BDB
.
For InnoDB
and BDB
tables, MySQL only uses table locking if
you explicitly lock the table with LOCK TABLES
. For these table
types, we recommend you to not use LOCK TABLES
at all, because
InnoDB
uses automatic row-level locking and BDB
uses
page-level locking to ensure transaction isolation.
For large tables, table locking is much better than row locking for most applications, but there are some pitfalls.
Table locking enables many threads to read from a table at the same time, but if a thread wants to write to a table, it must first get exclusive access. During the update, all other threads that want to access this particular table must wait until the update is done.
Table updates normally are considered to be more important than table
retrievals, so they are given higher priority. This should ensure that
updates to a table are not ``starved'' even if there is heavy SELECT
activity for the table.
Table locking causes problems in cases such as when a thread is waiting because the disk is full and free space needs to become available before the thread can proceed. In this case, all threads that want to access the problem table will also be put in a waiting state until more disk space is made available.
Table locking is also disadvantageous under the following scenario:
- A client issues a SELECT that takes a long time to run.
- Another client then issues an UPDATE on the same table. This client
will wait until the SELECT is finished.
- Another client issues another SELECT statement on the same table.
Because UPDATE has higher priority than SELECT, this SELECT
will wait for the UPDATE to finish. It will also wait for the first
SELECT to finish!
The following list describes some ways to avoid or reduce contention caused by table locking:
- Try to get the SELECT statements to run faster. You might have to
create some summary tables to do this.
- Start mysqld with --low-priority-updates. This gives
all statements that update (modify) a table lower priority than SELECT
statements. In this case, the second SELECT statement in the preceding
scenario would execute before the UPDATE statement, and would not need
to wait for the first SELECT to finish.
- You can specify that all updates issued in a specific connection should be
done with low priority by using the SET LOW_PRIORITY_UPDATES=1 statement.
See section 13.5.3 SET Syntax.
- You can give a specific INSERT, UPDATE, or DELETE
statement lower priority with the LOW_PRIORITY attribute.
- You can give a specific SELECT statement higher priority with the
HIGH_PRIORITY attribute. See section 13.1.7 SELECT Syntax.
(A short sketch of both attributes follows this list.)
- You can start mysqld with a low value for
the max_write_lock_count system variable to force MySQL to
temporarily elevate the priority of all SELECT statements that are
waiting for a table after a specific number of inserts to the table occur.
This allows READ locks after a certain number of WRITE locks.
- If you have problems with INSERT combined with SELECT, switch
to using MyISAM tables, which support concurrent SELECT and
INSERT statements.
- If you mix inserts and deletes on the same table, INSERT DELAYED
may be of great help.
See section 13.1.4.2 INSERT DELAYED Syntax.
- If you have problems with SELECT and DELETE statements,
the LIMIT option to DELETE may help.
See section 13.1.1 DELETE Syntax.
- Using SQL_BUFFER_RESULT with SELECT statements can help to
make the duration of table locks shorter.
See section 13.1.7 SELECT Syntax.
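As referenced above, a minimal sketch of the two priority attributes in use
(the table and column names are hypothetical):
INSERT LOW_PRIORITY INTO tbl_name VALUES (1);
SELECT HIGH_PRIORITY * FROM tbl_name WHERE key_col=1;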
Here are some tips about table locking in MySQL:
- You can use LOCK TABLES to speed up things (many updates within
a single lock is much faster than updates without locks). Splitting
table contents into separate tables may also help.
- If you encounter speed problems with table locks in MySQL, you may be able
to improve performance by converting some of your tables to InnoDB or
BDB tables.
See section 15 The InnoDB Storage Engine.
See section 14.4 The BDB (BerkeleyDB) Storage Engine.
MySQL keeps row data and index data in separate files. Many (almost all) other databases mix row and index data in the same file. We believe that the MySQL choice is better for a very wide range of modern systems.
Another way to store the row data is to keep the information for each column in a separate area (examples are SDBM and Focus). This will cause a performance hit for every query that accesses more than one column. Because this degenerates so quickly when more than one column is accessed, we believe that this model is not good for general-purpose databases.
The more common case is that the index and data are stored together (as in Oracle/Sybase, et al). In this case, you will find the row information at the leaf page of the index. The good thing with this layout is that it, in many cases, depending on how well the index is cached, saves a disk read. The bad things with this layout are:
- Table scanning is much slower because you have to read through the indexes to get at the data.
- You cannot use only the index table to retrieve data for a query.
- You lose a lot of space, because you must duplicate indexes from the nodes (you cannot store the row in the nodes).
- Deletes will degenerate the table over time (because indexes in nodes are usually not updated on delete).
- It is harder to cache only the index data.
One of the most basic optimizations is to design your tables to take as little space on the disk as possible. This can give huge improvements because disk reads are faster, and smaller tables normally require less main memory while their contents are being actively processed during query execution. Indexing also is a lesser resource burden if done on smaller columns.
MySQL supports a lot of different table types and row formats. For each table, you can decide which storage/index method to use. Choosing the right table format for your application may give you a big performance gain. See section 14 MySQL Storage Engines and Table Types.
You can get better performance on a table and minimize storage space using the techniques listed here:
- Use the most efficient (smallest) types possible. MySQL has many
specialized types that save disk space and memory. For example, use the
smaller integer types if possible; MEDIUMINT is often better than INT.
- Declare columns to be NOT NULL if possible. It makes everything
faster and you save one bit per column. If you really need
NULL in your application, you should definitely use it. Just avoid
having it on all columns by default.
- For MyISAM tables, if you don't have any variable-length columns
(VARCHAR, TEXT, or BLOB columns), a fixed-size record
format is used. This is faster but unfortunately may waste some space.
See section 14.1.3 MyISAM Table Storage Formats.
You can hint that you want to have fixed-length rows even if you have
VARCHAR columns with the CREATE TABLE option
ROW_FORMAT=FIXED.
- As of MySQL/InnoDB 5.0.3, InnoDB tables use a more
compact storage format. In earlier versions of MySQL, InnoDB records
contain some redundant information, such as the number of columns and
the lengths of each column, even for fixed-size columns. By default,
tables will be created in the compact format
(ROW_FORMAT=COMPACT). If you wish to downgrade to older
versions of MySQL/InnoDB, you can request the old format with
ROW_FORMAT=REDUNDANT.
All MySQL column types can be indexed. Use of indexes on the
relevant columns is the best way to improve the performance of SELECT
operations.
The maximum number of indexes per table and the maximum index length is defined per storage engine. See section 14 MySQL Storage Engines and Table Types. All storage engines support at least 16 indexes per table and a total index length of at least 256 bytes. Most storage engines have higher limits.
With col_name(length)
syntax in an index specification,
you can create an index that uses only the first length characters of
a CHAR
or VARCHAR
column. Indexing only a prefix of column
values like this can make the index file much smaller.
See section 7.4.3 Column Indexes.
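For example, a sketch of a prefix index using a hypothetical customer table
with a name column:
mysql> CREATE INDEX part_of_name ON customer (name(10));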
The MyISAM
and (as of MySQL 4.0.14) InnoDB
storage engines also
support indexing on BLOB
and TEXT
columns. When indexing
a BLOB
or TEXT
column, you must specify a prefix
length for the index. For example:
CREATE TABLE test (blob_col BLOB, INDEX(blob_col(10)));
Prefixes can be up to 255 bytes long (or 1000 bytes for MyISAM
and InnoDB
tables as of MySQL 4.1.2). Note that prefix limits
are measured in bytes, whereas the prefix length in CREATE TABLE
statements is interpreted as number of characters. Take this into account
when specifying a prefix length for a column that uses a multi-byte
character set.
As of MySQL 3.23.23, you can also create FULLTEXT
indexes.
They are used for full-text searches. Only the MyISAM
table type
supports FULLTEXT
indexes and only for CHAR
, VARCHAR
,
and TEXT
columns. Indexing always happens over the entire column and
partial (prefix) indexing is not supported. See section 12.6 Full-Text Search Functions for
details.
As of MySQL 4.1.0, you can create indexes on spatial column types.
Currently, spatial types are supported only by the MyISAM
storage
engine. Spatial indexes use R-trees.
The MEMORY
(HEAP
) storage engine uses hash indexes by default.
It also supports B-tree indexes as of MySQL 4.1.0.
MySQL can create indexes on multiple columns. An index may consist of up to 15 columns. For certain column types, you can index a prefix of the column (see section 7.4.3 Column Indexes).
A multiple-column index can be considered a sorted array containing values that are created by concatenating the values of the indexed columns.
MySQL uses multiple-column indexes in such a way that queries are
fast when you specify a known quantity for the first column of the index in a
WHERE
clause, even if you don't specify values for the other columns.
Suppose that a table has the following specification:
CREATE TABLE test (
    id INT NOT NULL,
    last_name CHAR(30) NOT NULL,
    first_name CHAR(30) NOT NULL,
    PRIMARY KEY (id),
    INDEX name (last_name,first_name)
);
The name
index is an index over last_name
and
first_name
. The index can be used for queries that specify
values in a known range for last_name
, or for both last_name
and first_name
.
Therefore, the name
index will be used in the following queries:
SELECT * FROM test WHERE last_name='Widenius';
SELECT * FROM test WHERE last_name='Widenius' AND first_name='Michael';
SELECT * FROM test WHERE last_name='Widenius' AND (first_name='Michael' OR first_name='Monty');
SELECT * FROM test WHERE last_name='Widenius' AND first_name >='M' AND first_name < 'N';
However, the name
index will not be used in the following
queries:
SELECT * FROM test WHERE first_name='Michael';
SELECT * FROM test WHERE last_name='Widenius' OR first_name='Michael';
The manner in which MySQL uses indexes to improve query performance is discussed further in the next section.
Indexes are used to find rows with specific column values fast. Without an index, MySQL has to start with the first record and then read through the whole table to find the relevant rows. The larger the table, the more this costs. If the table has an index for the columns in question, MySQL can quickly determine the position to seek to in the middle of the data file without having to look at all the data. If a table has 1,000 rows, this is at least 100 times faster than reading sequentially. Note that if you need to access almost all 1,000 rows, it is faster to read sequentially, because that minimizes disk seeks.
Most MySQL indexes (PRIMARY KEY
, UNIQUE
, INDEX
, and
FULLTEXT
) are stored in B-trees. Exceptions are that indexes on
spatial column types use R-trees, and MEMORY
(HEAP
) tables
support hash indexes.
Strings are automatically prefix- and end-space compressed.
See section 13.2.5 CREATE INDEX
Syntax.
In general, indexes are used as described in the following discussion.
Characteristics specific to hash indexes (as used in MEMORY
tables)
are described at the end of this section.
Indexes are used for these operations:
- To quickly find the rows that match a WHERE clause.
- To find the MIN() or MAX() value for a specific indexed column
key_col. This is optimized by a preprocessor that checks whether you are
using WHERE key_part_# = constant on all key parts that occur before
key_col in the index. In this case, MySQL will do a single key
lookup for each MIN() or MAX() expression and replace it
with a constant. If all expressions are replaced with constants, the
query will return at once. For example:
SELECT MIN(key_part2),MAX(key_part2) FROM tbl_name WHERE key_part1=10;
- To sort or group a table if the sorting or grouping is done on a leftmost
prefix of a usable key (for example, ORDER BY key_part1,
key_part2). If all key parts are followed by DESC, the key
is read in reverse order.
See section 7.2.10 How MySQL Optimizes ORDER BY.
- In some cases, a query can be optimized to retrieve values without
consulting the data rows. If a query uses only columns from a table that
are numeric and that form a leftmost prefix for some key, the selected
values may be retrieved from the index tree for greater speed:
SELECT key_part3 FROM tbl_name WHERE key_part1=1
Suppose that you issue the following SELECT
statement:
mysql> SELECT * FROM tbl_name WHERE col1=val1 AND col2=val2;
If a multiple-column index exists on col1
and col2
, the
appropriate rows can be fetched directly. If separate single-column
indexes exist on col1
and col2
, the optimizer tries to
find the most restrictive index by deciding which index will find fewer
rows and using that index to fetch the rows.
If the table has a multiple-column index, any leftmost prefix of the
index can be used by the optimizer to find rows. For example, if you
have a three-column index on (col1, col2, col3)
, you have indexed
search capabilities on (col1)
, (col1, col2)
, and
(col1, col2, col3)
.
MySQL can't use a partial index if the columns don't form a
leftmost prefix of the index. Suppose that you have the SELECT
statements shown here:
SELECT * FROM tbl_name WHERE col1=val1;
SELECT * FROM tbl_name WHERE col2=val2;
SELECT * FROM tbl_name WHERE col2=val2 AND col3=val3;
If an index exists on (col1, col2, col3)
, only the first of the preceding
queries uses the index. The second and third queries do involve
indexed columns, but (col2)
and (col2, col3)
are not
leftmost prefixes of (col1, col2, col3)
.
A B-tree index can be used for column comparisons in expressions that use
the =
, >
, >=
, <
, <=
, or BETWEEN
operators. The index also can be used for LIKE
comparisons if the
argument to LIKE
is a constant string that doesn't start with a
wildcard character. For example, the following SELECT
statements
use indexes:
SELECT * FROM tbl_name WHERE key_col LIKE 'Patrick%';
SELECT * FROM tbl_name WHERE key_col LIKE 'Pat%_ck%';
In the first statement, only rows with 'Patrick' <= key_col <
'Patricl'
are considered. In the second statement, only rows with
'Pat' <= key_col < 'Pau'
are considered.
The following SELECT
statements will not use indexes:
SELECT * FROM tbl_name WHERE key_col LIKE '%Patrick%';
SELECT * FROM tbl_name WHERE key_col LIKE other_col;
In the first statement, the LIKE
value begins with a wildcard
character. In the second statement, the LIKE
value is not a
constant.
MySQL 4.0 and up performs an additional LIKE
optimization. If you use
... LIKE '%string%'
and string is longer than three characters,
MySQL will use the Turbo Boyer-Moore
algorithm to initialize the
pattern for the string and then use this pattern to perform the search
quicker.
Searching using col_name IS NULL
will use indexes if col_name
is indexed.
Any index that doesn't span all AND
levels in the WHERE
clause
is not used to optimize the query. In other words, to be able to use an
index, a prefix of the index must be used in every AND
group.
The following WHERE
clauses use indexes:
... WHERE index_part1=1 AND index_part2=2 AND other_column=3
/* index = 1 OR index = 2 */
... WHERE index=1 OR A=10 AND index=2
/* optimized like "index_part1='hello'" */
... WHERE index_part1='hello' AND index_part3=5
/* Can use index on index1 but not on index2 or index3 */
... WHERE index1=1 AND index2=2 OR index1=3 AND index3=3;
These WHERE
clauses do not use indexes:
/* index_part1 is not used */
... WHERE index_part2=1 AND index_part3=2
/* Index is not used in both AND parts */
... WHERE index=1 OR A=10
/* No index spans all rows */
... WHERE index_part1=1 OR index_part2=10
Sometimes MySQL will not use an index, even if one is available. One way
this occurs is when the optimizer estimates that using the index would
require MySQL to access a large percentage of the rows in the table.
(In this case, a table scan is probably much faster, because it will
require many fewer seeks.) However, if such a query uses LIMIT
to
only retrieve part of the rows, MySQL will use an index anyway, because
it can much more quickly find the few rows to return in the result.
Hash indexes have somewhat different characteristics than those just discussed:
- They are used only for equality comparisons that use the = or
<=> operators (but are very fast). They are not used for
comparison operators such as < that find a range of values.
- The optimizer cannot use a hash index to speed up ORDER BY
operations. (This type of index cannot be used to search for the next entry
in order.)
- MySQL cannot determine approximately how many rows there are between two
values (this is used by the range optimizer to decide which index to use).
This may affect some queries if you change a
MyISAM table to a hash-indexed MEMORY table.
- Only whole keys can be used to search for a row. (With a B-tree index,
any leftmost prefix of the key can be used to find rows.)
The MyISAM Key Cache
To minimize disk I/O, the MyISAM storage engine employs a strategy that
is used by many database management systems. It exploits a cache
mechanism to keep the most frequently accessed table blocks in memory:
- For index blocks, a special structure called the key cache (or key buffer)
is maintained. The structure contains a number of block buffers where the
most-used index blocks are placed.
- For data blocks, MySQL uses no special cache. Instead it relies on the
native operating system filesystem cache.
This section first describes the basic operation of the MyISAM key
cache. Then it discusses changes made in MySQL 4.1 that improve key cache
performance and that enable you to better control cache operation.
The key cache mechanism also is used for ISAM
tables. However, the
significance of this fact is on the wane. ISAM
table use has been
decreasing since MySQL 3.23 when MyISAM
was introduced. MySQL 4.1
carries this trend further; the ISAM
storage engine is disabled by
default.
You can control the size of the key cache by means of the
key_buffer_size
system variable. If this variable is set equal
to zero, no key cache is used. The key cache also is not used if the
key_buffer_size
value is too small to allocate the minimal number
of block buffers (8).
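For example, you might check and size the default key cache like this
(the 16MB figure is only an illustrative assumption):
mysql> SHOW VARIABLES LIKE 'key_buffer_size';
mysql> SET GLOBAL key_buffer_size=16*1024*1024;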
When the key cache is not operational, index files are accessed using only the native filesystem buffering provided by the operating system. (In other words, table index blocks are accessed using the same strategy as that employed for table data blocks.)
An index block is a contiguous unit of access to the MyISAM
index files.
Usually the size of an index block is equal to the size of nodes of the
index B-tree. (Indexes are represented on disk using a B-tree data structure.
Nodes at the bottom of the tree are leaf nodes. Nodes above the leaf nodes
are non-leaf nodes.)
All block buffers in a key cache structure are the same size. This size can be equal to, greater than, or less than the size of a table index block. Usually one of these two values is a multiple of the other.
When data from any table index block must be accessed, the server first checks whether it is available in some block buffer of the key cache. If it is, the server accesses data in the key cache rather than on disk. That is, it reads from the cache or writes into it rather than reading from or writing to disk. Otherwise, the server chooses a cache block buffer containing a different table index block (or blocks) and replaces the data there by a copy of required table index block. As soon as the new index block is in the cache, the index data can be accessed.
If it happens that a block selected for replacement has been modified, the block is considered ``dirty.'' In this case, before being replaced, its contents are flushed to the table index from which it came.
Usually the server follows an LRU (Least Recently Used) strategy: When choosing a block for replacement, it selects the least recently used index block. To be able to make such a choice easy, the key cache module maintains a special queue (LRU chain) of all used blocks. When a block is accessed, it is placed at the end of the queue. When blocks need to be replaced, blocks at the beginning of the queue are the least recently used and become the first candidates for eviction.
Prior to MySQL 4.1, access to the key cache was serialized: No two threads could access key cache buffers simultaneously. The server processed a request for an index block only after it had finished processing the previous request. As a result, a request for an index block not present in any key cache buffer blocked access by other threads while a buffer was being updated to contain the requested index block.
Starting from version 4.1.0, the server supports shared access to the key cache:
- A buffer that is not being updated can be accessed by multiple threads.
- A buffer that is being updated causes threads that need to use it to wait until the update is complete.
- Multiple threads can initiate requests that result in cache block replacements, as long as they do not interfere with each other (that is, as long as they need different index blocks and thus cause different cache blocks to be replaced).
Shared access to the key cache allows the server to improve throughput significantly.
Shared access to the key cache improves performance but does not eliminate contention among threads entirely. They still compete for control structures that manage access to the key cache buffers. To reduce key cache access contention further, MySQL 4.1.1 offers the feature of multiple key caches. This allows you to assign different table indexes to different key caches.
When there can be multiple key caches, the server must know which cache to
use when processing queries for a given MyISAM
table. By default, all
MyISAM
table indexes are cached in the default key cache. To assign
table indexes to a specific key cache, use the CACHE INDEX
statement.
For example, the following statement assigns indexes from the tables
t1
, t2
, and t3
to the key cache named hot_cache
:
mysql> CACHE INDEX t1, t2, t3 IN hot_cache;
+---------+--------------------+----------+----------+
| Table   | Op                 | Msg_type | Msg_text |
+---------+--------------------+----------+----------+
| test.t1 | assign_to_keycache | status   | OK       |
| test.t2 | assign_to_keycache | status   | OK       |
| test.t3 | assign_to_keycache | status   | OK       |
+---------+--------------------+----------+----------+
Note: If the server has been built with the ISAM
storage
engine enabled, ISAM
tables use the key cache mechanism. However,
ISAM
indexes use only the default key cache and cannot be reassigned to
a different cache.
The key cache referred to in a CACHE INDEX
statement can be created
by setting its size with a SET GLOBAL
parameter setting statement or by
using server startup options. For example:
mysql> SET GLOBAL keycache1.key_buffer_size=128*1024;
To destroy a key cache, set its size to zero:
mysql> SET GLOBAL keycache1.key_buffer_size=0;
Key cache variables are structured system variables that have a name and
components. For keycache1.key_buffer_size
, keycache1
is the
cache variable name and key_buffer_size
is the cache component.
See section 9.4.1 Structured System Variables for a description of the syntax used
for referring to structured key cache system variables.
By default, table indexes are assigned to the main (default) key cache created at the server startup. When a key cache is destroyed, all indexes assigned to it are reassigned to the default key cache.
For a busy server, we recommend a strategy that uses three key caches:
- A ``hot'' key cache taking up 20% of the space allocated for all key caches, used for tables that are heavily used for searches but that are not updated.
- A ``cold'' key cache taking up 20% of the space, used for medium-sized, intensively modified tables, such as temporary tables.
- A ``warm'' key cache taking up 60% of the space; this is the default key cache, used by default for all other tables.
One reason the use of three key caches is beneficial is that access to one key cache structure does not block access to the others. Queries that access tables assigned to one cache do not compete with queries that access tables assigned to another cache. Performance gains occur for other reasons as well.
CACHE INDEX
sets up an association between a table and a key cache,
but the association is lost each time the server restarts. If you want the
association to take effect each time the server starts, one way to accomplish
this is to use an option file: Include variable settings that configure
your key caches, and an init-file
option that names a file containing
CACHE INDEX
statements to be executed. For example:
key_buffer_size = 4G
hot_cache.key_buffer_size = 2G
cold_cache.key_buffer_size = 2G
init_file=/path/to/data-directory/mysqld_init.sql
The statements in `mysqld_init.sql' will be executed each time the
server starts. It should contain one SQL statement per line. The following
example assigns several tables each to hot_cache
and cold_cache
:
CACHE INDEX a.t1, a.t2, b.t3 IN hot_cache
CACHE INDEX a.t4, b.t5, b.t6 IN cold_cache
By default, the key cache management system of MySQL 4.1 uses the LRU strategy for choosing key cache blocks to be evicted, but it also supports a more sophisticated method called the "midpoint insertion strategy."
When using the midpoint insertion strategy, the LRU chain is divided into
two parts: a hot sub-chain and a warm sub-chain. The division point between
two parts is not fixed, but the key cache management system takes care that
the warm part is not ``too short,'' always containing at least
key_cache_division_limit
percent of the key cache blocks.
key_cache_division_limit
is a component of structured key cache
variables, so its value is a parameter that can be set per cache.
When an index block is read from a table into the key cache, it is placed at the end of the warm sub-chain. After a certain number of hits (accesses of the block), it is promoted to the hot sub-chain. At present, the number of hits required to promote a block (3) is the same for all index blocks. In the future, we will allow the hit count to depend on the B-tree level of the node corresponding to an index block: Fewer hits will be required for promotion of an index block if it contains a non-leaf node from the upper levels of the index B-tree than if it contains a leaf node.
A block promoted into the hot sub-chain is placed at the end of the chain.
The block then circulates within this sub-chain. If the block stays at the
beginning of the sub-chain for a long enough time, it is demoted to the warm
chain. This time is determined by the value of the
key_cache_age_threshold
component of the key cache.
The threshold value prescribes that, for a key cache containing N
blocks, the block at the beginning of the hot sub-chain not accessed within
the last N*key_cache_age_threshold/100
hits is to be moved to the
beginning of the warm sub-chain. It then becomes the first candidate for
eviction, because blocks for replacement always are taken from the beginning
of the warm sub-chain.
The midpoint insertion strategy allows you to keep more valuable blocks
always in the cache. If you prefer to use the plain LRU strategy, leave the
key_cache_division_limit value set to its default of 100.
The midpoint insertion strategy helps to improve performance when execution
of a query that requires an index scan effectively pushes out of the cache
all the index blocks corresponding to valuable high-level B-tree nodes. To
avoid this, you must use a midpoint insertion strategy with the
key_cache_division_limit
set to much less than 100. Then valuable
frequently hit nodes will be preserved in the hot sub-chain during an index
scan operation as well.
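Because key_cache_division_limit is a per-cache component, it can be set
for a named cache; a minimal sketch (the cache name and the 70% figure are
illustrative assumptions):
mysql> SET GLOBAL hot_cache.key_cache_division_limit=70;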
If there are enough blocks in a key cache to hold blocks of an entire index, or at least the blocks corresponding to its non-leaf nodes, then it makes sense to preload the key cache with index blocks before starting to use it. Preloading allows you to put the table index blocks into a key cache buffer in the most efficient way: by reading the index blocks from disk sequentially.
Without preloading, the blocks still will be placed into the key cache as needed by queries. Although the blocks will stay in the cache, because there are enough buffers for all of them, they will be fetched from disk in a random order, not sequentially.
To preload an index into a cache, use the LOAD INDEX INTO CACHE
statement. For example,
the following statement preloads nodes (index blocks) of indexes of the
tables t1
and t2
:
mysql> LOAD INDEX INTO CACHE t1, t2 IGNORE LEAVES;
+---------+--------------+----------+----------+
| Table   | Op           | Msg_type | Msg_text |
+---------+--------------+----------+----------+
| test.t1 | preload_keys | status   | OK       |
| test.t2 | preload_keys | status   | OK       |
+---------+--------------+----------+----------+
The IGNORE LEAVES
modifier causes only blocks for the non-leaf
nodes of the index to be preloaded. Thus,
the statement shown preloads all index blocks from t1
, but only
blocks for the non-leaf nodes from t2
.
If an index has been assigned to a key cache using a
CACHE INDEX
statement, preloading places index blocks into that cache.
Otherwise, the index is loaded into the default key cache.
MySQL 4.1 introduces a new key_cache_block_size
variable on
a per-key cache basis. This variable specifies the size of the block
buffers for a key cache. It is intended to allow tuning of the performance
of I/O operations for index files.
The best performance for I/O operations is achieved when the size of read buffers is equal to the size of the native operating system I/O buffers. But setting the size of key nodes equal to the size of the I/O buffer does not always ensure the best overall performance. When reading the big leaf nodes, the server pulls in a lot of unnecessary data, effectively preventing reading other leaf nodes.
Currently, you cannot control the size of the index blocks in a table. This
size is set by the server when the `.MYI' index file is created,
depending on the size of the keys in the indexes present in the table
definition. In most cases, it is set equal to the I/O buffer size. In the
future, this will be changed and then key_cache_block_size
variable
will be fully employed.
A key cache can be restructured at any time by updating its parameter values. For example:
mysql> SET GLOBAL cold_cache.key_buffer_size=4*1024*1024;
If you assign to either the key_buffer_size
or
key_cache_block_size
key cache component a value that differs from
the component's current value, the server destroys the cache's old
structure and creates a new one based on the new values. If the cache
contains any dirty blocks, the server saves them to disk before destroying
and re-creating the cache. Restructuring does not occur if you set other key
cache parameters.
When restructuring a key cache, the server first flushes the contents of any dirty buffers to disk. After that, the cache contents become unavailable. However, restructuring does not block queries that need to use indexes assigned to the cache. Instead, the server directly accesses the table indexes using native filesystem caching. Filesystem caching is not as efficient as using a key cache, so although queries will execute, a slowdown can be anticipated. Once the cache has been restructured, it becomes available again for caching indexes assigned to it, and the use of filesystem caching for the indexes ceases.
When you execute a mysqladmin status
command, you'll see something
like this:
Uptime: 426 Running threads: 1 Questions: 11082 Reloads: 1 Open tables: 12
The Open tables
value of 12 can be somewhat puzzling if you have
only six tables.
MySQL is multi-threaded, so there may be many clients issuing queries for a
given table simultaneously. To minimize the problem with multiple client threads
having different states on the same table, the table is opened independently
by each concurrent thread. This takes some memory but normally increases
performance. With MyISAM
tables, one extra file descriptor is
required for the data file for each client that has the table open. (By
contrast, the index file descriptor is shared between all threads.) The
ISAM
storage engine shares this behavior.
You can read more about this topic in the next section. See section 7.4.8 How MySQL Opens and Closes Tables.
The table_cache
, max_connections
, and max_tmp_tables
system variables affect the maximum number of files the server keeps open.
If you increase one or more of these values, you may run up against a limit
imposed by your operating system on the per-process number of open file
descriptors. Many operating systems allow you to increase the open-files
limit, although the method varies widely from system to system. Consult your
operating system documentation to determine whether it is possible to
increase the limit and how to do so.
table_cache
is related to max_connections
. For example,
for 200 concurrent running connections, you should have a table cache size of
at least 200 * N
, where N is the maximum number of tables
in a join. You also need to reserve some extra file descriptors for
temporary tables and files.
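For instance, under the rule just given, a server expected to run 200
concurrent connections with joins of up to five tables would need a table
cache of at least 200 * 5 = 1000 entries. A sketch only; the exact values
are assumptions to adapt to your workload and operating system limits:
shell> mysqld_safe --max_connections=200 --table_cache=1000 \
           --open-files-limit=2500 &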
Make sure that your operating system can handle the number of open file
descriptors implied by the table_cache
setting. If
table_cache
is set too high, MySQL may run out of file
descriptors and refuse connections, fail to perform queries, and be very
unreliable. You also have to take into account that the MyISAM
storage
engine needs two file descriptors for each unique open table. You can
increase the number of file descriptors available for MySQL with
the --open-files-limit
startup option to mysqld_safe
.
See section A.2.17 File Not Found.
The cache of open tables will be kept at a level of table_cache
entries. The default value is 64; this can be changed with the
--table_cache
option to mysqld
. Note that MySQL may
temporarily open even more tables to be able to execute queries.
An unused table is closed and removed from the table cache under the following circumstances:
- When the cache contains more than table_cache entries and
a thread is no longer using a table.
- When someone issues a FLUSH TABLES
statement or executes a mysqladmin flush-tables
or mysqladmin refresh command.
When the table cache fills up, the server uses the following procedure to locate a cache entry to use:
- Tables that are not currently in use are released, beginning with the least recently used table.
- If a new table needs to be opened, but the cache is full and no tables can be released, the cache is temporarily extended as necessary.
When the cache is in a temporarily extended state and a table goes from a used to unused state, the table is closed and released from the cache.
A table is opened for each concurrent access. This means the table needs to be opened twice if two threads access the same table or if a thread accesses the table twice in the same query (for example, by joining the table to itself). Each concurrent open requires an entry in the table cache. The first open of any table takes two file descriptors: one for the data file and one for the index file. Each additional use of the table takes only one file descriptor, for the data file. The index file descriptor is shared among all threads.
If you are opening a table with the HANDLER tbl_name OPEN
statement, a dedicated table object is allocated for the thread.
This table object is not shared by other threads and is not closed
until the thread calls HANDLER tbl_name CLOSE
or the thread
terminates. When this happens, the table is put back in the table cache
(if the cache isn't full).
See section 13.1.3 HANDLER
Syntax.
You can determine whether your table cache is too small by checking the
mysqld
status variable Opened_tables
:
mysql> SHOW STATUS LIKE 'Opened_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Opened_tables | 2741  |
+---------------+-------+
If the value is quite big, even when you haven't issued a lot of FLUSH
TABLES
statements, you should increase your table cache size.
See section 5.2.3 Server System Variables and section 5.2.4 Server Status Variables.
If you have many MyISAM
or ISAM
tables in a database
directory, open, close, and create operations will be slow. If you execute
SELECT
statements on many different tables, there will be a little
overhead when the table cache is full, because for every table that has to
be opened, another must be closed. You can reduce this overhead by making
the table cache larger.
We start with system-level factors, because some of these decisions must be made very early to achieve large performance gains. In other cases, a quick look at this section may suffice. However, it is always nice to have a sense of how much can be gained by changing things at this level.
The default operating system to use is very important! To get the best use of multiple-CPU machines, you should use Solaris (because its threads implementation works really well) or Linux (because the 2.4 kernel has really good SMP support). Note that older Linux kernels have a 2GB filesize limit by default. If you have such a kernel and a desperate need for files larger than 2GB, you should get the Large File Support (LFS) patch for the ext2 filesystem. Other filesystems such as ReiserFS and XFS do not have this 2GB limitation.
Before using MySQL in production, we advise you to test it on your intended platform.
Use the --skip-external-locking
MySQL option to avoid external
locking. This option is on by default as of MySQL 4.0.
Before that, it is on by default when compiling with
MIT-pthreads, because flock()
isn't fully supported by
MIT-pthreads on all platforms. It's also on by default for Linux
because Linux file locking is not yet safe.
Note that the --skip-external-locking
option will not affect MySQL's
functionality as long as you run only one server. Just remember to take
down the server (or lock and flush the relevant tables) before you run
myisamchk
. On some systems this option is mandatory, because the
external locking does not work in any case.
The only case when you can't use --skip-external-locking
is if you run
multiple MySQL servers (not clients) on the same data,
or if you run myisamchk
to check (not repair) a table without telling
the server to flush and lock the tables first.
You can still use LOCK TABLES
and UNLOCK TABLES
even if you
are using --skip-external-locking
.
You can determine the default buffer sizes used by the mysqld
server
with this command (prior to MySQL 4.1, omit --verbose
):
shell> mysqld --verbose --help
This command produces a list of all mysqld
options and configurable
system variables. The output includes the default variable values and
looks something like this:
back_log                  current value: 5
bdb_cache_size            current value: 1048540
binlog_cache_size         current value: 32768
connect_timeout           current value: 5
delayed_insert_limit      current value: 100
delayed_insert_timeout    current value: 300
delayed_queue_size        current value: 1000
flush_time                current value: 0
interactive_timeout       current value: 28800
join_buffer_size          current value: 131072
key_buffer_size           current value: 1048540
long_query_time           current value: 10
lower_case_table_names    current value: 0
max_allowed_packet        current value: 1048576
max_binlog_cache_size     current value: 4294967295
max_connect_errors        current value: 10
max_connections           current value: 100
max_delayed_threads       current value: 20
max_heap_table_size       current value: 16777216
max_join_size             current value: 4294967295
max_sort_length           current value: 1024
max_tmp_tables            current value: 32
max_write_lock_count      current value: 4294967295
myisam_sort_buffer_size   current value: 8388608
net_buffer_length         current value: 16384
net_read_timeout          current value: 30
net_retry_count           current value: 10
net_write_timeout         current value: 60
read_buffer_size          current value: 131072
read_rnd_buffer_size      current value: 262144
slow_launch_time          current value: 2
sort_buffer               current value: 2097116
table_cache               current value: 64
thread_concurrency        current value: 10
thread_stack              current value: 131072
tmp_table_size            current value: 1048576
wait_timeout              current value: 28800
If there is a mysqld
server currently running, you can see what
values it actually is using for the system variables by connecting to it and
issuing this statement:
mysql> SHOW VARIABLES;
You can also see some statistical and status indicators for a running server by issuing this statement:
mysql> SHOW STATUS;
System variable and status information also can be obtained using
mysqladmin
:
shell> mysqladmin variables
shell> mysqladmin extended-status
You can find a full description for all system and status variables in section 5.2.3 Server System Variables and section 5.2.4 Server Status Variables.
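If you are interested in only a few of these variables or indicators, you can add a LIKE pattern to either statement to narrow the output. The patterns here were chosen only for illustration:
mysql> SHOW VARIABLES LIKE 'key_buffer_size';
mysql> SHOW STATUS LIKE 'Threads_%';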
MySQL uses algorithms that are very scalable, so you can usually run with very little memory. However, normally you will get better performance by giving MySQL more memory.
When tuning a MySQL server, the two most important variables to configure
are key_buffer_size
and table_cache
. You should first feel
confident that you have these set appropriately before trying to change
any other variables.
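One rough way to judge whether these two variables are set appropriately is to examine a few status counters on a server that has been running under a normal workload. The counter names are real; the thresholds below are rules of thumb, not hard limits:
mysql> SHOW STATUS LIKE 'Key_read%';
mysql> SHOW STATUS LIKE 'Opened_tables';
If the ratio Key_reads/Key_read_requests is high (say, above 0.01), consider increasing key_buffer_size; if Opened_tables is large and keeps growing, consider increasing table_cache.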
The following examples indicate some typical variable values for different
runtime configurations. The examples use the mysqld_safe
script and
use --var_name=value
syntax to set the variable var_name to the value
value. This syntax is available as of MySQL 4.0. For older versions
of MySQL, take the following differences into account:
Use safe_mysqld rather than mysqld_safe.
Set variables using --set-variable=var_name=value or -O var_name=value syntax.
For variable names that end in _size, you may need to specify them without _size. For example, the old name for sort_buffer_size is sort_buffer. The old name for read_buffer_size is record_buffer. To see which variables your version of the server recognizes, use mysqld --help. (A sketch of the old syntax follows this list.)
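For example, here is a sketch of a startup command in the old style, equivalent to mysqld_safe --key_buffer_size=16M --sort_buffer_size=1M with the newer syntax:
shell> safe_mysqld --set-variable=key_buffer=16M \
           --set-variable=sort_buffer=1M &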
If you have at least 256MB of memory and many tables and want maximum performance with a moderate number of clients, you should use something like this:
shell> mysqld_safe --key_buffer_size=64M --table_cache=256 \
           --sort_buffer_size=4M --read_buffer_size=1M &
If you have only 128MB of memory and only a few tables, but you still do a lot of sorting, you can use something like this:
shell> mysqld_safe --key_buffer_size=16M --sort_buffer_size=1M
If there are very many simultaneous connections, swapping problems
may occur unless mysqld
has been configured to use very
little memory for each connection. mysqld
performs better
if you have enough memory for all connections.
With little memory and lots of connections, use something like this:
shell> mysqld_safe --key_buffer_size=512K --sort_buffer_size=100K \
           --read_buffer_size=100K &
Or even this:
shell> mysqld_safe --key_buffer_size=512K --sort_buffer_size=16K \
           --table_cache=32 --read_buffer_size=8K \
           --net_buffer_length=1K &
If you are doing GROUP BY
or ORDER BY
operations on tables
that are much larger than your available memory, you should increase the
value of read_rnd_buffer_size
to speed up the reading of rows after
sorting operations.
When you have installed MySQL, the `support-files' directory contains several sample `my.cnf' files: `my-huge.cnf', `my-large.cnf', `my-medium.cnf', and `my-small.cnf'. You can use these as a basis for optimizing your system.
Note that if you specify an option on the command line for mysqld
or
mysqld_safe
, it remains in effect only for that invocation of the server.
To use the option every time the server runs, put it in an option file.
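For example, to make the first mysqld_safe configuration shown earlier permanent, you might put something like this in an option file such as `/etc/my.cnf' (a minimal sketch; adjust the values to your own system):
[mysqld]
key_buffer_size=64M
table_cache=256
sort_buffer_size=4M
read_buffer_size=1M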
To see the effects of a parameter change, do something like this (prior to
MySQL 4.1, omit --verbose
):
shell> mysqld --key_buffer_size=32M --verbose --help
The variable values are listed near the end of the output.
Make sure that the --verbose
and --help
options are
last. Otherwise, the effect of any options listed after them on the command
line will not be reflected in the output.
For information on tuning the InnoDB
storage engine, see
section 15.12 InnoDB
Performance Tuning Tips.
The task of the query optimizer is to find an optimal plan for executing an SQL query. Because the difference in performance between ``good'' and ``bad'' plans can be orders of magnitude (that is, seconds versus hours or even days), most query optimizers, including that of MySQL, perform a more or less exhaustive search for an optimal plan among all possible query evaluation plans. For join queries, the number of possible plans investigated by the MySQL optimizer grows exponentially with the number of tables referenced in a query. For small numbers of tables (typically fewer than 7-10) this is not a problem. However, when bigger queries are submitted, the time spent in query optimization may easily become the major bottleneck in server performance.
MySQL 5.0.1 introduces a new, more flexible method for query optimization that allows the user to control how exhaustive the optimizer is in its search for an optimal query evaluation plan. The general idea is that the fewer plans the optimizer investigates, the less time it spends compiling a query. On the other hand, because the optimizer skips some plans, it may miss finding an optimal plan.
The behavior of the optimizer with respect to the number of plans it evaluates can be controlled via two system variables:
The optimizer_prune_level variable tells the optimizer to skip certain plans based on estimates of the number of rows accessed for each table. Our experience shows that this kind of ``educated guess'' rarely misses optimal plans, while it may dramatically reduce query compilation times. That is why this option is on (optimizer_prune_level=1) by default. However, if you believe that the optimizer missed a better query plan, this option can be switched off (optimizer_prune_level=0), with the risk that query compilation may take much longer. Note that even with the use of this heuristic, the optimizer still explores a roughly exponential number of plans.
The optimizer_search_depth variable tells how far into the ``future'' of each incomplete plan the optimizer should look to evaluate whether it should be expanded further. Smaller values of optimizer_search_depth may result in orders of magnitude smaller query compilation times. For example, queries with 12-13 or more tables may easily require hours or even days to compile if optimizer_search_depth is close to the number of tables in the query. At the same time, if compiled with optimizer_search_depth equal to 3 or 4, the optimizer may compile the same query in less than a minute. If you are unsure what a reasonable value for optimizer_search_depth is, you can set this variable to 0 to tell the optimizer to determine the value automatically. (A usage sketch follows this list.)
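For example, on a MySQL 5.0.1 or later server, you might switch off the pruning heuristic for the current session to test whether a better plan exists for one troublesome query, and then switch it back on. This is only a sketch; whether it helps depends entirely on your query:
mysql> SET optimizer_prune_level=0;
mysql> # run the problem join query here, then restore the default
mysql> SET optimizer_prune_level=1;
Similarly, SET optimizer_search_depth=0 asks the optimizer to choose the search depth automatically.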
Most of the following tests were performed on Linux with the MySQL benchmarks, but they should give some indication for other operating systems and workloads.
You get the fastest executables when you link with -static
.
On Linux, you will get the fastest code when compiling with pgcc
and -O3
. You need about 200MB of memory to compile `sql_yacc.cc'
with these options, because gcc/pgcc
needs a lot of memory to
make all functions inline. You should also set CXX=gcc
when
configuring MySQL to avoid inclusion of the libstdc++
library, which is not needed. Note that with some versions of pgcc
,
the resulting code will run only on true Pentium processors, even if you
use the compiler option indicating that you want the resulting code to work on
all x586-type processors (such as AMD).
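Putting these recommendations together, a configure invocation for building a statically linked server with gcc as the C++ compiler might look like the following sketch; --with-mysqld-ldflags=-all-static is the configure option for static linking, and the installation prefix is just an example:
shell> CC=gcc CXX=gcc ./configure --prefix=/usr/local/mysql \
           --with-mysqld-ldflags=-all-static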
By just using a better compiler and better compiler options, you can get a 10-30% speed increase in your application. This is particularly important if you compile the MySQL server yourself.
We have tested both the Cygnus CodeFusion and Fujitsu compilers, but when we tested them, neither was sufficiently bug-free to allow MySQL to be compiled with optimizations enabled.
The standard MySQL binary distributions are compiled with support for all
character sets. When you compile MySQL yourself, you should include support
only for the character sets that you are going to use. This is controlled by
the --with-charset
option to configure
.
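For example, here is a sketch of a configure invocation that builds in only the latin1 character set; choose whatever character set you actually need:
shell> ./configure --with-charset=latin1 --with-extra-charsets=none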
Here is a list of some measurements that we have made:
If you use pgcc and compile everything with -O6, the mysqld server is 1% faster than with gcc 2.95.2.
If you link dynamically (without -static), the result is 13% slower on Linux. Note that you can still use a dynamically linked MySQL library for your client applications. It is the server that is most critical for performance.
If you strip your mysqld binary with strip mysqld, the resulting binary can be up to 4% faster.
If you connect using TCP/IP rather than Unix socket files, the result is 7.5% slower on the same computer. (If you are connecting to localhost, MySQL uses a socket file by default.)
If you compile with --with-debug=full, most queries will be 20% slower. Some queries may take substantially longer; for example, the MySQL benchmarks ran 35% slower. If you use --with-debug (without =full), the slowdown will be only 15%. For a version of mysqld that has been compiled with --with-debug=full, you can disable memory checking at runtime by starting it with the --skip-safemalloc option. The execution speed should then be close to that obtained when configuring with --with-debug.
On a Sun UltraSPARC-IIe, Forte 5.0 is 4% faster than gcc 3.2.
Compiling with gcc 2.95.2 for UltraSPARC with the -mcpu=v8 -Wa,-xarch=v8plusa options gives 4% more performance.
On Linux-x86, compiling mysqld with gcc without frame pointers (-fomit-frame-pointer or -fomit-frame-pointer -ffixed-ebp) makes mysqld 1-4% faster.
Binary MySQL distributions for Linux that are provided by MySQL AB used
to be compiled with pgcc
. We had to go back to regular gcc
due to a bug in pgcc
that would generate code that does
not run on AMD. We will continue using gcc
until that bug is resolved.
In the meantime, if you have a non-AMD machine, you can get a faster
binary by compiling with pgcc
. The standard MySQL
Linux binary is linked statically to make it faster and more portable.
The following list indicates some of the ways that the mysqld
server
uses memory. Where applicable, the name of the system variable relevant
to the memory use is given:
The key buffer (variable key_buffer_size) is shared by all threads; other buffers used by the server are allocated as needed. See section 7.5.2 Tuning Server Parameters.
Each connection thread uses some thread-specific space: a thread stack (variable thread_stack), a connection buffer (variable net_buffer_length), and a result buffer (variable net_buffer_length). The connection buffer and result buffer are dynamically enlarged up to max_allowed_packet when needed. While a query is running, a copy of the current query string is also allocated.
Only compressed ISAM and MyISAM tables are memory mapped. This is because the 32-bit memory space of 4GB is not large enough for most big tables. When systems with a 64-bit address space become more common, we may add general support for memory mapping.
Each request that performs a sequential scan of a table allocates a read buffer (variable read_buffer_size).
When reading rows in an arbitrary sequence (for example, after a sort), a random-read buffer (variable read_rnd_buffer_size) may be allocated to avoid disk seeks.
All joins are done in one pass, and most joins can be done without even using a temporary table. Most temporary tables are memory-based (HEAP) tables. Temporary tables with a large record length (calculated as the sum of all column lengths) or that contain BLOB columns are stored on disk.
One problem before MySQL 3.23.2 is that if an internal in-memory
heap table exceeds the size of tmp_table_size
, the error The
table tbl_name is full
occurs. From 3.23.2 on, this is handled
automatically by changing the in-memory heap table to a disk-based
MyISAM
table as necessary. To work around this problem for older
servers, you can increase the temporary table size by setting the
tmp_table_size
option to mysqld
, or by setting the SQL option
SQL_BIG_TABLES
in the client program.
See section 13.5.3 SET
Syntax.
In MySQL 3.20, the maximum size of the temporary table is
record_buffer*16
; if you are using this version, you have to increase
the value of record_buffer
. You can also start mysqld
with the
--big-tables
option to always store temporary tables on disk.
However, this will affect the speed of many complicated queries. (A sketch of these workarounds follows this list.)
Almost all parsing and calculating is done in a local memory store. No memory overhead is needed for small items, so the normal slow memory allocation and freeing is avoided. Memory is allocated only for unexpectedly large strings; this is done with malloc() and free().
For each MyISAM and ISAM table that is opened, the index file is opened once and the data file is opened once for each concurrently running thread. For each concurrent thread, a table structure, column structures for each column, and a buffer of size 3 * N are allocated (where N is the maximum row length, not counting BLOB columns). A BLOB column requires five to eight bytes plus the length of the BLOB data.
The MyISAM
and ISAM
storage engines maintain one extra row
buffer for internal use.
For each table having BLOB columns, a buffer is enlarged dynamically to read in larger BLOB values. If you scan a table, a buffer as large as the largest BLOB value is allocated.
A FLUSH TABLES statement or mysqladmin flush-tables command closes all tables that are not in use at once and marks all in-use tables to be closed when the currently executing thread finishes. This effectively frees most in-use memory. FLUSH TABLES does not return until all tables have been closed.
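As a sketch of the temporary-table workarounds mentioned in the list above (for servers older than 3.23.2), you could raise the limit at startup using the old option syntax, or request disk-based temporary tables from a client session:
shell> safe_mysqld --set-variable=tmp_table_size=32M &
mysql> SET SQL_BIG_TABLES=1;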
ps
and other system status programs may report that mysqld
uses a lot of memory. This may be caused by thread stacks on different
memory addresses. For example, the Solaris version of ps
counts
the unused memory between stacks as used memory. You can verify this by
checking available swap with swap -s
. We have tested
mysqld
with several memory-leakage detectors (both commercial and open
source), so there should be no memory leaks.
When a new client connects to mysqld
, mysqld
spawns a
new thread to handle the request. This thread first checks whether the
hostname is in the hostname cache. If not, the thread attempts to resolve the
hostname:
If the operating system supports the thread-safe gethostbyaddr_r() and gethostbyname_r() calls, the thread uses them to perform hostname resolution.
If the operating system doesn't support the thread-safe calls, the thread locks a mutex and calls gethostbyaddr() and gethostbyname() instead. In this case, no other thread can resolve hostnames that are not in the hostname cache until the first thread unlocks the mutex.
You can disable DNS hostname lookups by starting mysqld
with the
--skip-name-resolve
option. However, in this case, you can use only IP
numbers in the MySQL grant tables.
If you have a very slow DNS and many hosts, you can get more performance by
either disabling DNS lookups with --skip-name-resolve
or by
increasing the HOST_CACHE_SIZE
define (default value: 128) and
recompiling mysqld
.
You can disable the hostname cache by starting the server with the
--skip-host-cache
option. To clear the hostname cache, issue a
FLUSH HOSTS
statement or execute the mysqladmin flush-hosts
command.
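For example, here is a sketch of the two approaches. The first starts a server that skips DNS entirely (remember that the grant tables must then use IP numbers); the second clears the cache on a running server after a DNS change:
shell> mysqld_safe --skip-name-resolve &
mysql> FLUSH HOSTS;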
If you want to disallow TCP/IP connections entirely, start mysqld
with
the --skip-networking
option.
Using symbolic links: For MyISAM tables, you can symlink the index file and/or data file from their usual location in the data directory to another disk (that may also be striped). This makes both the seek and read times better, assuming that the disk is not used for other purposes as well. See section 7.6.1 Using Symbolic Links.
On Linux, you can get much more performance by using hdparm to configure your disk's interface. (Up to 100% under load is not uncommon.) The following hdparm options should be quite good for MySQL, and probably for many other applications:
hdparm -m 16 -d 1
Note that performance and reliability when using this command depend on your hardware, so we strongly suggest that you test your system thoroughly after using
hdparm
. Please consult the hdparm
manual page for more information. If hdparm
is not used wisely,
filesystem corruption may result, so back up everything before
experimenting!
On Linux, you can mount your filesystems with the -o noatime option. That skips updates to the last access time in inodes on the filesystem, which avoids some disk seeks. (Concrete hdparm and mount invocations follow this list.)
On many operating systems, you can set a filesystem to be updated
asynchronously by mounting it with the -o async
option. If your
computer is reasonably stable, this should give you more performance without
sacrificing too much reliability. (This flag is on by default on Linux.)
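For example, here are sketches of the two tips above. The hdparm invocation adds a device name (/dev/hda is only an example; heed the backup warning given earlier), and the mount command remounts an already-mounted filesystem holding the data directory (the device and mount point are examples as well):
shell> hdparm -m 16 -d 1 /dev/hda
shell> mount -o remount,noatime,async /dev/hda3 /path/to/datadir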
You can move tables and databases from the database directory to other locations and replace them with symbolic links to the new locations. You might want to do this, for example, to move a database to a filesystem with more free space or to increase the speed of your system by spreading your tables across different disks.
The recommended way to do this is to just symlink databases to a different disk. Symlink tables only as a last resort.
On Unix, the way to symlink a database is to first create a directory on some disk where you have free space and then create a symlink to it from the MySQL data directory.
shell> mkdir /dr1/databases/test
shell> ln -s /dr1/databases/test /path/to/datadir
MySQL doesn't support linking one directory to multiple
databases. Replacing a database directory with a symbolic link will
work fine as long as you don't make a symbolic link between databases.
Suppose that you have a database db1
under the MySQL data
directory, and then make a symlink db2
that points to db1
:
shell> cd /path/to/datadir
shell> ln -s db1 db2
Now, for any table tbl_a
in db1
, there also appears to be
a table tbl_a
in db2
. If one client updates db1.tbl_a
and another client updates db2.tbl_a
, there will be problems.
If you really need to do this, you can change one of the source files. The file to modify depends on your version of MySQL. For MySQL 4.0 and up, look for the following statement in the `mysys/my_symlink.c' file:
if (!(MyFlags & MY_RESOLVE_LINK) || (!lstat(filename,&stat_buff) && S_ISLNK(stat_buff.st_mode)))
Before MySQL 4.0, look for this statement in the `mysys/mf_format.c' file:
if (flag & 32 || (!lstat(to,&stat_buff) && S_ISLNK(stat_buff.st_mode)))
Change the statement to this:
if (1)
On Windows, you can use internal symbolic links to directories by compiling
MySQL with -DUSE_SYMDIR
. This allows you to put different
databases on different disks. See section 7.6.1.3 Using Symbolic Links for Databases on Windows.
Before MySQL 4.0, you should not symlink tables unless you are
very careful with them. The problem is that if you run ALTER
TABLE
, REPAIR TABLE
, or OPTIMIZE TABLE
on a symlinked
table, the symlinks will be removed and replaced by the original
files. This happens because these statements work by creating a
temporary file in the database directory and replacing the original
file with the temporary file when the statement operation is complete.
You should not symlink tables on systems that don't have a fully
working realpath()
call. (At least Linux and Solaris support
realpath()
). You can check whether your system supports symbolic links
by issuing a SHOW VARIABLES LIKE 'have_symlink'
statement.
In MySQL 4.0, symlinks are fully supported only for MyISAM
tables. For other table types, you will probably get strange problems
if you try to use symbolic links on files in the operating system with
any of the preceding statements.
The handling of symbolic links for MyISAM
tables in MySQL 4.0 works
the following way:
The symlinking can be done manually from the command line with ln -s if mysqld is not running. With SQL, you can instruct the server to perform the symlinking by using the DATA DIRECTORY and INDEX DIRECTORY options to CREATE TABLE (a sketch follows this list). See section 13.2.6 CREATE TABLE Syntax.
myisamchk
will not replace a symlink with the data file or index file.
It works directly on the file a symlink points to. Any temporary files
are created in the directory where the data file or index file is
located.
If you drop a table that is using symlinks, both the symlink and the file the symlink points to are dropped. This is a good reason why you should not run mysqld as root or allow users to have write access to the MySQL database directories.
If you rename a table with ALTER TABLE ... RENAME and you don't move the table to another database, the symlinks in the database directory are renamed to the new names and the data file and index file are renamed accordingly.
If you use ALTER TABLE ... RENAME to move a table to another database, the table is moved to the other database directory. The old symlinks and the files to which they pointed are deleted. In other words, the new table will not be symlinked.
If you are not using symlinks, you should use the --skip-symbolic-links option to mysqld to ensure that no one can use mysqld to drop or rename a file outside of the data directory.
SHOW CREATE TABLE
doesn't report if a table has symbolic links
prior to MySQL 4.0.15. This is also true for mysqldump
, which uses
SHOW CREATE TABLE
to generate CREATE TABLE
statements.
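For example, here is a sketch of creating a MyISAM table whose data file and index file live on different disks; the directory names are illustrative only, and the server creates the symlinks in the database directory for you:
mysql> CREATE TABLE tbl_a (i INT)
    ->        DATA DIRECTORY = '/dr1/data'
    ->        INDEX DIRECTORY = '/dr2/index';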
Table symlink operations that are not yet supported:
ALTER TABLE
ignores the DATA DIRECTORY
and INDEX
DIRECTORY
table options.
BACKUP TABLE
and RESTORE TABLE
don't respect symbolic
links.
The `.frm' file must never be a symbolic link (only the data file and index file can be symbolic links). Attempting to create such links (for example, to make synonyms) produces incorrect results. Suppose that you have a database db1 under the MySQL data directory, a table tbl1 in this database, and in the db1 directory you make a symlink tbl2 that points to tbl1:
shell> cd /path/to/datadir/db1
shell> ln -s tbl1.frm tbl2.frm
shell> ln -s tbl1.MYD tbl2.MYD
shell> ln -s tbl1.MYI tbl2.MYI
Now there will be problems if one thread reads
db1.tbl1
and another
thread updates db1.tbl2
:
The query cache will be ``fooled'' (it will believe that tbl1 has not been updated, so it will return out-of-date results).
ALTER
statements on tbl2
will also fail.
Beginning with MySQL 3.23.16, the mysqld-max
and mysql-max-nt
servers for Windows are
compiled with the -DUSE_SYMDIR
option. This allows you to put a
database directory on a different disk by setting up a symbolic link to it.
This is similar to the way that symbolic links work on Unix, although the
procedure for setting up the link is different.
As of MySQL 4.0, symbolic links are enabled by default. If you don't need them,
you can disable them with the skip-symbolic-links
option:
[mysqld]
skip-symbolic-links
Before MySQL 4.0, symbolic links are disabled by default. To enable them, you should put the following entry in your `my.cnf' or `my.ini' file:
[mysqld]
symbolic-links
On Windows, you make a symbolic link to a MySQL database by creating a file in the data directory that contains the path to the destination directory. The file should be named `db_name.sym', where db_name is the database name.
Suppose that the MySQL data directory is `C:\mysql\data'
and you want to have database foo
located at `D:\data\foo'.
Set up a symlink like this:
Make sure that the directory `D:\data\foo' exists; create it if necessary. Then create a file `C:\mysql\data\foo.sym' that contains the pathname D:\data\foo\.
After that, all tables created in the database foo
will be created in
`D:\data\foo'. Note that the symbolic link will not be used if a
directory with the database name exists in the MySQL data directory.
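As a sketch, the foo example could be set up from a Windows command prompt like this; you can equally create the `foo.sym' file with a text editor:
C:\> mkdir D:\data\foo
C:\> echo D:\data\foo\ > C:\mysql\data\foo.sym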