The MySQL Performance Schema is a feature for monitoring MySQL Server execution at a low level. The Performance Schema has these characteristics:
The Performance Schema provides a way to inspect internal execution of the server at runtime. It is implemented using the PERFORMANCE_SCHEMA storage engine and the performance_schema database. The Performance Schema focuses primarily on performance data. This differs from INFORMATION_SCHEMA, which serves for inspection of metadata.
The Performance Schema monitors server events. An “event” is anything the server does that takes time and has been instrumented so that timing information can be collected. In general, an event could be a function call, a wait for the operating system, a stage of an SQL statement execution such as parsing or sorting, or an entire statement or group of statements. Currently, event collection provides access to information about synchronization calls (such as for mutexes), file and table I/O, table locks, and so forth for the server and for several storage engines.
Performance Schema events are distinct from events written to the server's binary log (which describe data modifications) and Event Scheduler events (which are a type of stored program).
Performance Schema events are specific to a given instance of the MySQL Server. In MySQL 5.6.9 and later, Performance Schema tables are considered local to the server, and changes to them are not replicated or written to the binary log. (Bug #14741537)
Current events are available, as well as event histories and summaries. This enables you to determine how many times instrumented activities were performed and how much time they took. Event information is available to show the activities of specific threads, or activity associated with particular objects such as a mutex or file.
The PERFORMANCE_SCHEMA storage engine collects event data using “instrumentation points” in server source code. Collected events are stored in tables in the performance_schema database. These tables can be queried using SELECT statements like other tables.
Performance Schema configuration can be modified dynamically by updating tables in the performance_schema database through SQL statements. Configuration changes affect data collection immediately.
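For example, the following statements (a minimal illustration; the instrument pattern is arbitrary) enable the wait-event history consumer and all file I/O instruments at runtime, and the changes apply to events collected from that point on:
mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES'
    -> WHERE NAME = 'events_waits_history';
mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES'
    -> WHERE NAME LIKE 'wait/io/file/%';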
Tables in the performance_schema database are views or temporary tables that use no persistent on-disk storage.
Monitoring is available on all platforms supported by MySQL.
Some limitations might apply: the types of timers might vary per platform, and instruments that apply to storage engines might not be implemented for all storage engines. Instrumentation of each third-party engine is the responsibility of the engine maintainer. See also Section E.8, “Restrictions on Performance Schema”.
Data collection is implemented by modifying the server source code to add instrumentation. There are no separate threads associated with the Performance Schema, unlike other features such as replication or the Event Scheduler.
The Performance Schema is intended to provide access to useful information about server execution while having minimal impact on server performance. The implementation follows these design goals:
Activating the Performance Schema causes no changes in server behavior. For example, it does not cause thread scheduling to change, and it does not cause query execution plans (as shown by EXPLAIN) to change.
No memory allocation is done beyond that which occurs during server startup. By using early allocation of structures with a fixed size, it is never necessary to resize or reallocate them, which is critical for achieving good runtime performance.
Server monitoring occurs continuously and unobtrusively with very little overhead. Activating the Performance Schema does not make the server unusable.
The parser is unchanged. There are no new keywords or statements.
Execution of server code proceeds normally even if the Performance Schema fails internally.
When there is a choice between performing processing during event collection initially or during event retrieval later, priority is given to making collection faster. This is because collection is ongoing whereas retrieval is on demand and might never happen at all.
It is easy to add new instrumentation points.
Instrumentation is versioned. If the instrumentation implementation changes, previously instrumented code will continue to work. This benefits developers of third-party plugins because it is not necessary to upgrade each plugin to stay synchronized with the latest Performance Schema changes.
This section briefly introduces the Performance Schema with examples that show how to use it. For additional examples, see Section 20.14, “Using the Performance Schema to Diagnose Problems”.
For the Performance Schema to be available, support for it must have been configured when MySQL was built. You can verify whether this is the case by checking the server's help output. If the Performance Schema is available, the output will mention several variables with names that begin with performance_schema:
shell> mysqld --verbose --help
...
--performance_schema
Enable the performance schema.
--performance_schema_events_waits_history_long_size=#
Number of rows in events_waits_history_long.
...
If such variables do not appear in the output, your server has not been built to support the Performance Schema. In this case, see Section 20.2, “Performance Schema Configuration”.
Assuming that the Performance Schema is available, it is enabled by default as of MySQL 5.6.6. Before 5.6.6, it is disabled by default. To enable or disable it explicitly, start the server with the performance_schema variable set to an appropriate value. For example, use these lines in your my.cnf file:
[mysqld]
performance_schema=on
When the server starts, it sees performance_schema and attempts to initialize the Performance Schema. To verify successful initialization, use this statement:
mysql> SHOW VARIABLES LIKE 'performance_schema';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| performance_schema | ON |
+--------------------+-------+
A value of ON means that the Performance Schema initialized successfully and is ready for use. A value of OFF means that some error occurred. Check the server error log for information about what went wrong.
The Performance Schema is implemented as a storage engine. If this engine is available (which you should already have checked earlier), you should see it listed with a SUPPORT value of YES in the output from the INFORMATION_SCHEMA.ENGINES table or the SHOW ENGINES statement:
mysql> SELECT * FROM INFORMATION_SCHEMA.ENGINES
    -> WHERE ENGINE='PERFORMANCE_SCHEMA'\G
*************************** 1. row ***************************
      ENGINE: PERFORMANCE_SCHEMA
     SUPPORT: YES
     COMMENT: Performance Schema
TRANSACTIONS: NO
          XA: NO
  SAVEPOINTS: NO

mysql> SHOW ENGINES\G
...
      Engine: PERFORMANCE_SCHEMA
     Support: YES
     Comment: Performance Schema
Transactions: NO
          XA: NO
  Savepoints: NO
...
The PERFORMANCE_SCHEMA storage engine operates on tables in the performance_schema database. You can make performance_schema the default database so that references to its tables need not be qualified with the database name:
mysql> USE performance_schema;
Many examples in this chapter assume that performance_schema is the default database.
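If performance_schema is not the default database, qualify its table names with the database name; for example, the events_waits_current query shown later in this section could be written as:
mysql> SELECT * FROM performance_schema.events_waits_current\G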
Performance Schema tables are stored in the performance_schema database. Information about the structure of this database and its tables can be obtained, as for any other database, by selecting from the INFORMATION_SCHEMA database or by using SHOW statements. For example, use either of these statements to see what Performance Schema tables exist:
mysql> SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
    -> WHERE TABLE_SCHEMA = 'performance_schema';
+-----------------------------------------------+
| TABLE_NAME                                    |
+-----------------------------------------------+
| cond_instances                                |
| events_waits_current                          |
| events_waits_history                          |
| events_waits_history_long                     |
| events_waits_summary_by_instance              |
| events_waits_summary_by_thread_by_event_name  |
| events_waits_summary_global_by_event_name     |
| file_instances                                |
| file_summary_by_event_name                    |
| file_summary_by_instance                      |
| mutex_instances                               |
| objects_summary_global_by_type                |
| performance_timers                            |
| rwlock_instances                              |
| setup_actors                                  |
| setup_consumers                               |
| setup_instruments                             |
| setup_objects                                 |
| setup_timers                                  |
| threads                                       |
+-----------------------------------------------+

mysql> SHOW TABLES FROM performance_schema;
+-----------------------------------------------+
| Tables_in_performance_schema                  |
+-----------------------------------------------+
| cond_instances                                |
| events_waits_current                          |
| events_waits_history                          |
...
The number of Performance Schema tables is expected to increase over time as implementation of additional instrumentation proceeds.
The name of the performance_schema database is lowercase, as are the names of tables within it. Queries should specify the names in lowercase.
To see the structure of individual tables, use SHOW CREATE TABLE:
mysql> SHOW CREATE TABLE setup_timers\G
*************************** 1. row ***************************
Table: setup_timers
Create Table: CREATE TABLE `setup_timers` (
`NAME` varchar(64) NOT NULL,
`TIMER_NAME` enum('CYCLE','NANOSECOND','MICROSECOND','MILLISECOND','TICK')
NOT NULL
) ENGINE=PERFORMANCE_SCHEMA DEFAULT CHARSET=utf8
Table structure is also available by selecting from tables such as INFORMATION_SCHEMA.COLUMNS or by using statements such as SHOW COLUMNS.
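For example, either of these statements shows the columns of the setup_timers table (output not reproduced here):
mysql> SHOW COLUMNS FROM setup_timers;
mysql> SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
    -> WHERE TABLE_SCHEMA = 'performance_schema' AND TABLE_NAME = 'setup_timers';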
Tables in the performance_schema database can be grouped according to the type of information in them: current events, event histories and summaries, object instances, and setup (configuration) information. The following examples illustrate a few uses for these tables. For detailed information about the tables in each group, see Section 20.9, “Performance Schema Table Descriptions”.
To see what the server is doing at the moment, examine the events_waits_current table. It contains one row per thread showing each thread's most recent monitored event:
mysql> SELECT * FROM events_waits_current\G
*************************** 1. row ***************************
THREAD_ID: 0
EVENT_ID: 5523
EVENT_NAME: wait/synch/mutex/mysys/THR_LOCK::mutex
SOURCE: thr_lock.c:525
TIMER_START: 201660494489586
TIMER_END: 201660494576112
TIMER_WAIT: 86526
SPINS: NULL
OBJECT_SCHEMA: NULL
OBJECT_NAME: NULL
OBJECT_TYPE: NULL
OBJECT_INSTANCE_BEGIN: 142270668
NESTING_EVENT_ID: NULL
OPERATION: lock
NUMBER_OF_BYTES: NULL
FLAGS: 0
...
This event indicates that thread 0 was waiting for 86,526 picoseconds to acquire a lock on THR_LOCK::mutex, a mutex in the mysys subsystem. The first few columns provide the following information:
The ID columns indicate which thread the event comes from and the event number.
EVENT_NAME indicates what was instrumented and SOURCE indicates which source file contains the instrumented code.
The timer columns show when the event started and stopped and how long it took. If an event is still in progress, the TIMER_END and TIMER_WAIT values are NULL. Timer values are approximate and expressed in picoseconds. For information about timers and event time collection, see Section 20.2.3.1, “Performance Schema Event Timing”.
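For example, a query such as the following (purely illustrative) lists in-progress events by testing for a NULL TIMER_END value:
mysql> SELECT THREAD_ID, EVENT_NAME, SOURCE
    -> FROM events_waits_current WHERE TIMER_END IS NULL;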
The history tables contain the same kind of rows as the current-events table but have more rows and show what the server has been doing “recently” rather than “currently.” The events_waits_history and events_waits_history_long tables contain the most recent 10 events per thread and most recent 10,000 events, respectively. For example, to see information for recent events produced by thread 13, do this:
mysql> SELECT EVENT_ID, EVENT_NAME, TIMER_WAIT
    -> FROM events_waits_history WHERE THREAD_ID = 13
    -> ORDER BY EVENT_ID;
+----------+-----------------------------------------+------------+
| EVENT_ID | EVENT_NAME                              | TIMER_WAIT |
+----------+-----------------------------------------+------------+
|       86 | wait/synch/mutex/mysys/THR_LOCK::mutex  |     686322 |
|       87 | wait/synch/mutex/mysys/THR_LOCK_malloc  |     320535 |
|       88 | wait/synch/mutex/mysys/THR_LOCK_malloc  |     339390 |
|       89 | wait/synch/mutex/mysys/THR_LOCK_malloc  |     377100 |
|       90 | wait/synch/mutex/sql/LOCK_plugin        |     614673 |
|       91 | wait/synch/mutex/sql/LOCK_open          |     659925 |
|       92 | wait/synch/mutex/sql/THD::LOCK_thd_data |     494001 |
|       93 | wait/synch/mutex/mysys/THR_LOCK_malloc  |     222489 |
|       94 | wait/synch/mutex/mysys/THR_LOCK_malloc  |     214947 |
|       95 | wait/synch/mutex/mysys/LOCK_alarm       |     312993 |
+----------+-----------------------------------------+------------+
As new events are added to a history table, older events are discarded if the table is full.
Summary tables provide aggregated information for all events over time. The tables in this group summarize event data in different ways. To see which instruments have been executed the most times or have taken the most wait time, sort the events_waits_summary_global_by_event_name table on the COUNT_STAR or SUM_TIMER_WAIT column, which correspond to a COUNT(*) or SUM(TIMER_WAIT) value, respectively, calculated over all events:
mysql> SELECT EVENT_NAME, COUNT_STAR
    -> FROM events_waits_summary_global_by_event_name
    -> ORDER BY COUNT_STAR DESC LIMIT 10;
+---------------------------------------------------+------------+
| EVENT_NAME                                         | COUNT_STAR |
+---------------------------------------------------+------------+
| wait/synch/mutex/mysys/THR_LOCK_malloc             |       6419 |
| wait/io/file/sql/FRM                               |        452 |
| wait/synch/mutex/sql/LOCK_plugin                   |        337 |
| wait/synch/mutex/mysys/THR_LOCK_open               |        187 |
| wait/synch/mutex/mysys/LOCK_alarm                  |        147 |
| wait/synch/mutex/sql/THD::LOCK_thd_data            |        115 |
| wait/io/file/myisam/kfile                          |        102 |
| wait/synch/mutex/sql/LOCK_global_system_variables  |         89 |
| wait/synch/mutex/mysys/THR_LOCK::mutex             |         89 |
| wait/synch/mutex/sql/LOCK_open                     |         88 |
+---------------------------------------------------+------------+

mysql> SELECT EVENT_NAME, SUM_TIMER_WAIT
    -> FROM events_waits_summary_global_by_event_name
    -> ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;
+----------------------------------------+----------------+
| EVENT_NAME                             | SUM_TIMER_WAIT |
+----------------------------------------+----------------+
| wait/io/file/sql/MYSQL_LOG             |     1599816582 |
| wait/synch/mutex/mysys/THR_LOCK_malloc |     1530083250 |
| wait/io/file/sql/binlog_index          |     1385291934 |
| wait/io/file/sql/FRM                   |     1292823243 |
| wait/io/file/myisam/kfile              |      411193611 |
| wait/io/file/myisam/dfile              |      322401645 |
| wait/synch/mutex/mysys/LOCK_alarm      |      145126935 |
| wait/io/file/sql/casetest              |      104324715 |
| wait/synch/mutex/sql/LOCK_plugin       |       86027823 |
| wait/io/file/sql/pid                   |       72591750 |
+----------------------------------------+----------------+
These results show that the THR_LOCK_malloc mutex is “hot,” both in terms of how often it is used and the amount of time that threads wait attempting to acquire it. The THR_LOCK_malloc mutex is used only in debug builds. In production builds it is not hot because it is nonexistent.
Instance tables document what types of objects are instrumented. An instrumented object, when used by the server, produces an event. These tables provide event names and explanatory notes or status information. For example, the file_instances table lists instances of instruments for file I/O operations and their associated files:
mysql> SELECT * FROM file_instances\G
*************************** 1. row ***************************
FILE_NAME: /opt/mysql-log/60500/binlog.000007
EVENT_NAME: wait/io/file/sql/binlog
OPEN_COUNT: 0
*************************** 2. row ***************************
FILE_NAME: /opt/mysql/60500/data/mysql/tables_priv.MYI
EVENT_NAME: wait/io/file/myisam/kfile
OPEN_COUNT: 1
*************************** 3. row ***************************
FILE_NAME: /opt/mysql/60500/data/mysql/columns_priv.MYI
EVENT_NAME: wait/io/file/myisam/kfile
OPEN_COUNT: 1
...
Setup tables are used to configure and display monitoring characteristics. For example, to see which event timers are selected, query the setup_timers table:
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME | TIMER_NAME |
+-----------+-------------+
| idle | MICROSECOND |
| wait | CYCLE |
| stage | NANOSECOND |
| statement | NANOSECOND |
+-----------+-------------+
setup_instruments lists the set of instruments for which events can be collected and shows which of them are enabled:
mysql> SELECT * FROM setup_instruments;
+------------------------------------------------------------+---------+-------+
| NAME | ENABLED | TIMED |
+------------------------------------------------------------+---------+-------+
...
| wait/synch/mutex/sql/LOCK_global_read_lock | YES | YES |
| wait/synch/mutex/sql/LOCK_global_system_variables | YES | YES |
| wait/synch/mutex/sql/LOCK_lock_db | YES | YES |
| wait/synch/mutex/sql/LOCK_manager | YES | YES |
...
| wait/synch/rwlock/sql/LOCK_grant | YES | YES |
| wait/synch/rwlock/sql/LOGGER::LOCK_logger | YES | YES |
| wait/synch/rwlock/sql/LOCK_sys_init_connect | YES | YES |
| wait/synch/rwlock/sql/LOCK_sys_init_slave | YES | YES |
...
| wait/io/file/sql/binlog | YES | YES |
| wait/io/file/sql/binlog_index | YES | YES |
| wait/io/file/sql/casetest | YES | YES |
| wait/io/file/sql/dbopt | YES | YES |
...
To understand how to interpret instrument names, see Section 20.4, “Performance Schema Instrument Naming Conventions”.
To control whether events are collected for an instrument, set its ENABLED value to YES or NO. For example:
mysql> UPDATE setup_instruments SET ENABLED = 'NO'
    -> WHERE NAME = 'wait/synch/mutex/sql/LOCK_mysql_create_db';
The Performance Schema uses collected events to update tables in the performance_schema database, which act as “consumers” of event information. The setup_consumers table lists the available consumers and which are enabled:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| events_stages_current | NO |
| events_stages_history | NO |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | NO |
| events_statements_history_long | NO |
| events_waits_current | NO |
| events_waits_history | NO |
| events_waits_history_long | NO |
| global_instrumentation | YES |
| thread_instrumentation | YES |
| statements_digest | YES |
+--------------------------------+---------+
To control whether the Performance Schema maintains a consumer as a destination for event information, set its ENABLED value.
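For example, to stop maintaining the long wait-event history (one possible choice; any consumer name from the table above works the same way):
mysql> UPDATE setup_consumers SET ENABLED = 'NO'
    -> WHERE NAME = 'events_waits_history_long';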
For more information about the setup tables and how to use them to control event collection, see Section 20.2.3.2, “Performance Schema Event Filtering”.
There are some miscellaneous tables that do not fall into any of the previous groups. For example, performance_timers lists the available event timers and their characteristics. For information about timers, see Section 20.2.3.1, “Performance Schema Event Timing”.
To use the MySQL Performance Schema, these configuration considerations apply:
The Performance Schema must be configured into MySQL Server at build time to make it available. Performance Schema support is included in binary MySQL distributions. If you are building from source, you must ensure that it is configured into the build as described in Section 20.2.1, “Performance Schema Build Configuration”.
The Performance Schema must be enabled at server startup to enable event collection to occur. Specific Performance Schema features can be enabled at server startup or at runtime to control which types of event collection occur. See Section 20.2.2, “Performance Schema Startup Configuration”, Section 20.2.3, “Performance Schema Runtime Configuration”, and Section 20.2.3.2, “Performance Schema Event Filtering”.
For the Performance Schema to be available, it must be configured into the MySQL server at build time. Binary MySQL distributions provided by Oracle Corporation are configured to support the Performance Schema. If you use a binary MySQL distribution from another provider, check with the provider whether the distribution has been appropriately configured.
If you build MySQL from a source distribution, enable the Performance Schema by running CMake with the WITH_PERFSCHEMA_STORAGE_ENGINE option enabled:
shell> cmake . -DWITH_PERFSCHEMA_STORAGE_ENGINE=1
Configuring MySQL with the -DWITHOUT_PERFSCHEMA_STORAGE_ENGINE=1 option prevents inclusion of the Performance Schema, so if you want it included, do not use this option. See Section 2.9.4, “MySQL Source-Configuration Options”.
If you install MySQL over a previous installation that was configured without the Performance Schema (or with an older version of the Performance Schema that may not have all the current tables), run mysql_upgrade after starting the server to ensure that the performance_schema database exists with all current tables. Then restart the server. One indication that you need to do this is the presence of messages such as the following in the error log:
[ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
...
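The upgrade step itself might look like this (the connection options are placeholders that depend on your installation); restart the server afterward as noted above:
shell> mysql_upgrade -u root -p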
To verify whether a server was built with Performance Schema support, check its help output. If the Performance Schema is available, the output will mention several variables with names that begin with performance_schema:
shell> mysqld --verbose --help
...
--performance_schema
Enable the performance schema.
--performance_schema_events_waits_history_long_size=#
Number of rows in events_waits_history_long.
...
You can also connect to the server and look for a line that names the PERFORMANCE_SCHEMA storage engine in the output from SHOW ENGINES:
mysql> SHOW ENGINES\G
...
Engine: PERFORMANCE_SCHEMA
Support: YES
Comment: Performance Schema
Transactions: NO
XA: NO
Savepoints: NO
...
If the Performance Schema was not configured into the server at build time, no row for PERFORMANCE_SCHEMA will appear in the output from SHOW ENGINES. You might see performance_schema listed in the output from SHOW DATABASES, but it will have no tables and you will not be able to use it.
A line for PERFORMANCE_SCHEMA in the SHOW ENGINES output means that the Performance Schema is available, not that it is enabled. To enable it, you must do so at server startup, as described in the next section.
Assuming that the Performance Schema is available, it is enabled by default as of MySQL 5.6.6. Before 5.6.6, it is disabled by default. To enable or disable it explicitly, start the server with the performance_schema variable set to an appropriate value. For example, use these lines in your my.cnf file:
[mysqld]
performance_schema=on
If the server is unable to allocate any internal buffer during Performance Schema initialization, the Performance Schema disables itself and sets performance_schema to OFF, and the server runs without instrumentation.
As of MySQL 5.6.4, the Performance Schema permits instrument and consumer configuration at server startup, which previously was possible only at runtime using UPDATE statements for the setup_instruments and setup_consumers tables. This change was made because configuration at runtime is too late to disable instruments that have already been initialized during server startup. For example, the wait/synch/mutex/sql/LOCK_open mutex is initialized once during server startup, so attempts to disable the corresponding instrument at runtime have no effect.
To control an instrument at server startup, use an option of this form:
--performance_schema_instrument='instrument_name=value'
Here, instrument_name is an instrument name such as wait/synch/mutex/sql/LOCK_open, and value is one of these values:
off, false, or 0: Disable the instrument
on, true, or 1: Enable and time the instrument
counted: Enable and count (rather than time) the instrument
Each --performance_schema_instrument option can specify only one instrument name, but multiple instances of the option can be given to configure multiple instruments. In addition, patterns are permitted in instrument names to configure instruments that match the pattern. To configure all condition synchronization instruments as enabled and counted, use this option:
--performance_schema_instrument='wait/synch/cond/%=counted'
To disable all instruments, use this option:
--performance_schema_instrument='%=off'
Longer instrument name strings take precedence over shorter pattern names, regardless of order. For information about specifying patterns to select instruments, see Section 20.2.3.2.2, “Naming Instruments or Consumers for Filtering Operations”.
An unrecognized instrument name is ignored. It is possible that a plugin installed later may create the instrument, at which time the name is recognized and configured.
To control a consumer at server startup, use an option of this form:
--performance_schema_consumer_consumer_name=value
Here, consumer_name is a consumer name such as events_waits_history, and value is one of these values:
off, false, or 0: Do not collect events for the consumer
on, true, or 1: Collect events for the consumer
For example, to enable the events_waits_history consumer, use this option:
--performance_schema_consumer_events_waits_history=on
The permitted consumer names can be found by examining the setup_consumers table. Patterns are not permitted.
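Consumer options can also be placed in an option file; for example, these my.cnf lines (an illustrative combination) enable collection of individual wait events and their per-thread history at startup:
[mysqld]
performance_schema_consumer_events_waits_current=on
performance_schema_consumer_events_waits_history=on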
The Performance Schema includes several system variables that provide configuration information:
mysql> SHOW VARIABLES LIKE 'perf%';
+--------------------------------------------------------+---------+
| Variable_name | Value |
+--------------------------------------------------------+---------+
| performance_schema | ON |
| performance_schema_accounts_size | 100 |
| performance_schema_digests_size | 200 |
| performance_schema_events_stages_history_long_size | 10000 |
| performance_schema_events_stages_history_size | 10 |
| performance_schema_events_statements_history_long_size | 10000 |
| performance_schema_events_statements_history_size | 10 |
| performance_schema_events_waits_history_long_size | 10000 |
| performance_schema_events_waits_history_size | 10 |
| performance_schema_hosts_size | 100 |
| performance_schema_max_cond_classes | 80 |
| performance_schema_max_cond_instances | 1000 |
...
The performance_schema variable is ON or OFF to indicate whether the Performance Schema is enabled or disabled. The other variables indicate table sizes (number of rows) or memory allocation values.
With the Performance Schema enabled, the number of Performance Schema instances affects the server memory footprint, perhaps to a large extent. It may be necessary to tune the values of Performance Schema system variables to find the number of instances that balances insufficient instrumentation against excessive memory consumption.
To change the value of Performance Schema system variables, set them at server startup. For example, put the following lines in a my.cnf file to change the sizes of the history tables:
[mysqld]
performance_schema
performance_schema_events_waits_history_size=20
performance_schema_events_waits_history_long_size=15000
As of MySQL 5.6.6, the Performance Schema automatically sizes the values of several of its parameters at server startup if they are not set explicitly. For example, it sizes the parameters that control the sizes of the events waits tables this way. To see which parameters are sized under this policy, use mysqld --verbose --help and look for those with a default value of -1, or see Section 20.11, “Performance Schema System Variables”.
For each autosized parameter that is not set at server startup (or is set to -1), the Performance Schema determines how to set its value based on the values of the following system variables, which are considered as “hints” about how you have configured your MySQL server:
max_connections
open_files_limit
table_definition_cache
table_open_cache
To override autosizing for a given parameter, set it to a value other than -1 at startup. In this case, the Performance Schema assigns it the specified value.
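For example, to override autosizing for the number of instrumented table objects (the specific value here is only an illustration), you might put this in my.cnf:
[mysqld]
performance_schema_max_table_instances=1000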
At runtime, SHOW VARIABLES displays the actual values that autosized parameters were set to.
If the Performance Schema is disabled, its autosized parameters remain set to -1 and SHOW VARIABLES displays -1.
Performance Schema setup tables contain information about monitoring configuration:
mysql> SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
    -> WHERE TABLE_SCHEMA = 'performance_schema'
    -> AND TABLE_NAME LIKE 'setup%';
+-------------------+
| TABLE_NAME        |
+-------------------+
| setup_actors      |
| setup_consumers   |
| setup_instruments |
| setup_objects     |
| setup_timers      |
+-------------------+
You can examine the contents of these tables to obtain information about Performance Schema monitoring characteristics. If you have the UPDATE privilege, you can change Performance Schema operation by modifying setup tables to affect how monitoring occurs. For additional details about these tables, see Section 20.9.1, “Performance Schema Setup Tables”.
To see which event timers are selected, query the setup_timers table:
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME | TIMER_NAME |
+-----------+-------------+
| idle | MICROSECOND |
| wait | CYCLE |
| stage | NANOSECOND |
| statement | NANOSECOND |
+-----------+-------------+
The NAME value indicates the type of instrument to which the timer applies, and TIMER_NAME indicates which timer applies to those instruments. The timer applies to instruments whose names begin with a component matching the NAME value.
To change the timer, update the TIMER_NAME value for the instrument type in question. For example, to use the NANOSECOND timer for the wait timer:
mysql> UPDATE setup_timers SET TIMER_NAME = 'NANOSECOND'
    -> WHERE NAME = 'wait';
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | NANOSECOND  |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+
For discussion of timers, see Section 20.2.3.1, “Performance Schema Event Timing”.
The setup_instruments and setup_consumers tables list the instruments for which events can be collected and the types of consumers for which event information actually is collected, respectively. Other setup tables enable further modification of the monitoring configuration. Section 20.2.3.2, “Performance Schema Event Filtering”, discusses how you can modify these tables to affect event collection.
If there are Performance Schema configuration changes that must be made at runtime using SQL statements and you would like these changes to take effect each time the server starts, put the statements in a file and start the server with the --init-file=file_name option. This strategy can also be useful if you have multiple monitoring configurations, each tailored to produce a different kind of monitoring, such as casual server health monitoring, incident investigation, application behavior troubleshooting, and so forth. Put the statements for each monitoring configuration into their own file and specify the appropriate file as the --init-file argument when you start the server.
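For example, you might keep the statements in a file (the path here is hypothetical) and name it in my.cnf:
[mysqld]
performance_schema=on
init-file=/usr/local/mysql/etc/ps-waits-setup.sql
The ps-waits-setup.sql file could then contain statements such as the following (table names are qualified because no default database is selected when the file runs):
UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME LIKE 'wait/io/file/%';
UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE 'events_waits%';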
Events are collected by means of instrumentation added to the server source code. Instruments time events, which is how the Performance Schema provides an idea of how long events take. It is also possible to configure instruments not to collect timing information. This section discusses the available timers and their characteristics, and how timing values are represented in events.
Two tables provide timer information:
performance_timers lists the available timers and their characteristics.
setup_timers indicates which timers are used for which instruments.
Each timer row in setup_timers must refer to one of the timers listed in performance_timers.
Timers vary in precision and the amount of overhead they involve. To see what timers are available and their characteristics, check the performance_timers table:
mysql> SELECT * FROM performance_timers;
+-------------+-----------------+------------------+----------------+
| TIMER_NAME | TIMER_FREQUENCY | TIMER_RESOLUTION | TIMER_OVERHEAD |
+-------------+-----------------+------------------+----------------+
| CYCLE | 2389029850 | 1 | 72 |
| NANOSECOND | NULL | NULL | NULL |
| MICROSECOND | 1000000 | 1 | 585 |
| MILLISECOND | 1035 | 1 | 738 |
| TICK | 101 | 1 | 630 |
+-------------+-----------------+------------------+----------------+
The TIMER_NAME column shows the names of the available timers. CYCLE refers to the timer that is based on the CPU (processor) cycle counter. If the values associated with a given timer name are NULL, that timer is not supported on your platform. The rows that do not have NULL indicate which timers you can use in setup_timers.
TIMER_FREQUENCY indicates the number of timer units per second. For a cycle timer, the frequency is generally related to the CPU speed. The value shown was obtained on a system with a 2.4GHz processor. The other timers are based on fixed fractions of seconds. For TICK, the frequency may vary by platform (for example, some use 100 ticks/second, others 1000 ticks/second).
TIMER_RESOLUTION indicates the number of timer units by which timer values increase at a time. If a timer has a resolution of 10, its value increases by 10 each time.
TIMER_OVERHEAD is the minimal number of cycles of overhead to obtain one timing with the given timer. The overhead per event is twice the value displayed because the timer is invoked at the beginning and end of the event.
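To put that in perspective, the CYCLE timer shown above costs about 2 × 72 = 144 cycles per event, roughly 60 nanoseconds on the 2.4GHz processor used for the example. A query such as the following (illustrative only, and meaningful only when the CYCLE row is not NULL) expresses each timer's per-event overhead in seconds by dividing by the cycle frequency:
mysql> SELECT t.TIMER_NAME,
    ->        2 * t.TIMER_OVERHEAD / c.TIMER_FREQUENCY AS overhead_seconds
    -> FROM performance_timers t
    -> JOIN performance_timers c ON c.TIMER_NAME = 'CYCLE';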
To see which timer is in effect or to change the timer, access the setup_timers table:
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | NANOSECOND  |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+

mysql> UPDATE setup_timers SET TIMER_NAME = 'MICROSECOND'
    -> WHERE NAME = 'wait';
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | MICROSECOND |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+
By default, the Performance Schema uses the best timer available for each instrument type, but you can select a different one.
To time waits, the most important criterion is to reduce overhead, at the possible expense of timer accuracy, so using the CYCLE timer is the best.
The time a statement (or stage) takes to execute is in general orders of magnitude larger than the time it takes to execute a single wait. To time statements, the most important criterion is to have an accurate measure that is not affected by changes in processor frequency, so using a timer which is not based on cycles is the best. The default timer for statements is NANOSECOND. The extra “overhead” compared to the cycle timer is not significant, because the overhead caused by calling a timer twice (once when the statement starts, once when it ends) is orders of magnitude less than the CPU time used to execute the statement itself. Using the CYCLE timer has no benefit here, only drawbacks.
The precision offered by the cycle counter depends on processor speed. If the processor runs at 1 GHz (one billion cycles/second) or higher, the cycle counter delivers sub-nanosecond precision. Using the cycle counter is much cheaper than getting the actual time of day. For example, the standard gettimeofday() function can take hundreds of cycles, which is an unacceptable overhead for data gathering that may occur thousands or millions of times per second.
Cycle counters also have disadvantages:
End users expect to see timings in wall-clock units, such as fractions of a second. Converting from cycles to fractions of seconds can be expensive. For this reason, the conversion is a quick and fairly rough multiplication operation.
Processor cycle rate might change, such as when a laptop goes into power-saving mode or when a CPU slows down to reduce heat generation. If a processor's cycle rate fluctuates, conversion from cycles to real-time units is subject to error.
Cycle counters might be unreliable or unavailable depending on the processor or the operating system. For example, on Pentiums, the instruction is RDTSC (an assembly-language rather than a C instruction) and it is theoretically possible for the operating system to prevent user-mode programs from using it.
Some processor details related to out-of-order execution or multiprocessor synchronization might cause the counter to seem fast or slow by up to 1000 cycles.
Currently, MySQL works with cycle counters on x86 (Windows, Mac OS X, Linux, Solaris, and other Unix flavors), PowerPC, and IA-64.
The setup_instruments table has an ENABLED column to indicate the instruments for which to collect events. The table also has a TIMED column to indicate which instruments are timed. If an instrument is not enabled, it produces no events. If an enabled instrument is not timed, events produced by the instrument have NULL for the TIMER_START, TIMER_END, and TIMER_WAIT timer values. This in turn causes those values to be ignored when calculating the sum, minimum, maximum, and average time values in summary tables.
Within events, times are stored in picoseconds (trillionths of a second) to normalize them to a standard unit, regardless of which timer is selected. The timer used for an event is the one in effect when event timing begins. This timer is used to convert start and end values to picoseconds for storage in the event.
Modifications to the setup_timers table affect monitoring immediately. Events already in progress use the original timer for the begin time and the new timer for the end time, which leads to unpredictable results. If you make timer changes, you may want to use TRUNCATE TABLE to reset Performance Schema statistics.
The timer baseline (“time zero”) occurs at Performance Schema initialization during server startup. TIMER_START and TIMER_END values in events represent picoseconds since the baseline. TIMER_WAIT values are durations in picoseconds.
Picosecond values in events are approximate. Their accuracy is subject to the usual forms of error associated with conversion from one unit to another. If the CYCLE timer is used and the processor rate varies, there might be drift. For these reasons, it is not reasonable to look at the TIMER_START value for an event as an accurate measure of time elapsed since server startup. On the other hand, it is reasonable to use TIMER_START or TIMER_WAIT values in ORDER BY clauses to order events by start time or duration.
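For example, a query like this (any of the events_waits tables would serve equally well) lists the ten longest recent waits:
mysql> SELECT EVENT_NAME, TIMER_WAIT
    -> FROM events_waits_history_long
    -> ORDER BY TIMER_WAIT DESC LIMIT 10;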
The choice of picoseconds in events rather than a value such as microseconds has a performance basis. One implementation goal was to show results in a uniform time unit, regardless of the timer. In an ideal world this time unit would look like a wall-clock unit and be reasonably precise; in other words, microseconds. But to convert cycles or nanoseconds to microseconds, it would be necessary to perform a division for every instrumentation. Division is expensive on many platforms. Multiplication is not expensive, so that is what is used. Therefore, the time unit is an integer multiple of the highest possible TIMER_FREQUENCY value, using a multiplier large enough to ensure that there is no major precision loss. The result is that the time unit is “picoseconds.” This precision is spurious, but the decision enables overhead to be minimized.
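If you prefer wall-clock units when reading results, you can do the division yourself at retrieval time, where it is cheap. For example, this sketch simply scales picoseconds to seconds:
mysql> SELECT EVENT_NAME, TIMER_WAIT/1000000000000 AS wait_seconds
    -> FROM events_waits_history_long
    -> ORDER BY wait_seconds DESC LIMIT 10;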
Events are processed in a producer/consumer fashion:
Instrumented code is the source for events and produces events to be collected. The setup_instruments table lists the instruments for which events can be collected, whether they are enabled, and whether to collect timing information:
mysql> SELECT * FROM setup_instruments;
+------------------------------------------------------------+---------+-------+
| NAME | ENABLED | TIMED |
+------------------------------------------------------------+---------+-------+
...
| wait/synch/mutex/sql/LOCK_global_read_lock | YES | YES |
| wait/synch/mutex/sql/LOCK_global_system_variables | YES | YES |
| wait/synch/mutex/sql/LOCK_lock_db | YES | YES |
| wait/synch/mutex/sql/LOCK_manager | YES | YES |
...
The setup_instruments table provides the most basic form of control over event production. To further refine event production based on the type of object or thread being monitored, other tables may be used as described in Section 20.2.3.2.1, “Event Pre-Filtering”.
Performance Schema tables are the destinations for events and consume events. The setup_consumers table lists the types of consumers to which event information can be sent and which are enabled:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| events_stages_current | NO |
| events_stages_history | NO |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | NO |
| events_statements_history_long | NO |
| events_waits_current | NO |
| events_waits_history | NO |
| events_waits_history_long | NO |
| global_instrumentation | YES |
| thread_instrumentation | YES |
| statements_digest | YES |
+--------------------------------+---------+
Filtering can be done at different stages of performance monitoring:
Pre-filtering. This is done by modifying Performance Schema configuration so that only certain types of events are collected from producers, and collected events update only certain consumers. This type of filtering is done by the Performance Schema and has a global effect that applies to all users.
Reasons to use pre-filtering:
Pre-filtering reduces overhead. The overhead should be minimal even with all instruments enabled, but perhaps you want to reduce it further. Or you do not care about timing events and want to disable the timing code to eliminate timing overhead.
You can avoid filling the current-events or history tables with events in which you have no interest. Pre-filtering leaves more “room” in these tables for instances of rows for enabled instrument types. If you enable only file instruments with pre-filtering, no rows are collected for nonfile instruments. With post-filtering, nonfile events are collected, leaving fewer rows for file events.
You can avoid maintaining some kinds of event tables. If you disable a consumer, the server does not spend time maintaining it. For example, if you do not care about event histories, you can disable the history table consumers to improve performance.
Post-filtering. This involves the use of WHERE clauses in queries that select information from Performance Schema tables, to specify which of the available events you want to see. This type of filtering is performed on a per-user basis because individual users select which of the available events are of interest.
Reasons to use post-filtering:
To avoid making decisions for individual users about which event information is of interest.
To use the Performance Schema to investigate a performance issue when the restrictions to impose using pre-filtering are not known in advance.
The following sections provide more detail about pre-filtering and provide guidelines for naming instruments or consumers in filtering operations. For information about writing queries to retrieve information (post-filtering), see Section 20.3, “Performance Schema Queries”.
Pre-filtering is done by the Performance Schema and has a global effect that applies to all users. Pre-filtering can be applied to either the producer or consumer stage of event processing:
To configure pre-filtering at the producer stage, several tables can be used:
The setup_instruments table indicates which instruments are available. An instrument disabled in this table produces no events regardless of the contents of the other production-related setup tables. An instrument enabled in this table is permitted to produce events, subject to the contents of the other tables.
The setup_objects table determines whether particular objects are monitored. Currently, the Performance Schema uses it to control monitoring of table objects.
The threads table indicates whether monitoring is enabled for each server thread.
The setup_actors table determines the initial monitoring state for new foreground threads.
To configure pre-filtering at the consumer stage, modify the setup_consumers table. This determines the destinations to which events are sent. The setup_consumers table also implicitly affects event production. If a given event will not be sent to any destination (that is, will not be consumed), the Performance Schema does not produce it.
Modifications to any of these tables affect monitoring immediately, with the exception of setup_actors. Modifications to setup_actors affect only foreground threads created thereafter.
When you change the monitoring configuration, the Performance Schema does not flush the history tables. Events already collected remain in the current-events and history tables until displaced by newer events. If you disable instruments, you might need to wait a while before events for them are displaced by newer events of interest. Alternatively, use TRUNCATE TABLE to empty the history tables.
After making instrumentation changes, you might want to truncate the summary tables as well to clear aggregate information for previously collected events. Except for events_statements_summary_by_digest, the effect of TRUNCATE TABLE for summary tables is to reset the summary columns to 0 or NULL, not to remove rows.
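For example, to clear the aggregated wait statistics used in earlier examples (one of several summary tables you might reset):
mysql> TRUNCATE TABLE events_waits_summary_global_by_event_name;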
The following sections describe how to use specific tables to control how the Performance Schema performs pre-filtering.
The setup_instruments table lists the available instruments:
mysql> SELECT * FROM setup_instruments;
+------------------------------------------------------------+---------+-------+
| NAME | ENABLED | TIMED |
+------------------------------------------------------------+---------+-------+
...
| wait/synch/mutex/sql/LOCK_global_read_lock | YES | YES |
| wait/synch/mutex/sql/LOCK_global_system_variables | YES | YES |
| wait/synch/mutex/sql/LOCK_lock_db | YES | YES |
| wait/synch/mutex/sql/LOCK_manager | YES | YES |
...
| wait/synch/rwlock/sql/LOCK_grant | YES | YES |
| wait/synch/rwlock/sql/LOGGER::LOCK_logger | YES | YES |
| wait/synch/rwlock/sql/LOCK_sys_init_connect | YES | YES |
| wait/synch/rwlock/sql/LOCK_sys_init_slave | YES | YES |
...
| wait/io/file/sql/binlog | YES | YES |
| wait/io/file/sql/binlog_index | YES | YES |
| wait/io/file/sql/casetest | YES | YES |
| wait/io/file/sql/dbopt | YES | YES |
...
To control whether an instrument is enabled, set its ENABLED column to YES or NO. To configure whether to collect timing information for an instrument, set its TIMED value to YES or NO. Setting the TIMED column affects Performance Schema table contents as described in Section 20.2.3.1, “Performance Schema Event Timing”.
The following examples demonstrate possible operations on the setup_instruments table. These changes, like other pre-filtering operations, affect all users.
Disable all instruments:
mysql> UPDATE setup_instruments SET ENABLED = 'NO';
Now no events will be collected.
Disable all file instruments, adding them to the current set of disabled instruments:
mysql> UPDATE setup_instruments SET ENABLED = 'NO'
    -> WHERE NAME LIKE 'wait/io/file/%';
Disable only file instruments, enable all other instruments:
mysql> UPDATE setup_instruments
    -> SET ENABLED = IF(NAME LIKE 'wait/io/file/%', 'NO', 'YES');
The preceding queries use the LIKE operator and the pattern 'wait/io/file/%' to match all instrument names that begin with 'wait/io/file/'. For additional information about specifying patterns to select instruments, see Section 20.2.3.2.2, “Naming Instruments or Consumers for Filtering Operations”.
Enable all but those instruments in the mysys library:
mysql> UPDATE setup_instruments
    -> SET ENABLED = CASE WHEN NAME LIKE '%/mysys/%' THEN 'YES' ELSE 'NO' END;
Disable a specific instrument:
mysql> UPDATE setup_instruments SET ENABLED = 'NO'
    -> WHERE NAME = 'wait/synch/mutex/mysys/TMPDIR_mutex';
To toggle the state of an instrument, “flip” its ENABLED value:
mysql> UPDATE setup_instruments
    -> SET ENABLED = IF(ENABLED = 'YES', 'NO', 'YES')
    -> WHERE NAME = 'wait/synch/mutex/mysys/TMPDIR_mutex';
Disable timing for all events:
mysql> UPDATE setup_instruments SET TIMED = 'NO';
Modifications to the setup_instruments table affect monitoring immediately.
The setup_objects table controls whether particular objects are monitored. Currently, the Performance Schema uses it to control monitoring of table objects. The initial setup_objects contents look like this:
mysql> SELECT * FROM setup_objects;
+-------------+--------------------+-------------+---------+-------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | ENABLED | TIMED |
+-------------+--------------------+-------------+---------+-------+
| TABLE | mysql | % | NO | NO |
| TABLE | performance_schema | % | NO | NO |
| TABLE | information_schema | % | NO | NO |
| TABLE | % | % | YES | YES |
+-------------+--------------------+-------------+---------+-------+
The OBJECT_SCHEMA and OBJECT_NAME columns should contain a literal schema or table name, or '%' to match any name.
When the Performance Schema checks for a match in setup_objects, it tries to find more specific matches first. For example, with a table db1.t1, it looks for a match for 'db1' and 't1', then for 'db1' and '%', then for '%' and '%'. The order in which matching occurs matters because different setup_objects rows can have different ENABLED and TIMED values.
The effect of the default object configuration is to instrument all tables except those in the mysql, INFORMATION_SCHEMA, and performance_schema databases. Tables in the INFORMATION_SCHEMA database are not instrumented regardless of the contents of setup_objects; the row for information_schema.% simply makes this default explicit.
For table-related events, the Performance Schema combines the contents of setup_objects with setup_instruments to determine whether to enable instruments and whether to time enabled instruments:
For tables that match a row in setup_objects, table instruments produce events only if they are enabled in both setup_instruments and setup_objects.
The TIMED values in the two tables are combined, so that timing information is collected only when both values are YES.
Suppose that setup_objects contains the following rows:
+-------------+---------------+-------------+---------+-------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | ENABLED | TIMED |
+-------------+---------------+-------------+---------+-------+
| TABLE       | db1           | t1          | YES     | YES   |
| TABLE       | db1           | t2          | YES     | NO    |
| TABLE       | db2           | %           | YES     | YES   |
| TABLE       | db3           | %           | YES     | NO    |
| TABLE       | %             | %           | YES     | YES   |
+-------------+---------------+-------------+---------+-------+
If a table-related instrument in setup_instruments has a TIMED value of NO, no events for the instrument are timed. If the TIMED value is YES, event timing occurs as follows:
db1.t1 events are timed
db1.t2 events are not timed
db2.t3 events are timed
db3.t4 events are not timed
db4.t5 events are timed
If a persistent table and a temporary table have the same name, matching against setup_objects rows occurs the same way for both. It is not possible to enable monitoring for one table but not the other. However, each table is instrumented separately.
Modifications to the setup_objects table affect object monitoring immediately.
The ENABLED column was added in MySQL 5.6.3. For earlier versions that have no ENABLED column, setup_objects is used only to enable monitoring for objects that match some row in the table. There is no way to explicitly disable instrumentation with the table.
The threads table contains a row for each server thread. Each row contains information about a thread and indicates whether monitoring is enabled for it. For the Performance Schema to monitor a thread, these things must be true:
The thread_instrumentation consumer in the setup_consumers table must be YES.
The threads.INSTRUMENTED column must be YES.
Monitoring occurs only for those thread events produced from instruments that are enabled, as specified in the setup_instruments table.
The INSTRUMENTED column in the threads table indicates the monitoring state for each thread. For foreground threads (resulting from client connections), the initial INSTRUMENTED value is determined by whether the user account associated with the thread matches any row in the setup_actors table. For background threads, INSTRUMENTED is YES by default. setup_actors is not consulted because there is no associated user for background threads. For any thread, its INSTRUMENTED value can be changed during the life of the thread.
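For example, assuming you know a thread's THREAD_ID (25 here is an arbitrary value), you could switch off monitoring for just that thread:
mysql> UPDATE threads SET INSTRUMENTED = 'NO'
    -> WHERE THREAD_ID = 25;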
The initial setup_actors contents look like this:
mysql> SELECT * FROM setup_actors;
+------+------+------+
| HOST | USER | ROLE |
+------+------+------+
| % | % | % |
+------+------+------+
The Performance Schema uses the HOST and USER columns to match each new foreground thread. (ROLE is unused.) The INSTRUMENTED value for the thread becomes YES if any row matches. This enables instrumenting to be applied selectively per host, user, or combination of host and user.
The HOST and USER columns should contain a literal host or user name, or '%' to match any name. By default, monitoring is enabled for all new foreground threads because the setup_actors table initially contains a row with '%' for both HOST and USER. To perform more limited matching such as to enable monitoring only for some foreground threads, you must delete this row because it matches any connection.
Suppose that you modify setup_actors as follows:
DELETE FROM setup_actors;
Now setup_actors is empty and there are no rows that could match incoming connections. Consequently, the Performance Schema will set the INSTRUMENTED column for all new foreground threads to NO.
Suppose that you further modify setup_actors:
INSERT INTO setup_actors (HOST,USER,ROLE) VALUES('localhost','joe','%');
INSERT INTO setup_actors (HOST,USER,ROLE) VALUES('%','sam','%');
Now the Performance Schema determines how to set the INSTRUMENTED value for new connection threads as follows:
If joe connects from the local host, the connection matches the first inserted row.
If joe connects from any other host, there is no match.
If sam connects from any host, the connection matches the second inserted row.
For any other connection, there is no match.
Modifications to the setup_actors table do not affect existing threads.
The setup_consumers table lists the available consumer types and which are enabled:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| events_stages_current | NO |
| events_stages_history | NO |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | NO |
| events_statements_history_long | NO |
| events_waits_current | NO |
| events_waits_history | NO |
| events_waits_history_long | NO |
| global_instrumentation | YES |
| thread_instrumentation | YES |
| statements_digest | YES |
+--------------------------------+---------+
Modify the setup_consumers table to affect pre-filtering at the consumer stage and determine the destinations to which events are sent. To enable or disable a consumer, set its ENABLED value to YES or NO.
Modifications to the setup_consumers table affect monitoring immediately.
If you disable a consumer, the server does not spend time maintaining it. For example, you can disable history consumers if you do not care about historical event information:
mysql> UPDATE setup_consumers
    -> SET ENABLED = 'NO' WHERE NAME LIKE '%history%';
The consumer settings in the setup_consumers table form a hierarchy from higher levels to lower. The following principles apply:
Destinations associated with a consumer receive no events unless the Performance Schema checks the consumer and the consumer is enabled.
A consumer is checked only if all consumers that it depends on (if any) are enabled.
If a consumer is not checked, or is checked but is disabled, other consumers that depend on it are not checked.
Dependent consumers may have their own dependent consumers.
If an event would not be sent to any destination, the Performance Schema does not produce it.
The following lists describe the available consumer values. For discussion of several representative consumer configurations and their effect on instrumentation, see Section 20.2.3.2.1.5, “Example Consumer Configurations”.
Global and Thread Consumers
global_instrumentation
is the
highest level consumer. If
global_instrumentation
is
NO
, it disables global
instrumentation. All other settings are lower level
and are not checked; it does not matter what they are
set to. No global or per thread information is
maintained and no individual events are collected in
the current-events or event-history tables. If
global_instrumentation
is
YES
, the Performance Schema
maintains information for global states and also
checks the thread_instrumentation
consumer.
thread_instrumentation
is checked
only if global_instrumentation
is
YES
. Otherwise, if
thread_instrumentation
is
NO
, it disables thread-specific
instrumentation and all lower-level settings are
ignored. No information is maintained per thread and
no individual events are collected in the
current-events or event-history tables. If
thread_instrumentation
is
YES
, the Performance Schema
maintains thread-specific information and also checks
events_xxx_current consumers.
Statement Digest Consumer
This consumer requires
global_instrumentation
to be
YES
or it is not checked. There is no
dependency on the Statement Event consumers. This means
you can obtain statistics per digest without having to
collect statistics in
events_statements_current
,
which is advantageous in terms of overhead.
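For example, with global_instrumentation and statements_digest enabled but the statement event consumers disabled, a query such as the following (a sketch using columns present in the MySQL 5.6 digest summary table) still returns aggregated per-digest statistics:
mysql> SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
    -> FROM events_statements_summary_by_digest
    -> ORDER BY SUM_TIMER_WAIT DESC LIMIT 5;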
Wait Event Consumers
These consumers require both
global_instrumentation
and
thread_instrumentation
to be
YES
or they are not checked.
events_waits_current
, if
NO
, disables collection of
individual wait events in the
events_waits_current
table. If YES
, it enables wait
event collection and the Performance Schema checks the
events_waits_history
and
events_waits_history_long
consumers.
events_waits_history
is not checked
if events_waits_current
is
NO
. Otherwise, an
events_waits_history
value of
NO
or YES
disables or enables collection of wait events in the
events_waits_history
table.
events_waits_history_long
is not
checked if events_waits_current
is
NO
. Otherwise, an
events_waits_history_long
value of
NO
or YES
disables or enables collection of wait events in the
events_waits_history_long
table.
Stage Event Consumers
These consumers require both
global_instrumentation
and
thread_instrumentation
to be
YES
or they are not checked.
events_stages_current
, if
NO
, disables collection of
individual stage events in the
events_stages_current
table. If YES
, it enables stage
event collection and the Performance Schema checks the
events_stages_history
and
events_stages_history_long
consumers.
events_stages_history
is not
checked if events_stages_current
is
NO
. Otherwise, an
events_stages_history
value of
NO
or YES
disables or enables collection of stage events in the
events_stages_history
table.
events_stages_history_long
is not
checked if events_stages_current
is
NO
. Otherwise, an
events_stages_history_long
value of
NO
or YES
disables or enables collection of stage events in the
events_stages_history_long
table.
Statement Event Consumers
These consumers require both
global_instrumentation
and
thread_instrumentation
to be
YES
or they are not checked.
events_statements_current
, if
NO
, disables collection of
individual statement events in the
events_statements_current
table. If YES
, it enables statement
event collection and the Performance Schema checks the
events_statements_history
and
events_statements_history_long
consumers.
events_statements_history
is not
checked if
events_statements_current
is
NO
. Otherwise, an
events_statements_history
value of
NO
or YES
disables or enables collection of statement events in
the
events_statements_history
table.
events_statements_history_long
is
not checked if
events_statements_current
is
NO
. Otherwise, an
events_statements_history_long
value of NO
or
YES
disables or enables collection
of statement events in the
events_statements_history_long
table.
The consumer settings in the
setup_consumers
table form a
hierarchy from higher levels to lower. The following
discussion describes how consumers work, showing specific
configurations and their effects as consumer settings are
enabled progressively from high to low. The consumer
values shown are representative. The general principles
described here apply to other consumer values that may be
available.
The configuration descriptions occur in order of increasing functionality and overhead. If you do not need the information provided by enabling lower-level settings, disable them and the Performance Schema will execute less code on your behalf and you will have less information to sift through.
Suppose that the
setup_consumers
table
contains the following hierarchy of values:
global_instrumentation
 thread_instrumentation
   events_waits_current
     events_waits_history
     events_waits_history_long
If a given consumer setting is NO
, the
Performance Schema disables the instrumentation associated
with the consumer and ignores all lower-level settings. If
a given setting is YES
, the Performance
Schema enables the instrumentation associated with it and
checks the settings at the next lowest level.
Each configuration description indicates which setup elements the Performance Schema checks and which output tables it maintains (that is, for which tables it collects information).
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | NO |
...
+---------------------------+---------+
In this configuration, nothing is instrumented.
Setup elements checked:
Table setup_consumers
,
consumer global_instrumentation
Output tables maintained:
None
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | NO |
...
+---------------------------+---------+
In this configuration, instrumentation is maintained only for global states. Per-thread instrumentation is disabled.
Additional setup elements checked, relative to the preceding configuration:
Table setup_consumers
,
consumer thread_instrumentation
Table setup_instruments
Table setup_objects
Table setup_timers
Additional output tables maintained, relative to the preceding configuration:
table_lock_waits_summary_by_table
table_io_waits_summary_by_index_usage
table_io_waits_summary_by_table
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | NO |
...
+---------------------------+---------+
In this configuration, instrumentation is maintained globally and per thread. No individual wait events are collected in the current-events or event-history tables.
Additional setup elements checked, relative to the preceding configuration:
Table setup_consumers
,
consumer events_waits_current
Table setup_actors
Column threads.instrumented
Additional output tables maintained, relative to the preceding configuration:
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | NO |
| events_waits_history_long | NO |
...
+---------------------------+---------+
In this configuration, instrumentation is maintained globally and per thread. Individual wait events are collected in the current-events table, but not in the event-history tables.
Additional setup elements checked, relative to the preceding configuration:
Consumer
events_waits_history
Consumer
events_waits_history_long
Additional output tables maintained, relative to the preceding configuration:
events_waits_current
The preceding configuration collects no wait event history
because the events_waits_history
and
events_waits_history_long
consumers are
disabled. Those consumers can be enabled separately or
together to collect event history per thread, globally, or
both.
This configuration collects event history per thread, but not globally:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | YES |
| events_waits_history_long | NO |
...
+---------------------------+---------+
Event-history tables maintained for this configuration:
events_waits_history
This configuration collects event history globally, but not per thread:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | NO |
| events_waits_history_long | YES |
...
+---------------------------+---------+
Event-history tables maintained for this configuration:
events_waits_history_long
This configuration collects event history per thread and globally:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | YES |
| events_waits_history_long | YES |
...
+---------------------------+---------+
Event-history tables maintained for this configuration:
events_waits_history
events_waits_history_long
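To reach this last configuration from the default settings, you could enable the entire chain of wait consumers with a single statement such as the following (shown only as an illustration; the pattern matches the three events_waits consumers listed earlier):
mysql> UPDATE setup_consumers
    -> SET ENABLED = 'YES'
    -> WHERE NAME LIKE 'events_waits%';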
Names given for filtering operations can be as specific or general as required. To indicate a single instrument or consumer, specify its name in full:
mysql> UPDATE setup_instruments
    -> SET ENABLED = 'NO'
    -> WHERE NAME = 'wait/synch/mutex/myisammrg/MYRG_INFO::mutex';
mysql> UPDATE setup_consumers
    -> SET ENABLED = 'NO' WHERE NAME = 'events_waits_current';
To specify a group of instruments or consumers, use a pattern that matches the group members:
mysql> UPDATE setup_instruments
    -> SET ENABLED = 'NO'
    -> WHERE NAME LIKE 'wait/synch/mutex/%';
mysql> UPDATE setup_consumers
    -> SET ENABLED = 'NO' WHERE NAME LIKE '%history%';
If you use a pattern, it should be chosen so that it matches all the items of interest and no others. For example, to select all file I/O instruments, it is better to use a pattern that includes the entire instrument name prefix:
... WHERE NAME LIKE 'wait/io/file/%';
A pattern of '%/file/%'
will match other
instruments that have a component of
'/file/'
anywhere in the name. Even less
suitable is the pattern '%file%'
because
it will match instruments with 'file'
anywhere in the name, such as
wait/synch/mutex/sql/LOCK_des_key_file
.
To check which instrument or consumer names a pattern matches, perform a simple test:
mysql> SELECT NAME FROM setup_instruments WHERE NAME LIKE 'pattern';
mysql> SELECT NAME FROM setup_consumers WHERE NAME LIKE 'pattern';
For information about the types of names that are supported, see Section 20.4, “Performance Schema Instrument Naming Conventions”.
It is always possible to determine what instruments the
Performance Schema includes by checking the
setup_instruments
table. For
example, to see what file-related events are instrumented for
the InnoDB
storage engine, use this query:
mysql> SELECT * FROM setup_instruments WHERE NAME LIKE 'wait/io/file/innodb/%';
+--------------------------------------+---------+-------+
| NAME | ENABLED | TIMED |
+--------------------------------------+---------+-------+
| wait/io/file/innodb/innodb_data_file | YES | YES |
| wait/io/file/innodb/innodb_log_file | YES | YES |
| wait/io/file/innodb/innodb_temp_file | YES | YES |
+--------------------------------------+---------+-------+
An exhaustive description of precisely what is instrumented is not given in this documentation, for several reasons:
What is instrumented is the server code. Changes to this code occur often, which also affects the set of instruments.
It is not practical to list all the instruments because there are hundreds of them.
As described earlier, it is possible to find out by
querying the
setup_instruments
table. This
information is always up to date for your version of
MySQL, also includes instrumentation for instrumented
plugins you might have installed that are not part of the
core server, and can be used by automated tools.
Pre-filtering limits which event information is collected and is
independent of any particular user. By contrast, post-filtering is
performed by individual users through the use of queries with
appropriate WHERE
clauses that restrict what
event information to select from the events available after
pre-filtering has been applied.
In Section 20.2.3.2.1, “Event Pre-Filtering”, an example
showed how to pre-filter for file instruments. If the event tables
contain both file and nonfile information, post-filtering is
another way to see information only for file events. Add a
WHERE
clause to queries to restrict event
selection appropriately:
mysql> SELECT THREAD_ID, NUMBER_OF_BYTES
    -> FROM events_waits_history
    -> WHERE EVENT_NAME LIKE 'wait/io/file/%'
    -> AND NUMBER_OF_BYTES IS NOT NULL;
+-----------+-----------------+
| THREAD_ID | NUMBER_OF_BYTES |
+-----------+-----------------+
|        11 |              66 |
|        11 |              47 |
|        11 |             139 |
|         5 |              24 |
|         5 |             834 |
+-----------+-----------------+
An instrument name consists of a sequence of components separated
by '/'
characters. Example names:
wait/io/file/myisam/log
wait/io/file/mysys/charset
wait/lock/table/sql/handler
wait/synch/cond/mysys/COND_alarm
wait/synch/cond/sql/BINLOG::update_cond
wait/synch/mutex/mysys/BITMAP_mutex
wait/synch/mutex/sql/LOCK_delete
wait/synch/rwlock/sql/Query_cache_query::lock
stage/sql/closing tables
stage/sql/Sorting result
statement/com/Execute
statement/com/Query
statement/sql/create_table
statement/sql/lock_tables
The instrument name space has a tree-like structure. The components of an instrument name from left to right provide a progression from more general to more specific. The number of components a name has depends on the type of instrument.
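Because names form a left-to-right hierarchy, the leading components can be used to group instruments. The following query is a sketch that counts the instruments registered under each top-level component:
mysql> SELECT SUBSTRING_INDEX(NAME, '/', 1) AS top_level, COUNT(*) AS instruments
    -> FROM setup_instruments
    -> GROUP BY top_level;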
The interpretation of a given component in a name depends on the
components to the left of it. For example,
myisam
appears in both of the following names,
but myisam
in the first name is related to file
I/O, whereas in the second it is related to a synchronization
instrument:
wait/io/file/myisam/log
wait/synch/cond/myisam/MI_SORT_INFO::cond
Instrument names consist of a prefix with a structure defined by
the Performance Schema implementation and a suffix defined by the
developer implementing the instrument code. The top-level
component of an instrument prefix indicates the type of
instrument. This component also determines which event timer in
the setup_timers
table applies to the
instrument. For the prefix part of instrument names, the top level
indicates the type of instrument.
The suffix part of instrument names comes from the code for the instruments themselves. Suffixes may include levels such as these:
A name for the major component (a server module such as
myisam
, innodb
,
mysys
, or sql
) or a
plugin name.
The name of a variable in the code, in the form XXX (a global variable) or CCC::MMM (a member MMM in class CCC). Examples: COND_thread_cache, THR_LOCK_myisam, BINLOG::LOCK_index.
Top-Level Instrument Components
idle
: An instrumented idle event. This
instrument has no further components.
stage
: An instrumented stage event.
statement
: An instrumented statement event.
wait
: An instrumented wait event.
Idle Instrument Components
idle
The idle instrument. The Performance Schema generates idle
events as discussed in the description of the
socket_instances.STATE
column in
Section 20.9.2.5, “The socket_instances
Table”.
Stage Instrument Components
Stage instruments have names of the form stage/code_area/stage_name, where code_area is a value such as sql or myisam, and stage_name indicates the stage of statement processing, such as Sorting result or Sending data. Stages correspond to the thread states displayed by SHOW PROCESSLIST or that are visible in the INFORMATION_SCHEMA.PROCESSLIST table.
Statement Instrument Components
statement/com: An instrumented command operation. These have names corresponding to COM_xxx operations (see the mysql_com.h header file and sql/sql_parse.cc). For example, the statement/com/Connect and statement/com/Fetch instruments correspond to the COM_CONNECT and COM_FETCH commands.
statement/sql
: An instrumented SQL
statement operation. For example, the
statement/sql/create_db
and
statement/sql/select
instruments are used
for CREATE DATABASE
and
SELECT
statements.
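To see which statement instruments exist on your server, query setup_instruments for the statement prefix; the following is only an illustrative query:
mysql> SELECT NAME, ENABLED, TIMED
    -> FROM setup_instruments
    -> WHERE NAME LIKE 'statement/sql/%' LIMIT 5;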
Wait Instrument Components
wait/io
An instrumented I/O operation.
wait/io/file
An instrumented file I/O operation. For files, the wait is
the time waiting for the file operation to complete (for
example, a call to fwrite()
). Due to
caching, the physical file I/O on the disk might not
happen within this call.
wait/io/socket
An instrumented socket operation. Socket instruments have names of the form wait/io/socket/sql/socket_type. The server has a listening socket for each network protocol that it supports. The instruments associated with listening sockets for TCP/IP or Unix socket file connections have a socket_type value of server_tcpip_socket or server_unix_socket, respectively. When
a listening socket detects a connection, the server
transfers the connection to a new socket managed by a
separate thread. The instrument for the new connection
thread has a socket_type
value
of client_connection
.
wait/io/table
An instrumented table I/O operation. These include row-level accesses to persistent base tables or temporary tables. Operations that affect rows are fetch, insert, update, and delete. For a view, waits are associated with base tables referenced by the view.
Unlike most waits, a table I/O wait can include other
waits. For example, table I/O might include file I/O or
memory operations. Thus,
events_waits_current
for a
table I/O wait usually has two rows. For more information,
see
Section 20.6, “Performance Schema Atom and Molecule Events”.
Some row operations might cause multiple table I/O waits. For example, an insert might activate a trigger that causes an update.
wait/lock
An instrumented lock operation.
wait/lock/table
An instrumented table lock operation.
wait/synch
An instrumented synchronization object. For synchronization
objects, the TIMER_WAIT
time includes the
amount of time blocked while attempting to acquire a lock on
the object, if any.
wait/synch/cond
A condition is used by one thread to signal to other threads that something they were waiting for has happened. If a single thread was waiting for a condition, it can wake up and proceed with its execution. If several threads were waiting, they can all wake up and compete for the resource for which they were waiting.
wait/synch/mutex
A mutual exclusion object used to permit access to a resource (such as a section of executable code) while preventing other threads from accessing the resource.
wait/synch/rwlock
A read/write lock object used to lock a specific variable for access while preventing its use by other threads. A shared read lock can be acquired simultaneously by multiple threads. An exclusive write lock can be acquired by only one thread at a time.
There are several status variables associated with the Performance Schema:
mysql> SHOW STATUS LIKE 'perf%';
+------------------------------------------+-------+
| Variable_name | Value |
+------------------------------------------+-------+
| Performance_schema_cond_classes_lost | 0 |
| Performance_schema_cond_instances_lost | 0 |
| Performance_schema_file_classes_lost | 0 |
| Performance_schema_file_handles_lost | 0 |
| Performance_schema_file_instances_lost | 0 |
| Performance_schema_locker_lost | 0 |
| Performance_schema_mutex_classes_lost | 0 |
| Performance_schema_mutex_instances_lost | 0 |
| Performance_schema_rwlock_classes_lost | 0 |
| Performance_schema_rwlock_instances_lost | 0 |
| Performance_schema_table_handles_lost | 0 |
| Performance_schema_table_instances_lost | 0 |
| Performance_schema_thread_classes_lost | 0 |
| Performance_schema_thread_instances_lost | 0 |
+------------------------------------------+-------+
The Performance Schema status variables provide information about instrumentation that could not be loaded or created due to memory constraints. Names for these variables have several forms:
Performance_schema_xxx_classes_lost indicates how many instruments of type xxx could not be loaded.
Performance_schema_xxx_instances_lost indicates how many instances of object type xxx could not be created.
Performance_schema_xxx_handles_lost indicates how many instances of object type xxx could not be opened.
Performance_schema_locker_lost
indicates
how many events are “lost” or not recorded.
For example, if a mutex is instrumented in the server source but
the server cannot allocate memory for the instrumentation at
runtime, it increments
Performance_schema_mutex_classes_lost
.
The mutex still functions as a synchronization object (that is,
the server continues to function normally), but performance data
for it will not be collected. If the instrument can be allocated,
it can be used for initializing instrumented mutex instances. For
a singleton mutex such as a global mutex, there will be only one
instance. Other mutexes have an instance per connection, or per
page in various caches and data buffers, so the number of
instances varies over time. Increasing the maximum number of
connections or the maximum size of some buffers will increase the
maximum number of instances that might be allocated at once. If
the server cannot create a given instrumented mutex instance, it
increments
Performance_schema_mutex_instances_lost
.
Suppose that the following conditions hold:
The server was started with the
--performance_schema_max_mutex_classes=200
option and thus has room for 200 mutex instruments.
150 mutex instruments have been loaded already.
The plugin named plugin_a
contains 40 mutex
instruments.
The plugin named plugin_b
contains 20 mutex
instruments.
The server allocates mutex instruments for the plugins depending on how many they need and how many are available, as illustrated by the following sequence of statements:
INSTALL PLUGIN plugin_a;
The server now has 150+40 = 190 mutex instruments.
UNINSTALL PLUGIN plugin_a;
The server still has 190 instruments. All the historical data generated by the plugin code is still available, but new events for the instruments are not collected.
INSTALL PLUGIN plugin_a;
The server detects that the 40 instruments are already defined, so no new instruments are created, and previously assigned internal memory buffers are reused. The server still has 190 instruments.
INSTALL PLUGIN plugin_b;
The server has room for 200-190 = 10 instruments (in this case,
mutex classes), and sees that the plugin contains 20 new
instruments. 10 instruments are loaded, and 10 are discarded or
“lost.” The
Performance_schema_mutex_classes_lost
status variable indicates the number of instruments (mutex classes) lost:
mysql> SHOW STATUS LIKE "perf%mutex_classes_lost";
+---------------------------------------+-------+
| Variable_name | Value |
+---------------------------------------+-------+
| Performance_schema_mutex_classes_lost | 10 |
+---------------------------------------+-------+
1 row in set (0.10 sec)
The instrumentation still works and collects (partial) data for
plugin_b
.
When the server cannot create a mutex instrument, these results occur:
No row for the instrument is inserted into the
setup_instruments
table.
Performance_schema_mutex_classes_lost
increases by 1.
Performance_schema_mutex_instances_lost
does not change. (When the mutex instrument is not created, it
cannot be used to create instrumented mutex instances later.)
The pattern just described applies to all types of instruments, not just mutexes.
A value of
Performance_schema_mutex_classes_lost
greater than 0 can happen in two cases:
To save a few bytes of memory, you start the server with
--performance_schema_max_mutex_classes=N, where N is less than the default
value. The default value is chosen to be sufficient to load
all the plugins provided in the MySQL distribution, but this
can be reduced if some plugins are never loaded. For example,
you might choose not to load some of the storage engines in
the distribution.
You load a third-party plugin that is instrumented for the
Performance Schema but do not allow for the plugin's
instrumentation memory requirements when you start the server.
Because it comes from a third party, the instrument memory
consumption of this engine is not accounted for in the default
value chosen for
performance_schema_max_mutex_classes
.
If the server has insufficient resources for the plugin's
instruments and you do not explicitly allocate more using
--performance_schema_max_mutex_classes=N, loading the plugin leads to starvation of instruments.
If the value chosen for
performance_schema_max_mutex_classes
is too small, no error is reported in the error log and there is
no failure at runtime. However, the content of the tables in the
performance_schema
database will miss events.
The
Performance_schema_mutex_classes_lost
status variable is the only visible sign to indicate that some
events were dropped internally due to failure to create
instruments.
If an instrument is not lost, it is known to the Performance
Schema, and is used when instrumenting instances. For example,
wait/synch/mutex/sql/LOCK_delete
is the name of
a mutex instrument in the
setup_instruments
table. This single
instrument is used when creating in the code (in
THD::LOCK_delete
) however many instances of the
mutex are needed as the server runs. In this case,
LOCK_delete
is a mutex that is per connection
(THD
), so if a server has 1000 connections,
there are 1000 threads, and 1000 instrumented
LOCK_delete
mutex instances
(THD::LOCK_delete
).
If the server does not have room for all these 1000 instrumented
mutexes (instances), some mutexes are created with
instrumentation, and some are created without instrumentation. If
the server can create only 800 instances, 200 instances are lost.
The server continues to run, but increments
Performance_schema_mutex_instances_lost
by 200 to indicate that instances could not be created.
A value of
Performance_schema_mutex_instances_lost
greater than 0 can happen when the code initializes more mutexes
at runtime than were allocated for
--performance_schema_max_mutex_instances=N.
The bottom line is that if
SHOW STATUS LIKE
'perf%'
says that nothing was lost (all values are
zero), the Performance Schema data is accurate and can be relied
upon. If something was lost, the data is incomplete, and the
Performance Schema could not record everything given the
insufficient amount of memory it was given to use. In this case,
the specific
Performance_schema_xxx_lost variable indicates the problem area.
It might be appropriate in some cases to cause deliberate instrument starvation. For example, if you do not care about performance data for file I/O, you can start the server with all Performance Schema parameters related to file I/O set to 0. No memory will be allocated for file-related classes, instances, or handles, and all file events will be lost.
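As a sketch of such deliberate starvation, the server could be started with the file-related sizing parameters set to 0 (the option names are Performance Schema system variables; the values shown are illustrative only):
shell> mysqld --performance_schema_max_file_classes=0 \
         --performance_schema_max_file_instances=0 \
         --performance_schema_max_file_handles=0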
Use SHOW ENGINE
PERFORMANCE_SCHEMA STATUS
to inspect the internal
operation of the Performance Schema code:
mysql> SHOW ENGINE PERFORMANCE_SCHEMA STATUS\G
...
*************************** 3. row ***************************
Type: performance_schema
Name: events_waits_history.row_size
Status: 76
*************************** 4. row ***************************
Type: performance_schema
Name: events_waits_history.row_count
Status: 10000
*************************** 5. row ***************************
Type: performance_schema
Name: events_waits_history.memory
Status: 760000
...
*************************** 57. row ***************************
Type: performance_schema
Name: performance_schema.memory
Status: 26459600
...
The intent of this statement is to help the DBA to understand the
effects that different options have on memory requirements. For a
description of the field meanings, see
Section 13.7.5.16, “SHOW ENGINE
Syntax”.
For a table I/O event, there are usually two rows in
events_waits_current
, not one. For
example, a row fetch might result in rows like this:
Row# EVENT_NAME                 TIMER_START TIMER_END
---- ----------                 ----------- ---------
   1 wait/io/file/myisam/dfile        10001     10002
   2 wait/io/table/sql/handler        10000      NULL
The row fetch causes a file read. In the example, the table I/O
fetch event started before the file I/O event but has not finished
(its TIMER_END
value is
NULL
). The file I/O event is
“nested” within the table I/O event.
This occurs because, unlike other “atomic” wait
events such as for mutexes or file I/O, table I/O events are
“molecular” and include (overlap with) other events.
In events_waits_current
, the table
I/O event usually has two rows:
One row for the most recent table I/O wait event
One row for the most recent wait event of any kind
Usually, but not always, the “of any kind” wait event
differs from the table I/O event. As each subsidiary event
completes, it disappears from
events_waits_current
. At this point,
and until the next subsidiary event begins, the table I/O wait is
also the most recent wait of any kind.
As of MySQL 5.6.5, the Performance Schema maintains statement digest information. Digesting converts a SQL statement to normalized form and computes a hash value for the result. Normalization permits statements that are similar to be grouped and summarized to expose information about the types of statements the server is executing and how often they occur. This section describes how statement normalizing occurs and how it can be useful.
Statement digesting involves these Performance Schema components:
A statement_digest
consumer in the
setup_consumers
table controls
whether the Performance Schema maintains digest information.
The statement event tables
(events_statements_current
,
events_statements_history
, and
events_statements_history_long
)
have DIGEST
and
DIGEST_TEXT
columns that contain digest MD5
values and the corresponding normalized statement text
strings.
A
events_statements_summary_by_digest
table provides aggregated statement digest information.
Normalizing a statement transforms the statement text to a more standardized string representation that preserves the general statement structure while removing information not essential to the structure. Object identifiers such as database and table names are preserved. Values and comments are removed, and whitespace is adjusted. The Performance Schema does not retain information such as names, passwords, dates, and so forth.
Consider these statements:
SELECT * FROM orders WHERE customer_id=10 AND quantity>20
SELECT * FROM orders WHERE customer_id = 20 AND quantity > 100
To normalize these statements, the Performance Schema replaces
data values by ?
and adjusts whitespace. Both
statements yield the same normalized form and thus are considered
“the same”:
SELECT * FROM orders WHERE customer_id = ? AND quantity > ?
The normalized statement contains less information but is still representative of the original statement. Other similar statements that have different comparison values have the same normalized form.
Now consider these statements:
SELECT * FROM customers WHERE customer_id = 1000
SELECT * FROM orders WHERE customer_id = 1000
In this case, the statements are not “the same.” The object identifiers differ, so the statements yield different normalized forms:
SELECT * FROM customers WHERE customer_id = ? SELECT * FROM orders WHERE customer_id = ?
Normalized statements have a fixed length. If normalization produces a statement that exceeds this length, the text ends with “...”. Long statements that differ only in the part that occurs following the “...” are considered to be the same. Consider these statements:
SELECT * FROM mytable WHERE cola = 10 AND colb = 20
SELECT * FROM mytable WHERE cola = 10 AND colc = 20
If the cutoff happened to be right after the
AND
, both statements would have this normalized
form:
SELECT * FROM mytable WHERE cola = ? AND ...
In this case, the difference in the second column name is lost and both statements are considered the same.
For each normalized statement, the Performance Schema computes a
hash digest value and stores that value and the statement in the
DIGEST
and DIGEST_TEXT
columns of the statement event tables
(events_statements_current
,
events_statements_history
, and
events_statements_history_long
). In
addition, information for statements with the same
DIGEST
value is aggregated in the
events_statements_summary_by_digest
summary table. The Performance Schema uses MD5 hash values because
they are fast to compute and have a favorable statistical
distribution that minimizes collisions.
The
events_statements_summary_by_digest
summary table has a fixed size, so when it becomes full,
statements that have a DIGEST
value not
matching an existing value in the table are grouped in a special
row with DIGEST
= NULL
. This
permits all statements to be counted. However, if the
DIGEST
= NULL
row accounts
for a significant percentage of the statements executed, it might
be desirable to increase the size of the summary table. To do
this, set the
performance_schema_digests_size
system variable to a larger value at server startup. The default
size is 200.
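For example, to double the number of distinct digests that can be tracked relative to the default, the server could be started with an option such as the following (the value is illustrative only):
shell> mysqld --performance_schema_digests_size=400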
The statement digest summary table provides a profile of the statements executed by the server. It shows what kinds of statements an application is executing and how often. An application developer can use this information together with other information in the table to assess the application's performance characteristics. For example, table columns that show wait times, lock times, or index use may highlight types of queries that are inefficient. This gives the developer insight into which parts of the application need attention.
The name of the performance_schema
database is
lowercase, as are the names of tables within it. Queries should
specify the names in lowercase.
Most tables in the performance_schema
database
are read only and cannot be modified. Some of the setup tables
have columns that can be modified to affect Performance Schema
operation; some also permit rows to be inserted or deleted.
Truncation is permitted to clear collected events, so
TRUNCATE TABLE
can be used on
tables containing those kinds of information, such as tables named
with a prefix of events_waits_
.
TRUNCATE TABLE
can also be used
with summary tables, but except for
events_statements_summary_by_digest
,
the effect is to reset the summary columns to 0 or
NULL
, not to remove rows.
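For example, the following statements (shown only as an illustration) discard collected wait history and reset the aggregates in one of the wait summary tables to their initial values:
mysql> TRUNCATE TABLE events_waits_history_long;
mysql> TRUNCATE TABLE events_waits_summary_global_by_event_name;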
Privileges are as for other databases and tables: you need the SELECT privilege to query performance_schema tables, the UPDATE privilege to change columns that can be modified, and the DROP privilege to truncate tables that can be truncated.
Tables in the performance_schema
database can
be grouped as follows:
Setup tables. These tables are used to configure and display monitoring characteristics.
Current events table. The
events_waits_current
table
contains the most recent event for each thread.
History tables. These tables have the same structure as
events_waits_current
but contain
more rows. The
events_waits_history
table
contains the most recent 10 events per thread.
events_waits_history_long
contains the most recent 10,000 events.
To change the sizes of these tables, set the
performance_schema_events_waits_history_size
and
performance_schema_events_waits_history_long_size
system variables at server startup.
Summary tables. These tables contain information aggregated over groups of events, including those that have been discarded from the history tables.
Instance tables. These tables document what types of objects are instrumented. An instrumented object, when used by the server, produces an event. These tables provide event names and explanatory notes or status information.
Miscellaneous tables. These do not fall into any of the other table groups.
The setup tables provide information about the current
instrumentation and enable the monitoring configuration to be
changed. For this reason, some columns in these tables can be
changed if you have the UPDATE
privilege.
The use of tables rather than individual variables for setup information provides a high degree of flexibility in modifying Performance Schema configuration. For example, you can use a single statement with standard SQL syntax to make multiple simultaneous configuration changes.
These setup tables are available:
setup_actors
: How to initialize
monitoring for new foreground threads
setup_consumers
: The
destinations to which event information can be sent and
stored
setup_instruments
: The classes
of instrumented objects for which events can be collected
setup_objects
: Which objects
should be monitored
setup_timers
: The current event
timer
The setup_actors
table contains
information that determines whether to enable monitoring for
new foreground server threads; that is, threads associated
with client connections. This table has a maximum size of 100
rows by default. The size may be changed by modifying the
performance_schema_setup_actors_size
system variable at server startup.
The setup_actors
table has these
columns:
HOST
The host name. This should be a literal name, or
'%'
to mean “any host.”
USER
The user name. This should be a literal name, or
'%'
to mean “any user.”
ROLE
Unused.
For each new foreground thread, the Performance Schema matches
the user and host for the thread against the rows of the
setup_actors
table. Based on
whether any row matches, the INSTRUMENTED
column of the threads
table row
for the thread is set to YES
or
NO
. This enables instrumenting to be
applied selectively per host, user, or combination of host and
user.
The initial contents of the
setup_actors
table match any user
and host combination, so monitoring for all foreground threads
is enabled by default:
mysql> SELECT * FROM setup_actors;
+------+------+------+
| HOST | USER | ROLE |
+------+------+------+
| % | % | % |
+------+------+------+
Modifications to the setup_actors
table do not affect existing threads.
For more information about the role of the
setup_actors
table in event
monitoring, see
Section 20.2.3.2.1, “Event Pre-Filtering”.
The setup_consumers
table lists
the types of consumers for which event information can be
stored and which are enabled:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| events_stages_current | NO |
| events_stages_history | NO |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | NO |
| events_statements_history_long | NO |
| events_waits_current | NO |
| events_waits_history | NO |
| events_waits_history_long | NO |
| global_instrumentation | YES |
| thread_instrumentation | YES |
+--------------------------------+---------+
The consumer settings in the
setup_consumers
table form a
hierarchy from higher levels to lower. For detailed
information about the effect of enabling different consumers,
see Section 20.2.3.2.1.4, “Pre-Filtering by Consumer”.
The setup_consumers
table has
these columns:
NAME
The consumer name.
ENABLED
Whether the consumer is enabled. This column can be modified. If you disable a consumer, the server does not spend time adding event information to it.
Modifications to the
setup_consumers
table affect
monitoring immediately.
The setup_instruments
table lists
classes of instrumented objects for which events can be
collected:
mysql> SELECT * FROM setup_instruments;
+------------------------------------------------------------+---------+-------+
| NAME | ENABLED | TIMED |
+------------------------------------------------------------+---------+-------+
...
| wait/synch/mutex/sql/LOCK_global_read_lock | YES | YES |
| wait/synch/mutex/sql/LOCK_global_system_variables | YES | YES |
| wait/synch/mutex/sql/LOCK_lock_db | YES | YES |
| wait/synch/mutex/sql/LOCK_manager | YES | YES |
...
| wait/synch/rwlock/sql/LOCK_grant | YES | YES |
| wait/synch/rwlock/sql/LOGGER::LOCK_logger | YES | YES |
| wait/synch/rwlock/sql/LOCK_sys_init_connect | YES | YES |
| wait/synch/rwlock/sql/LOCK_sys_init_slave | YES | YES |
...
| wait/io/file/sql/binlog | YES | YES |
| wait/io/file/sql/binlog_index | YES | YES |
| wait/io/file/sql/casetest | YES | YES |
| wait/io/file/sql/dbopt | YES | YES |
...
Each instrument added to the source code provides a row for
this table, even when the instrumented code is not executed.
When an instrument is enabled and executed, instrumented
instances are created, which are visible in the
*_instances
tables.
The setup_instruments
table has
these columns:
NAME
The instrument name. Instrument names have multiple parts
and form a hierarchy, as discussed in
Section 20.4, “Performance Schema Instrument Naming Conventions”.
Events produced from execution of an instrument have an
EVENT_NAME
value that is taken from the
instrument NAME
value. (Events do not
really have a “name,” but this provides a way
to associate events with instruments.)
ENABLED
Whether the instrument is enabled. This column can be modified. A disabled instrument produces no events.
TIMED
Whether the instrument is timed. This column can be modified.
If an enabled instrument is not timed, the instrument code
is enabled, but the timer is not. Events produced by the
instrument have NULL
for the
TIMER_START
,
TIMER_END
, and
TIMER_WAIT
timer values. This in turn
causes those values to be ignored when calculating the
sum, minimum, maximum, and average time values in summary
tables.
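For example, the following UPDATE (illustrative only) keeps the file I/O instruments enabled, so their events are still counted, but stops collecting timing data for them:
mysql> UPDATE setup_instruments
    -> SET TIMED = 'NO'
    -> WHERE NAME LIKE 'wait/io/file/%';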
Modifications to the
setup_instruments
table affect
monitoring immediately.
For more information about the role of the
setup_instruments
table in event
filtering, see
Section 20.2.3.2.1, “Event Pre-Filtering”.
The setup_objects
table controls
whether particular objects are monitored. This table has a
maximum size of 100 rows by default. The size may be changed
by modifying the
performance_schema_setup_objects_size
system variable at server startup.
The initial setup_objects
contents look like this:
mysql> SELECT * FROM setup_objects;
+-------------+--------------------+-------------+---------+-------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | ENABLED | TIMED |
+-------------+--------------------+-------------+---------+-------+
| TABLE | mysql | % | NO | NO |
| TABLE | performance_schema | % | NO | NO |
| TABLE | information_schema | % | NO | NO |
| TABLE | % | % | YES | YES |
+-------------+--------------------+-------------+---------+-------+
For object types listed in
setup_objects
, the Performance
Schema uses the table to determine how to monitor them. Object matching
is based on the OBJECT_SCHEMA
and
OBJECT_NAME
columns. Objects for which
there is no match are not monitored.
When the Performance Schema checks for a match in
setup_objects
, it tries to find
more specific matches first. For example, with a table
db1.t1
, it looks for a match for
'db1'
and 't1'
, then for
'db1'
and '%'
, then for
'%'
and '%'
. The order
in which matching occurs matters because different
setup_objects
rows can have
different ENABLED
and
TIMED
values.
The effect of the default object configuration is to
instrument all tables except those in the
mysql
,
INFORMATION_SCHEMA
, and
performance_schema
databases. Tables in the
INFORMATION_SCHEMA
database are not
instrumented regardless of the contents of
setup_objects
; the row for
information_schema.%
simply makes this
default explicit.
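For example, to stop monitoring tables in one particular schema while continuing to monitor everything else, you could insert a more specific row; the schema name test here is only an illustration:
mysql> INSERT INTO setup_objects (OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME, ENABLED, TIMED)
    -> VALUES ('TABLE', 'test', '%', 'NO', 'NO');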
The setup_objects
table has these
columns:
OBJECT_TYPE
The type of object to instrument. Currently, this is
always 'TABLE'
(base table).
OBJECT_SCHEMA
The schema that contains the object. This should be a
literal name, or '%'
to mean “any
schema.”
OBJECT_NAME
The name of the instrumented object. This should be a
literal name, or '%'
to mean “any
object.”
ENABLED
Whether events for the object are instrumented. This column can be modified.
This column was added in MySQL 5.6.3. For earlier versions in which it is not present, the Performance Schema enables monitoring only for objects matched by some row in the table; monitoring is implicitly disabled for nonmatching objects.
TIMED
Whether events for the object are timed. This column can be modified.
Rows can be inserted into or deleted from
setup_objects
by users with the
INSERT
or
DELETE
privilege on the table.
For existing rows, only the ENABLED
and
TIMED
columns can be modified, by users
with the UPDATE
privilege on
the table.
Modifications to the
setup_objects
table affect object
monitoring immediately.
For more information about the role of the
setup_objects
table in event
filtering, see
Section 20.2.3.2.1, “Event Pre-Filtering”.
The setup_timers
table shows the
currently selected event timers:
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME | TIMER_NAME |
+-----------+-------------+
| idle | MICROSECOND |
| wait | CYCLE |
| stage | NANOSECOND |
| statement | NANOSECOND |
+-----------+-------------+
The setup_timers
table has these
columns:
NAME
The type of instrument the timer is used for.
TIMER_NAME
The timer that applies to the instrument type. This column can be modified.
The setup_timers.TIMER_NAME
value can be
changed to select a different timer. The value can be any of
the values in the
performance_timers.TIMER_NAME
column. For
an explanation of how event timing occurs, see
Section 20.2.3.1, “Performance Schema Event Timing”.
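For example, to time wait events in nanoseconds rather than cycles, you could update the wait row as follows (NANOSECOND must be one of the timers listed in performance_timers):
mysql> UPDATE setup_timers
    -> SET TIMER_NAME = 'NANOSECOND'
    -> WHERE NAME = 'wait';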
Modifications to the setup_timers
table affect monitoring immediately. Events already in
progress use the original timer for the begin time and the new
timer for the end time, which leads to unpredictable results.
If you make timer changes, you may want to use
TRUNCATE TABLE
to reset
Performance Schema statistics.
Instance tables document what types of objects are instrumented. They provide event names and explanatory notes or status information:
cond_instances
: Condition
synchronization object instances
file_instances
: File instances
mutex_instances
: Mutex
synchronization object instances
rwlock_instances
: Lock
synchronization object instances
socket_instances
: Active
connection instances
These tables list instrumented synchronization objects, files,
and connections. There are three types of synchronization
objects: cond
, mutex
, and
rwlock
. Each instance table has an
EVENT_NAME
or NAME
column
to indicate the instrument associated with each row. Instrument
names have multiple parts and form a hierarchy, as discussed in
Section 20.4, “Performance Schema Instrument Naming Conventions”.
The mutex_instances.LOCKED_BY_THREAD_ID
and
rwlock_instances.WRITE_LOCKED_BY_THREAD_ID
columns are extremely important for investigating performance
bottlenecks or deadlocks. For examples of how to use them for
this purpose, see Section 20.14, “Using the Performance Schema to Diagnose Problems”.
The cond_instances
table lists
all the conditions seen by the Performance Schema while the
server executes. A condition is a synchronization mechanism
used in the code to signal that a specific event has happened,
so that a thread waiting for this condition can resume work.
When a thread is waiting for something to happen, the condition name is an indication of what the thread is waiting for, but there is no immediate way to tell which other thread, or threads, will cause the condition to happen.
The cond_instances
table has
these columns:
NAME
The instrument name associated with the condition.
OBJECT_INSTANCE_BEGIN
The address in memory of the condition that was instrumented.
The file_instances
table lists
all the files seen by the Performance Schema when executing
file I/O instrumentation. If a file on disk has never been
opened, it will not be in
file_instances
. When a file is
deleted from the disk, it is also removed from the
file_instances
table.
The file_instances
table has
these columns:
FILE_NAME
The file name.
EVENT_NAME
The instrument name associated with the file.
OPEN_COUNT
The count of open handles on the file. If a file was
opened and then closed, it was opened 1 time, but
OPEN_COUNT
will be 0. To list all the
files currently opened by the server, use WHERE
OPEN_COUNT > 0
.
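A complete query based on that condition might look like this (illustrative only):
mysql> SELECT FILE_NAME, EVENT_NAME, OPEN_COUNT
    -> FROM file_instances
    -> WHERE OPEN_COUNT > 0;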
The mutex_instances
table lists
all the mutexes seen by the Performance Schema while the
server executes. A mutex is a synchronization mechanism used
in the code to enforce that only one thread at a given time
can have access to some common resource. The resource is said
to be “protected” by the mutex.
When two threads executing in the server (for example, two user sessions executing a query simultaneously) do need to access the same resource (a file, a buffer, or some piece of data), these two threads will compete against each other, so that the first query to obtain a lock on the mutex will cause the other query to wait until the first is done and unlocks the mutex.
The work performed while holding a mutex is said to be in a “critical section,” and multiple queries do execute this critical section in a serialized way (one at a time), which is a potential bottleneck.
The mutex_instances
table has
these columns:
NAME
The instrument name associated with the mutex.
OBJECT_INSTANCE_BEGIN
The address in memory of the mutex that was instrumented.
LOCKED_BY_THREAD_ID
When a thread currently has a mutex locked,
LOCKED_BY_THREAD_ID
is the
THREAD_ID
of the locking thread,
otherwise it is NULL
.
For every mutex instrumented in the code, the Performance Schema provides the following information.
The setup_instruments
table
lists the name of the instrumentation point, with the
prefix wait/synch/mutex/
.
When some code creates a mutex, a row is added to the
mutex_instances
table. The
OBJECT_INSTANCE_BEGIN
column is a
property that uniquely identifies the mutex.
When a thread attempts to lock a mutex, the
events_waits_current
table
shows a row for that thread, indicating that it is waiting
on a mutex (in the EVENT_NAME
column),
and indicating which mutex is waited on (in the
OBJECT_INSTANCE_BEGIN
column).
When a thread succeeds in locking a mutex:
events_waits_current
shows that the wait on the mutex is completed (in the
TIMER_END
and
TIMER_WAIT
columns)
The completed wait event is added to the
events_waits_history
and
events_waits_history_long
tables
mutex_instances
shows
that the mutex is now owned by the thread (in the
THREAD_ID
column).
When a thread unlocks a mutex,
mutex_instances
shows that
the mutex now has no owner (the
THREAD_ID
column is
NULL
).
When a mutex object is destroyed, the corresponding row is
removed from mutex_instances
.
By performing queries on both of the following tables, a monitoring application or a DBA can detect bottlenecks or deadlocks between threads that involve mutexes:
events_waits_current
, to see
what mutex a thread is waiting for
mutex_instances
, to see which
other thread currently owns a mutex
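For example, the following join is a sketch of such a diagnostic query; it relates each thread waiting on a mutex to the thread that currently holds that mutex by matching the OBJECT_INSTANCE_BEGIN values:
mysql> SELECT w.THREAD_ID AS waiting_thread,
    ->        m.LOCKED_BY_THREAD_ID AS holding_thread,
    ->        m.NAME AS mutex_name
    -> FROM events_waits_current w
    -> JOIN mutex_instances m
    ->   ON w.OBJECT_INSTANCE_BEGIN = m.OBJECT_INSTANCE_BEGIN
    -> WHERE w.EVENT_NAME LIKE 'wait/synch/mutex/%';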
The rwlock_instances
table lists
all the rwlock
instances (read write locks)
seen by the Performance Schema while the server executes. An
rwlock
is a synchronization mechanism used
in the code to enforce that threads at a given time can have
access to some common resource following certain rules. The
resource is said to be “protected” by the
rwlock
. The access is either shared (many
threads can have a read lock at the same time) or exclusive
(only one thread can have a write lock at a given time).
Depending on how many threads are requesting a lock, and the nature of the locks requested, access can be either granted in shared mode, granted in exclusive mode, or not granted at all, waiting for other threads to finish first.
The rwlock_instances
table has
these columns:
NAME
The instrument name associated with the lock.
OBJECT_INSTANCE_BEGIN
The address in memory of the lock that was instrumented.
WRITE_LOCKED_BY_THREAD_ID
When a thread currently has an rwlock
locked in exclusive (write) mode,
WRITE_LOCKED_BY_THREAD_ID
is the
THREAD_ID
of the locking thread,
otherwise it is NULL
.
READ_LOCKED_BY_COUNT
When a thread currently has an rwlock
locked in shared (read) mode,
READ_LOCKED_BY_COUNT
is incremented by
1. This is a counter only, so it cannot be used directly
to find which thread holds a read lock, but it can be used
to see whether there is a read contention on an
rwlock
, and see how many readers are
currently active.
By performing queries on both of the following tables, a monitoring application or a DBA may detect some bottlenecks or deadlocks between threads that involve locks:
events_waits_current
, to see
what rwlock
a thread is waiting for
rwlock_instances
, to see
which other thread currently owns an
rwlock
There is a limitation: The
rwlock_instances
can be used only
to identify the thread holding a write lock, but not the
threads holding a read lock.
The socket_instances
table
provides a real-time snapshot of the active connections to the
MySQL server. The table contains one row per TCP/IP or Unix
socket file connection. Information available in this table
includes network activity such as socket instances, socket
operations, and number of bytes transmitted and received.
mysql> SELECT * FROM socket_instances\G
*************************** 1. row ***************************
EVENT_NAME: wait/io/socket/sql/server_unix_socket
OBJECT_INSTANCE_BEGIN: 4316619408
THREAD_ID: 1
SOCKET_ID: 16
IP:
PORT: 0
STATE: ACTIVE
*************************** 2. row ***************************
EVENT_NAME: wait/io/socket/sql/client_connection
OBJECT_INSTANCE_BEGIN: 4316644608
THREAD_ID: 21
SOCKET_ID: 39
IP: 127.0.0.1
PORT: 55233
STATE: ACTIVE
*************************** 3. row ***************************
EVENT_NAME: wait/io/socket/sql/server_tcpip_socket
OBJECT_INSTANCE_BEGIN: 4316699040
THREAD_ID: 1
SOCKET_ID: 14
IP: 0.0.0.0
PORT: 50603
STATE: ACTIVE
Socket instruments have names of the form wait/io/socket/sql/socket_type and are used like this:
The server has a listening socket for each network
protocol that it supports. The instruments associated with
listening sockets for TCP/IP or Unix socket file
connections have a socket_type
value of server_tcpip_socket
or
server_unix_socket
, respectively.
When a listening socket detects a connection, the server
transfers the connection to a new socket managed by a
separate thread. The instrument for the new connection
thread has a socket_type
value
of client_connection
.
When a connection terminates, the row in
socket_instances
corresponding to it is deleted.
The socket_instances
table has
these columns:
EVENT_NAME
The name of the wait/io/socket/*
instrument that produced the event. This is a
setup_instruments.NAME
value.
Instrument names have multiple parts and form a hierarchy,
as discussed in
Section 20.4, “Performance Schema Instrument Naming Conventions”.
OBJECT_INSTANCE_BEGIN
This column uniquely identifies the socket. The value is the address of an object in memory.
THREAD_ID
The internal thread identifier assigned by the server. Each socket is managed by a single thread, so each socket can be mapped to a thread which can be mapped to a server process.
SOCKET_ID
The internal file handle assigned to the socket.
IP
The client IP address. The value may be either an IPv4 or IPv6 address, or blank to indicate a Unix socket file connection.
PORT
The TCP/IP port number, in the range from 0 to 65535.
STATE
The socket status, either IDLE
or
ACTIVE
. Wait times for active sockets
are tracked using the corresponding socket instrument.
Wait times for idle sockets are tracked using the
idle
instrument.
A socket is idle if it is waiting for a request from the
client. When a socket becomes idle, the event row in
socket_instances
that is
tracking the socket switches from a status of
ACTIVE
to IDLE
. The
EVENT_NAME
value remains
wait/io/socket/*
, but timing for the
instrument is suspended. Instead, an event is generated in
the events_waits_current
table with an EVENT_NAME
value of
idle
.
When the next request is received, the
idle
event terminates, the socket
instance switches from IDLE
to
ACTIVE
, and timing of the socket
instrument resumes.
The IP:PORT
column combination value
identifies the connection. This combination value is used in
the OBJECT_NAME
column of the
events_waits_xxx
tables, to identify the connection from which socket events
come:
For the Unix domain listener socket
(server_unix_socket
), the port is 0,
and the IP is ''
.
For client connections via the Unix domain listener
(client_connection
), the port is 0, and
the IP is ''
.
For the TCP/IP server listener socket
(server_tcpip_socket
), the port is
always the master port (for example, 3306), and the IP is
always 0.0.0.0
.
For client connections via the TCP/IP listener
(client_connection
), the port is
whatever the server assigns, but never 0. The IP is the IP
of the originating host (127.0.0.1
or
::1
for the local host).
The socket_instances
table was
added in MySQL 5.6.3.
These tables store wait events:
events_waits_current
: Current
wait events
events_waits_history
: The most
recent wait events for each thread
events_waits_history_long
: The
most recent wait events overall
The events_waits_current
table
contains current wait events, one row per thread showing the
current status of the thread's most recent monitored wait
event.
The events_waits_current
table
can be truncated with TRUNCATE
TABLE
.
Of the tables that contain wait event rows,
events_waits_current
is the most
fundamental. Other tables that contain wait event rows are
logically derived from the current events. For example, the
events_waits_history
and
events_waits_history_long
tables
are collections of the most recent wait events, up to a fixed
number of rows.
The events_waits_current
table
has these columns:
THREAD_ID
The thread associated with the event. The
THREAD_ID
and
EVENT_ID
values taken together form a
primary key that uniquely identifies the row. No two rows
will have the same pair of values.
EVENT_ID
The thread current event number when the event starts.
END_EVENT_ID
This column is set to NULL
when the
event starts, and updated to the thread current event
number when the event ends. This column was added in MySQL
5.6.4.
EVENT_NAME
The name of the instrument that produced the event. This
is a setup_instruments.NAME
value.
Instrument names have multiple parts and form a hierarchy,
as discussed in
Section 20.4, “Performance Schema Instrument Naming Conventions”.
SOURCE
The name of the source file containing the instrumented code that produced the event and the line number in the file at which the instrumentation occurs. This enables you to check the source to determine exactly what code is involved. For example, if a mutex or lock is being blocked, you can check the context in which this occurs.
TIMER_START
,
TIMER_END
,
TIMER_WAIT
Timing information for the event. The unit for these
values is picoseconds (trillionths of a second). The
TIMER_START
and
TIMER_END
values indicate when event
timing started and ended. TIMER_WAIT
is
the event elapsed time (duration).
If an event has not finished, TIMER_END
and TIMER_WAIT
are
NULL
.
If an event is produced from an instrument that has
TIMED = NO
, timing information is not
collected, and TIMER_START
,
TIMER_END
, and
TIMER_WAIT
are all
NULL
.
For discussion of picoseconds as the unit for event times and factors that affect time values, see Section 20.2.3.1, “Performance Schema Event Timing”.
SPINS
For a mutex, the number of spin rounds. If the value is
NULL
, the code does not use spin rounds
or spinning is not instrumented.
OBJECT_SCHEMA
,
OBJECT_NAME
,
OBJECT_TYPE
,
OBJECT_INSTANCE_BEGIN
These columns identify the object “being acted on.” What that means depends on the object type.
For a synchronization object (cond
,
mutex
, rwlock
):
OBJECT_SCHEMA
,
OBJECT_NAME
, and
OBJECT_TYPE
are
NULL
.
OBJECT_INSTANCE_BEGIN
is the
address of the synchronization object in memory.
For a file I/O object:
OBJECT_SCHEMA
is
NULL
.
OBJECT_NAME
is the file name.
OBJECT_TYPE
is
FILE
.
OBJECT_INSTANCE_BEGIN
is an address
in memory.
For a socket object:
OBJECT_NAME
is the
IP:PORT
value for the socket.
OBJECT_INSTANCE_BEGIN
is an address
in memory.
For a table I/O object:
OBJECT_SCHEMA
is the name of the
schema that contains the table.
OBJECT_NAME
is the table name.
OBJECT_TYPE
is
TABLE
for a persistent base table
or TEMPORARY TABLE
for a temporary
table.
OBJECT_INSTANCE_BEGIN
is an address
in memory.
An OBJECT_INSTANCE_BEGIN
value itself
has no meaning, except that different values indicate
different objects.
OBJECT_INSTANCE_BEGIN
can be used for
debugging. For example, it can be used with GROUP
BY OBJECT_INSTANCE_BEGIN
to see whether the load
on 1,000 mutexes (that protect, say, 1,000 pages or blocks
of data) is spread evenly or just hitting a few
bottlenecks. This can help you correlate with other
sources of information if you see the same object address
in a log file or another debugging or performance tool.
INDEX_NAME
The name of the index used. PRIMARY
indicates the table primary index. NULL
means that no index was used.
NESTING_EVENT_ID
The EVENT_ID
value of the event within
which this event is nested. Before MySQL 5.6.3, this
column is always NULL
.
NESTING_EVENT_TYPE
The nesting event type. The value is
statement
, stage
, or
wait
. This column was added in MySQL
5.6.3.
OPERATION
The type of operation performed, such as
lock
, read
, or
write
.
NUMBER_OF_BYTES
The number of bytes read or written by the operation. For
table I/O waits, NUMBER_OF_BYTES
is
NULL
.
FLAGS
Reserved for future use.
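For example, a query such as the following shows the most expensive current or most recent wait per thread (an illustrative sketch; the ordering and LIMIT are arbitrary, and TIMER_WAIT is NULL for events that are untimed or not yet finished):
mysql> SELECT THREAD_ID, EVENT_NAME, SOURCE, OPERATION, TIMER_WAIT
       FROM performance_schema.events_waits_current
       ORDER BY TIMER_WAIT DESC LIMIT 5;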
The events_waits_history
table
contains the most recent 10 wait events per thread. The table
size may be changed by modifying the
performance_schema_events_waits_history_size
system variable at server startup. As new events are added to
the table, older events are discarded if the table is full.
Events are not added to the table until they have ended.
The events_waits_history
table
has the same structure as
events_waits_current
. See
Section 20.9.3.1, “The events_waits_current
Table”.
The events_waits_history
table
can be truncated with TRUNCATE
TABLE
.
The events_waits_history_long
table contains the most recent 10,000 wait events. The table
size may be changed by modifying the
performance_schema_events_waits_history_long_size
system variable at server startup. As new events are added to
the table, older events are discarded if the table is full.
Events are not added to the table until they have ended.
The events_waits_history_long
table has the same structure as
events_waits_current
. See
Section 20.9.3.1, “The events_waits_current
Table”.
The events_waits_history_long
table can be truncated with TRUNCATE
TABLE
.
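For example, the configured history sizes can be inspected, and the long history cleared, with statements along these lines (illustrative only; the size variables are read-only at runtime and can be set only at server startup):
mysql> SHOW VARIABLES LIKE 'performance_schema_events_waits_history%';
mysql> TRUNCATE TABLE performance_schema.events_waits_history_long;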
As of MySQL 5.6.3, the Performance Schema instruments stages,
which are steps during the statement-execution process, such as
parsing a statement, opening a table, or performing a
filesort
operation. Stages correspond to the
thread states displayed by SHOW
PROCESSLIST
or that are visible in the
INFORMATION_SCHEMA.PROCESSLIST
table. Stages begin and end when state values change.
Within the event hierarchy, wait events nest within stage events, which nest within statement events.
These tables store stage events:
events_stages_current
: Current
stage events
events_stages_history
: The most
recent stage events for each thread
events_stages_history_long
: The
most recent stage events overall
The events_stages_current
table
contains current stage events, one row per thread showing the
current status of the thread's most recent monitored stage
event.
The events_stages_current
table
can be truncated with TRUNCATE
TABLE
.
Of the tables that contain stage event rows,
events_stages_current
is the most
fundamental. Other tables that contain stage event rows are
logically derived from the current events. For example, the
events_stages_history
and
events_stages_history_long
tables
are collections of the most recent stage events, up to a fixed
number of rows.
The events_stages_current
table
has these columns:
THREAD_ID
The thread associated with the event. The
THREAD_ID
and
EVENT_ID
values taken together form a
primary key that uniquely identifies the row. No two rows
will have the same pair of values.
EVENT_ID
The thread current event number when the event starts.
END_EVENT_ID
This column is set to NULL
when the
event starts, and updated to the thread current event
number when the event ends. This column was added in MySQL
5.6.4.
EVENT_NAME
The name of the instrument that produced the event. This
is a setup_instruments.NAME
value.
Instrument names have multiple parts and form a hierarchy,
as discussed in
Section 20.4, “Performance Schema Instrument Naming Conventions”.
SOURCE
The name of the source file containing the instrumented code that produced the event and the line number in the file at which the instrumentation occurs. This enables you to check the source to determine exactly what code is involved.
TIMER_START
,
TIMER_END
,
TIMER_WAIT
Timing information for the event. The unit for these
values is picoseconds (trillionths of a second). The
TIMER_START
and
TIMER_END
values indicate when event
timing started and ended. TIMER_WAIT
is
the event elapsed time (duration).
If an event has not finished, TIMER_END
and TIMER_WAIT
are
NULL
.
If an event is produced from an instrument that has
TIMED = NO
, timing information is not
collected, and TIMER_START
,
TIMER_END
, and
TIMER_WAIT
are all
NULL
.
For discussion of picoseconds as the unit for event times and factors that affect time values, see Section 20.2.3.1, “Performance Schema Event Timing”.
NESTING_EVENT_ID
The EVENT_ID
value of the event within
which this event is nested. The nesting event for a stage
event is usually a statement event.
NESTING_EVENT_TYPE
The nesting event type. The value is
statement
, stage
, or
wait
.
The events_stages_current
table
was added in MySQL 5.6.3.
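For example, a query such as the following shows the stage each monitored thread most recently entered (a sketch; it uses only columns described above):
mysql> SELECT THREAD_ID, EVENT_NAME, TIMER_WAIT, NESTING_EVENT_ID
       FROM performance_schema.events_stages_current;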
The events_stages_history
table
contains the most recent 10 stage events per thread. The table
size may be changed by modifying the
performance_schema_events_stages_history_size
system variable at server startup. As new events are added to
the table, older events are discarded if the table is full.
Events are not added to the table until they have ended.
The events_stages_history
table
has the same structure as
events_stages_current
. See
Section 20.9.4.1, “The events_stages_current
Table”.
The events_stages_history
table
can be truncated with TRUNCATE
TABLE
.
The events_stages_history
table
was added in MySQL 5.6.3.
The events_stages_history_long
table contains the most recent 10,000 stage events. The table
size may be changed by modifying the
performance_schema_events_stages_history_long_size
system variable at server startup. As new events are added to
the table, older events are discarded if the table is full.
Events are not added to the table until they have ended.
The events_stages_history_long
table has the same structure as
events_stages_current
. See
Section 20.9.4.1, “The events_stages_current
Table”.
The events_stages_history_long
table can be truncated with TRUNCATE
TABLE
.
The events_stages_history_long
table was added in MySQL 5.6.3.
As of MySQL 5.6.3, the Performance Schema instruments statement execution. Statement events occur at a high level of the event hierarchy. Wait events nest within stage events, which nest within statement events.
These tables store statement events:
events_statements_current
:
Current statement events
events_statements_history
: The
most recent statement events for each thread
events_statements_history_long
:
The most recent statement events overall
Statement monitoring begins from the moment when the server sees that activity is requested on a thread, to the moment when all activity has ceased. Typically, this means from the time the server gets the first packet from the client to the time the server has finished sending the response. Monitoring occurs only for top-level statements. Statements within stored programs and subqueries are not seen separately.
A request from a client can be either a command or a SQL statement:
Server commands correspond to the COM_xxx codes defined in the mysql_com.h header file and processed in sql/sql_parse.cc
.
Examples are COM_PING
,
COM_QUERY
, and
COM_QUIT
. Instruments for commands have
names that begin with statement/com
, such
as statement/com/Ping
,
statement/com/Query
, and
statement/com/Quit
.
SQL statements are expressed as text, such as
DELETE FROM t1
or SELECT * FROM
t2
. Instruments for SQL statements have names that
begin with statement/sql
, such as
statement/sql/delete
and
statement/sql/select
.
There are also special error-handling instruments:
statement/com/Error
accounts for messages
received by the server that are out of band. It can be used
to detect commands sent by clients that the server does not
understand. This may be helpful for purposes such as
identifying clients that are misconfigured or using a
version of MySQL more recent than that of the server, or
clients that are attempting to attack the server.
statement/sql/error
accounts for SQL
statements that fail to parse. It can be used to detect
malformed queries sent by clients. A query that fails to
parse differs from a query that parses but fails due to an
error during execution. For example, SELECT *
FROM
is malformed, and the
statement/sql/error
instrument is used.
By contrast, SELECT *
parses but fails
with a No tables used
error. In this
case, statement/sql/select
is used and
the statement event contains information to indicate the
nature of the error.
Details for a request are not initially known. For example,
because a SQL statement is sent as text inside a
COM_QUERY
packet, the proper
statement/sql/*
instrument is not known at
the moment the request is received. The Performance Schema at
first instruments such a request using an event with an
EVENT_NAME
of
statement/com/Query
. Then it changes the
EVENT_NAME
value to either a valid
statement/sql/*
name if the statement parses,
or to statement/sql/error
if it does not.
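The available statement instruments can be listed from setup_instruments. For example (an illustrative query; the LIMIT is arbitrary):
mysql> SELECT NAME, ENABLED, TIMED
       FROM performance_schema.setup_instruments
       WHERE NAME LIKE 'statement/%'
       ORDER BY NAME LIMIT 10;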
The events_statements_current
table contains current statement events, one row per thread
showing the current status of the thread's most recent
monitored statement event.
The events_statements_current
table can be truncated with TRUNCATE
TABLE
.
Of the tables that contain statement event rows,
events_statements_current
is the
most fundamental. Other tables that contain statement event
rows are logically derived from the current events. For
example, the
events_statements_history
and
events_statements_history_long
tables are collections of the most recent statement events, up
to a fixed number of rows.
The events_statements_current
table has these columns:
THREAD_ID
The thread associated with the event. The
THREAD_ID
and
EVENT_ID
values taken together form a
primary key that uniquely identifies the row. No two rows
will have the same pair of values.
EVENT_ID
The thread current event number when the event starts.
END_EVENT_ID
This column is set to NULL
when the
event starts, and updated to the thread current event
number when the event ends. This column was added in MySQL
5.6.4.
EVENT_NAME
The name of the instrument from which the event was
collected. This is a
setup_instruments.NAME
value.
Instrument names have multiple parts and form a hierarchy,
as discussed in
Section 20.4, “Performance Schema Instrument Naming Conventions”.
For SQL statements, the EVENT_NAME
value initially is statement/com/Query
until the statement is parsed, then changes to a more
appropriate value, as described in
Section 20.9.5, “Performance Schema Statement Event Tables”.
SOURCE
The name of the source file containing the instrumented code that produced the event and the line number in the file at which the instrumentation occurs. This enables you to check the source to determine exactly what code is involved.
TIMER_START
,
TIMER_END
,
TIMER_WAIT
Timing information for the event. The unit for these
values is picoseconds (trillionths of a second). The
TIMER_START
and
TIMER_END
values indicate when event
timing started and ended. TIMER_WAIT
is
the event elapsed time (duration).
If an event has not finished, TIMER_END
and TIMER_WAIT
are
NULL
.
If an event is produced from an instrument that has
TIMED = NO
, timing information is not
collected, and TIMER_START
,
TIMER_END
, and
TIMER_WAIT
are all
NULL
.
For discussion of picoseconds as the unit for event times and factors that affect time values, see Section 20.2.3.1, “Performance Schema Event Timing”.
LOCK_TIME
The time spent waiting for table locks. This value is computed in microseconds but normalized to picoseconds for easier comparison with other Performance Schema timers.
SQL_TEXT
The text of the SQL statement. For a command not
associated with a SQL statement, the value is
NULL
.
DIGEST
The statement digest MD5 value as a string of 32
hexadecimal characters, or NULL
if the
statement_digest
consumer is
no
. For more information about
statement digesting, see
Section 20.7, “Performance Schema Statement Digests”.
This column was added in MySQL 5.6.5.
DIGEST_TEXT
The normalized statement digest text, or
NULL
if the
statement_digest
consumer is
no
. For more information about
statement digesting, see
Section 20.7, “Performance Schema Statement Digests”.
This column was added in MySQL 5.6.5.
CURRENT_SCHEMA
The default database for the statement,
NULL
if there is none.
OBJECT_SCHEMA
,
OBJECT_NAME
,
OBJECT_TYPE
Reserved. Currently NULL
.
OBJECT_INSTANCE_BEGIN
This column identifies the statement. The value is the address of an object in memory.
MYSQL_ERRNO
The statement error number, from the statement diagnostics area.
RETURNED_SQLSTATE
The statement SQLSTATE value, from the statement diagnostics area.
MESSAGE_TEXT
The statement error message, from the statement diagnostics area.
ERRORS
Whether an error occurred for the statement. The value is
0 if the SQLSTATE value begins with 00
(completion) or 01
(warning). The value
is 1 if the SQLSTATE value is anything else.
WARNINGS
The number of warnings, from the statement diagnostics area.
ROWS_AFFECTED
The number of rows affected by the statement. For a
description of the meaning of “affected,” see
Section 21.9.3.1, “mysql_affected_rows()
”.
ROWS_SENT
The number of rows returned by the statement.
ROWS_EXAMINED
The number of rows read from storage engines during statement execution.
CREATED_TMP_DISK_TABLES
Like the
Created_tmp_disk_tables
status variable, but specific to the statement.
CREATED_TMP_TABLES
Like the
Created_tmp_tables
status variable, but specific to the statement.
SELECT_FULL_JOIN
Like the
Select_full_join
status
variable, but specific to the statement.
SELECT_FULL_RANGE_JOIN
Like the
Select_full_range_join
status variable, but specific to the statement.
SELECT_RANGE
Like the Select_range
status variable, but specific to the statement.
SELECT_RANGE_CHECK
Like the
Select_range_check
status variable, but specific to the statement.
SELECT_SCAN
Like the Select_scan
status variable, but specific to the statement.
SORT_MERGE_PASSES
Like the
Sort_merge_passes
status
variable, but specific to the statement.
SORT_RANGE
Like the Sort_range
status variable, but specific to the statement.
SORT_ROWS
Like the Sort_rows
status variable, but specific to the statement.
SORT_SCAN
Like the Sort_scan
status variable, but specific to the statement.
NO_INDEX_USED
1 if the statement performed a table scan without using an index, 0 otherwise.
NO_GOOD_INDEX_USED
1 if the server found no good index to use for the
statement, 0 otherwise. For additional information, see
the description of the Extra
column
from EXPLAIN
output for the
Range checked for each record
value in
Section 8.8.2, “EXPLAIN
Output Format”.
NESTING_EVENT_ID
,
NESTING_EVENT_TYPE
Reserved. Currently NULL
.
The events_statements_current
table was added in MySQL 5.6.3.
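For example, a query along these lines shows what each monitored thread is currently executing or last executed (a sketch using only columns described above; SQL_TEXT is NULL for commands not associated with an SQL statement):
mysql> SELECT THREAD_ID, EVENT_NAME, SQL_TEXT, ROWS_SENT, ROWS_EXAMINED
       FROM performance_schema.events_statements_current;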
The events_statements_history
table contains the most recent 10 events per thread. The table
size may be changed by modifying the
performance_schema_events_statements_history_size
system variable at server startup. As new events are added to
the table, older events are discarded if the table is full.
Events are not added to the table until they have ended.
The events_statements_history
table has the same structure as
events_statements_current
. See
Section 20.9.5.1, “The events_statements_current
Table”.
The events_statements_history
table can be truncated with TRUNCATE
TABLE
.
The events_statements_history
table was added in MySQL 5.6.3.
The
events_statements_history_long
table contains the most recent 10,000 events. The table size
may be changed by modifying the
performance_schema_events_statements_history_long_size
system variable at server startup. As new events are added to
the table, older events are discarded if the table is full.
Events are not added to the table until they have ended.
The
events_statements_history_long
table has the same structure as
events_statements_current
. See
Section 20.9.5.1, “The events_statements_current
Table”.
The
events_statements_history_long
table can be truncated with TRUNCATE
TABLE
.
The
events_statements_history_long
table was added in MySQL 5.6.3.
As of MySQL 5.6.3, the Performance Schema provides statistics about connections to the server. When a client connects, it does so under a particular user name and from a particular host. The Performance Schema tracks connections per account (user name plus host name) and separately per user name and per host name, using these tables:
The meaning of “account” in the connection tables
is similar to its meaning in the MySQL grant tables in the
mysql
database, in the sense that the term
refers to a combination of user and host values. Where they
differ is that in the grant tables, the host part of an account
can be a pattern, whereas in the connection tables the host
value is always a specific nonpattern host name.
The connection tables all have
CURRENT_CONNECTIONS
and
TOTAL_CONNECTIONS
columns to track the
current and total number of connections per “tracking
value” on which statistics are based. The tables differ
in what they use for the tracking value. The
accounts
table has
USER
and HOST
columns to
track connections per user name plus host name combination. The
users
and
hosts
tables have a
USER
and HOST
column,
respectively, to track connections per user name and per host
name.
Suppose that clients named user1
and
user2
each connect one time from
hosta
and hostb
. The
Performance Schema tracks the connections as follows:
The accounts
table will have
four rows, for the
user1
/hosta
,
user1
/hostb
,
user2
/hosta
, and
user2
/hostb
account
values, each row counting one connection per account.
The users
table will have two
rows, for user1
and
user2
, each row counting two connections
per user name.
The hosts
table will have two
rows, for hosta
and
hostb
, each row counting two connections
per host name.
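The resulting counts can be inspected directly. For example (illustrative queries; each uses only the tracking and counter columns described above):
mysql> SELECT USER, HOST, CURRENT_CONNECTIONS, TOTAL_CONNECTIONS FROM performance_schema.accounts;
mysql> SELECT USER, CURRENT_CONNECTIONS, TOTAL_CONNECTIONS FROM performance_schema.users;
mysql> SELECT HOST, CURRENT_CONNECTIONS, TOTAL_CONNECTIONS FROM performance_schema.hosts;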
When a client connects, the Performance Schema determines which
row in each connection table applies to the connection, using
the tracking value appropriate to each table. If there is no
such row, one is added. Then the Performance Schema increments
by one the CURRENT_CONNECTIONS
and
TOTAL_CONNECTIONS
columns in that row.
When a client disconnects, the Performance Schema decrements by
one the CURRENT_CONNECTIONS
column in the row
and leaves the TOTAL_CONNECTIONS
column
unchanged.
Each connection table can be truncated with
TRUNCATE TABLE
, which has this
effect:
Rows with CURRENT_CONNECTIONS = 0
are
deleted.
For rows with CURRENT_CONNECTIONS > 0
,
TOTAL_CONNECTIONS
is reset to
CURRENT_CONNECTIONS
.
Connection summary tables that depend on the connection table are truncated implicitly (summary values are set to 0). For more information about implicit truncation, see Section 20.9.8.7, “Connection Summary Tables”.
The accounts
table contains a row
for each account that has connected to the MySQL server. For
each account, the table counts the current and total number of
connections. The table size may be changed by modifying the
performance_schema_accounts_size
system variable at server startup. To disable account
statistics, set this variable to 0.
The accounts
table has the
following columns. For a description of how the Performance
Schema maintains rows in this table, including the effect of
TRUNCATE TABLE
, see
Section 20.9.6, “Performance Schema Connection Tables”.
USER
The client user name for the connection.
HOST
The host name from which the client connected.
CURRENT_CONNECTIONS
The current number of connections for the account.
TOTAL_CONNECTIONS
The total number of connections for the account.
The accounts
table was added in
MySQL 5.6.3.
The hosts
table contains a row
for each host from which clients have connected to the MySQL
server. For each host name, the table counts the current and
total number of connections. The table size may be changed by
modifying the
performance_schema_hosts_size
system variable at server startup. To disable host statistics,
set this variable to 0.
The hosts
table has the following
columns. For a description of how the Performance Schema
maintains rows in this table, including the effect of
TRUNCATE TABLE
, see
Section 20.9.6, “Performance Schema Connection Tables”.
HOST
The host name from which the client connected.
CURRENT_CONNECTIONS
The current number of connections for the host.
TOTAL_CONNECTIONS
The total number of connections for the host.
The hosts
table was added in
MySQL 5.6.3.
The users
table contains a row
for each user who has connected to the MySQL server. For each
user name, the table counts the current and total number of
connections. The table size may be changed by modifying the
performance_schema_users_size
system variable at server startup. To disable user statistics,
set this variable to 0.
The users
table has the following
columns. For a description of how the Performance Schema
maintains rows in this table, including the effect of
TRUNCATE TABLE
, see
Section 20.9.6, “Performance Schema Connection Tables”.
USER
The client user name for the connection.
CURRENT_CONNECTIONS
The current number of connections for the user.
TOTAL_CONNECTIONS
The total number of connections for the user.
The users
table was added in
MySQL 5.6.3.
As of MySQL 5.6.6, application programs can provide key/value
connection attributes to be passed to the server at connect
time, using the mysql_options()
and mysql_options4()
C API
functions. The Performance Schema provides tables that expose
this information through SQL statements:
session_account_connect_attrs
:
Connection attributes for the current session
session_connect_attrs
:
Connection attributes for all sessions
Both tables have the same columns. The difference between them
is that session_connect_attrs
displays connection attributes for all sessions, whereas
session_account_connect_attrs
displays connections only for your own account.
PROCESSLIST_ID
The connection identifier for the session.
ATTR_NAME
The attribute name.
ATTR_VALUE
The attribute value.
ORDINAL_POSITION
The order in which the attribute was added to the set of connection attributes.
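For example, a session can list its own connection attributes with a query such as this (a sketch; it assumes the session's CONNECTION_ID() value matches the PROCESSLIST_ID column):
mysql> SELECT ATTR_NAME, ATTR_VALUE
       FROM performance_schema.session_account_connect_attrs
       WHERE PROCESSLIST_ID = CONNECTION_ID()
       ORDER BY ORDINAL_POSITION;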
Summary tables provide aggregated information for terminated events over time. The tables in this group summarize event data in different ways.
Event Wait Summaries:
events_waits_summary_global_by_event_name
:
Wait events summarized per event name
events_waits_summary_by_instance
:
Wait events summarized per instance
events_waits_summary_by_thread_by_event_name
:
Wait events summarized per thread and event name
Stage Summaries:
events_stages_summary_by_thread_by_event_name
:
Stage waits summarized per thread and event name
events_stages_summary_global_by_event_name
:
Stage waits summarized per event name
Statement Summaries:
events_statements_summary_by_thread_by_event_name
:
Statement events summarized per thread and event name
events_statements_summary_global_by_event_name
:
Statement events summarized per event name
Object Wait Summaries:
objects_summary_global_by_type
:
Object summaries
File I/O Summaries:
file_summary_by_event_name
:
File events summarized per event name
file_summary_by_instance
: File
events summarized per file instance
Table I/O and Lock Wait Summaries:
table_io_waits_summary_by_index_usage
:
Table I/O waits per index
table_io_waits_summary_by_table
:
Table I/O waits per table
table_lock_waits_summary_by_table
:
Table lock waits per table
Connection Summaries:
events_waits_summary_by_account_by_event_name
:
Wait events summarized per account and event name
events_waits_summary_by_user_by_event_name
:
Wait events summarized per user name and event name
events_waits_summary_by_host_by_event_name
:
Wait events summarized per host name and event name
events_stages_summary_by_account_by_event_name
:
Stage events summarized per account and event name
events_stages_summary_by_user_by_event_name
:
Stage events summarized per user name and event name
events_stages_summary_by_host_by_event_name
:
Stage events summarized per host name and event name
events_statements_summary_by_account_by_event_name
:
Statement events summarized per account and event name
events_statements_summary_by_user_by_event_name
:
Statement events summarized per user name and event name
events_statements_summary_by_host_by_event_name
:
Statement events summarized per host name and event name
Socket Summaries:
socket_summary_by_instance
:
Socket waits and I/O summarized per instance
socket_summary_by_event_name
:
Socket waits and I/O summarized per event name
Each summary table has grouping columns that determine how to group the data to be aggregated, and summary columns that contain the aggregated values. Tables that summarize events in similar ways often have similar sets of summary columns and differ only in the grouping columns used to determine how events are aggregated.
Summary tables can be truncated with
TRUNCATE TABLE
. Except for
events_statements_summary_by_digest
,
the effect is to reset the summary columns to 0 or
NULL
, not to remove rows. This enables you to
clear collected values and restart aggregation. That might be
useful, for example, after you have made a runtime configuration
change.
The event waits summary tables aggregate general information about event waits:
events_waits_summary_global_by_event_name
:
Wait events summarized per event name
events_waits_summary_by_instance
:
Wait events summarized per instance
events_waits_summary_by_thread_by_event_name
:
Wait events summarized per thread and event name
For example:
mysql> SELECT * FROM events_waits_summary_global_by_event_name\G
...
*************************** 6. row ***************************
EVENT_NAME: wait/synch/mutex/sql/BINARY_LOG::LOCK_index
COUNT_STAR: 8
SUM_TIMER_WAIT: 2119302
MIN_TIMER_WAIT: 196092
AVG_TIMER_WAIT: 264912
MAX_TIMER_WAIT: 569421
...
*************************** 9. row ***************************
EVENT_NAME: wait/synch/mutex/sql/hash_filo::lock
COUNT_STAR: 69
SUM_TIMER_WAIT: 16848828
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 244185
MAX_TIMER_WAIT: 735345
...
TRUNCATE TABLE
is permitted for
wait summary tables. It resets the counters to zero rather
than removing rows.
The event wait summary tables have these grouping columns to indicate how events are aggregated:
events_waits_summary_global_by_event_name
has an EVENT_NAME
column. Each row
summarizes events for a given instrument. An instrument
might be used to create multiple instances of the
instrumented object. For example, if there is an
instrument for a mutex that is created for each
connection, there are as many instances as there are
connections. The summary row for the instrument summarizes
over all these instances.
events_waits_summary_by_instance
has EVENT_NAME
and
OBJECT_INSTANCE_BEGIN
columns. Each row
summarizes events for a given instrument instance. If an
instrument is used to create multiple instances, each
instance has a unique
OBJECT_INSTANCE_BEGIN
value, so these
instances are summarized separately in this table.
events_waits_summary_by_thread_by_event_name
has THREAD_ID
and
EVENT_NAME
columns. Each row summarizes
events for a given thread and instrument.
The event waits summary tables have these summary columns containing aggregated values:
COUNT_STAR
The number of summarized events. This value includes all events, whether timed or not.
SUM_TIMER_WAIT
The total wait time of the summarized timed events. This value is calculated only for timed events because nontimed events have a wait time of NULL. The same is true for the other xxx_TIMER_WAIT values.
MIN_TIMER_WAIT
The minimum wait time of the summarized timed events.
AVG_TIMER_WAIT
The average wait time of the summarized timed events.
MAX_TIMER_WAIT
The maximum wait time of the summarized timed events.
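A common use of these tables is to find the instruments that accumulate the most wait time. For example (an illustrative query; the LIMIT is arbitrary):
mysql> SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
       FROM performance_schema.events_waits_summary_global_by_event_name
       ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;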
The stage summary tables provide aggregated information about stage events:
events_stages_summary_by_thread_by_event_name
:
Stage waits summarized per thread and event name
events_stages_summary_global_by_event_name
:
Stage waits summarized per event name
For example:
mysql> SELECT * FROM events_stages_summary_global_by_event_name\G
...
*************************** 5. row ***************************
EVENT_NAME: stage/sql/checking permissions
COUNT_STAR: 57
SUM_TIMER_WAIT: 26501888880
MIN_TIMER_WAIT: 7317456
AVG_TIMER_WAIT: 464945295
MAX_TIMER_WAIT: 12858936792
...
*************************** 9. row ***************************
EVENT_NAME: stage/sql/closing tables
COUNT_STAR: 37
SUM_TIMER_WAIT: 662606568
MIN_TIMER_WAIT: 1593864
AVG_TIMER_WAIT: 17907891
MAX_TIMER_WAIT: 437977248
...
TRUNCATE TABLE
is permitted for
stage summary tables. It resets the counters to zero rather
than removing rows.
The stage summary tables have these grouping columns to indicate how events are aggregated:
events_stages_summary_global_by_event_name
has an EVENT_NAME
column. Each row
summarizes events for a given instrument.
events_stages_summary_by_thread_by_event_name
has THREAD_ID
and
EVENT_NAME
columns. Each row summarizes
events for a given thread and instrument.
The stage summary tables have these summary columns containing
aggregated values: COUNT_STAR
,
SUM_TIMER_WAIT
,
MIN_TIMER_WAIT
,
AVG_TIMER_WAIT
, and
MAX_TIMER_WAIT
. These columns are analogous
to the columns of the same names in the
events_waits_summary_global_by_event_name
and
events_waits_summary_by_thread_by_event_name
tables, except that the stage summary tables aggregate waits
from events_stages_current
rather
than events_waits_current
.
These tables were added in MySQL 5.6.3.
The statement summary tables aggregate information about statement events:
events_statements_summary_by_digest
:
Statement events summarized per digest value
events_statements_summary_by_thread_by_event_name
:
Statement events summarized per thread and event name
events_statements_summary_global_by_event_name
:
Statement events summarized per event name
For example:
mysql> SELECT * FROM events_statements_summary_global_by_event_name\G
*************************** 1. row ***************************
EVENT_NAME: statement/sql/select
COUNT_STAR: 25
SUM_TIMER_WAIT: 1535983999000
MIN_TIMER_WAIT: 209823000
AVG_TIMER_WAIT: 61439359000
MAX_TIMER_WAIT: 1363397650000
SUM_LOCK_TIME: 20186000000
SUM_ERRORS: 0
SUM_WARNINGS: 0
SUM_ROWS_AFFECTED: 0
SUM_ROWS_SENT: 388
SUM_ROWS_EXAMINED: 370
SUM_CREATED_TMP_DISK_TABLES: 0
SUM_CREATED_TMP_TABLES: 0
SUM_SELECT_FULL_JOIN: 0
SUM_SELECT_FULL_RANGE_JOIN: 0
SUM_SELECT_RANGE: 0
SUM_SELECT_RANGE_CHECK: 0
SUM_SELECT_SCAN: 6
SUM_SORT_MERGE_PASSES: 0
SUM_SORT_RANGE: 0
SUM_SORT_ROWS: 0
SUM_SORT_SCAN: 0
SUM_NO_INDEX_USED: 6
SUM_NO_GOOD_INDEX_USED: 0
...
TRUNCATE TABLE
is permitted for
statement summary tables. For
events_statements_summary_by_digest
,
it empties the table. For the other statement summary tables,
it resets the counters to zero rather than removing rows.
The statement summary tables have these grouping columns to indicate how events are aggregated:
events_statements_summary_by_digest
has a DIGEST
column. Each row
summarizes events for a given digest value. (The
DIGEST_TEXT
column contains the
corresponding normalized statement digest text, but is
neither a grouping nor summary column.)
events_statements_summary_global_by_event_name
has an EVENT_NAME
column. Each row
summarizes events for a given instrument.
events_statements_summary_by_thread_by_event_name
has THREAD_ID
and
EVENT_NAME
columns. Each row summarizes
events for a given thread and instrument.
The statement summary tables have these summary columns containing aggregated values:
COUNT_STAR
,
SUM_TIMER_WAIT
,
MIN_TIMER_WAIT
,
AVG_TIMER_WAIT
,
MAX_TIMER_WAIT
These columns are analogous to the columns of the same
names in the
events_waits_summary_global_by_event_name
and
events_waits_summary_by_thread_by_event_name
tables, except that the statement summary tables aggregate
waits from
events_statements_current
rather than
events_waits_current
.
SUM_xxx
The aggregate of the corresponding xxx column in the events_statements_current table. For example, the SUM_LOCK_TIME and SUM_ERRORS columns in statement summary tables are the aggregates of the LOCK_TIME and ERRORS columns in the events_statements_current table.
The
events_statements_summary_by_digest
table has these additional summary columns:
FIRST_SEEN_TIMESTAMP
,
LAST_SEEN_TIMESTAMP
The times at which a statement with the given digest value was first seen and most recently seen.
These tables were added in MySQL 5.6.3, except that
events_statements_summary_by_digest
was added in 5.6.5.
If the statement_digest
consumer is
enabled, aggregation into
events_statements_summary_by_digest
occurs as follows when a statement completes. Aggregation is
based on the DIGEST
value computed for the
statement.
If a
events_statements_summary_by_digest
row already exists with the digest value for the statement
that just completed, statistics for the statement are
aggregated to that row. The LAST_SEEN
column is updated to the current time.
If no row has the digest value for the statement that just
completed, and the table is not full, a new row is created
for the statement. The FIRST_SEEN
and
LAST_SEEN
columns are initialized with
the current time.
If no row has the statement digest value for the statement
that just completed, and the table is full, the statistics
for the statement that just completed are added to a
special “catch-all” row with
DIGEST
= NULL
, which
is created if necessary. If the row is created, the
FIRST_SEEN
and
LAST_SEEN
columns are initialized with
the current time. Otherwise, the
LAST_SEEN
column is updated with the
current time.
The row with DIGEST
=
NULL
is maintained because Performance
Schema tables have a maximum size due to memory constraints.
The DIGEST
= NULL
row
permits digests that do not match other rows to be counted
even if the summary table is full, using a common
“other” bucket. This row helps you estimate
whether the digest summary is representative:
A DIGEST
= NULL
row
that has a COUNT_STAR
value that
represents 5% of all digests shows that the digest summary
table is very representative; the other rows cover 95% of
the statements seen.
A DIGEST
= NULL
row
that has a COUNT_STAR
value that
represents 50% of all digests shows that the digest
summary table is not very representative; the other rows
cover only half the statements seen. Most likely the DBA
should increase the maximum table size so that more of the
rows counted in the DIGEST
=
NULL
row would be counted using more
specific rows instead. To do this, set the
performance_schema_digests_size
system variable to a larger value at server startup. The
default size is 200.
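One rough way to estimate how representative the digest summary is, under the interpretation just described, is to compute the fraction of summarized statements that fell into the catch-all row (a sketch; the catch_all_fraction alias is arbitrary):
mysql> SELECT SUM(IF(DIGEST IS NULL, COUNT_STAR, 0)) / SUM(COUNT_STAR)
              AS catch_all_fraction
       FROM performance_schema.events_statements_summary_by_digest;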
The
objects_summary_global_by_type
aggregates object wait events. For example:
mysql> SELECT * FROM objects_summary_global_by_type\G
...
*************************** 3. row ***************************
OBJECT_TYPE: TABLE
OBJECT_SCHEMA: test
OBJECT_NAME: t
COUNT_STAR: 3
SUM_TIMER_WAIT: 263126976
MIN_TIMER_WAIT: 1522272
AVG_TIMER_WAIT: 87708678
MAX_TIMER_WAIT: 258428280
...
*************************** 10. row ***************************
OBJECT_TYPE: TABLE
OBJECT_SCHEMA: mysql
OBJECT_NAME: user
COUNT_STAR: 14
SUM_TIMER_WAIT: 365567592
MIN_TIMER_WAIT: 1141704
AVG_TIMER_WAIT: 26111769
MAX_TIMER_WAIT: 334783032
...
TRUNCATE TABLE
is permitted for
object summary tables. It resets the counters to zero rather
than removing rows.
The
objects_summary_global_by_type
table has these grouping columns to indicate how events are
aggregated: OBJECT_TYPE
,
OBJECT_SCHEMA
, and
OBJECT_NAME
. Each row summarizes events for
the given object.
objects_summary_global_by_type
has the same summary columns as the
events_waits_summary_by_xxx
tables. See Section 20.9.8.1, “Event Wait Summary Tables”.
The file I/O summary tables aggregate information about I/O operations:
file_summary_by_event_name
:
File events summarized per event name
file_summary_by_instance
:
File events summarized per file instance
For example:
mysql> SELECT * FROM file_summary_by_event_name\G
...
*************************** 2. row ***************************
EVENT_NAME: wait/io/file/sql/binlog
COUNT_STAR: 31
SUM_TIMER_WAIT: 8243784888
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 265928484
MAX_TIMER_WAIT: 6490658832
...
mysql> SELECT * FROM file_summary_by_instance\G
...
*************************** 2. row ***************************
FILE_NAME: /var/mysql/share/english/errmsg.sys
EVENT_NAME: wait/io/file/sql/ERRMSG
OBJECT_INSTANCE_BEGIN: 4686193384
COUNT_STAR: 5
SUM_TIMER_WAIT: 13990154448
MIN_TIMER_WAIT: 26349624
AVG_TIMER_WAIT: 2798030607
MAX_TIMER_WAIT: 8150662536
...
TRUNCATE TABLE
is permitted for
file I/O summary tables. It resets the counters to zero rather
than removing rows.
The file I/O summary tables have these grouping columns to indicate how events are aggregated:
file_summary_by_event_name
has an EVENT_NAME
column. Each row
summarizes events for a given instrument.
file_summary_by_instance
has
FILE_NAME
,
EVENT_NAME
, and (as of MySQL 5.6.4)
OBJECT_INSTANCE_BEGIN
columns. Each row
summarizes events for a given file instrument instance.
The file I/O summary tables have the following summary columns
containing aggregated values. (Before MySQL 5.6.4, the tables contain only the COUNT_READ, COUNT_WRITE, SUM_NUMBER_OF_BYTES_READ, and SUM_NUMBER_OF_BYTES_WRITE aggregation columns.) Some columns are more general and have values that
are the same as the sum of the values of more fine-grained
columns. In this way, aggregations at higher levels are
available directly without the need for user-defined views
that sum lower-level columns.
COUNT_STAR
,
SUM_TIMER_WAIT
,
MIN_TIMER_WAIT
,
AVG_TIMER_WAIT
,
MAX_TIMER_WAIT
These columns aggregate all I/O operations.
COUNT_READ
,
SUM_TIMER_READ
,
MIN_TIMER_READ
,
AVG_TIMER_READ
,
MAX_TIMER_READ
,
SUM_NUMBER_OF_BYTES_READ
These columns aggregate all read operations, including
FGETS
, FGETC
,
FREAD
, and READ
.
COUNT_WRITE
,
SUM_TIMER_WRITE
,
MIN_TIMER_WRITE
,
AVG_TIMER_WRITE
,
MAX_TIMER_WRITE
,
SUM_NUMBER_OF_BYTES_WRITE
These columns aggregate all write operations, including
FPUTS
, FPUTC
,
FPRINTF
, VFPRINTF
,
FWRITE
, and PWRITE
.
COUNT_MISC
,
SUM_TIMER_MISC
,
MIN_TIMER_MISC
,
AVG_TIMER_MISC
,
MAX_TIMER_MISC
These columns aggregate all other I/O operations,
including CREATE
,
DELETE
, OPEN
,
CLOSE
, STREAM_OPEN
,
STREAM_CLOSE
, SEEK
,
TELL
, FLUSH
,
STAT
, FSTAT
,
CHSIZE
, RENAME
, and
SYNC
. There are no byte counts for
these operations.
The MySQL server uses several techniques to avoid I/O operations by caching information read from files, so it is possible that statements you might expect to result in I/O events will not. You may be able to ensure that I/O does occur by flushing caches or restarting the server to reset its state.
The following sections describe the table I/O and lock wait summary tables:
table_io_waits_summary_by_index_usage
:
Table I/O waits per index
table_io_waits_summary_by_table
:
Table I/O waits per table
table_lock_waits_summary_by_table
:
Table lock waits per table
The
table_io_waits_summary_by_table
table aggregates all table I/O wait events, as generated by
the wait/io/table/sql/handler
instrument.
The grouping is by table.
TRUNCATE TABLE
is permitted
for table I/O summary tables. It resets the counters to zero
rather than removing rows. Truncating this table also
truncates the
table_io_waits_summary_by_index_usage
table.
The
table_io_waits_summary_by_table
table has these grouping columns to indicate how events are
aggregated: OBJECT_TYPE
,
OBJECT_SCHEMA
, and
OBJECT_NAME
. These columns have the same
meaning as in the
events_waits_current
table.
They identify the table to which the row applies.
table_io_waits_summary_by_table
has the following summary columns containing aggregated
values. As indicated in the column descriptions, some
columns are more general and have values that are the same
as the sum of the values of more fine-grained columns. For
example, columns that aggregate all writes hold the sum of
the corresponding columns that aggregate inserts, updates,
and deletes. In this way, aggregations at higher levels are
available directly without the need for user-defined views
that sum lower-level columns.
COUNT_STAR
,
SUM_TIMER_WAIT
,
MIN_TIMER_WAIT
,
AVG_TIMER_WAIT
,
MAX_TIMER_WAIT
These columns aggregate all I/O operations. They are the same as the sum of the corresponding xxx_READ and xxx_WRITE columns.
COUNT_READ
,
SUM_TIMER_READ
,
MIN_TIMER_READ
,
AVG_TIMER_READ
,
MAX_TIMER_READ
These columns aggregate all read operations. They are the same as the sum of the corresponding xxx_FETCH columns.
COUNT_WRITE
,
SUM_TIMER_WRITE
,
MIN_TIMER_WRITE
,
AVG_TIMER_WRITE
,
MAX_TIMER_WRITE
These columns aggregate all write operations. They are the same as the sum of the corresponding xxx_INSERT, xxx_UPDATE, and xxx_DELETE columns.
COUNT_FETCH
,
SUM_TIMER_FETCH
,
MIN_TIMER_FETCH
,
AVG_TIMER_FETCH
,
MAX_TIMER_FETCH
These columns aggregate all fetch operations.
COUNT_INSERT
,
SUM_TIMER_INSERT
,
MIN_TIMER_INSERT
,
AVG_TIMER_INSERT
,
MAX_TIMER_INSERT
These columns aggregate all insert operations.
COUNT_UPDATE
,
SUM_TIMER_UPDATE
,
MIN_TIMER_UPDATE
,
AVG_TIMER_UPDATE
,
MAX_TIMER_UPDATE
These columns aggregate all update operations.
COUNT_DELETE
,
SUM_TIMER_DELETE
,
MIN_TIMER_DELETE
,
AVG_TIMER_DELETE
,
MAX_TIMER_DELETE
These columns aggregate all delete operations.
The
table_io_waits_summary_by_index_usage
table aggregates all table index I/O wait events, as
generated by the
wait/io/table/sql/handler
instrument. The
grouping is by table index.
TRUNCATE TABLE
is permitted
for table I/O summary tables. It resets the counters to zero
rather than removing rows. This table is also truncated by
truncation of the
table_io_waits_summary_by_table
table. A DDL operation that changes the index structure of a
table may cause the per-index statistics to be reset.
The structure of
table_io_waits_summary_by_index_usage
is nearly identical to
table_io_waits_summary_by_table
.
The only difference is the additional group column,
INDEX_NAME
, which corresponds to the name
of the index that was used when the table I/O wait event was
recorded:
A value of PRIMARY
indicates that
table I/O used the primary index.
A value of NULL
means that table I/O
used no index.
Inserts are counted against INDEX_NAME =
NULL
.
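Because inserts are counted against INDEX_NAME = NULL, it can be useful to look at read counts when hunting for table scans. For example, a query such as the following lists tables that are read most often without using any index (an illustrative sketch; the LIMIT is arbitrary):
mysql> SELECT OBJECT_SCHEMA, OBJECT_NAME, COUNT_READ, SUM_TIMER_READ
       FROM performance_schema.table_io_waits_summary_by_index_usage
       WHERE INDEX_NAME IS NULL
       ORDER BY COUNT_READ DESC LIMIT 10;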
The
table_lock_waits_summary_by_table
table aggregates all table lock wait events, as generated by
the wait/lock/table/sql/handler
instrument. The grouping is by table.
TRUNCATE TABLE
is permitted
for table lock summary tables. It resets the counters to
zero rather than removing rows.
This table contains information about internal and external locks:
An internal lock corresponds to a lock in the SQL layer.
This is currently implemented by a call to
thr_lock()
. In event rows, these
locks are distinguished by the
OPERATION
column, which will have one
of these values:
read normal
read with shared locks
read high priority
read no insert
write allow write
write concurrent insert
write delayed
write low priority
write normal
An external lock corresponds to a lock in the storage
engine layer. This is currently implemented by a call to
handler::external_lock()
. In event
rows, these locks are distinguished by the
OPERATION
column, which will have one
of these values:
read external
write external
The
table_lock_waits_summary_by_table
table has these grouping columns to indicate how events are
aggregated: OBJECT_TYPE
,
OBJECT_SCHEMA
, and
OBJECT_NAME
. These columns have the same
meaning as in the
events_waits_current
table.
They identify the table to which the row applies.
table_lock_waits_summary_by_table
has the following summary columns containing aggregated
values. As indicated in the column descriptions, some
columns are more general and have values that are the same
as the sum of the values of more fine-grained columns. For
example, columns that aggregate all writes hold the sum of
the corresponding columns that aggregate inserts, updates,
and deletes. In this way, aggregations at higher levels are
available directly without the need for user-defined views
that sum lower-level columns.
COUNT_STAR
,
SUM_TIMER_WAIT
,
MIN_TIMER_WAIT
,
AVG_TIMER_WAIT
,
MAX_TIMER_WAIT
These columns aggregate all lock operations. They are the same as the sum of the corresponding xxx_READ and xxx_WRITE columns.
COUNT_READ
,
SUM_TIMER_READ
,
MIN_TIMER_READ
,
AVG_TIMER_READ
,
MAX_TIMER_READ
These columns aggregate all read-lock operations. They are the same as the sum of the corresponding xxx_READ_NORMAL, xxx_READ_WITH_SHARED_LOCKS, xxx_READ_HIGH_PRIORITY, and xxx_READ_NO_INSERT columns.
COUNT_WRITE
,
SUM_TIMER_WRITE
,
MIN_TIMER_WRITE
,
AVG_TIMER_WRITE
,
MAX_TIMER_WRITE
These columns aggregate all write-lock operations. They are the same as the sum of the corresponding xxx_WRITE_ALLOW_WRITE, xxx_WRITE_CONCURRENT_INSERT, xxx_WRITE_DELAYED, xxx_WRITE_LOW_PRIORITY, and xxx_WRITE_NORMAL columns.
COUNT_READ_NORMAL
,
SUM_TIMER_READ_NORMAL
,
MIN_TIMER_READ_NORMAL
,
AVG_TIMER_READ_NORMAL
,
MAX_TIMER_READ_NORMAL
These columns aggregate internal read locks.
COUNT_READ_WITH_SHARED_LOCKS
,
SUM_TIMER_READ_WITH_SHARED_LOCKS
,
MIN_TIMER_READ_WITH_SHARED_LOCKS
,
AVG_TIMER_READ_WITH_SHARED_LOCKS
,
MAX_TIMER_READ_WITH_SHARED_LOCKS
These columns aggregate internal read locks.
COUNT_READ_HIGH_PRIORITY
,
SUM_TIMER_READ_HIGH_PRIORITY
,
MIN_TIMER_READ_HIGH_PRIORITY
,
AVG_TIMER_READ_HIGH_PRIORITY
,
MAX_TIMER_READ_HIGH_PRIORITY
These columns aggregate internal read locks.
COUNT_READ_NO_INSERT
,
SUM_TIMER_READ_NO_INSERT
,
MIN_TIMER_READ_NO_INSERT
,
AVG_TIMER_READ_NO_INSERT
,
MAX_TIMER_READ_NO_INSERT
These columns aggregate internal read locks.
COUNT_READ_EXTERNAL
,
SUM_TIMER_READ_EXTERNAL
,
MIN_TIMER_READ_EXTERNAL
,
AVG_TIMER_READ_EXTERNAL
,
MAX_TIMER_READ_EXTERNAL
These columns aggregate external read locks.
COUNT_WRITE_ALLOW_WRITE
,
SUM_TIMER_WRITE_ALLOW_WRITE
,
MIN_TIMER_WRITE_ALLOW_WRITE
,
AVG_TIMER_WRITE_ALLOW_WRITE
,
MAX_TIMER_WRITE_ALLOW_WRITE
These columns aggregate internal write locks.
COUNT_WRITE_CONCURRENT_INSERT
,
SUM_TIMER_WRITE_CONCURRENT_INSERT
,
MIN_TIMER_WRITE_CONCURRENT_INSERT
,
AVG_TIMER_WRITE_CONCURRENT_INSERT
,
MAX_TIMER_WRITE_CONCURRENT_INSERT
These columns aggregate internal write locks.
COUNT_WRITE_DELAYED
,
SUM_TIMER_WRITE_DELAYED
,
MIN_TIMER_WRITE_DELAYED
,
AVG_TIMER_WRITE_DELAYED
,
MAX_TIMER_WRITE_DELAYED
These columns aggregate internal write locks.
COUNT_WRITE_LOW_PRIORITY
,
SUM_TIMER_WRITE_LOW_PRIORITY
,
MIN_TIMER_WRITE_LOW_PRIORITY
,
AVG_TIMER_WRITE_LOW_PRIORITY
,
MAX_TIMER_WRITE_LOW_PRIORITY
These columns aggregate internal write locks.
COUNT_WRITE_NORMAL
,
SUM_TIMER_WRITE_NORMAL
,
MIN_TIMER_WRITE_NORMAL
,
AVG_TIMER_WRITE_NORMAL
,
MAX_TIMER_WRITE_NORMAL
These columns aggregate internal write locks.
COUNT_WRITE_EXTERNAL
,
SUM_TIMER_WRITE_EXTERNAL
,
MIN_TIMER_WRITE_EXTERNAL
,
AVG_TIMER_WRITE_EXTERNAL
,
MAX_TIMER_WRITE_EXTERNAL
These columns aggregate external write locks.
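For example, the tables that accumulate the most lock wait time can be listed with a query such as this (illustrative; the LIMIT is arbitrary):
mysql> SELECT OBJECT_SCHEMA, OBJECT_NAME, COUNT_STAR, SUM_TIMER_WAIT
       FROM performance_schema.table_lock_waits_summary_by_table
       ORDER BY SUM_TIMER_WAIT DESC LIMIT 5;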
The connection summary tables are similar to the corresponding events_xxx_summary_by_thread_by_event_name tables, except that aggregation occurs per account, user, or host, rather than by thread.
The Performance Schema maintains summary tables that aggregate connection statistics by event name and account, user, or host. Separate groups of tables are available that aggregate wait, stage, and statement events, which results in this set of connection summary tables:
events_waits_summary_by_account_by_event_name
:
Wait events summarized per account and event name
events_waits_summary_by_user_by_event_name
:
Wait events summarized per user name and event name
events_waits_summary_by_host_by_event_name
:
Wait events summarized per host name and event name
events_stages_summary_by_account_by_event_name
:
Stage events summarized per account and event name
events_stages_summary_by_user_by_event_name
:
Stage events summarized per user name and event name
events_stages_summary_by_host_by_event_name
:
Stage events summarized per host name and event name
events_statements_summary_by_account_by_event_name
:
Statement events summarized per account and event name
events_statements_summary_by_user_by_event_name
:
Statement events summarized per user name and event name
events_statements_summary_by_host_by_event_name
:
Statement events summarized per host name and event name
In other words, the connection summary tables have names of the form events_xxx_summary_by_yyy_by_event_name, where xxx is waits, stages, or statements, and yyy is account, user, or host.
The connection summary tables provide an intermediate aggregation level:
xxx_summary_by_thread_by_event_name tables are more detailed than connection summary tables
xxx_summary_global_by_event_name tables are less detailed than connection summary tables
TRUNCATE TABLE
is permitted for
connection summary tables. It resets the counters to zero
rather than removing rows. In addition, connection summary
tables are implicitly truncated if a connection table on which
they depend is truncated.
Table 20.1, “Effect of Implicit Table Truncation”,
describes the relationship between connection table truncation
and implicitly truncated tables.
Table 20.1. Effect of Implicit Table Truncation
Truncated Table | Implicitly Truncated Summary Tables
accounts | Tables with names matching %_by_account%, %_by_thread%
hosts | Tables with names matching %_by_account%, %_by_host%, %_by_thread%
users | Tables with names matching %_by_account%, %_by_user%, %_by_thread%
The connection summary tables have these grouping columns to indicate how events are aggregated:
For tables with _by_account
in the
name, the USER
,
HOST
, and EVENT_NAME
columns group events per account and event name.
For tables with _by_host
in the name,
the HOST
and
EVENT_NAME
columns group events per
host name and event name.
For tables with _by_user
in the name,
the USER
and
EVENT_NAME
columns group events per
user name and event name.
The connection summary tables have these summary columns
containing aggregated values: COUNT_STAR
,
SUM_TIMER_WAIT
,
MIN_TIMER_WAIT
,
AVG_TIMER_WAIT
, and
MAX_TIMER_WAIT
. These are similar to the
columns of the same names in the
events_waits_summary_by_instance
table.
The connection summary tables were added in MySQL 5.6.3.
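For example, a query along these lines shows which users spend the most time executing statements (a sketch; the LIMIT is arbitrary):
mysql> SELECT USER, EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
       FROM performance_schema.events_statements_summary_by_user_by_event_name
       ORDER BY SUM_TIMER_WAIT DESC LIMIT 5;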
The socket summary tables aggregate timer and byte count information for socket operations:
socket_summary_by_instance
:
Aggregate timer and byte count statistics generated by the
wait/io/socket/*
instruments for all
socket I/O operations, per socket instance. When a
connection terminates, the row in
socket_summary_by_instance
corresponding to it is deleted.
socket_summary_by_event_name
:
Aggregate timer and byte count statistics generated by the
wait/io/socket/*
instruments for all
socket I/O operations, per socket instrument.
The socket summary tables do not aggregate waits generated by
idle
events while sockets are waiting for
the next request from the client. For idle
event aggregations, use the wait-event summary tables; see
Section 20.9.8.1, “Event Wait Summary Tables”.
TRUNCATE TABLE
is permitted for
socket summary tables. It resets the counters to zero rather than removing rows.
The socket summary tables have these grouping columns to indicate how events are aggregated:
socket_summary_by_instance
has an OBJECT_INSTANCE_BEGIN
column.
Each row summarizes events for a given object.
socket_summary_by_event_name
has an EVENT_NAME
column. Each row
summarizes events for a given instrument.
The socket summary tables have these summary columns containing aggregated values:
COUNT_STAR
,
SUM_TIMER_WAIT
,
MIN_TIMER_WAIT
,
AVG_TIMER_WAIT
,
MAX_TIMER_WAIT
These columns aggregate all operations.
COUNT_READ
,
SUM_TIMER_READ
,
MIN_TIMER_READ
,
AVG_TIMER_READ
,
MAX_TIMER_READ
,
SUM_NUMBER_OF_BYTES_READ
These columns aggregate all receive operations
(RECV
, RECVFROM
, and
RECVMSG
).
COUNT_WRITE
,
SUM_TIMER_WRITE
,
MIN_TIMER_WRITE
,
AVG_TIMER_WRITE
,
MAX_TIMER_WRITE
,
SUM_NUMBER_OF_BYTES_WRITE
These columns aggregate all send operations
(SEND
, SENDTO
, and
SENDMSG
).
COUNT_MISC
,
SUM_TIMER_MISC
,
MIN_TIMER_MISC
,
AVG_TIMER_MISC
,
MAX_TIMER_MISC
These columns aggregate all other socket operations, such
as CONNECT
, LISTEN
,
ACCEPT
, CLOSE
, and
SHUTDOWN
. There are no byte counts for
these operations.
The socket_summary_by_instance
table also has an EVENT_NAME
column that
indicates the class of the socket:
client_connection
,
server_tcpip_socket
,
server_unix_socket
. This column can be
grouped on to isolate, for example, client activity from that
of the server listening sockets.
These tables were added in MySQL 5.6.3.
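For example, per-instrument network traffic can be examined with a query such as this (an illustrative sketch using only the summary columns described above):
mysql> SELECT EVENT_NAME, COUNT_READ, SUM_NUMBER_OF_BYTES_READ,
              COUNT_WRITE, SUM_NUMBER_OF_BYTES_WRITE
       FROM performance_schema.socket_summary_by_event_name;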
The following sections describe tables that do not fall into the table categories discussed in the preceding sections:
host_cache
: Information from
the internal host cache.
performance_timers
: Which event
timers are available.
threads
: Information about
server threads.
The host_cache
table provides
access to the contents of the host cache, which contains
client host name and IP address information and is used to
avoid DNS lookups. (See Section 8.11.5.2, “DNS Lookup Optimization and the Host Cache”.) The
host_cache
table exposes the
contents of the host cache so that it can be examined using
SELECT
statements. The
Performance Schema must be enabled or this table is empty.
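For example, hosts that have accumulated connection errors can be listed with a query such as the following (a sketch; the columns used are described below):
mysql> SELECT IP, HOST, HOST_VALIDATED, SUM_CONNECT_ERRORS, COUNT_HOST_BLOCKED_ERRORS
       FROM performance_schema.host_cache
       WHERE SUM_CONNECT_ERRORS > 0;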
FLUSH HOSTS
and TRUNCATE
TABLE host_cache
have the same effect: They clear
the host cache. This also empties the
host_cache
table (because it is
the visible representation of the cache) and unblocks any
blocked hosts (see Section C.5.2.6, “Host 'host_name' is blocked”).
FLUSH HOSTS requires the RELOAD privilege.
TRUNCATE TABLE
requires the
DROP
privilege for the
host_cache
table.
The host_cache
table has these
columns:
IP
The IP address of the client that connected to the server, expressed as a string.
HOST
The resolved DNS host name for that client IP, or
NULL
if the name is unknown.
HOST_VALIDATED
Whether the IP-to-host name-to-IP DNS resolution was
performed successfully for the client IP. If
HOST_VALIDATED
is
YES
, the HOST
column
is used as the host name corresponding to the IP so that
calls to DNS can be avoided. While
HOST_VALIDATED
is
NO
, DNS resolution is attempted again
for each connect, until it eventually completes with
either a valid result or a permanent error. This
information enables the server to avoid caching bad or
missing host names during temporary DNS failures, which
would affect clients forever.
SUM_CONNECT_ERRORS
The number of connection errors that are deemed
“blocking” (assessed against the
max_connect_errors
system
variable). Currently, only protocol handshake errors are
counted, and only for hosts that passed validation
(HOST_VALIDATED = YES
).
COUNT_HOST_BLOCKED_ERRORS
The number of connections that were blocked because
SUM_CONNECT_ERRORS
exceeded the value
of the max_connect_errors
system variable.
COUNT_NAMEINFO_TRANSIENT_ERRORS
The number of transient errors during IP-to-host name DNS resolution.
COUNT_NAMEINFO_PERMANENT_ERRORS
The number of permanent errors during IP-to-host name DNS resolution.
COUNT_FORMAT_ERRORS
The number of host name format errors. MySQL does not
perform matching of Host
column values
in the mysql.user
table against host
names for which one or more of the initial components of
the name are entirely numeric, such as
1.2.example.com
. The client IP address
is used instead. For the rationale why this type of
matching does not occur, see
Section 6.2.3, “Specifying Account Names”.
COUNT_ADDRINFO_TRANSIENT_ERRORS
The number of transient errors during host name-to-IP reverse DNS resolution.
COUNT_ADDRINFO_PERMANENT_ERRORS
The number of permanent errors during host name-to-IP reverse DNS resolution.
COUNT_FCRDNS_ERRORS
The number of forward-confirmed reverse DNS errors. These errors occur when IP-to-host name-to-IP DNS resolution produces an IP address that does not match the client originating IP address.
COUNT_HOST_ACL_ERRORS
The number of errors that occur because no user from the
client host can possibly log in. In such cases, the server
returns
ER_HOST_NOT_PRIVILEGED
and
does not even ask for a user name or password.
COUNT_NO_AUTH_PLUGIN_ERRORS
The number of errors due to requests for an unavailable authentication plugin. A plugin can be unavailable if, for example, it was never loaded or a load attempt failed.
COUNT_AUTH_PLUGIN_ERRORS
The number of errors reported by authentication plugins.
An authentication plugin can report different error codes
to indicate the root cause of a failure. Depending on the
type of error, one of these columns is incremented:
COUNT_AUTHENTICATION_ERRORS
,
COUNT_AUTH_PLUGIN_ERRORS
,
COUNT_HANDSHAKE_ERRORS
. New return
codes are an optional extension to the existing plugin
API. Unknown or unexpected plugin errors are counted in
the COUNT_AUTH_PLUGIN_ERRORS
column.
COUNT_HANDSHAKE_ERRORS
The number of errors detected at the wire protocol level.
COUNT_PROXY_USER_ERRORS
The number of errors detected when a proxy user A is proxied to another user B who does not exist.
COUNT_PROXY_USER_ACL_ERRORS
The number of errors detected when a proxy user A is
proxied to another user B who does exist but for whom A
does not have the PROXY
privilege.
COUNT_AUTHENTICATION_ERRORS
The number of errors caused by failed authentication.
COUNT_SSL_ERRORS
The number of errors due to SSL problems.
COUNT_MAX_USER_CONNECTIONS_ERRORS
The number of errors caused by exceeding per-user connection quotas. See Section 6.3.4, “Setting Account Resource Limits”.
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS
The number of errors caused by exceeding per-user connections-per-hour quotas. See Section 6.3.4, “Setting Account Resource Limits”.
COUNT_DEFAULT_DATABASE_ERRORS
The number of errors related to the default database. For example, the database did not exist or the user had no privileges for accessing it.
COUNT_INIT_CONNECT_ERRORS
The number of errors caused by execution failures of
statements in the
init_connect
system
variable value.
COUNT_LOCAL_ERRORS
The number of errors local to the server implementation and not related to the network, authentication, or authorization. For example, out-of-memory conditions fall into this category.
COUNT_UNKNOWN_ERRORS
The number of other, unknown errors not accounted for by
other columns in this table. This column is reserved for
future use, in case new error conditions must be reported,
and if preserving the backward compatibility and table
structure of the host_cache
table is required.
FIRST_SEEN
The timestamp of the first connection attempt seen from
the client in the IP
column.
LAST_SEEN
The timestamp of the last connection attempt seen from the
client in the IP
column.
FIRST_ERROR_SEEN
The timestamp of the first error seen from the client in
the IP
column.
LAST_ERROR_SEEN
The timestamp of the last error seen from the client in
the IP
column.
The host_cache
table was added in
MySQL 5.6.5.
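For example, to see which client hosts are accumulating connection errors and might be approaching the max_connect_errors blocking threshold, a query along these lines can be used (the ordering is illustrative):
mysql> SELECT IP, HOST, HOST_VALIDATED, SUM_CONNECT_ERRORS
    ->   FROM host_cache
    ->  ORDER BY SUM_CONNECT_ERRORS DESC;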
The performance_timers
table
shows which event timers are available:
mysql> SELECT * FROM performance_timers;
+-------------+-----------------+------------------+----------------+
| TIMER_NAME | TIMER_FREQUENCY | TIMER_RESOLUTION | TIMER_OVERHEAD |
+-------------+-----------------+------------------+----------------+
| CYCLE | 2389029850 | 1 | 72 |
| NANOSECOND | NULL | NULL | NULL |
| MICROSECOND | 1000000 | 1 | 585 |
| MILLISECOND | 1035 | 1 | 738 |
| TICK | 101 | 1 | 630 |
+-------------+-----------------+------------------+----------------+
If the values associated with a given timer name are
NULL
, that timer is not supported on your
platform. The rows that do not contain NULL
indicate which timers you can use in
setup_timers
.
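For example, the following statements list the usable timers and then, assuming the CYCLE timer is supported on your platform, select it for wait events:
mysql> SELECT * FROM performance_timers WHERE TIMER_FREQUENCY IS NOT NULL;
mysql> UPDATE setup_timers SET TIMER_NAME = 'CYCLE' WHERE NAME = 'wait';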
The performance_timers
table has
these columns:
TIMER_NAME
The name by which to refer to the timer when configuring
the setup_timers
table.
TIMER_FREQUENCY
The number of timer units per second. For a cycle timer,
the frequency is generally related to the CPU speed. For
example, on a system with a 2.4GHz processor, the CYCLE frequency may be close to 2400000000.
TIMER_RESOLUTION
Indicates the number of timer units by which timer values increase. If a timer has a resolution of 10, its value increases by 10 each time.
TIMER_OVERHEAD
The minimal number of cycles of overhead to obtain one timing with the given timer. The Performance Schema determines this value by invoking the timer 20 times during initialization and picking the smallest value. The total overhead really is twice this amount because the instrumentation invokes the timer at the start and end of each event. The timer code is called only for timed events, so this overhead does not apply for nontimed events.
The threads
table contains a row
for each server thread. Each row contains information about a
thread and indicates whether monitoring is enabled for it:
mysql> SELECT * FROM threads\G
*************************** 1. row ***************************
THREAD_ID: 1
NAME: thread/sql/main
TYPE: BACKGROUND
PROCESSLIST_ID: NULL
PROCESSLIST_USER: NULL
PROCESSLIST_HOST: NULL
PROCESSLIST_DB: NULL
PROCESSLIST_COMMAND: NULL
PROCESSLIST_TIME: 80284
PROCESSLIST_STATE: NULL
PROCESSLIST_INFO: NULL
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
...
*************************** 4. row ***************************
THREAD_ID: 51
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 34
PROCESSLIST_USER: paul
PROCESSLIST_HOST: localhost
PROCESSLIST_DB: performance_schema
PROCESSLIST_COMMAND: Query
PROCESSLIST_TIME: 0
PROCESSLIST_STATE: Sending data
PROCESSLIST_INFO: SELECT * FROM threads
PARENT_THREAD_ID: 1
ROLE: NULL
INSTRUMENTED: YES
...
If you have the PROCESS
privilege, you can see all threads. Otherwise, you can see
only your own threads (that is, threads associated with the
MySQL account that you are using).
The threads
table has these
columns:
THREAD_ID
A unique thread identifier.
NAME
The name associated with the thread instrumentation code
in the server. For example,
thread/sql/one_connection
corresponds
to the thread function in the code responsible for
handling a user connection, and
thread/sql/main
stands for the
main()
function of the server.
TYPE
The thread type, either FOREGROUND
or
BACKGROUND
. User connection threads are
foreground threads. Threads associated with internal
server activity are background threads. Examples are
internal InnoDB
threads, “binlog
dump” threads sending information to slaves, and
slave I/O and SQL threads.
PROCESSLIST_ID
For threads that are displayed in the
INFORMATION_SCHEMA.PROCESSLIST
table, this is the PROCESSLIST.ID
value, which is also the value that
CONNECTION_ID()
would
return within that thread. For background threads (threads
not associated with a user connection),
PROCESSLIST_ID
is 0, so the values are
not unique.
PROCESSLIST_USER
The user associated with a foreground thread,
NULL
for a background thread.
PROCESSLIST_HOST
The host name of the client associated with a foreground
thread, NULL
for a background thread.
PROCESSLIST_DB
The default database for the thread, or
NULL
if there is none.
PROCESSLIST_COMMAND
The type of command the thread is executing. For
descriptions for thread commands, see
Section 8.12.5, “Examining Thread Information”. The value of this
column corresponds to the
COM_xxx commands of the client/server protocol and Com_xxx status variables. See Section 5.1.6, “Server Status Variables”.
PROCESSLIST_TIME
The time in seconds that the thread has been in its current state.
PROCESSLIST_STATE
An action, event, or state that indicates what the thread
is doing. For descriptions of
PROCESSLIST_STATE
values, see
Section 8.12.5, “Examining Thread Information”.
Most states correspond to very quick operations. If a thread stays in a given state for many seconds, there might be a problem that needs to be investigated.
PROCESSLIST_INFO
The statement the thread is executing, or
NULL
if it is not executing any
statement. The statement might be the one sent to the
server, or an innermost statement if the statement
executes other statements. For example, if a
CALL
statement executes a stored
procedure that is executing a
SELECT
statement, the
PROCESSLIST_INFO
value shows the
SELECT
statement.
PARENT_THREAD_ID
If this thread is a subthread (spawned by another thread),
this is the THREAD_ID
value of the
spawning thread. Thread spawning occurs, for example, to
handle insertion of rows from INSERT
DELAYED
statements.
ROLE
Unused.
INSTRUMENTED
Whether the thread is instrumented:
For foreground threads, the initial
INSTRUMENTED
value is determined by
whether the user account associated with the thread
matches any row in the
setup_actors
table.
Matching is based on the values of the
PROCESSLIST_USER
and
PROCESSLIST_HOST
columns.
If the thread spawns a subthread, matching occurs again for the subthread.
For background threads,
INSTRUMENTED
is
YES
by default.
setup_actors
is not
consulted because there is no associated user for
background threads.
For any thread, its INSTRUMENTED
value can be changed during the lifetime of the
thread. This is the only
threads
table column that
can be modified.
For thread monitoring to occur, these things must be true:
The thread_instrumentation
consumer
in the setup_consumers
table must be YES
.
The thread.INSTRUMENTED
column must
be YES
.
Monitoring occurs only for those thread events
produced from instruments that are enabled, as
specified in the
setup_instruments
table.
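For example, the following statements enable the thread_instrumentation consumer and then check which threads are currently not instrumented. This is only a quick sanity check, not a required procedure:
mysql> UPDATE setup_consumers SET ENABLED = 'YES'
    ->  WHERE NAME = 'thread_instrumentation';
mysql> SELECT THREAD_ID, NAME, INSTRUMENTED FROM threads WHERE INSTRUMENTED = 'NO';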
The initial contents of the
threads
table are based on the
threads in existence when Performance Schema initialization
occurs. Thereafter, a new row is added each time the server
creates a thread.
Removal of rows from the threads
table occurs when threads end. For a thread associated with a
client session, removal occurs when the session ends. If a
client has auto-reconnect enabled and the session reconnects
after a disconnect, the session will be associated with a new
row in the threads
table (one
that has a different PROCESSLIST_ID
value).
The initial INSTRUMENTED
value for the new
thread may be different from that of the original thread: The
setup_actors
table may have
changed in the meantime, and if the
INSTRUMENTED
value for the original thread
was changed after it was initialized, that change does not
carry over to the new thread.
The threads
table columns with
names having a prefix of PROCESSLIST_
provide
information similar to that available from the
INFORMATION_SCHEMA.PROCESSLIST
table or the SHOW PROCESSLIST
statement. Thus, all three sources provide thread-monitoring
information. Use of threads
differs from use of the other two sources in these ways:
Access to threads
does not
require a mutex and has minimal impact on server
performance.
INFORMATION_SCHEMA.PROCESSLIST
and SHOW PROCESSLIST
have
negative performance consequences because they require a
mutex.
threads
provides additional
information for each thread, such as whether it is a
foreground or background thread, and the location within
the server associated with the thread.
threads
provides information
about background threads, so it can be used to monitor
activity the other thread information sources cannot.
You can turn thread monitoring on or off. To control
monitoring of existing threads, set the
INSTRUMENTED
column of the
threads
table. To control the
initial INSTRUMENTED
value for new
foreground threads, use the
setup_actors
table.
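For example, the following statements disable monitoring for existing connections of a hypothetical batch_user account and arrange for future connections of a hypothetical app_user account to be instrumented (both account names are illustrative):
mysql> UPDATE threads SET INSTRUMENTED = 'NO'
    ->  WHERE PROCESSLIST_USER = 'batch_user';
mysql> INSERT INTO setup_actors (HOST, USER, ROLE) VALUES ('%', 'app_user', '%');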
For these reasons, DBAs who perform server monitoring using
INFORMATION_SCHEMA.PROCESSLIST
or
SHOW PROCESSLIST
may wish to
monitor using threads
instead.
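For example, a query such as the following lists background threads, which INFORMATION_SCHEMA.PROCESSLIST and SHOW PROCESSLIST cannot show:
mysql> SELECT THREAD_ID, NAME, TYPE, PROCESSLIST_ID
    ->   FROM threads
    ->  WHERE TYPE = 'BACKGROUND';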
Options can be specified at server startup on the command line or in option files to configure Performance Schema instruments and consumers. Runtime configuration is also possible in many cases (see Section 20.2.3, “Performance Schema Runtime Configuration”), but startup configuration must be used when runtime configuration is too late to affect instruments that have already been initialized during the startup process.
The options have the following meanings:
--performance_schema_consumer_consumer_name=value
Configure a Performance Schema consumer. For details, see Section 20.2.2, “Performance Schema Startup Configuration”.
--performance_schema_instrument=instrument_name=value
Configure a Performance Schema instrument. For details, see Section 20.2.2, “Performance Schema Startup Configuration”.
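For example, an option file might use these options to enable all mutex instruments and a stage consumer at startup; the instrument pattern and consumer name shown here are illustrative:
[mysqld]
performance_schema_instrument='wait/synch/mutex/%=ON'
performance_schema_consumer_events_stages_current=ON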
The Performance Schema implements several system variables that provide configuration information:
mysql> SHOW VARIABLES LIKE 'perf%';
+--------------------------------------------------------+---------+
| Variable_name | Value |
+--------------------------------------------------------+---------+
| performance_schema | ON |
| performance_schema_accounts_size | 100 |
| performance_schema_digests_size | 200 |
| performance_schema_events_stages_history_long_size | 10000 |
| performance_schema_events_stages_history_size | 10 |
| performance_schema_events_statements_history_long_size | 10000 |
| performance_schema_events_statements_history_size | 10 |
| performance_schema_events_waits_history_long_size | 10000 |
| performance_schema_events_waits_history_size | 10 |
| performance_schema_hosts_size | 100 |
| performance_schema_max_cond_classes | 80 |
| performance_schema_max_cond_instances | 1000 |
| performance_schema_max_file_classes | 50 |
| performance_schema_max_file_handles | 32768 |
| performance_schema_max_file_instances | 10000 |
| performance_schema_max_mutex_classes | 200 |
| performance_schema_max_mutex_instances | 1000000 |
| performance_schema_max_rwlock_classes | 30 |
| performance_schema_max_rwlock_instances | 1000000 |
| performance_schema_max_socket_classes | 10 |
| performance_schema_max_socket_instances | 1000 |
| performance_schema_max_stage_classes | 150 |
| performance_schema_max_statement_classes | 165 |
| performance_schema_max_table_handles | 10000 |
| performance_schema_max_table_instances | 1000 |
| performance_schema_max_thread_classes | 50 |
| performance_schema_max_thread_instances | 1000 |
| performance_schema_session_connect_attrs_size | 512 |
| performance_schema_setup_actors_size | 100 |
| performance_schema_setup_objects_size | 100 |
| performance_schema_users_size | 100 |
+--------------------------------------------------------+---------+
As of MySQL 5.6.6, the Performance Schema automatically sizes the values of several of its parameters at server startup if they are not set explicitly. For more information, see Section 20.2.2, “Performance Schema Startup Configuration”.
Table 20.2. Performance Schema Variable Reference
The variables have the following meanings:
performance_schema
Command-Line Format | --performance_schema=# | ||
Option-File Format | performance_schema=# | ||
Option Sets Variable | Yes, performance_schema | ||
Variable Name | performance_schema | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | boolean | ||
Default | OFF | ||
Permitted Values (>= 5.6.6) | |||
Type | boolean | ||
Default | ON |
The value of this variable is ON
or
OFF
to indicate whether the Performance
Schema is enabled. The value is ON by default as of MySQL 5.6.6 and OFF before that. At server startup, you can
specify this variable with no value or a value of
ON
or 1 to enable it, or with a value of
OFF
or 0 to disable it.
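For example, to enable the Performance Schema explicitly regardless of the server version and verify the setting afterward:
[mysqld]
performance_schema=ON

mysql> SHOW VARIABLES LIKE 'performance_schema';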
performance_schema_accounts_size
Version Introduced | 5.6.3 | ||
Variable Name | performance_schema_accounts_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10 | ||
Range | 0 .. 1048576 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) | ||
Range | -1 .. 1048576 |
The number of rows in the
accounts
table. If this variable
is 0, the Performance Schema does not maintain connection
statistics in the accounts
table.
This variable was added in MySQL 5.6.3.
performance_schema_digests_size
Version Introduced | 5.6.5 | ||
Command-Line Format | --performance_schema_digests_size=# | ||
Option-File Format | performance_schema_digests_size=# | ||
Option Sets Variable | Yes, performance_schema_digests_size | ||
Variable Name | performance_schema_digests_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 200 | ||
Range | 0 .. 200 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) | ||
Range | -1 .. 200 |
The maximum number of rows in the
events_statements_summary_by_digest
table. This variable was added in MySQL 5.6.5.
performance_schema_events_stages_history_long_size
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_events_stages_history_long_size=# | ||
Option-File Format | performance_schema_events_stages_history_long_size=# | ||
Option Sets Variable | Yes, performance_schema_events_stages_history_long_size | ||
Variable Name | performance_schema_events_stages_history_long_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The number of rows in the
events_stages_history_long
table.
This variable was added in MySQL 5.6.3.
performance_schema_events_stages_history_size
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_events_stages_history_size=# | ||
Option-File Format | performance_schema_events_stages_history_size=# | ||
Option Sets Variable | Yes, performance_schema_events_stages_history_size | ||
Variable Name | performance_schema_events_stages_history_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The number of rows per thread in the
events_stages_history
table. This
variable was added in MySQL 5.6.3.
performance_schema_events_statements_history_long_size
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_events_statements_history_long_size=# | ||
Option-File Format | performance_schema_events_statements_history_long_size=# | ||
Option Sets Variable | Yes, performance_schema_events_statements_history_long_size | ||
Variable Name | performance_schema_events_statements_history_long_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The number of rows in the
events_statements_history_long
table. This variable was added in MySQL 5.6.3.
performance_schema_events_statements_history_size
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_events_statements_history_size=# | ||
Option-File Format | performance_schema_events_statements_history_size=# | ||
Option Sets Variable | Yes, performance_schema_events_statements_history_size | ||
Variable Name | performance_schema_events_statements_history_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The number of rows per thread in the
events_statements_history
table.
This variable was added in MySQL 5.6.3.
performance_schema_events_waits_history_long_size
Command-Line Format | --performance_schema_events_waits_history_long_size=# | ||
Option-File Format | performance_schema_events_waits_history_long_size=# | ||
Option Sets Variable | Yes, performance_schema_events_waits_history_long_size | ||
Variable Name | performance_schema_events_waits_history_long_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The number of rows in the
events_waits_history_long
table.
performance_schema_events_waits_history_size
Command-Line Format | --performance_schema_events_waits_history_size=# | ||
Option-File Format | performance_schema_events_waits_history_size=# | ||
Option Sets Variable | Yes, performance_schema_events_waits_history_size | ||
Variable Name | performance_schema_events_waits_history_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The number of rows per thread in the
events_waits_history
table.
performance_schema_hosts_size
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_hosts_size=# | ||
Option-File Format | performance_schema_hosts_size=# | ||
Option Sets Variable | Yes, performance_schema_hosts_size | ||
Variable Name | performance_schema_hosts_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10 | ||
Range | 0 .. 1048576 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) | ||
Range | -1 .. 1048576 |
The number of rows in the hosts
table. If this variable is 0, the Performance Schema does not
maintain connection statistics in the
hosts
table. This variable was
added in MySQL 5.6.3.
performance_schema_max_cond_classes
Command-Line Format | --performance_schema_max_cond_classes=# | ||
Option-File Format | performance_schema_max_cond_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_cond_classes | ||
Variable Name | performance_schema_max_cond_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 80 |
The maximum number of condition instruments.
performance_schema_max_cond_instances
Command-Line Format | --performance_schema_max_cond_instances=# | ||
Option-File Format | performance_schema_max_cond_instances=# | ||
Option Sets Variable | Yes, performance_schema_max_cond_instances | ||
Variable Name | performance_schema_max_cond_instances | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 1000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of instrumented condition objects.
performance_schema_max_file_classes
Command-Line Format | --performance_schema_max_file_classes=# | ||
Option-File Format | performance_schema_max_file_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_file_classes | ||
Variable Name | performance_schema_max_file_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 50 |
The maximum number of file instruments.
performance_schema_max_file_handles
Command-Line Format | --performance_schema_max_file_handles=# | ||
Option-File Format | performance_schema_max_file_handles=# | ||
Option Sets Variable | Yes, performance_schema_max_file_handles | ||
Variable Name | performance_schema_max_file_handles | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 32768 |
The maximum number of opened file objects.
The value of
performance_schema_max_file_handles
should be greater than the value of
open_files_limit
:
open_files_limit
affects the
maximum number of open file handles the server can support and
performance_schema_max_file_handles
affects how many of these file handles can be instrumented.
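For example, an option file might keep the two settings consistent as follows; the values are illustrative:
[mysqld]
open_files_limit=8192
performance_schema_max_file_handles=32768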
performance_schema_max_file_instances
Command-Line Format | --performance_schema_max_file_instances=# | ||
Option-File Format | performance_schema_max_file_instances=# | ||
Option Sets Variable | Yes, performance_schema_max_file_instances | ||
Variable Name | performance_schema_max_file_instances | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of instrumented file objects.
performance_schema_max_mutex_classes
Command-Line Format | --performance_schema_max_mutex_classes=# | ||
Option-File Format | performance_schema_max_mutex_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_mutex_classes | ||
Variable Name | performance_schema_max_mutex_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 200 |
The maximum number of mutex instruments.
performance_schema_max_mutex_instances
Command-Line Format | --performance_schema_max_mutex_instances=# | ||
Option-File Format | performance_schema_max_mutex_instances=# | ||
Option Sets Variable | Yes, performance_schema_max_mutex_instances | ||
Variable Name | performance_schema_max_mutex_instances | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 1000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of instrumented mutex objects.
performance_schema_max_rwlock_classes
Command-Line Format | --performance_schema_max_rwlock_classes=# | ||
Option-File Format | performance_schema_max_rwlock_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_rwlock_classes | ||
Variable Name | performance_schema_max_rwlock_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 20 |
The maximum number of rwlock instruments.
performance_schema_max_rwlock_instances
Command-Line Format | --performance_schema_max_rwlock_instances=# | ||
Option-File Format | performance_schema_max_rwlock_instances=# | ||
Option Sets Variable | Yes, performance_schema_max_rwlock_instances | ||
Variable Name | performance_schema_max_rwlock_instances | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 1000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of instrumented rwlock objects.
performance_schema_max_socket_classes
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_max_socket_classes=# | ||
Option-File Format | performance_schema_max_socket_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_socket_classes | ||
Variable Name | performance_schema_max_socket_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 10 |
The maximum number of socket instruments. This variable was added in MySQL 5.6.3.
performance_schema_max_socket_instances
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_max_socket_instances=# | ||
Option-File Format | performance_schema_max_socket_instances=# | ||
Option Sets Variable | Yes, performance_schema_max_socket_instances | ||
Variable Name | performance_schema_max_socket_instances | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 1000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of instrumented socket objects. This variable was added in MySQL 5.6.3.
performance_schema_max_stage_classes
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_max_stage_classes=# | ||
Option-File Format | performance_schema_max_stage_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_stage_classes | ||
Variable Name | performance_schema_max_stage_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 100 |
The maximum number of stage instruments. This variable was added in MySQL 5.6.3.
performance_schema_max_statement_classes
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_max_statement_classes=# | ||
Option-File Format | performance_schema_max_statement_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_statement_classes | ||
Variable Name | performance_schema_max_statement_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 100 |
The maximum number of statement instruments. This variable was added in MySQL 5.6.3.
performance_schema_max_table_handles
Command-Line Format | --performance_schema_max_table_handles=# | ||
Option-File Format | performance_schema_max_table_handles=# | ||
Option Sets Variable | Yes, performance_schema_max_table_handles | ||
Variable Name | performance_schema_max_table_handles | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 100000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of opened table objects.
performance_schema_max_table_instances
Command-Line Format | --performance_schema_max_table_instances=# | ||
Option-File Format | performance_schema_max_table_instances=# | ||
Option Sets Variable | Yes, performance_schema_max_table_instances | ||
Variable Name | performance_schema_max_table_instances | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 50000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of instrumented table objects.
performance_schema_max_thread_classes
Command-Line Format | --performance_schema_max_thread_classes=# | ||
Option-File Format | performance_schema_max_thread_classes=# | ||
Option Sets Variable | Yes, performance_schema_max_thread_classes | ||
Variable Name | performance_schema_max_thread_classes | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 50 |
The maximum number of thread instruments.
performance_schema_max_thread_instances
Command-Line Format | --performance_schema_max_thread_instances=# | ||
Option-File Format | performance_schema_max_thread_instances=# | ||
Option Sets Variable | Yes, performance_schema_max_thread_instances | ||
Variable Name | performance_schema_max_thread_instances | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 1000 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) |
The maximum number of instrumented thread objects.
The max_connections
and
max_delayed_threads
system
variables affect how many threads are run in the server.
performance_schema_max_thread_instances
affects how many of these running threads can be instrumented.
If you increase
max_connections
or
max_delayed_threads
, you
should consider increasing
performance_schema_max_thread_instances
so that
performance_schema_max_thread_instances
is greater than the sum of
max_connections
and
max_delayed_threads
.
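For example, an option file might size the variable with some headroom over the connection-related settings; the values are illustrative:
[mysqld]
max_connections=500
max_delayed_threads=20
performance_schema_max_thread_instances=600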
performance_schema_session_connect_attrs_size
Version Introduced | 5.6.6 | ||
Variable Name | performance_schema_session_connect_attrs_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | -1 (autosized) | ||
Range | -1 .. 1048576 |
The amount of preallocated memory per thread used to hold
connection attribute strings. If the connection attribute
strings are larger than the reserved storage, the
Performance_schema_session_connect_attrs_lost
status variable is incremented. This variable was added in
MySQL 5.6.7.
performance_schema_setup_actors_size
Version Introduced | 5.6.1 | ||
Command-Line Format | --performance_schema_setup_actors_size=# | ||
Option-File Format | performance_schema_setup_actors_size=# | ||
Option Sets Variable | Yes, performance_schema_setup_actors_size | ||
Variable Name | performance_schema_setup_actors_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 100 |
The number of rows in the
setup_actors
table.
performance_schema_setup_objects_size
Version Introduced | 5.6.1 | ||
Command-Line Format | --performance_schema_setup_objects_size=# | ||
Option-File Format | performance_schema_setup_objects_size=# | ||
Option Sets Variable | Yes, performance_schema_setup_objects_size | ||
Variable Name | performance_schema_setup_objects_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values | |||
Type | numeric | ||
Default | 100 |
The number of rows in the
setup_objects
table.
performance_schema_users_size
Version Introduced | 5.6.3 | ||
Command-Line Format | --performance_schema_users_size=# | ||
Option-File Format | performance_schema_users_size=# | ||
Option Sets Variable | Yes, performance_schema_users_size | ||
Variable Name | performance_schema_users_size | ||
Variable Scope | Global | ||
Dynamic Variable | No | ||
Permitted Values (<= 5.6.5) | |||
Type | numeric | ||
Default | 10 | ||
Range | 0 .. 1048576 | ||
Permitted Values (>= 5.6.6) | |||
Type | numeric | ||
Default | -1 (autosized) | ||
Range | -1 .. 1048576 |
The number of rows in the users
table. If this variable is 0, the Performance Schema does not
maintain connection statistics in the
users
table. This variable was
added in MySQL 5.6.3.
The Performance Schema implements several status variables that provide information about instrumentation that could not be loaded or created due to memory constraints:
mysql> SHOW STATUS LIKE 'perf%';
+-------------------------------------------+-------+
| Variable_name | Value |
+-------------------------------------------+-------+
| Performance_schema_accounts_lost | 0 |
| Performance_schema_cond_classes_lost | 0 |
| Performance_schema_cond_instances_lost | 0 |
| Performance_schema_file_classes_lost | 0 |
| Performance_schema_file_handles_lost | 0 |
| Performance_schema_file_instances_lost | 0 |
| Performance_schema_hosts_lost | 0 |
| Performance_schema_locker_lost | 0 |
| Performance_schema_mutex_classes_lost | 0 |
| Performance_schema_mutex_instances_lost | 0 |
| Performance_schema_rwlock_classes_lost | 0 |
| Performance_schema_rwlock_instances_lost | 0 |
| Performance_schema_socket_classes_lost | 0 |
| Performance_schema_socket_instances_lost | 0 |
| Performance_schema_stage_classes_lost | 0 |
| Performance_schema_statement_classes_lost | 0 |
| Performance_schema_table_handles_lost | 0 |
| Performance_schema_table_instances_lost | 0 |
| Performance_schema_thread_classes_lost | 0 |
| Performance_schema_thread_instances_lost | 0 |
| Performance_schema_users_lost | 0 |
+-------------------------------------------+-------+
Names for these variables have several forms:
Performance_schema_accounts_lost
The number of times a row could not be added to the
accounts
table because it was
full. This variable was added in MySQL 5.6.3.
Performance_schema_xxx_classes_lost
How many instruments of type xxx could not be loaded.
Performance_schema_hosts_lost
The number of times a row could not be added to the hosts table because it was full.
This variable was added in MySQL 5.6.3.
Performance_schema_xxx_instances_lost
How many instances of object type xxx could not be created.
Performance_schema_xxx_handles_lost
How many instances of object type xxx could not be opened.
Performance_schema_locker_lost
How many events are “lost” or not recorded, due to the following conditions:
Events are recursive (for example, waiting for A caused a wait on B, which caused a wait on C).
The depth of the nested events stack is greater than the limit imposed by the implementation.
Currently, events recorded by the Performance Schema are not recursive, so this variable should always be 0.
Performance_schema_session_connect_attrs_lost
The number of times a connection attribute string was larger than the reserved storage. This variable was added in MySQL 5.6.7.
Performance_schema_users_lost
The number of times a row could not be added to the users table because it was full.
This variable was added in MySQL 5.6.3.
For information on using these variables to check Performance Schema status, see Section 20.5, “Performance Schema Status Monitoring”.
Removing a plugin with UNINSTALL
PLUGIN
does not affect information already collected for
code in that plugin. Time spent executing the code while the
plugin was loaded was still spent even if the plugin is unloaded
later. The associated event information, including aggregate
information, remains readable in
performance_schema
database tables. For
additional information about the effect of plugin installation and
removal, see
Section 20.5, “Performance Schema Status Monitoring”.
A plugin implementor who instruments plugin code should document its instrumentation characteristics to enable those who load the plugin to account for its requirements. For example, a third-party storage engine should include in its documentation how much memory the engine needs for mutex and other instruments.
The Performance Schema is a tool to help a DBA do performance tuning by taking real measurements instead of “wild guesses.” This section demonstrates some ways to use the Performance Schema for this purpose. The discussion here relies on the use of event filtering, which is described in Section 20.2.3.2, “Performance Schema Event Filtering”.
The following example provides one methodology that you can use to analyze a repeatable problem, such as investigating a performance bottleneck. To begin, you should have a repeatable use case where performance is deemed “too slow” and needs optimization, and you should enable all instrumentation (no pre-filtering at all).
Run the use case.
Using the Performance Schema tables, analyze the root cause of the performance problem. This analysis will rely heavily on post-filtering.
For problem areas that are ruled out, disable the corresponding instruments. For example, if analysis shows that the issue is not related to file I/O in a particular storage engine, disable the file I/O instruments for that engine. Then truncate the history and summary tables to remove previously collected events.
Repeat the process at step 1.
At each iteration, the Performance Schema output, particularly
the events_waits_history_long
table, will contain less and less “noise” caused
by nonsignificant instruments, and given that this table has a
fixed size, will contain more and more data relevant to the
analysis of the problem at hand.
At each iteration, investigation should lead closer and closer to the root cause of the problem, as the “signal/noise” ratio will improve, making analysis easier.
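For example, if one iteration rules out InnoDB file I/O, the step in which you disable instruments and truncate previously collected data might look like this; the instrument pattern and the choice of tables to truncate are illustrative:
mysql> UPDATE setup_instruments SET ENABLED = 'NO', TIMED = 'NO'
    ->  WHERE NAME LIKE 'wait/io/file/innodb/%';
mysql> TRUNCATE TABLE events_waits_history_long;
mysql> TRUNCATE TABLE events_waits_summary_global_by_event_name;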
Once a root cause of performance bottleneck is identified, take the appropriate corrective action, such as:
Tune the server parameters (cache sizes, memory, and so forth).
Tune a query by writing it differently.
Tune the database schema (tables, indexes, and so forth).
Tune the code (this applies to storage engine or server developers only).
Start again at step 1, to see the effects of the changes on performance.
The mutex_instances.LOCKED_BY_THREAD_ID
and
rwlock_instances.WRITE_LOCKED_BY_THREAD_ID
columns are extremely important for investigating performance
bottlenecks or deadlocks. This is made possible by Performance
Schema instrumentation as follows:
Suppose that thread 1 is stuck waiting for a mutex.
You can determine what the thread is waiting for:
SELECT * FROM events_waits_current WHERE THREAD_ID = thread_1;
Say the query result identifies that the thread is waiting for
mutex A, found in
events_waits_current.OBJECT_INSTANCE_BEGIN
.
You can determine which thread is holding mutex A:
SELECT * FROM mutex_instances WHERE OBJECT_INSTANCE_BEGIN = mutex_A;
Say the query result identifies that it is thread 2 holding
mutex A, as found in
mutex_instances.LOCKED_BY_THREAD_ID
.
You can see what thread 2 is doing:
SELECT * FROM events_waits_current WHERE THREAD_ID = thread_2;