MongoDB Performance
As you develop and operate applications with MongoDB, you may need to analyze the performance of the application and its database. When you encounter degraded performance, it is often a function of database access strategies, hardware availability, and the number of open database connections.
Some users may experience performance limitations as a result of inadequate or inappropriate indexing strategies, or as a consequence of poor schema design patterns. Locking Performance discusses how these can impact MongoDB’s internal locking.
Performance issues may indicate that the database is operating at capacity and that it is time to add additional capacity to the database. In particular, the application’s working set should fit in the available physical memory. See Memory and the MMAPv1 Storage Engine for more information on the working set.
In some cases performance issues may be temporary and related to abnormal traffic load. As discussed in Number of Connections, scaling can help relieve excessive traffic.
Database Profiling can help you to understand what operations are causing degradation.
Locking Performance
MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock.
Lock-related slowdowns can be intermittent. To see if the lock has been affecting your performance, refer to the locks section and the globalLock section of the serverStatus output.
Dividing locks.timeAcquiringMicros by locks.acquireWaitCount can give an approximate average wait time for a particular lock mode. locks.deadlockCount provides the number of times that lock acquisitions encountered deadlocks.
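For example, you can compute the average wait for a given lock mode directly in the mongo shell. The following is a minimal sketch, assuming serverStatus reports a Database lock type with waits in the "w" mode; the lock types and modes actually present vary by version and workload:
var locks = db.serverStatus().locks;
var dbLock = locks["Database"];  // lock types present vary by deployment
if (dbLock && dbLock.timeAcquiringMicros && dbLock.acquireWaitCount) {
    // approximate average wait, in microseconds, for the "w" mode
    print(dbLock.timeAcquiringMicros.w / dbLock.acquireWaitCount.w);
}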
If globalLock.currentQueue.total is consistently high, then there is a chance that a large number of requests are waiting for a lock. This indicates a possible concurrency issue that may be affecting performance.
If globalLock.totalTime is high relative to uptime, the database has existed in a lock state for a significant amount of time.
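You can read both values from the same serverStatus document in the mongo shell; a quick sketch:
var status = db.serverStatus();
// operations currently queued waiting on a lock
print("currentQueue.total: " + status.globalLock.currentQueue.total);
// totalTime is reported in microseconds; uptime in seconds
print("globalLock.totalTime: " + status.globalLock.totalTime);
print("uptime: " + status.uptime);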
Long queries can result from ineffective use of indexes; non-optimal schema design; poor query structure; system architecture issues; or insufficient RAM resulting in page faults and disk reads.
Memory and the MMAPv1 Storage Engine
Memory Use
With the MMAPv1 storage engine, MongoDB uses memory-mapped files to store data. Given a data set of sufficient size, the mongod process will allocate all available memory on the system for its use.
While this is intentional and aids performance, the memory-mapped files make it difficult to determine whether the amount of RAM is sufficient for the data set.
The memory usage metrics of the serverStatus output can provide insight into MongoDB’s memory use.
The mem.resident field provides the amount of resident memory in use. If this exceeds the amount of system memory and there is a significant amount of data on disk that isn’t in RAM, you may have exceeded the capacity of your system.
You can inspect mem.mapped to check the amount of mapped memory that mongod is using. If this value is greater than the amount of system memory, some operations will require page faults to read data from disk.
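You can compare both fields in the mongo shell; a minimal sketch (both values are reported in megabytes, and mem.mapped is only meaningful with MMAPv1):
var mem = db.serverStatus().mem;
print("resident (MB): " + mem.resident);
print("mapped (MB): " + mem.mapped);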
Page Faults
With the MMAPv1 storage engine, page faults can occur as MongoDB reads from or writes data to parts of its data files that are not currently located in physical memory. In contrast, operating system page faults happen when physical memory is exhausted and pages of physical memory are swapped to disk.
MongoDB reports its triggered page faults as the total number of page faults in one second. To check for page faults, see the extra_info.page_faults value in the serverStatus output.
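To observe the fault rate rather than the raw counter, you can sample the value twice. A minimal sketch, assuming extra_info.page_faults is reported as a cumulative total on your platform:
var before = db.serverStatus().extra_info.page_faults;
sleep(1000);  // mongo shell helper; pauses for 1000 milliseconds
var after = db.serverStatus().extra_info.page_faults;
print("page faults over the last second: " + (after - before));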
Rapid increases in the MongoDB page fault counter may indicate that the server has too little physical memory. Page faults also can occur while accessing large data sets or scanning an entire collection.
A single page fault completes quickly and is not problematic. However, in aggregate, large volumes of page faults typically indicate that MongoDB is reading too much data from disk.
MongoDB can often “yield” read locks after a page fault, allowing other database processes to read while mongod loads the next page into memory. Yielding the read lock following a page fault improves concurrency, and also improves overall throughput in high volume systems.
Increasing the amount of RAM accessible to MongoDB may help reduce the frequency of page faults. If this is not possible, you may want to consider deploying a sharded cluster or adding shards to your deployment to distribute load among mongod instances.
See What are page faults? for more information.
Number of Connections
In some cases, the number of connections between the applications and the database can overwhelm the ability of the server to handle requests. The following fields in the serverStatus document can provide insight:
connections is a container for the following two fields:
connections.current: the total number of current clients connected to the database instance.
connections.available: the total number of unused connections available for new clients.
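A quick sketch of checking both fields from the mongo shell:
var conn = db.serverStatus().connections;
print("current: " + conn.current);
print("available: " + conn.available);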
If there are numerous concurrent application requests, the database may have trouble keeping up with demand. If this is the case, then you will need to increase the capacity of your deployment.
For read-heavy applications, increase the size of your replica set and distribute read operations to secondary members.
For write-heavy applications, deploy sharding and add one or more shards to a sharded cluster to distribute load among mongod instances.
Spikes in the number of connections can also be the result of application or driver errors. All of the officially supported MongoDB drivers implement connection pooling, which allows clients to use and reuse connections more efficiently. Extremely high numbers of connections, particularly without corresponding workload, are often indicative of a driver or other configuration error.
Unless constrained by system-wide limits, MongoDB has no limit on incoming connections. On Unix-based systems, you can modify system limits using the ulimit command, or by editing your system’s /etc/sysctl file. See UNIX ulimit Settings for more information.
Database Profiling
MongoDB’s “Profiler” is a database profiling system that can help identify inefficient queries and operations.
The following profiling levels are available:
Level | Setting
--- | ---
0 | Off. No profiling
1 | On. Only includes “slow” operations
2 | On. Includes all operations
Enable the profiler by setting the profile value using the following command in the mongo shell:
db.setProfilingLevel(1)
The slowOpThresholdMs setting defines what constitutes a “slow” operation. To set the threshold above which the profiler considers operations “slow” (and thus included in the level 1 profiling data), you can configure slowOpThresholdMs at runtime as an argument to the db.setProfilingLevel() operation.
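For example, the following enables level 1 profiling and treats any operation that takes longer than 100 milliseconds as “slow”:
db.setProfilingLevel(1, 100)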
See the documentation of db.setProfilingLevel() for more information.
By default, mongod records all “slow” queries to its log, as defined by slowOpThresholdMs.
Note
Because the database profiler can negatively impact performance, only enable profiling for strategic intervals and as minimally as possible on production systems.
You may enable profiling on a per-mongod basis. This setting will not propagate across a replica set or sharded cluster.
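Because the setting is per-mongod, you can confirm the level and threshold currently in effect on each instance with db.getProfilingStatus():
db.getProfilingStatus()
The result resembles { "was" : 1, "slowms" : 100 }.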
You can view the output of the profiler in the system.profile collection of your database by issuing the show profile command in the mongo shell, or with the following operation:
db.system.profile.find( { millis : { $gt : 100 } } )
This returns all operations that lasted longer than 100 milliseconds. Ensure that the value specified here (100, in this example) is above the slowOpThresholdMs threshold.
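To review the most recent profile entries instead, you can sort on the ts field:
db.system.profile.find().limit(10).sort( { ts : -1 } ).pretty()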
You must use the $query operator to access the query field of documents within system.profile.
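For example, to find profile entries whose recorded query filtered on a particular field, reference that field through $query. This is a sketch; the name field is illustrative, and it assumes the profiled operation’s query was captured with the $query wrapper:
db.system.profile.find( { "query.$query.name" : "MongoDB" } )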