FAQ: Concurrency¶
On this page
- What type of locking does MongoDB use?
- How granular are locks in MongoDB?
- How do I see the status of locks on my mongod instances?
- Does a read or write operation ever yield the lock?
- Which operations lock the database?
- Which administrative commands lock the database?
- Does a MongoDB operation ever lock more than one database?
- How does sharding affect concurrency?
- How does concurrency affect a replica set primary?
- How does concurrency affect secondaries?
- What kind of concurrency does MongoDB provide for JavaScript operations?
- Does MongoDB support transactions?
- What isolation guarantees does MongoDB provide?
- Can reads see changes that have not been committed to disk?
Changed in version 3.0.
MongoDB allows multiple clients to read and write the same data. In order to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously. Together, these mechanisms guarantee that all writes to a single document occur either in full or not at all and that clients never see an inconsistent view of the data.
What type of locking does MongoDB use?¶
MongoDB uses multi-granularity locking [1] that allows operations to lock at the global, database or collection level, and allows for individual storage engines to implement their own concurrency control below the collection level (e.g., at the document-level in WiredTiger).
MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection, but in MMAPv1, give exclusive access to a single write operation.
In addition to a shared (S) locking mode for reads and an exclusive (X) locking mode for write operations, intent shared (IS) and intent exclusive (IX) modes indicate an intent to read or write a resource using a finer granularity lock. When locking at a certain granularity all higher levels are locked using an intent lock.
For example, when locking a collection for writing (using mode X), both the corresponding database lock and the global lock must be locked in intent exclusive (IX) mode. A single database can simultaneously be locked in IS and IX mode, but an exclusive (X) lock cannot coexist with any other modes, and a shared (S) lock can only coexist with intent shared (IS) locks.
Locks are fair, with reads and writes being queued in order. However, to optimize throughput, when one request is granted, all other compatible requests will be granted at the same time, potentially releasing them before a conflicting request. For example, consider a case in which an X lock was just released, and in which the conflict queue contains the following items:
IS → IS → X → X → S → IS
In strict first-in, first-out (FIFO) ordering, only the first two IS modes would be granted. Instead, MongoDB grants all IS and S modes at once, and once they all drain, it grants X, even if new IS or S requests have been queued in the meantime. Because a grant always moves all other compatible requests ahead in the queue, no request can be starved.
[1] See the Wikipedia page on Multiple granularity locking for more information.
How granular are locks in MongoDB?¶
Changed in version 3.0.
For WiredTiger¶
Beginning with version 3.0, MongoDB ships with the WiredTiger storage engine.
For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.
Some global operations, typically short-lived operations involving multiple databases, still require a global “instance-wide” lock. Some other operations, such as dropping a collection, still require an exclusive database lock.
For MMAPv1¶
The MMAPv1 storage engine uses collection-level locking as of the 3.0 release series, an improvement on earlier versions in which the database lock was the finest-grain lock. Third-party storage engines may either use collection-level locking or implement their own finer-grained concurrency control.
For example, if you have six collections in a database using the MMAPv1 storage engine and an operation takes a collection-level write lock, the other five collections are still available for read and write operations. An exclusive database lock makes all six collections unavailable for the duration of the operation holding the lock.
How do I see the status of locks on my mongod instances?¶
To report on lock utilization, use any of the following methods:
- db.serverStatus(),
- db.currentOp(),
- mongotop,
- mongostat, and
- the MongoDB Cloud Manager or Ops Manager, an on-premise solution available in MongoDB Enterprise Advanced
Specifically, the locks document in the output of serverStatus and the locks field in current operation reporting provide insight into the types of locks and the amount of lock contention in your mongod instance.
To terminate an operation, use db.killOp().
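For example, the following shell sketch pulls lock information from these sources; the field names reflect typical mongod output, which varies by version and storage engine:

```javascript
// Summary of lock acquisitions and waits since startup; the exact shape of
// the "locks" document depends on the MongoDB version and storage engine.
printjson(db.serverStatus().locks);

// In-progress operations that are currently waiting for a lock.
db.currentOp({ waitingForLock: true }).inprog.forEach(function (op) {
    print("opid " + op.opid + " is waiting for a lock on " + op.ns);
});

// If needed, terminate a runaway operation by its opid:
// db.killOp(<opid>);
```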
Does a read or write operation ever yield the lock?¶
In some situations, read and write operations can yield their locks.
Long-running read and write operations, such as queries, updates, and deletes, yield under many conditions. MongoDB operations can also yield locks between individual document modifications in write operations that affect multiple documents, such as update() with the multi parameter.
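As an illustration (the collection and field names here are hypothetical), a multi-document update such as the following modifies each matching document atomically, but the operation as a whole may yield its lock between individual document modifications:

```javascript
// Hypothetical collection and fields: expire all pending orders.
// Each document is updated atomically, but the operation may yield
// between documents, letting other readers and writers interleave.
db.orders.update(
    { status: "pending" },
    { $set: { status: "expired" } },
    { multi: true }
);
```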
MongoDB’s MMAPv1 storage engine uses heuristics based on its access pattern to predict whether data is likely in physical memory before performing a read. If MongoDB predicts that the data is not in physical memory, an operation will yield its lock while MongoDB loads the data into memory. Once data is available in memory, the operation will reacquire the lock to complete the operation.
For storage engines supporting document level concurrency control, such as WiredTiger, yielding is not necessary when accessing storage as the intent locks, held at the global, database and collection level, do not block other readers and writers.
Changed in version 2.6: MongoDB does not yield locks when scanning an index even if it predicts that the index is not in memory.
Which operations lock the database?¶
Changed in version 2.2.
The following table lists common database operations and the types of locks they use.
Operation | Lock Type |
---|---|
Issue a query | Read lock |
Get more data from a cursor | Read lock |
Insert data | Write lock |
Remove data | Write lock |
Update data | Write lock |
Map-reduce | Read lock and write lock, unless operations are specified as non-atomic. Portions of map-reduce jobs can run concurrently. |
Create an index | Building an index in the foreground, which is the default, locks the database for extended periods of time. |
db.eval() (deprecated since version 3.0) | Write lock. The db.eval() method takes a global write lock while evaluating the JavaScript function. To avoid taking this global write lock, you can use the eval command with nolock: true. |
eval (deprecated since version 3.0) | Write lock. By default, the eval command takes a global write lock while evaluating the JavaScript function. If used with nolock: true, the eval command does not take a global write lock while evaluating the JavaScript function. However, the logic within the JavaScript function may take write locks for write operations. |
aggregate() | Read lock |
Which administrative commands lock the database?¶
Certain administrative commands can exclusively lock the database for extended periods of time. In some deployments, for large databases, you may consider taking the mongod instance offline so that clients are not affected. For example, if a mongod is part of a replica set, take the mongod offline and let other members of the set service load while maintenance is in progress.
The following administrative operations require an exclusive (i.e. write) lock on the database for extended periods:
- db.collection.createIndex(), when issued without setting background to true (see the sketch after this list),
- reIndex,
- compact,
- db.repairDatabase(),
- db.createCollection(), when creating a very large (i.e. many gigabytes) capped collection,
- db.collection.validate(), and
- db.copyDatabase(). This operation may lock all databases. See Does a MongoDB operation ever lock more than one database?.
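For instance (a minimal sketch with a hypothetical collection and field), building an index in the background avoids holding the exclusive database lock for the length of the build, at the cost of a slower build:

```javascript
// Default foreground build: takes an exclusive lock on the database
// for the duration of the index build.
// db.orders.createIndex({ customerId: 1 });

// Background build: yields to other operations on the database,
// but takes longer to complete than a foreground build.
db.orders.createIndex({ customerId: 1 }, { background: true });
```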
The following administrative commands lock the database but only hold the lock for a very short time:
- db.collection.dropIndex(),
- db.getLastError(),
- db.isMaster(),
- rs.status() (i.e. replSetGetStatus),
- db.serverStatus(),
- db.auth(), and
- db.addUser().
Does a MongoDB operation ever lock more than one database?¶
The following MongoDB operations lock multiple databases:
- db.copyDatabase() must lock the entire mongod instance at once.
- db.repairDatabase() obtains a global write lock and will block other operations until it finishes.
- Journaling, which is an internal operation, locks all databases for short intervals. All databases share a single journal.
- User authentication requires a read lock on the admin database for deployments using 2.6 user credentials. For deployments using the 2.4 schema for user credentials, authentication locks the admin database as well as the database the user is accessing.
- All writes to a replica set’s primary lock both the database receiving the writes and then the local database for a short time. The lock for the local database allows the mongod to write to the primary’s oplog and accounts for a small portion of the total time of the operation.
How does sharding affect concurrency?¶
Sharding improves concurrency by distributing collections over multiple mongod instances, allowing shard servers (i.e. mongos processes) to perform any number of operations concurrently to the various downstream mongod instances.
In a sharded cluster, locks apply to each individual shard, not to the whole cluster; i.e. each mongod instance is independent of the others in the sharded cluster and uses its own locks. The operations on one mongod instance do not block the operations on any others.
How does concurrency affect a replica set primary?¶
With replica sets, when MongoDB writes to a collection on the primary, MongoDB also writes to the primary’s oplog, which is a special collection in the local database. Therefore, MongoDB must lock both the collection’s database and the local database. The mongod must lock both databases at the same time to keep the database consistent and ensure that write operations, even with replication, are “all-or-nothing” operations.
When writing to a replica set, the lock’s scope applies to the primary.
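The oplog itself can be inspected like any other capped collection; the following sketch, run on a replica set member, shows the most recent oplog entry:

```javascript
// The oplog is the capped collection oplog.rs in the local database.
// Every write on the primary appends an entry here, which is why a write
// briefly locks both the receiving database and the local database.
var local = db.getSiblingDB("local");
local.oplog.rs.find().sort({ $natural: -1 }).limit(1).forEach(printjson);
```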
How does concurrency affect secondaries?¶
In replication, MongoDB does not apply writes serially to secondaries. Secondaries collect oplog entries in batches and then apply those batches in parallel. Secondaries do not allow reads while applying the write operations, and apply write operations in the order that they appear in the oplog.
What kind of concurrency does MongoDB provide for JavaScript operations?¶
Changed in version 2.4: The V8 JavaScript engine added in 2.4 allows multiple JavaScript operations to run at the same time. Prior to 2.4, a single mongod could only run a single JavaScript operation at once.
Does MongoDB support transactions?¶
MongoDB does not support multi-document transactions.
However, MongoDB does provide atomic operations on a single document. Often these document-level atomic operations are sufficient to solve problems that would require ACID transactions in a relational database.
For example, in MongoDB, you can embed related data in nested arrays or nested documents within a single document and update the entire document in a single atomic operation. Relational databases might represent the same kind of data with multiple tables and rows, which would require transaction support to update the data atomically.
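For example (a sketch with a hypothetical schema), an order document can embed its line items, so adding an item and adjusting the total is a single atomic write:

```javascript
// Hypothetical schema: each order embeds its line items in an array.
db.orders.insert({ _id: 1, customer: "alice", total: 0, items: [] });

// Push a new line item and increment the total in one atomic update.
// A concurrent reader never sees the new item without the updated total.
db.orders.update(
    { _id: 1 },
    {
        $push: { items: { sku: "A-100", qty: 2, price: 5 } },
        $inc: { total: 10 }
    }
);
```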
What isolation guarantees does MongoDB provide?¶
MongoDB provides the following guarantees in the presence of concurrent read and write operations. These guarantees hold on systems configured with either the MMAPv1 or WiredTiger storage engines.
- Write operations are atomic with respect to a single document; i.e. if a write is updating multiple fields in the document, a reader will never see the document with only some of the fields updated.
- With a single mongod instance, a set of read and write operations to a single document is serializable. With replica sets, a set of read and write operations to a single document is serializable only in the absence of a rollback.
- Correctness with respect to query predicates; e.g. db.collection.find() will only return documents that match, and db.collection.update() will only write to matching documents.
- Correctness with respect to sort. For read operations that request a sort order (e.g. db.collection.find() or db.collection.aggregate()), the sort order will not be violated due to concurrent writes.
Although MongoDB provides these strong guarantees for single-document operations, read and write operations may access an arbitrary number of documents during execution. Multi-document operations do not occur transactionally and are not isolated from concurrent writes. This means that the following behaviors are expected under the normal operation of the system, for both the MMAPv1 and WiredTiger storage engines:
- Non-point-in-time read operations. Suppose a read operation begins at time t1 and starts reading documents. A write operation then commits an update to one of the documents at some later time t2. The reader may see the updated version of the document, and therefore does not see a point-in-time snapshot of the data.
- Non-serializable operations. Suppose a read operation reads a document d1 at time t1 and a write operation updates d1 at some later time t3. This introduces a read-write dependency such that, if the operations were to be serialized, the read operation must precede the write operation. But also suppose that the write operation updates document d2 at time t2 and the read operation subsequently reads d2 at some later time t4. This introduces a write-read dependency which would instead require the read operation to come after the write operation in a serializable schedule. There is a dependency cycle which makes serializability impossible.
- Dropped results for MMAPv1. For MMAPv1, reads may miss matching documents that are updated or deleted during the course of the read operation. However, data that has not been modified during the operation will always be visible.
Can reads see changes that have not been committed to disk?¶
Changed in version 3.2: MongoDB 3.2 introduces the readConcern option. Clients using a "majority" readConcern cannot see the results of writes before those writes are made durable.
Readers using the "local" readConcern can see the results of writes before they are made durable, regardless of the write concern level or journaling configuration. As a result, applications may observe the following behaviors:
- MongoDB will allow a concurrent reader to see the result of the write operation before the write is acknowledged to the client application. For details on when writes are acknowledged for different write concern levels, see Write Concern.
- Reads can see data which may subsequently be rolled back in cases such as replica set failover or power loss. It does not mean that read operations can see documents in a partially written or otherwise inconsistent state.
Other systems refer to these semantics as read uncommitted.
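As an illustration (the collection name is hypothetical; requires MongoDB 3.2+ and, for "majority", a replica set with a storage engine that supports majority read concern), the read concern level can be passed to the find command:

```javascript
// "local" (the default) may return writes that have not yet been
// acknowledged by a majority of replica set members and could roll back.
db.runCommand({
    find: "orders",
    filter: { status: "pending" },
    readConcern: { level: "local" }
});

// "majority" returns only data acknowledged by a majority of members,
// so it cannot observe writes that may subsequently be rolled back.
db.runCommand({
    find: "orders",
    filter: { status: "pending" },
    readConcern: { level: "majority" }
});
```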