WiredTiger Storage Engine

Starting in MongoDB 3.0, the WiredTiger storage engine is available in the 64-bit builds.

Changed in version 3.2: The WiredTiger storage engine is the default storage engine starting in MongoDB 3.2. For existing deployments, if you do not specify the --storageEngine or the storage.engine setting, MongoDB 3.2 can automatically determine the storage engine used to create the data files in the --dbpath or storage.dbPath. See Default Storage Engine Change.
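For example, a minimal mongod configuration file that explicitly selects WiredTiger might look like the following sketch; the dbPath value is only a placeholder:

   storage:
     dbPath: /var/lib/mongodb      # example path; adjust for your deployment
     engine: wiredTiger

The equivalent command-line form is mongod --storageEngine wiredTiger --dbpath /var/lib/mongodb.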

Document Level Concurrency

WiredTiger uses document-level concurrency control for write operations. As a result, multiple clients can modify different documents of a collection at the same time.

For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database, and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict, causing MongoDB to transparently retry that operation.

Some global operations, typically short-lived operations involving multiple databases, still require a global “instance-wide” lock. Some other operations, such as dropping a collection, still require an exclusive database lock.

Snapshots and Checkpoints

WiredTiger uses MultiVersion Concurrency Control (MVCC). At the start of an operation, WiredTiger provides a point-in-time snapshot of the data to the transaction. A snapshot presents a consistent view of the in-memory data.

When writing to disk, WiredTiger writes all the data in a snapshot to disk in a consistent way across all data files. The now-durable data act as a checkpoint in the data files. The checkpoint ensures that the data files are consistent up to and including the last checkpoint; i.e. checkpoints can act as recovery points.

MongoDB configures WiredTiger to create checkpoints (i.e. write the snapshot data to disk) at intervals of 60 seconds or 2 gigabytes of journal data.

During the write of a new checkpoint, the previous checkpoint is still valid. As such, even if MongoDB terminates or encounters an error while writing a new checkpoint, upon restart, MongoDB can recover from the last valid checkpoint.

The new checkpoint becomes accessible and permanent when WiredTiger’s metadata table is atomically updated to reference the new checkpoint. Once the new checkpoint is accessible, WiredTiger frees pages from the old checkpoints.

Using WiredTiger, even without journaling, MongoDB can recover from the last checkpoint; however, to recover changes made after the last checkpoint, run with journaling.

Journal

WiredTiger uses a write-ahead transaction log in combination with checkpoints to ensure data durability.

The WiredTiger journal persists all data modifications between checkpoints. If MongoDB exits between checkpoints, it uses the journal to replay all data modified since the last checkpoint. For information on the frequency with which MongoDB writes the journal data to disk, see Journaling Process.

The WiredTiger journal is compressed using the snappy compression library. To specify an alternate compression algorithm or no compression, use the storage.wiredTiger.engineConfig.journalCompressor setting.
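For instance, a configuration file excerpt that switches the journal to zlib compression might look like the following sketch; zlib is shown purely as an illustration, and none and snappy are the other accepted values:

   storage:
     wiredTiger:
       engineConfig:
         journalCompressor: zlib   # or "none" to turn off journal compression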

Note

The minimum log record size for WiredTiger is 128 bytes. If a log record is 128 bytes or smaller, WiredTiger does not compress that record.

You can disable journaling by setting storage.journal.enabled to false, which can reduce the overhead of maintaining the journal.

For standalone instances, not using the journal means that you will lose some data modifications when MongoDB exits unexpectedly between checkpoints. For members of replica sets, the replication process may provide sufficient durability guarantees.
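A minimal sketch of a configuration file that turns the journal off follows; weigh it against the durability caveats above:

   storage:
     journal:
       enabled: false   # changes made after the last checkpoint are lost on an unclean shutdown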

Compression

With WiredTiger, MongoDB supports compression for all collections and indexes. Compression minimizes storage use at the expense of additional CPU.

By default, WiredTiger uses block compression with the snappy compression library for all collections and prefix compression for all indexes.

For collections, block compression with zlib is also available. To specify an alternate compression algorithm or no compression, use the storage.wiredTiger.collectionConfig.blockCompressor setting.

For indexes, to disable prefix compression, use the storage.wiredTiger.indexConfig.prefixCompression setting.
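As an illustration, the following configuration file excerpt selects zlib block compression for collections and turns off prefix compression for indexes; the values are examples, not recommendations:

   storage:
     wiredTiger:
       collectionConfig:
         blockCompressor: zlib      # default is snappy; "none" disables block compression
       indexConfig:
         prefixCompression: false   # default is true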

Compression settings are also configurable on a per-collection and per-index basis during collection and index creation. See Specify Storage Engine Options and db.collection.createIndex() storageEngine option.
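For example, in the mongo shell you can pass a WiredTiger configuration string when creating a collection or an index; the collection name, index key, and configString values below are purely illustrative:

   db.createCollection( "users", {
     storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } }
   } )

   db.users.createIndex(
     { username: 1 },
     { storageEngine: { wiredTiger: { configString: "prefix_compression=false" } } }
   )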

For most workloads, the default compression settings balance storage efficiency and processing requirements.

The WiredTiger journal is also compressed by default. For information on journal compression, see Journal.

Memory Use

With WiredTiger, MongoDB utilizes both the WiredTiger cache and the filesystem cache.

Changed in version 3.2: Starting in MongoDB 3.2, the WiredTiger cache, by default, will use the larger of either:

  • 60% of RAM minus 1 GB, or
  • 1 GB.
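For example, on a system with 16 GB of RAM the default cache size is the larger of 0.6 × 16 GB − 1 GB = 8.6 GB and 1 GB, i.e. 8.6 GB; on a system with 2 GB of RAM, 0.6 × 2 GB − 1 GB is only 0.2 GB, so the cache defaults to 1 GB.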

For systems with up to 10 GB of RAM, the new default setting is less than or equal to the 3.0 default setting (in MongoDB 3.0, the WiredTiger cache uses either 1 GB or half of the installed physical RAM, whichever is larger).

For systems with more than 10 GB of RAM, the new default setting is greater than the 3.0 setting.

Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes. Data in the filesystem cache is compressed.

To adjust the size of the WiredTiger cache, see storage.wiredTiger.engineConfig.cacheSizeGB and --wiredTigerCacheSizeGB. Avoid increasing the WiredTiger cache size above its default value.
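For example, to cap the internal cache at 4 GB (an arbitrary illustrative value), you could set the option in the configuration file or on the command line:

   storage:
     wiredTiger:
       engineConfig:
         cacheSizeGB: 4   # illustrative value only

or, equivalently, mongod --wiredTigerCacheSizeGB 4.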
