
Production Notes

This page details system configurations that affect MongoDB, especially in production.

Note

MongoDB Cloud Manager, a hosted service, and Ops Manager, an on-premise solution, provide monitoring, backup, and automation of MongoDB instances. See the MongoDB Cloud Manager documentation and Ops Manager documentation for more information.

MongoDB Binaries

Supported Platforms

Platform 3.2 3.0 2.6 2.4 2.2
Amazon Linux
Debian 7
Fedora 8+    
RHEL/CentOS 6.2+
RHEL/CentOS 7.0+    
SLES 11
SLES 12        
Solaris 64-bit
Ubuntu 12.04
Ubuntu 14.04    
Microsoft Azure
Windows Vista/Server 2008R2/2012+
OSX 10.7+  

Changed in version 3.2: MongoDB can now use the WiredTiger storage engine on all supported platforms.

Use the Latest Stable Packages

Be sure you have the latest stable release.

All releases are available on the Downloads page. The Downloads page is a good place to verify the current stable release, even if you are installing via a package manager.

Use 64-bit Builds

Always use 64-bit builds for production.

Although 32-bit builds exist, they are unsuitable for production deployments. 32-bit builds also do not support the WiredTiger storage engine. For more information, see the 32-bit limitations page.

Note

Starting in MongoDB 3.2, 32-bit binaries are deprecated and will be unavailable in future releases.

MongoDB dbPath

Changed in version 3.2: As of MongoDB 3.2, MongoDB uses the WiredTiger storage engine by default.

Changed in version 3.0: MongoDB includes support for two storage engines: MMAPv1, the storage engine available in previous versions of MongoDB, and WiredTiger.

The files in the dbPath directory must correspond to the configured storage engine. mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.
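
For example, a minimal sketch of starting mongod with an explicit storage engine and data directory (the dbPath shown is an assumption; substitute your own):

mongod --storageEngine wiredTiger --dbpath /var/lib/mongodb

If the directory already contains MMAPv1 data files, this invocation fails; migrate the data with mongodump and mongorestore, or resync the member from its replica set, before switching storage engines.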

Concurrency

MMAPv1

Changed in version 3.0: Beginning with MongoDB 3.0, MMAPv1 provides collection-level locking: All collections have a unique readers-writer lock that allows multiple clients to modify documents in different collections at the same time.

For MongoDB versions in the 2.2 through 2.6 series, each database has a readers-writer lock that allows concurrent read access to a database, but gives exclusive access to a single write operation per database. See the Concurrency page for more information. In earlier versions of MongoDB, all write operations contended for a single readers-writer lock for the entire mongod instance.

WiredTiger

WiredTiger supports concurrent access by readers and writers to the documents in a collection. Clients can read documents while write operations are in progress, and multiple threads can modify different documents in a collection at the same time.

Data Consistency

Journaling

MongoDB uses write ahead logging to an on-disk journal. Journaling guarantees that MongoDB can quickly recover write operations that were written to the journal but not written to data files in cases where mongod terminated as a result of a crash or other serious failure.

Leave journaling enabled in order to ensure that mongod will be able to recover its data files and keep the data files in a valid state following a crash. See Journaling for more information.
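
Journaling is enabled by default on 64-bit builds. As a sketch, the following makes it explicit on the command line (the dbPath is an assumption); the equivalent configuration file setting is storage.journal.enabled:

mongod --dbpath /var/lib/mongodb --journal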

Read Concern

New in version 3.2.

To ensure that a single thread can read its own writes, use "majority" read concern and "majority" write concern against the primary of the replica set.

For more information on read concern, see Read Concern.
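
As a hedged sketch, assuming a replica set member running WiredTiger, a hypothetical replica set named rs0, and a hypothetical test.orders collection: start mongod with majority read concern enabled, then issue a read that uses it from the mongo shell.

mongod --replSet rs0 --enableMajorityReadConcern --dbpath /var/lib/mongodb
mongo --eval 'db.getSiblingDB("test").orders.find().readConcern("majority").toArray()'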

Write Concern

Write concern describes the level of acknowledgement requested from MongoDB for write operations. The level of the write concerns affects how quickly the write operation returns. When write operations have a weak write concern, they return quickly. With stronger write concerns, clients must wait after sending a write operation until MongoDB confirms the write operation at the requested write concern level. With insufficient write concerns, write operations may appear to a client to have succeeded, but may not persist in some cases of server failure.

See the Write Concern document for more information about choosing an appropriate write concern level for your deployment.
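
For illustration, a hedged example from the mongo shell that requests acknowledgement from a majority of replica set members with a five-second timeout (the collection name and timeout value are assumptions):

mongo --eval 'db.getSiblingDB("test").orders.insert( { item: "widget", qty: 1 }, { writeConcern: { w: "majority", wtimeout: 5000 } } )'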

Networking

Use Trusted Networking Environments

Always run MongoDB in a trusted environment, with network rules that prevent access from all unknown machines, systems, and networks. As with any sensitive system that is dependent on network access, your MongoDB deployment should only be accessible to specific systems that require access, such as application servers, monitoring services, and other MongoDB components.

Note

By default, authorization is not enabled, and mongod assumes a trusted environment. Enable authorization mode as needed. For more information on authentication mechanisms supported in MongoDB as well as authorization in MongoDB, see Authentication and Role-Based Access Control.
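
As a minimal sketch of limiting exposure, bind mongod to specific interfaces and enable authorization; the addresses below are assumptions for a host reachable only from your application servers:

mongod --bind_ip 127.0.0.1,10.0.0.12 --auth --dbpath /var/lib/mongodb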

For additional information and considerations on security, refer to the documents in the Security section.

For Windows users, consider the Windows Server Technet Article on TCP Configuration when deploying MongoDB on Windows.

Disable HTTP Interface

MongoDB provides an HTTP interface to check the status of the server and, optionally, run queries. The HTTP interface is disabled by default. Do not enable the HTTP interface in production environments.

Deprecated since version 3.2: HTTP interface for MongoDB

See HTTP Status Interface.

Manage Connection Pool Sizes

To avoid overloading the connection resources of a single mongod or mongos instance, ensure that clients maintain reasonable connection pool sizes. Adjust the connection pool size to suit your use case, beginning at 110-115% of the typical number of concurrent database requests.

The connPoolStats command returns information regarding the number of open connections to the current database for mongos and mongod instances in sharded clusters.
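
For example, run the command against the admin database from the mongo shell (the mongos host name is an assumption):

mongo --host mongos.example.net --eval 'printjson(db.adminCommand( { connPoolStats: 1 } ))'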

See also Allocate Sufficient RAM and CPU.

Hardware Considerations

MongoDB is designed specifically with commodity hardware in mind and has few hardware requirements or limitations. MongoDB’s core components run on little-endian hardware, primarily x86/x86_64 processors. Client libraries (i.e. drivers) can run on big or little endian systems.

Allocate Sufficient RAM and CPU

MMAPv1

Due to its concurrency model, the MMAPv1 storage engine does not require many CPU cores. As such, increasing the number of cores can help but does not provide a significant return.

Increasing the amount of RAM accessible to MongoDB may help reduce the frequency of page faults.

WiredTiger

The WiredTiger storage engine is multithreaded and can take advantage of many CPU cores. Specifically, the total number of active threads (i.e. concurrent operations) relative to the number of CPUs can impact performance:

  • Throughput increases as the number of concurrent active operations increases up to the number of CPUs.
  • Throughput decreases as the number of concurrent active operations exceeds the number of CPUs by some threshold amount.

The threshold amount depends on your application. You can determine the optimum number of concurrent active operations for your application by experimenting and measuring throughput. The output from mongostat provides statistics on the number of active reads/writes in the (ar|aw) column.
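
For example, the following reports statistics every second; watch the ar|aw column for the number of active readers and writers relative to the number of CPUs (the host name is an assumption):

mongostat --host mongodb0.example.net 1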

With WiredTiger, MongoDB utilizes both the WiredTiger cache and the filesystem cache.

Changed in version 3.2: Starting in MongoDB 3.2, the WiredTiger cache, by default, will use the larger of either:

  • 60% of RAM minus 1 GB, or
  • 1 GB.

For systems with up to 10 GB of RAM, the new default setting is less than or equal to the 3.0 default setting (For MongoDB 3.0, the WiredTiger cache uses either 1 GB or half of the installed physical RAM, whichever is larger).

For systems with more than 10 GB of RAM, the new default setting is greater than the 3.0 setting.

Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes. Data in the filesystem cache is compressed.

To adjust the size of the WiredTiger cache, see storage.wiredTiger.engineConfig.cacheSizeGB and --wiredTigerCacheSizeGB. Avoid increasing the WiredTiger cache size above its default value.
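
As a sketch, the following caps the WiredTiger cache on a machine shared with other memory consumers; the 4 GB figure is an assumption, not a recommendation, and the equivalent configuration file setting is storage.wiredTiger.engineConfig.cacheSizeGB:

mongod --wiredTigerCacheSizeGB 4 --dbpath /var/lib/mongodb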

Note

The storage.wiredTiger.engineConfig.cacheSizeGB only limits the size of the WiredTiger cache, not the total amount of memory used by mongod. The WiredTiger cache is only one component of the RAM used by MongoDB. MongoDB also automatically uses all free memory on the machine via the filesystem cache (data in the filesystem cache is compressed).

In addition, the operating system will use any free RAM to buffer filesystem blocks.

To accommodate the additional consumers of RAM, you may have to decrease WiredTiger cache size.

The default WiredTiger cache size value assumes that there is a single mongod instance per node. If a single node contains multiple instances, then you should decrease the setting to accommodate the other mongod instances.

If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set storage.wiredTiger.engineConfig.cacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
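
For example, a hedged sketch using Docker, where the container is limited to 2 GB of RAM and the WiredTiger cache is capped well below that limit (the image tag and sizes are assumptions; the official mongo image passes trailing arguments to mongod):

docker run -d --name mongod --memory 2g mongo:3.2 --wiredTigerCacheSizeGB 1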

To view statistics on the cache and eviction rate, see the wiredTiger.cache field returned from the serverStatus command.
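
For example, from the mongo shell:

mongo --eval 'printjson(db.serverStatus().wiredTiger.cache)'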

See also

Concurrency

Use Solid State Disks (SSDs)

MongoDB has good results and a good price-performance ratio with SATA SSD (Solid State Disk).

Use SSD if available and economical. Spinning disks can be performant, but SSDs’ capacity for random I/O operations works well with the update model of MMAPv1.

Commodity (SATA) spinning drives are often a good option, as the random I/O performance increase with more expensive spinning drives is not that dramatic (only on the order of 2x). Using SSDs or increasing RAM may be more effective in increasing I/O throughput.

MongoDB and NUMA Hardware

Running MongoDB on a system with Non-Uniform Memory Access (NUMA) can cause a number of operational problems, including slow performance for periods of time and high system process usage.

When running MongoDB servers and clients on NUMA hardware, you should configure a memory interleave policy so that the host behaves in a non-NUMA fashion. MongoDB checks NUMA settings on start up when deployed on Linux (since version 2.0) and Windows (since version 2.6) machines. If the NUMA configuration may degrade performance, MongoDB prints a warning.

See also

Configuring NUMA on Windows

On Windows, memory interleaving must be enabled through the machine’s BIOS. Please consult your system documentation for details.

Configuring NUMA on Linux

When running MongoDB on Linux, you may instead use the numactl command and start the MongoDB programs (mongod, including the config servers; mongos; or clients) in the following manner:

numactl --interleave=all <path>

where <path> is the path to the program you are starting. Then, disable zone reclaim in the proc settings using the following command:

echo 0 > /proc/sys/vm/zone_reclaim_mode

To fully disable NUMA behavior, you must perform both operations. For more information, see the Documentation for /proc/sys/vm/*.
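
Putting both steps together, a sketch that assumes mongod is installed at /usr/bin/mongod and reads /etc/mongod.conf (both paths are assumptions); the last line makes the zone reclaim setting persistent across reboots:

numactl --interleave=all /usr/bin/mongod --config /etc/mongod.conf
echo 0 | sudo tee /proc/sys/vm/zone_reclaim_mode
echo "vm.zone_reclaim_mode = 0" | sudo tee -a /etc/sysctl.conf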

Disk and Storage Systems

Swap

Assign swap space for your systems. Allocating swap space can avoid issues with memory contention and can prevent the OOM Killer on Linux systems from killing mongod.

For the MMAPv1 storage engine, the method mongod uses to map files to memory ensures that the operating system will never store MongoDB data in swap space. On Windows systems, using MMAPv1 requires extra swap space due to commitment limits. For details, see MongoDB on Windows.

For the WiredTiger storage engine, given sufficient memory pressure, WiredTiger may store data in swap space.
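
To verify that swap is configured on a Linux host, for example:

swapon -s
free -m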

RAID

Most MongoDB deployments should use disks backed by RAID-10.

RAID-5 and RAID-6 do not typically provide sufficient performance to support a MongoDB deployment.

Avoid RAID-0 with MongoDB deployments. While RAID-0 provides good write performance, it also provides limited availability and can lead to reduced performance on read operations, particularly when using Amazon’s EBS volumes.

Remote Filesystems

With the MMAPv1 storage engine, the Network File System protocol (NFS) is not recommended, as you may see performance problems when both the data files and the journal files are hosted on NFS. You may experience better performance if you place the journal on local or iSCSI volumes.

With the WiredTiger storage engine, WiredTiger objects may be stored on remote file systems if the remote file system conforms to ISO/IEC 9945-1:1996 (POSIX.1). Because remote file systems are often slower than local file systems, using a remote file system for storage may degrade performance.

If you decide to use NFS, add the following NFS options to your /etc/fstab file: bg, nolock, and noatime.
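
For example, an /etc/fstab entry might resemble the following (the server, export path, and mount point are assumptions):

nfs.example.net:/exports/mongodb /var/lib/mongodb nfs bg,nolock,noatime 0 0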

Separate Components onto Different Storage Devices

For improved performance, consider separating your database’s data, journal, and logs onto different storage devices, based on your application’s access and write pattern.

For the WiredTiger storage engine, you can also store the indexes on a different storage device. See storage.wiredTiger.engineConfig.directoryForIndexes.
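
As a hedged sketch, the following keeps data on one volume, relocates the journal onto a second volume via a symlink inside dbPath, stores indexes in their own subdirectory (which can likewise be symlinked or mounted elsewhere), and writes logs to a third location. All mount points are assumptions; stop mongod before moving the journal.

# with mongod stopped, move the journal onto its own volume and symlink it back
mv /data/mongodb/journal /journal-disk/journal
ln -s /journal-disk/journal /data/mongodb/journal
# restart with indexes in a separate subdirectory and the log on another volume
mongod --dbpath /data/mongodb --wiredTigerDirectoryForIndexes --logpath /var/log/mongodb/mongod.log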

Note

Using different storage devices will affect your ability to create snapshot-style backups of your data, since the files will be on different devices and volumes.

Scheduling for Virtual Devices

Local block devices attached to virtual machine instances via the hypervisor should use a noop scheduler for best performance. The noop scheduler allows the operating system to defer I/O scheduling to the underlying hypervisor.
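
To check and set the scheduler for a block device, for example (the device name xvda is an assumption):

cat /sys/block/xvda/queue/scheduler
echo noop | sudo tee /sys/block/xvda/queue/scheduler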

Architecture

Replica Sets

See the Replica Set Architectures document for an overview of architectural considerations for replica set deployments.

Sharded Clusters

See the Sharded Cluster Production Architecture document for an overview of recommended sharded cluster architectures for production deployments.

See also

Design Notes

Compression

WiredTiger can compress collection data using either the snappy or zlib compression library. snappy provides a lower compression rate but has little performance cost, whereas zlib provides a better compression rate but has a higher performance cost.

By default, WiredTiger uses the snappy compression library. To change the compression setting, see storage.wiredTiger.collectionConfig.blockCompressor.
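
For example, a sketch of starting mongod with zlib block compression for newly created collections (existing collections keep the compressor they were created with; the dbPath is an assumption):

mongod --wiredTigerCollectionBlockCompressor zlib --dbpath /var/lib/mongodb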

WiredTiger uses prefix compression on all indexes by default.

Platform Specific Considerations

Note

MongoDB uses the GNU C Library (glibc) if available on a system. MongoDB requires at least glibc-2.12-1.2.el6 to avoid a known bug with earlier versions. For best results, use at least version 2.13.
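
To check the installed glibc version, for example:

ldd --version
rpm -q glibc    # on RPM-based systems such as RHEL/CentOS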

MongoDB on Linux

Kernel and File Systems

When running MongoDB in production on Linux, it is recommended that you use Linux kernel version 2.6.36 or later.

With the MMAPv1 storage engine, MongoDB preallocates its database files before using them and often creates large files. As such, you should use the XFS and EXT4 file systems. If possible, use XFS as it generally performs better with MongoDB.

With the WiredTiger storage engine, use of XFS is strongly recommended to avoid performance issues that have been observed when using EXT4 with WiredTiger.

  • In general, if you use the XFS file system, use at least version 2.6.25 of the Linux Kernel.

  • In general, if you use the EXT4 file system, use at least version 2.6.23 of the Linux Kernel.

  • Some Linux distributions require different versions of the kernel to support using XFS and/or EXT4:

    Linux Distribution Filesystem Kernel Version
    CentOS 5.5 ext4, xfs 2.6.18-194.el5
    CentOS 5.6 ext4, xfs 2.6.18-3.0.el5
    CentOS 5.8 ext4, xfs 2.6.18-308.8.2.el5
    CentOS 6.1 ext4, xfs 2.6.32-131.0.15.el6.x86_64
    RHEL 5.6 ext4 2.6.18-3.0
    RHEL 6.0 xfs 2.6.32-71
    Ubuntu 10.04.4 LTS ext4, xfs 2.6.32-38-server
    Amazon Linux AMI release 2012.03 ext4 3.2.12-3.2.4.amzn1.x86_64
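
To confirm which filesystem and kernel version a host is running, for example (the dbPath shown is an assumption):

df -T /var/lib/mongodb
uname -r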

fsync() on Directories

Important

MongoDB requires a filesystem that supports fsync() on directories. For example, HGFS and VirtualBox's shared folders do not support this operation.

MongoDB and TLS/SSL Libraries

On Linux platforms, you may observe one of the following statements in the MongoDB log:

<path to SSL libs>/libssl.so.<version>: no version information available (required by /usr/bin/mongod)
<path to SSL libs>/libcrypto.so.<version>: no version information available (required by /usr/bin/mongod)

These warnings indicate that the system’s TLS/SSL libraries are different from the TLS/SSL libraries that the mongod was compiled against. Typically these messages do not require intervention; however, you can use the following operations to determine the symbol versions that mongod expects:

objdump -T <path to mongod>/mongod | grep " SSL_"
objdump -T <path to mongod>/mongod | grep " CRYPTO_"

These operations will return output that resembles one of the following lines:

0000000000000000      DF *UND*       0000000000000000  libssl.so.10 SSL_write
0000000000000000      DF *UND*       0000000000000000  OPENSSL_1.0.0 SSL_write

The last two strings in this output are the symbol version and symbol name. Compare these values with the values returned by the following operations to detect symbol version mismatches:

objdump -T <path to TLS/SSL libs>/libssl.so.1*
objdump -T <path to TLS/SSL libs>/libcrypto.so.1*

This procedure is neither exact nor exhaustive: many symbols used by mongod from the libcrypto library do not begin with CRYPTO_.

MongoDB on Windows

MongoDB Using MMAPv1

Install Hotfix for MongoDB 2.6.6 and Later

Microsoft has released a hotfix for Windows 7 and Windows Server 2008 R2, KB2731284, that repairs a bug in these operating systems’ use of memory-mapped files that adversely affects the performance of MongoDB using the MMAPv1 storage engine.

Install this hotfix to obtain significant performance improvements on MongoDB 2.6.6 and later releases in the 2.6 series, which use MMAPv1 exclusively, and on 3.0 and later when using MMAPv1 as the storage engine.

Configure Windows Page File For MMAPv1

Configure the page file such that the minimum and maximum page file size are equal and at least 32 GB. Use a multiple of this size if, during peak usage, you expect concurrent writes to many databases or collections. However, the page file size does not need to exceed the maximum size of the database.

A large page file is needed as Windows requires enough space to accommodate all regions of memory mapped files made writable during peak usage, regardless of whether writes actually occur.

The page file is not used for database storage and will not receive writes during normal MongoDB operation. As such, the page file will not affect performance, but it must exist and be large enough to accommodate Windows’ commitment rules during peak database use.

Note

Dynamic page file sizing is too slow to accommodate the rapidly fluctuating commit charge of an active MongoDB deployment. This can result in transient overcommitment situations that may lead to abrupt server shutdown with a VirtualProtect error 1455.

MongoDB 3.0 Using WiredTiger

For MongoDB instances using the WiredTiger storage engine, performance on Windows is comparable to performance on Linux.

MongoDB on Virtual Environments

This section describes considerations when running MongoDB in some of the more common virtual environments.

For all platforms, consider Scheduling for Virtual Devices.

EC2

MongoDB is compatible with EC2. MongoDB Cloud Manager provides integration with Amazon Web Services (AWS) and lets you deploy new EC2 instances directly from MongoDB Cloud Manager. See Configure AWS Integration for more details.

Azure

For all MongoDB deployments using Azure, you must mount the volume that hosts the mongod instance’s dbPath with the Host Cache Preference READ/WRITE.

This applies to all Azure deployments, using any guest operating system.

If your volumes have inappropriate cache settings, MongoDB may eventually shut down with the following error:

[DataFileSync] FlushViewOfFile for <data file> failed with error 1 ...
[DataFileSync] Fatal Assertion 16387

These shut downs do not produce data loss when storage.journal.enabled is set to true. You can safely restart mongod at any time following this event.

The performance characteristics of MongoDB may change with READ/WRITE caching enabled.

The TCP keepalive on the Azure load balancer is 240 seconds by default, which can cause it to silently drop connections if the TCP keepalive on your Azure systems is greater than this value. You should set tcp_keepalive_time to 120 to ameliorate this problem.

On Linux systems:

  • To view the keep alive setting, you can use one of the following commands:

    sysctl net.ipv4.tcp_keepalive_time
    

    Or:

    cat /proc/sys/net/ipv4/tcp_keepalive_time
    

    The value is measured in seconds.

  • To change the tcp_keepalive_time value, you can use one of the following commands:

    sudo sysctl -w net.ipv4.tcp_keepalive_time=<value>
    

    Or:

    echo <value> | sudo tee /proc/sys/net/ipv4/tcp_keepalive_time
    

    These operations do not persist across system reboots. To persist the setting, add the following line to /etc/sysctl.conf:

    net.ipv4.tcp_keepalive_time = <value>
    

    On Linux, mongod and mongos processes limit the keepalive to a maximum of 300 seconds (5 minutes) on their own sockets by overriding keepalive values greater than 5 minutes.

For Windows systems:

  • To view the keep alive setting, issue the following command:

    reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v KeepAliveTime
    

    The registry value is not present by default. The system default, used if the value is absent, is 7200000 milliseconds or 0x6ddd00 in hexadecimal.

  • To change the KeepAliveTime value, use the following command in an Administrator Command Prompt, where <value> is expressed in hexadecimal (e.g. 0x1d4c0 is 120000):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /t REG_DWORD /v KeepAliveTime /d <value>
    

    Windows users should consider the Windows Server Technet Article on KeepAliveTime for more information on setting keep alive for MongoDB deployments on Windows systems.

VMWare

MongoDB is compatible with VMWare.

As some users have run into issues with VMWare’s memory overcommit feature, you should disable the feature.

Further, MongoDB is known to run poorly with VMWare’s balloon driver (vmmemctl), so you should disable this as well. VMWare uses the balloon driver to reduce physical memory usage on the host hardware by allowing the hypervisor to swap to disk while hiding this fact from the guest, which continues to see the same amount of (virtual) physical memory. This interferes with MongoDB’s memory management, and you are likely to experience significant performance degradation.

It is possible to clone a virtual machine running MongoDB. You might use this function to spin up a new virtual host to add as a member of a replica set. If you clone a VM with journaling enabled, the clone snapshot will be valid. If not using journaling, first stop mongod, then clone the VM, and finally, restart mongod.

Performance Monitoring

iostat

On Linux, use the iostat command to check if disk I/O is a bottleneck for your database. Specify a number of seconds when running iostat to avoid displaying stats covering the time since server boot.

For example, the following command will display extended statistics and the time for each displayed report, with traffic in MB/s, at one second intervals:

iostat -xmt 1

Key fields from iostat:

  • %util: this is the most useful field for a quick check; it indicates what percent of the time the device/drive is in use.
  • avgrq-sz: average request size. Smaller numbers for this value reflect more random IO operations.

bwm-ng

bwm-ng is a command-line tool for monitoring network use. If you suspect a network-based bottleneck, you may use bwm-ng to begin your diagnostic process.
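
For example, the following samples all interfaces every second and reports throughput in bytes (option names may vary between bwm-ng versions):

bwm-ng -u bytes -t 1000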

Backups

To make backups of your MongoDB database, please refer to MongoDB Backup Methods Overview.
