Sharded Cluster Balancer¶
The MongoDB balancer is a background process that monitors the number of chunks on each shard. When the number of chunks on a given shard reaches specific migration thresholds, the balancer attempts to automatically migrate chunks between shards so that each shard has an equal number of chunks.
The balancing procedure for sharded clusters is entirely transparent to the user and application layer, though there may be some performance impact while the procedure takes place.
Cluster Balancer¶
The balancer process is responsible for evenly redistributing the chunks of every sharded collection among the shards. By default, the balancer process is always enabled.
Changed in version 3.4: The balancer runs on the primary of the config server replica set.
When a balancer process is active, the config server primary acquires a “lock” by modifying a document in the locks collection in the Config Database. This “balancer” lock is never released.
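For example, from a mongos you can check whether the balancer is enabled and whether a balancing round is currently in progress; a minimal sketch using the standard shell helpers:

```javascript
// Connect to a mongos and check the balancer.
sh.getBalancerState()    // true if the balancer is enabled
sh.isBalancerRunning()   // true if a balancing round is in progress
```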
To address uneven chunk distribution for a sharded collection, the balancer migrates chunks from shards with more chunks to shards with a fewer number of chunks. The balancer migrates the chunks until there is an even distribution of chunks for the collection across the shards. For details about chunk migration, see Chunk Migration Procedure.
Changed in version 2.6: Chunk migrations can have an impact on disk space. Starting in MongoDB 2.6, the source shard automatically archives the migrated documents by default. For details, see moveChunk directory.
Chunk migrations carry some overhead in terms of bandwidth and workload, both of which can impact database performance. The balancer attempts to minimize the impact by:
- Restricting a shard to at most one migration at any given time; i.e. a shard cannot participate in multiple chunk migrations at the same time. To migrate multiple chunks from a shard, the balancer migrates the chunks one at a time.
  Changed in version 3.4: Starting in MongoDB 3.4, MongoDB can perform parallel chunk migrations. Observing the restriction that a shard can participate in at most one migration at a time, for a sharded cluster with n shards, MongoDB can perform at most n/2 (rounded down) simultaneous chunk migrations.
  See also Asynchronous Chunk Migration Cleanup.
- Starting a balancing round only when the difference in the number of chunks between the shard with the greatest number of chunks for a sharded collection and the shard with the lowest number of chunks for that collection reaches the migration threshold.
You may disable the balancer temporarily for maintenance. See Disable the Balancer for details.
You can also limit the window during which the balancer runs to prevent it from impacting production traffic. See Schedule the Balancing Window for details.
Note
The specification of the balancing window is relative to the local time zone of the primary of the config server replica set.
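As a sketch, the following shell commands stop and restart the balancer and set a balancing window; the "23:00" and "06:00" times are placeholders, and the window document lives in config.settings:

```javascript
// Run against a mongos.
sh.stopBalancer()     // disable the balancer before maintenance
sh.startBalancer()    // re-enable it afterwards

// Limit balancing to a nightly window (placeholder times, interpreted in the
// config server primary's local time zone).
use config
db.settings.update(
   { _id: "balancer" },
   { $set: { activeWindow: { start: "23:00", stop: "06:00" } } },
   { upsert: true }
)
```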
Adding and Removing Shards from the Cluster¶
Adding a shard to a cluster creates an imbalance, since the new shard has no chunks. While MongoDB begins migrating data to the new shard immediately, it can take some time before the cluster balances. See the Add Shards to a Cluster tutorial for instructions on adding a shard to a cluster.
Removing a shard from a cluster creates a similar imbalance, since chunks residing on that shard must be redistributed throughout the cluster. While MongoDB begins draining a removed shard immediately, it can take some time before the cluster balances. Do not shut down the servers associated with the removed shard during this process.
When you remove a shard in a cluster with an uneven chunk distribution, the balancer first removes the chunks from the draining shard and then balances the remaining uneven chunk distribution.
See the Remove Shards from an Existing Sharded Cluster tutorial for instructions on safely removing a shard from a cluster.
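For illustration, adding and draining shards from the mongo shell might look like the following; the replica set names and host are hypothetical:

```javascript
// Add a new shard, identified by its replica set name and one member host.
sh.addShard("shardB/shard-b-1.example.net:27018")

// Start draining an existing shard; rerun removeShard to monitor the
// remaining chunk counts until it reports state: "completed".
db.adminCommand( { removeShard: "shardA" } )
```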
Chunk Migration Procedure¶
All chunk migrations use the following procedure:
1. The balancer process sends the moveChunk command to the source shard.
2. The source starts the move with an internal moveChunk command. During the migration process, operations to the chunk route to the source shard. The source shard is responsible for incoming write operations for the chunk.
3. The destination shard builds any indexes required by the source that do not exist on the destination.
4. The destination shard begins requesting documents in the chunk and starts receiving copies of the data. See also Chunk Migration and Replication.
5. After receiving the final document in the chunk, the destination shard starts a synchronization process to ensure that it has the changes to the migrated documents that occurred during the migration.
6. When fully synchronized, the source shard connects to the config database and updates the cluster metadata with the new location for the chunk.
7. After the source shard completes the update of the metadata, and once there are no open cursors on the chunk, the source shard deletes its copy of the documents.
Note
If the balancer needs to perform additional chunk migrations from the source shard, the balancer can start the next chunk migration without waiting for the current migration process to finish this deletion step. See Asynchronous Chunk Migration Cleanup.
Changed in version 2.6: The source shard automatically archives the migrated documents by default. For more information, see moveChunk directory.
The migration process ensures consistency and maximizes the availability of chunks during balancing.
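The same procedure underlies a manual migration issued with the moveChunk command; a minimal sketch, with a hypothetical namespace, shard key value, and destination shard:

```javascript
// Migrate the chunk containing { user_id: 1000 } in mydb.users to shard0001.
db.adminCommand( {
   moveChunk: "mydb.users",
   find: { user_id: 1000 },
   to: "shard0001"
} )
```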
Migration Thresholds¶
To minimize the impact of balancing on the cluster, the balancer only begins balancing after the distribution of chunks for a sharded collection has reached certain thresholds. The thresholds apply to the difference in number of chunks between the shard with the most chunks for the collection and the shard with the fewest chunks for that collection. The balancer has the following thresholds:
| Number of Chunks | Migration Threshold |
| --- | --- |
| Fewer than 20 | 2 |
| 20-79 | 4 |
| 80 and greater | 8 |
The balancer stops running on the target collection when the difference between the number of chunks on any two shards for that collection is less than two, or a chunk migration fails.
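To see how uneven the distribution is for a given collection, you can count its chunks per shard directly from the config database; a sketch with a hypothetical namespace, assuming a version in which config.chunks documents carry an ns field:

```javascript
// Count chunks per shard for one sharded collection.
use config
db.chunks.aggregate([
   { $match: { ns: "mydb.users" } },
   { $group: { _id: "$shard", nChunks: { $sum: 1 } } },
   { $sort: { nChunks: -1 } }
])
```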
Asynchronous Chunk Migration Cleanup¶
To migrate multiple chunks from a shard, the balancer migrates the chunks one at a time. However, the balancer does not wait for the current migration’s delete phase to complete before starting the next chunk migration. See Chunk Migration for the chunk migration process and the delete phase.
This queuing behavior allows shards to unload chunks more quickly in cases of a heavily imbalanced cluster, such as when performing initial data loads without pre-splitting and when adding new shards.
This behavior also affects the moveChunk command, and migration scripts that use the moveChunk command may proceed more quickly.
In some cases, the delete phases may persist longer. If multiple delete phases are queued but not yet complete, a crash of the replica set’s primary can orphan data from multiple migrations.
The _waitForDelete option, available as a setting for the balancer as well as the moveChunk command, can alter the behavior so that the delete phase of the current migration blocks the start of the next chunk migration. The _waitForDelete option is generally for internal testing purposes. For more information, see Wait for Delete.
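As a sketch, _waitForDelete can be enabled either in the balancer settings or on a manual moveChunk; the namespace, shard key value, and shard name are placeholders:

```javascript
// Make the balancer wait for each migration's delete phase (config.settings).
use config
db.settings.update(
   { _id: "balancer" },
   { $set: { _waitForDelete: true } },
   { upsert: true }
)

// Or request it for a single manual migration.
db.adminCommand( {
   moveChunk: "mydb.users",
   find: { user_id: 1000 },
   to: "shard0001",
   _waitForDelete: true
} )
```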
Chunk Migration and Replication¶
Changed in version 3.4.
During chunk migration, the _secondaryThrottle value determines when the balancer proceeds with the next document in the chunk:
- If true, then by default, each document move during chunk migration propagates to at least one secondary before the balancer proceeds with the next document. This is equivalent to a write concern of { w: 2 }.
  Note: The writeConcern field in the balancer configuration document allows you to specify different write concern semantics with the _secondaryThrottle option. For an example, see Change Replication Behavior for Chunk Migration.
- If false, the balancer does not wait for replication to a secondary and instead continues with the next document.
Starting in MongoDB 3.4, for WiredTiger, the default value of _secondaryThrottle is false for all chunk migrations. The default value remains true for MMAPv1.
To update the _secondaryThrottle parameter for the balancer, see Change Replication Behavior for Chunk Migration for an example.
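A minimal sketch of that change, setting _secondaryThrottle together with a writeConcern field in the balancer configuration document (run against a mongos):

```javascript
use config
db.settings.update(
   { "_id": "balancer" },
   { $set: { "_secondaryThrottle": true, "writeConcern": { "w": "majority" } } },
   { upsert: true }
)
```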
Independent of the _secondaryThrottle setting, certain phases of the chunk migration have the following replication policy:
- MongoDB briefly pauses all application reads and writes to the collection being migrated, on the source shard, before updating the config servers with the new location for the chunk, and resumes the application reads and writes after the update. The chunk move requires all writes to be acknowledged by a majority of the members of the replica set both before and after committing the chunk move to the config servers.
- When an outgoing chunk migration finishes and cleanup occurs, all writes must be replicated to a majority of servers before further cleanup (from other outgoing migrations) or new incoming migrations can proceed.
Maximum Number of Documents Per Chunk to Migrate¶
MongoDB cannot move a chunk if the number of documents in the chunk exceeds
either 250000 documents or 1.3 times the result of dividing the configured
chunk size by the average document size.
db.collection.stats()
includes the avgObjSize
field,
which represents the average document size in the collection.
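As a rough illustration, the effective per-chunk document limit can be estimated in the shell; the collection name is a placeholder and the default 64 megabyte chunk size is assumed:

```javascript
// Estimate how many documents a chunk may hold and still be movable.
var stats = db.users.stats()              // placeholder collection
var chunkSizeBytes = 64 * 1024 * 1024     // assumes the default chunk size
var limit = Math.min(250000, 1.3 * (chunkSizeBytes / stats.avgObjSize))
print("Chunks with more than ~" + Math.floor(limit) + " documents cannot be moved")
```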
Shard Size¶
By default, MongoDB attempts to fill all available disk space with data on every shard as the data set grows. To ensure that the cluster always has the capacity to handle data growth, monitor disk usage as well as other performance metrics.
See the Change the Maximum Storage Size for a Given Shard tutorial for instructions on setting the maximum size for a shard.
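As a sketch, a maximum size can be set by updating the shard's document in the config database; the shard name and size below are placeholders, and maxSize is expressed in megabytes:

```javascript
use config
db.shards.update(
   { _id: "shard0000" },
   { $set: { maxSize: 256000 } }    // roughly 250 GB, expressed in MB
)
```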