# Upgrade Config Servers to Replica Set (Downtime)
**Important**

In version 3.4, MongoDB removes support for SCCC config servers. Before you can upgrade your sharded clusters to 3.4, you must convert your config servers from mirrored `mongod` instances (SCCC) to a config server replica set (CSRS). To convert to CSRS, follow this procedure to change your 3.2.x SCCC config servers to 3.2.x CSRS.
The following procedure upgrades three mirrored config servers to a config server replica set.
## Prerequisites
- All binaries in the sharded clusters must be at least version 3.2. See Upgrade a Sharded Cluster to 3.2 for instructions to upgrade the sharded cluster.
- The existing config servers must be in sync.
## Procedure
**Important**

The procedure outlined in this tutorial requires downtime. If all the sharded cluster binaries are at least version 3.2.4, you can instead convert the config servers to a replica set without downtime. For details, see Upgrade Config Servers to Replica Set.
1. Disable the balancer as described in Disable the Balancer.
2. Connect a `mongo` shell to the first config server listed in the `configDB` setting of the `mongos` and run `rs.initiate()` to initiate the single-member replica set:

    ```javascript
    rs.initiate( {
       _id: "csReplSet",
       version: 1,
       configsvr: true,
       members: [ { _id: 0, host: "<host>:<port>" } ]
    } )
    ```

    - `_id` corresponds to the replica set name for the config servers.
    - `version` is set to `1`, corresponding to the initial version of the replica set configuration.
    - `configsvr` must be set to `true`.
    - The `members` array contains a document that specifies:
        - `members._id`, a numeric identifier for the member.
        - `members.host`, a string corresponding to the config server's hostname and port.
3. Restart this config server as a single-member replica set with:

    - the `--replSet` option set to the replica set name specified during the `rs.initiate()`,
    - the `--configsvrMode` option set to the legacy config server mode Sync Cluster Connection Config (`sccc`),
    - the `--configsvr` option, and
    - the `--storageEngine` option set to the storage engine used by this config server. For this upgrade procedure, the existing config server can use either MMAPv1 or WiredTiger.

    Include additional options as specific to your deployment.

    ```sh
    mongod --configsvr --replSet csReplSet --configsvrMode=sccc --storageEngine <storageEngine> --port <port> --dbpath <path>
    ```

    Or, if using a configuration file, specify `replication.replSetName`, `sharding.clusterRole`, `sharding.configsvrMode`, and `net.port`:

    ```yaml
    sharding:
       clusterRole: configsvr
       configsvrMode: sccc
    replication:
       replSetName: csReplSet
    net:
       port: <port>
    storage:
       dbPath: <path>
       engine: <storageEngine>
    ```
4. Start the new `mongod` instances to add to the replica set. These instances must use the WiredTiger storage engine. Starting in 3.2, WiredTiger is the default storage engine for new `mongod` instances with new data paths.

    **Important**

    - Do not add existing config servers to the replica set.
    - Use new dbpaths for the new instances.

    The number of new `mongod` instances to add depends on the config server currently in the single-member replica set:

    - If the config server is using MMAPv1, start 3 new `mongod` instances.
    - If the config server is using WiredTiger, start 2 new `mongod` instances.

    **Note**

    The example in this procedure assumes that the existing config servers use MMAPv1.

    For each new `mongod` instance to add, include the `--configsvr` and the `--replSet` options:

    ```sh
    mongod --configsvr --replSet csReplSet --port <port> --dbpath <path>
    ```

    Or, if using a configuration file:

    ```yaml
    sharding:
       clusterRole: configsvr
    replication:
       replSetName: csReplSet
    net:
       port: <port>
    storage:
       dbPath: <path>
    ```
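As a quick mnemonic, the member-count rule above can be expressed as a small helper. This function is illustrative only, not part of MongoDB's API; the reasoning is that the final CSRS has three WiredTiger members, and an existing MMAPv1 config server is removed at the end of this procedure while a WiredTiger one can stay:

```javascript
// Illustrative only: how many new mongod instances to start, per the rule above.
// An MMAPv1 config server is removed from the set at the end of the procedure,
// so it needs 3 fresh WiredTiger members; a WiredTiger one remains, so only 2.
function newMembersNeeded(storageEngine) {
  return storageEngine === "mmapv1" ? 3 : 2;
}

console.log(newMembersNeeded("mmapv1"));     // 3
console.log(newMembersNeeded("wiredTiger")); // 2
```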
5. Using the `mongo` shell connected to the replica set config server, add the new `mongod` instances as non-voting, priority 0 members:

    ```javascript
    rs.add( { host: <host:port>, priority: 0, votes: 0 } )
    ```
6. Once all the new members have been added as non-voting, priority 0 members, ensure that the new nodes have completed the initial sync and have reached `SECONDARY` state. To check the state of the replica set members, run `rs.status()` in the `mongo` shell:

    ```javascript
    rs.status()
    ```
7. Shut down one of the other non-replica set config servers; i.e. either the second or third config server listed in the `configDB` setting of the `mongos`.

8. Reconfigure the replica set to allow all members to vote and to have the default priority of `1`:

    ```javascript
    var cfg = rs.conf();

    cfg.members[0].priority = 1;
    cfg.members[1].priority = 1;
    cfg.members[2].priority = 1;
    cfg.members[3].priority = 1;
    cfg.members[0].votes = 1;
    cfg.members[1].votes = 1;
    cfg.members[2].votes = 1;
    cfg.members[3].votes = 1;

    rs.reconfig(cfg);
    ```
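Equivalently, the per-member assignments can be written as a loop. The sketch below runs on a plain object shaped like the document `rs.conf()` returns; in a real `mongo` shell you would start from `var cfg = rs.conf();` and finish with `rs.reconfig(cfg);` instead:

```javascript
// Sketch only: a stand-in for the configuration document rs.conf() would return
// at this point (the first config server plus the three new members).
var cfg = {
  _id: "csReplSet",
  members: [
    { _id: 0, priority: 0, votes: 0 },
    { _id: 1, priority: 0, votes: 0 },
    { _id: 2, priority: 0, votes: 0 },
    { _id: 3, priority: 0, votes: 0 }
  ]
};

// Give every member a vote and the default priority of 1.
cfg.members.forEach(function (member) {
  member.priority = 1;
  member.votes = 1;
});

// In the mongo shell, apply the updated document with: rs.reconfig(cfg);
console.log(cfg.members.every(function (m) {
  return m.priority === 1 && m.votes === 1;
})); // true
```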
9. Step down the first config server, i.e. the server started with `--configsvrMode=sccc`:

    ```javascript
    rs.stepDown()
    ```
10. Shut down the following members of the sharded cluster:

    - The `mongos` instances.
    - The shards.
    - The remaining non-replica set config servers.
11. Shut down the first config server.
12. If the first config server uses the MMAPv1 storage engine, remove the member from the replica set. Connect a `mongo` shell to the current primary and use `rs.remove()`:

    **Important**

    If the first config server uses the WiredTiger storage engine, do not remove it.

    ```javascript
    rs.remove("<hostname>:<port>")
    ```
13. If the first config server uses WiredTiger, restart the first config server in config server replica set (CSRS) mode; i.e. restart without the `--configsvrMode=sccc` option:

    **Important**

    If the first config server uses the MMAPv1 storage engine, do not restart it.

    ```sh
    mongod --configsvr --replSet csReplSet --storageEngine wiredTiger --port <port> --dbpath <path>
    ```

    Or, if using a configuration file, omit the `sharding.configsvrMode` setting:

    ```yaml
    sharding:
       clusterRole: configsvr
    replication:
       replSetName: csReplSet
    net:
       port: <port>
    storage:
       dbPath: <path>
       engine: wiredTiger
    ```
14. Restart the shards.
15. Restart the `mongos` instances with an updated `--configdb` or `configDB` setting.

    For the updated `--configdb` or `configDB` setting, specify the replica set name for the config servers and the members of the replica set:

    ```sh
    mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3>
    ```
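The `--configdb` value is the replica set name, a slash, and a comma-separated member list. A small helper makes the format explicit; the function and the `example.net` host names are illustrative placeholders, not part of MongoDB:

```javascript
// Illustrative only: build the <replSetName>/<host:port>,... string that
// --configdb (or the configDB setting) expects once the config servers are a CSRS.
function buildConfigDB(replSetName, members) {
  return replSetName + "/" + members.join(",");
}

var configdb = buildConfigDB("csReplSet", [
  "cfg1.example.net:27019",
  "cfg2.example.net:27019",
  "cfg3.example.net:27019"
]);
console.log(configdb);
// csReplSet/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019
```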
16. Re-enable the balancer as described in Enable the Balancer.