Configuring Control Center
You can configure Confluent Control Center through a configuration file that is passed to Control Center on start.
Parameters are provided in the form of key/value pairs. Lines beginning with #
are ignored.
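For example, assuming a standard Confluent Platform installation, where the startup script is bin/control-center-start, you pass the path to your configuration file on the command line (the path below is a placeholder; use wherever you keep your properties file):

    ./bin/control-center-start ./etc/confluent-control-center/control-center.properties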
Required Settings
In almost all installations, you will want to configure these parameters:
bootstrap.servers
- A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Default: “localhost:9092”
- Importance: high
zookeeper.connect
- Specifies the ZooKeeper connection string in the form hostname:port, where hostname and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3,.... The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of /chroot/path, you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
- Type: list
- Default: “localhost:2181”
- Importance: high
confluent.controlcenter.connect.cluster
- Base URL for the Kafka Connect cluster
- Type: string
- Default: “localhost:8083”
- Importance: high
confluent.controlcenter.license
- Confluent will issue a license key to each subscriber, allowing the subscriber to unlock the full functionality of Control Center. The license key is a short snippet of text that you can copy and paste into this configuration parameter. If you do not provide a license key, Control Center will stop working after 30 days. If you are a subscriber, please contact Confluent Support for more information.
- Type: string
- Default: None
- Importance: high
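Putting the required settings together, a minimal configuration file might look like the following sketch. The host names, ports, and license value are placeholders; replace them with your own.

    # Kafka cluster that Control Center connects to
    bootstrap.servers=kafka1:9092,kafka2:9092
    # ZooKeeper ensemble for that cluster
    zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
    # REST endpoint of the Kafka Connect cluster
    confluent.controlcenter.connect.cluster=connect1:8083
    # License key issued by Confluent (omit to run the 30-day trial)
    confluent.controlcenter.license=<your-license-key>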
Optional Settings
We also allow you to change other parameters that affect how Control Center behaves, such as internal topic names, data file locations, and replication settings. The default values for most of these settings are suitable for production use, but you can change them if needed.
confluent.controlcenter.id
- Identifier used as a prefix so that multiple instances of Control Center can co-exist.
- Type: integer
- Default: 1
- Importance: low
confluent.monitoring.interceptor.topic
- The topic that acts as the durable log for interceptor monitoring data
- Type: string
- Default: “_confluent-monitoring”
- Importance: low
confluent.controlcenter.data.dir
- Location for Control Center-specific data
- Type: path
- Default: “/tmp/confluent/control-center”
- Importance: low
confluent.controlcenter.name
- Control Center Name
- Type: string
- Default: “_confluent-controlcenter”
- Importance: low
confluent.controlcenter.internal.topics.partitions
- Number of partitions used internally by Control Center.
- Type: integer
- Default: 4
- Importance: low
confluent.controlcenter.internal.topics.replication
- Replication factor used internally by Control Center.
- Type: integer
- Default: 3
- Importance: low
confluent.controlcenter.internal.topics.retention.ms
- Maximum time that internal data is stored in Kafka.
- Type: long
- Default: 86400000
- Importance: low
confluent.controlcenter.internal.topics.changelog.segment.bytes
- Segment size for internal changelog topics in Kafka. This must be no larger than the broker setting log.cleaner.dedupe.buffer.size divided by log.cleaner.threads, to guarantee enough space in the broker’s dedupe buffer for compaction to work.
- Type: long
- Default: 134217728
- Importance: low
confluent.monitoring.interceptor.topic.partitions
- Number of partitions for interceptor metrics topic.
- Type: integer
- Default: 12
- Importance: low
confluent.monitoring.interceptor.topic.replication
- Replication factor for interceptor metrics topic.
- Type: integer
- Default: 3
- Importance: low
confluent.monitoring.interceptor.topic.retention.ms
- Maximum time that interceptor data is stored in Kafka.
- Type: long
- Default: None
- Importance: low
confluent.controlcenter.connect.timeout
- Timeout in milliseconds for calls to the Connect cluster
- Type: long
- Default: 15000
- Importance: low
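For example, to keep Control Center data somewhere more durable than /tmp and to lower the internal replication factor on a small test cluster, you could add lines such as the following. The path and the replication factor of 1 are illustrative only; keep the default replication factor of 3 for production.

    confluent.controlcenter.data.dir=/var/lib/confluent/control-center
    confluent.controlcenter.internal.topics.replication=1
    confluent.monitoring.interceptor.topic.replication=1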
Kafka Encryption, Authentication, Authorization Settings
confluent.controlcenter.streams.security.protocol
- Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Importance: low
confluent.controlcenter.streams.ssl.keystore.location
- The location of the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.keystore.password
- The store password for the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.key.password
- The password of the private key in the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.truststore.location
- The location of the trust store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.truststore.password
- The password for the trust store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.sasl.mechanism
- SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
- Type: string
- Default: GSSAPI
- Importance: low
confluent.controlcenter.streams.sasl.kerberos.service.name
- The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config.
- Type: string
- Default: None
- Importance: low
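As a sketch, a Control Center instance that connects to a SASL_SSL-secured Kafka cluster might add settings along these lines. The file locations, password, and Kerberos service name are placeholders; use the values that match your brokers' listener configuration.

    confluent.controlcenter.streams.security.protocol=SASL_SSL
    confluent.controlcenter.streams.sasl.mechanism=GSSAPI
    confluent.controlcenter.streams.sasl.kerberos.service.name=kafka
    confluent.controlcenter.streams.ssl.truststore.location=/path/to/truststore.jks
    confluent.controlcenter.streams.ssl.truststore.password=<truststore-password>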
HTTPS Settings
If you secure web access to Control Center with SSL, you may also need to configure the following parameters.
confluent.controlcenter.rest.listeners
Comma-separated list of listeners that listen for API requests over either http or https. If a listener uses https, the appropriate SSL configuration parameters need to be set as well.
- Type: list
- Default: “http://0.0.0.0:9021”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.location
Used for https. Location of the keystore file to use for SSL. IMPORTANT: Jetty requires that the key’s CN, stored in the keystore, must match the FQDN.
- Type: string
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.password
Used for https. The store password for the keystore file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.key.password
Used for https. The password of the private key in the keystore file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.truststore.location
Used for https. Location of the trust store. Required only to authenticate https clients.
- Type: string
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.truststore.password
Used for https. The store password for the trust store file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.type
Used for https. The type of keystore file.
- Type: string
- Default: “JKS”
- Importance: medium
confluent.controlcenter.rest.ssl.truststore.type
Used for https. The type of trust store file.
- Type: string
- Default: “JKS”
- Importance: medium
confluent.controlcenter.rest.ssl.protocol
Used for https. The SSL protocol used to generate the SslContextFactory.
- Type: string
- Default: “TLS”
- Importance: medium
confluent.controlcenter.rest.ssl.provider
Used for https. The SSL security provider name. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: medium
confluent.controlcenter.rest.ssl.client.auth
Used for https. Whether or not to require the https client to authenticate via the server’s trust store.
- Type: boolean
- Default: false
- Importance: medium
confluent.controlcenter.rest.ssl.enabled.protocols
Used for https. The list of protocols enabled for SSL connections. Comma-separated list. Leave blank to use Jetty’s defaults.
- Type: list
- Default: “” (Jetty’s default)
- Importance: medium
confluent.controlcenter.rest.ssl.keymanager.algorithm
Used for https. The algorithm used by the key manager factory for SSL connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.trustmanager.algorithm
Used for https. The algorithm used by the trust manager factory for SSL connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.cipher.suites
Used for https. A list of SSL cipher suites. Comma-separated list. Leave blank to use Jetty’s defaults.
- Type: list
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.endpoint.identification.algorithm
Used for https. The endpoint identification algorithm to validate the server hostname using the server certificate. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
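For example, to serve the Control Center web interface over HTTPS only, you might use a configuration like the following sketch. The listener port, keystore path, and passwords are placeholders, and the keystore's CN must match the host's FQDN as noted above.

    confluent.controlcenter.rest.listeners=https://0.0.0.0:9021
    confluent.controlcenter.rest.ssl.keystore.location=/path/to/keystore.jks
    confluent.controlcenter.rest.ssl.keystore.password=<keystore-password>
    confluent.controlcenter.rest.ssl.key.password=<key-password>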
Internal Kafka Streams Settings
Because Control Center reads and writes data to Kafka, we also allow you to change some producer and consumer configurations. We do not recommend changing these values unless advised by Confluent Support. These settings map 1:1 with the producer/consumer configs used internally by Confluent Control Center and all use the prefix confluent.controlcenter.streams. Some examples of values used internally are given below.
confluent.controlcenter.streams.num.stream.threads
- The number of threads to execute stream processing
- Type: integer
- Default: 8
- Importance: low
confluent.controlcenter.streams.session.timeout.ms
- The timeout used to detect a consumer failure
- Type: integer
- Default: 275000
- Importance: low
confluent.controlcenter.streams.request.timeout.ms
- The maximum amount of time the client will wait for the response of a request
- Type: integer
- Default: 285000
- Importance: low
confluent.controlcenter.streams.retries
- Number of retries in case of a produce failure
- Type: integer
- Default: maximum integer (effectively infinite)
- Importance: low
confluent.controlcenter.streams.retry.backoff.ms
- Time to wait before retrying after a produce failure
- Type: long
- Default: 100
- Importance: low
confluent.controlcenter.streams.compression.type
- Compression type to use when producing to internal topics
- Type: string
- Default: lz4
- Importance: low
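For instance, if advised by Confluent Support, you might reduce the number of stream threads or change the compression codec for internal topics with lines like these. The values shown are illustrative, not recommendations.

    confluent.controlcenter.streams.num.stream.threads=4
    confluent.controlcenter.streams.compression.type=gzip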