Configuring Control Center¶
Base Settings¶
You can configure Confluent Control Center through a configuration file that is passed to Control Center on start.
A sample configuration is included at etc/confluent-control-center/control-center.properties.
Parameters are provided in the form of key/value pairs. Lines beginning with # are ignored.
bootstrap.servers
- A list of host/port pairs used for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers, and should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Default: “localhost:9092”
- Importance: high
zookeeper.connect
- Specifies the ZooKeeper connection string in the form hostname:port, where hostname and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3,.... The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, for a chroot path of /chroot/path, you would specify the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
- Type: list
- Default: “localhost:2181”
- Importance: high
confluent.license
- Confluent will issue a license key to each subscriber, allowing the subscriber to unlock the full functionality of Control Center. The license key is a short snippet of text that you can copy and paste as the value of this setting. If you do not provide a license key, Control Center will stop working after 30 days. If you are a subscriber, please contact Confluent Support for more information. confluent.controlcenter.license is a deprecated synonym for this configuration key.
- Type: string
- Default: None
- Importance: high
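Taken together, the settings above form a minimal base configuration. A sketch of a control-center.properties file (the host names and license value are placeholders you would replace with your own):

```properties
# Kafka brokers used for the initial connection; full cluster membership
# is discovered from these.
bootstrap.servers=kafka1:9092,kafka2:9092

# ZooKeeper ensemble for the cluster.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

# License key issued by Confluent; without one, Control Center stops
# working after 30 days.
confluent.license=<your-license-key>
```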
In production, we recommend running Control Center using a separate Kafka cluster from the one being monitored. In this case, the following settings must be configured for the Kafka cluster being monitored if you wish to use Connect. If you are simply testing Control Center locally, you do not need to adjust these settings.
confluent.controlcenter.connect.bootstrap.servers
- Bootstrap servers for the Kafka cluster backing the Connect cluster. If left unspecified, falls back to the bootstrap.servers setting.
- Type: list
- Default: []
- Importance: medium
confluent.controlcenter.connect.zookeeper.connect
- ZooKeeper connection string for the Kafka cluster backing the Connect cluster. If left unspecified, falls back to the zookeeper.connect setting.
- Type: string
- Default: “”
- Importance: medium
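When the Connect cluster is backed by a different Kafka cluster than the one Control Center runs against, the two fallback settings above would be overridden along these lines (host names are illustrative placeholders):

```properties
# Kafka cluster backing the Connect cluster, distinct from
# Control Center's own bootstrap.servers / zookeeper.connect.
confluent.controlcenter.connect.bootstrap.servers=connect-kafka1:9092,connect-kafka2:9092
confluent.controlcenter.connect.zookeeper.connect=connect-zk1:2181,connect-zk2:2181
```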
Logging¶
By default Control Center will output it’s logs to stdout. Logging configuration is defined in etc/confluent-control-center/log4j.properties
.
We also supply etc/confluent-control-center/log4j-rolling.properties
as an example of setting up Control Center with rolling log
files that may be easier to manage. You can select your desired log4j config by setting the CONTROL_CENTER_LOG4J_OPTS
env variable
when starting Control Center.
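For example, to start Control Center with the rolling-file configuration (paths assume a standard Confluent Platform package layout and the control-center-start script from its bin directory; adjust to your install):

```shell
# Point log4j at the rolling-file example config, then start Control Center.
export CONTROL_CENTER_LOG4J_OPTS="-Dlog4j.configuration=file:/etc/confluent-control-center/log4j-rolling.properties"
control-center-start /etc/confluent-control-center/control-center.properties
```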
Optional Settings¶
You can also change other parameters that control how Control Center behaves, such as internal topic names, data file locations, and replication settings. The default values for most of these settings are suitable for production use, but you can change them if needed.
General¶
confluent.controlcenter.connect.cluster
- Comma-separated list of URLs for the Connect cluster. This must be set if you wish to manage a Connect cluster.
- Type: string
- Default: “localhost:8083”
- Importance: high
confluent.controlcenter.data.dir
- Location for Control Center specific data. Although the data stored in this directory can be recomputed, doing so is expensive and can affect the availability of Control Center’s stream monitoring functionality. For production, you should set this to a durable location.
- Type: path
- Default: “/var/lib/confluent-control-center”
- Importance: high
confluent.controlcenter.rest.listeners
- Comma-separated list of listeners that listen for API requests over either http or https. If a listener uses https, the appropriate SSL configuration parameters must also be set. The first value is used in the body of any alert emails sent from Control Center.
- Type: list
- Default: “http://0.0.0.0:9021”
- Importance: high
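For instance, to serve the web interface over both http and https (ports are illustrative; the https listener additionally requires the SSL parameters from the HTTPS Settings section):

```properties
# First listener appears in alert email links.
confluent.controlcenter.rest.listeners=http://0.0.0.0:9021,https://0.0.0.0:9022
```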
confluent.controlcenter.id
- Identifier used as a prefix so that multiple instances of Control Center can co-exist.
- Type: string
- Default: “1”
- Importance: low
confluent.controlcenter.name
- Control Center Name
- Type: string
- Default: “_confluent-controlcenter-3.3.1”
- Importance: low
confluent.controlcenter.internal.topics.partitions
- Number of partitions used internally by Control Center.
- Type: integer
- Default: 4
- Importance: low
confluent.controlcenter.internal.topics.replication
- Replication factor used internally by Control Center. It is not recommended to reduce this value except in a development environment.
- Type: integer
- Default: 3
- Importance: low
confluent.controlcenter.internal.topics.retention.ms
- Maximum time in milliseconds that internal data is stored in Kafka.
- Type: long
- Default: 86400000
- Importance: low
confluent.controlcenter.internal.topics.changelog.segment.bytes
- Segment size in bytes for internal changelog topics in Kafka. This must be smaller than the broker setting log.cleaner.dedupe.buffer.size divided by log.cleaner.threads to guarantee enough space in the broker’s dedupe buffer for compaction to work.
- Type: long
- Default: 134217728
- Importance: low
confluent.controlcenter.connect.timeout.ms
- Timeout in milliseconds for calls to the Connect cluster.
- Type: long
- Default: 15000
- Importance: low
confluent.metrics.topic.replication
- Replication factor for the metrics topic. Reducing this value is not recommended except in a development environment.
- Type: int
- Default: 3
- Importance: low
confluent.metrics.topic.partitions
- Partition count for the metrics topic.
- Type: int
- Default: 12
- Importance: low
confluent.metrics.topic.skip.backlog.minutes
- Skip backlog older than this many minutes for broker metrics data. Set this to 0 if you want to process from the latest offsets. This config overrides confluent.controlcenter.streams.consumer.auto.offset.reset (deprecated) for the metrics input topic.
- Type: long
- Default: 15
- Importance: low
confluent.controlcenter.disk.skew.warning.min.bytes
- Threshold for the maximum difference in disk usage across all brokers before a disk skew warning is published.
- Type: long
- Default: 1073741824
- Importance: low
Monitoring Settings¶
These optional settings are for the Stream Monitoring functionality. The default settings work for the majority of use cases and scales.
confluent.monitoring.interceptor.topic
- The Kafka topic that stores monitoring interceptor data. This setting must match the confluent.monitoring.interceptor.topic configuration used by the interceptors in your applications. Usually you should not change this setting unless you are running multiple instances of Control Center with client monitoring interceptor data being reported to the same Kafka cluster.
- Type: string
- Default: “_confluent-monitoring”
- Importance: high
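If you do change the topic name, every instrumented client must report to the same topic. A sketch of the matching producer-side configuration in an application (the topic name here is illustrative):

```properties
# Application producer config: enable the monitoring interceptor and
# report its data to a custom topic that matches the Control Center setting.
interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
confluent.monitoring.interceptor.topic=_confluent-monitoring-c3-a
```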
confluent.monitoring.interceptor.topic.partitions
- Number of partitions for the monitoring interceptor data topic
- Type: integer
- Default: 12
- Importance: low
confluent.monitoring.interceptor.topic.replication
- Replication factor for the monitoring topic. Reducing this value is not recommended except in a development environment.
- Type: int
- Default: 3
- Importance: low
confluent.monitoring.interceptor.topic.retention.ms
- Maximum time in milliseconds that interceptor data is stored in Kafka.
- Type: long
- Default: None
- Importance: low
confluent.monitoring.interceptor.topic.skip.backlog.minutes
- Skip backlog older than this many minutes for monitoring interceptor data. Set this to 0 if you want to process from the latest offsets. This config overrides confluent.controlcenter.streams.consumer.auto.offset.reset (deprecated) for the monitoring input topic.
- Type: long
- Default: 15
- Importance: low
UI Authentication Settings¶
These optional settings allow you to enable and configure authentication for accessing the Control Center web interface. See the UI Authentication guide for more detail on configuring authentication.
confluent.controlcenter.rest.authentication.method
- Authentication method to use. One of [NONE, BASIC].
- Type: string
- Default: NONE
- Importance: low
confluent.controlcenter.rest.authentication.realm
- Realm to be used by Control Center when authenticating.
- Type: string
- Default: “”
- Importance: low
confluent.controlcenter.rest.authentication.roles
- Roles that are authenticated to access Control Center.
- Type: string
- Default: “*”
- Importance: low
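A minimal BASIC-authentication setup might look like this (the realm name must match a realm defined in your JAAS login configuration, and the role names are illustrative; see the UI Authentication guide):

```properties
# Enable HTTP basic auth against the "c3" JAAS realm; only users in the
# listed roles may access the web interface.
confluent.controlcenter.rest.authentication.method=BASIC
confluent.controlcenter.rest.authentication.realm=c3
confluent.controlcenter.rest.authentication.roles=Administrators,Restricted
```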
Email Settings¶
These optional settings control the SMTP server and account used when an alert triggers the email action.
confluent.controlcenter.mail.enabled
- Enable email alerts. If this setting is false you will not be able to add email alert actions in the web user interface.
- Type: boolean
- Default: false
- Importance: low
confluent.controlcenter.mail.host.name
- Hostname of outgoing SMTP server.
- Type: string
- Default: localhost
- Importance: low
confluent.controlcenter.mail.port
- SMTP port open on confluent.controlcenter.mail.host.name.
- Type: integer
- Default: 587
- Importance: low
confluent.controlcenter.mail.from
- The ‘from’ address for emails sent from Control Center.
- Type: string
- Default: c3@confluent.io
- Importance: low
confluent.controlcenter.mail.bounce.address
- Override for the confluent.controlcenter.mail.from config to send message bounce notifications.
- Type: string
- Importance: low
confluent.controlcenter.mail.ssl.checkserveridentity
- Forces validation of the server’s certificate when using STARTTLS or SSL.
- Type: boolean
- Default: false
- Importance: low
confluent.controlcenter.mail.starttls.required
- Forces using STARTTLS.
- Type: boolean
- Default: false
- Importance: low
confluent.controlcenter.mail.username
- Username for username/password authentication. Authentication with your SMTP server will only be performed if this value is set.
- Type: string
- Importance: low
confluent.controlcenter.mail.password
- Password for username/password authentication.
- Type: string
- Importance: low
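Putting these together, an SMTP configuration with STARTTLS and authentication might look like this (host, addresses, and credentials are placeholders):

```properties
confluent.controlcenter.mail.enabled=true
confluent.controlcenter.mail.host.name=smtp.example.com
confluent.controlcenter.mail.port=587
confluent.controlcenter.mail.from=control-center@example.com
# Require STARTTLS and validate the SMTP server's certificate.
confluent.controlcenter.mail.starttls.required=true
confluent.controlcenter.mail.ssl.checkserveridentity=true
# Setting a username enables SMTP authentication.
confluent.controlcenter.mail.username=alerts
confluent.controlcenter.mail.password=<smtp-password>
```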
Kafka Encryption, Authentication, Authorization Settings¶
These settings control the authentication and authorization between Control Center and the Kafka cluster containing its data, including the Stream Monitoring and System Health metrics. You will need to configure these settings if you have configured your Kafka cluster with any security features.
Note that these are the standard Kafka authentication and authorization settings, prefixed with confluent.controlcenter.streams.
confluent.controlcenter.streams.security.protocol
- Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Importance: low
confluent.controlcenter.streams.ssl.keystore.location
- The location of the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.keystore.password
- The store password for the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.key.password
- The password of the private key in the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.truststore.location
- The location of the trust store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.truststore.password
- The password for the trust store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.sasl.mechanism
- SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
- Type: string
- Default: GSSAPI
- Importance: low
confluent.controlcenter.streams.sasl.kerberos.service.name
- The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config.
- Type: string
- Default: None
- Importance: low
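For example, connecting Control Center to a cluster secured with TLS encryption and client certificates would use the SSL keys from this section (paths and passwords are placeholders):

```properties
# Encrypt and mutually authenticate the connection to the Kafka brokers.
confluent.controlcenter.streams.security.protocol=SSL
confluent.controlcenter.streams.ssl.truststore.location=/var/ssl/kafka.truststore.jks
confluent.controlcenter.streams.ssl.truststore.password=<truststore-password>
confluent.controlcenter.streams.ssl.keystore.location=/var/ssl/kafka.keystore.jks
confluent.controlcenter.streams.ssl.keystore.password=<keystore-password>
confluent.controlcenter.streams.ssl.key.password=<key-password>
```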
HTTPS Settings¶
If you secure web access to Control Center with SSL, you may also need to configure the following parameters.
confluent.controlcenter.rest.ssl.keystore.location
- Used for https. Location of the keystore file to use for SSL. IMPORTANT: Jetty requires that the key’s CN, stored in the keystore, must match the FQDN.
- Type: string
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.password
- Used for https. The store password for the keystore file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.key.password
- Used for https. The password of the private key in the keystore file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.truststore.location
- Used for https. Location of the trust store. Required only to authenticate https clients.
- Type: string
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.truststore.password
- Used for https. The store password for the trust store file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.type
- Used for https. The type of keystore file.
- Type: string
- Default: “JKS”
- Importance: medium
confluent.controlcenter.rest.ssl.truststore.type
- Used for https. The type of trust store file.
- Type: string
- Default: “JKS”
- Importance: medium
confluent.controlcenter.rest.ssl.protocol
- Used for https. The SSL protocol used to generate the SslContextFactory.
- Type: string
- Default: “TLS”
- Importance: medium
confluent.controlcenter.rest.ssl.provider
- Used for https. The SSL security provider name. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: medium
confluent.controlcenter.rest.ssl.client.auth
- Used for https. Whether or not to require the https client to authenticate via the server’s trust store.
- Type: boolean
- Default: false
- Importance: medium
confluent.controlcenter.rest.ssl.enabled.protocols
- Used for https. A comma-separated list of protocols enabled for SSL connections. Leave blank to use Jetty’s defaults.
- Type: list
- Default: “” (Jetty’s default)
- Importance: medium
confluent.controlcenter.rest.ssl.keymanager.algorithm
- Used for https. The algorithm used by the key manager factory for SSL connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.trustmanager.algorithm
- Used for https. The algorithm used by the trust manager factory for SSL connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.cipher.suites
- Used for https. A comma-separated list of SSL cipher suites. Leave blank to use Jetty’s defaults.
- Type: list
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.endpoint.identification.algorithm
- Used for https. The endpoint identification algorithm used to validate the server hostname against the server certificate. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
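Combined with an https listener, a minimal HTTPS setup using only keys from this section might be (paths and passwords are placeholders; remember the keystore’s CN must match the FQDN):

```properties
confluent.controlcenter.rest.listeners=https://0.0.0.0:9021
confluent.controlcenter.rest.ssl.keystore.location=/var/ssl/c3.keystore.jks
confluent.controlcenter.rest.ssl.keystore.password=<keystore-password>
confluent.controlcenter.rest.ssl.key.password=<key-password>
```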
Internal Kafka Streams Settings¶
Because Control Center reads and writes data to Kafka, you can also change some producer and consumer configurations. We do not recommend changing these values unless advised by Confluent Support. These settings map 1:1 with the producer/consumer configs used internally by Confluent Control Center and all use the prefix confluent.controlcenter.streams.{producer,consumer}. Some examples of the values used internally are given below.
confluent.controlcenter.streams.num.stream.threads
- The number of threads to execute stream processing
- Type: integer
- Default: 8
- Importance: low
confluent.controlcenter.streams.consumer.session.timeout.ms
- The timeout used to detect a consumer failure
- Type: integer
- Default: 275000
- Importance: low
confluent.controlcenter.streams.consumer.request.timeout.ms
- The maximum amount of time the client will wait for the response of a request
- Type: integer
- Default: 285000
- Importance: low
confluent.controlcenter.streams.producer.retries
- Number of retries in case of production failure
- Type: integer
- Default: maximum integer (effectively infinite)
- Importance: low
confluent.controlcenter.streams.producer.retry.backoff.ms
- Time to wait before retrying in case of production failure
- Type: long
- Default: 100
- Importance: low
confluent.controlcenter.streams.producer.compression.type
- Compression type to use on internal topic production
- Type: string
- Default: lz4
- Importance: low
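As an illustration of the prefixing scheme, the internal producer and stream-thread settings above would be overridden like this (the values shown are simply the documented defaults):

```properties
# Standard Kafka Streams / producer configs, namespaced for Control Center.
confluent.controlcenter.streams.num.stream.threads=8
confluent.controlcenter.streams.producer.compression.type=lz4
confluent.controlcenter.streams.producer.retry.backoff.ms=100
```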
Internal Command Settings¶
The command topic is used to store Control Center’s internal configuration data. It reuses the defaults/overrides for Kafka Streams, but allows the following overrides.
confluent.controlcenter.command.topic
- Topic used to store Control Center configuration
- Type: string
- Default: “_confluent-command”
- Importance: low
confluent.controlcenter.command.topic.replication
- Replication factor for the command topic. Reducing this value is not recommended except in a development environment.
- Type: int
- Default: 3
- Importance: low