Upgrade

The Confluent Platform will be referred to as “CP,” and Apache Kafka will be referred to as “Kafka.” If you are upgrading from a 1.0.x release to 2.0.x, the process is slightly different from upgrading from 2.0.0 to 2.0.1, because the major release includes changes to the inter-broker protocol. These differences are noted as special steps in the broker upgrade instructions below.

Preparation

Consider the following guidelines in preparation for the upgrade.

  • Always back up all configuration files before upgrading. This includes /etc/kafka, /etc/kafka-rest, /etc/schema-registry and /etc/camus (see the example following this list).
  • Read through the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process. Put differently, don’t start working through this guide on a live cluster. Read the guide entirely, make a plan, then execute the plan.
  • Give careful consideration to the order in which components are upgraded. Kafka is backward compatible, which means that clients from 0.8.x releases (CP 1.0.x) will work with brokers from 0.9.x releases (CP 2.0.x), but not vice versa. You must therefore plan upgrades such that all brokers are upgraded before clients. Clients include any application that uses Kafka producers or consumers, command line tools, Camus, Schema Registry and Rest Proxy.
    • IMPORTANT: although not recommended, some deployments have clients co-located with brokers (on the same node). In these cases, both the broker and clients share the same packages. This is problematic because all brokers must be upgraded before clients are upgraded. Pay careful attention to this when upgrading.
  • Kafka 0.9.0.0 contains breaking changes and deprecations with respect to previous major versions. Refer to the Kafka documentation to understand how they affect applications that use Kafka.
  • Upgrading from Kafka 0.9.0.0 to 0.9.0.1, on the other hand, is straightforward. Refer to the Kafka documentation for the short list of notable changes.
  • Read the Confluent Platform 2.0.1 Release Notes. They contain not only information about noteworthy features, but also changes to configurations that may impact your upgrade.
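
A hedged sketch of the backup step mentioned above, assuming the default /etc locations and an example destination under /tmp (use whatever backup location suits your environment):

    # Create a timestamped archive of the Confluent configuration directories
    $ sudo tar czf /tmp/confluent-config-backup-$(date +%Y%m%d).tar.gz \
        /etc/kafka /etc/kafka-rest /etc/schema-registry /etc/camus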

Step-by-step Guide

  1. Determine if clients are co-located with brokers. If they are, ensure that no client processes are upgraded until all Kafka brokers have been upgraded.
  2. Decide whether to perform a rolling upgrade or a downtime upgrade. Confluent Platform supports both rolling upgrades (upgrade one broker at a time to avoid cluster downtime) and downtime upgrades (take down the entire cluster, upgrade it, and bring everything back up).
  3. Upgrade all Kafka brokers (more below).
  4. Upgrade Schema Registry, Rest Proxy and Camus (more below).
  5. If it makes sense, build applications that use Kafka producers and consumers against the new 0.9.0.1 libraries and deploy the new versions. See Application Development documentation for more details on using the 0.9.0.1 libraries.

Upgrade All Kafka Brokers

You can upgrade Kafka brokers from CP 1.0.x to CP 2.0.1 by upgrading the installed packages and restarting the respective processes. More on this later.

In a rolling upgrade scenario, upgrade one Kafka broker at a time. In a downtime upgrade scenario, take the entire cluster down, upgrade each Kafka broker, then start the cluster.
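
Before moving on to the next broker during a rolling upgrade, it is worth confirming that the cluster is healthy. A minimal sketch, assuming ZooKeeper is reachable at localhost:2181 (adjust the connection string for your environment):

    # An empty result means no partitions are under-replicated and it is
    # safe to proceed to the next broker
    $ kafka-topics --zookeeper localhost:2181 --describe --under-replicated-partitions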

Special steps for 1.0.x to 2.0.x upgrade: In a rolling upgrade scenario, upgrading from CP 1.0.x (Kafka 0.8.0, 0.8.1.X or 0.8.2.X) to CP 2.0.1 (Kafka 0.9.0.1) requires special steps, because Kafka 0.9.0.0 included a change to the inter-broker protocol. Follow the steps below for a rolling upgrade (a sketch of the configuration change appears after the list):

  1. Modify server.properties on all Kafka brokers by adding/changing the following property: inter.broker.protocol.version=0.8.2.X (you can change this while the brokers are still running).
  2. Upgrade each Kafka broker, one at a time (see below).
  3. Once all Kafka brokers have been upgraded, modify server.properties again by changing the following property: inter.broker.protocol.version=0.9.0.1 (0.9.0.1 instead of 0.8.2.X).
  4. Restart each Kafka broker, one at a time, to apply the configuration change.
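
A sketch of the configuration change described in steps 1 and 3, assuming the default configuration file location /etc/kafka/server.properties:

    # Step 1: before upgrading any packages, pin the protocol to the version
    # currently in use on every broker (0.8.2.X as written in the step above)
    inter.broker.protocol.version=0.8.2.X

    # Step 3: once all brokers run the upgraded packages, bump the protocol
    # version, then perform the rolling restart described in step 4
    inter.broker.protocol.version=0.9.0.1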

Instructions for both deb packages and rpm packages are below. For zip and tar archives, the old archive directory can simply be deleted after the new archive has been unpacked and any old configuration files have been copied over.
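
For example, a hedged sketch of an archive-based upgrade, assuming the archives are unpacked under /opt and using hypothetical directory names:

    # Unpack the new archive alongside the existing installation
    $ tar xzf confluent-2.0.1-2.11.7.tar.gz -C /opt

    # Copy over any customized configuration files from the old installation
    $ cp /opt/confluent-1.0.0/etc/kafka/server.properties /opt/confluent-2.0.1/etc/kafka/

    # Once the new installation is verified, the old directory can be deleted
    $ rm -rf /opt/confluent-1.0.0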

deb packages via apt

  1. Backup all configuration files from /etc, including /etc/kafka, /etc/kafka-rest, /etc/schema-registry and /etc/camus.

  2. Stop the services and remove the existing packages and their dependencies. As mentioned above, this can be done on one server at a time for a rolling upgrade.

    # The example below removes the Kafka package (for Scala 2.10.4)
    $ sudo kafka-server-stop
    $ sudo apt-get remove confluent-kafka-2.10.4
    
    # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
    $ sudo apt-get autoremove confluent-platform-2.10.4
    
  3. Remove the repository files of the previous version

    $ sudo add-apt-repository -r "deb http://packages.confluent.io/deb/1.0 stable main"
    
  4. Add the 2.0 repository to /etc/apt/sources.list

    $ sudo add-apt-repository "deb http://packages.confluent.io/deb/2.0 stable main"
    
  5. Refresh repository metadata

    $ sudo apt-get update
    
  6. Install the new version. Note that if you have modified the configuration files, apt will prompt you to resolve the conflicts; keep your original configuration (see the note after this list for a non-interactive option).

    $ sudo apt-get install confluent-platform-2.11.7
    
    
    # Or install the packages you need one by one. For example, to install just Kafka:
    $ sudo apt-get install confluent-kafka-2.11.7
    
  7. Start services.

    $ sudo kafka-server-start -daemon /etc/kafka/server.properties
    
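To avoid the interactive prompt in step 6 and keep your existing configuration files automatically, dpkg can be told to retain the currently installed versions. A minimal sketch:

    $ sudo apt-get install -o Dpkg::Options::="--force-confold" confluent-platform-2.11.7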

rpm packages via yum

  1. Backup all configuration files from /etc, including /etc/kafka, /etc/kafka-rest, /etc/schema-registry and /etc/camus.

  2. Stop the services and remove the existing packages and their dependencies. As mentioned above, this can be done on one server at a time for a rolling upgrade.

    # The example below removes the Kafka package (for Scala 2.10.4)
    $ sudo kafka-server-stop
    $ sudo yum remove confluent-kafka-2.10.4
    
    # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
    $ sudo yum autoremove confluent-platform-2.10.4
    
  3. Remove the repository files of the previous version

    $ sudo rm /etc/yum.repos.d/confluent.repo
    
  4. Add the repository to your /etc/yum.repos.d/ directory in a file named confluent-2.0.repo.

    [confluent-2.0]
    name=Confluent repository for 2.0.x packages
    baseurl=http://packages.confluent.io/rpm/2.0
    gpgcheck=1
    gpgkey=http://packages.confluent.io/rpm/2.0/archive.key
    enabled=1
    
  5. Refresh repository metadata

    $ sudo yum clean all
    
  6. Install the new version. Note that yum may overwrite your existing configuration files, so you will need to restore them from the backup taken in step 1 after installing the packages (see the note after this list):

    $ sudo yum install confluent-platform-2.11.7
    
    # Or install the packages you need one by one. For example, to install just Kafka:
    $ sudo yum install confluent-kafka-2.11.7
    
  7. Start services.

    $ sudo kafka-server-start -daemon /etc/kafka/server.properties
    
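If yum replaced any of your configuration files in step 6, restore them from the backup taken in step 1 before starting services (step 7). A sketch, reusing the example backup archive from the Preparation section (adjust the path and filename to match your actual backup):

    # Restore /etc/kafka from the backup archive; tar stores the paths without
    # the leading slash, so extract relative to /
    $ sudo tar xzf /tmp/confluent-config-backup-<date>.tar.gz -C / etc/kafka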

Upgrade Schema Registry, Rest Proxy and Camus

The Schema Registry, Rest Proxy, Camus, and any other client (e.g., producer or consumer) can be upgraded once all Kafka brokers have been upgraded.

To upgrade these services, follow the same steps above to upgrade packages (backup config files, remove packages, install upgraded packages, etc.). Then, restart the client processes.
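
For example, a minimal sketch of restarting the Schema Registry and the Rest Proxy after their packages have been upgraded, assuming the default configuration file locations shipped with the packages:

    # Restart the Schema Registry
    $ sudo schema-registry-stop
    $ sudo schema-registry-start /etc/schema-registry/schema-registry.properties

    # Restart the Rest Proxy
    $ sudo kafka-rest-stop
    $ sudo kafka-rest-start /etc/kafka-rest/kafka-rest.properties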

As mentioned above, if it makes sense, build applications that use Kafka producers and consumers against the new 0.9.0.1 libraries and deploy the new versions. See Application Development documentation for more details on using the 0.9.0.1 libraries.