
Creating a Topic with Custom Configurations in Kafka

Hands-On Lab



Chad Crowell

DevOps Training Architect II in Content





There are many custom configurations we can apply to topics in Kafka. In this hands-on lab, we'll create a topic, apply a custom configuration to it, and then test that configuration by altering the state of the partitions tied to the topic.



The Scenario

Broker failure is a common scenario in Kafka cluster administration, and it can cause data loss: if there aren't enough in-sync replicas, messages may be lost forever. To guard against this, we've been tasked with enforcing a multiple-replica policy and ensuring that no one else in the organization can produce to topics that fall below it. Our instructions for this hands-on lab are:

  • Create a topic with a replication factor of 3 and 3 partitions.
  • Add a custom configuration to the topic that ensures an error will occur if the replication factor falls below 3.
  • Test that the policy is enforced by changing the replica count to 1 and opening a producer that sends messages to the Kafka cluster.

Logging In

Use the credentials provided in the hands-on lab page to log in to the provided server.

Set Up the Cluster

Get into the directory where our configuration file is, with cd content-kafka-deep-dive. Then, we can build the Kafka cluster:

docker-compose up -d --build

Once that's done, we can get back into our home directory with cd.

Now let's install Java:

sudo apt install default-jdk

If we run ls, we'll see a kafka_2.12-2.2.0.tgz file sitting there. Extract it, then rename the resulting directory to, simply, kafka:

tar -xvf kafka_2.12-2.2.0.tgz && mv kafka_2.12-2.2.0/ kafka

Then get into that directory with cd kafka.

Create a Topic with Three Partitions and a Replication Factor of 3

Create a topic named transaction:

bin/kafka-topics.sh --create \
--zookeeper localhost:2181 \
--topic transaction \
--replication-factor 3 \
--partitions 3

Add a Custom Configuration to the Topic

This is how we add the custom configuration min.insync.replicas=3 to the topic transaction:

bin/kafka-configs.sh --alter \
--zookeeper localhost:2181 \
--entity-type topics \
--entity-name transaction \
--add-config min.insync.replicas=3

We can verify that the topic configuration applied with this:

bin/kafka-configs.sh --describe \
--zookeeper localhost:2181 \
--entity-type topics \
--entity-name transaction

That should show that min.insync.replicas=3 is what's set.

Change the Replica Count for the Topic

First off, let's create a JSON file named replicacount.json with these contents:

 {"version": 1,
  "partitions": [{"topic": "transaction", "partition": 0,
  "replicas": [2]}]}
Now we can execute the replica count change using that JSON file:

bin/kafka-reassign-partitions.sh --execute \
--zookeeper localhost:2181 \
--reassignment-json-file replicacount.json

Once we get a "Successfully started..." message, let's describe the topic to see the replica change:

bin/kafka-topics.sh --describe \
--zookeeper localhost:2181 \
--topic transaction

In our output, we should see that Partition 0 only has a replica on Broker 2.
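The describe output packs several fields onto each partition line. A small helper function (hypothetical, assuming the usual whitespace-separated `Topic: ... Partition: ... Replicas: ...` layout of that output) can pull out just the replica lists:

```shell
# Hypothetical helper: prints each partition's replica list from the
# output of kafka-topics.sh --describe. It scans every partition line
# for the "Replicas:" field and prints the value that follows it.
describe_replicas() {
  awk '/Partition:/ {
    for (i = 1; i <= NF; i++)
      if ($i == "Replicas:") print "partition replicas:", $(i + 1)
  }'
}
# Usage: bin/kafka-topics.sh --describe --zookeeper localhost:2181 \
#          --topic transaction | describe_replicas
```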

Run a Producer to Get an Error Message

Open a producer and send some messages to our topic:

bin/kafka-console-producer.sh --broker-list localhost:9092 \
--topic transaction \
--producer-property acks=all

If we type a few messages in (just a blurb of text at the > prompt, followed by pressing the Enter key), we'll soon get errors saying NOT_ENOUGH_REPLICAS.
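Why the error appears: with acks=all, the broker acknowledges a write only while the number of in-sync replicas is at least min.insync.replicas. The following is a toy shell sketch of that comparison, not real Kafka code:

```shell
# Toy model of the broker's check, not a Kafka API: a write with
# acks=all is accepted only if the in-sync replica count is at least
# min.insync.replicas.
can_ack() {  # can_ack <in_sync_replicas> <min_insync_replicas>
  [ "$1" -ge "$2" ]
}

can_ack 3 3 && echo "acknowledged"          # all three replicas in sync
can_ack 1 3 || echo "NOT_ENOUGH_REPLICAS"   # after our reassignment
```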


While errors are usually bad, this one isn't. It means that our custom setup is behaving the way we wanted it to. Congratulations!