Apache Kafka Commands : Cheat Sheet
Hi folks, here is a handy list of Kafka commands.
Kafka installation directory : /apps/kafka/live/
Server IP : use a DNS name or an IP (my server: 192.168.40.117), or localhost / the loopback IP (127.0.0.1)
zookeeper port : 2181 (default port)
broker port : 9092 (default port)
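A quick way to verify the broker is reachable on that host and port (a sketch, assuming the server and default port above):
bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.40.117:9092   # prints supported API versions if the broker answers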
1.) Start
zookeeper :
bin/zookeeper-server-start.sh config/zookeeper.properties
Kafka broker :
bin/kafka-server-start.sh config/server.properties
Kafka Connect (standalone, e.g. for Debezium) :
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties
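The second properties file defines the connector itself (here the bundled file source; a Debezium connector would supply its own properties file). Once connect-standalone is running, you can sanity-check it through the Connect REST API - a sketch assuming the default rest.port of 8083 and a hypothetical connector name:
curl -s http://localhost:8083/connectors                      # list deployed connectors
curl -s http://localhost:8083/connectors/my-connector/status  # check one connector's status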
2.) List, Create, Describe, Alter & Delete Topics
List All Topics
bin/kafka-topics.sh --zookeeper 192.168.40.117:2181 --list
List topics with overridden configuration (different from default) :
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topics-with-overrides
Create topic : **replication factor must be <= number of brokers
bin/kafka-topics.sh --zookeeper 192.168.40.117:2181 --topic first_topic --create --partitions 2 --replication-factor 1
Create topic with compression algorithm & retention time :
bin/kafka-topics.sh --zookeeper localhost:2181 --topic first_topic --create --partitions 6 --config compression.type=gzip --config retention.ms=86400000 --replication-factor 2
Produce with acknowledgement (acks) property :
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic first_topic --producer-property acks=all
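acks=all makes the producer wait for all in-sync replicas, which only guarantees durability together with the topic's min.insync.replicas setting. A sketch of raising it for first_topic (assuming the topic has replication factor >= 2):
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name first_topic --alter --add-config min.insync.replicas=2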
Describe topic :
bin/kafka-topics.sh --zookeeper localhost:2181 --topic first_topic --describe
Alter retention time :
bin/kafka-configs.sh --zookeeper 10.10.86.62:2181 --entity-type topics --entity-name ng-fss-files --alter --add-config retention.ms=7200000
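To confirm the override took effect, the same tool can list the topic's non-default configs (assuming the same ZooKeeper address):
bin/kafka-configs.sh --zookeeper 10.10.86.62:2181 --entity-type topics --entity-name ng-fss-files --describe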
Latest offset (--time -1) :
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.10.86.62:10092,10.10.86.55:10092 --topic ng-fss-files-staging --time -1
Earliest offset (--time -2) :
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.10.86.62:10092,10.10.86.55:10092 --topic ng-fss-files-staging --time -2
Flush (delete) all messages from a topic : assuming 2 partitions.
** Altering the retention time to 1 ms and reverting to the previous value once the messages are flushed should also work, as per other blogs (no luck for me); see the sketch after step 2.
Step 1.) Create a JSON file, e.g. purge_first_topic_config_flush.json :
{
  "partitions": [
    {
      "topic": "first_topic",
      "partition": 0,
      "offset": -1
    },
    {
      "topic": "first_topic",
      "partition": 1,
      "offset": -1
    }
  ],
  "version": 1
}
Step 2.) Execute kafka-delete-records.sh with the JSON file :
bin/kafka-delete-records.sh --bootstrap-server 10.10.86.62:10092 --offset-json-file purge_first_topic_config_flush.json
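For reference, the retention-time trick mentioned above is just two config changes (a sketch: set a tiny retention, wait for the broker to delete the old segments, then either re-add the previous retention.ms value or delete the override to fall back to the broker default):
bin/kafka-configs.sh --zookeeper 10.10.86.62:2181 --entity-type topics --entity-name first_topic --alter --add-config retention.ms=1
# wait for the log cleanup to run, then revert
bin/kafka-configs.sh --zookeeper 10.10.86.62:2181 --entity-type topics --entity-name first_topic --alter --delete-config retention.ms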
3.) Produce / Consume
Produce :
bin/kafka-console-producer.sh --broker-list 192.168.40.117:9092 --topic first_topic
Consume :
bin/kafka-console-consumer.sh --bootstrap-server 10.10.86.62:10092 --topic first_topic
Consume from beginning : **ordering is maintained only at the partition level, so reading a multi-partition topic from the beginning does not guarantee overall order (see the keyed-producer sketch below)
bin/kafka-console-consumer.sh --bootstrap-server 10.10.86.62:10092 --topic first_topic --from-beginning
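Since ordering only holds within a partition, a common trick to keep related messages in order is to produce them with a key (the same key always lands in the same partition). A sketch using the console tools' key properties (the ':' separator is my choice):
bin/kafka-console-producer.sh --broker-list 10.10.86.62:10092 --topic first_topic --property parse.key=true --property key.separator=:
bin/kafka-console-consumer.sh --bootstrap-server 10.10.86.62:10092 --topic first_topic --from-beginning --property print.key=true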
Consumer group :
bin/kafka-console-consumer.sh --bootstrap-server 10.10.86.62:10092 --topic first_topic --group first-topic-consumer
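To check where that group currently is and how far it lags on each partition (assuming the same bootstrap server):
bin/kafka-consumer-groups.sh --bootstrap-server 10.10.86.62:10092 --describe --group first-topic-consumer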