
Apache Kafka: how to take a backup of Kafka topics and restore them?

Tags: apache-kafka, backup, restore, kafka-topic

I need to back up all the topics in Kafka, each to a file named after its topic, and restore a topic when a user asks for it. Note: this script has to run in a Kerberized environment.
kafkabackup.sh

monyear=`date | awk '{print $2$6}'`
dat=`date| awk '{print $2$3$6}'`
export BACKUPDIR=/root/backup/$monyear
mkdir -p $BACKUPDIR
mkdir -p $BACKUPDIR/$dat
cd $BACKUPDIR
BKDIR=$BACKUPDIR/$dat
##Log into Kafka

##Get topics from Kafka Broker

kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/node1.localdomaino@domain.co
cd /usr/hdp/current/kafka-broker/bin/
export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_client_jaas.conf"
./kafka-topics.sh --zookeeper adminnode.localdomain:2181 --list > $BKDIR/listtopics.txt

##Remove if any mark of deletion topics exists
sed -i.bak '/deletion/d' $BKDIR/listtopics.txt

## Starting kill script in parallel 

bash checkandkill.sh& 

##Reading the file contents for topics
for line in $(cat $BKDIR/listtopics.txt)
do
    echo $line
    ./test.sh --bootstrap-server node1.localdomain:6668 --topic $line  --consumer.config /home/kafka/conf.properties --from-beginning --security-protocol SASL_SSL > $BKDIR/$line
done

##Delete empty files

/usr/bin/find $BKDIR -size 0 -delete

## Killing checkandkill daemon

ps -ef |grep -i checkandkill.sh| grep -v grep | awk '{print $2}' | xargs kill

exit
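
test.sh is not shown in the question; presumably it is a thin wrapper around the console consumer that ships with the HDP Kafka broker. A minimal sketch of such a wrapper, assuming it does nothing more than forward its arguments (the path is the standard HDP layout):

#!/usr/bin/env bash
# Hypothetical test.sh: pass every argument straight through to the stock
# console consumer bundled with the HDP Kafka broker.
exec /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh "$@"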
While the console consumer is running it keeps waiting for more messages to arrive, so we need to kill the process.
checkandkill.sh

# checkandkill.sh waits for the backup consumers started by kafkabackup.sh,
# then kills each topic's consumer after giving it a minute to drain.
# Note: the list is read from /root/backup/listtopics.txt, while kafkabackup.sh
# writes it to $BKDIR/listtopics.txt; the two paths need to point to the same file.
sleep 0.5m
for line in $(cat /root/backup/listtopics.txt)
do
    echo $line
    sleep 1m
    ps -ef | grep -i $line | grep -v grep | awk '{print $2}' | xargs kill
done
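
As an aside, if the console consumer behind test.sh supports --timeout-ms (most Kafka releases do), each consumer can exit on its own once no message has arrived for a while, which removes the need for the kill daemon. A sketch of the backup loop rewritten under that assumption:

##Reading the file contents for topics, letting each consumer stop by itself
for line in $(cat $BKDIR/listtopics.txt)
do
    echo $line
    # --timeout-ms makes the consumer exit once no message has arrived for 60 s,
    # so the separate checkandkill.sh daemon is not needed (assumes the flag
    # exists in this Kafka build).
    ./test.sh --bootstrap-server node1.localdomain:6668 --topic $line --consumer.config /home/kafka/conf.properties --from-beginning --security-protocol SASL_SSL --timeout-ms 60000 > $BKDIR/$line
done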

I need your help to complete the restore script.
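
For what it's worth, here is a minimal restore sketch in the same style as kafkabackup.sh. It assumes the stock console producer is used and that this HDP build accepts --security-protocol the same way the consumer above does; if not, put security.protocol=SASL_SSL in the producer config file instead. It also only replays message values as newline-delimited text, so keys, partitioning, and messages containing raw newlines are not preserved.

kafkarestore.sh

# Hypothetical restore sketch: replay one backup file into one topic.
# Usage: ./kafkarestore.sh <topic> <backup-file>
TOPIC=$1
BACKUPFILE=$2
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/node1.localdomaino@domain.co
cd /usr/hdp/current/kafka-broker/bin/
export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_client_jaas.conf"
# Each line of the backup file becomes one message on the target topic.
./kafka-console-producer.sh --broker-list node1.localdomain:6668 --topic $TOPIC --producer.config /home/kafka/conf.properties --security-protocol SASL_SSL < $BACKUPFILE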

Can you do this with the standard consumer and producer? If so, have you considered MirrorMaker or Confluent Replicator to send the data to a standby Kafka cluster? That would save you a lot of restore work. Another option to consider is disk snapshots: if you can shut the cluster (or a standby mirror cluster) down, you can take disk snapshots periodically. The important point is that you have to stop Kafka for this to really work, and you also need to back up Zookeeper for a restore to even be possible. In any case, if you dump the raw binary data from the topics into HDFS on your HDP stack (for example with Kafka Connect, Spark, NiFi, etc.), then restoring the data is just a matter of reading those bytes back into the topics.

Thanks for the comments. We don't have many options here, so we are handling it according to our own requirements. It would be great if someone could suggest how to restore in the same way.
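
For reference, the MirrorMaker route suggested in the comments amounts to a single long-running command with the legacy MirrorMaker tool; source.properties and target.properties below are placeholder consumer/producer configs (including the SASL_SSL/Kerberos settings) for the live and standby clusters:

# Hypothetical MirrorMaker 1 invocation: continuously copy every topic from the
# source cluster to a standby cluster. source.properties / target.properties are
# placeholders for the consumer and producer configs of the two clusters.
/usr/hdp/current/kafka-broker/bin/kafka-mirror-maker.sh --consumer.config source.properties --producer.config target.properties --whitelist ".*"

With a standby cluster in place, "restore" becomes repointing clients at the standby (or mirroring back) rather than replaying flat files, which is why the comment suggests it would save most of the restore work.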