
Hadoop: Kafka HDFS Connector not showing any data


I got the following log output:

 WorkerSinkTask{id=hdfs-test-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2018-07-05 10:58:43,913] INFO [Consumer clientId=consumer-4, groupId=connect-hdfs-test] Discovered group coordinator broker:19092 (id: 2147482646 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-07-05 10:58:43,919] INFO [Consumer clientId=consumer-4, groupId=connect-hdfs-test] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-07-05 10:58:43,920] INFO [Consumer clientId=consumer-4, groupId=connect-hdfs-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-07-05 10:58:46,950] INFO [Consumer clientId=consumer-4, groupId=connect-hdfs-test] Successfully joined group with generation 9 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-07-05 10:58:46,953] INFO [Consumer clientId=consumer-4, groupId=connect-hdfs-test] Setting newly assigned partitions [hdfs-test-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-07-05 10:58:46,967] INFO [Consumer clientId=consumer-4, groupId=connect-hdfs-test] Resetting offset for partition hdfs-test-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2018-07-05 10:58:46,978] INFO Started recovery for topic partition hdfs-test-0 (io.confluent.connect.hdfs.TopicPartitionWriter)
[2018-07-05 10:58:47,004] INFO Finished recovery for topic partition hdfs-test-0 (io.confluent.connect.hdfs.TopicPartitionWriter)
But when I check whether the data is available in HDFS, I find only this:

 $ hadoop fs -ls /topics/
 Found 1 items
 drwxr-xr-x   - root supergroup          0 2018-07-05 10:10 /topics/+tmp

What should I do? Can anyone help me?

Comments:

- It's unclear what your connect properties are doing, or whether you are producing any data at all to be sunk into HDFS.
- (OP) Yes, I produced data to the Kafka topic. I added an HDFS sink using Confluent's Control Center. This is how I produced the data:

 /usr/bin/kafka-avro-console-producer \
   --broker-list broker:19092 --topic elk-test \
   --property schema.registry.url= \
   --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'

- A single value will not automatically be sent to HDFS, because Connect buffers events in memory. You need a more continuous stream. Use the console producer, or Java (or any code), to push more data.
- (OP) My point is that Connect doesn't show any errors.
- I'm not sure what you think the error is, but the connector doesn't send one message at a time, and it isn't real-time... Hadoop doesn't like small files.
- (OP) Yes, it works fine now. I sent many streaming records. Thank you very much!
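The behavior described in the comments is driven by the connector's flush settings: the HDFS sink stages records in a temporary file under `/topics/+tmp` and only commits a final, visible file once `flush.size` records have accumulated for a partition (or, if configured, a rotation interval elapses). A minimal sketch of a sink configuration that commits after a small number of records, assuming the Connect REST API's JSON format (the connector name, topic, and HDFS URL below are placeholders, not values from the question):

```
{
  "name": "hdfs-test",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "topics": "hdfs-test",
    "hdfs.url": "hdfs://namenode:8020",
    "format.class": "io.confluent.connect.hdfs.avro.AvroFormat",
    "flush.size": "3",
    "rotate.interval.ms": "60000"
  }
}
```

With `flush.size` set to 3, a committed file should appear under `/topics/<topic>/` after three records per partition; `rotate.interval.ms` additionally rotates files on a time basis so slow topics still get flushed. For quick testing, lowering `flush.size` (or producing a steady stream of records, as the OP eventually did) is usually enough to make data appear.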