
Java: Unable to fetch records from SAP HANA using Kafka Connect


I am new to Kafka Connect and I am trying to copy/fetch data from SAP S/4HANA and save it on HDFS using Kafka Connect. So far I have tried a number of things by following these links:

My configuration looks like this:

connect-standalone.properties

bootstrap.servers=10.0.4.146:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=true
internal.value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/home/user1/ngdbc.jar,/home/user1/kafka-connect-hana-1.0-SNAPSHOT.jar
hana-source.properties

name=saptohive-source
connector.class=com.sap.kafka.connect.source.hana.HANASourceConnector
tasks.max=1
topics=saptohive
connection.url=jdbc:sap://34.169.244.241:30041/
connection.user="MYUSER"
connection.password="MYPASS"
saptohive.table.name="SAPHANADB"."MARA"
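
Independently of Connect, it can help to confirm that the JDBC endpoint, credentials, and table above are actually reachable. Below is a minimal sketch (not part of the original setup) that uses the same ngdbc.jar already referenced on plugin.path; the URL, schema, and table are taken from the config, and MYUSER/MYPASS are the placeholders and need real values:

HanaJdbcCheck.java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HanaJdbcCheck {
    public static void main(String[] args) throws Exception {
        // SAP HANA JDBC driver shipped in ngdbc.jar
        Class.forName("com.sap.db.jdbc.Driver");

        // Same endpoint as connection.url in hana-source.properties
        String url = "jdbc:sap://34.169.244.241:30041/";
        String user = "MYUSER";      // placeholder from the question
        String password = "MYPASS";  // placeholder from the question

        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT COUNT(*) FROM \"SAPHANADB\".\"MARA\"")) {
            rs.next();
            System.out.println("Rows in SAPHANADB.MARA: " + rs.getLong(1));
        }
    }
}

Compile and run it with ngdbc.jar on the classpath; if this check fails, the source connector cannot read the table either.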
hdfs-sink.properties

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=saptohive
hdfs.url=hdfs://10.0.1.244:8020/warehouse/tablespace/external/hive/
flush.size=3
hive.integration=true
hive.metastore.uris=thrift://10.0.1.244:9083/
hive.database=dev_ingestion_raw
schema.compatibility=BACKWARD
Error

I am not sure what exactly the problem is. The whole process gets stuck at:

 (io.confluent.connect.hdfs.HdfsSinkConnectorConfig:223)
[2020-08-31 10:58:36,186] INFO AvroDataConfig values:
        schemas.cache.config = 1000
        enhanced.avro.schema.support = false
        connect.meta.data = true
 (io.confluent.connect.avro.AvroDataConfig:170)
[2020-08-31 10:58:36,190] INFO Hadoop configuration directory  (io.confluent.connect.hdfs.DataWriter:93)
[2020-08-31 10:58:36,467] WARN Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (org.apache.hadoop.util.NativeCodeLoader:62)
[2020-08-31 10:58:37,326] INFO Trying to connect to metastore with URI thrift://10.0.1.244:9083/ (hive.metastore:376)
[2020-08-31 10:58:37,362] INFO Connected to metastore. (hive.metastore:472)
[2020-08-31 10:58:37,437] INFO Sink task WorkerSinkTask{id=hdfs-sink-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:260)
[2020-08-31 10:58:37,523] INFO Discovered coordinator 10.0.1.33:9092 (id: 2147483646 rack: null) for group connect-hdfs-sink. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:607)
[2020-08-31 10:58:37,536] INFO Revoking previously assigned partitions [] for group connect-hdfs-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:419)
[2020-08-31 10:58:37,537] INFO (Re-)joining group connect-hdfs-sink (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:442)
[2020-08-31 10:58:37,547] INFO Successfully joined group connect-hdfs-sink with generation 3 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:409)
[2020-08-31 10:58:37,550] INFO Setting newly assigned partitions [saptohive-0] for group connect-hdfs-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:262)
[2020-08-31 10:58:37,562] INFO Started recovery for topic partition saptohive-0 (io.confluent.connect.hdfs.TopicPartitionWriter:208)
[2020-08-31 10:58:37,570] INFO Finished recovery for topic partition saptohive-0 (io.confluent.connect.hdfs.TopicPartitionWriter:223)
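
The log shows the sink task joining its consumer group and finishing recovery, but nothing being written afterwards. One way to narrow this down (a debugging sketch, not from the original post) is to check whether the source connector has published any records to the saptohive topic at all, using a throwaway consumer against the same broker; this assumes a 2.x kafka-clients jar on the classpath:

TopicCheck.java

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.0.4.146:9092"); // same broker as the worker config
        props.put("group.id", "saptohive-debug");          // arbitrary throwaway group, not the connect group
        props.put("auto.offset.reset", "earliest");        // read the topic from the beginning
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("saptohive"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            System.out.println("Records found: " + records.count());
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.value()); // JSON envelope, since value.converter.schemas.enable=true
            }
        }
    }
}

If this prints zero records, the problem is on the HANA source side rather than in the HDFS sink.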
All the files, i.e. connect-standalone.properties, hana-source.properties and hdfs-sink.properties, as well as the two .jar files, ngdbc.jar and kafka-connect-hana-1.0-SNAPSHOT.jar, are in the same directory.

The command I am using is:

connect-standalone connect-standalone.properties hana-source.properties hdfs-sink.properties
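
While this worker is running, the Connect REST interface reports whether both connectors and their tasks are actually in RUNNING state and includes any task-level stack traces. A minimal sketch for querying it (an illustration, assuming the default standalone REST port 8083 on the local machine):

ConnectorStatus.java

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectorStatus {
    public static void main(String[] args) throws Exception {
        // Status endpoints for the two connectors defined above; 8083 is the default REST port
        String[] endpoints = {
                "http://localhost:8083/connectors/saptohive-source/status",
                "http://localhost:8083/connectors/hdfs-sink/status"
        };
        for (String endpoint : endpoints) {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                System.out.println(endpoint);
                in.lines().forEach(System.out::println); // JSON with connector and task states
            }
        }
    }
}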

I need to know what I am doing wrong. Any help would be appreciated. Thank you.

As it turned out, the Kafka and Scala versions in our cluster were older than the ones the SAP HANA Kafka connector was built against. We created a new cluster with Kafka version 2.x.x and it worked.

Did you compile this project yourself? What Java + Scala versions did you use?

My CentOS machine has Scala 2.11.8 and Java 1.8.

Which Kafka version are you running, and is it the Scala 2.11 or the 2.12 build? Usually a NoClassDefFoundError: scala/runtime points to a Scala version problem.

The whole HDP cluster is actually using Scala 2.11.8.

OK, but what I am asking is whether you built the GitHub repo with those same versions.
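
For reference, a quick way to confirm which Kafka client build a worker's classpath actually carries, and which Scala build the broker-side jars use, is a small check like the sketch below (an illustration, assuming it is run with the Kafka distribution's libs/ jars on the classpath; the core jar name, e.g. kafka_2.11-1.0.0.jar versus kafka_2.12-2.5.0.jar, encodes the Scala version):

VersionCheck.java

import org.apache.kafka.common.utils.AppInfoParser;

public class VersionCheck {
    public static void main(String[] args) {
        // Version of the kafka-clients jar on the classpath
        System.out.println("kafka-clients version: " + AppInfoParser.getVersion());

        // The Scala build is encoded in the kafka core jar name; if that jar is on the
        // classpath, its location shows which build is being used.
        try {
            Class<?> kafkaCore = Class.forName("kafka.Kafka");
            System.out.println("kafka core jar: "
                    + kafkaCore.getProtectionDomain().getCodeSource().getLocation());
        } catch (ClassNotFoundException e) {
            System.out.println("kafka core (Scala) jars are not on this classpath");
        }
    }
}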