
No suitable driver found for jdbc:mysql in Kafka Connect


connect-standalone.properties

connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
bootstrap.servers=10.33.62.20:9092,10.33.62.110:9092,10.33.62.200:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true

offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/grid/1/mukul/confluent-5.0.0/share/java
name=test-source-sqlite-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=5

connection.url=jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxxx

table.whitelist=banner_hourly_statistics_v2

group.id=test-mysql-kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

config.storage.topic=demo-1-distributed-config
offset.storage.topic=demo-1-distributed-offset
status.storage.topic=demo-1-distributed-status

bootstrap.servers=10.33.62.20:9092,10.33.62.110:9092,10.33.62.200:9092
mode=bulk
#incrementing.column.name=id
topic.prefix=test-sqlite-jdbc-
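One thing worth noting about the connection.url above: a standard MySQL JDBC URL introduces the first query parameter with `?`, not `&`, and the JDBC connector also supports passing credentials as separate properties. A possible corrected fragment (host, database, and credentials are the placeholders from the question; check that your connector version supports `connection.user`/`connection.password`):

```properties
# Standard MySQL JDBC URL syntax: '?' before the first parameter, '&' between parameters
connection.url=jdbc:mysql://10.32.177.178:3306/test?user=xxxx&password=xxxxx

# Alternatively, keep credentials out of the URL:
# connection.url=jdbc:mysql://10.32.177.178:3306/test
# connection.user=xxxx
# connection.password=xxxxx
```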
CMD:
connect-standalone /grid/1/mukul/confluent-5.0.0/etc/kafka/connect-standalone.properties /grid/1/mukul/confluent-5.0.0/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties

In the startup log it clearly shows the JDBC connector being loaded:

[2018-08-09 06:59:30,072] INFO Loading plugin from: /grid/1/mukul/confluent-5.0.0/share/java/kafka-connect-jdbc (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:218)
[2018-08-09 06:59:30,133] INFO Registered loader: PluginClassLoader{pluginLocation=file:/grid/1/mukul/confluent-5.0.0/share/java/kafka-connect-jdbc/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:241)
[2018-08-09 06:59:30,133] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:170)
[2018-08-09 06:59:30,133] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:170)
But it then fails with the following exception:

Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxx for configuration Couldn't open connection to jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxx
Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxx for configuration Couldn't open connection to jdbc:mysql://10.32.177.178:3306/test&user=xxxx&password=xxxx
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
    at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:110)
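The "No suitable driver" message comes from java.sql.DriverManager: no registered driver on the classpath accepts the jdbc:mysql URL, which usually means the MySQL Connector/J JAR is not in a directory Connect scans. A minimal sanity check, sketched below; the paths and JAR names simulate the layout from the question with a temp directory, so substitute your real plugin.path (e.g. /grid/1/mukul/confluent-5.0.0/share/java):

```shell
# Simulate the plugin layout from the question with a temp dir;
# in practice, point plugin_dir at your real plugin.path subdirectory.
plugin_dir="$(mktemp -d)/kafka-connect-jdbc"
mkdir -p "$plugin_dir"
touch "$plugin_dir/kafka-connect-jdbc-5.0.0.jar"
touch "$plugin_dir/mysql-connector-java-8.0.12.jar"   # the driver JAR Connect needs to see

# The actual check: is a MySQL driver JAR sitting next to the connector JAR?
if ls "$plugin_dir" | grep -qi 'mysql-connector'; then
    echo "mysql driver present"
else
    echo "mysql driver MISSING - expect 'No suitable driver'"
fi
```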
Also tried changing the plugin directory, with no luck. Tried moving the Confluent share/* to /usr/share/java as well, but that did not work either.

  • Download the JAR from the URL:

  • Place it in the plugin directory

  • Run Connect

  • It also takes a while before it starts pulling data from MySQL.

    This may be a little late. I ran into the same "no suitable driver found…" problem when connecting to DB2 with the Kafka JDBC connector.

    First possible solution:

    I solved it by placing the DB2 driver in the exact location where the kafka-connect-jdbc JAR lives. In Kafka Connect:

    find / -name kafka-connect-jdbc\*.jar

    Once you have the location from the command above, copy the DB2 JAR there:

    cp {your-db2-jar-location}/db2.jar {location from the 'find' command}

    Example:

    cp /Download/db2.jar /Users/share/java/kafka-connect-java/

    Restart Kafka Connect and it will pick up the DB2 driver.
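The find-and-copy steps above can be sketched end to end. The directories below are temp stand-ins and the JAR names are placeholders, not the exact driver file names:

```shell
# Stand-ins for the real locations; replace with your download directory and
# the directory reported by: find / -name 'kafka-connect-jdbc*.jar'
download_dir="$(mktemp -d)"
connect_dir="$(mktemp -d)/kafka-connect-jdbc"
mkdir -p "$connect_dir"
touch "$download_dir/db2.jar"                         # placeholder for the DB2 driver JAR
touch "$connect_dir/kafka-connect-jdbc-5.0.0.jar"

# Copy the driver into the same directory as the connector JAR...
cp "$download_dir/db2.jar" "$connect_dir/"
ls "$connect_dir"
# ...then restart Kafka Connect so the worker picks up the new JAR.
```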

    Second possible solution:

    Download the jt400 JAR (jdk-8) and place it next to the other JDBC drivers (DB2, SQL, etc.)


    Happy coding :)

    Which plugin directory are you referring to? How do I determine which directory is being used?

    @ValerianPereira: you need to copy the JAR into the plugin.path=share/java directory mentioned in the etc/kafka/connect-distributed.properties file, or you can use the plugin.path setting in the etc/kafka/connect-standalone.properties file for standalone mode.

    Thanks for the help @mukul, I have fixed the issue. The JAR was placed correctly in the share/java dir; however, I had not restarted the service for the change to take effect.