Hive: Kafka to Hive sink via Flink fails on write

Tags: hive, apache-kafka, apache-flink, flink-streaming

I am trying to sink data into Hive via Kafka -> Flink -> Hive using the snippet below, but the job fails with the error that follows it:

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<GenericRecord> stream = readFromKafka(env);

// column types matching the (key, value) parameters of the INSERT statement
private static final TypeInformation[] FIELD_TYPES = new TypeInformation[]{
        BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO
};

JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
        .setDrivername("org.apache.hive.jdbc.HiveDriver")
        .setDBUrl("jdbc:hive2://hiveconnstring")
        .setUsername("myuser")
        .setPassword("mypass")
        .setQuery("INSERT INTO testHiveDriverTable (key,value) VALUES (?,?)")
        .setBatchSize(1000)
        .setParameterTypes(FIELD_TYPES)
        .build();

// map the Avro records to two-field Rows matching FIELD_TYPES
DataStream<Row> rows = stream.map((MapFunction<GenericRecord, Row>) st1 -> {
    Row row = new Row(2);
    row.setField(0, st1.get("SOME_ID"));
    row.setField(1, st1.get("SOME_ADDRESS"));
    return row;
});

sink.emitDataStream(rows);
env.execute("Flink101");


Caused by: java.lang.RuntimeException: Execution of JDBC statement failed.
at org.apache.flink.api.java.io.jdbc.JDBCOutputFormat.flush(JDBCOutputFormat.java:219)
at org.apache.flink.api.java.io.jdbc.JDBCSinkFunction.snapshotState(JDBCSinkFunction.java:43)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:356)
... 12 more

Caused by: java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveStatement.executeBatch(HiveStatement.java:381)
at org.apache.flink.api.java.io.jdbc.JDBCOutputFormat.flush(JDBCOutputFormat.java:216)
... 17 more
Is there any way to achieve this using the JDBC driver?

Thanks in advance.

Hive's JDBC implementation is not yet complete; the problem you hit is tracked upstream.


You can try to avoid batching by patching Flink's JDBCOutputFormat: replace the call to upload.addBatch with upload.execute at JDBCOutputFormat.java:202, and remove the call to upload.executeBatch at JDBCOutputFormat.java:216. The downside is that you then issue a dedicated SQL query for every record, which is likely to slow things down.
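The idea behind that patch can be sketched as follows. This is an illustrative stand-in, not Flink's actual code: the class name PerRecordWriter and the Consumer hook are assumptions standing in for a PreparedStatement, so the sketch runs without a database.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of a sink that issues one SQL execution per record, which is the
// behavior the patched JDBCOutputFormat would have. The Consumer stands in
// for PreparedStatement.execute() in real code.
public class PerRecordWriter<T> {
    private final Consumer<T> executeOne;

    public PerRecordWriter(Consumer<T> executeOne) {
        this.executeOne = executeOne;
    }

    public void writeRecord(T record) {
        // Hive's JDBC driver supports execute() but not executeBatch(),
        // so each record is flushed immediately instead of being batched.
        executeOne.accept(record);
    }

    public static void main(String[] args) {
        List<String> issuedQueries = new ArrayList<>();
        PerRecordWriter<String> writer =
                new PerRecordWriter<>(r -> issuedQueries.add("INSERT ... VALUES (" + r + ")"));
        writer.writeRecord("1,'addr-a'");
        writer.writeRecord("2,'addr-b'");
        // one dedicated query per record, hence the performance caveat above
        System.out.println(issuedQueries.size());
    }
}
```

This makes the trade-off concrete: two records cost two round trips instead of one batched flush.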

Alternatively, Confluent Platform ships with the HDFS connector, which has Hive integration.

We have no other requirement; Flink is going to be the central data-processing place for the whole company, so we need the Kafka -> Flink -> Hive integration.
For reference, Hive's HiveStatement rejects batch execution outright:

public class HiveStatement implements java.sql.Statement {
  ...

  @Override
  public int[] executeBatch() throws SQLException {
    throw new SQLFeatureNotSupportedException("Method not supported");
  }

  ...
}
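Generic JDBC code can guard against drivers like this before calling executeBatch(), using the standard DatabaseMetaData.supportsBatchUpdates() capability check. A minimal sketch, with assumed names (BatchSupportCheck, chooseStrategy) and a proxied Connection faking a Hive-like driver so it runs standalone:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

// Sketch: ask the driver whether it supports batching at all before relying
// on executeBatch(). A driver that cannot batch (as Hive's effectively
// cannot) should be written to with one execute() per record instead.
public class BatchSupportCheck {

    static String chooseStrategy(Connection conn) throws SQLException {
        DatabaseMetaData md = conn.getMetaData();
        return md.supportsBatchUpdates()
                ? "addBatch/executeBatch"   // normal JDBC path
                : "execute per record";     // batch-free fallback
    }

    public static void main(String[] args) throws SQLException {
        // Fake DatabaseMetaData that reports no batch support.
        DatabaseMetaData fakeMd = (DatabaseMetaData) Proxy.newProxyInstance(
                BatchSupportCheck.class.getClassLoader(),
                new Class<?>[]{DatabaseMetaData.class},
                (p, m, a) -> "supportsBatchUpdates".equals(m.getName()) ? false : null);
        Connection fakeConn = (Connection) Proxy.newProxyInstance(
                BatchSupportCheck.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (p, m, a) -> "getMetaData".equals(m.getName()) ? fakeMd : null);

        System.out.println(chooseStrategy(fakeConn));
    }
}
```

Note the check is only a hint: some drivers (including older Hive versions, as the snippet above shows) simply throw from executeBatch() regardless, so a try/catch fallback is still prudent.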