Scala Flink JDBC sink fails with a non-serializable error


Below I am using a MySQL database as a sink for Flink. The code compiles successfully, but executing the job on the Flink cluster fails:

The program finished with the following exception:

The implementation of the AbstractJdbcOutputFormat is not serializable. The object probably contains or references non serializable fields.
        org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:151)
        org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:126)
        org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:71)
        org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1899)
        org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:189)
        org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1296)
        org.apache.flink.streaming.api.scala.DataStream.addSink(DataStream.scala:1131)
        Aggregator.Aggregator$.main(Aggregator.scala:81)
Here is the relevant part of the code:

object Aggregator {
  @throws[Exception]
  def main(args: Array[String]): Unit = {
    [...]
    val counts = stream.map { x => (
        x.get("value").get("id").asInt(),
        x.get("value").get("kpi").asDouble()
      )}
      .keyBy(0)
      .timeWindow(Time.seconds(60))
      .sum(1)

    counts.print()

    val statementBuilder: JdbcStatementBuilder[(Int, Double)] =
      (ps: PreparedStatement, t: (Int, Double)) => {
        ps.setInt(1, t._1);
        ps.setDouble(2, t._2);
      };

    val connection = new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
      .withDriverName("mysql.Driver")
      .withPassword("XXX")
      .withUrl("jdbc:mysql://:3306/")
      .withUsername("")
      .build();

    val jdbcSink = JdbcSink.sink(
      "insert into table (id, kpi) values (?, ?)",
      statementBuilder,
      connection);

    counts.addSink(jdbcSink)
    env.execute("Aggregator")
  }
}

I am not sure which part of the code is the problem here, or how to debug it. Unfortunately, I also could not find a reference implementation of the JDBC sink in Scala. Any help is appreciated.

What worked for me was creating the JdbcStatementBuilder explicitly instead of using a lambda. Something like:

val statementBuilder: JdbcStatementBuilder[(Int, Double)] =
  new JdbcStatementBuilder[(Int, Double)] {
    override def accept(ps: PreparedStatement, t: (Int, Double)): Unit = {
      ps.setInt(1, t._1)
      ps.setDouble(2, t._2)
    }
  }

Yes, it worked for me too after I created the JdbcStatementBuilder explicitly. Judging from the stack trace, our sink definition seems to fail during the "closure cleaning" phase; I am not sure why it is needed there.
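For what it's worth, the closure cleaner fails because everything handed to `addSink` must survive plain Java serialization. You can reproduce the check outside Flink with `ObjectOutputStream`. Below is a minimal, Flink-free sketch of that idea; `Builder`, `FakeConnection`, `CleanBuilder`, and `DirtyBuilder` are hypothetical stand-ins for illustration, not Flink APIs:

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Hypothetical stand-in for JdbcStatementBuilder: a SAM-style
// interface that extends Serializable, as Flink requires.
trait Builder extends Serializable {
  def accept(t: (Int, Double)): Unit
}

// Stand-in for a non-serializable resource, e.g. a raw JDBC Connection.
class FakeConnection

// Serializable: implements the interface explicitly and captures nothing.
class CleanBuilder extends Builder {
  override def accept(t: (Int, Double)): Unit = ()
}

// Not serializable: holds a reference to a non-serializable field,
// which is exactly what the error message complains about.
class DirtyBuilder(val conn: FakeConnection) extends Builder {
  override def accept(t: (Int, Double)): Unit = println(conn)
}

object SerializableCheck {
  // Probe an object the same way Flink ultimately does:
  // try to write it through Java serialization.
  def isSerializable(obj: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def main(args: Array[String]): Unit = {
    println(isSerializable(new CleanBuilder))                     // true
    println(isSerializable(new DirtyBuilder(new FakeConnection))) // false
  }
}
```

A quick probe like this can narrow down which captured object breaks the sink before resubmitting the whole job to the cluster.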