
Java Flink 1.2.0: reading streaming data from MySQL via JDBC


I am trying to read streaming data from a MySQL log table using Flink 1.2.0; however, it reads the table only once and then stops the process. I want it to keep reading whenever new data arrives and print it. Below is my code:

import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.LONG_TYPE_INFO;
import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.STRING_TYPE_INFO;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;

public class Database {

    public static void main(String[] args) throws Exception {

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // row schema of the query result: id (BIGINT), SERVER_NAME (VARCHAR)
        TypeInformation<?>[] fieldTypes = new TypeInformation<?>[] { LONG_TYPE_INFO, STRING_TYPE_INFO };
        RowTypeInfo rowTypeInfo = new RowTypeInfo(fieldTypes);

        DataStreamSource<Row> source = env.createInput(
            JDBCInputFormat.buildJDBCInputFormat()
                    .setDrivername("com.mysql.jdbc.Driver")
                    .setDBUrl("jdbc:mysql://localhost/log_db")
                    .setUsername("root")
                    .setPassword("pass")
                    .setQuery("select id, SERVER_NAME from ERRORLOG")
                    .setRowTypeInfo(rowTypeInfo)
                    .finish()
        );
        source.print().setParallelism(1);
        env.execute("Error Log Data");
    }
}
I am running it locally with Maven:

mvn exec:java -Dexec.mainClass=com.test.Database
Result:

09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.Task                     - Freeing task resources for Source: Custom Source (1/4) (41c66a6dfb97e1d024485f473617a342).
09:15:56,394 INFO  org.apache.flink.core.fs.FileSystem                           - Ensuring all FileSystem streams are closed for Source: Custom Source (1/4)
09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.Task                     - Sink: Unnamed (1/1) (5212fc2a570152c58ffe3d39d3d805b0) switched from RUNNING to FINISHED.
09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.Task                     - Freeing task resources for Sink: Unnamed (1/1) (5212fc2a570152c58ffe3d39d3d805b0).
09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - Un-registering task and sending final execution state FINISHED to JobManager for task Source: Custom Source (41c66a6dfb97e1d024485f473617a342)
09:15:56,396 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Source: Custom Source (1/4) (41c66a6dfb97e1d024485f473617a342) switched from RUNNING to FINISHED.
09:15:56,396 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor      - 02/22/2017 09:15:56  Source: Custom Source(1/4) switched to FINISHED
02/22/2017 09:15:56     Source: Custom Source(1/4) switched to FINISHED
09:15:56,396 INFO  org.apache.flink.core.fs.FileSystem                           - Ensuring all FileSystem streams are closed for Sink: Unnamed (1/1)
09:15:56,397 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - Un-registering task and sending final execution state FINISHED to JobManager for task Sink: Unnamed (5212fc2a570152c58ffe3d39d3d805b0)
09:15:56,398 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Sink: Unnamed (1/1) (5212fc2a570152c58ffe3d39d3d805b0) switched from RUNNING to FINISHED.
09:15:56,398 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Job Socket Window Data (0eb15d61031ede785e7ed21ead21ceea) switched from state RUNNING to FINISHED.
09:15:56,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor      - 02/22/2017 09:15:56  Sink: Unnamed(1/1) switched to FINISHED
02/22/2017 09:15:56     Sink: Unnamed(1/1) switched to FINISHED
09:15:56,405 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Stopping checkpoint coordinator for job 0eb15d61031ede785e7ed21ead21ceea
09:15:56,406 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor      - Terminate JobClientActor.
09:15:56,406 INFO  org.apache.flink.runtime.client.JobClient                     - Job execution complete
09:15:56,408 INFO  org.apache.flink.runtime.minicluster.FlinkMiniCluster         - Stopping FlinkMiniCluster.
09:15:56,405 INFO  org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore  - Shutting down

The table data in MySQL is fixed at the moment the query starts, so this job should really be a Flink batch job.

If you want to keep reading rows as they arrive, Flink cannot handle this case out of the box: unless something monitors the binlog, Flink has no way of knowing about new rows.


You have to use Canal to sync the binlog from MySQL into Kafka, and then run a Flink streaming job that reads from Kafka. That is the best solution.
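To illustrate what the Flink job downstream of Canal would consume: Canal publishes each binlog change to Kafka as a JSON event. The sketch below is illustrative only; the field names (`type`, `table`, `data`) follow the general shape of a Canal flat message and are an assumption, as is the naive string matching. A real job would deserialize the record with a JSON library inside the Kafka consumer's deserialization schema.

```java
// Hypothetical sketch of filtering Canal-style binlog events before emitting
// them into a Flink pipeline. The JSON shape and field names are assumptions.
class BinlogEventFilter {

    /** Crude check: keep only INSERT events for the ERRORLOG table. */
    static boolean isErrorLogInsert(String json) {
        return json.contains("\"type\":\"INSERT\"")
            && json.contains("\"table\":\"ERRORLOG\"");
    }

    public static void main(String[] args) {
        String insert = "{\"table\":\"ERRORLOG\",\"type\":\"INSERT\","
            + "\"data\":[{\"id\":\"7\",\"SERVER_NAME\":\"web01\"}]}";
        String update = "{\"table\":\"ERRORLOG\",\"type\":\"UPDATE\","
            + "\"data\":[{\"id\":\"7\",\"SERVER_NAME\":\"web01\"}]}";
        System.out.println(isErrorLogInsert(insert)); // prints true
        System.out.println(isErrorLogInsert(update)); // prints false
    }
}
```

In the real pipeline this predicate would sit in a `filter()` right after the Kafka source, so only new ERRORLOG rows flow on to the print sink.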

JDBCInputFormat was written for batch applications; there is currently no streaming JDBC connector in the Flink project. You would have to create a source yourself (which could use a JDBCInputFormat internally), but you have to take care to only emit new values. @ChesnaySchepler Thank you for the reply; could you give me an example of how to create such a source for MySQL? I also read about Flink's readFile API; maybe I could expose this table as a JSON API and read from it as a streamed file?
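The "only emit new values" bookkeeping such a custom source needs can be sketched independently of Flink. The class below is a hypothetical helper, not a Flink API; all names are illustrative. It remembers the highest ERRORLOG id seen so far and, on each re-run of the same select query, passes through only the rows that are newer. A real source would wrap this logic in a Flink `SourceFunction`, re-executing the query with a sleep between polls.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: incremental bookkeeping for a polling JDBC source.
// Each row is modeled as Object[]{id, serverName}, mirroring
// "select id, SERVER_NAME from ERRORLOG".
class IncrementalPoller {
    private long lastSeenId = 0L; // highest ERRORLOG id emitted so far

    /** Keep only rows whose id is above the previous poll's high-water mark. */
    List<Object[]> pollNewRows(List<Object[]> rows) {
        List<Object[]> fresh = new ArrayList<>();
        for (Object[] row : rows) {
            long id = (Long) row[0];
            if (id > lastSeenId) {
                fresh.add(row);
                lastSeenId = id; // advance the high-water mark
            }
        }
        return fresh;
    }

    public static void main(String[] args) {
        IncrementalPoller poller = new IncrementalPoller();
        // first poll: the whole table is new
        List<Object[]> first = Arrays.asList(
            new Object[]{1L, "web01"}, new Object[]{2L, "web02"});
        System.out.println(poller.pollNewRows(first).size()); // prints 2
        // second poll: row 2 is re-read, row 3 was inserted in the meantime
        List<Object[]> second = Arrays.asList(
            new Object[]{2L, "web02"}, new Object[]{3L, "web03"});
        System.out.println(poller.pollNewRows(second).size()); // prints 1
    }
}
```

This assumes `id` is monotonically increasing; a timestamp column would work the same way. Note that polling like this can miss deletes and updates, which is exactly why the binlog-based approach above is more robust.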