The comment thread on the question went as follows:

- mikalailushcytski: Have you tried adding flink-connector-jdbc_2.11-1.11.2.jar to the flink classpath, i.e. flink/lib?
- OP: @mikalailushcytski I tried adding it to /usr/local/lib/python3.7/site-packages/flink/lib - is that what you mean?
- mikalailushcytski: I mean what you asked in the other thread: put the required dependencies on the flink classpath, that is, flink/lib inside the containers (jobmanager and taskmanager).
- OP: I see what you mean. A minute ago I tried running the flink docker containers on my machine with a volume mounted to /flink/lib; the containers run fine, but the job still fails with the same error. I also downloaded the Flink 1.11.2 release, added flink-connector-jdbc_2.11-1.11.2.jar to flink/lib, and ran a local flink cluster, but I still get the same error.
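Since the advice is to get the connector jar into flink/lib inside both containers, here is a minimal sketch of what that mount could look like in the docker-compose file shown further below (the host path ./libs/ is a placeholder; the official Flink images keep the classpath at /opt/flink/lib):

  jobmanager:
    volumes:
      - ./libs/flink-connector-jdbc_2.11-1.11.2.jar:/opt/flink/lib/flink-connector-jdbc_2.11-1.11.2.jar
  taskmanager:
    volumes:
      - ./libs/flink-connector-jdbc_2.11-1.11.2.jar:/opt/flink/lib/flink-connector-jdbc_2.11-1.11.2.jar

Both services get the mount, matching the advice in the thread that jobmanager and taskmanager each need the dependency.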
The PyFlink job:

from pyflink.dataset import ExecutionEnvironment
from pyflink.table import TableConfig, BatchTableEnvironment

T_CONFIG = TableConfig()
B_EXEC_ENV = ExecutionEnvironment.get_execution_environment()
B_EXEC_ENV.set_parallelism(1)
BT_ENV = BatchTableEnvironment.create(B_EXEC_ENV, T_CONFIG)

# Source table backed by the JDBC connector, pointing at the MySQL
# instance from the docker-compose setup below.
ddl = """
            CREATE TABLE nba_player4 (
                 first_name STRING,
                 last_name STRING,
                 email STRING,
                 id INT
            ) WITH (
                'connector' = 'jdbc',
                'url' = 'jdbc:mysql://localhost:3306/inventory',
                'username' = 'root',
                'password' = 'debezium',
                'table-name' = 'customers'
            )
      """
BT_ENV.sql_update(ddl)

# Sink table that prints every row to stdout.
sinkddl = """
        CREATE TABLE print_table (
         f0 INT,
         f1 INT,
         f2 STRING,
         f3 DOUBLE
        ) WITH (
         'connector' = 'print'
        )
      """
BT_ENV.sql_update(sinkddl)

# SQL validation of this query is where the job fails.
BT_ENV.sql_query("SELECT first_name, last_name FROM nba_player4")
BT_ENV.execute("table_job")

Running it fails during validation of the SELECT:
py4j.protocol.Py4JJavaError: An error occurred while calling o23.sqlQuery.
: org.apache.flink.table.api.ValidationException: SQL validation failed. findAndCreateTableSource failed.

Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.

Reason: Required context properties mismatch.

The following properties are requested:
connector=jdbc
password=debezium
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=first_name
schema.1.data-type=VARCHAR(2147483647)
schema.1.name=last_name
schema.2.data-type=VARCHAR(2147483647)
schema.2.name=email
schema.3.data-type=INT
schema.3.name=id
table-name=customers
url=jdbc:mysql://localhost:3306/inventory
username=root

The following factories have been considered:
org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.table.filesystem.FileSystemTableFactory
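Note the factory list: only the legacy JdbcTableSourceSinkFactory was considered, and it matches the old 'connector.type' = 'jdbc' property style rather than the 'connector' = 'jdbc' options used in the DDL. In Flink 1.11, the new-style options are resolved by the Blink planner's dynamic table factories, while BatchTableEnvironment.create(B_EXEC_ENV, T_CONFIG) starts the legacy planner. A minimal sketch of creating the environment with the Blink planner instead (an assumption worth testing, not something confirmed in the thread; the connector jar still has to be on the classpath):

from pyflink.table import BatchTableEnvironment, EnvironmentSettings

# Blink planner in batch mode; the legacy planner only discovers
# old-style TableSourceFactory connectors.
env_settings = (
    EnvironmentSettings.new_instance()
    .in_batch_mode()
    .use_blink_planner()
    .build()
)
BT_ENV = BatchTableEnvironment.create(environment_settings=env_settings)

For reference, the docker-compose setup in use: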
version: '2.1'
services:
  jobmanager:
    build: .
    image: flink:latest
    hostname: "jobmanager"
    expose:
      - "6123"
    ports:
      - "8081:8081"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: flink:latest
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - jobmanager:jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  mysql:
    image: debezium/example-mysql
    ports:
     - "3306:3306"
    environment:
     - MYSQL_ROOT_PASSWORD=debezium
     - MYSQL_USER=mysqluser
     - MYSQL_PASSWORD=mysqlpw 
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                                                            NAMES
cf84c84f7821        flink      "/docker-entrypoint.…"   2 minutes ago       Up 2 minutes        6121-6123/tcp, 8081/tcp                                          _taskmanager_1
09b19142d70a        flink      "/docker-entrypoint.…"   9 minutes ago       Up 9 minutes        6123/tcp, 0.0.0.0:8081->8081/tcp                                 _jobmanager_1
4ac01eb11bf7        debezium/example-mysql      "docker-entrypoint.s…"   3 days ago          Up 9 minutes        0.0.0.0:3306->3306/tcp, 33060/tcp                                keras-flask-dep
The image was pulled with docker pull flink:scala_2.12-java8, and the connector options follow the Flink 1.11 JDBC connector docs: https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html

Passing the connector jar to the job directly via pipeline.jars was also tried:
base_dir = "/Users/huhu/Documents/projects/webapp/libs/"
flink_jdbc_jar = f"file://{base_dir}flink-connector-jdbc_2.11-1.11.2.jar"

# Attach the connector jar to the job's classpath.
BT_ENV.get_config().get_configuration().set_string("pipeline.jars", flink_jdbc_jar)
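The JDBC connector also needs a JDBC driver at runtime, and the MySQL driver ships separately from the connector jar. pipeline.jars takes a semicolon-separated list of jar URLs, so both can be passed together; a sketch, with the driver jar name and version as assumptions:

# Hypothetical MySQL driver jar sitting next to the connector jar;
# pipeline.jars is a semicolon-separated list of jar URLs.
mysql_driver_jar = f"file://{base_dir}mysql-connector-java-8.0.21.jar"
BT_ENV.get_config().get_configuration().set_string(
    "pipeline.jars", ";".join([flink_jdbc_jar, mysql_driver_jar])
)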
The MySQL-side table definitions that were tried:

CREATE TABLE nba_player4 (
    first_name VARCHAR(20),
    last_name  VARCHAR(20),
    email      VARCHAR(50),
    id         VARCHAR(10)
);

CREATE TABLE nba_player4 (
    first_name VARCHAR(20),
    last_name  VARCHAR(20),
    email      VARCHAR(50),
    id         INT PRIMARY KEY AUTO_INCREMENT
);
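Two details stand out when comparing these definitions with the Flink DDL above (observations from the snippets, not confirmed fixes): the Flink schema declares id INT, which lines up with the second variant but not the first, where id is VARCHAR(10), and the Flink column types need to match the MySQL columns; also, the WITH clause reads 'table-name' = 'customers' while the tables created here are named nba_player4, so the connector is pointed at a different table than the one defined.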