
Apache Kafka: right-hand side of a ksqlDB inner-joined stream is always null


Running ksqlDB version: 0.12.0

I have a problem with joined streams.

When I create a stream from a windowed inner-join query, the right-hand-side fields are all null, even the fields that are part of the join condition. When I run the same query on its own, I get the fields as expected.

Here is the setup.

The server runs via docker-compose:

services:
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.12.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: mybroker:9092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://myregistry:8081
      KSQL_KSQL_STREAMS_AUTO_OFFSET_RESET: 'earliest'

Stream and join definitions:

CREATE STREAM generic_orders (
    ...
    clientrequestid_ string KEY,
    ...
)   WITH (
    KAFKA_TOPIC='thetopic',
    VALUE_FORMAT='AVRO'
);

CREATE STREAM specific_order (
    originatoruserid_ string,
    request_clientid_ bigint,
    bidquoteinfo_volume_ bigint
)   WITH (
    KAFKA_TOPIC='anothertopic',
    VALUE_FORMAT='AVRO'
);
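Before joining, it can help to confirm the column types and key columns that ksqlDB actually registered for each side. The standard DESCRIBE statement prints them (stream names taken from the definitions above):

```sql
-- Column names, types, and which column (if any) is the key
DESCRIBE generic_orders;
DESCRIBE specific_order;

-- EXTENDED additionally shows the backing topic, formats, and runtime stats
DESCRIBE EXTENDED generic_orders;
```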
Joining the two streams above did not work; I also tried making request_clientid_ a key. I suspected the join failed because clientrequestid_ and request_clientid_ have different types, so I created another stream:

CREATE STREAM specific_order_typed AS
    select originatoruserid_ ,
    CAST(request_clientid_ AS string) as request_clientid_ KEY, 
    bidquoteinfo_volume_ 
    FROM ice_massquote_order
    EMIT CHANGES
    ;
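As an aside, KEY is not valid inside a SELECT projection; re-keying a stream in ksqlDB is normally expressed with PARTITION BY. A sketch of what the re-keyed stream might look like (untested against this setup; it reads from specific_order rather than the ice_massquote_order stream referenced above, and exact key-column semantics vary across ksqlDB versions):

```sql
-- Re-key on the casted id so both join columns are STRING
CREATE STREAM specific_order_typed AS
    SELECT originatoruserid_,
           CAST(request_clientid_ AS STRING) AS request_clientid_,
           bidquoteinfo_volume_
    FROM specific_order
    PARTITION BY CAST(request_clientid_ AS STRING)
    EMIT CHANGES;
```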
That did not work either.

Here is the joined stream:

CREATE STREAM enriched_orders AS
    SELECT i.request_clientid_ as reqid, o.clientrequestid_ as orderreq  FROM specific_order_typed i
    INNER JOIN generic_order o WITHIN 1 HOURS ON o.clientrequestid_ = i.request_clientid_ 
    EMIT CHANGES;
I also tried swapping the streams on either side of the join... the result is always null:

|623562762 |null
...
When running the query directly, though, I get the expected results:

    SELECT i.request_clientid_ as reqid, o.clientrequestid_ as orderreq  FROM specific_order_typed i
    INNER JOIN generic_order o WITHIN 1 HOURS ON o.clientrequestid_ = i.request_clientid_ 
    EMIT CHANGES;

Does anyone know what is going on? I have exhausted all my ideas, so I worked around the problem by downgrading to 0.10.2.

The first difference I noticed when trying to join the original streams is that the join columns of generic_orders and specific_order have different types, string versus bigint:

CREATE STREAM generic_orders (
    ...
    clientrequestid_ string,
    ...
)   WITH (
    KAFKA_TOPIC='thetopic',
    VALUE_FORMAT='AVRO'
);

CREATE STREAM specific_order (
    originatoruserid_ string,
    request_clientid_ bigint,
    bidquoteinfo_volume_ bigint
)   WITH (
    KAFKA_TOPIC='anothertopic',
    VALUE_FORMAT='AVRO'
);
Then I simply cast the bigint to a string inside the join condition (which I had also tried with 0.12 earlier); the final statement is shown at the end of this answer.

Finally, when reading that stream, the values were no longer null.

Comment: Is the key in the topic the actual message key, or a string holding the clientrequestid? If you print the topic, you should see the key value.
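The check suggested in the comment above can be done with ksqlDB's PRINT statement, which shows each record's key, value, and timestamp (topic name taken from the stream definitions):

```sql
-- Inspect a few raw records, including the message key
PRINT 'thetopic' FROM BEGINNING LIMIT 5;
```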
CREATE STREAM enriched_orders AS
    SELECT * FROM specific_order_typed i
    INNER JOIN generic_order o WITHIN 1 HOURS ON o.clientrequestid_ = CAST(i.request_clientid_ AS String)
    EMIT CHANGES;
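To verify the fix, a bounded push query over the result stream shows whether the right-hand columns are now populated (a quick check, not from the original post):

```sql
-- Expect non-null values on both sides if the join matched
SELECT * FROM enriched_orders EMIT CHANGES LIMIT 5;
```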