Druid Kafka indexing service setup (JSON)


Following the documentation, I edited

druid-0.9.2/conf/druid/_common/common.runtime.properties

and added

"druid-kafka-indexing-service"

to druid.extensions.loadList, then restarted all the Druid services:
middlemanager
overlord
coordinator
broker
historical

I ran:

curl -X 'POST' -H 'Content-Type:application/json' -d @kafka_connect/script.json druid_server:8090/druid/indexer/v1/task
but got:

{"error":"Could not resolve type id 'kafka' into a subtype of [simple type, class io.druid.indexing.common.task.Task]\n at [Source: HttpInputOverHTTP@4c467f1c; line: 1, column: 4]"}
The input JSON is:

{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "sensors-kafka",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": {
          "column": "timestamp",
          "format": "auto"
        },
        "dimensionsSpec": {
          "dimensions": ["machine", "key"],
          "dimensionExclusions": [
            "timestamp",
            "value"
          ]
        }
      }
    },
    "metricsSpec": [
      {
        "name": "count",
        "type": "count"
      },
      {
        "name": "value_sum",
        "fieldName": "value",
        "type": "doubleSum"
      },
      {
        "name": "value_min",
        "fieldName": "value",
        "type": "doubleMin"
      },
      {
        "name": "value_max",
        "fieldName": "value",
        "type": "doubleMax"
      }
    ],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "HOUR",
      "queryGranularity": "NONE"
    }
  },
  "tuningConfig": {
    "type": "kafka",
    "maxRowsPerSegment": 5000000
  },
  "ioConfig": {
    "topic": "sensor",
    "consumerProperties": {
      "bootstrap.servers": "kafka_server:2181"
    },
    "taskCount": 1,
    "replicas": 1,
    "taskDuration": "PT1H"
  }
}
Any idea what I am doing wrong? According to the documentation:

http://druid.io/docs/0.9.2-rc3/development/extensions-core/kafka-ingestion.html

the type is
kafka


Is there a way to check whether the extension was loaded correctly, or do I have to specify the extension in each component's runtime.properties?

A Kafka supervisor spec has to be POSTed to the overlord at
/druid/indexer/v1/supervisor
not to /druid/indexer/v1/task:

curl -X POST -H 'Content-Type: application/json' -d @kafka_connect/script.json http://druid_server:8090/druid/indexer/v1/supervisor
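Before POSTing, a quick local sanity check can catch problems in the spec (a minimal sketch, not part of Druid; the ZooKeeper-port check reflects that Kafka's bootstrap.servers must point at Kafka brokers, typically port 9092, while 2181 — as in the question's spec — is ZooKeeper's default port):

```python
import json

def validate_kafka_supervisor_spec(raw):
    """Run minimal local checks on a Kafka supervisor spec before POSTing it.

    Returns a list of human-readable problems (empty means no issue found).
    """
    spec = json.loads(raw)
    problems = []
    if spec.get("type") != "kafka":
        problems.append('top-level "type" must be "kafka"')
    io = spec.get("ioConfig", {})
    if "topic" not in io:
        problems.append('"ioConfig.topic" is missing')
    servers = io.get("consumerProperties", {}).get("bootstrap.servers", "")
    # bootstrap.servers should list Kafka brokers (default port 9092),
    # not ZooKeeper (default port 2181) -- a common mix-up.
    if servers.endswith(":2181"):
        problems.append('"bootstrap.servers" looks like a ZooKeeper address')
    return problems
```

Note that even a valid spec will still fail with the "Could not resolve type id 'kafka'" error if the druid-kafka-indexing-service extension is not loaded on the overlord.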

I ran into a similar problem. I modified the "conf/druid/_common/common.runtime.properties" file and added "druid-kafka-indexing-service" to druid.extensions.loadList, so that it reads as follows:

druid.extensions.loadList=["druid-parser-route", "mysql-metadata-storage", "druid-kafka-indexing-service"]
Hope this helps someone else.
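To check whether the extension actually appears in the list without eyeballing the file, a small script can parse the property (a sketch, assuming the loadList is written on a single line as above):

```python
import re

def extension_loaded(properties_text, extension):
    """Return True if `extension` appears in druid.extensions.loadList."""
    for line in properties_text.splitlines():
        line = line.strip()
        if line.startswith("druid.extensions.loadList"):
            _, _, value = line.partition("=")
            # The loadList value is a JSON-style array of quoted names.
            names = re.findall(r'"([^"]+)"', value)
            return extension in names
    return False
```

Run it against the common.runtime.properties of each component (or at least the overlord and middleManagers, which run the ingestion tasks).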

If you are using the clustered configuration, you need to set

druid.extensions.loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"]

in the file
/opt/druid/conf/druid/cluster/_common/common.runtime.properties

If the 'kafka' type still cannot be resolved by the overlord after this change, check again that the extension loaded correctly: the loaded extensions are printed when the overlord starts up. I was using the quickstart guide, but the extensions documentation refers to
conf
rather than
conf-quickstart
, which is what the quickstart startup scripts use. Now I get an
error 500
, but that is because the data source was sending epoch timestamps instead of
YYYY-MM-DD