JSON: Google Cloud - Pub/Sub to Dataflow
I'm calling Pub/Sub through a REST request. I'm trying to put column data onto a Pub/Sub topic, pass it through Dataflow, and finally land it in BigQuery, where the table is already defined. This is the layout of said JSON data:
[
  {
    "age": "58",
    "job": "management",
    "marital": "married",
    "education": "tertiary",
    "default": "no",
    "balance": "2143",
    "housing": "yes",
    "loan": "no",
    "contact": "unknown",
    "day": "5",
    "month": "may",
    "duration": "261",
    "campaign": "1",
    "pdays": "-1",
    "previous": "0",
    "poutcome": "unknown",
    "y": "no"
  }
]
Now, for Pub/Sub to recognize it, the data has to go into a request body formatted like this:
{
  "messages": [{
    "attributes": {
      "key": "iana.org/language_tag",
      "value": "en"
    },
    "data": "%DATA%"
  }]
}
Now, the Pub/Sub REST reference states that the "data" field needs to be Base64-encoded, which is what I did; the final JSON looks like the above, with %DATA% replaced by the Base64 encoding of the original message data.
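The encoding step above can be sketched in Python. This is a minimal illustration (the `row` dict is abbreviated to a few of the fields shown earlier, and only the request body is built, not sent):

```python
import base64
import json

# A single row to publish, abbreviated from the layout shown above.
row = {
    "age": "58",
    "job": "management",
    "marital": "married",
    "y": "no",
}

# The Pub/Sub REST publish API requires the "data" field to be the
# Base64 encoding of the raw message bytes.
data_b64 = base64.b64encode(json.dumps(row).encode("utf-8")).decode("ascii")

# Substitute the encoded payload where %DATA% appeared in the template body.
body = {
    "messages": [{
        "attributes": {"key": "iana.org/language_tag", "value": "en"},
        "data": data_b64,
    }]
}
print(json.dumps(body, indent=2))
```

Decoding `body["messages"][0]["data"]` from Base64 yields the original JSON object back, which is what the Dataflow side will see as the message payload.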
Pub/Sub accepts this data and passes it on to Dataflow, but that is where everything breaks. Dataflow tries to deserialize the message and fails with the following error:
(efdf538fc01f50b0): java.lang.RuntimeException: Unable to parse input
com.google.cloud.teleport.templates.common.BigQueryConverters$JsonToTableRow$1.apply(BigQueryConverters.java:58)
com.google.cloud.teleport.templates.common.BigQueryConverters$JsonToTableRow$1.apply(BigQueryConverters.java:47)
org.apache.beam.sdk.transforms.MapElements$1.processElement(MapElements.java:122)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of com.google.api.services.bigquery.model.TableRow out of START_ARRAY token
at [Source: [{"age":"32","job":"\"admin.\"","marital":"\"single\"","education":"\"secondary\"","default":"\"no\"","balance":"5","housing":"\"yes\"","loan":"\"no\"","contact":"\"unknown\"","day":"12","month":"\"may\"","duration":"593","campaign":"2","pdays":"-1","previous":"0","poutcome":"\"unknown\"","y":"\"no\""}]; line: 1, column: 1]
I figured this had to do with how the "data" field was formatted, but nothing I tried at first made a difference. After further experimentation, the problem was indeed how the JSON was formatted: by removing the opening [ and the closing ], Dataflow was able to recognize the data and write it into BigQuery.
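The fix above makes sense given the stack trace: the template's JsonToTableRow step expects each Pub/Sub message to contain a single JSON object, but the payload started with `[` (Jackson's START_ARRAY token). A sketch of the corrected approach, publishing one message per row instead of one message wrapping the whole array (again only building the body, not sending it):

```python
import base64
import json

# Rows that previously sat inside one JSON array (fields abbreviated).
rows = [
    {"age": "58", "job": "management"},
    {"age": "32", "job": "admin."},
]

# One Pub/Sub message per row: each "data" payload is a bare JSON object,
# with no surrounding [ ], so the pipeline can map it to a single TableRow.
body = {
    "messages": [
        {"data": base64.b64encode(json.dumps(row).encode("utf-8")).decode("ascii")}
        for row in rows
    ]
}
print(json.dumps(body, indent=2))
```

Each decoded payload now begins with `{` rather than `[`, which is what the BigQuery converter in the template can deserialize.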