
How to select data from JSON formatted as a string in Kinesis Analytics (SQL)


I have a Kinesis data stream that delivers data in the following format:

created_at: TIMESTAMP
payload: VARCHAR(6000)

A simplified example of a payload element:
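A hypothetical sketch of that shape, based only on the fields referenced further down (a root-level version, plus a data.observations array whose elements carry obs_id and location); the values and any other structure are assumptions:

{
  "version": 1,
  "data": {
    "observations": [
      { "obs_id": 101, "location": 3 },
      { "obs_id": 102, "location": 7 }
    ]
  }
}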

In the live data, the data.observations array inside the payload column typically has between 0 and 200 elements.

I am trying to unpack the data in payload and create a new row for each of those elements. For this example, the expected result would be a data stream with the following structure:

created_at TIMESTAMP,  -- from the root
obs_id INTEGER,        -- from inside data.observations
location INTEGER,      -- from inside data.observations.location
version INTEGER        -- from the root

This is where I am right now; it works, but it does not extract anything from the JSON:

-- CREATE OR REPLACE STREAM for cleaned up referrer
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    "created_at" TIMESTAMP,
    "version" Integer
    );

CREATE OR REPLACE PUMP "myPUMP" AS 
   INSERT INTO "DESTINATION_SQL_STREAM"
      SELECT STREAM 
         "created_at", 
         "version"
      FROM "SOURCE_SQL_STREAM_001";
However, it breaks if I try this:

-- CREATE OR REPLACE STREAM for cleaned up referrer
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    "created_at" TIMESTAMP,
    "version" Integer,
    "obs_id" integer 
    );

CREATE OR REPLACE PUMP "myPUMP" AS 
   INSERT INTO "DESTINATION_SQL_STREAM"
      SELECT STREAM 
         "created_at", 
         "version",
         "data"."observations"."obs_id" as obs_id
      FROM "SOURCE_SQL_STREAM_001";
The error is: Table 'data' not found.

Any help is greatly appreciated.

Edit: I have now tried the following:

-- CREATE OR REPLACE STREAM for cleaned up referrer
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    "version" Integer
    , "whatever" varchar(10)
);

CREATE OR REPLACE PUMP "myPUMP" AS 
   INSERT INTO "DESTINATION_SQL_STREAM"
      SELECT STREAM 
        "version"
        , json_extract("data", "$.whatever") AS whatever,
      FROM "SOURCE_SQL_STREAM_001";
And I get this error:

org.eigenbase.sql.parser.SqlParseException: Encountered "FROM" at line 10, column 7. Was expecting one of: "*" ... <IDENTIFIER> ... <QUOTED_IDENTIFIER> ... <UNICODE_QUOTED_IDENTIFIER> ... "+" ... "-" ... <UNSIGNED_INTEGER_LITERAL> ... <DECIMAL_NUMERIC_LITERAL> ... <APPROX_NUMERIC_LITERAL> ... <BINARY_STRING_LITERAL> ... <PREFIXED_STRING_LITERAL> ... <QUOTED_STRING> ... <UNICODE_STRING_LITERAL> ... "TRUE" ... "FALSE" ... "UNKNOWN" ... "NULL" ... <LBRACE_D> ... <LBRACE_T> ... <LBRACE_TS> ... "DATE" ... "TIME" ... "TIMESTAMP" ... "INTERVAL" ... "?" ... "CAST" ... "DATEDIFF" ... "EXTRACT" ... "POSITION" ... "CONVERT" ... "TRANSLATE" ... "OVERLAY" ... "FLOOR" ... "CEIL" ... "CEILING" ... "STEP" ... "TUMBLE_WINDOW" ... "SUBSTRING" ... "TRIM" ... "FIRST_VALUE" ... "LAST_VALUE" ... "LAG" ... "NTH_VALUE" ... <LBRACE_FN> ... "MULTISET" ... "SPECIFIC" ... "ABS" ... "ANY" ... "AVG" ... "CARDINALITY" ... "CHAR_LENGTH" ... "CHARACTER_LENGTH" ... "COALESCE" ... "COLLECT" ... "CUME_DIST" ... "COUNT" ... "CURRENT_DATE" ... "CURRENT_TIME" ... "CURRENT_TIMESTAMP" ... "DENSE_RANK" ... "ELEMENT" ... "EVERY" ... "EXP_AVG" ... "EXP" ... "FUSION" ... "INITCAP" ... "LN" ... "LOCALTIME" ... "LOCALTIMESTAMP" ... "LOWER" ... "MAX" ... "MIN" ... "MOD" ... "NULLIF" ... "OCTET_LENGTH" ... "PERCENT_RANK" ... "POWER" ... "RANK" ... "ROW_NUMBER" ... "SQRT" ... "STDDEV" ... "STDDEV_POP" ... "STDDEV_SAMP" ... "SUM" ... "UPPER" ... "VAR_POP" ... "VAR_SAMP" ... "CURRENT_CATALOG" ... "CURRENT_DEFAULT_TRANSFORM_GROUP" ... "CURRENT_PATH" ... "ROWNUM" ... "CURRENT_ROLE" ... "CURRENT_SCHEMA" ... "CURRENT_USER" ... "SESSION_USER" ... "SYSTEM_USER" ... "USER" ... "NEW" ... "CASE" ... "PERIOD" ... "TSDIFF" ... "CURSOR" ... "ROW" ... "NOT" ... "EXISTS" ... "(" ...
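The parse error appears to be about the trailing comma before FROM in the SELECT list; also, a JSONPath passed as a string literal would normally be single-quoted ('$.whatever'), since double quotes denote identifiers. A syntactically cleaner version of the same attempt, still assuming a json_extract function is available in this dialect at all, would be something like:

CREATE OR REPLACE PUMP "myPUMP" AS 
   INSERT INTO "DESTINATION_SQL_STREAM"
      SELECT STREAM 
         "version",
         json_extract("data", '$.whatever') AS "whatever"  -- no trailing comma, single-quoted path
      FROM "SOURCE_SQL_STREAM_001";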
You can use json_extract for this, like below:

with dataset as (
  select data from vendor_meraki_data_raw
  limit 5
),

jsondata as (
  select
    json_extract(data, '$.data') as fulldata
  from dataset
)

select
  json_extract(fulldata, '$.apMac') as apMac
from jsondata


I tried this, but it did not work; see the updated question.
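For reference, a sketch of how that json_extract pattern might be rewritten against the question's stream-and-pump setup. Everything here is assumed rather than verified: that a json_extract-style function with JSONPath support is actually available in the Kinesis Data Analytics SQL dialect, that the JSON string sits in the column the question's schema calls "payload", and that the path to the array is $.data.observations. Note that this would still return the observations array as a single string per input row; producing one output row per array element would need a separate step.

-- Sketch only: json_extract availability, the "payload" column name,
-- and the '$.data.observations' path are all assumptions.
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    "created_at" TIMESTAMP,
    "version" INTEGER,
    "observations_json" VARCHAR(6000)  -- raw JSON array, one value per source row
);

CREATE OR REPLACE PUMP "myPUMP" AS 
   INSERT INTO "DESTINATION_SQL_STREAM"
      SELECT STREAM 
         "created_at",
         "version",
         json_extract("payload", '$.data.observations') AS "observations_json"
      FROM "SOURCE_SQL_STREAM_001";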