AWS Glue Grok pattern for timestamps with milliseconds

Tags: amazon-web-services, logstash, aws-glue, logstash-grok

I need to define a grok pattern in an AWS Glue Classifier to capture a datestamp with milliseconds on the datetime column of a file (which the AWS Glue Crawler otherwise converts to string). I used the predefined DATESTAMP_EVENTLOG pattern from AWS Glue and tried to add milliseconds to it:

Classification: datetime

Grok pattern: %{DATESTAMP_EVENTLOG:string}

Custom patterns:

MILLISECONDS (\d){3,7}
DATESTAMP_EVENTLOG %{YEAR}-%{MONTHNUM}-%{MONTHDAY}T%{HOUR}:%{MINUTE}:%{SECOND}.%{MILLISECONDS}

I still cannot get the pattern to work. Any ideas?

I never worked out how to do this with a classifier either, but I eventually got it working by writing a custom transform into the mapping script (Python) that converts the timestamp from string to datetime.

My working code is below. col2 is a column that the crawler designated as string; here I convert it to a Python datetime.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

from datetime import datetime

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the source table from the Glue Data Catalog
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "s3_events", table_name = "events", transformation_ctx = "datasource0")

# Parse the col2 string (here in day.month.year format) into a Python datetime
def convert_dates(rec):
    rec["col2"] = datetime.strptime(rec["col2"], "%d.%m.%Y")
    return rec

# Apply the conversion to every record in the DynamicFrame
custommapping1 = Map.apply(frame = datasource0, f = convert_dates, transformation_ctx = "custommapping1")

# Map columns to their target names and types; col2 is now a date
applymapping1 = ApplyMapping.apply(frame = custommapping1, mappings = [("col0", "string", "col0", "string"), ("col1", "string", "col1", "string"), ("col2", "date", "col2", "date")], transformation_ctx = "applymapping1")

selectfields2 = SelectFields.apply(frame = applymapping1, paths = ["col2", "col0", "col1"], transformation_ctx = "selectfields2")

# Align the schema with the target table in the Glue Data Catalog
resolvechoice3 = ResolveChoice.apply(frame = selectfields2, choice = "MATCH_CATALOG", database = "mydb", table_name = "mytable", transformation_ctx = "resolvechoice3")

resolvechoice4 = ResolveChoice.apply(frame = resolvechoice3, choice = "make_cols", transformation_ctx = "resolvechoice4")

# Write the result back to the catalog table
datasink5 = glueContext.write_dynamic_frame.from_catalog(frame = resolvechoice4, database = "mydb", table_name = "mytable", transformation_ctx = "datasink5")
job.commit()
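
Since the question is about timestamps with milliseconds, a variant of convert_dates (a sketch, assuming ISO-8601-like strings such as those matched by the DATESTAMP_EVENTLOG pattern above) could use Python's %f directive, which parses up to six digits of fractional seconds:

# Sketch: parse a string like "2019-03-05T12:34:56.789" (assumed format) into a
# datetime; %f consumes the fractional-seconds part (1 to 6 digits).
def convert_dates_ms(rec):
    rec["col2"] = datetime.strptime(rec["col2"], "%Y-%m-%dT%H:%M:%S.%f")
    return rec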

My misunderstanding about classifiers was that they are for specifying file formats beyond the built-in ones such as JSON and CSV, not for specifying the parsing format of an individual data type.

As user @lilline suggested, the best way to change a data type is with the ApplyMapping function.

When creating a Glue job, you can select the option: a proposed script generated by AWS Glue.

Then, when a table from the Glue catalog is selected as the source, you can make changes to the data types, column names, and so on.

The output code might look like this:

applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("paymentid", "string", "paymentid", "string"), ("updateddateutc", "string", "updateddateutc", "timestamp"), ...], transformation_ctx = "applymapping1")
effectively casting the updateddateutc string into a timestamp.
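
Alternatively (a sketch, not from the original answer), a single column can be cast in place with a ResolveChoice spec, assuming the cast:timestamp action is supported for the data in question:

# Sketch: cast one column instead of re-listing every mapping.
# "datasource0" and "updateddateutc" are assumed from the example above.
casted = ResolveChoice.apply(frame = datasource0, specs = [("updateddateutc", "cast:timestamp")], transformation_ctx = "casted")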

To create a classifier, you would need to specify every column in the file:

Classifier type: Grok
Classification: Name
Grok pattern: %{MY_TIMESTAMP}
Custom patterns: MY_TIMESTAMP (%{USERNAME:test}[,]%{YEAR:year}[-]%{MONTHNUM:mm}[-]%{MONTHDAY:dd} %{TIME:time})
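
For illustration, a hypothetical input line that this pattern should match; the fractional seconds are absorbed by the SECOND sub-pattern inside TIME:

jsmith,2019-03-05 12:34:56.789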

Are you running the above Python script as an ETL job in AWS Glue, or running it on its own? Could you clarify?

@CodeHunter Yes, I ran that script as an ETL job.