Python: extracting the "data" part from Amazon Ion files
Has anyone worked with Amazon Quantum Ledger Database (QLDB) files? If so, do you know how to extract the "data" part in order to build tables? Perhaps the data could be scraped with Python?
I am trying to get the "data" information from these files, which are stored in S3 (I have no access to QLDB, so I cannot query it directly), and then load the results into Glue.
I am trying to run an ETL job with Glue, but Glue does not like Amazon Ion files, so I need to either query the data from these files or pull the relevant information out of them. Thanks.
PS: By the "data" information I mean:
{
PersonId:"4tPW8xtKSGF5b6JyTihI1U",
LicenseNumber:"LEWISR261LL",
LicenseType:"Learner",
ValidFromDate:2016-12-20,
ValidToDate:2020-11-15
}
Have you tried using the ion-python library? Assuming the data mentioned in the question lives in a file named "myIonFile.ion", and that the file contains only Ion objects, we can read the data from the file as follows:
from amazon.ion import simpleion

with open("myIonFile.ion", "rb") as f:  # open the file
    data = f.read()                     # read the raw bytes
# single_value=False returns a list of every top-level Ion value in the file
iondata = simpleion.loads(data, single_value=False)
print(iondata[0]['PersonId'])           # prints "4tPW8xtKSGF5b6JyTihI1U"
For more guidance on using the Ion library, see the ion-python documentation.
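Building on that snippet: QLDB S3 export files wrap each document revision in a "revisions" list whose entries carry "data" and "metadata" fields, so pulling out just the "data" structs could look like the sketch below. The traversal is my assumption about the export layout (top-level Ion structs with a "revisions" list), not code from the original answer:

from amazon.ion import simpleion

with open("myIonFile.ion", "rb") as f:
    values = simpleion.loads(f.read(), single_value=False)

# Collect the user "data" struct from every revision of every export record.
rows = []
for value in values:
    for revision in value.get("revisions", []):
        if isinstance(revision, dict):       # skip Ion nulls in the list
            data = revision.get("data")
            if data is not None:
                rows.append(dict(data))      # IonPyDict behaves like a dict

print(rows)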
Also, I'm not sure about your use case, but interacting with QLDB can also be done through the QLDB driver for Python (pyqldb), which depends directly on the Ion library.
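If you did have ledger access, a minimal sketch of that driver-based route might look like this (the ledger and table names are hypothetical placeholders):

from pyqldb.driver.qldb_driver import QldbDriver

driver = QldbDriver(ledger_name="vehicle-registration")

def read_documents(txn):
    # Each row comes back as an Ion value that behaves like a dict
    cursor = txn.execute_statement("SELECT * FROM DriversLicense")
    return [dict(row) for row in cursor]

rows = driver.execute_lambda(read_documents)
print(rows)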
AWS Glue is able to read Amazon Ion input. However, many other services and applications are not, so it's a good idea to use Glue to convert the Ion data to JSON. Note that Ion is a superset of JSON, adding some data types to JSON, so converting Ion to JSON may cause some failures.
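To illustrate that superset caveat: Ion timestamps and decimals have no JSON equivalent, so a plain conversion needs a fallback. A hedged sketch; coercing the unsupported types to strings is my workaround, not part of the original answer:

import json
from amazon.ion import simpleion

ion_text = '{ LicenseType: "Learner", ValidFromDate: 2016-12-20T }'
value = simpleion.loads(ion_text)

# default=str downgrades Ion-only types (timestamps, decimals) to strings;
# lossy, but it keeps json.dumps from raising on them.
print(json.dumps(value, default=str))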
A good way to access your QLDB documents from the QLDB S3 export is to use Glue to extract the document data, store it in S3 as JSON, and query it with Amazon Athena. The process goes like this:
from awsglue.transforms import *
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from pyspark.sql.functions import explode
from pyspark.sql.functions import col
from awsglue.dynamicframe import DynamicFrame
# Initializations
sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
# Load the export data. 'vehicle-registration-ion' is the name of your database in the
# Glue catalog for the export data, and '2020' is the name of your table in the catalog.
dyn0 = glueContext.create_dynamic_frame.from_catalog(database = "vehicle-registration-ion", table_name = "2020", transformation_ctx = "datasource0")
# Only give me exported records with revisions
dyn1 = dyn0.filter(lambda line: "revisions" in line)
# Now give me just the revisions element and convert to a Spark DataFrame.
df0 = dyn1.select_fields("revisions").toDF()
# Revisions is an array, so give me all of the array items as top-level "rows" instead of being a nested array field.
df1 = df0.select(explode(df0.revisions))
# Now I have a list of elements with "col" as their root node and the revision
# fields ("data", "metadata", etc.) as sub-elements. Explode() gave me the "col"
# root node and some rows with null "data" fields, so filter out the nulls.
df2 = df1.where(col("col.data").isNotNull())
# Now convert back to a DynamicFrame
dyn2 = DynamicFrame.fromDF(df2, glueContext, "dyn2")
# Prep and send the output to S3
applymapping1 = ApplyMapping.apply(frame = dyn2, mappings = [("col.data", "struct", "data", "struct"), ("col.metadata", "struct", "metadata", "struct")], transformation_ctx = "applymapping1")
datasink0 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://YOUR_BUCKET_NAME_HERE/YOUR_DESIRED_OUTPUT_PATH_HERE/"}, format = "json", transformation_ctx = "datasink0")
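Once the Glue job has written the JSON to S3 (and the output has been cataloged for Athena), the extracted "data" fields can be queried. A hedged boto3 sketch; the database, table, and output path are hypothetical placeholders:

import boto3

athena = boto3.client("athena")

# Query the extracted "data" struct from the JSON the Glue job wrote to S3.
athena.start_query_execution(
    QueryString=(
        'SELECT data.PersonId, data.LicenseNumber '
        'FROM "vehicle_registration_json"."2020" LIMIT 10'
    ),
    QueryExecutionContext={"Database": "vehicle_registration_json"},
    ResultConfiguration={"OutputLocation": "s3://YOUR_BUCKET_NAME_HERE/athena-results/"},
)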
I hope this helps, Nosiphiwe.