Spark Python PySpark: how to flatten a column containing an array of dictionaries with embedded dictionaries (Spark NLP annotator output)


I am trying to extract the output from Spark NLP (using the pretrained pipeline "explain_document_dl"). I have spent a lot of time looking for ways to do it (UDFs, explode, etc.) but cannot get to a workable solution. Say I want to get the extracted values under result and metadata from the entities column. In that column there is an array containing multiple dictionaries.

When I use df.withColumn("entity_name", explode("entities.result")), only the value from the first dictionary is extracted.

The content of the entities column is a list of dictionaries.

An attempt to provide a reproducible example / recreate the dataframe (thanks to the suggestions from @jonathan below):

# an example of the contents of one cell:
d = [{"annotatorType": "chunk", "begin": 2740, "end": 2747, "result": "•Ability", "metadata": {"entity": "ORG", "sentence": "8", "chunk": "22"}, "embeddings": [], "sentence_embeddings": []}, {"annotatorType": "chunk", "begin": 2740, "end": 2747, "result": "Fedex", "metadata": {"entity": "ORG", "sentence": "8", "chunk": "22"}, "embeddings": [], "sentence_embeddings": []}]

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([StructField('annotatorType', StringType(), True),
                     StructField('begin', IntegerType(), True),
                     StructField('end', IntegerType(), True),
                     StructField('result', StringType(), True),
                     StructField('sentence', StringType(), True),
                     StructField('chunk', StringType(), True),
                     StructField('metadata', StructType([StructField('entity', StringType(), True),
                                                         StructField('sentence', StringType(), True),
                                                         StructField('chunk', StringType(), True)]), True),
                     StructField('embeddings', StringType(), True),
                     StructField('sentence_embeddings', StringType(), True)
                     ])
df = spark.createDataFrame(d, schema=schema)
df.show()
In this case of a single list of dictionaries it works:

+-------------+-----+----+--------+--------+-----+------------+----------+-------------------+
|annotatorType|begin| end|  result|sentence|chunk|    metadata|embeddings|sentence_embeddings|
+-------------+-----+----+--------+--------+-----+------------+----------+-------------------+
|        chunk| 2740|2747|•Ability|    null| null|[ORG, 8, 22]|        []|                 []|
|        chunk| 2740|2747|   Fedex|    null| null|[ORG, 8, 22]|        []|                 []|
+-------------+-----+----+--------+--------+-----+------------+----------+-------------------+

But I am stuck on how to apply this to the full column, where some cells contain arrays with multiple dictionaries (so one original cell corresponds to several rows).

I tried to apply the same schema to the entities column, and I had to convert the column to JSON first:

ent1 = ent1.withColumn("entities2", to_json("entities"))

It works for cells with one array of dictionaries, but gives null for cells with several arrays of dictionaries (the 4th row):

ent1.withColumn("entities2", from_json("entities2", schema)).select("entities2.*").show()

+-------------+-----+----+------+--------+-----+------------+----------+-------------------+
|annotatorType|begin| end|result|sentence|chunk|    metadata|embeddings|sentence_embeddings|
+-------------+-----+----+------+--------+-----+------------+----------+-------------------+
|        chunk|  166| 169|  Lyft|    null| null|[MISC, 0, 0]|        []|                 []|
|        chunk|   11|  14|  Lyft|    null| null|[MISC, 0, 0]|        []|                 []|
|        chunk|   52|  55|  Lyft|    null| null|[MISC, 1, 0]|        []|                 []|
|         null| null|null|  null|    null| null|        null|      null|               null|
+-------------+-----+----+------+--------+-----+------------+----------+-------------------+
The desired output is:

+-------------+-----+----+----------------+------------------------+----------+-------------------+
|annotatorType|begin| end|         result |    metadata            |embeddings|sentence_embeddings|
+-------------+-----+----+----------------+------------------------+----------+-------------------+
|        chunk|  166| 169|Lyft            |[MISC]                  |        []|                 []|
|        chunk|   11|  14|Lyft            |[MISC]                  |        []|                 []|
|        chunk|   52|  55|Lyft.           |[MISC]                  |        []|                 []|
|        chunk| [..]|[..]|[Lyft,Lyft,     |[MISC,MISC,MISC,        |        []|                 []| 
|             |     |    |FedEx Ground..] |ORG,LOC,ORG,ORG,ORG,ORG]|          |                   |     
+-------------+-----+----+----------------+------------------------+----------+-------------------+
I also tried converting each row to JSON, but I lose track of the original rows and get a flattened JSON instead:

new_df = sqlContext.read.json(ent2.rdd.map(lambda r: r.entities2))
new_df.show()
+-------------+-----+----------+----+------------+----------------+-------------------+
|annotatorType|begin|embeddings| end|    metadata|          result|sentence_embeddings|
+-------------+-----+----------+----+------------+----------------+-------------------+
|        chunk|  166|        []| 169|[0, MISC, 0]|            Lyft|                 []|
|        chunk|   11|        []|  14|[0, MISC, 0]|            Lyft|                 []|
|        chunk|   52|        []|  55|[0, MISC, 1]|            Lyft|                 []|
|        chunk|    0|        []|  11| [0, ORG, 0]|    FedEx Ground|                 []|
|        chunk|  717|        []| 720| [1, LOC, 4]|            Dock|                 []|
|        chunk|  811|        []| 816| [2, ORG, 5]|          Parcel|                 []|
|        chunk| 1080|        []|1095| [3, ORG, 6]|Parcel Assistant|                 []|
|        chunk| 1102|        []|1108| [4, ORG, 7]|         • Daily|                 []|
|        chunk| 1408|        []|1417| [5, ORG, 8]|      Assistants|                 []|
+-------------+-----+----------+----+------------+----------------+-------------------+
I tried to apply a UDF to go through the list of arrays in "entities":

from collections import defaultdict

def flatten(my_dict):
    d_result = defaultdict(list)
    for sub in my_dict:
        val = sub['result']
        d_result["result"].append(val)
    return d_result["result"]

ent = ent.withColumn('result', flatten(df.entities))

TypeError: Column is not iterable
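(For reference: the error happens because flatten is a plain Python function being called on a Column object, which cannot be iterated on the driver. A minimal sketch of registering it as a real UDF instead, assuming entities is an array of structs with a result field; flatten_results is a hypothetical name:)

from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType

# sketch only: registering the function as a UDF makes Spark hand the actual
# array value to Python instead of the Column object
@F.udf(returnType=ArrayType(StringType()))
def flatten_results(entities):
    return [sub['result'] for sub in entities] if entities else []

# hypothetical usage:
# ent = ent.withColumn('result', flatten_results(F.col('entities')))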
I found this post, which is very similar to my problem, but after converting the entities column to JSON I still could not solve it with the solution provided there.


Any help is appreciated!! Ideally a Python solution, but an example in Scala would also be useful.

The reason you are getting null is that the schema variable does not accurately represent the list of dictionaries you are passing in as data:

    from pyspark.shell import *
    from pyspark.sql.types import *

    schema = StructType([StructField('result', StringType(), True),
                 StructField('metadata', StructType((StructField('entity', StringType(), True),
                                                     StructField('sentence', StringType(), True),
                                                     StructField('chunk', StringType(), True))), True)])

    df = spark.createDataFrame(d1, schema=schema)
    df.show()
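The same idea can be carried back to the original column. This is only a sketch under the question's names (ent1, entities2), and it assumes a Spark version whose from_json accepts an ArrayType schema: the cells that came back as null hold a JSON array of dictionaries, so the struct schema has to be wrapped in an ArrayType before parsing, after which the array can be exploded into one row per annotation.

    from pyspark.sql.functions import from_json, explode, col
    from pyspark.sql.types import ArrayType

    # sketch: wrap the struct schema in an ArrayType so that cells holding a
    # JSON array of dictionaries parse instead of returning null
    array_schema = ArrayType(schema)

    parsed = ent1.withColumn("parsed", from_json(col("entities2"), array_schema))
    parsed.select(explode("parsed").alias("e")).select("e.*").show()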
If you prefer a more tailored solution, you can try a pure Python/pandas approach:

    import pandas as pd
    from pyspark.shell import *

    result = []
    metadata_entity = []
    for row in d1:
        result.append(row.get('result'))
        metadata_entity.append(row.get('metadata').get('entity'))

    schema = {'result': [result], 'metadata.entity': [metadata_entity]}
    pandas_df = pd.DataFrame(schema)

    df = spark.createDataFrame(pandas_df)
    df.show()

    # specific columns
    df.select('result','metadata.entity').show()
EDIT

After reading through all the approaches you have been trying, I think sc.parallelize still works for fairly complex cases. I don't have your original variables, but I could OCR your image and work from there. Hopefully all of this is useful.

You can always create a mock dataframe with the desired structure and its schema.

For complex cases with nested data types, you can use the SparkContext and read the resulting JSON format:

    import itertools

    from pyspark.shell import *
    from pyspark.sql.functions import *
    from pyspark.sql.types import *

    # assume two lists in two dictionary keys to make four cells
    # since I don't have but entities2, I can just replicate it
    sample = {
        'single_list': [{'annotatorType': 'chunk', 'begin': '166', 'end': '169', 'result': 'Lyft',
                         'metadata': {'entity': 'MISC', 'sentence': '0', 'chunk': '0'}, 'embeddings': [],
                         'sentence_embeddings': []},
                        {'annotatorType': 'chunk', 'begin': '11', 'end': '14', 'result': 'Lyft',
                         'metadata': {'entity': 'MISC', 'sentence': '0', 'chunk': '0'}, 'embeddings': [],
                         'sentence_embeddings': []},
                        {'annotatorType': 'chunk', 'begin': '52', 'end': '55', 'result': 'Lyft',
                         'metadata': {'entity': 'MISC', 'sentence': '1', 'chunk': '0'}, 'embeddings': [],
                         'sentence_embeddings': []}],
        'frankenstein': [
            {'annotatorType': 'chunk', 'begin': '0', 'end': '11', 'result': 'FedEx Ground',
             'metadata': {'entity': 'ORG', 'sentence': '0', 'chunk': '0'}, 'embeddings': [],
             'sentence_embeddings': []},
            {'annotatorType': 'chunk', 'begin': '717', 'end': '720', 'result': 'Dock',
             'metadata': {'entity': 'LOC', 'sentence': '4', 'chunk': '1'}, 'embeddings': [],
             'sentence_embeddings': []},
            {'annotatorType': 'chunk', 'begin': '811', 'end': '816', 'result': 'Parcel',
             'metadata': {'entity': 'ORG', 'sentence': '5', 'chunk': '2'}, 'embeddings': [],
             'sentence_embeddings': []},
            {'annotatorType': 'chunk', 'begin': '1080', 'end': '1095', 'result': 'Parcel Assistant',
             'metadata': {'entity': 'ORG', 'sentence': '6', 'chunk': '3'}, 'embeddings': [],
             'sentence_embeddings': []},
            {'annotatorType': 'chunk', 'begin': '1102', 'end': '1108', 'result': '* Daily',
             'metadata': {'entity': 'ORG', 'sentence': '7', 'chunk': '4'}, 'embeddings': [],
             'sentence_embeddings': []},
            {'annotatorType': 'chunk', 'begin': '1408', 'end': '1417', 'result': 'Assistants',
             'metadata': {'entity': 'ORG', 'sentence': '8', 'chunk': '5'}, 'embeddings': [],
             'sentence_embeddings': []}]
    }

    # since they are structurally different, get two dataframes
    df_single_list = spark.read.json(sc.parallelize(sample.get('single_list')))
    df_frankenstein = spark.read.json(sc.parallelize(sample.get('frankenstein')))

    # print better the table first border
    print('\n')

    # list to create a dataframe schema
    annotatorType = []
    begin = []
    embeddings = []
    end = []
    metadata = []
    result = []
    sentence_embeddings = []

    # PEP8 here to have an UDF instead of lambdas
    # probably a dictionary with actions to avoid IF statements
    function_metadata = lambda x: [x.entity]
    for k, i in enumerate(df_frankenstein.columns):
        if i == 'annotatorType':
            annotatorType.append(df_frankenstein.select(i).rdd.flatMap(lambda x: x).collect())
        if i == 'begin':
            begin.append(df_frankenstein.select(i).rdd.flatMap(lambda x: x).collect())
        if i == 'embeddings':
            embeddings.append(df_frankenstein.select(i).rdd.flatMap(lambda x: x).collect())
        if i == 'end':
            end.append(df_frankenstein.select(i).rdd.flatMap(lambda x: x).collect())
        if i == 'metadata':
            _temp = list(map(function_metadata, df_frankenstein.select(i).rdd.flatMap(lambda x: x).collect()))
            metadata.append(list(itertools.chain.from_iterable(_temp)))
        if i == 'result':
            result.append(df_frankenstein.select(i).rdd.flatMap(lambda x: x).collect())
        if i == 'sentence_embeddings':
            sentence_embeddings.append(df_frankenstein.select(i).rdd.flatMap(lambda x: x).collect())

    # headers
    annotatorType_header = 'annotatorType'
    begin_header = 'begin'
    embeddings_header = 'embeddings'
    end_header = 'end'
    metadata_header = 'metadata'
    result_header = 'result'
    sentence_embeddings_header = 'sentence_embeddings'
    metadata_entity_header = 'metadata.entity'

    frankenstein_schema = StructType(
        [StructField(annotatorType_header, ArrayType(StringType())),
         StructField(begin_header, ArrayType(StringType())),
         StructField(embeddings_header, ArrayType(StringType())),
         StructField(end_header, ArrayType(StringType())),
         StructField(metadata_header, ArrayType(StringType())),
         StructField(result_header, ArrayType(StringType())),
         StructField(sentence_embeddings_header, ArrayType(StringType()))
         ])

    # list of lists of lists of lists of ... lists
    frankenstein_list = [[annotatorType, begin, embeddings, end, metadata, result, sentence_embeddings]]
    df_frankenstein = spark.createDataFrame(frankenstein_list, schema=frankenstein_schema)

    print(df_single_list.schema)
    print(df_frankenstein.schema)

    # let's see how it is
    df_single_list.select(
        annotatorType_header,
        begin_header,
        end_header,
        result_header,
        array(metadata_entity_header),
        embeddings_header,
        sentence_embeddings_header).show()

    # let's see again
    df_frankenstein.select(
        annotatorType_header,
        begin_header,
        end_header,
        result_header,
        metadata_header,
        embeddings_header,
        sentence_embeddings_header).show()
Output:

    StructType(List(StructField(annotatorType,StringType,true),StructField(begin,StringType,true),StructField(embeddings,ArrayType(StringType,true),true),StructField(end,StringType,true),StructField(metadata,StructType(List(StructField(chunk,StringType,true),StructField(entity,StringType,true),StructField(sentence,StringType,true))),true),StructField(result,StringType,true),StructField(sentence_embeddings,ArrayType(StringType,true),true)))
    StructType(List(StructField(annotatorType,ArrayType(StringType,true),true),StructField(begin,ArrayType(StringType,true),true),StructField(embeddings,ArrayType(StringType,true),true),StructField(end,ArrayType(StringType,true),true),StructField(metadata,ArrayType(StringType,true),true),StructField(result,ArrayType(StringType,true),true),StructField(sentence_embeddings,ArrayType(StringType,true),true)))

    +-------------+-----+---+------+----------------------+----------+-------------------+
    |annotatorType|begin|end|result|array(metadata.entity)|embeddings|sentence_embeddings|
    +-------------+-----+---+------+----------------------+----------+-------------------+
    |        chunk|  166|169|  Lyft|                [MISC]|        []|                 []|
    |        chunk|   11| 14|  Lyft|                [MISC]|        []|                 []|
    |        chunk|   52| 55|  Lyft|                [MISC]|        []|                 []|
    +-------------+-----+---+------+----------------------+----------+-------------------+
    +--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
    |       annotatorType|               begin|                 end|              result|            metadata|          embeddings| sentence_embeddings|
    +--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
    |[[chunk, chunk, c...|[[0, 717, 811, 10...|[[11, 720, 816, 1...|[[FedEx Ground, D...|[[ORG, LOC, ORG, ...|[[[], [], [], [],...|[[[], [], [], [],...|
    +--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
You have to select from each dataframe separately because their data types are different, but the content is ready to use (if I understood your requirement correctly from the desired output).


Comments:

Please read and answer the related questions. Spark NLP uses a lot of custom schemas, so understanding the upstream steps is crucial here.

I tried to recreate the Spark dataframe, but I only get NULLs. I'm not familiar with Spark dataframes, so I may be missing something. I provided the schema for reference.

Thanks Jonathan! I also found that I can use json_schema = spark.read.json(df.rdd.map(lambda row: row.entities2
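A minimal sketch of the schema-inference idea mentioned in that last comment, instead of writing the schema by hand (variable names follow the thread; the exact inference behaviour depends on the Spark version): spark.read.json infers the schema of the array elements, which can then be wrapped in an ArrayType and reused with from_json so the parsed annotations stay attached to their original rows.

    from pyspark.sql.functions import from_json, explode, col
    from pyspark.sql.types import ArrayType

    # infer the element schema from the JSON strings, then parse in place
    inferred = spark.read.json(df.rdd.map(lambda row: row.entities2)).schema
    parsed = df.withColumn("parsed", from_json(col("entities2"), ArrayType(inferred)))
    parsed.select(explode("parsed").alias("e")).select("e.result", "e.metadata.entity").show()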