How can I maintain the order of the word-to-index mapping for token feature arrays in PySpark (Apache Spark)?
Below is my PySpark use case. I currently have a DataFrame of word tokens, and I want to build a vocabulary and then replace each word with its index in that vocabulary. Here is my DataFrame:
>>> wordDataFrame.show(10, False)
+---+-------------------------------------------------+
|id |words |
+---+-------------------------------------------------+
|0 |[hi, i, heard, about, spark] |
|1 |[i, wish, java, could, use, case, spark, classes]|
+---+-------------------------------------------------+
When I use CountVectorizer:
from pyspark.ml.feature import CountVectorizer

# note: PySpark's CountVectorizer has no lowercase option; lowercase the
# tokens beforehand (e.g. with RegexTokenizer) if you need that
cv = CountVectorizer(binary=True)\
    .setInputCol("words")\
    .setOutputCol("countVec")\
    .setMinTF(1)\
    .setMinDF(1)
fittedCV = cv.fit(wordDataFrame)
fittedCV.transform(wordDataFrame).show(2, False)
+---+-------------------------------------------------+---------------------------------------------------------+
|id |words |features |
+---+-------------------------------------------------+---------------------------------------------------------+
|0 |[hi, i, heard, about, spark] |(11,[0,1,6,8,9],[1.0,1.0,1.0,1.0,1.0]) |
|1 |[i, wish, java, could, use, case, spark, classes]|(11,[0,1,2,3,4,5,7,10],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])|
+---+-------------------------------------------------+---------------------------------------------------------+
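Each features entry is a SparseVector: (size, sorted indices of the vocabulary terms present, values). Because the indices are stored sorted, the original token order is already lost at this point. A plain-Python sketch decoding row id 0, using the vocabulary order printed further below:

```python
# vocabulary order as printed by fittedCV.vocabulary below
vocabulary = ['i', 'spark', 'wish', 'use', 'case', 'java',
              'hi', 'could', 'about', 'heard', 'classes']

indices = [0, 1, 6, 8, 9]   # sparse indices for row id 0
present = [vocabulary[i] for i in indices]
print(present)  # ['i', 'spark', 'hi', 'about', 'heard'] -- token order is gone
```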
Next, here is my vocabulary:
>>> from pprint import pprint
>>> pprint(dict([(i, x) for i,x in enumerate(fittedCV.vocabulary)]))
{0: 'i',
1: 'spark',
2: 'wish',
3: 'use',
4: 'case',
5: 'java',
6: 'hi',
7: 'could',
8: 'about',
9: 'heard',
10: 'classes'}
What I am looking for is this:
[hi, i , heard, about, spark] -> [6, 0, 9, 8, 1] instead of [0,1,6,8,9]
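For concreteness, the desired mapping can be computed from the vocabulary above with a plain Python dict (a sketch; in Spark you would wrap the lookup in a udf over the words column):

```python
# vocabulary as printed by fittedCV.vocabulary above
vocabulary = ['i', 'spark', 'wish', 'use', 'case', 'java',
              'hi', 'could', 'about', 'heard', 'classes']

# word -> vocabulary index
lookup = {word: i for i, word in enumerate(vocabulary)}

# map a token list to indices, preserving token order
tokens = ["hi", "i", "heard", "about", "spark"]
indices = [lookup[w] for w in tokens]
print(indices)  # [6, 0, 9, 8, 1]
```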
Basically, I want to preserve the order of the tokens. I tried looking through the docs, but it seems every vectorizer discards position information. In my case I need to keep positions, because these features feed into an LSTM layer further downstream.

I recently had a use case similar to yours. I ended up using StringIndexer:
l = [
    (0, ["hi", "i", "heard", "about", "spark"]),
    (1, ["i", "wish", "java", "could", "use", "case", "spark", "classes"])
]
wordDataFrame = spark.createDataFrame(l, ['id', 'words'])
wordDataFrame.show()
which shows:
+---+--------------------+
| id| words|
+---+--------------------+
| 0|[hi, i, heard, ab...|
| 1|[i, wish, java, c...|
+---+--------------------+
from pyspark.ml.feature import StringIndexer
from pyspark.sql import functions as F
# define indexer
indexer = StringIndexer(inputCol="word_strings", outputCol="word_index")
# use explode to map col<array<string>> => col<string>
# fit indexer on col<string>
indexer = indexer.fit(
    wordDataFrame
    .select(F.explode(F.col("words")).alias("word_strings"))
)
print(indexer.labels)
['i', 'spark', 'heard', 'classes', 'java', 'could', 'use', 'hi', 'case', 'about', 'wish']
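Reading the fitted labels back as a word → index map (plain Python, using the labels just printed) shows what each token becomes; note that StringIndexer orders labels by frequency, not by first occurrence:

```python
# labels as printed by indexer.labels above
labels = ['i', 'spark', 'heard', 'classes', 'java', 'could',
          'use', 'hi', 'case', 'about', 'wish']

# StringIndexer assigns each label its position as a double
word_index = {w: float(i) for i, w in enumerate(labels)}

row0 = ["hi", "i", "heard", "about", "spark"]
print([word_index[w] for w in row0])  # [7.0, 0.0, 2.0, 9.0, 1.0]
```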
indexedWordDataFrame = (
    indexer
    .transform(
        # use explode to map col<array<string>> => col<string>
        # use indexer to transform col<string> => col<double>
        wordDataFrame
        .withColumn("word_strings", F.explode(F.col("words")))
    )
    # use groupby + collect_list to map col<double> => col<array<double>>
    .groupby("id", "words")
    .agg(F.collect_list("word_index").alias("word_index_array"))
)
indexedWordDataFrame.orderBy("id").show()
+---+--------------------+--------------------+
| id| words| word_index_array|
+---+--------------------+--------------------+
| 0|[hi, i, heard, ab...|[7.0, 0.0, 2.0, 9...|
| 1|[i, wish, java, c...|[0.0, 10.0, 4.0, ...|
+---+--------------------+--------------------+
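One caveat: collect_list after a groupBy does not guarantee that values arrive in the original token order (it happens to work here, but Spark makes no such promise after a shuffle). A safer variant uses F.posexplode so each token carries its position, collects (pos, index) structs, and sorts by position before dropping it. The core idea in plain Python, with the positions and indices from row id 0 (the posexplode/struct/sort_array wiring in Spark is left out):

```python
# (position, word_index) pairs as posexplode + StringIndexer would emit
# for row id 0; arrival order after a shuffle is not guaranteed
rows = [(3, 9.0), (0, 7.0), (4, 1.0), (1, 0.0), (2, 2.0)]

# sort by position, then drop it, restoring the original token order
word_index_array = [idx for pos, idx in sorted(rows)]
print(word_index_array)  # [7.0, 0.0, 2.0, 9.0, 1.0]
```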