Apache Spark: Encoding sentences as a sequence model with Spark


I'm doing text classification and use pyspark.ml.feature.Tokenizer to tokenize the text. However, CountVectorizer turns the tokenized word list into a bag-of-words model rather than a sequence model.

Suppose we have the following DataFrame with columns id and texts:

 id | texts
----|----------
 0  | Array("a", "b", "c")
 1  | Array("a", "b", "b", "c", "a")
Each row in texts is a document of type Array[String]. Invoking fit on CountVectorizer produces a CountVectorizerModel with vocabulary (a, b, c), and after transformation the output column "vector" contains:

 id | texts                           | vector
----|---------------------------------|---------------
 0  | Array("a", "b", "c")            | (3,[0,1,2],[1.0,1.0,1.0])
 1  | Array("a", "b", "b", "c", "a")  | (3,[0,1,2],[2.0,2.0,1.0])
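
For reference, a minimal sketch that reproduces the bag-of-words output above (the SparkSession variable name spark is just illustrative):

from pyspark.sql import Row, SparkSession
from pyspark.ml.feature import CountVectorizer

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    Row(id=0, texts=["a", "b", "c"]),
    Row(id=1, texts=["a", "b", "b", "c", "a"]),
])

# fit() learns the vocabulary (a, b, c); transform() produces per-document
# term counts, i.e. a bag-of-words vector with no notion of token order
cv = CountVectorizer(inputCol="texts", outputCol="vector")
cv.fit(df).transform(df).show(truncate=False)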
What I want instead is a sequence encoding, where each token is mapped to an index in order (e.g. for row 1):


So, can I write a custom function to run this encoding in parallel? Or is there a library other than Spark that can do this in parallel?

You can use StringIndexer together with explode:

from pyspark.sql import Row, SparkSession
from pyspark.sql.functions import explode, collect_list
from pyspark.ml.feature import StringIndexer

spark_session = SparkSession.builder.getOrCreate()

df = spark_session.createDataFrame([
    Row(id=0, texts=["a", "b", "c"]),
    Row(id=1, texts=["a", "b", "b", "c", "a"])
])

data = df.select("id", explode("texts").alias("texts"))
indexer = StringIndexer(inputCol="texts", outputCol="indexed", stringOrderType="alphabetAsc")
indexer\
    .fit(data)\
    .transform(data)\
    .groupBy("id")\
    .agg(collect_list("texts").alias("texts"), collect_list("indexed").alias("vector"))\
    .show(20, False)
Output:

+---+---------------+-------------------------+
|id |texts          |vector                   |
+---+---------------+-------------------------+
|0  |[a, b, c]      |[0.0, 1.0, 2.0]          |
|1  |[a, b, b, c, a]|[0.0, 1.0, 1.0, 2.0, 0.0]|
+---+---------------+-------------------------+
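
One caveat: collect_list after a groupBy does not guarantee that the original token order is preserved. If the sequence order must be deterministic, a sketch using posexplode to carry each token's position along (building on the df and indexer defined above) could look like this:

from pyspark.sql.functions import posexplode, struct, sort_array, col

# posexplode keeps each token's position so the sequence can be
# restored after the groupBy by sorting on that position
data = df.select("id", posexplode("texts").alias("pos", "texts"))
indexer.fit(data) \
    .transform(data) \
    .groupBy("id") \
    .agg(sort_array(collect_list(struct("pos", "indexed"))).alias("seq")) \
    .select("id", col("seq.indexed").alias("vector")) \
    .show(20, False)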