LSH in Apache Spark's Scala and Python APIs


I have been following this article to do some string matching with the LSH algorithm. For some reason I get results through the Python API, but not in Scala, and I cannot see what is actually missing from the Scala code.

Here are the two pieces of code:

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH

query = spark.createDataFrame(["Bob Jones"], "string").toDF("text")

db = spark.createDataFrame(["Tim Jones"], "string").toDF("text")

model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)

db_hashed = model.transform(db)
query_hashed = model.transform(query)

model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()
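As a side note, the joined output can be made easier to read by selecting only the matched strings and the distance. A small sketch, assuming the model, db_hashed and query_hashed defined above (approxSimilarityJoin returns struct columns datasetA and datasetB plus the Jaccard distance in distCol):

matches = model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75)
# datasetA/datasetB are structs holding the original rows; distCol holds the
# Jaccard distance, so pull out just the two text columns and the distance.
matches.select("datasetA.text", "datasetB.text", "distCol").show()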
It returns a match, but the Scala version returns nothing. Here is the code:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, MinHashLSH, MinHashLSHModel, NGram, RegexTokenizer}

import spark.implicits._ // already in scope in spark-shell; needed for .toDF otherwise

val tokenizer = new RegexTokenizer().setPattern("").setInputCol("text").setMinTokenLength(1).setOutputCol("tokens")
val ngram = new NGram().setN(3).setInputCol("tokens").setOutputCol("ngrams")
val vectorizer = new HashingTF().setInputCol("ngrams").setOutputCol("vectors")
val lsh = new MinHashLSH().setInputCol("vectors").setOutputCol("lsh")
val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))
val query = Seq("Bob Jones").toDF("text")
val db = Seq("Tim Jones").toDF("text")
val model = pipeline.fit(db)
val dbHashed = model.transform(db)
val queryHashed = model.transform(query)
model.stages.last.asInstanceOf[MinHashLSHModel].approxSimilarityJoin(dbHashed, queryHashed, 0.75).show

I am using Spark 3.0. I know it is a preview, but I cannot test on other versions, and I doubt there is a bug like this :)

I found that when I add setNumHashTables(10), the Scala code returns results. I still do not understand why Python returns results without setting the number of hash tables, though. I also found that the Scala code works on Spark 2.4.4 without setting the number of hash tables, so clearly something changed in 3.0. Here is the working code:
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, MinHashLSH, MinHashLSHModel, NGram, RegexTokenizer}

import spark.implicits._

val tokenizer = new RegexTokenizer().setPattern("").setInputCol("text").setMinTokenLength(1).setOutputCol("tokens")
val ngram = new NGram().setN(3).setInputCol("tokens").setOutputCol("ngrams")
val vectorizer = new HashingTF().setInputCol("ngrams").setOutputCol("vectors")
val lsh = new MinHashLSH().setNumHashTables(10).setInputCol("vectors").setOutputCol("lsh")
val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))
val query = Seq("Bob Jones").toDF("text")
val db = Seq("Tim Jones").toDF("text")
val model = pipeline.fit(db)
val dbHashed = model.transform(db)
val queryHashed = model.transform(query)
model.stages.last.asInstanceOf[MinHashLSHModel].approxSimilarityJoin(dbHashed, queryHashed, 0.75).show
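For comparison, the same knob can be made explicit on the Python side. A minimal sketch, assuming the pipeline stages from the question, rather than a definitive fix:

from pyspark.ml.feature import MinHashLSH

# Sketch: set numHashTables explicitly instead of relying on the default,
# so both APIs run the same LSH configuration. More hash tables raise the
# chance that similar items share a bucket in approxSimilarityJoin.
lsh = MinHashLSH(inputCol="vectors", outputCol="lsh", numHashTables=10)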