Scala Spark LSH approxSimilarityJoin takes too much time


Spark LSH approxSimilarityJoin is taking far too long:

import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel, MinHashLSH, NGram}
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}
import org.apache.spark.sql.types.DataTypes

val column = "name"

val new_df   = df.select("id", "name", "duns_number", "country_id")     // ~1.7 million records
val new_df_1 = df.select("index", "name", "duns_number", "country_id")  // ~0.7 million records

// Build character 4-grams from the pre-tokenized array column "_" + column
// (the step that produces that column is not shown in this snippet).
val n_gram = new NGram()
  .setInputCol("_" + column)
  .setN(4)
  .setOutputCol("n_gram_column")

val n_gram_df   = n_gram.transform(new_df)
val n_gram_df_1 = n_gram.transform(new_df_1)

// Drop rows whose vectorized n-grams are all zeros (MinHashLSH cannot hash empty vectors).
val validateEmptyVector = udf({ v: Vector => v.numNonzeros > 0 }, DataTypes.BooleanType)

val vectorModeler: CountVectorizerModel = new CountVectorizer()
  .setInputCol("n_gram_column")
  .setOutputCol("tokenize")
  .setVocabSize(456976)
  .setMinDF(1)
  .fit(n_gram_df)

val vectorizedProductsDF = vectorModeler.transform(n_gram_df)
  .filter(validateEmptyVector(col("tokenize")))
  .select(col("id"), col(column), col("tokenize"), col("duns_number"), col("country_id"))

val vectorizedProductsDF_1 = vectorModeler.transform(n_gram_df_1)
  .filter(validateEmptyVector(col("tokenize")))
  .select(col("tokenize"), col(column), col("duns_number"), col("country_id"), col("index"))

val minLshConfig = new MinHashLSH()
  .setNumHashTables(3)
  .setInputCol("tokenize")
  .setOutputCol("hash")

val lshModel = minLshConfig.fit(vectorizedProductsDF)
val transform_1 = lshModel.transform(vectorizedProductsDF)
val transform_2 = lshModel.transform(vectorizedProductsDF_1)

// Approximate join of the two datasets on Jaccard distance <= 0.42
val result = lshModel.approxSimilarityJoin(transform_1, transform_2, 0.42).toDF
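
For context, NGram expects an array-of-strings column, so the "_" + column input (_name here) has to come from a tokenization step that is not shown above. A minimal sketch of such a step, assuming a simple character-level split; the splitChars UDF below is a hypothetical illustration, not part of the original code:

import org.apache.spark.sql.functions.{col, udf}

// Hypothetical helper: split each name into single characters so NGram can build character 4-grams.
val splitChars = udf { s: String =>
  if (s == null) Array.empty[String] else s.toLowerCase.map(_.toString).toArray
}

val df_with_chars = df.withColumn("_" + column, splitChars(col(column)))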

The last line of code (the approxSimilarityJoin) takes far too much time, and the last few tasks of that stage run for a very long time.

I have tried 13 executors with 4 cores each, and the following setting:


spark.sql.shuffle.partitions=600
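
For reference, that setup corresponds roughly to the session configuration below; the builder call and app name are assumptions reconstructed from the numbers quoted above, not taken from the original post:

import org.apache.spark.sql.SparkSession

// Assumed reconstruction of the configuration described above:
// 13 executors x 4 cores = 52 cores, 600 shuffle partitions.
val spark = SparkSession.builder()
  .appName("lsh-approx-similarity-join")
  .config("spark.executor.instances", "13")
  .config("spark.executor.cores", "4")
  .config("spark.sql.shuffle.partitions", "600")
  .getOrCreate()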

Go easy with / be careful about the number of partitions; you can take a look at this article. Yes, just from the specs you listed, that is far too many partitions: with 52 cores you should try at most 100-150 partitions (the usual "default" recommendation is about 2x the number of cores).
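
A minimal sketch of applying that suggestion, assuming the SparkSession is available as spark and using 2 x 52 = 104 shuffle partitions (any value in the 100-150 range would match the advice):

// Roughly 2x the 52 available cores, per the recommendation above.
spark.conf.set("spark.sql.shuffle.partitions", "104")

// Re-run the expensive step with the lower shuffle parallelism.
val resultDF = lshModel.approxSimilarityJoin(transform_1, transform_2, 0.42).toDF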