
Apache Spark: RDD partition issue when running an ALS program on a Spark standalone cluster

Tags: apache-spark, pyspark, apache-spark-standalone

I am running my ALS program in PySpark on a two-node Spark cluster. If I disable the checkpointInterval parameter of ALS, it runs fine for 20 iterations; for more than 20 iterations, checkpointInterval needs to be enabled. I have also provided a checkpoint directory, but it gives me the error below, and I have not been able to resolve it.

The same program runs fine on a single machine with 25 iterations.

My error is:

 Py4JJavaError: An error occurred while calling o2574.fit.
 : org.apache.spark.SparkException: Checkpoint RDD has a different number of
 partitions from original RDD. Original RDD [ID: 3265, num of partitions: 10];
 Checkpoint RDD [ID: 3266, num of partitions: 0].
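A checkpoint RDD with 0 partitions usually means the driver found no partition files when it read the checkpoint back. With a checkpoint directory on a node-local path, each executor writes its partition files to its own disk, so the directory the driver lists can be empty; the Spark documentation accordingly notes that the directory passed to setCheckpointDir must be an HDFS path when running on a cluster.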
My code is:

 import time
 start = time.time()

 from pyspark.sql import SparkSession

 # Connect to the standalone cluster master
 spark = (SparkSession.builder
          .master('spark://172.16.12.200:7077')
          .appName('new')
          .getOrCreate())

 # Amazon "Musical Instruments" 5-core reviews; keep item, rating, user columns
 ndf = spark.read.json("Musical_Instruments_5.json")
 pd = ndf.select(ndf['asin'], ndf['overall'], ndf['reviewerID'])

 # Checkpoint directory (a local path on the driver's filesystem)
 spark.sparkContext.setCheckpointDir("/home/npproject/jupyter_files/checkpoints")

 from pyspark.ml.evaluation import RegressionEvaluator
 from pyspark.ml.recommendation import ALS
 from pyspark.ml.tuning import TrainValidationSplit, ParamGridBuilder
 from pyspark.ml.feature import StringIndexer
 from pyspark.ml import Pipeline
 from pyspark.sql.functions import col

 # Index the string columns (asin, reviewerID) into numeric IDs for ALS
 indexer = [StringIndexer(inputCol=column, outputCol=column + "_index")
            for column in list(set(pd.columns) - set(['overall']))]

 pipeline = Pipeline(stages=indexer)
 transformed = pipeline.fit(pd).transform(pd)
 (training, test) = transformed.randomSplit([0.8, 0.2])
 # checkpointInterval=5 checkpoints the intermediate RDDs every 5 iterations
 als = ALS(maxIter=25, regParam=0.09, rank=25,
           userCol="reviewerID_index", itemCol="asin_index",
           ratingCol="overall", checkpointInterval=5,
           coldStartStrategy="drop", nonnegative=True)
 model = als.fit(training)

 # Evaluate RMSE on the held-out 20% split
 evaluator = RegressionEvaluator(metricName="rmse",
                                 labelCol="overall",
                                 predictionCol="prediction")
 predictions = model.transform(test)
 rmse = evaluator.evaluate(predictions)
 print("RMSE=" + str(rmse))
 print("Rank: ", model.rank)
 print("MaxIter: ", model._java_obj.parent().getMaxIter())
 print("RegParam: ", model._java_obj.parent().getRegParam())

 # recommendForAllUsers returns a DataFrame; show() only prints, returning None
 user_recs = model.recommendForAllUsers(10)
 user_recs.show(20)

 end = time.time()
 print("execution time", end - start)

Hi, regarding the suggested duplicate: I have tried mounting the directory on all worker nodes, but when I give the mounted checkpoint-directory path in the program, it reports an invalid checkpoint directory. Could you tell me the correct way to mount it for the cluster?
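If the shared directory comes from a mount (e.g. an NFS export) rather than HDFS, it must be mounted at the same path on the driver and on every worker, and an explicit file:// scheme makes that intent unambiguous. A minimal sketch, assuming a hypothetical shared mount at /mnt/shared:

 # Hypothetical NFS export, mounted at the same path on every node
 spark.sparkContext.setCheckpointDir("file:///mnt/shared/checkpoints")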