Apache Spark: cache memory issue on a Spark standalone cluster from a Jupyter notebook

Tags: apache-spark, caching, pyspark, jupyter-notebook

My code was running fine before, but now it shows a cache memory issue. My program loads, transforms, and processes DataFrames, and runs in a Jupyter notebook connected to a PySpark shell. I don't understand what the main problem is or how to fix it. Any help is much appreciated.

My code is:

import time
start = time.time()

from pyspark.sql import SparkSession

# Connect to the standalone cluster master and load the reviews dataset
spark = SparkSession.builder.master('spark://172.16.12.200:7077').appName('new').getOrCreate()
ndf = spark.read.json("Musical_Instruments.json")
pd = ndf.select(ndf['asin'], ndf['overall'], ndf['reviewerID'])

spark.sparkContext.setCheckpointDir("/home/npproject/jupyter_files/checkpoints")

from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import TrainValidationSplit, ParamGridBuilder
from pyspark.ml.feature import StringIndexer
from pyspark.ml import Pipeline
from pyspark.sql.functions import col

# Index the string columns (asin, reviewerID) so ALS can use them as integer IDs
indexer = [StringIndexer(inputCol=column, outputCol=column + "_index")
           for column in list(set(pd.columns) - set(['overall']))]

pipeline = Pipeline(stages=indexer)
transformed = pipeline.fit(pd).transform(pd)
(training, test) = transformed.randomSplit([0.8, 0.2])

# Train an ALS recommender on the 80% split
als = ALS(maxIter=5, regParam=0.09, rank=25,
          userCol="reviewerID_index", itemCol="asin_index", ratingCol="overall",
          coldStartStrategy="drop", nonnegative=True)
model = als.fit(training)

# Evaluate RMSE on the held-out 20% split
evaluator = RegressionEvaluator(metricName="rmse", labelCol="overall", predictionCol="prediction")
predictions = model.transform(test)
rmse = evaluator.evaluate(predictions)
print("RMSE=" + str(rmse))
print("Rank: ", model.rank)
print("MaxIter: ", model._java_obj.parent().getMaxIter())
print("RegParam: ", model._java_obj.parent().getRegParam())

# Top-10 recommendations for every user
user_recs = model.recommendForAllUsers(10).show(20)

end = time.time()
print("execution time", end - start)
Error:
Py4JJavaError: An error occurred while calling o40.json.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 5, 172.16.12.208, executor 1): java.io.FileNotFoundException: File file:/home/npproject/jupyter_files/Musical_Instruments.json does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.

Your traceback shows that Musical_Instruments.json does not exist. Have you verified that? And what exactly is the cache memory issue you are seeing? Please post that trace as well.

Hi Suraj. The Musical_Instruments.json file does exist in the folder. As for the cache memory issue, see my error message: "It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved." I also tried what the error message suggests, i.e. recreating the DataFrame and changing the file location, but it still doesn't work.

Does the file exist on all worker nodes of the cluster? If not, you will have to replicate the folder structure and place the file on every node.

Thanks for the answer. After searching I understand that I have to create a copy of the file on every node, since I am not using any distributed file system and am only reading the file from the master node's local disk. But is there a way to run this on a Spark standalone cluster without creating copies on the nodes, as I don't want to use HDFS?

There are several options depending on your case and data load. If your data is small, you could consider a broadcast variable. For larger data loads, NFS can be used as an alternative to HDFS. Alternatively, you can add the file through the SparkContext with sc.addFile() and then access it on the workers with SparkFiles.get(). That approach saves you the overhead of placing the file on the workers yourself, but the underlying process stays the same. In my opinion HDFS is the best option, as it gives you a fairly seamless experience in comparison.
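A minimal sketch of the sc.addFile() / SparkFiles.get() route described in the last comment. It assumes the review file is line-delimited JSON (as it must be for the question's spark.read.json call to have worked before), that only the asin, overall, and reviewerID fields from the question are needed, and that the file is small enough to be parsed in a single task; the path is the one from the error message:

import json

from pyspark import SparkFiles
from pyspark.sql import Row

# Ship the driver-local file to every node of the standalone cluster
spark.sparkContext.addFile("/home/npproject/jupyter_files/Musical_Instruments.json")

def parse_local_copy(_):
    # Runs inside a task: SparkFiles.get() resolves to this executor's own copy
    with open(SparkFiles.get("Musical_Instruments.json")) as f:
        for line in f:
            r = json.loads(line)
            yield Row(asin=r["asin"], overall=float(r["overall"]), reviewerID=r["reviewerID"])

# Parse the file in one task and build the same three-column DataFrame the question
# otherwise gets from ndf.select(...)
records = spark.sparkContext.parallelize([0], numSlices=1).mapPartitions(parse_local_copy)
pd = spark.createDataFrame(records)
pd.show(5)

Because the whole file is parsed by a single task, this only makes sense for modest data sizes, which matches the comment's advice: for anything larger, NFS or HDFS remains the cleaner option.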