
Apache Spark PySpark: CrossValidator not working


I am trying to tune the parameters of ALS, but it always chooses the first parameter as the best option:

from pyspark.sql import SQLContext
from pyspark import SparkConf, SparkContext
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
from math import sqrt

from operator import add

conf = (SparkConf()
         .setMaster("local[4]")
         .setAppName("Myapp")
         .set("spark.executor.memory", "2g"))
sc = SparkContext(conf = conf)

sqlContext = SQLContext(sc)
def computeRmse(data):
    # x[2] is the rating column and x[3] the prediction column of the
    # transformed DataFrame; go through the underlying RDD to map over rows.
    return sqrt(data.rdd.map(lambda x: (x[2] - x[3]) ** 2).reduce(add) / float(data.count()))

dfRatings = sqlContext.createDataFrame([(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0), (2, 2, 5.0)],
                                 ["user", "item", "rating"])

lr1 = ALS()
grid1 = ParamGridBuilder().addGrid(lr1.regParam, [1.0, 0.005, 2.0]).build()
evaluator1 = RegressionEvaluator(predictionCol=lr1.getPredictionCol(),
                                 labelCol=lr1.getRatingCol(), metricName='rmse')
cv1 = CrossValidator(estimator=lr1, estimatorParamMaps=grid1, evaluator=evaluator1, numFolds=2)
cvModel1 = cv1.fit(dfRatings)
a = cvModel1.transform(dfRatings)
print('rmse with cross validation: {}'.format(computeRmse(a)))

for reg_param in (1.0, 0.005, 2.0):
    lr = ALS(regParam=reg_param)
    model = lr.fit(dfRatings)
    print('reg_param: {}, rmse: {}'.format(reg_param, computeRmse(model.transform(dfRatings))))
Output:

rmse with cross validation: 1.1820489116858794
reg_param: 1.0, rmse: 1.1820489116858794
reg_param: 0.005, rmse: 0.001573816765686575
reg_param: 2.0, rmse: 2.1056964491942787

Any help?


Thanks in advance,

In the CrossValidator you have fixed the number of folds to 1 via the numFolds parameter. With only one fold the data cannot be separated into a training set and a test set.

Leaving other issues aside, you simply are not using enough data to perform meaningful cross-validation and evaluation. As I have explained and illustrated elsewhere for ALS, it cannot provide predictions when either the user or the item is missing from the training set.


This means that every split during cross-validation will contain undefined predictions, and the overall evaluation will be undefined as well. Because of this, CrossValidator returns the first possible model, since from its point of view all the models you trained are equally bad.
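
To see the failure mode concretely, here is a minimal sketch (not part of the original answer) that fits ALS on data from which one user is missing and then asks for a prediction for that user:

# Minimal illustration, reusing the sqlContext set up in the question:
# user 2 appears only in the test data, so ALS has no factors for it
# and the prediction comes back as NaN.
train = sqlContext.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0)],
    ["user", "item", "rating"])
test = sqlContext.createDataFrame([(2, 1, 1.0)], ["user", "item", "rating"])

model = ALS().fit(train)
model.transform(test).show()  # the prediction for user 2 is NaN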

I implemented a Pipeline-based solution that adds a custom Transformer as the last stage of the pipeline, so that NaN predictions are dropped. Note that this implementation targets Spark < 2.2.0, which had not yet introduced the coldStartStrategy keyword; if you use Spark >= 2.2.0, the extra stage is not needed.
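
For completeness, here is a one-line sketch of the Spark >= 2.2.0 route (not from the original answer), which makes the custom stage below unnecessary:

# Spark >= 2.2.0 only: tell ALS to drop the rows it cannot score, so the
# evaluator inside CrossValidator never sees NaN predictions.
als = ALS(userCol="user", itemCol="item", ratingCol="rating",
          coldStartStrategy="drop")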

First, here is the custom Transformer that drops the NaN predictions:

from pyspark.ml import Transformer

class DropNAPredictions(Transformer):
    def _transform(self, predictedDF):
        # Remove rows where ALS produced no prediction (NaN/null), then cast
        # the prediction column back to a plain double for the evaluator.
        nonNullDF = predictedDF.dropna(subset=['prediction'])
        predictionDF = nonNullDF.withColumn('prediction', nonNullDF['prediction'].cast('double'))
        return predictionDF
Now I can build my pipeline and train with cross-validation:

from pyspark.ml import Pipeline  # not imported in the question's code

dropna = DropNAPredictions()

als = ALS(maxIter=10, userCol="player", itemCol="item", ratingCol="rating", implicitPrefs=False)

pipeline = Pipeline(stages=[als, dropna])
paramGrid = ParamGridBuilder().addGrid(als.regParam, [0.1, 0.05]) \
    .addGrid(als.rank, [1, 3]) \
    .build()

cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=paramGrid,
                    evaluator=RegressionEvaluator(labelCol="rating"),
                    numFolds=3)

cvModel = cv.fit(training)  # 'training' is your ratings DataFrame
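To confirm that the grid search now actually discriminates between parameter combinations, you can inspect the per-combination metrics (a small sketch using CrossValidatorModel.avgMetrics, not part of the original answer):

# avgMetrics[i] holds the evaluator's metric for paramGrid[i], averaged
# over the folds; with the NaN predictions dropped these values differ.
for params, metric in zip(paramGrid, cvModel.avgMetrics):
    print(params, metric)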
A note on persistence: the pipeline cannot be saved because of the custom transformer. There is a post that discusses options for serializing custom transformers, but I have not gone down that rabbit hole to hack out a solution. As a temporary workaround, you can serialize just the ALS model itself and rebuild the pipeline later by adding the custom transformer back:

bestPipeline = cvModel.bestModel
bestModel = bestPipeline.stages[0]  # extracts the ALS model
bestModel.save("s2s_als_stage")

from pyspark.ml.pipeline import PipelineModel
from pyspark.ml.recommendation import ALSModel

mymodel = ALSModel.load('s2s_als_stage')
pipeline = PipelineModel(stages=[mymodel, dropna])  # dropna is the custom transformer
pred_test = pipeline.transform(test)  # score test data

Thanks. I would add that when a prediction is requested for a user ID that was never trained on (all of that user's data ended up in the validation fold and none in training), the prediction is NaN, so RegressionEvaluator returns NaN. To work around this, we have to replace the RegressionEvaluator with MiValidacion. Example:
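
The example itself was cut off in the original comment. Below is a minimal sketch of what such a NaN-tolerant evaluator could look like; the name NaNSafeRegressionEvaluator is hypothetical (standing in for the MiValidacion mentioned above), and it simply filters out NaN predictions before delegating to the standard RegressionEvaluator:

from pyspark.ml.evaluation import Evaluator, RegressionEvaluator
import pyspark.sql.functions as F

class NaNSafeRegressionEvaluator(Evaluator):
    # Hypothetical stand-in for the "MiValidacion" evaluator mentioned
    # above: drop rows whose prediction is NaN, then delegate the metric
    # computation to the standard RegressionEvaluator.
    def __init__(self, predictionCol="prediction", labelCol="rating"):
        super(NaNSafeRegressionEvaluator, self).__init__()
        self.predictionCol = predictionCol
        self.inner = RegressionEvaluator(
            predictionCol=predictionCol, labelCol=labelCol, metricName="rmse")

    def _evaluate(self, dataset):
        cleaned = dataset.filter(~F.isnan(F.col(self.predictionCol)))
        return self.inner.evaluate(cleaned)

    def isLargerBetter(self):
        # RMSE: smaller is better, so CrossValidator should minimize it.
        return False

# Usage: pass it to CrossValidator instead of a plain RegressionEvaluator.
cv = CrossValidator(estimator=als, estimatorParamMaps=paramGrid,
                    evaluator=NaNSafeRegressionEvaluator(labelCol="rating"),
                    numFolds=3)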