Invalid syntax error: building a decision tree for customer churn prediction with Python and Spark

Tags: python, apache-spark, syntax-error, decision-tree

As the title says: I am working on a decision tree model for customer churn prediction. I am a complete beginner in data science, Python, and Spark.

Combining the examples from my lectures and the online documentation, I managed to put together a decision tree model. The only problem is that the error-calculation step gives me a syntax error.

Basically, the data I use for the model looks like this:

[LabeledPoint(0.0, [1031.0,947.0,0.333333333333,10.9933333333,10.3,12.0,1.33333333333,10.0133333333,83.6666666667,5.86,55.69,0.596666666667,10.3333333333,0.666666666667,0.0,0.0,0.0,0.666666666667,23.3333333333,2.88333333333,25.0,0.666666666667,0.0,0.0,0.0,0.666666666667,135.333333333,4.44,0.06,0.333333333333,16.3333333333,0.98,0.333333333333,57.6666666667,3.46,0.333333333333,0.0,0.0,0.333333333333,14.0,0.0,0.0,0.0,0.0,0.0,0.0,1307.0,5.66666666667,22.0166666667,130.48,0.0,65.3333333333,0.0,287.333333333,34.0,113.666666667,0.0,0.0,0.0,1.0,1.0,0.0,1.0]),
LabeledPoint(0.0, [4231.0,951.0,1.33333333333,27.5466666667,6.45,22.0,1.0,12.0133333333,46.3333333333,6.45,47.15,1.32333333333,8.81,1.33333333333,0.0,0.0,0.0,1.33333333333,31.6666666667,6.4,42.6566666667,1.33333333333,0.0,0.0,0.0,1.33333333333,0.666666666667,0.0,0.0,57.0,0.0,0.0,57.0,0.0,0.0,57.0,0.0,0.0,57.0,10.6666666667,0.0,0.0,0.0,0.0,0.0,0.0,1307.0,4.0,32.0266666667,156.966666667,0.0,145.43,0.0,1.66666666667,0.0,0.333333333333,0.0,0.0,0.0,1.0,1.0,0.0,1.0]),
LabeledPoint(0.0, [5231.0,523.0,0.666666666667,14.62,1.1,1307.0,0.0,0.0,14.3333333333,1.1,7.57333333333,0.726666666667,4.84,0.666666666667,0.0,0.0,0.0,0.666666666667,8.33333333333,0.323333333333,2.15666666667,0.666666666667,0.0,0.0,0.0,0.666666666667,0.0,0.0,0.0,1307.0,0.0,0.0,1307.0,0.0,0.0,1307.0,0.0,0.0,1307.0,8.33333333333,0.0,0.0,0.0,0.0,0.0,0.0,1307.0,0.0,0.0,47.33,0.0,10.3566666667,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0]),
LabeledPoint(0.0, [6031.0,741.0,7.0,5.38666666667,2.13,58.0,0.333333333333,4.0,21.3333333333,1.35333333333,11.2966666667,0.48,3.2,8.33333333333,0.666666666667,0.0,0.0,8.33333333333,11.3333333333,0.453333333333,3.03,8.33333333333,1.0,0.0133333333333,0.166666666667,8.33333333333,2.33333333333,0.776666666667,0.363333333333,23.0,1.33333333333,0.08,23.0,0.0,0.0,23.0,0.333333333333,0.03,23.0,9.33333333333,0.666666666667,1.33333333333,0.0,0.0,0.0,0.0,1307.0,1.33333333333,16.0,61.25,3.31666666667,10.94,3.65,11.3333333333,7.0,0.0,1.0,0.0,0.0,1.0,1.0,0.0,1.0]),
LabeledPoint(0.0, [8831.0,840.0,5.33333333333,2.21,2.76,35.6666666667,0.666666666667,4.0,66.3333333333,2.76,17.7466666667,0.283333333333,1.20666666667,5.33333333333,0.0,0.0,0.0,5.33333333333,42.6666666667,2.43333333333,16.2166666667,5.33333333333,1.0,0.0,0.0,5.33333333333,1.0,0.0,0.0,23.0,0.0,0.0,23.0,0.666666666667,0.0,23.0,0.0,0.0,23.0,6.33333333333,0.0,1.33333333333,0.0,0.0,0.0,0.0,1307.0,1.66666666667,10.0,62.6333333333,0.0,56.7833333333,0.0,4.33333333333,0.666666666667,2.0,0.0,0.0,0.0,1.0,1.0,0.0,1.0])]
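
(For reference, rows like these are typically produced by mapping a raw text file into LabeledPoint objects with the MLlib RDD API. A minimal sketch follows; the file name churn_features.csv and the label-first column layout are illustrative assumptions, not the actual preprocessing used here.)

from pyspark import SparkContext
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext(appName="churn-tree")

# Hypothetical input file: one customer per line, label in the first
# column, numeric features in the remaining columns, comma separated.
def parse_line(line):
    values = [float(x) for x in line.split(",")]
    return LabeledPoint(values[0], values[1:])

data = sc.textFile("churn_features.csv").map(parse_line)
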
Then I used the code provided in the Spark documentation for the decision tree:

from pyspark.mllib.tree import DecisionTree

# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a DecisionTree model.
#  Empty categoricalFeaturesInfo indicates all features are continuous.
model = DecisionTree.trainClassifier(trainingData, numClasses=2, categoricalFeaturesInfo={},
                                 impurity='gini', maxDepth=5, maxBins=32)

# Evaluate model on test instances and compute test error
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
testErr = labelsAndPredictions.filter(lambda (v, p): v != p).count() / float(testData.count())
print('Test Error = ' + str(testErr))
print('Learned classification tree model:')
print(model.toDebugString())

The error it gives me is:

  File "<ipython-input-70-e37b435ea51d>", line 1
testErr = labelsAndPredictions.filter(lambda (v, p): v != p).count() / float(testData.count())
                                             ^
SyntaxError: invalid syntax
In case anyone wants to see the whole code, I can post it.

I don't know why it gives me this error; the line seems to work for other people. So I am worried it might be related to the steps before the model creation.


I would really appreciate any help.

A lambda expression doesn't take parentheses around its parameters when there is more than one, so
lambda (v, p):
should be
lambda v, p:

lambda (x, y): is only valid syntax in Python 2. In Python 3 it is a SyntaxError.
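
To see the difference in isolation, here is a small plain-Python sketch (independent of the Spark code) contrasting the forms:

# Python 2 only: tuple unpacking in the parameter list.
# f = lambda (v, p): v != p          # SyntaxError in Python 3

# A plain two-argument lambda is valid in both versions,
# but it must be called with two separate arguments:
f = lambda v, p: v != p
print(f(0.0, 1.0))                    # True

# When the callable receives a single (v, p) tuple -- as RDD.filter()
# does on a zipped RDD -- index into the tuple instead:
g = lambda vp: vp[0] != vp[1]
print(g((0.0, 1.0)))                  # True
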

lambda (v, p) is only valid Python syntax in Python 2.7 and below. You are probably using Python 3, where tuple unpacking in a lambda's parameter list is no longer allowed.

I believe the 3.x-compatible version would be:

testErr = labelsAndPredictions.filter(lambda seq: seq[0] != seq[1]).count() / float(testData.count())
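
Dropped back into the evaluation step from the question (same variable names as above), the corrected error computation would look roughly like this:

# Evaluate the model on the held-out test set (Python 3 compatible).
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)

# Index into each (label, prediction) tuple instead of unpacking it
# in the lambda signature.
testErr = (labelsAndPredictions
           .filter(lambda vp: vp[0] != vp[1])
           .count() / float(testData.count()))
print('Test Error = ' + str(testErr))
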

I don't get a syntax error for labelsAndPredictions = ...; testErr = .... a = lambda (x, v): x * v; print a((2, 2)) returns 4, but you have made your point anyway.

lambda (x, y): with multiple parameters is only valid in Python 2, @MYGz. In Python 3 it is a SyntaxError.

I am using Python 2.7.

Yes, that worked! I am surprised it had nothing to do with my data after all.. Thank you and happy holidays! :)