Python 3.x PySpark MLlib: count or collect method raises ArrayIndexOutOfBoundsException

Tags: python-3.x, apache-spark, pyspark, apache-spark-mllib

I am learning PySpark and MLlib.

After using an RF (random forest) model to predict on the test data, I assign the result to a variable named "predictions", which is an RDD.
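
(For reference, the predict helper follows the usual pyspark.mllib pattern, roughly like the sketch below; this is not the real code from part_d_poc.py, and the training parameters and the use of RandomForest.trainClassifier are assumptions.)

    # A minimal sketch, assuming RandomForest.trainClassifier is used;
    # the parameter values here are placeholders, not the original ones.
    from pyspark.mllib.tree import RandomForest

    def predict(training_data, test_data):
        # Train a random forest on an RDD of LabeledPoint records.
        model = RandomForest.trainClassifier(training_data,
                                             numClasses=2,
                                             categoricalFeaturesInfo={},
                                             numTrees=10,
                                             seed=42)
        # Predict on the feature vectors only; the result is an RDD of predicted labels.
        return model.predict(test_data.map(lambda lp: lp.features))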

If I call predictions.count() or predictions.collect(), it fails with the exception below.

Could you share your thoughts? I have already spent quite some time on this but cannot find what is missing.

    predictions = predict(training_data, test_data)
  File "/mp5/part_d_poc.py", line 36, in predict
    print(predictions.count())
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1055, in count
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1046, in sum
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 917, in fold
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 816, in collect
  File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 15.0 failed 1 times, most recent failure: Lost task 0.0 in stage 15.0 (TID 28, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 7
I constructed the training data as follows:

    raw_training_data.map(lambda row: LabeledPoint(row.split(',')[-1], Vectors.dense(row.split(',')[0:-1])))
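
(In full, with the imports, the construction I have in mind is roughly the self-contained sketch below; the input path and the assumption that the label sits in the last comma-separated column are placeholders describing my file, not code from the script above.)

    # Self-contained sketch of the same construction; path and column layout are assumptions.
    from pyspark import SparkContext
    from pyspark.mllib.linalg import Vectors
    from pyspark.mllib.regression import LabeledPoint

    sc = SparkContext.getOrCreate()
    raw_training_data = sc.textFile("training.csv")  # hypothetical input path

    def to_labeled_point(row):
        fields = row.split(',')
        # Label is the last field; the features are every field before it.
        return LabeledPoint(float(fields[-1]),
                            Vectors.dense([float(x) for x in fields[:-1]]))

    training_data = raw_training_data.map(to_labeled_point)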