PySpark MLlib: excluding a column's value from a row
I am trying to create an RDD of LabeledPoint from a DataFrame so that I can later use it with MLlib.

The code below works fine if the my_target column is the first column in sparkDF. However, if my_target is not the first column, how do I modify the code below to exclude my_target so that the LabeledPoint is built correctly?
import pyspark.mllib.classification as clf
labeledData = sparkDF.rdd.map(lambda row: clf.LabeledPoint(row['my_target'],row[1:]))
logRegr = clf.LogisticRegressionWithSGD.train(labeledData)
That is, row[1:] currently excludes the value in the first column; what do I do if I want to exclude the value in the Nth column of the row instead? Thanks.
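The core of the answer below is tuple slicing: a PySpark Row behaves like a tuple, so dropping the element at index N is just the concatenation of two slices. A minimal plain-Python sketch of that idea (no Spark required; `exclude_index` is an illustrative helper name, not part of any API):

```python
def exclude_index(row, n):
    """Return the row as a tuple with the element at index n removed."""
    return tuple(row[:n]) + tuple(row[n + 1:])

# Dropping the first element (index 0) reproduces the row[1:] behavior
print(exclude_index((10, 21, 31, 41), 0))  # -> (21, 31, 41)

# Dropping an interior element works the same way
print(exclude_index((10, 21, 31, 41), 2))  # -> (10, 21, 41)
```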
>>> a = [(1,21,31,41),(2,22,32,42),(3,23,33,43),(4,24,34,44),(5,25,35,45)]
>>> df = spark.createDataFrame(a,["foo","bar","baz","bat"])
>>> df.show()
+---+---+---+---+
|foo|bar|baz|bat|
+---+---+---+---+
| 1| 21| 31| 41|
| 2| 22| 32| 42|
| 3| 23| 33| 43|
| 4| 24| 34| 44|
| 5| 25| 35| 45|
+---+---+---+---+
>>> from pyspark.mllib.regression import LabeledPoint
>>> N = 2
# N is the index of the column you want to exclude (here the third; indexing starts at 0)
>>> labeledData = df.rdd.map(lambda row: LabeledPoint(row['foo'], row[:N] + row[N+1:]))
# row[:N] + row[N+1:] is just a concatenation in which column N is excluded
>>> labeledData.collect()
[LabeledPoint(1.0, [1.0,21.0,41.0]), LabeledPoint(2.0, [2.0,22.0,42.0]), LabeledPoint(3.0, [3.0,23.0,43.0]), LabeledPoint(4.0, [4.0,24.0,44.0]), LabeledPoint(5.0, [5.0,25.0,45.0])]
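If you only know the target column's name rather than its position, you can resolve the name to an index once via the DataFrame's column list and reuse it inside the map. A hedged plain-Python sketch of that lookup (the column names match the example above; `target_col` is an illustrative variable, and the Spark call is shown only as a comment):

```python
# Column names as they would appear in df.columns
columns = ["foo", "bar", "baz", "bat"]
target_col = "bar"

# Resolve the name to a positional index once, outside the map()
n = columns.index(target_col)

# Inside the Spark map() you would then write something like:
#   LabeledPoint(row[target_col], row[:n] + row[n + 1:])
row = (1, 21, 31, 41)
features = row[:n] + row[n + 1:]
print(n, features)  # -> 1 (1, 31, 41)
```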