Pyspark ML - RandomForestClassifier - one-hot encoding does not work for the label

Tags: pyspark, random-forest, apache-spark-ml, one-hot-encoding

I am trying to run a random forest classifier with pyspark ml (Spark 2.4.0) and one-hot encode the target label with OHE. The model trains fine when I feed the label as an integer (StringIndexer output), but fails when I feed the one-hot encoded label from OneHotEncoderEstimator. Is this a Spark limitation?

#%%
# Test dataframe
import pyspark.sql.functions as F
from pyspark.ml.feature import StringIndexer, OneHotEncoderEstimator, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier, LinearSVC
tst = sqlContext.createDataFrame([(1,'01/01/2020','buy',10000,2000),(1,'01/01/2020','sell',10000,3000),(1,'02/01/2020','buy',10000,1000),(1,'02/01/2020','sell',1000,2000),(2,'01/01/2020','sell',1000,3000),(2,'02/01/2020','buy',1000,1000),(2,'02/01/2020','buy',1000,100)], schema=("id","date","transaction","limit","amount"))
# label pipeline: index the string label, then one-hot encode it
str_indxr = StringIndexer(inputCol='transaction', outputCol="label")
ohe = OneHotEncoderEstimator(inputCols=['label'], outputCols=['label_ohe'], dropLast=False)
label_pipeline = Pipeline(stages=[str_indxr, ohe])
#%% data pipeline
data_trans = label_pipeline.fit(tst).transform(tst)
# assemble the numeric features and train on the one-hot encoded label
vecAssembler = VectorAssembler(inputCols=["limit","amount"], outputCol="features", handleInvalid='skip')
classifier = RandomForestClassifier(featuresCol='features', labelCol='label_ohe')
data_pipeline = Pipeline(stages=[vecAssembler, classifier])

data_fit = data_pipeline.fit(data_trans)
I get this error:

  ---------------------------------------------------------------------------
IllegalArgumentException                  Traceback (most recent call last)
<ipython-input-18-f08a05d86e2c> in <module>()
      1 if(train_labdata_rf):
----> 2     pipeline_trained,accuracy,test_result_rf = train_test("rf",train_d,test_d)
      3     print("Test set accuracy = " + str(accuracy))
      4     #pipeline_trained.write().overwrite().save("/projects/projectwbvplatformpc/dev/PS-ET_Pipeline/CDM_Classifier/output/pyspark_classifier/pipelines/random_forest")
      5 else:

<ipython-input-4-9709037baa80> in train_test(modelname, train_data, test_data)
     11     """
     12     pipeline=create_pipeline(modelname)
---> 13     pipeline_fit = pipeline.fit(train_data)
     14 
     15     result = pipeline_fit.transform(test_d)

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/ml/base.py in fit(self, dataset, params)
    130                 return self.copy(params)._fit(dataset)
    131             else:
--> 132                 return self._fit(dataset)
    133         else:
    134             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/ml/pipeline.py in _fit(self, dataset)
    107                     dataset = stage.transform(dataset)
    108                 else:  # must be an Estimator
--> 109                     model = stage.fit(dataset)
    110                     transformers.append(model)
    111                     if i < indexOfLastEstimator:

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/ml/base.py in fit(self, dataset, params)
    130                 return self.copy(params)._fit(dataset)
    131             else:
--> 132                 return self._fit(dataset)
    133         else:
    134             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/ml/wrapper.py in _fit(self, dataset)
    293 
    294     def _fit(self, dataset):
--> 295         java_model = self._fit_java(dataset)
    296         model = self._create_model(java_model)
    297         return self._copyValues(model)

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/ml/wrapper.py in _fit_java(self, dataset)
    290         """
    291         self._transfer_params_to_java()
--> 292         return self._java_obj.fit(dataset._jdf)
    293 
    294     def _fit(self, dataset):

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     77                 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
     78             if s.startswith('java.lang.IllegalArgumentException: '):
---> 79                 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
     80             raise
     81     return deco

IllegalArgumentException: u'requirement failed: Column label_ohe must be of type numeric but was actually of type struct<type:tinyint,size:int,indices:array<int>,values:array<double>>.'

I could not find any relevant resources on this. Any suggestions would be helpful.

Edit: pyspark does not support a vector as the target label, hence only the string-indexed label works.

The problematic code is:

classifier = RandomForestClassifier(featuresCol='features', labelCol='label_ohe')

The problem is the type of labelCol=label_ohe: it must be a NumericType, but the output of OHE is a Vector.
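A quick way to see the mismatch is to inspect the schema of the transformed dataframe (a minimal check; the commented output is what Spark 2.4 would be expected to print for this pipeline, not captured from the original run):

data_trans.printSchema()
# root
#  |-- id: long (nullable = true)
#  |-- date: string (nullable = true)
#  |-- transaction: string (nullable = true)
#  |-- limit: long (nullable = true)
#  |-- amount: long (nullable = true)
#  |-- label: double (nullable = false)     # StringIndexer output: NumericType, accepted as labelCol
#  |-- label_ohe: vector (nullable = true)  # OneHotEncoderEstimator output: VectorUDT, rejected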

Reference -

Use the StringIndexer output directly as the label:

#%%
# Test dataframe
import pyspark.sql.functions as F
from pyspark.ml.feature import StringIndexer, OneHotEncoderEstimator, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier, LinearSVC
tst = sqlContext.createDataFrame([(1,'01/01/2020','buy',10000,2000),(1,'01/01/2020','sell',10000,3000),(1,'02/01/2020','buy',10000,1000),(1,'02/01/2020','sell',1000,2000),(2,'01/01/2020','sell',1000,3000),(2,'02/01/2020','buy',1000,1000),(2,'02/01/2020','buy',1000,100)], schema=("id","date","transaction","limit","amount"))
# label pipeline: only the StringIndexer, no OHE
str_indxr = StringIndexer(inputCol='transaction', outputCol="label")
label_pipeline = Pipeline(stages=[str_indxr])
#%% data pipeline
data_trans = label_pipeline.fit(tst).transform(tst)
# train on the numeric indexed label
vecAssembler = VectorAssembler(inputCols=["limit","amount"], outputCol="features", handleInvalid='skip')
classifier = RandomForestClassifier(featuresCol='features', labelCol='label')
data_pipeline = Pipeline(stages=[vecAssembler, classifier])
data_fit = data_pipeline.fit(data_trans)
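As a follow-up, if the original string labels are needed back after scoring, IndexToString can invert the StringIndexer mapping (a sketch building on the snippet above; the output column name predicted_transaction is a hypothetical choice):

from pyspark.ml.feature import IndexToString

# the fitted StringIndexerModel holds the index -> string mapping in .labels
str_indxr_model = label_pipeline.fit(tst).stages[0]

# map the numeric 'prediction' column back to the original 'buy'/'sell' strings
converter = IndexToString(inputCol="prediction", outputCol="predicted_transaction",
                          labels=str_indxr_model.labels)
predictions = data_fit.transform(data_trans)
converter.transform(predictions).select("transaction", "label", "predicted_transaction").show()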
Full stack trace please. @SomeshwarKale - updated the question.

Thanks for your answer. As I mentioned in the question, I am able to run it with the string indexer. But I wanted to try OHE, since some resources suggest that performance improves if the target label is one-hot encoded, because it removes the ordinal relationship. Is there any way to do this in Spark?

Unfortunately, the Spark predictor API does not allow labelCol to be of vector type. Could you point to the resources you are referring to? These are
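For what it's worth, the ordinal-relationship argument applies to one-hot encoding input features, not the label: Spark classifiers treat the indexed label as a set of unordered class indices and emit one probability per class. A quick check of this, assuming the fitted data_fit pipeline from the answer above:

# the model treats the label as categorical classes, not an ordered quantity
rf_model = data_fit.stages[-1]
print(rf_model.numClasses)   # 2 -> 'buy' and 'sell'

# one entry per class in the 'probability' vector column
data_fit.transform(data_trans).select("label", "probability", "prediction").show()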