
How to one-hot encode for linear regression in Spark using Python?


I have code that I wrote for random forest regression. Random forest regression, however, does not need one-hot encoding after the indexer. Now I want to try linear regression, which does require one-hot encoding. I have looked through the Spark documentation but cannot work out how to fit it into my current code. How do I add a one-hot encoding step to the code below?

from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import OneHotEncoder
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.pipeline import Pipeline
from pyspark.sql.functions import col
from pyspark.mllib.regression import LabeledPoint

label_col = "x4"

# converting RDD to dataframe
train_data_df = train_data.toDF(("x0","x1","x2","x3","x4"))

# Indexers encode strings with doubles
string_indexers = [
   StringIndexer(inputCol=x, outputCol="idx_{0}".format(x))
   for x in train_data_df.columns if x != label_col
]

# Assembles multiple columns into a single vector
assembler = VectorAssembler(
    inputCols=["idx_{0}".format(x) for x in train_data_df.columns if x != label_col],
    outputCol="features"
)


pipeline = Pipeline(stages=string_indexers + [assembler])
model = pipeline.fit(train_data_df)
indexed = model.transform(train_data_df)

label_points = (indexed
.select(col(label_col).cast("double").alias("label"), col("features"))
.map(lambda row: LabeledPoint(row.label, row.features)))
Update:

from pyspark.mllib.regression import LinearRegressionWithSGD, LinearRegressionModel


###### FOR TEST DATA ######
label_col_test = "x4"

# converting RDD to dataframe
test_data_df = test_data.toDF(("x0","x1","x2","x3","x4"))

# Indexers encode strings with doubles
string_indexers_test = [
   StringIndexer(inputCol=x, outputCol="idx_{0}".format(x))
   for x in test_data_df.columns if x != label_col_test
]

# One-hot encoders on the indexed columns
encoders_test = [
   OneHotEncoder(inputCol="idx_{0}".format(x), outputCol="enc_{0}".format(x))
   for x in test_data_df.columns if x != label_col_test
]

# Assembles the encoded columns into a single vector
assembler_test = VectorAssembler(
    inputCols=["enc_{0}".format(x) for x in test_data_df.columns if x != label_col_test],
    outputCol="features"
)


pipeline_test = Pipeline(stages=string_indexers_test + encoders_test + [assembler_test])
model_test = pipeline_test.fit(test_data_df)
indexed_test = model_test.transform(test_data_df)

label_points_test = (indexed_test
    .select(col(label_col_test).cast("float").alias("label"), col("features"))
    .map(lambda row: LabeledPoint(row.label, row.features)))

# Build the model
model = LinearRegressionWithSGD.train(label_points)

valuesAndPreds = label_points_test.map(lambda p: (p.label, model.predict(p.features)))

MSE = valuesAndPreds.map(lambda vp: (vp[0] - vp[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))
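As a cross-check of the MSE printed above, here is a minimal sketch using pyspark.mllib.evaluation.RegressionMetrics, assuming the same model and label_points_test built in this update; it should report the same value.

from pyspark.mllib.evaluation import RegressionMetrics

# RegressionMetrics expects an RDD of (prediction, observation) pairs
preds_and_labels = label_points_test.map(lambda p: (float(model.predict(p.features)), p.label))
metrics = RegressionMetrics(preds_and_labels)
print("Mean Squared Error = " + str(metrics.meanSquaredError))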

You simply add it as a step between the indexing and the assembling:

# One-hot encode each indexed categorical column
encoders = [
   OneHotEncoder(inputCol="idx_{0}".format(x), outputCol="enc_{0}".format(x))
   for x in train_data_df.columns if x != label_col
]

assembler = VectorAssembler(
    inputCols=[
        "enc_{0}".format(x) for x in train_data_df.columns if x != label_col
    ],
    outputCol="features"
)


pipeline = Pipeline(stages=string_indexers + encoders + [assembler])
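From there, fitting and transforming works exactly as before. A minimal usage sketch, assuming the train_data_df defined earlier:

pipeline_model = pipeline.fit(train_data_df)
encoded = pipeline_model.transform(train_data_df)

# each indexed categorical column now appears as a sparse 0/1 block inside "features"
encoded.select("features").show(5)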

When I run the linear regression in Spark I get the MSE as inf. There are some errors in my code; I have posted the rest of it in the update section above.

This has been discussed more than once already. If you don't tune your model, that is what you can expect.

@JasonDonnald Tuning the model is not an easy task. Have you tried a grid search over the hyperparameters? A simple approach is to iterate over a list of values for the number of iterations, fit and evaluate the model for each one, and keep the value that gives the best model; you can also apply cross-validation inside that loop. You can do the same for all of the model's (hyper)parameters, which is called a grid search. I suggest reading about grid search in scikit-learn to understand the theory, which is similar (a sketch of such a loop is shown below).

Could you tell me which machine learning models in PySpark need one-hot encoding and which do not? I can't figure it out.
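A minimal sketch of the grid search suggested above, assuming the label_points and label_points_test RDDs built earlier; the candidate values for iterations and step are purely illustrative, not tuned recommendations.

best_mse, best_params = float("inf"), None

for iterations in [10, 100, 500]:        # illustrative candidate values
    for step in [0.001, 0.01, 0.1]:
        candidate = LinearRegressionWithSGD.train(label_points, iterations=iterations, step=step)
        preds = label_points_test.map(lambda p: (p.label, candidate.predict(p.features)))
        mse = preds.map(lambda vp: (vp[0] - vp[1]) ** 2).mean()
        if mse < best_mse:
            best_mse, best_params = mse, (iterations, step)

print("Best (iterations, step): {0}, MSE = {1}".format(best_params, best_mse))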