How to correctly use ft_string_indexer and ft_one_hot_encoder on multiple columns in sparklyr
Tags: r, apache-spark, sparklyr

I have two questions:

1. How do I turn multiple categorical variables into one large matrix of dummy variables in Spark?
2. How do I get correctly formatted output from ft_one_hot_encoder and run a (logistic) regression on it?

I am stuck on how to get the right tbl out of ft_string_indexer and ft_one_hot_encoder. For example, I create the following data frame:
library(sparklyr)
library(tidyverse)
sc <- spark_connect(master = "yarn-client", spark_home = Sys.getenv("SPARK_HOME"),
                    app_name = "sparklyr", version = "2.1.2",
                    hadoop_version = "2.6", config = configs)
df <- data.frame(
  a = rep(letters[1:4], 5),
  b = rep(c("one", "two"), 10),
  y = rbinom(n = 20, size = 1, prob = 0.5))
copy_to(sc, df, "df")
I run the following sequence of mutates and get the output below:
df2 <- tbl(sc, "df")
df2 %>%
  sdf_mutate(a_idx = ft_string_indexer(a)) %>%
  sdf_mutate(b_idx = ft_string_indexer(b)) %>%
  sdf_mutate(a_vec = ft_one_hot_encoder(a_idx)) %>%
  sdf_mutate(b_vec = ft_one_hot_encoder(b_idx)) %>%
  collect()
# A tibble: 20 x 7
a b y a_idx b_idx a_vec b_vec
<chr> <chr> <int> <dbl> <dbl> <list> <list>
1 a one 0 0 0 <dbl [3]> <dbl [1]>
2 b two 1 1 1 <dbl [3]> <dbl [1]>
3 c one 1 2 0 <dbl [3]> <dbl [1]>
4 d two 0 3 1 <dbl [3]> <dbl [1]>
5 a one 1 0 0 <dbl [3]> <dbl [1]>
6 b two 0 1 1 <dbl [3]> <dbl [1]>
7 c one 0 2 0 <dbl [3]> <dbl [1]>
8 d two 1 3 1 <dbl [3]> <dbl [1]>
9 a one 0 0 0 <dbl [3]> <dbl [1]>
10 b two 1 1 1 <dbl [3]> <dbl [1]>
11 c one 1 2 0 <dbl [3]> <dbl [1]>
12 d two 0 3 1 <dbl [3]> <dbl [1]>
13 a one 1 0 0 <dbl [3]> <dbl [1]>
14 b two 0 1 1 <dbl [3]> <dbl [1]>
15 c one 0 2 0 <dbl [3]> <dbl [1]>
16 d two 0 3 1 <dbl [3]> <dbl [1]>
17 a one 0 0 0 <dbl [3]> <dbl [1]>
18 b two 1 1 1 <dbl [3]> <dbl [1]>
19 c one 0 2 0 <dbl [3]> <dbl [1]>
20 d two 0 3 1 <dbl [3]> <dbl [1]>
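If you want to see what is actually inside those list columns, sparklyr's `sdf_separate_column` can expand a vector column into ordinary numeric columns on the Spark side (a sketch; the `into` column names are my own choice, and note that `ft_one_hot_encoder` drops the last level by default, which is why four levels of `a` yield a vector of length 3):

```r
library(sparklyr)
library(dplyr)

# Expand the length-3 one-hot vector in a_vec into three dummy columns
# so the encoding can be inspected as a regular tibble.
df2 %>%
  sdf_mutate(a_idx = ft_string_indexer(a)) %>%
  sdf_mutate(a_vec = ft_one_hot_encoder(a_idx)) %>%
  sdf_separate_column("a_vec", into = c("a_is_a", "a_is_b", "a_is_c")) %>%
  select(a, a_is_a, a_is_b, a_is_c) %>%
  collect()
```

This is purely for inspection; the ML routines below consume the vector column directly.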
That output does not appear to be usable by the ml_logistic_regression function. Any help on how to efficiently encode multiple columns into the correct format and run a regression on them would be great.

Answer: The logistic regression classifier expects a single vector column as input, so you need to assemble that column from the encoded a_vec and b_vec. You can use the vector assembler for this. Recall that df2 looks like this:
# Source: table<df> [?? x 3]
# Database: spark_connection
a b y
<chr> <chr> <int>
1 a one 0
2 b two 1
3 c one 1
4 d two 0
5 a one 1
6 b two 0
7 c one 0
8 d two 1
9 a one 0
10 b two 1
# ... with more rows
# appended to the indexer/encoder pipeline shown above:
sdf_mutate(features = ft_vector_assembler(c("a_vec", "b_vec")))
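Putting the pieces together, here is a minimal end-to-end sketch. It assumes the older sdf_mutate-style transformer API used throughout this question (sparklyr 0.x / 1.x; newer sparklyr versions deprecate `sdf_mutate` in favor of applying `ft_*` functions to the whole table), and it assumes `ml_logistic_regression` accepts `response` and `features` column names, as it did in that API generation:

```r
library(sparklyr)
library(dplyr)

# Index and one-hot encode both categorical columns, then assemble
# the encoded vectors into a single "features" column.
model_data <- df2 %>%
  sdf_mutate(a_idx = ft_string_indexer(a)) %>%
  sdf_mutate(b_idx = ft_string_indexer(b)) %>%
  sdf_mutate(a_vec = ft_one_hot_encoder(a_idx)) %>%
  sdf_mutate(b_vec = ft_one_hot_encoder(b_idx)) %>%
  sdf_mutate(features = ft_vector_assembler(c("a_vec", "b_vec")))

# Fit a logistic regression on the assembled features column.
fit <- ml_logistic_regression(model_data, response = "y",
                              features = "features")
summary(fit)
```

The key point is that the assembler concatenates the per-column one-hot vectors into one feature vector per row, which is the shape Spark's ML estimators expect.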