
Completing a data frame in sparklyr


I am trying to replicate the `tidyr::complete` function in sparklyr. I have a data frame with missing rows that I need to fill in. In dplyr/tidyr I can do the following:

library(dplyr)
library(tidyr)
library(lubridate)

data <- tibble(
  "id" = c(1,1,2,2),
  "dates" = c("2020-01-01", "2020-01-03", "2020-01-01", "2020-01-03"),
  "values" = c(3,4,7,8))

# A tibble: 4 x 3
     id dates      values
  <dbl> <chr>       <dbl>
1     1 2020-01-01      3
2     1 2020-01-03      4
3     2 2020-01-01      7
4     2 2020-01-03      8

data %>% 
  mutate(dates = as_date(dates)) %>% 
  group_by(id) %>% 
  complete(dates = seq.Date(min(dates), max(dates), by="day"))

# A tibble: 6 x 3
# Groups:   id [2]
     id dates      values
  <dbl> <date>      <dbl>
1     1 2020-01-01      3
2     1 2020-01-02     NA
3     1 2020-01-03      4
4     2 2020-01-01      7
5     2 2020-01-02     NA
6     2 2020-01-03      8
Is there a way to set up a UDF or achieve a similar result?

Thanks!

Under the hood, `tidyr::complete` just performs a full join followed by optional NA filling. You can replicate its effect by using `sdf_copy_to` to create a new sdf that is just a single `seq.Date` column spanning your start and end dates, and then performing a full join between that sdf and your dataset.
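To see that equivalence locally first (plain dplyr/tidyr, no Spark; `scaffold` and `completed` are illustrative names, not from the thread):

```r
library(dplyr)
library(tidyr)
library(lubridate)

data <- tibble(
  id = c(1, 1, 2, 2),
  dates = as_date(c("2020-01-01", "2020-01-03", "2020-01-01", "2020-01-03")),
  values = c(3, 4, 7, 8)
)

# Scaffold: every id crossed with every day in the overall date range
scaffold <- expand_grid(
  id = unique(data$id),
  dates = seq.Date(min(data$dates), max(data$dates), by = "day")
)

# complete() is (roughly) this full join; `values` is NA on the added rows
completed <- data %>%
  full_join(scaffold, by = c("id", "dates")) %>%
  arrange(id, dates)
```

Here every group happens to span the same range, so one global scaffold suffices; the grouped `complete()` in the question builds the sequence per group instead.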

Here is an approach that does all of the work in Spark:

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

data <- tibble(
  id = c(1, 1, 2, 2),
  dates = c("2020-01-02", "2020-01-04", "2020-01-01", "2020-01-03"),
  values = c(1, 2, 3, 4)
)

data_spark <- copy_to(sc, data)
`sdf_seq` can be used to generate a sequence in Spark. Combined with the distinct `id`s, this gives every `dates`/`id` combination. (Note that `sdf_seq` names its integer column `id`; that is what `date_add()` increments below, not the data's `id` column.)


Yes, but in my case the date sequence and the data frame to join are different for each group. Is there a way to efficiently define a different join for each group?
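Since the question mentions UDFs, one hedged alternative for per-group ranges is `spark_apply()`, which runs R code on each group on the workers. This is a rough, untested sketch (it assumes the `sc` connection and `data_spark` from the answer, and that the `group_by` column is re-attached to the output by `spark_apply`); the worker function uses only base R, so no extra packages are needed on the executors:

```r
library(sparklyr)

# Untested sketch: fill in each id's own date range on the workers.
completed_spark <- spark_apply(
  data_spark,
  function(df) {
    df$id <- NULL                      # group column; spark_apply adds it back
    df$dates <- as.Date(df$dates)
    all_days <- data.frame(
      dates = seq.Date(min(df$dates), max(df$dates), by = "day")
    )
    out <- merge(all_days, df, by = "dates", all.x = TRUE)
    out$dates <- as.character(out$dates)  # serialization-friendly type
    out
  },
  group_by = "id"
)
```

`spark_apply` ships each group through an R process on the executors, so it is usually slower than the pure-Spark join below, but it sidesteps building one global scaffold.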
The overall date range is needed to drive `sdf_seq`:

days_info <-
  data_spark %>%
  summarise(
    first_date = min(dates),
    total_days = datediff(max(dates), min(dates))
  ) %>%
  collect()
days_info
#> # A tibble: 1 x 2
#>   first_date total_days
#>   <chr>           <int>
#> 1 2020-01-01          3
dates_id_combinations <- 
  sdf_seq(
    sc,
    from = 0,
    to = days_info$total_days,
    repartition = 1
  ) %>%
  transmute(
    dates = date_add(local(days_info$first_date), id),
    join_by = TRUE
  ) %>%
  full_join(data_spark %>% distinct(id) %>% mutate(join_by = TRUE)) %>%
  select(dates, id)
dates_id_combinations
#> # Source: spark<?> [?? x 2]
#>   dates         id
#>   <date>     <dbl>
#> 1 2020-01-01     1
#> 2 2020-01-01     2
#> 3 2020-01-02     1
#> 4 2020-01-02     2
#> 5 2020-01-03     1
#> 6 2020-01-03     2
#> 7 2020-01-04     1
#> 8 2020-01-04     2
Finally, join the combinations back onto the data, keeping only the dates that fall inside each group's own range:

data_spark %>%
  group_by(id) %>%
  mutate(first_date = min(dates), last_date = max(dates)) %>%
  full_join(dates_id_combinations) %>%
  filter(dates >= min(first_date), dates <= max(last_date)) %>%
  arrange(id, dates) %>%
  select(id, dates)
#> # Source:     spark<?> [?? x 2]
#> # Groups:     id
#> # Ordered by: id, dates
#>      id dates     
#>   <dbl> <chr>     
#> 1     1 2020-01-02
#> 2     1 2020-01-03
#> 3     1 2020-01-04
#> 4     2 2020-01-01
#> 5     2 2020-01-02
#> 6     2 2020-01-03
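One final detail, not in the original answer: `tidyr::complete` would also carry the `values` column (NA on the filled-in rows). Keeping it only requires selecting it as well:

```r
# Same pipeline as above, but retaining `values` to match complete()'s output
data_spark %>%
  group_by(id) %>%
  mutate(first_date = min(dates), last_date = max(dates)) %>%
  full_join(dates_id_combinations, by = c("id", "dates")) %>%
  filter(dates >= min(first_date), dates <= max(last_date)) %>%
  arrange(id, dates) %>%
  select(id, dates, values)
```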