A loop within a loop in R


A loop within a loop? Anyway:

I have a list of data frames. I run through it and do some data wrangling. However, I need to lapply over each individual data frame, that is, loop through every data frame in the list.

So, the below works:

#THIS WORKS
library(dplyr)     # mutate(), %>%
library(lubridate) # as_datetime()
library(anytime)   # anytime()

bottleneck_fury <- function(bottleneck2) {
  
  bottleneck2 <- bottleneck2 %>% 
    mutate(startTime = as_datetime(startTime, tz = "")) %>% 
    mutate(endTime = as_datetime(endTime, tz = "")) %>% 
    mutate(early_startTime = startTime - 300) %>%  # 5 min prior
    mutate_at(c("startTime", "endTime", "early_startTime"), anytime) %>%  # formatted time
    mutate(id = rownames(bottleneck2)) 
  
  bottleneck2 
}

all_necks <- lapply(bottleneck_test, bottleneck_fury)
dd <- all_necks[[1]]
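
For what it's worth, lapply already plays the role of the outer loop here; a minimal sketch of the equivalent explicit for loop (same behaviour, just written out):

# Pre-allocate the result list, then call the cleaning function on each element
all_necks <- vector("list", length(bottleneck_test))
for (i in seq_along(bottleneck_test)) {
  all_necks[[i]] <- bottleneck_fury(bottleneck_test[[i]])
}
names(all_necks) <- names(bottleneck_test)  # keep the original element names, if any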


OUTPUT
dd <- structure(list(startTime = structure(c(1533122400, 1533132060, 
1533151920, 1533205740, 1533207960), class = c("POSIXct", "POSIXt"
), tzone = ""), endTime = structure(c(1533131340, 1533132540, 
1533153300, 1533207660, 1533218460), class = c("POSIXct", "POSIXt"
), tzone = ""), impact = c(627.06, 26.53, 24.34, 166.84, 761.39
), impactPercent = c(1444.6, 33.98, 30.98, 320.75, 1632.12), 
    impactSpeedDiff = c(25814.55, 806.43, 733.5, 6289.26, 30350.4
    ), maxQueueLength = c(4.829494, 3.605648, 2.241074, 5.760513, 
    5.760513), tmcs = list(c("110N04623", "110-04623", "110N04624", 
    "110-04624", "110N04625", "110-04625", "110N04626", "110-04626"
    ), c("110N04623", "110-04623", "110N04624", "110-04624", 
    "110N04625", "110-04625"), c("110N04623", "110-04623", "110N04624", 
    "110-04624"), c("110N04623", "110-04623", "110N04624", "110-04624", 
    "110N04625", "110-04625", "110N04626", "110-04626", "110N04627"
    ), c("110N04623", "110-04623", "110N04624", "110-04624", 
    "110N04625", "110-04625", "110N04626", "110-04626", "110N04627"
    )), early_startTime = structure(c(1533122100, 1533131760, 
    1533151620, 1533205440, 1533207660), class = c("POSIXct", 
    "POSIXt"), tzone = ""), id = c("1", "2", "3", "4", "5")), row.names = c(NA, 
5L), class = "data.frame")


Even though the data frames you are working with are not big, it is better to combine them into a single data frame and identify each one with an id; that makes the process more efficient. You can achieve this with the ldply function from the plyr package, which can read all the files in a folder, selected according to a pattern in the file names, and apply a function to each one. In your case it could be done in the following way:

library(plyr)
setwd("~/directory")
all_necks <- ldply(.data = list.files(pattern = "some-pattern"), .fun = function(x){
              y <- read.csv(x, header = TRUE) 
              y$ID <- x 
              y
              } ### The function reads each file and creates a new column that stores a unique ID for it.
) 
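
For reference, the same combine-and-tag step can also be done without plyr. A minimal sketch, assuming dplyr is installed and the files are CSVs matching the same placeholder pattern; bind_rows(.id = "ID") records which file each row came from, and bottleneck_fury() could be applied inside lapply() as well:

library(dplyr)

files <- list.files(pattern = "some-pattern")   # placeholder pattern, as above
names(files) <- files                           # the names become the .id values

all_necks <- files %>%
  lapply(read.csv, header = TRUE) %>%   # read each file into a data frame
  bind_rows(.id = "ID")                 # stack them; ID holds the source file name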

Comment: 1) You have two functions with the same name; is the second one the one with the problem? 2) After running the first function, the dataset is dd?

Reply: The problem is #2, and yes, that is dd after the working function.

Comment: Could you share the raw data bottleneck_test and show the expected output? Even if the subset is too big to post, that could be done by private message in a chat room.

Once the files are combined into all_necks, further columns can be added and the result summarised with dplyr, for example:

library(dplyr)
library(tibble)   # add_column()

nrow_iris  <- nrow(iris)
nrow_necks <- nrow(all_necks)

# Add a column taken from iris, padded with NA (assumes all_necks has at least nrow(iris) rows)
all_necks <- all_necks %>% add_column(Sepal_length = c(iris$Sepal.Length, rep(NA, nrow_necks - nrow_iris)))

# Keep one row per distinct maxQueueLength, then replicate and stack the full table that many times per iris row
all_necks_2 <- all_necks %>% distinct(maxQueueLength, .keep_all = TRUE)
all_necks_2 <- do.call("rbind", replicate(nrow_iris * length(all_necks_2$maxQueueLength), all_necks, simplify = FALSE))

# Flag rows whose maxQueueLength is below the added Sepal length, then tally the flags
all_necks_2 <- all_necks_2 %>% group_by(maxQueueLength) %>% mutate(new = maxQueueLength < Sepal_length) %>% ungroup()
all_necks_2 %>% count(new)
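
If the goal is simply a row-wise comparison and a tally, the replicate/rbind step may not be necessary; a minimal sketch, assuming the combined all_necks with its added Sepal_length column from above:

library(dplyr)

all_necks %>%
  mutate(new = maxQueueLength < Sepal_length) %>%  # row-wise comparison
  count(new)                                       # counts of TRUE / FALSE / NA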