Computing predicted means (or predicted probabilities) and SEs after multiple imputation in R

Tags: r, regression, predict, imputation

I want to calculate predicted values and standard errors, but I can't simply use predict(), because I'm working with 15 multiply imputed datasets (generated with the Amelia package). I run a regression model on each dataset, then combine the results into a single set of model coefficients and standard errors with Amelia's mi.meld() function, which applies Rubin's rules.

Example data and code:

library(Amelia)

# Simulate 15 "imputed" datasets
dd <- list()
for (i in 1:15) {
  dd[[i]] <- data.frame(
    Age = runif(50, 20, 90),
    Cat = factor(sample(0:4, 50, replace = TRUE)),
    Outcome = sample(0:1, 50, replace = TRUE)
  )
}

# Fit the model on each dataset, stacking coefficients and SEs by row
b.out <- NULL
se.out <- NULL
for (i in 1:15) {
  ols.out <- glm(Outcome ~ Age + factor(Cat), data = dd[[i]], family = "binomial")
  b.out <- rbind(b.out, coef(ols.out))
  se.out <- rbind(se.out, coef(summary(ols.out))[, 2])
}

# Pool into one set of coefficients and SEs (Rubin's rules)
mod0 <- mi.meld(q = b.out, se = se.out)

> mod0
$q.mi
     (Intercept)         Age factor(Cat)1 factor(Cat)2 factor(Cat)3 factor(Cat)4
[1,]   0.0466825 -0.00577106    0.5291908  -0.09760264    0.4058684    0.3125109

$se.mi
     (Intercept)        Age factor(Cat)1 factor(Cat)2 factor(Cat)3 factor(Cat)4
[1,]    1.863276 0.02596468     1.604759     1.398322     1.414589     1.332743
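
For reference, mi.meld() here is just applying Rubin's rules, which are easy to reproduce by hand as a sanity check: the pooled estimate is the mean of the per-imputation estimates, and the pooled variance is the average within-imputation variance plus a slightly inflated between-imputation variance. A minimal sketch using the b.out and se.out matrices from above:

m <- nrow(b.out)                      # number of imputations (15 here)
q.bar <- colMeans(b.out)              # pooled estimates: mean over imputations
w <- colMeans(se.out^2)               # average within-imputation variance
b <- apply(b.out, 2, var)             # between-imputation variance
se.pooled <- sqrt(w + (1 + 1/m) * b)  # Rubin's rules total SE

q.bar and se.pooled should match mod0$q.mi and mod0$se.mi.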

I'm sure you don't need this answer after all these years, but I've just been working through a similar problem and thought I'd put the answer here for posterity.

Andrew Heiss put this solution up in a GitHub gist -

I've modified it slightly (partly because I think the default behaviour of nest() in the tidyverse may have changed since he wrote it?).

The code (the hard work!) is almost entirely Andrew Heiss's. The comments here are a mixture of mine and his.

This uses the africa dataset from Amelia; for my actual problem I had a different dataset (obviously) and did the first few steps somewhat differently, which was fine.

library(tidyverse)
library(Amelia)
library(broom)

# Use the africa dataset from Amelia
data(africa)
set.seed(1234)
imp_amelia <- amelia(x = africa, m = 5, cs = "country", ts = "year", logs = "gdp_pc", p2s = 0) # do the imputations -- for me, it was fine to do this bit in 'mice'

# Gather all the imputed datasets into one data frame and run a model on each
models_imputed_df <- bind_rows(unclass(imp_amelia$imputations), .id = "m") %>%
  group_by(m) %>%
  nest() %>% 
  mutate(model = data %>% map(~ lm(gdp_pc ~ trade + civlib, data = .)))

# again - for my real life problem the models looked very different to this, and used rms - and this was also totally fine.

models_imputed_df
#> # A tibble: 5 x 3
#>   m     data               model   
#>   <chr> <list>             <list>  
#> 1 imp1  <tibble [120 × 7]> <S3: lm>
#> 2 imp2  <tibble [120 × 7]> <S3: lm>
#> 3 imp3  <tibble [120 × 7]> <S3: lm>
#> 4 imp4  <tibble [120 × 7]> <S3: lm>
#> 5 imp5  <tibble [120 × 7]> <S3: lm>


# We want to see how GDP per capita varies with changes in civil liberties, so
# we create a new data frame with values for each of the covariates in the
# model. We include the full range of civil liberties (from 0 to 1) and the mean
# of trade.

# i.e. this is a 'skeleton' data frame of all the variables you want to make predictions over.

new_data <- data_frame(civlib = seq(0, 1, 0.1), 
                       trade = mean(africa$trade, na.rm = TRUE))
new_data
#> # A tibble: 11 x 2
#>    civlib trade
#>     <dbl> <dbl>
#>  1  0.     62.6
#>  2  0.100  62.6
#>  3  0.200  62.6
#>  4  0.300  62.6
#>  5  0.400  62.6
#>  6  0.500  62.6
#>  7  0.600  62.6
#>  8  0.700  62.6
#>  9  0.800  62.6
#> 10  0.900  62.6
#> 11  1.00   62.6

# write a function to meld predictions

meld_predictions <- function(x) {
  # x is a data frame with m rows and two columns:
  #
  # m  .fitted  .se.fit
  # 1  1.05     0.34
  # 2  1.09     0.28
  # x  ...      ...

  # Meld the fitted values using Rubin's rules
  x_melded <- mi.meld(matrix(x$.fitted), matrix(x$.se.fit))

  data_frame(.fitted = as.numeric(x_melded$q.mi),
             .se.fit = as.numeric(x_melded$se.mi))
}
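
# Quick sanity check of the helper on a toy input (made-up numbers, just to
# show the expected shape): three imputations of one prediction go in, one
# melded row comes out.
meld_predictions(data_frame(m = 1:3,
                            .fitted = c(1.05, 1.09, 0.98),
                            .se.fit = c(0.34, 0.28, 0.31)))
#> # A tibble: 1 x 2   (one melded .fitted and .se.fit across the 3 rows)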

# We augment/predict using new_data in each of the imputed models, then we group
# by each of the values of civil liberties (so each value, like 0.1 and 0.2 has
# 5 values, 1 from each of the imputed models), and then we meld those 5
# predicted values into a single value with meld_predictions()

predict_melded <- data_frame(models = models_imputed_df$model) %>%
  mutate(m = 1:n(),
         fitted = models %>% map(~ augment(., newdata = new_data, se_fit = TRUE))) %>%  # newer broom (>= 0.7) needs se_fit = TRUE to return .se.fit
  unnest(fitted) %>% 
  dplyr::select(-models) %>% #### I needed to add this line to make the code work: once you've used the models to get the fits, you don't need them in the data object any more. (I took this line out because it was slowing everything down, then realised the code only works with it... not sure why?)
  group_by(civlib) %>%  
  nest(data = c(m, .fitted, .se.fit)) %>%  # changed from the gist so the nested 'data' contains all the imputations, not just estimates from one imputation.
  mutate(fitted_melded = data %>% map(~ meld_predictions(.))) %>% 
  unnest(fitted_melded) %>% 
  mutate(ymin = .fitted + (qnorm(0.025) * .se.fit),
         ymax = .fitted + (qnorm(0.975) * .se.fit))


## NB. these predictions are still on the link scale -- for a GLM you'd need a few extra lines to back-transform everything and get your predictions and SEs on the response scale (see the sketch after this block)
# Plot!
ggplot(predict_melded, aes(x = civlib, y = .fitted)) +
  geom_line(color = "blue") +
  geom_ribbon(aes(ymin = ymin, ymax = ymax), alpha = 0.2, fill = "blue")
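
For a logistic model like the one in the question, a minimal sketch of that last step (assuming the predictions were made and melded on the link scale, e.g. via augment(..., type.predict = "link") for a glm): push the melded fit and the interval limits through the inverse link (plogis() for a logit link), rather than transforming the SEs themselves.

predict_melded %>%
  mutate(.fitted_prob = plogis(.fitted),  # inverse logit of the melded fit
         ymin_prob    = plogis(ymin),     # back-transform the interval limits,
         ymax_prob    = plogis(ymax))     # not the standard errors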