
How do I modify my Keras/TensorFlow code to (not) take the position within the picture into account?


I want to classify images in RStudio using Keras and TensorFlow.

In an ordinary image-classification model, how do I tell TensorFlow to take the position within the picture into account, or to ignore it?

For example, what if I want the model to say not just whether there is a dog in the picture, but whether there is a dog in the bottom-left corner of the picture?

I know that position is usually not taken into account: for example, I can shift the pictures and augment the data with the shifted copies, so that the model learns that the object can occur anywhere in the picture.
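The shift-and-augment idea can be sketched in a framework-agnostic way. In R/Keras the same effect usually comes from image_data_generator() with width_shift_range/height_shift_range; the shift() helper and the toy 8×8 "image" below are illustrative assumptions, not part of the original code:

```python
import numpy as np

def shift(img, dy, dx):
    """Translate a 2-D image by (dy, dx), filling the vacated area with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    out[ys, xs] = img[slice(max(-dy, 0), h + min(-dy, 0)),
                      slice(max(-dx, 0), w + min(-dx, 0))]
    return out

# Augmentation loop: present the same object at many positions, so the
# network cannot rely on its absolute location in the frame.
rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[1, 1] = 1.0  # a hypothetical object in the top-left corner
augmented = [shift(img, *rng.integers(-2, 3, size=2)) for _ in range(4)]
```

Training on such shifted copies pushes the classifier toward position invariance; skipping this augmentation is one simple way to leave position information available to the model.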

But how can I get it to do the opposite, i.e. learn that a feature only counts if it occurs in the same place in the picture as it did in training?
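One place this choice shows up is in how the spatial feature map is collapsed before the dense layers. layer_flatten(), as used in the model below, keeps one input per spatial position, so the dense layers that follow can learn position-specific weights; a global pooling layer (e.g. layer_global_average_pooling_2d()) discards the coordinates entirely. A toy NumPy illustration, using a hypothetical 4×4 feature map rather than a real network:

```python
import numpy as np

feat = np.zeros((4, 4))
feat[0, 0] = 1.0                                         # feature fires top-left
feat_shifted = np.roll(feat, shift=(2, 2), axis=(0, 1))  # same feature, elsewhere

# Flattening keeps position: the two activation vectors differ, so
# downstream dense layers can respond differently by location.
print(np.array_equal(feat.ravel(), feat_shifted.ravel()))  # → False

# Global average pooling discards position: both maps give the same value,
# so downstream layers only see "the feature is present somewhere".
print(feat.mean() == feat_shifted.mean())                  # → True
```

Note that pooling and "same" padding inside the convolutional stack also blur position to some degree, so a flatten at the end preserves coarse location rather than exact pixel coordinates.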

Example code (general image classification):


Have you looked at the concepts of object detection, bounding boxes, and anchors mentioned in the link?
model <- keras_model_sequential()

model %>%
  layer_dropout(rate = FLAGS$dropout1) %>%
  layer_conv_2d(filters = FLAGS$convol_filters1, kernel_size = c(3, 3),
                activation = "relu", padding = "same") %>%
  layer_conv_2d(filters = FLAGS$convol_filters1, kernel_size = c(3, 3),
                activation = "relu"
                # , padding = "same"
  ) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(rate = FLAGS$dropout1) %>%
  layer_conv_2d(filters = FLAGS$convol_filters2, kernel_size = c(3, 3),
                activation = "relu", padding = "same") %>%
  layer_conv_2d(filters = FLAGS$convol_filters2, kernel_size = c(3, 3),
                activation = "relu"
                # , padding = "same"
  ) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(rate = FLAGS$dropout1) %>%
  layer_conv_2d(filters = FLAGS$convol_filters3, kernel_size = c(3, 3),
                activation = "relu", padding = "same") %>%
  layer_conv_2d(filters = FLAGS$convol_filters3, kernel_size = c(3, 3),
                activation = "relu"
                # , padding = "same"
  ) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = FLAGS$dense_units1) %>%
  layer_activation_relu() %>%
  layer_dropout(rate = FLAGS$dropout1) %>%
  layer_dense(units = FLAGS$dense_units1) %>%
  layer_activation_relu() %>%
  layer_dropout(rate = FLAGS$dropout1) %>%
  layer_dense(units = FLAGS$dense_units1) %>%
  layer_activation_relu() %>%
  layer_dense(units = 2, activation = "softmax")

# With a 2-unit softmax output, "categorical_crossentropy" is the matching
# loss; "binary_crossentropy" pairs with a single sigmoid unit instead.
model %>% compile(loss = "categorical_crossentropy", optimizer = "adadelta",
                  metrics = c("accuracy"))

model %>% fit(data.training, data.trainLabels, epochs = FLAGS$epochs,
              view_metrics = FALSE, validation_split = 0.2, shuffle = TRUE)