Python TensorFlow: no speedup from the GPU in Google Colab

Tags: python, tensorflow, keras, google-colaboratory

In Google Colab I have written an IPython notebook in which I build a neural network model, fetch the data from my Google Drive, and train the model.

My code runs without errors and trains the model, but I don't see any speedup when using the Colab GPU compared to the default CPU runtime. Am I using the GPU correctly, or can TensorFlow end up not using Colab's GPU at all?

Some code snippets that may be relevant to this question:

import tensorflow as tf
print(tf.__version__)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Flatten, Dense, TimeDistributed, ReLU, ConvLSTM2D, Activation, Dropout, Reshape
Output:

2.0.0-alpha0
Found GPU at: /device:GPU:0
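The check above only confirms that a GPU is visible to TensorFlow. A quick way to see whether individual operations are actually placed on it is to enable device-placement logging; the snippet below is a minimal sketch using standard TensorFlow 2.x calls and is not part of the original question:

import tensorflow as tf

# Print the device each op is assigned to; the log appears in the notebook output.
tf.debugging.set_log_device_placement(True)

# A small test op: with a GPU runtime this should be placed on /device:GPU:0.
a = tf.random.normal((1000, 1000))
b = tf.random.normal((1000, 1000))
c = tf.matmul(a, b)
print(c.device)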
Building the model:

with tf.device("/gpu:0"):
  model = Sequential()

  #layer1
  model.add(
      TimeDistributed(
          TimeDistributed(
              Conv2D(
                  filters=4, kernel_size=(1,10), strides=(1,10), data_format="channels_last"
              )
          ), input_shape=(40, 5, 7, 100, 1), name="LLConv"
      )
  )
  model.add(TimeDistributed(BatchNormalization(axis=4), name="LBNtes"))
  model.add(TimeDistributed(ReLU(), name="LRelu"))
  #print(model.output_shape)#(None, 40, 5, 7, 10, 4)

  #layer2
  model.add(
      TimeDistributed(
          ConvLSTM2D(
              filters=4, kernel_size=(7,3), strides=(1,1),data_format="channels_last", return_sequences=True
          ), name="LConvLST"
      )
  )

  model.add(TimeDistributed(BatchNormalization(axis=4), name="LBN2"))
  model.add(TimeDistributed(Activation("tanh"), name="Ltanh"))
  #print(model.output_shape)#(None, 40, 5, 1, 8, 4)

  model.add(Reshape((40, 5, 8, 4), name="reshape"))

  #layers3
  model.add(
      ConvLSTM2D(
          filters=1, kernel_size=(4,4), strides=(1,1), data_format="channels_last", name="GConvLSTM", return_sequences=True
      )
  )
  model.add(BatchNormalization(axis=3, name="GBN"))
  model.add(Activation("tanh", name="Gtanh"))
  #print(model.output_shape)#(None, 40, 2, 5, 1)

  model.add(TimeDistributed(Flatten()))
  #print(model.output_shape)#(None, 40, 10)

  model.add(Flatten())
  #layer4
  model.add(Dense(10, name="GDense"))
  model.add(BatchNormalization(axis=-1))
  model.add(ReLU())
  model.add(Dropout(0.5))

  #layer5
  model.add(Dense(1, activation="linear"))


  model.compile(
      loss=tf.keras.losses.MeanSquaredError(),
      optimizer=tf.keras.optimizers.Nadam(lr=0.001, decay=1e-6),
      metrics=['mae', 'mse'],
  )

#model.summary()

Training the model:

EPOCHS = 300
BATCH_SIZE = 15
with tf.device("/gpu:0"):

    history = model.fit(train_features, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(test_features, test_labels))

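To quantify whether the GPU actually speeds training up, one option is to time a few epochs with the device pinned to the CPU and then to the GPU. The sketch below assumes train_features and train_labels are loaded as in the question; build_model() is a hypothetical helper that wraps the Sequential definition above and is not part of the original code:

import time
import tensorflow as tf

def time_fit(device, epochs=2):
    # Build and train the model on the given device, returning the elapsed wall-clock time.
    with tf.device(device):
        model = build_model()   # hypothetical helper wrapping the Sequential model defined above
        start = time.time()
        model.fit(train_features, train_labels, epochs=epochs, batch_size=15, verbose=0)
    return time.time() - start

print("CPU:", time_fit("/cpu:0"), "seconds")
print("GPU:", time_fit("/gpu:0"), "seconds")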

Make sure that you have tensorflow-gpu installed.

Try this first on a fresh Colab notebook with the GPU runtime enabled.
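The answer does not show the exact cell, but in Colab the install and a quick check would look roughly like this (a sketch; note the update under the references below, which suggests that installing tensorflow-gpu manually may no longer be necessary):

# Colab cell: install the GPU build of TensorFlow.
# (Per the update below, this may be unnecessary on current GPU runtimes.)
!pip install tensorflow-gpu

import tensorflow as tf

# Confirm that TensorFlow can see the GPU provided by the runtime.
print(tf.test.is_gpu_available())                           # TF 1.x / early 2.x API
print(tf.config.experimental.list_physical_devices('GPU'))  # TF 2.x API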

References
  • Update: it seems that you no longer need to install tensorflow-gpu in Colab; when you select a GPU runtime, the environment installs tensorflow-gpu under the hood, according to the following video.


    If you try to upgrade TensorFlow yourself by running
    pip install tensorflow-gpu
    the binary you install may not be tuned for the GPU hardware that Colaboratory provides. Instead, you should use the TensorFlow version that comes bundled with Colab.

    At the moment that version is 1.15, but you can switch to the 2.x line by running
    %tensorflow_version 2.x
    (a short example cell follows the references below). At some point in the future, TensorFlow 2.x will become the default.


    For more information, see:
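As a concrete illustration of the version switch described above, a Colab cell might look roughly like this (a sketch; the exact output depends on the runtime assigned):

# Colab-only magic: select the bundled TensorFlow 2.x build instead of the default 1.15.
%tensorflow_version 2.x

import tensorflow as tf
print(tf.__version__)

# List the GPU(s) the runtime exposes; an empty list means the GPU runtime is not enabled.
print(tf.config.experimental.list_physical_devices('GPU'))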

    Which model is it? Please add the code and all of the parameters, especially the batch size. @Omni that upvoted solution explains how to use the GPU with TensorFlow on Windows; I don't see how that maps to doing it in Google Colab. @MatiasValdenegro thanks for your answer; the model is a deep-learning model with many layers (I have added the full model to my question).