Python: how can I open the TF dynamic libraries only once when creating multiple models?

Tags: python, tensorflow, tflearn

I am using the TFLearn library with Python 3 on Windows 10. I am trying to find out which NN architecture works best for my task. To do that, I run a training loop that recreates the model with different numbers of layers, numbers of neurons, and activation functions. I create a model with new parameters using this code:

    # TFLearn imports used below
    import tflearn
    from tflearn.layers.core import input_data, fully_connected
    from tflearn.layers.estimator import regression

    def create_model(self, layers, neurons, activation):
        # input -> `layers` hidden layers of `neurons` units each -> linear output
        self.model = input_data(shape=[None, 5, 1], name='input')
        for _ in range(layers):
            self.model = fully_connected(self.model, neurons, activation=activation)
        self.model = fully_connected(self.model, 1, activation='linear')
        self.model = regression(self.model, optimizer='adam', loss='mean_square', name='target')
        self.model = tflearn.DNN(self.model, tensorboard_dir='log', tensorboard_verbose=0)
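
For context, the surrounding search loop is roughly the following (a simplified sketch; the wrapper class name, the candidate values, and the placeholder data are illustrative, not my exact code):

    import numpy as np

    X = np.zeros((1000, 5, 1))   # stand-in for the real training data (1,774,722 samples)
    Y = np.zeros((1000, 1))

    trainer = ModelTrainer()     # hypothetical class that owns create_model() above
    for layers in (1, 2, 3):
        for neurons in (16, 32, 64):
            for activation in ('relu', 'tanh'):
                trainer.create_model(layers, neurons, activation)   # prints the log below every time
                trainer.model.fit(X, Y, n_epoch=5, run_id='network_data')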
Every time I call this function, I see the following output in the terminal:

2020-12-12 16:56:53.676047: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x21c784f9b70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-12-12 16:56:53.681315: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-12-12 16:56:53.683553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1050 computeCapability: 6.1
coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s
2020-12-12 16:56:53.688392: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2020-12-12 16:56:53.690786: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2020-12-12 16:56:53.693109: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2020-12-12 16:56:53.695334: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2020-12-12 16:56:53.699347: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2020-12-12 16:56:53.702131: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2020-12-12 16:56:53.703861: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2020-12-12 16:56:53.705667: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-12-12 16:56:54.284391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-12-12 16:56:54.286234: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2020-12-12 16:56:54.287394: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
2020-12-12 16:56:54.288704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2989 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-12-12 16:56:54.294857: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x21c4b1f92a0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-12-12 16:56:54.297874: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1050, Compute Capability 6.1
2020-12-12 16:56:55.432244: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1050 computeCapability: 6.1
coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s
2020-12-12 16:56:55.437698: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2020-12-12 16:56:55.439990: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2020-12-12 16:56:55.442294: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2020-12-12 16:56:55.444853: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2020-12-12 16:56:55.446534: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2020-12-12 16:56:55.448272: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2020-12-12 16:56:55.450792: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2020-12-12 16:56:55.453253: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-12-12 16:56:55.454759: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-12-12 16:56:55.456519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2020-12-12 16:56:55.459141: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
2020-12-12 16:56:55.460402: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2989 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
---------------------------------
Run id: network_data
Log directory: log/
---------------------------------
Training samples: 1774722
Validation samples: 0

If the dynamic libraries are already open, can I skip opening them again when creating a new model? Does this cause any problems for completing my task?

You see all of this information in the log because you are using Tensorflow GPU. As long as you do not see any lines prefixed with E or W, there is no problem.

Thanks @TFer2, I don't see any warnings or errors. But while these messages are being printed I see a delay between deleting one model and creating the next one. Maybe there is a way to tell TF that all the required libraries are already open, to reduce that delay?
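
One thing that could be tried here (not confirmed in this thread): the DLLs are loaded once per Python process, so building every candidate model in the same process avoids reloading them; the remaining INFO lines can be hidden with TF_CPP_MIN_LOG_LEVEL, and resetting the default graph between builds is the usual way to keep repeated model creation from slowing down. A minimal, self-contained sketch, assuming TFLearn 0.5+ running on TF 2.x (the data and the parameter grid are placeholders):

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'   # hide INFO lines, keep W/E; must be set before importing TF

    import numpy as np
    import tensorflow as tf
    import tflearn
    from tflearn.layers.core import input_data, fully_connected
    from tflearn.layers.estimator import regression

    def build_model(layers, neurons, activation):
        net = input_data(shape=[None, 5, 1], name='input')
        for _ in range(layers):
            net = fully_connected(net, neurons, activation=activation)
        net = fully_connected(net, 1, activation='linear')
        net = regression(net, optimizer='adam', loss='mean_square', name='target')
        return tflearn.DNN(net, tensorboard_dir='log', tensorboard_verbose=0)

    X = np.random.rand(1000, 5, 1)   # placeholder data
    Y = np.random.rand(1000, 1)

    # All models are built in one process, so the CUDA DLLs are only opened once at start-up.
    for layers in (1, 2):
        for neurons in (16, 32):
            tf.compat.v1.reset_default_graph()   # drop the previous graph before building the next model
            model = build_model(layers, neurons, 'relu')
            model.fit(X, Y, n_epoch=1, run_id='network_data')

On a pure TF 1.x setup the equivalent call is tf.reset_default_graph(); either way the reset only clears the Python-side graph, while the loaded libraries stay in memory for the lifetime of the process.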