How do I get Keras to use multiple cores?


I've read that Keras supports multiple cores automatically as of version 2.2.4+, but my job runs on only one thread.

Below is a snippet of my code:

import numpy as np
import tensorflow as tf
from tensorflow import keras

# Import layers/optimizers from tensorflow.keras rather than the standalone
# keras package, so everything comes from the same backend.
from tensorflow.keras.layers import Dense, SimpleRNN, GRU, LSTM
from tensorflow.keras.optimizers import SGD

epochs_ = 1000
batch_size_ = 150

np.random.seed(42)
tf.random.set_seed(42)

# simple RNN (Lagged_Set, n_ahead, and the train/valid splits are defined earlier)
data_ = Lagged_Set

model6 = keras.models.Sequential([
    # only the first layer needs input_shape
    keras.layers.SimpleRNN(32, return_sequences=True, input_shape=[None, len(data_.columns)]),
    keras.layers.SimpleRNN(32, return_sequences=True),
    keras.layers.SimpleRNN(32, return_sequences=True),
    keras.layers.SimpleRNN(32, return_sequences=True),
    keras.layers.SimpleRNN(32, return_sequences=True),
    keras.layers.SimpleRNN(32, return_sequences=True),
    keras.layers.SimpleRNN(32, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(n_ahead))
])

model6.compile(loss="MAPE", optimizer="rmsprop", metrics=["MAPE"])
history = model6.fit(X_train, Y_train, epochs=epochs_, batch_size=batch_size_,
                     validation_data=(X_valid, Y_valid))
I tried this, but it didn't work:

# NOTE: in TF 2.x this compat-mode session config only takes effect
# if it runs before any other TensorFlow op is created.
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=8,
                                        inter_op_parallelism_threads=8)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
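On TF 2.x, an alternative worth trying in place of the compat-mode ConfigProto is the native threading API. A minimal sketch, assuming a thread count of 8 (an example value; these calls must run before TensorFlow executes any op, or they raise an error):

```python
import tensorflow as tf

# Must be called before TensorFlow initializes its runtime,
# i.e. before any tensor is created or any model is built.
tf.config.threading.set_intra_op_parallelism_threads(8)  # threads within a single op
tf.config.threading.set_inter_op_parallelism_threads(8)  # threads across independent ops
```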

Someone suggested that, right after the import statements, I try

os.environ['NUMEXPR_NUM_THREADS'] =

but it still spawns only one process.
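For the environment-variable route, the key detail is that the variables must be set before tensorflow (or numexpr) is first imported; the libraries size their thread pools at initialization time, so changing os.environ afterwards has no effect. A minimal sketch, again assuming 8 as an example thread count:

```python
import os

# Set BEFORE importing tensorflow/numexpr; later changes are ignored
# because the thread pools are sized when the library initializes.
os.environ["OMP_NUM_THREADS"] = "8"      # OpenMP, used by many CPU kernels
os.environ["NUMEXPR_NUM_THREADS"] = "8"  # numexpr, if it is part of the stack

# ...only now import tensorflow and build the model.
```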