Python: How can I reduce the episode time in my DQN?


I have modified the OpenAI environment so that it starts from the inverted position and has to learn to swing up. I run it on Google Colab because it is much faster than my laptop, or so I thought. It is extremely slow... a single episode takes about 40 seconds, roughly the same as on my laptop. I even tried to optimize it for a Google TPU, but nothing changed. I believe the main time consumers are .fit() and .predict(). Here is where I use .predict():

def get_qs(self, state):
    return self.model.predict(np.array(state).reshape(-1, *state.shape), workers=8, use_multiprocessing=True)[0]
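A large share of the per-step cost here is often the fixed overhead of Model.predict(), which sets up an input pipeline on every call; for a single state, calling the Keras model directly is usually much cheaper. Below is a minimal sketch of that idea (get_qs and self.model are from the question, everything else is an assumption about the surrounding agent class):

import numpy as np
import tensorflow as tf

def get_qs(self, state):
    # Sketch: call the model directly instead of Model.predict().
    # model(x, training=False) skips predict()'s per-call pipeline setup,
    # which adds up when this runs once per environment step.
    state = np.asarray(state, dtype=np.float32)
    q_values = self.model(state.reshape(1, *state.shape), training=False)
    return q_values[0].numpy()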

And here is where I use .fit(); the full train() method is shown below, after the comments.


Can anyone help me speed things up?

What version of Tensorflow are you using?
I am using Tensorflow 2.3.0 and Python 3.8.5.
@tf.function 
def train(self, terminal_state, step):
    "Zum trainieren lohnt es sich immer einen größeren Satz von Daten zu nehmen um ein Overfitting zu verhindern"
    if len(self.replay_memory) < MIN_REPLAY_MEMORY_SIZE:
        return
    
    # Get a minibatch of random samples from memory replay table
    minibatch = random.sample(self.replay_memory, MINIBATCH_SIZE)

    # Get current states from minibatch, then query NN model for Q values
    current_states = np.array([transition[0] for transition in minibatch])
    current_qs_list = self.model.predict(current_states)

    # Get future states from minibatch, then query NN model for Q values
    # When using target network, query it, otherwise main network should be queried
    new_current_states = np.array([transition[3] for transition in minibatch])
    future_qs_list = self.target_model.predict(new_current_states, workers = 8, use_multiprocessing = True)

    X = []
    y = []

    # Now we need to enumerate our batches
    for index, (current_state, action, reward, new_current_state, done) in enumerate(minibatch):

        # If not a terminal state, get new q from future states, otherwise set it to 0
        # almost like with Q Learning, but we use just part of equation here
        if not done:
            max_future_q = np.max(future_qs_list[index])
            new_q = reward + DISCOUNT * max_future_q
        else:
            new_q = reward

        # Update Q value for given state
        current_qs = current_qs_list[index]
        current_qs[action] = new_q

        # And append to our training data
        X.append(current_state)
        y.append(current_qs)
    
    # Fit on all samples as one batch, log only on terminal state
    self.model.fit(np.array(X), np.array(y), batch_size=MINIBATCH_SIZE, verbose=0, shuffle=False, workers = 8, use_multiprocessing = True)
    # Update target network counter every episode
    if terminal_state:
        self.target_update_counter += 1

    # If counter reaches set value, update target network with weights of main network
    if self.target_update_counter > UPDATE_TARGET_EVERY:
        self.target_model.set_weights(self.model.get_weights())
        self.target_update_counter = 0
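
For what it's worth, the per-minibatch work above can also be expressed without the Python loop and without Model.predict(): build the Bellman targets with vectorized NumPy and run one direct, batched model call per network. A rough sketch under those assumptions (constants such as MINIBATCH_SIZE, DISCOUNT, MIN_REPLAY_MEMORY_SIZE and UPDATE_TARGET_EVERY are the ones from the question; the rest is illustrative, not a drop-in replacement):

import random
import numpy as np

def train(self, terminal_state, step):
    if len(self.replay_memory) < MIN_REPLAY_MEMORY_SIZE:
        return

    minibatch = random.sample(self.replay_memory, MINIBATCH_SIZE)

    # One batched forward pass per network, calling the models directly instead of predict()
    current_states = np.array([t[0] for t in minibatch], dtype=np.float32)
    next_states = np.array([t[3] for t in minibatch], dtype=np.float32)
    current_qs = self.model(current_states, training=False).numpy()
    future_qs = self.target_model(next_states, training=False).numpy()

    actions = np.array([t[1] for t in minibatch], dtype=np.int64)
    rewards = np.array([t[2] for t in minibatch], dtype=np.float32)
    dones = np.array([t[4] for t in minibatch], dtype=np.float32)

    # Vectorized Bellman targets: terminal transitions keep only the reward
    targets = rewards + DISCOUNT * np.max(future_qs, axis=1) * (1.0 - dones)
    current_qs[np.arange(MINIBATCH_SIZE), actions] = targets

    self.model.fit(current_states, current_qs, batch_size=MINIBATCH_SIZE, verbose=0, shuffle=False)

    # Target network bookkeeping, unchanged from the question
    if terminal_state:
        self.target_update_counter += 1
    if self.target_update_counter > UPDATE_TARGET_EVERY:
        self.target_model.set_weights(self.model.get_weights())
        self.target_update_counter = 0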