
Neural network: OpenAI gym LunarLander model not converging

Tags: neural-network, keras, deep-learning, reinforcement-learning, q-learning

I'm trying to use deep reinforcement learning with Keras to train an agent to learn how to play the Lunar Lander environment. The problem is that my model is not converging. Here is my code:

import numpy as np
import gym

from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

def get_random_action(epsilon):
    # With probability epsilon, choose to explore (take a random action)
    return np.random.rand(1) < epsilon

def get_reward_prediction(q, a):
    # Predict the reward for taking action a in state q: the network input is
    # the state vector concatenated with a one-hot encoding of the action.
    qs_a = np.concatenate((q, table[a]), axis=0)
    x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
    x[0] = qs_a
    guess = model.predict(x[0].reshape(1, x.shape[1]))
    r = guess[0][0]
    return r

results = []
epsilon = 0.05                  # exploration probability
alpha = 0.003                   # learning rate
gamma = 0.3                     # discount factor
environment_parameters = 8      # size of the LunarLander observation vector
num_of_possible_actions = 4
obs = 15                        # number of initial episodes that act purely at random
mem_max = 100000                # maximum size of the replay memory
epochs = 3                      # epochs per call to model.fit
total_episodes = 15000

possible_actions = np.arange(0, num_of_possible_actions)
table = np.zeros((num_of_possible_actions, num_of_possible_actions))
table[np.arange(num_of_possible_actions), possible_actions] = 1

env = gym.make('LunarLander-v2')
env.reset()

i_x = np.random.random((5, environment_parameters + num_of_possible_actions))
i_y = np.random.random((5, 1))

model = Sequential()
model.add(Dense(512, activation='relu', input_dim=i_x.shape[1]))
model.add(Dense(i_y.shape[1]))

opt = optimizers.Adam(lr=alpha)

model.compile(loss='mse', optimizer=opt, metrics=['accuracy'])

total_steps = 0
i_x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
i_y = np.zeros(shape=(1, 1))

mem_x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
mem_y = np.zeros(shape=(1, 1))
max_steps = 40000

for episode in range(total_episodes):
    g_x = np.zeros(shape=(1, environment_parameters + num_of_possible_actions))
    g_y = np.zeros(shape=(1, 1))
    q_t = env.reset()
    episode_reward = 0

    for step_number in range(max_steps):
        if episode < obs:
            a = env.action_space.sample()
        else:
            if get_random_action(epsilon):
                a = env.action_space.sample()
            else:
                actions = np.zeros(shape=num_of_possible_actions)

                for i in range(4):
                    actions[i] = get_reward_prediction(q_t, i)

                a = np.argmax(actions)

        # env.render()
        qa = np.concatenate((q_t, table[a]), axis=0)

        s, r, episode_complete, data = env.step(a)
        episode_reward += r

        if step_number == 0:
            g_x[0] = qa
            g_y[0] = np.array([r])
            mem_x[0] = qa
            mem_y[0] = np.array([r])

        g_x = np.vstack((g_x, qa))
        g_y = np.vstack((g_y, np.array([r])))

        if episode_complete:
            # Walk backwards through the episode, turning the stored rewards into
            # discounted returns: G_t = r_t + gamma * G_{t+1}
            for i in range(0, g_y.shape[0]):
                if i != 0:
                    g_y[(g_y.shape[0] - 1) - i][0] = g_y[(g_y.shape[0] - 1) - i][0] + gamma * g_y[(g_y.shape[0] - 1) - i + 1][0]

            if mem_x.shape[0] == 1:
                mem_x = g_x
                mem_y = g_y
            else:
                mem_x = np.concatenate((mem_x, g_x), axis=0)
                mem_y = np.concatenate((mem_y, g_y), axis=0)

            if len(mem_x) >= mem_max:
                # Drop the oldest transitions once the replay memory is full
                for l in range(len(g_x)):
                    mem_x = np.delete(mem_x, 0, axis=0)
                    mem_y = np.delete(mem_y, 0, axis=0)

        q_t = s

        if episode_complete and episode >= obs:
            if episode%10 == 0:
                model.fit(mem_x, mem_y, batch_size=32, epochs=epochs, verbose=0)

        if episode_complete:
            results.append(episode_reward)
            break

I'm running tens of thousands of episodes and my model still won't converge. It starts reducing the average policy change over roughly 5,000 episodes while increasing the average reward, but then it goes off the deep end and the average reward per episode actually drops after that. I've tried messing with the hyperparameters, but so far without success. I'm trying to model my code after …

You may want to change your get_random_action function so that it decays epsilon with each episode. After all, assuming your agent can learn an optimal policy, at some point you won't want to take random actions at all, right? Here is a slightly different version of get_random_action that will do that for you:

def get_random_action(epsilon, total_episodes, episode):
    explore_prob = epsilon - (epsilon * (episode / total_episodes))
    return np.random.rand(1) < explore_prob
With this modified version of the function, epsilon will decrease slightly with every episode, which may help your model converge.
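
For illustration (this snippet is not part of the original answer), the call site inside your episode loop would then pass the extra arguments; total_episodes, episode, q_t and num_of_possible_actions are reused from the question's code:

# Illustrative call site using the decayed version of get_random_action above
if get_random_action(epsilon, total_episodes, episode):
    a = env.action_space.sample()
else:
    a = np.argmax([get_reward_prediction(q_t, i) for i in range(num_of_possible_actions)])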


There are several ways to decay a parameter. For more information, check out …
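
As one illustration (mine, not from the linked resource): instead of the linear schedule above, you can decay epsilon multiplicatively each episode and clamp it to a floor. The constants below are made-up example values:

import numpy as np

epsilon_start = 1.0   # initial exploration probability (illustrative)
epsilon_min = 0.01    # floor that exploration never drops below
decay_rate = 0.995    # multiplied in once per episode

def decayed_epsilon(episode):
    # Shrinks geometrically with the episode index, but never below epsilon_min
    return max(epsilon_min, epsilon_start * (decay_rate ** episode))

def get_random_action_exp(episode):
    return np.random.rand(1) < decayed_epsilon(episode)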

I've had some success implementing this recently.

Basically, I let the agent act randomly for 3,000 frames while collecting those frames as the initial training data (states) and labels (rewards); after that, I retrain the neural network model every 100 frames and let the model decide which action gets the best score.
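
Roughly, that scheme could look like the sketch below. The names warmup_frames, train_every, max_frames and predict_scores are my own placeholders (not from the repository mentioned next), while env, model, table and num_of_possible_actions are reused from the question's code:

import numpy as np

warmup_frames = 3000   # frames of purely random play collected as initial training data
train_every = 100      # retrain the model every 100 frames after the warm-up
max_frames = 20000     # illustrative total frame budget

def predict_scores(state):
    # Predicted reward for each action: state concatenated with its one-hot action, as in the question
    xs = np.array([np.concatenate((state, table[a])) for a in range(num_of_possible_actions)])
    return model.predict(xs).flatten()

states, rewards = [], []
s = env.reset()
for frame in range(max_frames):
    if frame < warmup_frames:
        a = env.action_space.sample()              # random exploration phase
    else:
        a = int(np.argmax(predict_scores(s)))      # act greedily on the model's predictions
    s2, r, done, info = env.step(a)
    states.append(np.concatenate((s, table[a])))   # training input: state + one-hot action
    rewards.append(r)                              # training label: observed reward
    if frame >= warmup_frames and frame % train_every == 0:
        model.fit(np.array(states), np.array(rewards), verbose=0)
    s = env.reset() if done else s2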


Have a look at my GitHub, it might help. Oh, and my training iterations are also on YouTube, …



Hmm... I'll give this a try and get back to you.
Hope this helps!