Reinforcement learning: why does my agent always take the same action in DQN reinforcement learning?

Tags: reinforcement-learning, q-learning, policy-gradient-descent

I trained an RL agent with the DQN algorithm. After 20,000 episodes my rewards converged. Now, when I test the agent, it always performs the same action regardless of the state, which seems strange to me. Can anyone help? Is there any reason you can think of why the agent would behave like this?

Reward scheme:

When I test the agent:

import numpy as np
import matplotlib.pyplot as plt

state = env.reset()
print('State: ', state)

state_encod = np.reshape(state, [1, state_size])
q_values = model.predict(state_encod)
action_key = np.argmax(q_values)
print(action_key)
print(index_to_action_mapping[action_key])
print(q_values[0][0])
print(q_values[0][action_key])

q_values_plotting = []
for i in range(0,action_size):
    q_values_plotting.append(q_values[0][i])


plt.plot(np.arange(0,action_size),q_values_plotting)
It produces the same q_values plot every time, even though the initial state returned by reset() is different each time. Below is the q_values plot.
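To rule out the network simply ignoring its input, one quick check is to run the same prediction code over several freshly reset states and compare the resulting Q-vectors. This is a minimal diagnostic sketch, assuming the same env, model and state_size objects as in the snippet above:

import numpy as np

q_tables = []
for _ in range(5):
    s = env.reset()                                  # a different random initial state each time
    s_encod = np.reshape(s, [1, state_size])
    q = model.predict(s_encod)[0]
    q_tables.append(q)
    print('state:', s, '-> argmax action:', np.argmax(q))

# If every row printed here is (almost) identical, the network output does not
# depend on the state at all, which points to input scaling or reward design
# rather than to a bug in the test-time code.
print(np.round(np.array(q_tables), 3))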

Testing:

Code

Thanks.

Adding the environment:


import gym
import rom_vav_150mm_polyreg as rom
import numpy as np
import random

class VAVenv(gym.Env):

    def __init__(self):
        # Zone temperature set point and limits
        self.temp_sp = 24
        self.temp_sp_max = 24.5
        self.temp_sp_min = 23.7

        # no; of hours in an episode and time interval for each step
        self.MAXSTEPS = 11
        self.time_interval = 5./60. #in hrs

        # constants
        self.zone_volume = 775


    def step(self,state,action):

        # state -> Time, Volume, Load, SAT ,RAT
        # action -> CFM

        action_cfm = action[0]

#        damper_opening = state[2]
        load = state[2]
        sat = state[3]
        current_temp = state[4]

        #input
        inputs_rat = np.array([load,action_cfm, self.zone_volume,current_temp,sat])

        '''
        AFTER 5 MINUTES
        '''
        #output
        output = [self.KStep + self.time_interval,self.zone_volume,rom.load(self.KStep + self.time_interval),
                  sat,rom.rat(inputs_rat)]


        #reward calculation
        thermal_coefficient = -0.1

        zone_temperature = output[4]

        if zone_temperature < self.temp_sp_min:
            temp_penalty = self.temp_sp_min - zone_temperature
        elif zone_temperature > self.temp_sp_max:
            temp_penalty = zone_temperature - self.temp_sp_max
        else :
            temp_penalty = -10

        reward = thermal_coefficient * temp_penalty

        # create next step
        next_state = np.array(output)

        # increment simulation step count
        self.KStep += self.time_interval

        # done - end of one episode, when kSteps reaches the maximum steps in an episode
        done = False
        if self.KStep > self.MAXSTEPS:
            done = True


        return next_state,reward,done


    def reset(self):
        self.KStep = 0

        # initialize all the values of a state
        initial_rat = random.uniform(23,27)
        initial_sat = random.uniform(12,14)
        # return a state
        return np.array([self.KStep,self.zone_volume,
                         rom.load(self.KStep),initial_sat,initial_rat])







Assuming your code has no bugs, the problem may lie in how the reward is designed, so can you share more information about the environment and the reward? The short explanation is that DQN has learned that one of the actions yields a larger reward, so it always chooses that action; a detailed answer would require some knowledge of the environment.

@cvg Dear friend, did you solve your problem? I have the same issue.
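For reference, here is the reward rule from the posted step() method pulled out into a standalone sketch (not part of the original post) and evaluated for a few zone temperatures. It makes the asymmetry visible: staying inside the comfort band earns +1.0 per step, while even a large temperature deviation only costs a few tenths of a point, so a CFM action that keeps the zone in band for most of the states seen during training can end up with the highest Q-value for every state.

# Reward rule copied from the posted step() method, evaluated in isolation.
temp_sp_min, temp_sp_max = 23.7, 24.5
thermal_coefficient = -0.1

def reward_for(zone_temperature):
    if zone_temperature < temp_sp_min:
        temp_penalty = temp_sp_min - zone_temperature
    elif zone_temperature > temp_sp_max:
        temp_penalty = zone_temperature - temp_sp_max
    else:
        temp_penalty = -10
    return thermal_coefficient * temp_penalty

for t in [22.0, 23.7, 24.0, 24.5, 27.0]:
    print(t, round(reward_for(t), 2))
# 22.0 -> -0.17, anything inside [23.7, 24.5] -> +1.0, 27.0 -> -0.25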
