Python neural network not learning (loss stays constant)

My project partner and I are currently facing a problem in our latest university project. Our task is to implement a neural network that plays Pong. We feed the ball's position, the ball's speed and the paddle's position into our network, and it has three outputs: up, down and do nothing. After one player has scored 11 points, we train the network on all the states, the decisions it made and the reward for each decision (see reward_cal()). The problem we are facing is that the loss constantly stays at one specific value that depends only on the learning rate. Because of that, the network always makes the same decision, even though we think it is badly wrong.

Please help us find out what we are doing wrong, we appreciate every suggestion! Below is our code; if there are any questions, feel free to ask. We are quite new to this topic, so please don't be rude if something is completely stupid :D

Here is our code:

import sys, pygame, time
import numpy as np
import random
from os.path import isfile
import keras
from keras.optimizers import SGD
from keras.layers import Dense
from keras.layers.core import Flatten


pygame.init()
pygame.mixer.init()

#surface of the game
width = 400
height = 600
black = 0, 0, 0 #RGB value
screen = pygame.display.set_mode((width, height), 0, 32)
#(Resolution(x,y), flags, colour depth)
font = pygame.font.SysFont('arial', 36, bold=True)
pygame.display.set_caption('PyPong') #title of window

#consts for the game
acceleration = 0.0025 # ball becomes faster during the game
mousematch = 1
delay_time = 0
paddleP = pygame.image.load("schlaeger.gif")
playerRect = paddleP.get_rect(center = (200, 550))
paddleC = pygame.image.load("schlaeger.gif")
comRect = paddleC.get_rect(center=(200,50))
ball = pygame.image.load("ball.gif")
ballRect = ball.get_rect(center=(200,300))

#Variables for the game
pointsPlayer = [0]
pointsCom = [0]
playermove = [0, 0]
speedbar = [0, 0]
speed = [6, 6]
hitX = 0

#neural const
learning_rate = 0.01
number_of_actions = 3
filehandler = open('logfile.log', 'a')
filename = sys.argv[1]

#neural variables
states, action_prob_grads, rewards, action_probs = [], [], [], []

reward_sum = 0
episode_number = 0
reward_sums = []




pygame.display.flip()


def pointcontrol(): #having a look at the points in the game and restart()
     if pointsPlayer[0] >= 11:
        print('Player Won ', pointsPlayer[0], '/', pointsCom[0])
        restart(1)
        return 1
     if pointsCom[0] >= 11:
        print('Computer Won ', pointsPlayer[0], '/', pointsCom[0])
        restart(1)
        return 1
     elif pointsCom[0] < 11 and pointsPlayer[0] < 11:
        restart(0)
        return 0

def restart(finished): #resetting the positions and the ball speed and (if point limit was reached) the points
     ballRect.center = 200,300
     comRect.center = 200,50
     playerRect.center = 200, 550
     speed[0] = 6
     speed[1] = 6
     screen.blit(paddleC, comRect)
     screen.blit(paddleP, playerRect)
     pygame.display.flip()
     if finished:
         pointsPlayer[0] = 0
         pointsCom[0] = 0

def reward_cal(r, gamma = 0.99): #rewarding every move
     discounted_r = np.zeros_like(r) #making zero array with size of reward array
     running_add = 0
     for t in range(r.size - 1, 0, -1): #iterating beginning in the end
         if r[t] != 0: #if reward -1 or 1 (point made or lost)
             running_add = 0
         running_add = running_add * gamma + r[t] #making every move before the point the same reward but a little bit smaller
         discounted_r[t] = running_add #putting the value in the new reward array
     #e.g r = 000001000-1 -> discounted_r = 0.5 0.6 0.7 0.8 0.9 1 -0.7 -0.8 -0.9 -1 values are not really correct just to make it clear
     return discounted_r
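
# Quick illustrative check of reward_cal: with the default gamma = 0.99 and a
# float reward vector, each point's reward is propagated backwards with
# geometric decay, e.g.
#   reward_cal(np.array([0., 0., 1., 0., 0., -1.]))
#   -> [0.0, 0.99, 1.0, -0.9801, -0.99, -1.0]
# (index 0 stays 0 because the loop stops at t = 1)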


#neural net
model = keras.models.Sequential()
model.add(Dense(16, input_dim = (8), kernel_initializer = 'glorot_normal', activation = 'relu'))
model.add(Dense(32, kernel_initializer = 'glorot_normal', activation = 'relu'))
model.add(Dense(number_of_actions, activation='softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')
model.summary()

if isfile(filename):
     model.load_weights(filename)

# one ball movement before the AI gets to make a decision
ballRect = ballRect.move(speed)
reward_temp = 0.0
if ballRect.left < 0 or ballRect.right > width:
    speed[0] = -speed[0]
if ballRect.top < 0:
    pointsPlayer[0] += 1
    reward_temp = 1.0
    done = pointcontrol()
if ballRect.bottom > height:
    pointsCom[0] += 1
    done = pointcontrol()
    reward_temp = -1.0
if ballRect.colliderect(playerRect):
    speed[1] = -speed[1]
if ballRect.colliderect(comRect):
    speed[1] = -speed[1]
if speed[0] < 0:
    speed[0] -= acceleration
if speed[0] > 0:
    speed[0] += acceleration
if speed[1] < 0:
    speed[1] -= acceleration
if speed[1] > 0 :
    speed[1] += acceleration

while True: #game
     for event in pygame.event.get():
          if event.type == pygame.QUIT:
                pygame.quit()
                sys.exit()

     state = np.array([ballRect.center[0], ballRect.center[1], speed[0], speed[1],
                       playerRect.center[0], playerRect.center[1],
                       comRect.center[0], comRect.center[1]])
     states.append(state)
     action_prob = model.predict_on_batch(state.reshape(1, 8))[0, :]

     action_probs.append(action_prob)
     action = np.random.choice(number_of_actions, p=action_prob)
     if(action == 0): playermove = [0, 0]
     elif(action == 1): playermove = [5, 0]
     elif(action == 2): playermove = [-5, 0]
     playerRect = playerRect.move(playermove)

     y = np.array([-1, -1, -1])
     y[action] = 1
     action_prob_grads.append(y-action_prob)

     #enemy move
     comRect = comRect.move(speedbar)
     ballY = ballRect.left+5
     comRectY = comRect.left+30
     if comRect.top <= (height/1.5):
        if comRectY - ballY > 0:
           speedbar[0] = -7
        elif comRectY - ballY < 0:
           speedbar[0] = 7
     if comRect.top > (height/1.5):
        speedbar[0] = 0

     if(mousematch == 1):
          done = 0
          reward_temp = 0.0
          ballRect = ballRect.move(speed)
          if ballRect.left < 0 or ballRect.right > width:
                speed[0] = -speed[0]
          if ballRect.top < 0:
                pointsPlayer[0] += 1
                done = pointcontrol()
                reward_temp = 1.0
          if ballRect.bottom > height:
                pointsCom[0] += 1
                done = pointcontrol()
                reward_temp = -1.0
          if ballRect.colliderect(playerRect):
                speed[1] = -speed[1]
          if ballRect.colliderect(comRect):
                speed[1] = -speed[1]
          if speed[0] < 0:
                speed[0] -= acceleration
          if speed[0] > 0:
                speed[0] += acceleration
          if speed[1] < 0:
                speed[1] -= acceleration
          if speed[1] > 0 :
                speed[1] += acceleration
          rewards.append(reward_temp)

          if (done):
              episode_number += 1
              reward_sums.append(np.sum(rewards))
              if len(reward_sums) > 40:
                  reward_sums.pop(0)
          s = 'Episode %d Total Episode Reward: %f , Mean %f' % (
              episode_number, np.sum(rewards), np.mean(reward_sums))
              print(s)
              filehandler.write(s + '\n')
              filehandler.flush()

              # Propagate the rewards back to actions where no reward was given.
              # Rewards for earlier actions are attenuated
              rewards = np.vstack(rewards)

              action_prob_grads = np.vstack(action_prob_grads)
              rewards = reward_cal(rewards)

              X = np.vstack(states).reshape(-1, 8)

              Y = action_probs + learning_rate * rewards * y


              print('loss: ', model.train_on_batch(X, Y))

              model.save_weights(filename)

              states, action_prob_grads, rewards, action_probs = [], [], [], []

              reward_sum = 0

          screen.fill(black)
          screen.blit(paddleP, playerRect)
          screen.blit(ball, ballRect)
          screen.blit(paddleC, comRect)
          pygame.display.flip()
          pygame.time.delay(delay_time)

That is the evil ReLU showing its power.

ReLU has a "zero" region with no gradient. When all of its outputs become negative, ReLU makes them all equal to zero and kills backpropagation.
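
As a rough standalone sketch in plain NumPy (independent of the Keras model above), the dead zone looks like this: once every pre-activation is negative, both the ReLU output and its gradient are zero, so no weight update can flow back through that layer.

import numpy as np

x = np.array([-2.0, -0.5, -1.3])      # pre-activations, all negative
relu_out = np.maximum(0.0, x)         # ReLU output -> [0. 0. 0.]
relu_grad = (x > 0).astype(float)     # ReLU derivative -> [0. 0. 0.], so backprop stops here
print(relu_out, relu_grad)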

The easiest solution for using ReLUs safely is to add a BatchNormalization layer before them:

model = keras.models.Sequential()

model.add(Dense(16, input_dim = (8), kernel_initializer = 'glorot_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Dense(32, kernel_initializer = 'glorot_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Dense(number_of_actions, activation='softmax'))
This will keep roughly half of the layer's outputs at zero and the other half trainable.


Other solutions involve controlling your learning rate and optimizer very carefully, which can be quite a headache for beginners.
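
For example (just a sketch; the value 1e-4 is an arbitrary starting point, not something prescribed by this answer), you could pass an optimizer instance with an explicitly lowered learning rate instead of the plain 'adam' string:

from keras.optimizers import Adam

# 1e-4 is only an illustrative starting point; tune it for your setup
model.compile(loss = 'categorical_crossentropy', optimizer = Adam(lr = 1e-4))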

Welcome to SO; please see […] and why […].

First of all, thank you very much for your quick help! Unfortunately, it does not work in our program. We get the following error: model.add(BatchNormalization()) NameError: name 'BatchNormalization' is not defined. Are we missing an import? Thanks again!

You need to import it from Keras first: from keras.layers import BatchNormalization

Thanks Mark, sorry for the stupid question :D It seems to work, at least the loss is changing now. We will see how it looks after some training and keep you updated!