Python TensorFlow: Check failed: NDIMS == new_size.size() (2 vs. 1)
I am completely new to TensorFlow. I am working on a project and got this error message:

2018-05-13 20:50:57.669722: F T:\src\github\tensorflow\tensorflow/core/framework/tensor.h:630] Check failed: NDIMS == new_size.size() (2 vs. 1)

PyCharm says: Process finished with exit code 1073740791 (0xC0000409)

I don't know what that means. I am running Windows and Python 3.6. Here is my code:
import tensorflow as tf
import gym
import numpy as np

env = gym.make("MountainCar-v0").env

n_inputs = 2
n_hidden = 3
n_output = 3
initializer = tf.contrib.layers.variance_scaling_initializer()
learning_rate = 0.1

X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_output, kernel_initializer=initializer)
outputs = tf.nn.softmax(logits)
index, action = tf.nn.top_k(logits, 1)
y = tf.to_float(action)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cross_entropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
    gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
    gradient_placeholders.append(gradient_placeholder)
    grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)

# Initialize variables and the saver
init = tf.global_variables_initializer()
saver = tf.train.Saver()

# Discount the rewards of the individual steps
def discount_rewards(rewards, discount_rate):
    discounted_rewards = np.empty(len(rewards))
    cumulative_rewards = 0
    for step in reversed(range(len(rewards))):
        cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
        discounted_rewards[step] = cumulative_rewards
    return discounted_rewards

def discount_and_normalize_rewards(all_rewards, discount_rate):
    all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards]
    # Concatenate all rewards into one array
    flat_rewards = np.concatenate(all_discounted_rewards)
    reward_mean = flat_rewards.mean()
    reward_std = flat_rewards.std()
    return [(discounted - reward_mean) / reward_std for discounted in all_discounted_rewards]

n_iterations = 25
n_max_steps = 10000
n_games_per_update = 10
save_iteration = 10
discount_rate = 0.95

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        all_rewards = []
        my_rewards = []
        all_gradients = []
        for game in range(n_games_per_update):
            current_rewards = []
            current_gradients = []
            # env.render()
            obs = env.reset()
            for step in range(n_max_steps):
                action_val, gradient_val = sess.run([action, gradients], feed_dict={X: obs.reshape(1, n_inputs)})
                obs, reward, done, info = env.step(action_val)
                current_rewards.append(reward)
                current_gradients.append(gradient_val)
                if done:
                    break
            my_rewards.append(sum(current_rewards))
            print(iteration, ": ", sum(current_rewards))
            all_rewards.append(current_rewards)
            all_gradients.append(current_gradients)
        all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate)
        feed_dict = {}
        for var_index, grad_placeholder in enumerate(gradient_placeholders):
            mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index]
                                      for game_index, rewards in enumerate(all_rewards)
                                      for step, reward in enumerate(rewards)], axis=0)
            feed_dict[grad_placeholder] = mean_gradients
        sess.run(training_op, feed_dict=feed_dict)
        if iteration % save_iteration == 0:
            saver.save(sess, "./my_policy_net_pg.ckpt")

print("Average: ", sum(my_rewards) / len(my_rewards))
print("Maximum: ", max(my_rewards))
These lines seem to contain multiple bugs:

index, action = tf.nn.top_k(logits, 1)
y = tf.to_float(action)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
First of all, tf.nn.top_k returns the values first and the indices second, so action will hold the indices while index will hold the values. y then becomes that index (cast to a float), and that is what you pass as the labels argument of softmax_cross_entropy_with_logits_v2.
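To make the return order concrete, here is a minimal check (a sketch assuming TF 1.x as in the question; the constant is made up for illustration):

import tensorflow as tf

logits = tf.constant([[1.0, 3.0, 2.0]])
values, indices = tf.nn.top_k(logits, 1)  # returns (values, indices), in that order

with tf.Session() as sess:
    v, i = sess.run([values, indices])
    print(v)  # [[3.]] -> the largest logit
    print(i)  # [[1]]  -> its position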
Using these prediction-derived labels has two main problems. First, you should pass the labels as one-hot vectors, not as indices. I think that is where your error comes from: you are passing a 1-dimensional tensor where a 2-dimensional one is expected.
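For illustration, one way to get labels of the right shape in TF 1.x is tf.one_hot, which turns class indices into one-hot rows matching the [batch, n_output] shape of the logits. A minimal sketch reusing the question's names (note that this alone does not address the second problem below):

action_idx = tf.squeeze(action, axis=1)     # [batch, 1] -> [batch] of class indices
y = tf.one_hot(action_idx, depth=n_output)  # [batch, n_output], float32 one-hot rows
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)

Alternatively, tf.nn.sparse_softmax_cross_entropy_with_logits accepts integer class indices directly, without any one-hot conversion.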
The second problem is theoretical (unrelated to your error, but I want to point it out): since logits is your prediction and you derive y from it, you are essentially comparing your logits against themselves. No learning can happen that way. You need to provide actual labels and learn from those.
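In a reinforcement-learning task like MountainCar there are no per-step ground-truth labels to feed. A common policy-gradient pattern (the one used in Géron's Hands-On Machine Learning, which this code appears to be based on; that attribution is my assumption) is to sample the action from the softmax policy instead of taking the argmax, use the sampled action as the target, and let the reward-weighted gradients decide whether to reinforce it. A sketch of that idea:

# Sample an action from the policy instead of deriving it from the argmax of the logits.
action = tf.multinomial(tf.log(outputs), num_samples=1)     # [batch, 1], int64
y = tf.one_hot(tf.squeeze(action, axis=1), depth=n_output)  # [batch, n_output]
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.stop_gradient(y), logits=logits)

With this, env.step in the episode loop would need a plain integer, e.g. int(action_val[0][0]).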
Just a note: it is usually helpful to post the entire error traceback rather than only its last line; as it stands I am only guessing where the error is and cannot be certain.

Comment: How do I correct the logits mistake that keeps the model from learning? Do I just remove those lines?

Comment: @MasonChoi In the case above he also used the predictions as labels. You need to use the original labels from your training set (the ground truth) as the labels so that the network can learn. Merely deleting those lines would remove any feedback from the system, so that would not work either.