Python: expected flatten_input to have 3 dimensions, but got array with shape (python, tensorflow)



I am following the basic classification tutorial. Because of a proxy, I have to use the dataset offline, so instead of the fashion_mnist dataset I am using the mnist dataset:

from __future__ import absolute_import, division, print_function, unicode_literals

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Flatten, Dense

# Noting class names
class_names = ['Zero', 'One', 'Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine']

# Load dataset
mnist = keras.datasets.mnist
path = 'C:/projects/VirtualEnvironment/MyScripts/load/mnist.npz'
(train_x, train_y), (test_x, test_y) = mnist.load_data(path)

# Scale, so that training and testing set is preprocessed in the same way
train_x = train_x / 255.0
test_x = test_y / 255.0
train_y = tf.expand_dims(train_y, axis = -1)
test_y = tf.expand_dims(test_y, axis = -1)

#Build the model

#1. Setup the layers
model = keras.Sequential()
model.add(Flatten(input_shape = (28, 28)))
model.add(Dense(128, activation=tf.nn.relu))
model.add(Dense(10, activation=tf.nn.softmax))


#2. Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(train_x, train_y, epochs=1)
print("Finished Training")

# Evaluate how the model performs on the test dataset
test_loss, test_acc = model.evaluate(test_x,  test_y, verbose=2)

I get the following error:

ValueError: Error when checking input: expected flatten_input to have 3 dimensions, but got array with shape (10000, 1)

I know very little about TensorFlow, so if someone could point me to a helpful page, or explain what the error means, I would be very grateful.

I'm not sure about the array-shape problem, but I can help you with the proxy issue so that you can download the dataset properly. Assuming you have cleared the use of these tools with your IT department, you can set a proxy for pip by exporting it at the terminal level:

Assuming your login credentials are COMPANY\username:

export http_proxy=http://COMPANY%5Cusername:password@proxy_ip:proxy_port
export https_proxy=http://COMPANY%5Cusername:password@proxy_ip:proxy_port
If you are using a conda environment, .condarc is located in C:\Users\username; edit it as:

channels:
- defaults

# Show channel URLs when displaying what is going to be downloaded and
# in 'conda list'. The default is False.
show_channel_urls: True
allow_other_channels: True

proxy_servers:
    http: http://COMPANY\username:password@proxy_ip:proxy_port
    https: https://COMPANY\username:password@proxy_ip:proxy_port


ssl_verify: False

Hope that helps. To debug the array shapes, I suggest printing train_x.shape and train_y.shape (note that shape is an attribute, not a method) after expanding the dimensions. The error says the model received 10000 items of 1 value each, which should not be the case.
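A minimal NumPy sketch (using zero arrays as stand-ins for the MNIST data, so the dataset itself is not needed) shows where a shape like (10000, 1) can come from:

```python
import numpy as np

# NumPy stand-ins with the same shapes as the real MNIST arrays
test_x = np.zeros((10000, 28, 28))   # images: the 3-D input Flatten(input_shape=(28, 28)) expects
test_y = np.zeros((10000,))          # labels

# The typo in the question overwrites the images with the labels:
test_x = test_y / 255.0
print(test_x.shape)                  # (10000,) -- no longer 3-D

# Expanding the last axis, as the question does for the labels, gives (10000, 1),
# the shape reported in the ValueError
print(np.expand_dims(test_x, axis=-1).shape)   # (10000, 1)
```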


This works for me:

(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()

# Scale, so that training and testing set is preprocessed in the same way
train_x = train_x / 255.0
test_x = test_x / 255.0

model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape = (28, 28)))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))


#2. Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(train_x, train_y, epochs=1)

# Evaluate the model
test_loss, test_acc = model.evaluate(test_x,  test_y, verbose=2)
The error you are getting means that somewhere in your code you reshaped the input incorrectly.


This is caused by a typo in your code.

Change this:

test_x = test_y / 255.0

to this:

test_x = test_x / 255.0
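A quick sanity check after the fix (sketched with NumPy stand-ins for the arrays returned by mnist.load_data) confirms the test images keep the 3-D shape the Flatten layer expects:

```python
import numpy as np

# Stand-ins shaped like the arrays returned by mnist.load_data()
test_x = np.random.rand(10000, 28, 28)
test_y = np.random.randint(0, 10, size=(10000,))

test_x = test_x / 255.0   # scale the images, not the labels

# Flatten(input_shape=(28, 28)) expects batches of 28x28 arrays
assert test_x.shape == (10000, 28, 28)
print(test_x.shape)       # (10000, 28, 28)
```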


I found the source of the error; it was

test_x = test_y / 255.0