
Python Tensorflow - ValueError and warning with a Conv neural network

I am using Tensorflow to create a CNN model that classifies images of size 124 x 129 into 8 categories.

I need help understanding why I am getting the error:
ValueError: 'images' must have either 3 or 4 dimensions.

I am also getting the warning

WARNING:tensorflow:Model was constructed with shape (1, 124, 129, 8) for input KerasTensor(type_spec=TensorSpec(shape=(1, 124, 129, 8), dtype=tf.float32, name='input_36'), name='input_36', description="created by layer 'input_36'"), but it was called on an input with incompatible shape (None, 129).
The warning appears right before the error, when I try to fit the model to the training set.

Here is the code for the model:

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import models

for spectrogram, _ in training_spect_data.take(1):
  input_shape = spectrogram.shape

print(input_shape)
print(len(commands))

model = models.Sequential([
    layers.Input((124,129,8), batch_size= 1),
    layers.experimental.preprocessing.Resizing(32, 32), 
    layers.Conv2D(32, 3, activation='relu'),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_labels),
])

model.summary()
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)

model.fit(
    training_spect_data, 
    validation_data=validation_spect_data,  
    epochs=10,
    callbacks=tf.keras.callbacks.EarlyStopping(verbose=1, patience=2),
)
If it helps, this is what one data point in the training/test/val sets looks like:

tf.Tensor(
[[4.22809680e-04 1.20909873e-03 1.30543171e-03 ... 1.11539455e-04
  7.03251426e-05 5.72346325e-05]
 [1.37844472e-06 5.68726333e-04 1.01903011e-03 ... 1.72739034e-04
  7.02477628e-05 2.15965847e-05]
 [1.90013321e-04 5.55736362e-04 7.45545258e-04 ... 1.08729822e-04
  1.73325971e-04 1.51859131e-04]
 ...
 [1.93573331e-04 5.46126859e-04 1.61838590e-03 ... 1.15362825e-04
  1.83291835e-04 2.17455061e-04]
 [1.49126354e-04 7.04471953e-04 1.06320635e-03 ... 8.47642514e-05
  3.19860228e-05 1.25371589e-05]
 [1.29039981e-05 2.79012456e-04 5.54071739e-04 ... 3.47834612e-05
  7.82721399e-05 7.47569429e-05]], shape=(124, 129), dtype=float32) tf.Tensor(b'yes', shape=(), dtype=string)
Any help resolving the above error/warning would be greatly appreciated.

In your code you have

 layers.Input((124,129,8), batch_size= 1)
I think it should be

 layers.Input((124,129), batch_size= 1)
The 8 is the number of classes, which is unrelated to the input shape. In fact, I would also leave out the batch_size argument and use

 layers.Input((124,129))
model.fit sets the default batch size to 32.
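Since the model is being fitted on a tf.data.Dataset here, the batch size normally comes from batching the dataset itself rather than from model.fit; a minimal sketch of that, assuming training_spect_data and validation_spect_data are not yet batched:

 train_ds = training_spect_data.batch(32)
 val_ds = validation_spect_data.batch(32)
 # then pass train_ds / val_ds to model.fit in place of the unbatched datasets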

Also, as the last layer of the model you have

layers.Dense(num_labels),
which I think would be better as

layers.Dense(num_labels, activation='softmax')
and then change the loss from

loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

to

loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

If you use SparseCategoricalCrossentropy, make sure your labels are integers.
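Putting these suggestions together, a rough sketch of the revised model and compile step could look like the following. Two assumptions that go beyond the original code: Resizing and Conv2D expect a channels axis, so each (124, 129) spectrogram is given a trailing channel dimension (for example with tf.expand_dims(spectrogram, -1) in a dataset map), and the string labels such as b'yes' are mapped to integer indices into commands so that SparseCategoricalCrossentropy receives integers.

import tensorflow as tf
from tensorflow.keras import layers, models

num_labels = len(commands)  # 8 classes, used only for the size of the output layer

# Hypothetical preprocessing step (not in the original code): add a channel axis
# and turn the string label into an integer class index.
def to_model_inputs(spectrogram, label):
    spectrogram = tf.expand_dims(spectrogram, -1)            # (124, 129) -> (124, 129, 1)
    label_id = tf.argmax(tf.cast(label == commands, tf.int32))
    return spectrogram, label_id

model = models.Sequential([
    layers.Input((124, 129, 1)),                             # no fixed batch size, channels last
    layers.experimental.preprocessing.Resizing(32, 32),
    layers.Conv2D(32, 3, activation='relu'),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_labels, activation='softmax'),          # probabilities instead of logits
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # matches the softmax output
    metrics=['accuracy'],
)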