Python ValueError: Input 0 of layer sequential_9 is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

Tags: python, tensorflow, keras, deep-learning, max-pooling

I am working on a classification problem, and I don't understand why I get this error:

ValueError: Input 0 of layer sequential_9 is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]
Here is the main code:

model = createModel()
filesPath = getFilesPathWithoutSeizure(i, indexPat)
history = model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75))  ## problem here

def createModel():
    input_shape = (1, 11, 3840)
    model = Sequential()
    # C1
    model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu',
                     data_format="channels_first", input_shape=input_shape))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C2
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same',
                     data_format="channels_first", activation='relu'))  # unsure whether to drop padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    # C3
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same',
                     data_format="channels_first", activation='relu'))  # unsure whether to drop padding
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), data_format="channels_first", padding='same'))
    model.add(BatchNormalization())
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))
    opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])
    return model
The exception is raised at:

history = model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75))  # end=75: it takes the first 75%

Full traceback:
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1815, in fit_generator
return self.fit(
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
tmp_logs = train_function(iterator)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 696, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3065, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:747 train_step
y_pred = self(x, training=True)
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:975 __call__
input_spec.assert_input_compatibility(self.input_spec, inputs,
/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/input_spec.py:191 assert_input_compatibility
raise ValueError('Input ' + str(input_index) + ' of layer ' +
ValueError: Input 0 of layer sequential_9 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]
Keras always hides the 0th dimension, also known as the batch dimension. Wherever you write input_shape=(A, B, C), you should never mention the batch dimension; (A, B, C) should be the shape of one object (in your case, one image). For example, input_shape=(1, 11, 3840) means that the data used for training or prediction should be a numpy array of a shape like (7, 1, 11, 3840), i.e. training with a batch of 7 objects. That 7 is the batch size, the number of objects trained in parallel.

So if one of your objects (e.g. an image) has shape (11, 3840), then you have to write input_shape=(11, 3840) everywhere, without mentioning the batch size.
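To illustrate the batch-dimension rule above, here is a minimal numpy sketch (the zero-filled array is dummy data standing in for the real training batch):

```python
import numpy as np

# A batch of 7 objects, each of shape (11, 3840): axis 0 is the batch axis.
batch = np.zeros((7, 11, 3840))

# The shape of a single object excludes the batch dimension;
# this per-object shape is what goes into input_shape.
single = batch[0]
print(single.shape)  # (11, 3840) -> input_shape=(11, 3840)
```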
Why does Keras hide the 0th (batch) dimension? Because Keras expects batches of varying size: today you can feed it 7 objects, tomorrow 9, and the same network works for both. But the shape of one object, (11, 3840), should never change, and the data produced by generate_arrays_for_training() should always have shape (BatchSize, 11, 3840), where BatchSize can vary; you may generate batches of 1, or 7, or 9 object-images, each of shape (11, 3840).

If all layers expect three-dimensional images with a single channel, then you must expand the dims of the generated training data: do X = np.expand_dims(X, 0), so that the training data X has shape (1, 1, 11, 3840), e.g. a batch with one object; only then can you use input_shape=(1, 11, 3840).
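A short sketch of that dims expansion, using numpy dummy data in place of the generator output:

```python
import numpy as np

X = np.zeros((1, 11, 3840))  # one object with a single leading channel axis
X = np.expand_dims(X, 0)     # prepend the batch axis
print(X.shape)               # (1, 1, 11, 3840): batch of 1, channels_first
```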
Also, I see you are writing data_format="channels_first" everywhere; by default all the functions use channels_last. To avoid writing it everywhere, you can reshape the data produced by generate_arrays_for_training() just once: if X has shape (1, 1, 11, 3840), do X = X.transpose(0, 2, 3, 1), and your channel becomes the last dimension.

A transpose moves a dimension to another position. But in your case, since you have only 1 channel, instead of transposing you can simply reshape: for X of shape (1, 1, 11, 3840), do X = X.reshape(1, 11, 3840, 1), and it becomes shape (1, 11, 3840, 1). This is only needed if you don't want to write "channels_first" everywhere; if you don't mind, you don't need the transpose/reshape at all; it just beautifies the code.
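The two equivalent channels_first → channels_last conversions described above, sketched with numpy dummy data:

```python
import numpy as np

X = np.arange(1 * 1 * 11 * 3840).reshape(1, 1, 11, 3840)  # (batch, channels, H, W)

# Option 1: move the channel axis to the end.
Xt = X.transpose(0, 2, 3, 1)
print(Xt.shape)  # (1, 11, 3840, 1)

# Option 2: with exactly one channel, a plain reshape gives the same array.
Xr = X.reshape(1, 11, 3840, 1)
print(Xr.shape)  # (1, 11, 3840, 1)

print(np.array_equal(Xt, Xr))  # True, only because there is a single channel
```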
From what I remember, Keras somehow dislikes dimensions of size 1 and basically tries to drop them in several different functions: if Keras sees an array of shape (1, 2, 1, 3, 1, 4), it almost always tries to reshape it to (2, 3, 4), so the np.expand_dims() is effectively ignored. In that case, probably the only solution is to generate batches of at least 2 images.

You can also read my other answer; although it is somewhat unrelated, it may help you understand how training/prediction works in Keras, especially the last paragraphs, numbered 1-12.
Update: the problem appears to be solved thanks to the following modifications:

- In the data generation function, two dims expansions are needed, i.e. X = np.expand_dims(np.expand_dims(X, 0), 0)
- In the data generation function, another X = X.transpose(0, 2, 3, 1) is needed
- In the network code, the input shape is set to input_shape=(11, 3840, 1)
- In the network code, all substrings data_format="channels_first" were removed
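Putting the update together, here is a minimal numpy sketch of the fix applied to one generated sample (the function name and the zero-filled (11, 3840) sample are placeholders, not the real generator code):

```python
import numpy as np

def prepare_sample(X):
    """Apply the fix from the update to one (11, 3840) sample."""
    X = np.expand_dims(np.expand_dims(X, 0), 0)  # (11, 3840) -> (1, 1, 11, 3840)
    X = X.transpose(0, 2, 3, 1)                  # -> (1, 11, 3840, 1), channels_last
    return X

sample = np.zeros((11, 3840))
print(prepare_sample(sample).shape)  # (1, 11, 3840, 1), matches input_shape=(11, 3840, 1)
```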
Comments:

Which line of code triggers that exception? Is that line in the code you posted? Could you include the whole exception? It is missing its beginning (the first lines are skipped), so we cannot see which line of code triggered it. And what is the shape of the numpy array produced by generate_arrays_for_training()?

@Arty Yes, it's the line history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75) ##problem here, and the shape of the numpy array produced by generate_arrays_for_training() is (1, 11, 3840).

@Edayildiz I see the reason; I'll write an answer right now.

Thank you very much for the solution. But I don't understand why I have to do X = X.transpose(0, 2, 3, 1)?

@Edayildiz A transpose moves a dimension to another position. But in your case, yes, since you have only one channel, instead of transposing you can simply reshape X of shape (1, 1, 11, 3840) with X = X.reshape(1, 11, 3840, 1), and it becomes shape (1, 11, 3840, 1). All of this is only needed if you don't want to write "channels_first"; if you don't mind, you don't need the transpose/reshape at all!

Very nice! So if I write "channels_last", should my input shape be input_shape=(1, 11, 3840, 1)?? ... by expanding the dimension

@Edayildiz I think it may be like that; at least from what I remember, Keras mishandles ba