Python 3.x TensorFlow: Conv2D batch input shape / array shape error

Tags: python-3.x, tensorflow, keras, conv-neural-network, tensorflow-datasets

I am trying to build a basic TensorFlow CNN on a custom dataset stored in 2D NumPy arrays. I can't seem to get the input data to line up with the `input_shape` or `batch_input_shape` argument of the convolutional layer. I've tried every ordering of the dimensions, matching the documentation, but I'm not sure why it still raises an error.

Any help would be greatly appreciated.

import os 
import pickle
import pandas as pd
import matplotlib as plt
import numpy as np

import tensorflow as tf

from tensorflow.keras import models, datasets, layers
BATCH_SIZE = 4
TRAIN_SPLIT = 0.8
VAL_SPLIT = 0.1
TEST_SPLIT = 0.1
CWD = os.getcwd()  # CWD is not defined in the snippet; assuming the .npy files live in the working directory
with open((CWD+'/CLNY_X.npy'), mode='rb') as f:
    Xt = np.load(f, allow_pickle=True)
with open((CWD+'/CLNY_Y.npy'), mode='rb') as f:
    Y = np.load(f, allow_pickle=True)

X = Xt.reshape(Xt.shape + (1,))
DATASIZE = Y.shape[0]
print("Datasize: ", DATASIZE)
Datasize:  172
# test out with different period moving averages, so we take the
dataset = tf.data.Dataset.from_tensor_slices((X, Y))
for feat, targ in dataset.take(1):
    print('NRows: {}, NCols: {}, Target: {}\nFeat: {}'.format(len(feat), len(feat[0]), targ, feat))
NRows: 10000, NCols: 10, Target: 0.2587999999523163
Feat: [[[5.0292000e+01]
  [1.5998565e-01]
  [7.5094378e-01]
  ...
  [1.0000000e+00]
  [2.5231593e-05]
  [1.4535466e-01]]

 [[5.0492001e+01]
  [2.9965147e-01]
  [1.4065099e+00]
  ...
  [1.8729897e+00]
  [4.7258512e-05]
  [2.7224776e-01]]

 [[5.0692001e+01]
  [2.9965451e-01]
  [1.4065243e+00]
  ...
  [1.8730087e+00]
  [4.7258993e-05]
  [2.7225053e-01]]

 ...

 [[0.0000000e+00]
  [0.0000000e+00]
  [0.0000000e+00]
  ...
  [0.0000000e+00]
  [0.0000000e+00]
  [0.0000000e+00]]

 [[0.0000000e+00]
  [0.0000000e+00]
  [0.0000000e+00]
  ...
  [0.0000000e+00]
  [0.0000000e+00]
  [0.0000000e+00]]

 [[0.0000000e+00]
  [0.0000000e+00]
  [0.0000000e+00]
  ...
  [0.0000000e+00]
  [0.0000000e+00]
  [0.0000000e+00]]]
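A quick way to confirm the per-example shape a dataset actually yields, and what `.batch()` adds, is `element_spec`. A small sketch with zero-filled stand-ins for `X` and `Y` (not the original data):

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins: 8 examples, each of shape (10000, 10, 1), plus scalar targets.
X = np.zeros((8, 10000, 10, 1), dtype=np.float32)
Y = np.zeros((8,), dtype=np.float32)

dataset = tf.data.Dataset.from_tensor_slices((X, Y))
print(dataset.element_spec)
# Each element is ONE example of shape (10000, 10, 1) -- no batch dimension yet.

batched = dataset.batch(4)
print(batched.element_spec)
# After .batch(4) a leading (variable) batch dimension appears: (None, 10000, 10, 1).
```

Whatever `element_spec` reports after batching is the shape the first layer will receive, so `input_shape` should match everything after the leading batch dimension.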
train_size = int(DATASIZE*TRAIN_SPLIT)
val_size = int(DATASIZE*VAL_SPLIT)
test_size = int(DATASIZE*TEST_SPLIT)

dataset = dataset.shuffle(DATASIZE)
train_dataset = dataset.take(train_size).batch(BATCH_SIZE)
test_dataset = dataset.skip(train_size)
val_dataset = dataset.skip(test_size)
test_dataset = dataset.take(test_size)

CONVERTED_LENGTH = 10000
CONVERTED_WIDTH = 10
model = models.Sequential()
#model.add(layers.Conv1D(32, kernel_size=(10), activation='relu', data_format='channels_last', batch_input_shape=(CONVERTED_LENGTH, CONVERTED_WIDTH, 1)))
model.add(layers.Conv2D(32, kernel_size=(2, 2), activation='relu', batch_input_shape=(CONVERTED_LENGTH, CONVERTED_WIDTH, BATCH_SIZE, 1)))
model.add(layers.Flatten())
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='softmax'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (10000, 9, 3, 32)         160       
_________________________________________________________________
flatten (Flatten)            (10000, 864)              0         
_________________________________________________________________
dense (Dense)                (10000, 32)               27680     
_________________________________________________________________
dense_1 (Dense)              (10000, 1)                33        
=================================================================
Total params: 27,873
Trainable params: 27,873
Non-trainable params: 0
_________________________________________________________________
model.compile(optimizer='adam',
             loss=tf.keras.losses.MeanSquaredError(),
             metrics=['accuracy'])

history = model.fit(train_dataset, epochs=10, validation_data=(val_dataset)) # add the validation_data=(test_data, test_targets)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-9-c0e1d31b7f23> in <module>
      3              metrics=['accuracy'])
      4 
----> 5 history = model.fit(train_dataset, epochs=10, validation_data=(val_dataset)) # add the validation_data=(test_data, test_targets)

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    817         max_queue_size=max_queue_size,
    818         workers=workers,
--> 819         use_multiprocessing=use_multiprocessing)
    820 
    821   def evaluate(self,

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    233           max_queue_size=max_queue_size,
    234           workers=workers,
--> 235           use_multiprocessing=use_multiprocessing)
    236 
    237       total_samples = _get_total_number_of_samples(training_data_adapter)

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    591         max_queue_size=max_queue_size,
    592         workers=workers,
--> 593         use_multiprocessing=use_multiprocessing)
    594     val_adapter = None
    595     if validation_data:

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    704       max_queue_size=max_queue_size,
    705       workers=workers,
--> 706       use_multiprocessing=use_multiprocessing)
    707 
    708   return adapter

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, **kwargs)
    700 
    701     if standardize_function is not None:
--> 702       x = standardize_function(x)
    703 
    704     # Note that the dataset instance is immutable, its fine to reusing the user

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in standardize_function(dataset)
    682           return x, y
    683         return x, y, sample_weights
--> 684       return dataset.map(map_fn, num_parallel_calls=dataset_ops.AUTOTUNE)
    685 
    686   if mode == ModeKeys.PREDICT:

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in map(self, map_func, num_parallel_calls)
   1589     else:
   1590       return ParallelMapDataset(
-> 1591           self, map_func, num_parallel_calls, preserve_cardinality=True)
   1592 
   1593   def flat_map(self, map_func):

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, input_dataset, map_func, num_parallel_calls, use_inter_op_parallelism, preserve_cardinality, use_legacy_function)
   3924         self._transformation_name(),
   3925         dataset=input_dataset,
-> 3926         use_legacy_function=use_legacy_function)
   3927     self._num_parallel_calls = ops.convert_to_tensor(
   3928         num_parallel_calls, dtype=dtypes.int32, name="num_parallel_calls")

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
   3145       with tracking.resource_tracker_scope(resource_tracker):
   3146         # TODO(b/141462134): Switch to using garbage collection.
-> 3147         self._function = wrapper_fn._get_concrete_function_internal()
   3148 
   3149         if add_to_graph:

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal(self, *args, **kwargs)
   2393     """Bypasses error checking when getting a graph function."""
   2394     graph_function = self._get_concrete_function_internal_garbage_collected(
-> 2395         *args, **kwargs)
   2396     # We're returning this concrete function to someone, and they may keep a
   2397     # reference to the FuncGraph without keeping a reference to the

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2387       args, kwargs = None, None
   2388     with self._lock:
-> 2389       graph_function, _, _ = self._maybe_define_function(args, kwargs)
   2390     return graph_function
   2391 

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _maybe_define_function(self, args, kwargs)
   2701 
   2702       self._function_cache.missed.add(call_context_key)
-> 2703       graph_function = self._create_graph_function(args, kwargs)
   2704       self._function_cache.primary[cache_key] = graph_function
   2705       return graph_function, args, kwargs

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2591             arg_names=arg_names,
   2592             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2593             capture_by_value=self._capture_by_value),
   2594         self._function_attributes,
   2595         # Tell the ConcreteFunction to clean up its graph once it goes out of

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    976                                           converted_func)
    977 
--> 978       func_outputs = python_func(*func_args, **func_kwargs)
    979 
    980       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in wrapper_fn(*args)
   3138           attributes=defun_kwargs)
   3139       def wrapper_fn(*args):  # pylint: disable=missing-docstring
-> 3140         ret = _wrapper_helper(*args)
   3141         ret = structure.to_tensor_list(self._output_structure, ret)
   3142         return [ops.convert_to_tensor(t) for t in ret]

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in _wrapper_helper(*args)
   3080         nested_args = (nested_args,)
   3081 
-> 3082       ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
   3083       # If `func` returns a list of tensors, `nest.flatten()` and
   3084       # `ops.convert_to_tensor()` would conspire to attempt to stack

C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py in wrapper(*args, **kwargs)
    235       except Exception as e:  # pylint:disable=broad-except
    236         if hasattr(e, 'ag_error_metadata'):
--> 237           raise e.ag_error_metadata.to_exception(e)
    238         else:
    239           raise

ValueError: in converted code:

    C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py:677 map_fn
        batch_size=None)
    C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py:2410 _standardize_tensors
        exception_prefix='input')
    C:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_utils.py:582 standardize_input_data
        str(data_shape))

    ValueError: Error when checking input: expected conv2d_input to have shape (10, 4, 1) but got array with shape (10000, 10, 1)
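The error message matches the dimension ordering above: `batch_input_shape=(CONVERTED_LENGTH, CONVERTED_WIDTH, BATCH_SIZE, 1)` declares a batch of 10000 examples, each of shape `(10, 4, 1)`, while the dataset actually yields examples of shape `(10000, 10, 1)`. A minimal sketch of the conventional layout, where `input_shape` describes a single example and the batch dimension is left out. The `MaxPooling2D` layer and the linear final activation are additions, not part of the original model: pooling keeps the `Dense` layer from ballooning to ~92M parameters, and `Dense(1, activation='softmax')` would always output exactly 1.0 for a scalar target.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential()
# input_shape excludes the batch dimension: each example is (10000, 10, 1).
model.add(layers.Conv2D(32, kernel_size=(2, 2), activation='relu',
                        input_shape=(10000, 10, 1)))
# Added: downsample the 9999-row feature map so Flatten stays manageable.
model.add(layers.MaxPooling2D(pool_size=(100, 1)))
model.add(layers.Flatten())
model.add(layers.Dense(32, activation='relu'))
# Linear output for a scalar regression target (MSE loss).
model.add(layers.Dense(1))
model.compile(optimizer='adam', loss='mse')
print(model.input_shape)   # (None, 10000, 10, 1) -- None is the batch dimension
print(model.output_shape)  # (None, 1)
```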
CONVERTED_LENGTH = 10000
CONVERTED_WIDTH = 10
BATCH_SIZE = 4

model = models.Sequential()
#model.add(layers.Conv1D(32, kernel_size=(10), activation='relu', data_format='channels_last', batch_input_shape=(CONVERTED_LENGTH, CONVERTED_WIDTH, 1)))
model.add(layers.Conv2D(32, kernel_size=(2, 2), activation='relu', batch_input_shape=(BATCH_SIZE, CONVERTED_LENGTH, CONVERTED_WIDTH, 1)))
model.add(layers.Flatten())
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='softmax'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (4, 9999, 9, 32)          160       
_________________________________________________________________
flatten (Flatten)            (4, 2879712)              0         
_________________________________________________________________
dense (Dense)                (4, 32)                   92150816  
_________________________________________________________________
dense_1 (Dense)              (4, 1)                    33        
=================================================================
Total params: 92,151,009
Trainable params: 92,151,009
Non-trainable params: 0
_________________________________________________________________
train_size = int(DATASIZE*TRAIN_SPLIT)
val_size = int(DATASIZE*VAL_SPLIT)
test_size = int(DATASIZE*TEST_SPLIT)

dataset = dataset.shuffle(DATASIZE)
train_dataset = dataset.take(train_size).batch(BATCH_SIZE)
test_dataset = dataset.skip(train_size)
val_dataset = dataset.skip(test_size).batch(BATCH_SIZE)
test_dataset = dataset.take(test_size).batch(BATCH_SIZE)
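One thing to watch in the split above: `skip()` and `take()` are applied to the same shuffled dataset, and `shuffle()` reshuffles on every iteration by default, so the train/val/test subsets can overlap and change between epochs. A sketch of a non-overlapping split under the same 80/10/10 proportions, using a fixed seed and `reshuffle_each_iteration=False`:

```python
import tensorflow as tf

DATASIZE = 172
train_size = int(DATASIZE * 0.8)   # 137
val_size = int(DATASIZE * 0.1)     # 17

dataset = tf.data.Dataset.range(DATASIZE)
# Shuffle once with a fixed seed and no reshuffling, so every take/skip
# below sees the same order and the subsets cannot overlap.
dataset = dataset.shuffle(DATASIZE, seed=42, reshuffle_each_iteration=False)

train_dataset = dataset.take(train_size)
val_dataset = dataset.skip(train_size).take(val_size)
test_dataset = dataset.skip(train_size + val_size)

splits = [set(int(x) for x in d) for d in (train_dataset, val_dataset, test_dataset)]
print([len(s) for s in splits])  # [137, 17, 18]
```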

CONVERTED_LENGTH = 10000
CONVERTED_WIDTH = 10
model = models.Sequential()
#model.add(layers.Conv1D(32, kernel_size=(10), activation='relu', data_format='channels_last', batch_input_shape=(CONVERTED_LENGTH, CONVERTED_WIDTH, 1)))
model.add(layers.InputLayer(batch_input_shape=(BATCH_SIZE, CONVERTED_LENGTH, CONVERTED_WIDTH, 1)))
model.add(layers.Conv2D(32, kernel_size=(2, 2), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='softmax'))
model.summary()
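Putting the pieces together: a self-contained sketch with synthetic stand-ins for the CLNY_*.npy arrays (shapes scaled down so it runs quickly). The two key points are that `input_shape` describes one example while `.batch()` supplies the batch dimension, and that `validation_data` must also be a batched dataset:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

BATCH_SIZE = 4
N, H, W = 40, 100, 10   # scaled-down stand-ins for 172 examples of (10000, 10)

rng = np.random.default_rng(0)
X = rng.random((N, H, W), dtype=np.float32).reshape(N, H, W, 1)
Y = rng.random(N, dtype=np.float32)

# Split the arrays *before* building datasets, so train and val cannot overlap.
X_train, X_val = X[:32], X[32:]
Y_train, Y_val = Y[:32], Y[32:]

train_dataset = (tf.data.Dataset.from_tensor_slices((X_train, Y_train))
                 .shuffle(len(X_train)).batch(BATCH_SIZE))
val_dataset = tf.data.Dataset.from_tensor_slices((X_val, Y_val)).batch(BATCH_SIZE)

model = models.Sequential([
    # input_shape is the shape of ONE example; the batch dim comes from .batch().
    layers.Conv2D(32, kernel_size=(2, 2), activation='relu',
                  input_shape=(H, W, 1)),
    layers.Flatten(),
    layers.Dense(32, activation='relu'),
    layers.Dense(1),  # linear output for a scalar regression target
])
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())

history = model.fit(train_dataset, epochs=2,
                    validation_data=val_dataset, verbose=0)
print(len(history.history['loss']))  # 2 -- one loss entry per epoch
```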