Input shape mismatch for a TensorFlow CNN model when used with Keras Tuner (Python)

I have an existing CNN model that works well; the code is below:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(train_data.shape[1], 1)))
model.add(tf.keras.layers.Conv1D(48, 48, activation=tf.nn.selu, padding='same'))
model.add(tf.keras.layers.MaxPool1D(2))
model.add(tf.keras.layers.Conv1D(48, 96, activation=tf.nn.selu, padding='same'))
model.add(tf.keras.layers.MaxPool1D(2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.keras.activations.relu))
model.add(tf.keras.layers.Dense(1, activation=tf.keras.activations.sigmoid))
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=loss_function)

model.fit(train_data, train_result, epochs=2000, validation_split=0.2, verbose=0, callbacks=[early_stop])
train_data is a set of time series, where each series is a 48-value vector.
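For reference, a minimal sketch of the assumed data layout (dummy values; the shapes are inferred from the error message further down, not from the original data):

import numpy as np

# Hypothetical stand-in for train_data: N time series of 48 values each.
# The real data in the question has shape (176039, 48) before any reshaping.
train_data = np.random.randn(1000, 48)
train_result = np.random.randint(0, 2, (1000, 1))  # dummy binary labels

print(train_data.shape)     # (1000, 48)
print(train_data.shape[1])  # 48 -> used as the time dimension in the InputLayer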

I am trying to use Keras Tuner to optimize the hyperparameters. Following the CIFAR example in the documentation, I changed the code as follows:

def build_model(hp):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(train_data.shape[1], 1)))
    # for i in range(hp.Int('conv_blocks', 3, 5, default=3)):
    filters = hp.Int('filters_' + str(1), 12, 96, step=12)
    for _ in range(2):
        model.add(tf.keras.layers.Conv1D(filters, 3, activation=tf.nn.selu, padding='same'))
        if hp.Choice('pooling_' + str(1), ['avg', 'max']) == 'max':
            model.add(tf.keras.layers.MaxPool1D(2))
        else:
            model.add(tf.keras.layers.AvgPool1D(2))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(hp.Int('hidden_size', 30, 100, step=10, default=50),
                                    activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.softmax))
    model.compile(optimizer=tf.keras.optimizers.Adam(hp.Float('learning_rate', 1e-4, 1e-2, sampling='log')),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

import kerastuner as kt

tuner = kt.Hyperband(build_model, objective='val_accuracy', max_epochs=30, hyperband_iterations=2)
tuner.search(train_data, validation_split=0.2, epochs=30, callbacks=[tf.keras.callbacks.EarlyStopping(patience=1)])
But when I try to run it, I get the following error:

ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (176039, 48)

Can someone help me figure out what I am doing wrong?

Your training data should have 3 dimensions; the last dimension is missing:

train_data = train_data.reshape(-1,48,1)
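For illustration, a small sketch (dummy array, shape taken from the error message) showing the effect of that reshape:

import numpy as np

train_data = np.random.randn(176039, 48)    # dummy data with the shape from the error
print(train_data.shape)                     # (176039, 48) -> 2-D, channel axis missing

train_data = train_data.reshape(-1, 48, 1)  # add the channel axis that Conv1D expects
print(train_data.shape)                     # (176039, 48, 1) -> (samples, timesteps, channels)

# np.expand_dims(original_array, -1) is an equivalent way to add the channel axis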
You are also not passing any labels to the model.

Here is dummy working code; you will need to pass your own labels accordingly:

import tensorflow as tf
import numpy as np

def build_model(hp):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(48, 1)))
    # for i in range(hp.Int('conv_blocks', 3, 5, default=3)):
    filters = hp.Int('filters_' + str(1), 12, 96, step=12)
    for _ in range(2):
        model.add(tf.keras.layers.Conv1D(filters, 3, activation=tf.nn.selu, padding='same'))
        if hp.Choice('pooling_' + str(1), ['avg', 'max']) == 'max':
            model.add(tf.keras.layers.MaxPool1D(2))
        else:
            model.add(tf.keras.layers.AvgPool1D(2))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(hp.Int('hidden_size', 30, 100, step=10, default=50),
                                    activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.softmax))
    model.compile(optimizer=tf.keras.optimizers.Adam(hp.Float('learning_rate', 1e-4, 1e-2, sampling='log')),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

import kerastuner as kt

tuner = kt.Hyperband(build_model, objective='val_accuracy', max_epochs=30, hyperband_iterations=2)

train_data = np.random.randn(100, 48)

train_data = train_data.reshape(-1,48,1)

train_labels = np.random.randint(0, 2, (100,1))

tuner.search(train_data, train_labels, validation_split=0.2, epochs=30, callbacks=[tf.keras.callbacks.EarlyStopping(patience=1)])

Thanks :) It looks like the reshape does the trick. But why didn't I need the reshape with plain TensorFlow?
Without the reshape you should get the error
ValueError: Input 0 of layer sequential_4 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 48]
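A minimal sketch (dummy data, simplified model) that reproduces this shape check, assuming TF 2.x:

import numpy as np
import tensorflow as tf

# Model that expects (48, 1) inputs, fed with a 2-D array.
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(48, 1)))
model.add(tf.keras.layers.Conv1D(8, 3, padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')

x = np.random.randn(100, 48)                # (samples, 48) -- channel axis missing
y = np.random.randint(0, 2, (100, 1))

model.fit(x, y, epochs=1)                   # raises ValueError: expected ndim=3, found ndim=2
# model.fit(x.reshape(-1, 48, 1), y, epochs=1)  # works once the channel axis is added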
Check this: Hmm, that is strange, because I can run the code with plain TensorFlow without getting any error. Thanks anyway :)

Output from running the tuner search on the dummy data:
Epoch 1/2
3/3 [==============================] - 0s 82ms/step - loss: 0.7110 - accuracy: 0.4875 - val_loss: 0.6611 - val_accuracy: 0.6500
Epoch 2/2
3/3 [==============================] - 0s 21ms/step - loss: 0.6937 - accuracy: 0.5000 - val_loss: 0.6599 - val_accuracy: 0.7500

Trial complete
Trial summary
|-Trial ID: adc89daddb79f3e5ea6a8c307352e4ee
|-Score: 0.75
|-Best step: 0
Hyperparameters:
|-dropout: 0.4
|-filters_1: 24
|-hidden_size: 70
|-learning_rate: 0.0002528462794256226
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 57ms/step - loss: 0.7196 - accuracy: 0.4750 - val_loss: 0.7451 - val_accuracy: 0.4000
Epoch 2/2
3/3 [==============================] - 0s 21ms/step - loss: 0.7048 - accuracy: 0.5500 - val_loss: 0.7398 - val_accuracy: 0.5000

Trial complete
Trial summary
|-Trial ID: 6042b7a7ca696bf79224cbaf5bc05a42
|-Score: 0.5
|-Best step: 0
Hyperparameters:
|-dropout: 0.1
|-filters_1: 36
|-hidden_size: 50
|-learning_rate: 0.00018055209590750966
|-pooling_1: max
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 58ms/step - loss: 0.7476 - accuracy: 0.4625 - val_loss: 0.7329 - val_accuracy: 0.4500
Epoch 2/2
3/3 [==============================] - 0s 14ms/step - loss: 0.6390 - accuracy: 0.6875 - val_loss: 0.6930 - val_accuracy: 0.4000

Trial complete
Trial summary
|-Trial ID: 394ba122903b467ddf54902b15d04a53
|-Score: 0.44999998807907104
|-Best step: 0
Hyperparameters:
|-dropout: 0.2
|-filters_1: 12
|-hidden_size: 60
|-learning_rate: 0.003343121876306107
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 66ms/step - loss: 0.7547 - accuracy: 0.4625 - val_loss: 0.6964 - val_accuracy: 0.4500
Epoch 2/2
3/3 [==============================] - 0s 19ms/step - loss: 0.5858 - accuracy: 0.7500 - val_loss: 0.6720 - val_accuracy: 0.7000

Trial complete
Trial summary
|-Trial ID: 2479d3c548e70bb0b88a5e4540a7923a
|-Score: 0.699999988079071
|-Best step: 0
Hyperparameters:
|-dropout: 0.1
|-filters_1: 72
|-hidden_size: 50
|-learning_rate: 0.003193348791226863
|-pooling_1: max
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 53ms/step - loss: 0.7166 - accuracy: 0.5125 - val_loss: 0.6674 - val_accuracy: 0.6000
Epoch 2/2
3/3 [==============================] - 0s 19ms/step - loss: 0.7243 - accuracy: 0.4625 - val_loss: 0.6569 - val_accuracy: 0.5500

Trial complete
Trial summary
|-Trial ID: 01a8bb49c51eb81f27dbe7d491d40246
|-Score: 0.6000000238418579
|-Best step: 0
Hyperparameters:
|-dropout: 0.4
|-filters_1: 12
|-hidden_size: 90
|-learning_rate: 0.0008793685539613403
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 57ms/step - loss: 0.7178 - accuracy: 0.4750 - val_loss: 0.7252 - val_accuracy: 0.4500
Epoch 2/2
3/3 [==============================] - 0s 18ms/step - loss: 0.6906 - accuracy: 0.4750 - val_loss: 0.7161 - val_accuracy: 0.4500

Trial complete
Trial summary
|-Trial ID: fc795bd34f19275f9eef882bece8092a
|-Score: 0.44999998807907104
|-Best step: 0
Hyperparameters:
|-dropout: 0.0
|-filters_1: 48
|-hidden_size: 60
|-learning_rate: 0.0002136185900215609
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 52ms/step - loss: 0.7821 - accuracy: 0.4375 - val_loss: 0.7021 - val_accuracy: 0.5500
Epoch 2/2
3/3 [==============================] - 0s 13ms/step - loss: 0.5737 - accuracy: 0.7625 - val_loss: 0.6778 - val_accuracy: 0.5500

Trial complete
Trial summary
|-Trial ID: b7c8fc5ae3ffa33970d8dcf9486667ae
|-Score: 0.550000011920929
|-Best step: 0
Hyperparameters:
|-dropout: 0.0
|-filters_1: 72
|-hidden_size: 100
|-learning_rate: 0.0012802609011755962
|-pooling_1: max
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 52ms/step - loss: 0.6974 - accuracy: 0.5375 - val_loss: 0.7001 - val_accuracy: 0.7000
Epoch 2/2
3/3 [==============================] - 0s 14ms/step - loss: 0.5605 - accuracy: 0.7750 - val_loss: 0.7182 - val_accuracy: 0.5000

Trial complete
Trial summary
|-Trial ID: c5b4f33e7b657342804b394ae3483a22
|-Score: 0.699999988079071
|-Best step: 0
......
......