Python tf.keras convolutional architecture not working


The problem below is a simplified version of a real issue I ran into while designing a basic autoencoder architecture. The example below is enough to fully reproduce my error. I have been trying for about two days, but I cannot find any way around it.

import tensorflow as tf
import random
import os

RES = [256, 256]
def generator_data(n):
    for i in range(n):
        for j in range(6):
            yield tf.zeros((1, 256, 256, 3)), tf.zeros((1, 256, 256, 3))

def mymodel():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
    # 256 x 256 x 8
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 128 x 128 x 8
    model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'))
    # 128 x 128 x 16
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 64 x 64 x 16
    model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
    # 64 x 64 x 32
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 32 x 32 x 32

    # 32 x 32 x 32
    model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
    # 32 x 32 x 32
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 64 x 64 x 32
    model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'))
    # 64 x 64 x 16
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 128 x 128 x 16
    model.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
    # 128 x 128 x 8
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 256 x 256 x 8
    model.add(tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same'))
    return model


if __name__ == "__main__":
    # import some data to play with
    x_val, y_val = zip(*generator_data(20))

    model = mymodel()
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    model.compile(optimizer=optimizer, loss=tf.keras.losses.MeanSquaredError())
    model(tf.zeros((1, 256, 256, 3)))
    model.summary()

    # generator_data(train_list)
    model.fit(x=generator_data(1000),
        validation_data=(list(x_val), list(y_val)),
        verbose=1, epochs=1000)
First of all, model.summary() behaves strangely; it contains:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              multiple                  224
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple                  0
_________________________________________________________________
conv2d_1 (Conv2D)            multiple                  1168
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple                  0
_________________________________________________________________
conv2d_2 (Conv2D)            multiple                  4640
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 multiple                  0
_________________________________________________________________
conv2d_3 (Conv2D)            multiple                  9248
_________________________________________________________________
up_sampling2d (UpSampling2D) multiple                  0
_________________________________________________________________
conv2d_4 (Conv2D)            multiple                  4624
_________________________________________________________________
up_sampling2d_1 (UpSampling2 multiple                  0
_________________________________________________________________
conv2d_5 (Conv2D)            multiple                  1160
_________________________________________________________________
up_sampling2d_2 (UpSampling2 multiple                  0
_________________________________________________________________
conv2d_6 (Conv2D)            multiple                  73
=================================================================
Total params: 21,137
Trainable params: 21,137
Non-trainable params: 0
The Output Shape column only shows "multiple". I looked this up, but the suggested workaround does not seem to help. Second, and more importantly, I get an error:

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), for inputs ['input_1'] but instead got the following list of 120 arrays: [<tf.Tensor: shape=(1, 256, 256, 3), dtype=float32, numpy=
array([[[[0., 0., 0.],
         [0., 0., 0.],
         [0., 0., 0.],
         ...,
         [0., 0., 0.],
         [0., 0., 0.],
         [0....
This makes no sense to me. My generator returns [batch, x-dim, y-dim, channel]; I also tried [batch, channel, x-dim, y-dim], with no luck either. In this case the batch size is 1, not 120. As I said, whatever I try I cannot solve or debug these problems, so I would really appreciate your help. I am new to DL but not to Python, and I am using TensorFlow 2.1.0 with Python 3.7.


Thank you very much.
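A quick way to see where the "120 arrays" in the error come from is to count what zip(*generator_data(20)) actually produces; the sketch below uses the same generator as above, and tf.concat is only one possible way to stack the samples into a single [batch, 256, 256, 3] tensor:

import tensorflow as tf

def generator_data(n):
    for i in range(n):
        for j in range(6):
            yield tf.zeros((1, 256, 256, 3)), tf.zeros((1, 256, 256, 3))

# zip(*...) turns the 20 * 6 = 120 (x, y) tuples into one tuple of 120 x-tensors
# and one tuple of 120 y-tensors, so Keras sees a list of 120 separate arrays.
x_val, y_val = zip(*generator_data(20))
print(len(x_val), x_val[0].shape)        # 120 (1, 256, 256, 3)

# Stacking along the batch axis gives a single tensor instead of a list:
x_val_stacked = tf.concat(x_val, axis=0)
print(x_val_stacked.shape)               # (120, 256, 256, 3)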

Here is the working code:

import tensorflow as tf
import random
import os
import numpy as np

RES = [256, 256]
def generator_data(n):
    for i in range(n):
        for j in range(1):
            yield tf.zeros((1, 256, 256, 3)), tf.zeros((1, 256, 256, 3))

def mymodel():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
    # 256 x 256 x 8
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 128 x 128 x 8
    model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'))
    # 128 x 128 x 16
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 64 x 64 x 16
    model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
    # 64 x 64 x 32
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 32 x 32 x 32

    # 32 x 32 x 32
    model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
    # 32 x 32 x 32
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 64 x 64 x 32
    model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'))
    # 64 x 64 x 16
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 128 x 128 x 16
    model.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
    # 128 x 128 x 8
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 256 x 256 x 8
    model.add(tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same'))
    return model


if __name__ == "__main__":
    # import some data to play with
    z = list(zip(*generator_data(2)))

    x_val = z[0][0]
    y_val = z[0][1]

    model = mymodel()
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    model.compile(optimizer=optimizer, loss=tf.keras.losses.MeanSquaredError())
    model(tf.zeros((1, 256, 256, 3)))
    model.summary()


    print(x_val.numpy().shape)
    print(y_val.numpy().shape)
    model.fit(x=generator_data(10),
        validation_data=(x_val, y_val),
        verbose=1, epochs=1)
You are using unpacking on the generator in the wrong way. I cast the output to a list so that it can be subscripted. One useful trick is to print the shape and length of X and y at every step to find out where the bug is.
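For example, a minimal version of that shape check, using the same generator and indexing as in the listing above, would look roughly like this:

import tensorflow as tf

def generator_data(n):
    for i in range(n):
        for j in range(1):
            yield tf.zeros((1, 256, 256, 3)), tf.zeros((1, 256, 256, 3))

z = list(zip(*generator_data(2)))
print(len(z), len(z[0]))                 # 2 2 (a tuple of x's and a tuple of y's)
print(type(z[0][0]), z[0][0].shape)      # EagerTensor, (1, 256, 256, 3)

x_val, y_val = z[0][0], z[0][1]
print(x_val.shape, y_val.shape)          # (1, 256, 256, 3) (1, 256, 256, 3)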

Update:

Yes, that is true, but you need to pass a tensor of shape [batch, 256, 256, 3]. If a is a list and a[0] has shape [1, 256, 256, 3], then you need to pass a[0] to the model, which is what I did. You, however, passed a. But a is a list, not a numpy array/tensor, and even if we cast it to a numpy array we get shape (1, 1, 256, 256, 3), which is not a valid input.
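To make that concrete, here is roughly what the shapes look like when a is such a list of per-sample tensors (a minimal sketch, not taken from the original code):

import numpy as np
import tensorflow as tf

a = [tf.zeros((1, 256, 256, 3))]         # a list holding one tensor of shape (1, 256, 256, 3)

print(np.array(a).shape)                 # (1, 1, 256, 256, 3) -- extra axis, invalid model input
print(tf.concat(a, axis=0).shape)        # (1, 256, 256, 3)    -- a valid [batch, H, W, C] tensor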

Also, in generator_data, why are you using an unnecessary second loop?

def generator_data(n):
    for i in range(n):
        for j in range(1): # ??????? Why?
            yield tf.zeros((1, 256, 256, 3)), tf.zeros((1, 256, 256, 3))


I managed to fix these problems by updating to the tf-2.2 nightly and using the tf.data module.

If you run into the same problem, have a look here:

import tensorflow as tf
import random
import os
from functools import partial

RES = [256, 256]
def generator_data(n):
    for i in range(n):
        for j in range(6):
            yield tf.zeros((1, 256, 256, 3)), tf.zeros((1, 256, 256, 3))

def generator_data_val(n):
    for i in range(n):
        for j in range(6):
            yield tf.zeros((256, 256, 3)), tf.zeros((256, 256, 3))


def model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same', input_shape=(256, 256, 3)))
    # 256 x 256 x 8
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 128 x 128 x 8
    model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'))
    # 128 x 128 x 16
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 64 x 64 x 16
    model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
    # 64 x 64 x 32
    model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
    # 32 x 32 x 32

    # 32 x 32 x 32
    model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same'))
    # 32 x 32 x 32
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 64 x 64 x 32
    model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'))
    # 64 x 64 x 16
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 128 x 128 x 16
    model.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
    # 128 x 128 x 8
    model.add(tf.keras.layers.UpSampling2D((2, 2)))
    # 256 x 256 x 8
    model.add(tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same'))
    return model


if __name__ == "__main__":
    # import some data to play with
    x_val, y_val = zip(*generator_data_val(5))
    x_val, y_val = list(x_val), list(y_val)


    model = model()
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    model.compile(optimizer=optimizer, loss=tf.keras.losses.MeanSquaredError())
    model(tf.zeros((1, 256, 256, 3)))
    model.summary()
    train_dataset = generator_data(5)

    gen = partial(generator_data, n=5)
    train_dataset = tf.data.Dataset.from_generator(
        gen, output_types=(tf.float32, tf.float32),
        output_shapes=(tf.TensorShape([1, 256, 256, 3]), tf.TensorShape([1, 256, 256, 3]))).repeat()
    val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(2)

    # generator_data(train_list)
    model.fit(x=train_dataset,
        steps_per_epoch=40,
        validation_data=val_dataset,
        verbose=1, epochs=1000)

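As a side note, on newer TensorFlow releases (around 2.4 and later) tf.data.Dataset.from_generator also accepts an output_signature argument in place of output_types/output_shapes; an equivalent sketch of the dataset construction above, assuming the same gen, would be:

train_dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32),
    )).repeat()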
Thanks for your reply. Could you provide a working example that achieves what I am after? To me this only demonstrates the validation-data problem, thanks. Did you get that from the code or from experience? More importantly, I do not understand why the output shapes are "multiple", or how I am supposed to feed the validation data. Right now you are only passing a single tensor, which is not what I want to do. If I run: x_val, y_val = zip(*generator_data(20)); a = list(x_val); b = list(y_val), I get a list of tensors, which looks perfectly fine to me. Basically this is equivalent to what you did, except that you only pass a single tensor for validation rather than a list or collection of them, which is what I would expect.

(Pdb) print(type(a))
(Pdb) print(type(a[0]))
(Pdb) print(a[0].shape)
(1, 256, 256, 3)
(Pdb)

I am not using an unnecessary second loop. In my initial code I had range(6); I was trying to replicate my data, since this is just a sample for testing. Anyway, I found a solution, but it really comes down to the TensorFlow version first of all. I fixed the whole thing by first updating to the nightly 2.2 and then using the tf.data module.
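Regarding the "multiple" output shapes in model.summary(): when a Sequential model is only built by calling it on a tensor, as in the first listing, Keras (at least around TF 2.1) does not record a static per-layer output shape and prints "multiple" instead. Declaring the input shape up front, as the last listing does with input_shape=(256, 256, 3), lets summary() show concrete shapes; a minimal sketch:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same',
                           input_shape=(256, 256, 3)),
    tf.keras.layers.MaxPooling2D((2, 2), padding='same'),
])
model.summary()   # Output Shape column now reads (None, 256, 256, 8) and (None, 128, 128, 8)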