
Python: Can't use unknown input dim (batch size) when making a custom Keras layer


I am trying to build a custom layer in Keras (TensorFlow backend) that performs KMeans clustering on the filters of a convolutional layer. While writing the logic for this layer I iterate over the batch size, but Keras/TensorFlow does not seem to allow this, because the batch size is an unknown dimension until runtime.

I tried to trace the error message, which led me to two files: keras/engine/training.py and keras/engine/training_utils.py. As far as I can tell, the error stems from the ndim variable being NoneType, because the batch size is not known when the model is compiled.
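
As a quick illustration of the failure mode (the exact weight_ndim value is beside the point), once ndim is None the range() call in training_utils.py can only raise:

>>> list(range(1, None))
Traceback (most recent call last):
  ...
TypeError: 'NoneType' object cannot be interpreted as an integer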

Looking through various StackOverflow and GitHub threads, I have not found any solution for working around Keras/TensorFlow's refusal to accept an unknown batch size in this situation.

Here is the example code for reference:

import numpy as np
import tensorflow as tf
import keras
from sklearn.cluster import KMeans

class KMeansLayer(keras.layers.Layer):
    def __init__(self, clusters=8, n_init=5, trainable=False, **kwargs):
        super(KMeansLayer, self).__init__(**kwargs)
        self.clusters = clusters
        self.n_init = n_init

    def build(self, input_shape):
        self.output_s = (input_shape[0],input_shape[1], input_shape[2],1)
        self.depth = input_shape[3]
        self.built=True

    def call(self, inputs):

        def KMeansFunc(input_tens,clusters=self.clusters,n_init=self.n_init):
            base_mat = np.zeros((input_tens.shape[0],input_tens.shape[1],input_tens.shape[2]))

            for frame in range(input_tens.shape[0]):
                init_mat = np.zeros((input_tens.shape[1]*input_tens.shape[2]))
                # print(init_mat.shape)
                reshape_mat = np.reshape(input_tens[frame],(input_tens.shape[1]*input_tens.shape[2],input_tens.shape[3]))
                # print(reshape_mat.shape)
                kmeans_init = KMeans(n_clusters=clusters, n_init=n_init)
                class_pred = kmeans_init.fit_predict(reshape_mat)

                for clust in range(self.clusters):
                    # Replace every pixel assigned to this cluster with the cluster's mean value
                    init_mat[class_pred==clust] = np.mean(reshape_mat[class_pred==clust])
                # print(base_mat.shape)

                base_mat[frame]=np.reshape(init_mat,(input_tens.shape[1],input_tens.shape[2]))

            return np.expand_dims(base_mat,axis=-1).astype('float32')


        output = tf.py_func(KMeansFunc,[inputs],tf.float32) 
        return output

    def compute_output_shape(self, input_shape):
        return self.output_s


input_1 = keras.Input(shape=(28,28,1), name='input_1', dtype='float32')

conv_1 = keras.layers.Conv2D(filters=20, kernel_size=3, strides=1, padding='same', data_format='channels_last', activation='elu', kernel_initializer='glorot_uniform')(input_1)
pool_1 = keras.layers.MaxPooling2D(pool_size=2, padding='same', data_format='channels_last')(conv_1)

up_conv_1 = keras.layers.SeparableConv2D(filters=20, kernel_size=2, strides=1, padding='same', data_format='channels_last', activation='elu', kernel_initializer='glorot_uniform')(pool_1)
up_1 = keras.layers.UpSampling2D(size=(2, 2), interpolation='bilinear')(up_conv_1)
conv_2 = keras.layers.Conv2D(filters=20, kernel_size=3, strides=1, padding='same', data_format='channels_last', activation='elu', kernel_initializer='glorot_uniform')(up_1)

conv_3 = keras.layers.Conv2D(filters=3, kernel_size=3, strides=1, padding='same', data_format='channels_last', activation='elu', kernel_initializer='glorot_uniform')(conv_2)

kmeans_out = KMeansLayer(clusters=8,n_init=5)(conv_3)


model = keras.Model(inputs=[input_1], outputs=kmeans_out)
keras.utils.plot_model(model, show_shapes=True)
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
As the code above shows, given an input of size (batch_size, 28, 28, 3) to my custom layer, I want to produce an output of size (batch_size, 28, 28, 1).

The error I get from running the above code is:

Traceback (most recent call last):
  File "example_error_file.py", line 64, in <module>
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
  File "~/fluoro/fenv/lib/python3.6/site-packages/keras/engine/training.py", line 347, in compile
    sample_weight, mask)
  File "~/fluoro/fenv/lib/python3.6/site-packages/keras/engine/training_utils.py", line 426, in weighted
    axis=list(range(weight_ndim, ndim)))
TypeError: 'NoneType' object cannot be interpreted as an integer
I have two main questions:

  • Is there something wrong with how I am defining the custom Keras layer?
  • Is there a way to force Keras to run without knowing the batch size in this case (which seems like a reasonable thing to want)?
TensorFlow version: 1.7.0, Keras version: 2.2.4
Python: 3.6.6

Looking at this purely from the TensorFlow side, the main problem seems to be that the shapes are not known until runtime, so the loop over the batch cannot be built without errors when TensorFlow tries to compile the graph.
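
As a concrete illustration of that point (this snippet is mine, not part of the original exchange): in TF 1.x, tf.py_func returns a tensor whose static shape is completely unknown, which is what leaves ndim as None for the model output; re-attaching the shape by hand inside call() is a common workaround for exactly this compile-time error.

import keras
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 28, 28, 3))
y = tf.py_func(lambda a: a[..., :1], [x], tf.float32)
print(y.shape)                # <unknown> -- py_func drops all static shape info
print(keras.backend.ndim(y))  # None, which is what ndim ends up as in weighted()

# Common workaround: restore the static shape by hand (batch size stays None)
y.set_shape([None, 28, 28, 1])
print(keras.backend.ndim(y))  # 4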

Instead of using tf.py_func, consider using tf.function or signatures (if you are on a newer TensorFlow release). Alternatively, you can use tf.scan, which applies a function to every element of a tensor, where the elements are the tensors unpacked from the original tensor along dimension 0.

Either approach should work.
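
For reference, here is a minimal sketch of that per-sample idea applied to the layer above, using tf.map_fn (tf.scan unpacks along dimension 0 in the same way, but additionally threads an accumulator between elements). The helper kmeans_one_frame is my own illustrative name, not something from the original post, and the sketch assumes TF 1.x graph mode like the rest of the code here.

import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

def kmeans_one_frame(frame, clusters=8, n_init=5):
    # NumPy/sklearn code for a single (H, W, C) feature map: every pixel is
    # replaced by the mean value of the cluster it was assigned to.
    h, w, c = frame.shape
    flat = frame.reshape(h * w, c)
    labels = KMeans(n_clusters=clusters, n_init=n_init).fit_predict(flat)
    out = np.zeros(h * w, dtype='float32')
    for k in range(clusters):
        out[labels == k] = flat[labels == k].mean()
    return out.reshape(h, w, 1)

# Inside KMeansLayer, call() could then look like this:
def call(self, inputs):
    def one_sample(frame):
        # frame has shape (H, W, C); py_func runs the NumPy code per sample
        return tf.py_func(kmeans_one_frame, [frame], tf.float32)

    # tf.map_fn unpacks `inputs` along dimension 0 (the batch axis), applies
    # the function to each element and stacks the results back together, so
    # the Python code never has to loop over the unknown batch size itself.
    output = tf.map_fn(one_sample, inputs, dtype=tf.float32)
    # py_func/map_fn lose the static shape, so restore it for downstream Keras code
    output.set_shape([None, inputs.shape[1], inputs.shape[2], 1])
    return output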