Custom max pooling layer: ValueError: The channel dimension of the inputs should be defined. Found `None`

Tags: function, image-segmentation, tensorflow2.0, max-pooling

I am working with TensorFlow 2 and I am trying to implement max unpooling with indices in order to build SegNet.

When I run it, I run into the ValueError from the title. I define MaxUnpool2D and then call it inside the model. I think the problem comes from the fact that the updates and the mask have shape (None, H, W, ch).

Can you help me solve this?

I have solved the problem. In case anyone needs it, here is the code for MaxUnpooling2D:

import tensorflow as tf


def MaxUnpooling2D(pool, ind, output_shape, batch_size, name=None):
    """
    Unpooling layer after max_pool_with_argmax.
    Args:
        pool:         max-pooled output tensor
        ind:          argmax indices returned by tf.nn.max_pool_with_argmax
        output_shape: shape (N, H, W, C) of the tensor before pooling
        batch_size:   static batch size
    Return:
        unpool:       unpooled tensor with shape output_shape
    """
    with tf.compat.v1.variable_scope(name):
        # Flatten the pooled values and build (batch, flat_index) pairs so the
        # values can be scattered back into their pre-pooling positions.
        pool_ = tf.reshape(pool, [-1])
        batch_range = tf.reshape(tf.range(batch_size, dtype=ind.dtype),
                                 [tf.shape(pool)[0], 1, 1, 1])
        b = tf.ones_like(ind) * batch_range
        b = tf.reshape(b, [-1, 1])
        ind_ = tf.reshape(ind, [-1, 1])
        ind_ = tf.concat([b, ind_], 1)
        ret = tf.scatter_nd(ind_, pool_,
                            shape=[batch_size,
                                   output_shape[1] * output_shape[2] * output_shape[3]])
        # The reason we use tf.scatter_nd: with tf.sparse_tensor_to_dense the
        # gradient is None, which cuts off the network. With tf.scatter_nd the
        # gradients for all the trainable variables are tensors instead of None.
        # tf.scatter_nd creates a new tensor by applying the sparse updates (the
        # pooled values) to individual slices of a zero tensor of the given shape
        # (the flat output shape) according to the indices ind_. If you keep the
        # original sparse-tensor code, the only change needed is replacing
        # tf.sparse_tensor_to_dense(sparse_tensor) with
        # tf.sparse_add(tf.zeros(output_shape), sparse_tensor), which also
        # yields gradients.
        ret = tf.reshape(ret, [tf.shape(pool)[0],
                               output_shape[1], output_shape[2], output_shape[3]])
        return ret
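
To see in isolation what the tf.scatter_nd call does, here is a minimal sketch (the index and update values are made up for illustration): sparse updates are written into a zero tensor of the requested shape, and every position not listed in the indices stays zero, which is exactly the behaviour the unpooling relies on.

import tensorflow as tf

# Write 9.0 and 7.0 into positions 1 and 3 of a zero tensor of length 5;
# unlisted positions stay zero, like non-argmax positions after unpooling.
print(tf.scatter_nd(indices=[[1], [3]], updates=[9.0, 7.0], shape=[5]))
# tf.Tensor([0. 9. 0. 7. 0.], shape=(5,), dtype=float32)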
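
And here is a minimal, hypothetical usage sketch showing how the function pairs with tf.nn.max_pool_with_argmax, which produces both the pooled tensor and the flat indices it consumes; the input shape, batch size, and layer name are assumptions chosen for illustration.

import tensorflow as tf

batch_size = 4
x = tf.random.normal([batch_size, 8, 8, 3])  # assumed (N, H, W, C) input

# Encoder side: max-pool while recording the argmax indices. With the default
# include_batch_in_index=False the indices are flat offsets into each
# example's H * W * C block, which is what MaxUnpooling2D expects.
pool, ind = tf.nn.max_pool_with_argmax(x, ksize=2, strides=2, padding='SAME')

# Decoder side: scatter the pooled values back to the pre-pooling shape.
unpooled = MaxUnpooling2D(pool, ind, output_shape=x.shape,
                          batch_size=batch_size, name='unpool1')
print(unpooled.shape)  # (4, 8, 8, 3)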