Python Keras: How to get the tensor dimensions inside a custom loss?


I am trying to write my own custom loss function: I want to apply categorical_crossentropy to slices of the input vector and then sum the results.

Assume y_true and y_pred are 1D vectors.

Code:
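
A minimal sketch of the kind of attempt meant here (the exact original snippet is not shown; dictionary_dims is a hypothetical block-size constant):

    import keras.backend as K

    dictionary_dims = 13  # hypothetical block size, only for illustration

    def custom_loss(y_true, y_pred):
        # Apply categorical_crossentropy to each block of dictionary_dims
        # entries and sum the per-block losses.
        loss_sum = 0.0
        for i in range(0, y_true.shape[0], dictionary_dims):
            loss_sum += K.categorical_crossentropy(
                y_true[i:i + dictionary_dims],
                y_pred[i:i + dictionary_dims])
        return loss_sum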

But I get this error:

    for i in range(0,y_true.shape[0],dictionary_dims):
TypeError: __index__ returned non-int (type NoneType)
So how can I access the shape of the input tensor in order to take subsets of it?

Update: I also tried writing the loss directly with tensorflow:

def custom_loss_tf(y_true, y_pred):

    # Inspect what tf.shape returns for the targets tensor
    print('tf.shape(y_true)', tf.shape(y_true))
    print('type(tf.shape(y_true))', type(tf.shape(y_true)))

    sys.exit()  # stop here to inspect the output

    loss_sum = 0.0
    for i in range(0, y_true.shape[0], dictionary_dims):
        loss_sum += keras.backend.categorical_crossentropy(
            y_true[i*dictionary_dims:(i+1)*dictionary_dims],
            y_pred[i*dictionary_dims:(i+1)*dictionary_dims])

    return loss_sum
Output:

tf.shape(y_true) Tensor("Shape:0", shape=(2,), dtype=int32)
type(tf.shape(y_true)) <class 'tensorflow.python.framework.ops.Tensor'>
Two things here:

  • If you want to get a tensor's shape, you should use the functions from
    keras.backend, e.g. keras.backend.int_shape.
  • The first dimension is the batch dimension, so int_shape(y_true)[0]
    will return the batch size. You should use int_shape(y_true)[1].

  • For some reason K.int_shape(y_true) gives me (None, None), while for
    K.int_shape(y_pred) it is (None, 26), so that one appears to work. I
    think this is because y_true is only known at training time, whereas
    y_pred is known from the model when it is compiled. (A working sketch
    along these lines follows the model summary below.)
    tf.shape(y_true) Tensor("Shape:0", shape=(2,), dtype=int32)
    type(tf.shape(y_true)) <class 'tensorflow.python.framework.ops.Tensor'>
    
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    input_1 (InputLayer)         (None, 80, 120, 3)        0
    _________________________________________________________________
    conv2d_1 (Conv2D)            (None, 80, 120, 32)       896
    _________________________________________________________________
    max_pooling2d_1 (MaxPooling2 (None, 40, 60, 32)        0
    _________________________________________________________________
    activation_1 (Activation)    (None, 40, 60, 32)        0
    _________________________________________________________________
    conv2d_2 (Conv2D)            (None, 40, 60, 32)        9248
    _________________________________________________________________
    max_pooling2d_2 (MaxPooling2 (None, 20, 30, 32)        0
    _________________________________________________________________
    activation_2 (Activation)    (None, 20, 30, 32)        0
    _________________________________________________________________
    conv2d_3 (Conv2D)            (None, 20, 30, 64)        18496
    _________________________________________________________________
    max_pooling2d_3 (MaxPooling2 (None, 10, 15, 64)        0
    _________________________________________________________________
    activation_3 (Activation)    (None, 10, 15, 64)        0
    _________________________________________________________________
    conv2d_4 (Conv2D)            (None, 10, 15, 64)        36928
    _________________________________________________________________
    max_pooling2d_4 (MaxPooling2 (None, 5, 7, 64)          0
    _________________________________________________________________
    activation_4 (Activation)    (None, 5, 7, 64)          0
    _________________________________________________________________
    flatten_1 (Flatten)          (None, 2240)              0
    _________________________________________________________________
    head (Dense)                 (None, 26)                58266
    =================================================================
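
Putting the two points together, a minimal sketch of such a loss could look like this. It reads the output width from y_pred (since the static shape of y_true is not available) and assumes a hypothetical dictionary_dims that evenly divides the 26 output units:

    import keras.backend as K

    dictionary_dims = 13  # hypothetical block size; must divide the 26 output units

    def custom_loss(y_true, y_pred):
        # Static shape of y_pred is (batch_size, 26); index 0 is the batch axis.
        num_units = K.int_shape(y_pred)[1]
        loss_sum = 0.0
        for start in range(0, num_units, dictionary_dims):
            # Cross-entropy over each block of dictionary_dims units,
            # keeping the batch axis intact.
            loss_sum += K.categorical_crossentropy(
                y_true[:, start:start + dictionary_dims],
                y_pred[:, start:start + dictionary_dims])
        return loss_sum

    # 'model' is the model whose summary is shown above
    model.compile(optimizer='adam', loss=custom_loss)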