Python ValueError: Dimensions must be equal, but are 3 and 128 for 'add' (op: 'Add') with input shapes: [3], [7,128]


I am trying to build a Bayesian neural network with variational inference using Keras layers, but when I import the class I wrote for variational inference I get a ValueError.

I can use tf.layers.dense without any error, but I want to use variational inference instead. This is the code using tf.layers, which runs without errors:

h_1 = tf.layers.dense(inputs=inputs, units=128, activation=tf.nn.leaky_relu, kernel_regularizer=regularizer)
h_1_a = tf.layers.dense(inputs=action, units=128, activation=tf.nn.leaky_relu, kernel_regularizer=regularizer)
h_1_concat = tf.concat(axis=1, values=[h_1, h_1_a])
h_2 = tf.layers.dense(inputs=h_1_concat, units=64, activation=tf.nn.leaky_relu, kernel_regularizer=regularizer)
h_3 = tf.layers.dense(inputs=h_2, units=16, activation=tf.nn.leaky_relu, kernel_regularizer=regularizer)
out = tf.layers.dense(inputs=h_3, units=1, kernel_regularizer=regularizer)
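For reference, the shapes flowing through the concat step can be sketched with plain NumPy (the batch size of 32 here is illustrative, not from the question):

```python
import numpy as np

batch = 32                       # illustrative batch size
h_1 = np.zeros((batch, 128))     # output of dense(inputs, units=128)
h_1_a = np.zeros((batch, 128))   # output of dense(action, units=128)

# tf.concat(axis=1, ...) joins along the feature axis
h_1_concat = np.concatenate([h_1, h_1_a], axis=1)
print(h_1_concat.shape)  # (32, 256)
```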
This is the class I wrote, which causes the error on import:

import sys

import numpy as np
import tensorflow as tf
from keras import activations, initializers
from keras import backend as K
from keras.layers import Layer


def mixture_prior_params(sigma_1, sigma_2, pi, return_sigma=False):
    params = K.variable([sigma_1, sigma_2, pi], name='mixture_prior_params')
    sigma = np.sqrt(pi * sigma_1 ** 2 + (1 - pi) * sigma_2 ** 2)  # VI
    return params, sigma


def log_mixture_prior_prob(w):
    comp_1_dist = tf.distributions.Normal(0.0, prior_params[0])
    comp_2_dist = tf.distributions.Normal(0.0, prior_params[1])
    comp_1_weight = prior_params[2]
    return K.log(comp_1_weight * comp_1_dist.prob(w) +
                 (1 - comp_1_weight) * comp_2_dist.prob(w))


prior_params, prior_sigma = mixture_prior_params(sigma_1=1.5, sigma_2=0.5, pi=0.3)


class DenseVariational(Layer):
    def __init__(self, output_dim, kl_loss_weight, activation=None, **kwargs):
        self.output_dim = output_dim
        self.kl_loss_weight = kl_loss_weight
        self.activation = activations.get(activation)
        super().__init__(**kwargs)

    def build(self, input_shape):
        self._trainable_weights.append(prior_params)
        self.kernel_mu = self.add_weight(name='kernel_mu',
                                         shape=(input_shape[0][1], self.output_dim),
                                         initializer=initializers.normal(stddev=prior_sigma),
                                         trainable=True)
        self.bias_mu = self.add_weight(name='bias_mu',
                                       shape=(self.output_dim,),
                                       initializer=initializers.normal(stddev=prior_sigma),
                                       trainable=True)
        self.kernel_rho = self.add_weight(name='kernel_rho',
                                          shape=(input_shape[0][1], self.output_dim),
                                          initializer=initializers.constant(0.0),
                                          trainable=True)
        self.bias_rho = self.add_weight(name='bias_rho',
                                        shape=(self.output_dim,),
                                        initializer=initializers.constant(0.0),
                                        trainable=True)
        super().build(input_shape)

    def call(self, x):
        sys.exit()  # left in for debugging
        kernel_sigma = tf.math.softplus(self.kernel_rho)
        kernel = self.kernel_mu + kernel_sigma * tf.random.normal(self.kernel_mu.shape)

        bias_sigma = tf.math.softplus(self.bias_rho)
        bias = self.bias_mu + bias_sigma * tf.random.normal(self.bias_mu.shape)

        self.add_loss(self.kl_loss(kernel, self.kernel_mu, kernel_sigma) +
                      self.kl_loss(bias, self.bias_mu, bias_sigma))

        return self.activation(K.dot(x, kernel) + bias)

    def compute_output_shape(self, input_shape):
        return (input_shape[0][1], self.output_dim)

    def kl_loss(self, w, mu, sigma):
        variational_dist = tf.distributions.Normal(mu, sigma)
        return self.kl_loss_weight * K.sum(variational_dist.log_prob(w) -
                                           log_mixture_prior_prob(w))
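The weight sampling in `call` follows the usual reparameterization trick: a strictly positive sigma is obtained from the unconstrained `rho` via softplus, then noise is scaled and shifted. A minimal NumPy sketch of that step (variable names here are illustrative analogues, not the class's attributes):

```python
import numpy as np

def softplus(x):
    # numerically stable softplus; output is always strictly positive
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

rng = np.random.default_rng(0)
mu = np.zeros((7, 128))        # analogue of kernel_mu
rho = np.full((7, 128), -3.0)  # analogue of kernel_rho
sigma = softplus(rho)          # sigma > 0 everywhere

# reparameterization trick: sample = mu + sigma * standard normal noise
kernel = mu + sigma * rng.standard_normal(mu.shape)
print(kernel.shape)  # (7, 128)
```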
And this is where I use the class:

_inputs = tf.concat(axis=1, values=[inputs, action])
x_in = Input(shape=(7,))

x = DenseVariational(128, kl_loss_weight=kl_loss_weight, activation='relu')(x_in)
x = DenseVariational(64, kl_loss_weight=kl_loss_weight, activation='relu')(x)
x = DenseVariational(64, kl_loss_weight=kl_loss_weight, activation='relu')(x)
x = DenseVariational(16, kl_loss_weight=kl_loss_weight, activation='relu')(x)
x = DenseVariational(3, kl_loss_weight=kl_loss_weight)(x)

model = Model(x_in, x)

out = model(x)
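One thing worth double-checking (an observation about shape conventions, not a confirmed diagnosis of the error): `Input(shape=(7,))` declares 7 features per sample, so a dense kernel mapping to 128 units has shape (7, 128), and data of shape [7, 128] would be read as 7 samples of 128 features. A NumPy sketch of that matmul rule:

```python
import numpy as np

W = np.ones((7, 128))      # dense kernel: 7 input features -> 128 units
x_ok = np.ones((32, 7))    # 32 samples, 7 features each
print((x_ok @ W).shape)    # (32, 128)

x_bad = np.ones((7, 128))  # 7 samples of 128 features: wrong for this kernel
try:
    x_bad @ W              # inner dimensions 128 vs 7 do not match
except ValueError as e:
    print("matmul failed:", e)
```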
The shape of my input is [7,128], but I cannot make sense of the error:

  File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 884, in binary_op_wrapper
    return func(x, y, name=name)
  File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 396, in add
    "Add", x=x, y=y, name=name)
  File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3616, in create_op
    op_def=op_def)
  File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2027, in __init__
    control_input_ops)
  File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1867, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 3 and 128 for 'add' (op: 'Add') with input shapes: [3], [7,128]
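The failing op is a plain elementwise Add whose operands cannot be broadcast: a rank-1 tensor of shape [3] against a [7, 128] tensor (the [3] matching the size of `prior_params` is my reading of the code, not confirmed). The same shape clash can be reproduced with plain NumPy broadcasting:

```python
import numpy as np

a = np.zeros(3)         # shape [3], e.g. a 3-element parameter vector
b = np.zeros((7, 128))  # shape [7, 128], e.g. a dense kernel

try:
    a + b  # broadcasting aligns trailing dims: 3 vs 128 cannot match
except ValueError as e:
    print("broadcast failed:", e)
```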

Which line of code causes the error? When I use

from my_VI import DenseVariational

to use the class I wrote, I get this error even when I don't use the class and stick with tf.layers as described above; merely importing the class triggers the error. @CalvinGodfrey I think this line causes the error:

self.update_target_network_params = [self.target_network_params[i].assign(tf.multiply(self.network_params[i], self.tau) + tf.multiply(self.target_network_params[i], 1. - self.tau)) for i in range(len(self.target_network_params))]

The target network is just a copy of the DNN I wrote above. @CalvinGodfrey