Python: how to clone a model with ReLU and concatenate layers in tf.keras 1.14?
This code is provided by the official NeRF team (2020) and uses TensorFlow 1.14. I am trying to clone the model with tf.keras.clone_model(), but with the current code it fails with "'ReLU' object has no attribute '_name'". The model is created with the code below; its summary can be found at the end of the post.
import tensorflow as tf

def init_nerf_model(D=8, W=256, input_ch=3, input_ch_views=3, output_ch=4, skips=[4], use_viewdirs=False):
    relu = tf.keras.layers.ReLU()

    def dense(W, act=relu):
        return tf.keras.layers.Dense(W, activation=act)

    print('MODEL', input_ch, input_ch_views, type(
        input_ch), type(input_ch_views), use_viewdirs)
    input_ch = int(input_ch)
    input_ch_views = int(input_ch_views)
    inputs = tf.keras.Input(shape=(input_ch + input_ch_views,))
    inputs_pts, inputs_views = tf.split(inputs, [input_ch, input_ch_views], -1)
    inputs_pts.set_shape([None, input_ch])
    inputs_views.set_shape([None, input_ch_views])
    print(inputs.shape, inputs_pts.shape, inputs_views.shape)

    outputs = inputs_pts
    for i in range(D):
        outputs = dense(W)(outputs)
        if i in skips:
            outputs = tf.concat([inputs_pts, outputs], -1)

    if use_viewdirs:
        alpha_out = dense(1, act=None)(outputs)
        bottleneck = dense(256, act=None)(outputs)
        inputs_viewdirs = tf.concat(
            [bottleneck, inputs_views], -1)  # concat viewdirs
        outputs = inputs_viewdirs
        # The supplement to the paper states there are 4 hidden layers here, but this is an error since
        # the experiments were actually run with 1 hidden layer, so we will leave it as 1.
        for i in range(1):
            outputs = dense(W//2)(outputs)
        outputs = dense(3, act=None)(outputs)
        outputs = tf.concat([outputs, alpha_out], -1)
    else:
        outputs = dense(output_ch, act=None)(outputs)

    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    return model
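One workaround that sidesteps tf.keras.clone_model() entirely (a sketch, with `copy_nerf_model` being a hypothetical helper name, not part of the NeRF code): rebuild the architecture by calling the factory function again, then copy the weight values across. This avoids both the shared ReLU instance and the raw tf.split/tf.concat op layers ever being serialized.

```python
import numpy as np
import tensorflow as tf

def copy_nerf_model(factory, model, **kwargs):
    """Rebuild the architecture via the factory and copy the weights over.

    Avoids tf.keras.models.clone_model(), which fails on this graph;
    kwargs must match the ones the original model was built with.
    """
    new_model = factory(**kwargs)               # fresh, independent graph
    new_model.set_weights(model.get_weights())  # same parameter values
    return new_model
```

With the code above this would be called as, e.g., `copy_nerf_model(init_nerf_model, model, use_viewdirs=True)` with the same arguments used to build `model`.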
The only solution I found when searching was to create a Sequential model like the one below, but I do not see how to reproduce the same structure that way; for example, how would I create the input layer and the concatenate layers? (The summary is at the bottom.)
model = Sequential()
model.add(Dense(W))
model.add(ReLU())
...
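Sequential cannot express this network at all, because the graph branches: the input is split, and intermediate outputs are concatenated back in. A clone-friendly alternative (a sketch, not the NeRF team's code; `nerf_model_cloneable` is a hypothetical name) is to keep the functional API but use only standard, serializable Keras layers: two separate Input tensors instead of tf.split, Concatenate layers instead of tf.concat, and the string 'relu' so no activation-layer instance is shared.

```python
import tensorflow as tf

def nerf_model_cloneable(D=8, W=256, input_ch=3, input_ch_views=3, skips=(4,)):
    # The caller splits the batch into the two halves before feeding the model,
    # instead of the model doing tf.split internally.
    inputs_pts = tf.keras.Input(shape=(input_ch,))
    inputs_views = tf.keras.Input(shape=(input_ch_views,))
    outputs = inputs_pts
    for i in range(D):
        outputs = tf.keras.layers.Dense(W, activation='relu')(outputs)
        if i in skips:
            # Skip connection as a real Keras layer, not a raw tf.concat op
            outputs = tf.keras.layers.Concatenate(-1)([inputs_pts, outputs])
    # use_viewdirs branch of the original, with plain Keras layers throughout
    alpha_out = tf.keras.layers.Dense(1)(outputs)
    bottleneck = tf.keras.layers.Dense(256)(outputs)
    outputs = tf.keras.layers.Concatenate(-1)([bottleneck, inputs_views])
    outputs = tf.keras.layers.Dense(W // 2, activation='relu')(outputs)
    outputs = tf.keras.layers.Dense(3)(outputs)
    outputs = tf.keras.layers.Concatenate(-1)([outputs, alpha_out])
    return tf.keras.Model([inputs_pts, inputs_views], outputs)
```

Because every node in this graph is a standard Keras layer with a config, tf.keras.models.clone_model() should be able to reconstruct it.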
So I am asking about either of two approaches:
A. How to make the ReLU layers cloneable while keeping the current way of constructing the model, or
B. How to build the current model, including the input and concatenate layers, using Sequential().
I would greatly appreciate any help.
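On approach A, a minimal sketch of the likely fix: the code above creates one `tf.keras.layers.ReLU()` object and reuses it as the activation of every Dense layer, and that shared instance is what clone_model trips over. Passing the string 'relu' instead gives each Dense layer its own activation, so nothing is shared between layers:

```python
import tensorflow as tf

def dense(W, act='relu'):
    # 'relu' is resolved to a per-layer activation function, so no single
    # ReLU layer object is reused across Dense layers.
    return tf.keras.layers.Dense(W, activation=act)

inp = tf.keras.Input(shape=(3,))
out = dense(8)(dense(8)(inp))
model = tf.keras.Model(inp, out)
clone = tf.keras.models.clone_model(model)  # no shared layer objects to choke on
```

This only changes the activation bookkeeping; the math (max(0, x) after each Dense) is identical.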
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 90)] 0
__________________________________________________________________________________________________
tf_op_layer_split (TensorFlowOp [(None, 63), (None, 0 input_1[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 256) 16384 tf_op_layer_split[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 256) 65792 dense[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 256) 65792 dense_1[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 256) 65792 dense_2[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 256) 65792 dense_3[0][0]
__________________________________________________________________________________________________
tf_op_layer_concat (TensorFlowO [(None, 319)] 0 tf_op_layer_split[0][0]
dense_4[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 256) 81920 tf_op_layer_concat[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None, 256) 65792 dense_5[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 256) 65792 dense_6[0][0]
__________________________________________________________________________________________________
dense_9 (Dense) (None, 256) 65792 dense_7[0][0]
__________________________________________________________________________________________________
tf_op_layer_concat_1 (TensorFlo [(None, 283)] 0 dense_9[0][0]
tf_op_layer_split[0][1]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 128) 36352 tf_op_layer_concat_1[0][0]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 3) 387 dense_10[0][0]
__________________________________________________________________________________________________
dense_8 (Dense) (None, 1) 257 dense_7[0][0]
__________________________________________________________________________________________________
tf_op_layer_concat_2 (TensorFlo [(None, 4)] 0 dense_11[0][0]
dense_8[0][0]
==================================================================================================
Total params: 595,844
Trainable params: 595,844
Sanity check: are you using TF 2.x? — I am using TF 1.14, with the tf.keras bundled with that same version.
new_model = model
new_grad_vars = new_model.trainable_variables
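Note that `new_model = model` does not copy anything: it only binds a second name to the same object, so `new_grad_vars` refers to the very same variables that training the original updates. A sketch of the difference between aliasing and an independent copy:

```python
import numpy as np
import tensorflow as tf

inp = tf.keras.Input(shape=(2,))
model = tf.keras.Model(inp, tf.keras.layers.Dense(1)(inp))

alias = model  # same object; the trainable variables are shared, not copied
assert alias is model
assert alias.trainable_variables[0] is model.trainable_variables[0]

# An independent copy: rebuild the same graph, then copy the weight values
inp2 = tf.keras.Input(shape=(2,))
copy = tf.keras.Model(inp2, tf.keras.layers.Dense(1)(inp2))
copy.set_weights(model.get_weights())
assert copy is not model
assert np.allclose(copy.get_weights()[0], model.get_weights()[0])
```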