
Python: How to load the weights of one layer of a trained network into a new network in RETURNN?


I have the trained weights of the following network stored at path/to/modelFile:

network={
"conv_1" : {"class": "conv", "filter_size": (400,), "activation":"abs" , "padding": "valid", "strides": 10, "n_out": 64 },
"pad_conv_1_time_dim" : {"class": "pad", "axes": "time", "padding": 20, "from": ["conv_1"]},
"conv_2" : {"class": "conv", "input_add_feature_dim": True, "filter_size": (40, 64), "activation":"abs", "padding": "valid","strides": 16, "n_out": 128, "from": ["pad_conv_1_time_dim"]},
"flatten_conv": {"class": "merge_dims", "axes": "except_time","n_out": 128,  "from": ["conv_2"]},
"window_1": {"class": "window", "window_size": 17, "from": ["flatten_conv"]},
"flatten_window": {"class": "merge_dims", "axes":"except_time","from": ["window_1"]},
"lin_1" :   { "class" : "linear", "activation": None, "n_out": 512,"from" : ["flatten_window"] },
"ff_2" :   { "class" : "linear", "activation": "relu", "n_out": 2000, "from" : ["lin_1"] },
"output" :   { "class" : "softmax", "loss" : "ce", "from" : ["ff_2"] }
}
I would like to load the trained weights of the layers "conv_1" and "conv_2" into the following network:

network={
"conv_1" : {"class": "conv", "filter_size": (400,), "activation": "abs" , "padding": "valid", "strides": 10, "n_out": 64 },
"pad_conv_1_time_dim" : {"class": "pad", "axes": "time", "padding": 20, "from": ["conv_1"]},
"conv_2" : {"class": "conv", "input_add_feature_dim": True, "filter_size": (40, 64), "activation":"abs", "padding": "valid", "strides": 16, "n_out": 128, "from": ["pad_conv_1_time_dim"]},
"flatten_conv": {"class": "merge_dims", "axes": "except_time", "n_out": 128,  "from": ["conv_2"]},
"lstm1_fw" : { "class": "rec", "unit": "lstmp", "n_out" : rnnLayerNodes, "direction": 1, "from" : ['flatten_conv'] },
"lstm1_bw" : { "class": "rec", "unit": "lstmp", "n_out" : rnnLayerNodes, "direction": -1, "from" : ['flatten_conv'] },
"lin_1" :   { "class" : "linear", "activation": None, "n_out": 512, "from" : ["lstm1_fw", "lstm1_bw"] },
"ff_2" :   { "class" : "linear", "activation": "relu", "n_out": 2000, "from" : ["lin_1"] },
"ff_3" :   { "class" : "linear", "activation": "relu", "n_out": 2000,"from" : ["ff_2"] },
"ff_4" :   { "class" : "linear", "activation": "relu", "n_out": 2000,"from" : ["ff_3"] },
"output" :   { "class" : "softmax", "loss" : "ce", "from" : ["ff_4"] }
}

How can this be done in RETURNN?

Using a subnetwork layer is one option. That would look like this:

trained_network_model_file = 'path/to/model_file'

trained_network = {
"conv_1" : {"class": "conv", "filter_size": (400,), "activation": "abs" , "padding": "valid", "strides": 10, "n_out": 64 },
"pad_conv_1_time_dim" : {"class": "pad", "axes": "time", "padding": 20, "from": ["conv_1"]},
"conv_2" : {"class": "conv", "input_add_feature_dim": True, "filter_size": (40, 64), "activation":"abs", "padding": "valid", "strides": 16, "n_out": 128, "from": ["pad_conv_1_time_dim"]},
"flatten_conv": {"class": "merge_dims", "axes": "except_time","n_out": 128,  "from": ["conv_2"]}
}

network = {
"conv_layers" : { "class" : "subnetwork", "subnetwork": trained_network, "load_on_init": trained_network_model_file, "n_out": 128},
"lstm1_fw" : { "class": "rec", "unit": "lstmp", "n_out" : rnnLayerNodes, "direction": 1, "from" : ['conv_layers'] },
"lstm1_bw" : { "class": "rec", "unit": "lstmp", "n_out" : rnnLayerNodes, "direction": -1, "from" : ['conv_layers'] },
"lin_1" :   { "class" : "linear", "activation": None, "n_out": 512, "from" : ["lstm1_fw", "lstm1_bw"] },
"ff_2" :   { "class" : "linear", "activation": "relu", "n_out": 2000, "from" : ["lin_1"] },
"ff_3" :   { "class" : "linear", "activation": "relu", "n_out": 2000, "from" : ["ff_2"] },
"ff_4" :   { "class" : "linear", "activation": "relu", "n_out": 2000, "from" : ["ff_3"] },
"output" :   { "class" : "softmax", "loss" : "ce", "from" : ["ff_4"] }
}
I think for your case this would be the preferred solution.

Otherwise, every layer also has the custom_param_importer option, which you could use for this.
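As a rough sketch of that option (the callback signature shown here is an assumption based on RETURNN's LayerBase.set_param_values_by_dict and may differ in your RETURNN version; the parameter names "W"/"b" are also assumptions), a custom_param_importer can be given as a function that receives the layer, a dict of loaded parameter values, and the TF session:

```python
# Hypothetical sketch, not a verified RETURNN API usage: a callback that
# assigns externally loaded numpy arrays to a layer's TF variables.
def import_conv1_params(layer, values_dict, session):
    # values_dict is assumed to map parameter names (e.g. "W", "b")
    # to numpy arrays loaded from the trained checkpoint.
    for param_name, param in layer.params.items():
        if param_name in values_dict:
            param.load(values_dict[param_name], session=session)

network = {
    "conv_1": {"class": "conv", "filter_size": (400,), "activation": "abs",
               "padding": "valid", "strides": 10, "n_out": 64,
               "custom_param_importer": import_conv1_params},
    # ... remaining layers as in the network above ...
}
```

Check the RETURNN layer documentation for the exact signature your version expects before relying on this.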


Also, for many layers you can define initializers for the parameters, e.g. for the ConvLayer you can use forward_weights_init. There you can use functions such as load_txt_file_initializer, or a similar function could be added to load directly from a TF checkpoint file.
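As an illustration of loading directly from a TF checkpoint (the variable name "conv_1/W" is an assumption about how the checkpoint names its tensors, and whether forward_weights_init accepts a constant initializer built this way depends on your RETURNN version; treat this as a sketch only):

```python
import tensorflow as tf

# Read one variable's trained weights out of the old checkpoint.
reader = tf.train.load_checkpoint("path/to/model_file")
conv1_weights = reader.get_tensor("conv_1/W")  # tensor name is an assumption

network = {
    "conv_1": {"class": "conv", "filter_size": (400,), "activation": "abs",
               "padding": "valid", "strides": 10, "n_out": 64,
               # Initialize the new layer's weights from the loaded array.
               "forward_weights_init": tf.constant_initializer(conv1_weights)},
    # ... remaining layers ...
}
```

You can list the actual tensor names in a checkpoint with `tf.train.list_variables("path/to/model_file")` to find the right key.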

If you get an error about a missing "output" layer in the subnetwork, you can rename "flatten_conv" to "output" and it should work. If the weights of the subnetwork should not be trained, the option "trainable": False must be added to all layers of the trained network that contain trainable parameters, as well as to the subnetwork layer "conv_layers" itself.
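Setting "trainable": False on every layer by hand is error-prone, so it can be done programmatically on the network dict before passing it to RETURNN. A minimal sketch in plain Python (the helper name freeze_layers and the reduced example dict are mine, not part of RETURNN):

```python
def freeze_layers(net, layer_names=None):
    """Return a copy of a RETURNN network dict with "trainable": False
    set on the given layers (all layers if layer_names is None)."""
    frozen = {}
    for name, layer in net.items():
        layer = dict(layer)  # shallow copy so the original dict is untouched
        if layer_names is None or name in layer_names:
            layer["trainable"] = False
        frozen[name] = layer
    return frozen

# Reduced example: freeze every layer of the pretrained subnetwork.
trained_network = {
    "conv_1": {"class": "conv", "filter_size": (400,), "n_out": 64},
    "conv_2": {"class": "conv", "filter_size": (40, 64), "n_out": 128,
               "from": ["conv_1"]},
}
frozen = freeze_layers(trained_network)
```

The same flag still has to be set on the "conv_layers" subnetwork layer in the outer network dict.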