
Pickle a Python Lasagne model

I trained a simple long short-term memory (LSTM) model in Lasagne as follows.

Here is the architecture:

l_in = lasagne.layers.InputLayer(shape=(None, None, vocab_size))

# We now build the LSTM layer which takes l_in as the input layer
# We clip the gradients at GRAD_CLIP to prevent the problem of exploding gradients. 

l_forward_1 = lasagne.layers.LSTMLayer(
    l_in, N_HIDDEN, grad_clipping=GRAD_CLIP,
    nonlinearity=lasagne.nonlinearities.tanh)

l_forward_2 = lasagne.layers.LSTMLayer(
    l_forward_1, N_HIDDEN, grad_clipping=GRAD_CLIP,
    nonlinearity=lasagne.nonlinearities.tanh)

# The l_forward layer creates an output of dimension (batch_size, SEQ_LENGTH, N_HIDDEN)
# Since we are only interested in the final prediction, we isolate that quantity and feed it to the next layer. 
# The output of the sliced layer will then be of size (batch_size, N_HIDDEN)
l_forward_slice = lasagne.layers.SliceLayer(l_forward_2, -1, 1)

# The sliced output is then passed through the softmax nonlinearity to create probability distribution of the prediction
# The output of this stage is (batch_size, vocab_size)
l_out = lasagne.layers.DenseLayer(
    l_forward_slice, num_units=vocab_size, W=lasagne.init.Normal(),
    nonlinearity=lasagne.nonlinearities.softmax)

# Theano tensor for the targets
target_values = T.ivector('target_output')

# lasagne.layers.get_output produces a variable for the output of the net
network_output = lasagne.layers.get_output(l_out)

# The loss function is calculated as the mean of the (categorical) cross-entropy between the prediction and target.
cost = T.nnet.categorical_crossentropy(network_output, target_values).mean()

# Retrieve all parameters from the network
all_params = lasagne.layers.get_all_params(l_out)

# Compute AdaGrad updates for training
print("Computing updates ...")
updates = lasagne.updates.adagrad(cost, all_params, LEARNING_RATE)

# Theano functions for training and computing cost
print("Compiling functions ...")
train = theano.function([l_in.input_var, target_values], cost, updates=updates, allow_input_downcast=True)
compute_cost = theano.function([l_in.input_var, target_values], cost, allow_input_downcast=True)

# In order to generate text from the network, we need the probability distribution of the next character given
# the state of the network and the input (a seed).
# In order to produce the probability distribution of the prediction, we compile a function called probs. 

probs = theano.function([l_in.input_var], network_output, allow_input_downcast=True)
The model is trained as follows:

for it in xrange(data_size * num_epochs / BATCH_SIZE):
    try_it_out()  # Generate text using the p^th character as the start.

    avg_cost = 0
    for _ in range(PRINT_FREQ):
        x, y = gen_data(p)

        # print(p)
        p += SEQ_LENGTH + BATCH_SIZE - 1
        if p + BATCH_SIZE + SEQ_LENGTH >= data_size:
            print('Carriage Return')
            p = 0

        avg_cost += train(x, y)
    print("Epoch {} average loss = {}".format(it * 1.0 * PRINT_FREQ / data_size * BATCH_SIZE, avg_cost / PRINT_FREQ))

How can I save the model so that I do not need to train it again? With scikit-learn I would normally just pickle the model object. However, the analogous procedure for Theano/Lasagne is not clear to me.

You can save the weights with numpy:

# Note: get_all_param_values expects the output *layer* (l_out in the code
# above), not the compiled output expression.
np.savez('model.npz', *lasagne.layers.get_all_param_values(l_out))
and load them again like this:

with np.load('model.npz') as f:
    # np.savez stores positional arrays under the names arr_0, arr_1, ...
    param_values = [f['arr_%d' % i] for i in range(len(f.files))]
lasagne.layers.set_all_param_values(l_out, param_values)

As for the model definition itself: one option is, of course, to keep the code that builds the network and simply regenerate it before setting the pretrained weights.
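For example, here is a minimal sketch of that round trip, assuming the constants (vocab_size, N_HIDDEN, GRAD_CLIP) from the question and a hypothetical build_network() helper that repeats its layer definitions:

import numpy as np
import lasagne

def build_network(vocab_size, n_hidden, grad_clip):
    # Repeat the architecture from the question so that the parameter
    # shapes match the saved .npz file exactly.
    l_in = lasagne.layers.InputLayer(shape=(None, None, vocab_size))
    l_forward_1 = lasagne.layers.LSTMLayer(
        l_in, n_hidden, grad_clipping=grad_clip,
        nonlinearity=lasagne.nonlinearities.tanh)
    l_forward_2 = lasagne.layers.LSTMLayer(
        l_forward_1, n_hidden, grad_clipping=grad_clip,
        nonlinearity=lasagne.nonlinearities.tanh)
    l_forward_slice = lasagne.layers.SliceLayer(l_forward_2, -1, 1)
    l_out = lasagne.layers.DenseLayer(
        l_forward_slice, num_units=vocab_size, W=lasagne.init.Normal(),
        nonlinearity=lasagne.nonlinearities.softmax)
    return l_in, l_out

# Rebuild the graph, then restore the trained weights into it.
l_in, l_out = build_network(vocab_size, N_HIDDEN, GRAD_CLIP)
with np.load('model.npz') as f:
    param_values = [f['arr_%d' % i] for i in range(len(f.files))]
lasagne.layers.set_all_param_values(l_out, param_values)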

I have successfully used dill in combination with the numpy.savez function:

import numpy as np
import dill as pickle

...
np.savez('model.npz', *lasagne.layers.get_all_param_values(network))
with open('model.dpkl', 'wb') as p_output:
    pickle.dump(network, p_output)
To import the pickled model:

with open('model.dpkl', 'rb') as p_input:
    network = pickle.load(p_input)

with np.load('model.npz') as f:
    param_values = [f['arr_%d' % i] for i in range(len(f.files))]
lasagne.layers.set_all_param_values(network, param_values)
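dill is a drop-in extension of the standard pickle module that can serialize objects plain pickle sometimes cannot, which is why it is imported in place of pickle above. One caveat worth adding (my assumption, not part of the original answer): compiled Theano functions are not restored this way, so the inference function has to be recompiled from the unpickled network, along these lines:

import theano
import lasagne

# get_all_layers returns the layers in topological order, so the first
# element is the InputLayer whose input_var feeds the compiled function.
input_layer = lasagne.layers.get_all_layers(network)[0]
network_output = lasagne.layers.get_output(network)
probs = theano.function([input_layer.input_var], network_output,
                        allow_input_downcast=True)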

You can save both the model parameters and the model itself via pickle:

import cPickle as pickle
import os

# Save the network and its parameters as a dictionary.
netInfo = {'network': network, 'params': lasagne.layers.get_all_param_values(network)}
Net_FileName = 'LSTM.pkl'
# Save the dictionary as a .pkl file ('/path/to/a/folder/' is a placeholder).
pickle.dump(netInfo, open(os.path.join('/path/to/a/folder/', Net_FileName), 'wb'),
            protocol=pickle.HIGHEST_PROTOCOL)
After saving, the model can be retrieved via pickle.load:

net = pickle.load(open(os.path.join('/path/to/a/folder/', Net_FileName), 'rb'))
all_params = net['params']
lasagne.layers.set_all_param_values(net['network'], all_params)

Yes, that last part is worth emphasizing: you need to keep the code that builds the model. I recommend doing that in a separate file that can be loaded/imported, as sketched below.
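For instance (the file name is a hypothetical choice):

# lstm_model.py: holds only the architecture code, e.g. the
# build_network() helper sketched in the first answer.

# In any script that needs the trained model:
import numpy as np
import lasagne
from lstm_model import build_network

l_in, l_out = build_network(vocab_size, N_HIDDEN, GRAD_CLIP)
with np.load('model.npz') as f:
    param_values = [f['arr_%d' % i] for i in range(len(f.files))]
lasagne.layers.set_all_param_values(l_out, param_values)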