Saving a PyMC3 model to disk after sampling


I built a deep Bayesian neural network with PyMC3, trained the model, and obtained the samples I need. Now I am looking for a way to save this fitted model to disk. I tried to pickle it, but when I change the size of the test dataset I get an error. Here is how I saved it:

    def save_model(trace, network, ann_input, num):
        print("in")
        with open('my_model.pkl', 'wb') as buff:
            pickle.dump({'model': network, 'trace': trace}, buff)

    def load_model(num):
        with open('my_model.pkl', 'rb') as buff:
            data = pickle.load(buff)

I get this error:

    print(accuracy_score(y_pred, y_test))
    File "D:\Users\wissam\AppData\Local\Programs\Python\36\lib\site-packages\sklearn\metrics\classification.py", line 172, in accuracy_score
        y_type, y_true, y_pred = _check_targets(y_true, y_pred)
    File "D:\Users\wissam\AppData\Local\Programs\Python\36\lib\site-packages\sklearn\metrics\classification.py", line 72, in _check_targets
        check_consistent_length(y_true, y_pred)
    File "D:\Users\wissam\AppData\Local\Programs\Python\36\lib\site-packages\sklearn\utils\validation.py", line 181, in check_consistent_length
        "samples: %r" % [int(l) for l in lengths])
    ValueError: Found input variables with inconsistent numbers of samples: [174, 169]
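For context, the ValueError comes from sklearn's consistent-length check: `y_pred` still has the training-set length (174) baked in from the pickled model, while the new `y_test` has 169 rows. Below is a minimal stand-alone sketch of that check; `check_consistent_length` here is a simplified reimplementation for illustration, not sklearn's own code:

```python
def check_consistent_length(*arrays):
    # Simplified version of sklearn's check: every input must have
    # the same number of samples (same first dimension).
    lengths = [len(a) for a in arrays]
    if len(set(lengths)) > 1:
        raise ValueError(
            "Found input variables with inconsistent numbers of "
            "samples: %r" % lengths)

# y_pred keeps the old training-set size stored inside the pickled model,
# while y_test comes from the new, smaller test split.
y_pred = [0] * 174
y_test = [1] * 169

try:
    check_consistent_length(y_pred, y_test)
except ValueError as err:
    print(err)  # Found input variables with inconsistent numbers of samples: [174, 169]
```

The fix, as the answer below shows, is to rebuild the model against the new data and reuse only the sampled trace, so the prediction length matches the current test set.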

I also tried to use a backend, with the following code:

    with neural_network:
        step = pm.Metropolis()
        print("start sampling")
        db = pm.backends.Text('test')
        trace = pm.sample(10000, step, trace=db)
        print("end sampling")
        from pymc3 import summary
        summary(trace, varnames=['p'])
and I got the following error:

Traceback (most recent call last):
File "D:\Users\wissam\AppData\Roaming\Python\Python36\site-
packages\pymc3\model.py", line 121, in get_context
return cls.get_contexts()[-1]
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File 
"D:/Users/wissam/PycharmProjects/git_repo/projetenovap/Training/
trainModel.py", 
line 301, in <module>
trace = pm.backends.text.load('test')
File "D:\Users\wissam\AppData\Roaming\Python\Python36\site-
packages\pymc3\backends\text.py", line 171, in load
strace = Text(name, model=model)
File "D:\Users\wissam\AppData\Roaming\Python\Python36\site-
packages\pymc3\backends\text.py", line 44, in __init__
super(Text, self).__init__(name, model, vars)
File "D:\Users\wissam\AppData\Roaming\Python\Python36\site-
packages\pymc3\backends\base.py", line 31, in __init__
model = modelcontext(model)
File "D:\Users\wissam\AppData\Roaming\Python\Python36\site-
packages\pymc3\model.py", line 131, in modelcontext
return Model.get_context()
File "D:\Users\wissam\AppData\Roaming\Python\Python36\site-
packages\pymc3\model.py", line 123, in get_context
raise TypeError("No context on context stack")
TypeError: No context on context stack
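This TypeError happens because `pm.backends.text.load('test')` was called outside of any model context: with no model argument, PyMC3 falls back to `Model.get_context()`, which fails when no `with model:` block is active. A minimal sketch of that context-stack mechanism, with simplified hypothetical names rather than PyMC3's real implementation:

```python
class Model:
    # Simplified stand-in for pymc3.Model's context stack.
    _contexts = []

    def __enter__(self):
        Model._contexts.append(self)
        return self

    def __exit__(self, *exc_info):
        Model._contexts.pop()

    @classmethod
    def get_context(cls):
        try:
            return cls._contexts[-1]
        except IndexError:
            # This is the IndexError PyMC3 turns into the TypeError above.
            raise TypeError("No context on context stack")

def modelcontext(model=None):
    # When no model is passed explicitly, fall back to the innermost `with` block.
    return model if model is not None else Model.get_context()

try:
    modelcontext()  # outside any `with Model():` block
except TypeError as err:
    print(err)  # No context on context stack

with Model() as neural_network:
    assert modelcontext() is neural_network  # inside the block it resolves
```

The practical consequence: re-enter the model before loading, i.e. call `pm.backends.text.load('test')` inside a `with neural_network:` block.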

Does anyone have an idea how to save this model?

My problem is solved: we should save only the trace (the sampled data) and build a new neural network each time, so we keep only the weights rather than the whole network.

Here is the code I used:

    def predict(trace, test_X):
        # Rebuild the model
        X_test, X_train, y_test, y_train = loadDataset()
        binary = sklearn.preprocessing.LabelBinarizer().fit(y_train)
        y_2_bin = binary.transform(y_train)
        ann_input = theano.shared(X_train)
        n_hidden = 8
        nbHidden = 3

        # Initialize random weights between each layer
        init_1 = np.random.randn(X_train.shape[1], n_hidden)
        init_2 = []
        for i in range(nbHidden):
            init_2.append(np.random.randn(n_hidden, n_hidden))
        init_out = np.random.randn(n_hidden, 3)

        with pm.Model() as neural_network:
            # Weights from input to first hidden layer
            weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
                                     shape=(X_train.shape[1], n_hidden),
                                     testval=init_1)

            # Weights between the hidden layers
            weights_1_2 = []
            for i in range(1, nbHidden, 1):
                weights_1_2.append(pm.Normal('w_' + str(i) + '_' + str(i + 1), 0, sd=1,
                                             shape=(n_hidden, n_hidden),
                                             testval=init_2[i]))

            # Weights from the last hidden layer to the output
            weights_2_out = pm.Normal('w_' + str(nbHidden) + '_out', 0, sd=1,
                                      shape=(n_hidden, 3),
                                      testval=init_out)

            # Build the neural network using the tanh activation function
            act_1 = T.tanh(T.dot(ann_input, weights_in_1))
            act_2 = []
            act_2.append(T.tanh(T.dot(act_1, weights_1_2[0])))
            for i in range(1, nbHidden, 1):
                act_2.append(T.tanh(T.dot(act_2[i - 1], weights_1_2[i - 1])))
            act_out = T.nnet.softmax(T.dot(act_2[nbHidden - 1], weights_2_out))

            # 3 discrete output classes -> binarized labels with a Bernoulli likelihood
            p = pm.Deterministic('p', act_out)
            out = pm.Bernoulli('out', p, observed=y_2_bin)
            print("model established")

        # Point the shared input at the new test data
        ann_input.set_value(test_X)

        # Use the saved trace, which contains the weights
        with neural_network:
            print("start sampling")
            ppc = pm.sample_ppc(trace, samples=1000)
            print("end sampling")

        # Return the prediction
        y_pred = ppc['p']
        return y_pred
To save the trace, I used this function:

     # Save the trained model
     def save_model(trace, network):
         with open('my_model.pkl', 'wb') as buff:
             pickle.dump({'model': network, 'trace': trace}, buff)
To reload it, I used:

    # Reload the trained model
    def load_model(num):
        with open('my_model.pkl', 'rb') as buff:
            data = pickle.load(buff)
        network, trace = data['model'], data['trace']
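The save/load pair above is plain pickle, so the round-trip can be checked with a stand-in dict in place of the real trace object (the variable names and values here are hypothetical, just to show the mechanics):

```python
import os
import pickle
import tempfile

# Stand-in for the sampled trace: a dict of variable names to draws.
trace = {'w_in_1': [0.1, 0.2, 0.3], 'p': [0.5, 0.6]}

path = os.path.join(tempfile.mkdtemp(), 'my_model.pkl')

# Save: the same pattern as save_model above, trace only.
with open(path, 'wb') as buff:
    pickle.dump({'trace': trace}, buff)

# Reload: the same pattern as load_model above.
with open(path, 'rb') as buff:
    data = pickle.load(buff)

print(data['trace'] == trace)  # True
```

Note that only the trace needs to survive the round-trip; the network itself is rebuilt by `predict` before `sample_ppc` reuses the saved draws.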
