Python TypeError: can't pickle _thread._local objects

Tags: python, tensorflow, machine-learning, data-science, pickle

I have implemented an image-classification machine-learning model using TensorFlow (Python). The model works fine, and now I have to move it to the production stage. For that I am using the sklearn joblib library and have also tried the pickle library, but in both cases I get an error.

model = Models.Sequential()

model.add(Layers.Conv2D(200,kernel_size=(5,5),activation='relu',input_shape=(150,150,3)))
model.add(Layers.Conv2D(180,kernel_size=(5,5),activation='relu'))
model.add(Layers.MaxPool2D(5,5))

model.add(Layers.Conv2D(50,kernel_size=(5,5),activation='relu'))
model.add(Layers.MaxPool2D(5,5))
model.add(Layers.Flatten())
model.add(Layers.Dense(180,activation='relu'))
model.add(Layers.Dense(100,activation='relu'))
model.add(Layers.Dense(50,activation='relu'))
model.add(Layers.Dropout(rate=0.5))
model.add(Layers.Dense(6,activation='softmax'))

model.compile(optimizer=Optimizer.Adam(lr=0.0001),loss='sparse_categorical_crossentropy',metrics=['accuracy'])

model.summary()

trained = model.fit(Images,Labels,epochs=25,validation_split=0.20)




test_images,test_labels = get_images('C:/Users/shrey/Desktop/img_classification/New folder/seg_test/seg_test/')
test_images = np.array(test_images)
test_labels = np.array(test_labels)
test_images = test_images / 255.0
model.evaluate(test_images,test_labels, verbose=1)

# Let's predict the images from the "pred" folder.
pred_images,no_labels = get_images('C:/Users/shrey/Desktop/img_classification/New folder/seg_pred/')
#pred_images = tf.image.decode_jpeg(pred_images)
#pred_images = tf.cast(pred_images, tf.float32)                                   
pred_images = np.array(pred_images)
pred_images.shape

import pickle
from sklearn.externals import joblib

with open('model_pickle','wb') as f:
    pickle.dump(model,f)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-43-5da5ca65d688> in <module>
      1 with open('model_pickle','wb') as f:
----> 2      pickle.dump(model,f)

TypeError: can't pickle _thread._local objects
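
The error occurs because a compiled Keras model holds references to thread-local TensorFlow state (objects of type _thread._local, such as graph/session bookkeeping) that Python's pickle module cannot serialize; joblib fails for the same reason, since it pickles under the hood. The supported route is Keras' own serialization API. A minimal sketch of the fix, assuming the model and pred_images from the code above and a hypothetical file name:

# Persist the trained Keras model with its own save API instead of pickle.
model.save('model.h5')   # stores architecture, weights and optimizer state in a single HDF5 file

# Later, e.g. in the production process, rebuild the model from that file.
from tensorflow.keras.models import load_model
restored_model = load_model('model.h5')
predictions = restored_model.predict(pred_images)   # reuses the images prepared above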

In the first program we build the model, fit it, and save it to disk as model.h5. In the next program I load the saved model.h5 and use the loaded model to make predictions. The dataset used in the programs (the Pima Indians Diabetes CSV) can be downloaded separately.

Build, fit, and save the model -

%tensorflow_version 2.x
# MLP for Pima Indians Dataset saved to single file
import numpy as np
from numpy import loadtxt
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

print(tf.__version__)

# load pima indians dataset
dataset = np.loadtxt("/content/pima-indians-diabetes.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# define model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Model Summary
model.summary()

# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)

# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# save model and architecture to single file
model.save("model.h5")
print("Saved model to disk")

Output -

2.2.0
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 12)                108
_________________________________________________________________
dense_1 (Dense)              (None, 8)                 104
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 9
=================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
_________________________________________________________________
accuracy: 76.43%
Saved model to disk

Load the model and use it for prediction -

# load and evaluate a saved model
import tensorflow as tf
from numpy import loadtxt
from tensorflow.keras.models import load_model

# load model
model = load_model('model.h5')

# summarize model
model.summary()

# LOAD THE NEW DATASET HERE
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# PREDICT 
score = model.predict(X,verbose=0)
print(score.shape)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 12)                108       
_________________________________________________________________
dense_1 (Dense)              (None, 8)                 104       
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 9         
=================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
_________________________________________________________________
(768, 1)

Hope this answers your question. Happy learning.

Can I assume there is more code than what is shown? Please share enough for us to reproduce the issue. For example, what is "model"?
Yes, I have edited the post now, you can see it.
Try model.save('model.h5').
Yes, it worked. It created a file named 'model.h5'. Can you suggest how I can use this 'model.h5' file for prediction?
@shrey Hope we have answered your question. If you are satisfied with the answer, please accept and upvote it.
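
To address the last comment, here is a minimal sketch of using the saved 'model.h5' file for a prediction on a single image; the file names are hypothetical, and the preprocessing (resizing to 150x150 and dividing by 255) mirrors the question's pipeline:

import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

model = load_model('model.h5')                     # rebuild the trained classifier from disk

img = image.load_img('example.jpg', target_size=(150, 150))   # hypothetical input image
x = image.img_to_array(img) / 255.0                # same scaling used for the test images
x = np.expand_dims(x, axis=0)                      # batch of one: shape (1, 150, 150, 3)

probs = model.predict(x)                           # softmax scores over the 6 classes
print(np.argmax(probs, axis=1))                    # index of the predicted class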