
Python: Keras model loaded in Flask always predicts the same class


Something strange is happening to me. I trained a sentiment analysis model with Keras as follows:

import _pickle

import keras
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import LabelEncoder

max_fatures = 2000
tokenizer = Tokenizer(num_words=max_fatures, split=' ')
tokenizer.fit_on_texts(data)
X = tokenizer.texts_to_sequences(data)
X = pad_sequences(X)

with open('tokenizer.pkl', 'wb') as fid:
    _pickle.dump(tokenizer, fid)

le = LabelEncoder()
le.fit(["pos", "neg"])
y = le.transform(data_labels)
y = keras.utils.to_categorical(y)

embed_dim = 128
lstm_out = 196

model = Sequential()
model.add(Embedding(max_fatures, embed_dim, input_length=X.shape[1]))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(lstm_out, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

batch_size = 32
model.fit(X, y, epochs=10, batch_size=batch_size, verbose=2)

model.save('deep.h5')
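For reference, `LabelEncoder` maps the string labels to integer ids in sorted class order and `to_categorical` one-hot encodes them. A minimal pure-Python sketch of that mapping (an illustration of the behavior, not the sklearn/Keras internals):

```python
labels = ["pos", "neg", "pos"]

# LabelEncoder assigns ids in sorted class order: 'neg' -> 0, 'pos' -> 1
classes = sorted(set(labels))
encoded = [classes.index(label) for label in labels]   # [1, 0, 1]

# to_categorical turns each integer id into a one-hot row
one_hot = [[1 if i == e else 0 for i in range(len(classes))] for e in encoded]
print(one_hot)  # [[0, 1], [1, 0], [0, 1]]
```

Note that with the two labels "pos" and "neg" this produces two columns, so the unit count of the final `Dense` layer has to match the width of `y`.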
When I load it in another Python file, everything works fine. But when I load it in my Flask web app, every prediction comes out as the positive class. What is going wrong? Here is the code I use in the Flask web app:

import _pickle

import tensorflow as tf
from keras import backend as K
from keras.models import load_model
from keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import LabelEncoder

with open('./resources/model/tokenizer.pkl', 'rb') as handle:
    keras_tokenizer = _pickle.load(handle)

K.clear_session()
model = load_model('./resources/model/deep.h5')
model._make_predict_function()
session = K.get_session()
global graph
graph = tf.get_default_graph()
graph.finalize()

stop_words = []

with open('./resources/stopwords.txt', encoding="utf8") as f:
    stop_words = f.read().splitlines()

normalizer = Normalizer()
stemmer = Stemmer()
tokenizer = RegexpTokenizer(r'\w+')


def predict_class(text):
    tokens = tokenizer.tokenize(text)
    temp = ''

    for token in tokens:
        if token in stop_words:
            continue

        token = normalizer.normalize(token)
        token = stemmer.stem(token)
        temp += token + ' '

    if not temp.strip():
        return None

    text = keras_tokenizer.texts_to_sequences(temp.strip())
    text = pad_sequences(text, maxlen=41)

    le = LabelEncoder()
    le.fit(["pos", "neg"])

    with session.as_default():
        with graph.as_default():
            sentiment = model.predict_classes(text)
            return le.inverse_transform(sentiment)[0]
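As an aside, `texts_to_sequences` expects a list of texts; when it is given a bare string, it iterates over the characters instead. A minimal sketch of that behavior (a hypothetical re-implementation for illustration, not the actual Keras code):

```python
# Stand-in word index, like a fitted Tokenizer holds in word_index
word_index = {"good": 1, "movie": 2}

def texts_to_sequences(texts):
    # Iterates over `texts`: each element is treated as one document,
    # split into words, and mapped through the word index.
    return [[word_index[w] for w in text.split() if w in word_index]
            for text in texts]

print(texts_to_sequences(["good movie"]))  # [[1, 2]] - one sequence for one text
print(texts_to_sequences("good movie"))    # ten sequences, one per character, all empty
```

With a bare string, every character becomes its own "document", so almost all the resulting sequences are empty and the padded input is mostly zeros.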

You are saving the model architecture, but not its weights.

Given that you are using Keras and its tokenizer, the most reliable way I have found to load and reuse your model is to use a JSON representation for both the architecture and the tokenizer, and an h5 file for the weights:

def save(model):
    # Save the trained weights
    model.save_weights('model_weights.h5')

    # Save the model architecture
    with open('model_architecture.json', 'w') as f:
        f.write(model.to_json())

    # Save the tokenizer
    with open('tokenizer.json', 'w') as f:
        f.write(tokenizer.to_json())
Then, in your Flask app, load them back like this:

def models():
    with open('models/tokenizer.json') as f:
        tokenizer = tokenizer_from_json(f.read())

    # Model reconstruction from JSON file
    with open('models/model_architecture.json', 'r') as f:
        model = model_from_json(f.read())

    # Load weights into the new model
    model.load_weights('models/model_weights.h5')

    return model, tokenizer
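To verify the reload actually worked, it helps to compare the weights of the original and the reconstructed model: `model.get_weights()` returns a list of NumPy arrays, so they can be checked pairwise. A sketch with stand-in arrays (the variable names here are placeholders, not from the original code):

```python
import numpy as np

# Stand-ins for original_model.get_weights() and reloaded_model.get_weights()
saved_weights = [np.array([[0.1, 0.2], [0.3, 0.4]]), np.array([0.5, 0.6])]
loaded_weights = [np.array([[0.1, 0.2], [0.3, 0.4]]), np.array([0.5, 0.6])]

identical = all(np.array_equal(a, b)
                for a, b in zip(saved_weights, loaded_weights))
print(identical)  # True when the weights round-tripped correctly
```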


Yes, I had the same problem, although in my case the predictions were correct. I think the ".h5" file with the model architecture and weights is not enough: you also need the tokenizer, because it holds the word index of all the unique tokens your model works with.

So I strongly recommend the answer above by [Eudald Arranz](https://stackoverflow.com/users/11153431/eudald-arranz): save the weights and the model architecture in JSON format, because that is what worked for me.


Thanks, Eudald

I tried this method, but I still only get the positive class from the model.

Hmm, that's strange; I am using this approach in one of my projects right now and it works fine. Did you save the weights after training the model (model.fit)? Try printing model.get_weights() after loading the weights and check whether they were loaded correctly!