Serializing a Keras model with an embedding layer in TensorFlow


I have trained a model that uses pre-trained word embeddings, like so:

import numpy as np
from keras.models import Model
from keras.layers import (Input, Embedding, Conv1D, LSTM, GaussianNoise,
                          Dense, Dropout, concatenate)
from keras.initializers import Constant

embedding_matrix = np.zeros((vocab_size, 100))
for word, i in text_tokenizer.word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector

embedding_layer = Embedding(vocab_size,
                        100,
                        embeddings_initializer=Constant(embedding_matrix),
                        input_length=50,
                        trainable=False)
sequence_input = Input(shape=(50,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
text_cnn = Conv1D(filters=5, kernel_size=5, padding='same', activation='relu')(embedded_sequences)
text_lstm = LSTM(500, return_sequences=True)(embedded_sequences)


char_in = Input(shape=(50, 18, ))
char_cnn = Conv1D(filters=5, kernel_size=5, padding='same', activation='relu')(char_in)
char_cnn = GaussianNoise(0.40)(char_cnn)
char_lstm = LSTM(500, return_sequences=True)(char_in)



merged = concatenate([char_lstm, text_lstm]) 

merged_d1 = Dense(800, activation='relu')(merged)
merged_d1 = Dropout(0.5)(merged_d1)

text_class = Dense(len(y_unique), activation='softmax')(merged_d1)
model = Model([sequence_input,char_in], text_class)
When I convert the model to JSON, I get the following error:

ValueError: can only convert an array of size 1 to a Python scalar
Similarly, if I use the model.save() function, it appears to save correctly, but when I go to load it I get:

TypeError: Expected Float32

My question is: am I missing something when trying to serialize this model? Do I need some kind of Lambda layer or similar?

Any help would be greatly appreciated.

You can use the weights argument of the Embedding layer to provide the initial weights:

embedding_layer = Embedding(vocab_size,
                            100,
                            weights=[embedding_matrix],
                            input_length=50,
                            trainable=False)
After saving and loading, the weights stay non-trainable:

model.save('1.h5')
m = load_model('1.h5')
m.summary()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_3 (InputLayer)            (None, 50)            0
__________________________________________________________________________________________________
input_4 (InputLayer)            (None, 50, 18)        0
__________________________________________________________________________________________________
embedding_1 (Embedding)         (None, 50, 100)       1000000    input_3[0][0]
__________________________________________________________________________________________________
lstm_4 (LSTM)                   (None, 50, 500)       1038000    input_4[0][0]
__________________________________________________________________________________________________
lstm_3 (LSTM)                   (None, 50, 500)       1202000    embedding_1[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 50, 1000)      0          lstm_4[0][0]
                                                                 lstm_3[0][0]
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 50, 800)       800800     concatenate_2[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 50, 800)       0          dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 50, 15)        12015      dropout_2[0][0]
==================================================================================================
Total params: 4,052,815
Trainable params: 3,052,815
Non-trainable params: 1,000,000
__________________________________________________________________________________________________
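As a sanity check, here is a minimal round-trip sketch with toy sizes (the variable names and file name are placeholders). Note that the weights argument shown above is a Keras 2 idiom; the equivalent used here builds the layer first and injects the matrix with set_weights(), which also works on current Keras versions:

```python
import numpy as np
import tensorflow as tf

# Toy dimensions (assumptions for the sketch).
vocab_size, dim, seq_len = 10, 4, 5
embedding_matrix = np.random.rand(vocab_size, dim).astype('float32')

inp = tf.keras.Input(shape=(seq_len,), dtype='int32')
emb_layer = tf.keras.layers.Embedding(vocab_size, dim, trainable=False)
out = tf.keras.layers.Flatten()(emb_layer(inp))
model = tf.keras.Model(inp, out)
# The layer is built by the call above, so the pre-trained matrix
# can be injected directly.
emb_layer.set_weights([embedding_matrix])

model.save('toy_embedding.h5')
loaded = tf.keras.models.load_model('toy_embedding.h5')

# The frozen embedding weights survive the save/load round trip.
assert np.allclose(loaded.layers[1].get_weights()[0], embedding_matrix)
assert loaded.layers[1].trainable is False
```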

I assume you saved the model after compiling it. Something like:

    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
To save the model, you can do:

    from keras.models import load_model

    model.save('model.h5')
    model = load_model('model.h5')
    model_json = model.to_json()
    with open("model.json", "w") as json_file:
        json_file.write(model_json)
To load the model:

    from keras.models import model_from_json

    json_file = open('model.json', 'r')
    model_json = json_file.read()
    model = model_from_json(model_json)
    model.load_weights("model.h5")
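The same architecture-as-JSON plus weights-file round trip can be verified end to end; this small self-contained sketch (the toy model and file names are assumptions) checks that the restored model reproduces the original's predictions exactly:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import model_from_json

# A toy model standing in for the real one.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(2, activation='relu'),
])

# Save architecture (JSON) and weights (HDF5) separately.
model.save_weights('roundtrip.weights.h5')
with open('roundtrip.json', 'w') as f:
    f.write(model.to_json())

# Rebuild from JSON, then restore the weights.
with open('roundtrip.json') as f:
    clone = model_from_json(f.read())
clone.load_weights('roundtrip.weights.h5')

x = np.random.rand(4, 3).astype('float32')
assert np.allclose(model.predict(x, verbose=0), clone.predict(x, verbose=0))
```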

I tried multiple approaches. The problem is that pickle does not work with an embedding layer and cannot save the model. So when you have layers like these, here is what you can do:

## Creating model
embedding_vector_features=100
model=Sequential()
model.add(Embedding(voc_size,embedding_vector_features,input_length=sent_length))
model.add(LSTM(100))
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
print(model.summary())
Then you can save the model to a file with the .h5 extension and convert it to JSON:

from tensorflow.keras.models import load_model
model.save('model.h5')
model = load_model('model.h5')
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
And this is used to load the model back, here as model2:

from tensorflow.keras.models import model_from_json
json_file = open('model.json', 'r')
model_json = json_file.read()
model2 = model_from_json(model_json)
model2.load_weights("model.h5")

Additionally, you should use a ModelCheckpoint callback to save the model, because model.save() does not save the best iteration, only the last one, which may not always be the best. So save the best weights with a checkpoint and use model.to_json() to save the model architecture.
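A minimal sketch of that checkpoint-plus-JSON workflow (the toy model, random data, and file names are assumptions): ModelCheckpoint with save_best_only=True keeps only the weights from the epoch with the lowest validation loss, and the architecture is stored separately as JSON.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam')

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'best.weights.h5',       # where the best weights are written
    monitor='val_loss',      # metric that defines "best"
    save_best_only=True,     # overwrite only when val_loss improves
    save_weights_only=True)

# Random toy data just to drive a few training epochs.
x = np.random.rand(32, 4).astype('float32')
y = np.random.randint(0, 2, size=(32, 1)).astype('float32')
model.fit(x, y, validation_split=0.25, epochs=3,
          callbacks=[checkpoint], verbose=0)

# Rebuild the architecture from JSON and load the best weights into it.
restored = tf.keras.models.model_from_json(model.to_json())
restored.load_weights('best.weights.h5')
```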