Python: predicting a binary class from word embeddings with Keras


I am currently working through an example from Machine Learning Mastery:

Here is my code:

import numpy as np  # needed later when predicting on the padded sample
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Embedding

# define documents
docs = ['Well done!','Good work','Great effort','nice work','Excellent!','Weak','Poor effort!','not good','poor work','Could have done better.']

# define class labels
labels = [1,1,1,1,1,0,0,0,0,0]

vocab_size = 50 
encoded_docs = [one_hot(d, vocab_size) for d in docs]

# The sequences have different lengths, and Keras prefers all inputs to be vectorized
# ...and of the same length, so we pad the documents to a max length of 4 words:
max_length = 4
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
print(padded_docs) 
#Creating the Embedding Layer

# Define the model
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

# summarize the model
print(model.summary())

# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)

# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy*100))
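A note on the encoding step above (not in the original post): despite its name, Keras's `one_hot` is not a true one-hot encoding but the *hashing trick*, so each word is hashed to an integer in `[1, vocab_size)` and distinct words can collide. A rough pure-Python sketch of the idea, using Python's built-in `hash` in place of Keras's internal hash function (an assumption for illustration only):

```python
def hashing_trick(text, vocab_size):
    # Hash each lowercased word into the range [1, vocab_size - 1];
    # 0 is left unused so it can serve as the padding value later.
    return [(hash(word) % (vocab_size - 1)) + 1 for word in text.lower().split()]

encoded = hashing_trick('nice work', 50)  # two integers, one per word
```

Because it is a hash, the same word always maps to the same integer within a run, which is what lets the embedding layer learn a consistent vector per (hashed) word.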
I then created a function to test a sample string, expecting the model to return 0 or 1, but I get a result like this:
[[0.55765963]]

I wrote the function below, but I don't understand the output; I was expecting a 0 or a 1:

sample_string = ['nice work']

def model_builder_predict(sample_string):
    vocab_size = 50
    max_length = 4
    encoded_docs = [one_hot(d, vocab_size) for d in sample_string]
    padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
    model_answer = model.predict(np.array(padded_docs))
    return model_answer

print(model_builder_predict(sample_string))
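The `[[0.55765963]]` above is not wrong output: a single `Dense(1, activation='sigmoid')` unit returns the predicted probability of class 1, and the 0/1 label comes from thresholding that probability. A minimal plain-Python sketch (no Keras needed), assuming the conventional 0.5 cutoff:

```python
def probability_to_label(probability, threshold=0.5):
    # A sigmoid output is P(class == 1); compare it to the threshold
    # to turn it into a hard 0/1 label.
    return 1 if probability > threshold else 0

label = probability_to_label(0.55765963)  # -> 1, i.e. "positive"
```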

Any help would be appreciated.

Not 100% sure about this... but could you try

model_answer = model.predict_classes(np.array(padded_docs))

instead of just

.predict()

? Thanks!
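A side note beyond the original answer: `Sequential.predict_classes` was deprecated and later removed in newer TensorFlow/Keras releases, so on a recent version you threshold the output of `model.predict` yourself. A pure-Python stand-in for what `predict_classes` did with a single-unit sigmoid output (the 0.5 cutoff is the conventional default, an assumption here):

```python
def predict_classes(probabilities, threshold=0.5):
    """Mimic the removed Sequential.predict_classes for a 1-unit sigmoid model.

    `probabilities` is a list of one-element rows, as returned by
    model.predict on a batch, e.g. [[0.55765963]].
    """
    return [[int(p > threshold)] for (p,) in probabilities]

classes = predict_classes([[0.55765963]])  # -> [[1]]
```

With NumPy the equivalent one-liner is `(model.predict(x) > 0.5).astype("int32")`.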