
Python: Understanding Keras predict


I have the following code:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping
import numpy as np
from numpy.random import seed
from tensorflow import random

seed(42)
random.set_seed(43)

X = [
    'may it all be fine in the world',
    'this is not for me',
    'pffff ugly bike',
    'dropping by to say leave me alone',
    'getting sarcastic by now'
    'how would one satisfy his or her needs when the earth is boiling'
]

y = [1,2,4,5,3]

tokenizer = Tokenizer(num_words = 13)
tokenizer.fit_on_texts(X)
X_train_seq = tokenizer.texts_to_sequences(X)


X_train_seq_padded = pad_sequences(X_train_seq, maxlen = 15)

model = Sequential()
model.add(Dense(16, input_dim = 15, activation = 'relu', name = 'hidden-1'))
model.add(Dense(16, activation = 'relu', name = 'hidden-2'))
model.add(Dense(16, activation = 'relu', name = 'hidden-3'))
model.add(Dense(5, activation='softmax', name = 'output_layer'))

model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics=['accuracy'])

class CustomCallback(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print('finished an epoch')
        zin = 'dropping by to say leave her alone'
        zin = tokenizer.texts_to_sequences(zin)
        zin = pad_sequences(zin, maxlen = 15)
        print(model.predict(zin))
        print(np.argmax(model.predict(zin), axis=-1))
callbacks = [EarlyStopping(monitor = 'accuracy', patience = 5, mode = 'max'), CustomCallback()]

from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
y = encoder.fit_transform(y)

history = model.fit(X_train_seq_padded, y, epochs = 100, batch_size = 100, callbacks = callbacks)
Inside the callback I expected

model.predict()

to produce something like this (since there are 5 possible classes), and

np.argmax(model.predict(zin), axis=-1)

to result in a single number: 1, 2, 3, 4, or 5.

但是,我收到的输出(显示一个历元)是:

How should I interpret this, and how do I extract the actual class that the model predicts the sentence belongs to?

print(model.predict(zin)[0])
print(np.argmax(model.predict(zin)[0], axis=-1))
This will give you the correct values.
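One more step is often needed: the argmax is a column index into the one-hot encoding, not one of the original labels. Since LabelBinarizer sorts the classes before encoding (the sorted order is exposed as encoder.classes_), the index can be mapped back through that order. A minimal numpy sketch, where idx is a hypothetical stand-in for the argmax result:

```python
import numpy as np

# LabelBinarizer one-hot encodes labels in sorted order, so an argmax
# column index maps back through that sorted list (encoder.classes_).
y = [1, 2, 4, 5, 3]
classes = np.unique(y)  # same sorted order LabelBinarizer uses: [1 2 3 4 5]
idx = 2                 # e.g. np.argmax(model.predict(zin)[0], axis=-1)
print(classes[idx])     # -> 3
```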


TF models are designed to work on batches, not on single items, so predict gives you a list of outputs. Since your input is a single item, it just pushes that item through the NN n times, hence the list of identical outputs.

How does it determine how many times to push the input? Batch size is one of the parameters; the default is 32. Try changing it to 1.

It doesn't seem to be caused by the batch size: the
len()
of both is 34, and setting the batch size has no effect.

I think you are looking for something like this. Set up the data to predict:

X_test = ['may it all be fine in the world']
tokenizer.fit_on_texts(X_test)
X_test_seq = tokenizer.texts_to_sequences(X_test)
X_test_seq_padded = pad_sequences(X_test_seq, maxlen = 15)

then predict and display the result:

my_labels = [1,2,4,5,3]
print(f"prediction for {X_test} is {my_labels[np.argmax(model.predict(X_test_seq_padded))]}")

Should the same tokenizer be used on all the texts?
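The 34 mentioned in the comments is a clue to the likely real cause: texts_to_sequences expects a list of texts, and a bare string is iterated character by character, so the 34-character sentence becomes 34 separate one-character "texts" and predict returns 34 rows. Plain Python shows the same iteration, no Keras needed:

```python
# texts_to_sequences iterates over its argument; a bare string yields its
# characters, while a one-element list yields the whole sentence.
sentence = 'dropping by to say leave her alone'
print(len(sentence))    # 34 "texts" if the string is passed directly
print(len([sentence]))  # 1 text when wrapped in a list
```

So passing [zin] instead of zin would likely produce the single prediction row the question expects.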