Python: Why does my TensorFlow Keras model output strange loss and accuracy values during training?


I trained a custom text classifier in TensorFlow using Python, to classify sentences as either questions or sentences containing information, with the following code:

import tensorflow as tf
from tensorflow import keras


from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

text = ""
with open("/content/train_new.txt") as source:
  for line in source.readlines():
    text = text + line

print("text: " + text)

sentences = []
labels = []

for item in text.split("<n>"):
  parts = item.split("<t>")
  print(parts)
  sentences.append(parts[0])
  labels.append(parts[1])

print(sentences)
print(labels)

print("----")

train_test_split_percentage = 80

training_size = round((len(sentences)/100)*train_test_split_percentage)

print("training size: " + str(training_size) + " of " + str(len(labels)))

training_sentences = sentences[0:training_size]
testing_sentences = sentences[training_size:]

training_labels = labels[0:training_size]
testing_labels = labels[training_size:]

vocab_size = 100
max_length = 10

tokenizer = Tokenizer(num_words = vocab_size, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)

word_index = tokenizer.word_index

training_sequences = tokenizer.texts_to_sequences(training_sentences)
training_padded = pad_sequences(training_sequences, maxlen=max_length, padding="post", truncating="post")

testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding="post", truncating="post")

# convert training & testing data into numpy array
# Need this block to get it to work with TensorFlow 2.x
import numpy as np
training_padded = np.array(training_padded)
training_labels = np.asarray(training_labels).astype('float32').reshape((-1,1))
testing_padded = np.array(testing_padded)
testing_labels = np.asarray(testing_labels).astype('float32').reshape((-1,1))

# defining the model
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 24, input_length=max_length),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(24, activation='relu'),
    tf.keras.layers.Dense(1, activation='softmax')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])

# training the model
num_epochs = 1000
history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=2)
The train_new.txt file contains the training data in the format text<t>class_num, with entries separated by <n> (each sentence and its class label are separated by <t>, as in the parsing code above).
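For illustration only, a hypothetical train_new.txt in that format (with 0/1 labels, which is what the binary cross-entropy loss below expects; the actual mapping of labels to question/information depends on the data) could look like:

Is this product available in blue<t>1<n>Shipping usually takes three to five days<t>0<n>What time does the store open<t>1<n>The store opens at nine in the morning<t>0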

When trying to make a prediction with the model.predict() function, it always outputs

[[1.]]

What is wrong with my code?

tf.keras.layers.Dense(1, activation='sigmoid')

If you are doing binary classification, you should use sigmoid as the activation. However,

tf.keras.layers.Dense(2, activation='softmax')

is also correct in terms of the output probabilities.

The outputs of softmax always sum to one. That is why you get 1 as the output every time: with a single softmax unit, that one output is forced to be exactly 1.
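As a minimal sketch of the binary setup (reusing the question's layer sizes and hyperparameters, and assuming the labels are 0/1 as in the question's preprocessing), the model definition would become:

import tensorflow as tf

vocab_size = 100
max_length = 10

# Binary classification: one output unit with a sigmoid activation.
# A single-unit softmax layer always outputs 1.0, which is why every
# prediction came back as [[1.]].
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 24, input_length=max_length),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(24, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

With this layer, model.predict() returns a probability between 0 and 1 for each sentence, which can be thresholded at 0.5 to choose a class.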

I want to add two more classes to my model, four classes in total. What do I need to change in my model to make it work?

The last layer should be tf.keras.layers.Dense(number_of_classes, activation='softmax'), and your loss should also be a categorical loss.

That outputs

logits and labels must have the same shape ((None, 4) vs (None, 1))

but why? Sorry for these questions, I am completely new to Keras and TensorFlow.

The labels depend on your dataset; you cannot add classes at will. When your dataset has 4 distinct classes you can use the layer above; otherwise, for binary classification, your output Dense layer should have 1 neuron with a binary cross-entropy loss. See the tf.keras.layers.Dense(2, activation='softmax') variant above.
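On the shape error mentioned in the comments: a 4-unit softmax output produces predictions of shape (None, 4), while the integer labels from the question's preprocessing have shape (None, 1). A minimal sketch of a working 4-class setup, assuming the labels in train_new.txt are the integers 0 through 3, pairs the softmax output with a sparse categorical loss (plain categorical_crossentropy would instead require one-hot labels of shape (None, 4)):

import numpy as np
import tensorflow as tf

num_classes = 4
vocab_size = 100
max_length = 10

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 24, input_length=max_length),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(24, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])

# sparse_categorical_crossentropy accepts integer class ids of shape (N,) or (N, 1);
# categorical_crossentropy would need one-hot vectors of shape (N, num_classes).
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Hypothetical toy data, only to illustrate the expected shapes.
x = np.random.randint(0, vocab_size, size=(8, max_length))
y = np.random.randint(0, num_classes, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)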