Python: Can't get a CNN to do multi-class classification. Throws "logits and labels must have the same shape"

Tags: python, tensorflow, conv-neural-network, multiclass-classification

I have tested a CNN that works fine for binary classification, but when I change the output layer from 1 to 5 neurons and give it data with 5 labels, it throws: ValueError: logits and labels must have the same shape ((None, 5) vs (None, 1))

Each sample is a 190x8 matrix, and its label is an integer between 0 and 4.
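The shapes in the error message can be reproduced without running the model; a NumPy-only sketch of the mismatch (batch size chosen arbitrarily, shapes assumed from the description above):

```python
import numpy as np

batch = 8
softmax_out = np.zeros((batch, 5))        # what a Dense(5, softmax) layer emits
int_labels = np.zeros((batch, 1))         # integer labels 0-4 kept as a column
onehot_labels = np.eye(5)[np.random.randint(0, 5, batch)]  # one-hot encoded

# categorical_crossentropy compares predictions and labels element-wise,
# so their shapes must match:
print(softmax_out.shape, int_labels.shape)     # (8, 5) (8, 1)  -> the ValueError
print(softmax_out.shape, onehot_labels.shape)  # (8, 5) (8, 5)  -> OK
```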

My code looks like this:

testingData = loadmat('C:/Users/timwa/Desktop/Sundhedsteknologi/10.Semester/SegmentedData/Rune/Samlet/RuneStaticData19.mat')  # For small datasets
arr1 = np.array(testingData['finalData'])
tempX1 = np.array([x for x in arr1[:, 0]])  # This gives the data
tempX1 = np.array([x for x in tempX1])

y1 = arr1[:, 1]  # This gives the labels
y1 = y1.reshape(len(y1), 1)  # reshape returns a new array, so assign the result

X_train, X_test, y_train, y_test = train_test_split(tempX1, y1, test_size=0.25, random_state=1)
X_train = np.asarray(X_train).astype('float32')
X_test = np.asarray(X_test).astype('float32')
y_train = np.asarray(y_train).astype('float32')
y_test = np.asarray(y_test).astype('float32')

hpKernelSize = 3
hpBatchsize = 64
hpEpochs = 50 
hpPatience = 5 
hpInitialLearningRate = 0.001
hpmaxconvfilters = 32
hpPoolSize = 2
hpLRDecreaseOnPlateau = 0.1

earlyStopping = tensorflow.keras.callbacks.EarlyStopping(monitor='val_loss', patience=hpPatience)
reduceLROnPlateau = tensorflow.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
factor=hpLRDecreaseOnPlateau, patience=math.ceil(hpPatience/2), verbose=1)
callbacks_list = [earlyStopping, reduceLROnPlateau]

inputShape = X_train.shape[1:3]
inputs = keras.Input(shape=inputShape)

x = layers.Conv1D(filters=hpmaxconvfilters, kernel_size=hpKernelSize, bias_initializer='zeros',
kernel_initializer='normal', activation='relu', name='Conv1')(inputs)
x = layers.Conv1D(filters=hpmaxconvfilters, kernel_size=hpKernelSize, bias_initializer='zeros',
kernel_initializer='normal', activation='relu', name='Conv2')(x)
x = layers.MaxPooling1D(pool_size=hpPoolSize, name='MaxPool1')(x)
x = layers.Conv1D(filters=hpmaxconvfilters, kernel_size=hpKernelSize, bias_initializer='zeros',
kernel_initializer='normal', activation='relu', name='Conv3')(x)
x = layers.Conv1D(filters=hpmaxconvfilters, kernel_size=hpKernelSize, bias_initializer='zeros',
kernel_initializer='normal', activation='relu', name='Conv4')(x)
x = layers.MaxPooling1D(pool_size=hpPoolSize, name='MaxPool2')(x)
x = layers.Flatten()(x)
x = layers.Dense(44, bias_initializer='zeros', kernel_initializer='normal', activation='relu',
name='Dense1')(x)
outputs = layers.Dense(5, bias_initializer='zeros', kernel_initializer='normal',
activation='softmax', name='OutputLayer')(x)

model = keras.Model(inputs=inputs, outputs=outputs, name="CNN")
model.summary()

model.compile(loss=tensorflow.keras.losses.categorical_crossentropy,
              optimizer=tensorflow.keras.optimizers.Adam(learning_rate=hpInitialLearningRate),
              metrics=['accuracy',
                       tensorflow.keras.metrics.TruePositives(),
                       tensorflow.keras.metrics.TrueNegatives(),
                       tensorflow.keras.metrics.FalsePositives(),
                       tensorflow.keras.metrics.FalseNegatives()])
model.fit(X_train, y_train, epochs=hpEpochs, verbose=1, batch_size=hpBatchsize,
          callbacks=callbacks_list, validation_data=(X_test, y_test))

Any suggestions would be greatly appreciated.

You specified the loss as categorical_crossentropy in model.compile. That means your labels y_train and y_test must be one-hot encoded, and I don't see that happening anywhere in your code. So either one-hot encode the labels or, since the labels are integers, change the loss function to sparse_categorical_crossentropy. Below is an example of converting integer labels to one-hot encoded labels with tf.one_hot:

import tensorflow as tf
import numpy as np

a = np.array([1, 0, 3])
depth = 4
b = tf.one_hot(a, depth)
# <tf.Tensor: shape=(3, 4), dtype=float32, numpy=
# array([[0., 1., 0., 0.],
#        [1., 0., 0., 0.],
#        [0., 0., 0., 1.]], dtype=float32)>
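The same encoding can also be done in plain NumPy before training; the `one_hot` function below is a hand-rolled sketch, not a Keras utility:

```python
import numpy as np

def one_hot(labels, depth):
    # Row i of the identity matrix np.eye(depth) is the one-hot vector
    # for class i, so fancy-indexing by the label array one-hot encodes
    # the whole batch at once.
    return np.eye(depth, dtype=np.float32)[np.asarray(labels, dtype=int)]

y = np.array([1, 0, 3])
print(one_hot(y, 4))
# [[0. 1. 0. 0.]
#  [1. 0. 0. 0.]
#  [0. 0. 0. 1.]]
```

(Keras also ships `tf.keras.utils.to_categorical`, which does the same conversion.)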

The code works with 1 neuron in the output layer, but the loss is 0 and the accuracy is very low.
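That behaviour is consistent with the loss being mis-specified: binary cross-entropy is only defined for labels in {0, 1}, so feeding it integer class labels up to 4 produces numbers that are not a meaningful loss. A rough NumPy illustration (this `binary_crossentropy` is a hand-rolled sketch of the formula, not the Keras implementation):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Standard formula; only meaningful when y_true is 0 or 1.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy(1.0, 0.9))    # valid label: small positive loss
print(binary_crossentropy(4.0, 0.999))  # label 4: large *negative* "loss"
```

With labels outside {0, 1} the (1 - y_true) term flips sign, so the optimizer can drive the reported loss toward or below zero without the model learning anything about the 5 classes, which would leave accuracy near chance as you observed.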