Python AttributeError: 'History' object has no attribute 'predict_classes'

I am trying to create a classifier using Keras, but for some reason I cannot generate class predictions from my test set. For this I am using the following model:

# Imports assumed by the snippets below (not shown in the original post)
import numpy as np
from keras import activations
from keras.models import Model
from keras.layers import (Input, Convolution1D, MaxPool1D, SpatialDropout1D,
                          Flatten, Dropout, Dense, concatenate)

def get_model():          # takes ch1, ch2, y_train
    nclass = 6

    # Channel 1
    ch1_input = Input(shape=X_train_ch1[0].shape)       # (3000, 1)
    signal_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(ch1_input)
    signal_1 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(signal_1)
    signal_1 = MaxPool1D(pool_size=2)(signal_1)
    signal_1 = SpatialDropout1D(rate=0.1)(signal_1)
    signal_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(signal_1)
    signal_1 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(signal_1)
    signal_1 = MaxPool1D(pool_size=2)(signal_1)
    signal_1 = SpatialDropout1D(rate=0.)(signal_1)
    signal_1 = Convolution1D(64, kernel_size=3, activation=activations.relu, padding="valid")(signal_1)
    signal_1 = Convolution1D(64, kernel_size=3, activation=activations.relu, padding="valid")(signal_1)
    flatten_1 = Flatten()(signal_1)

    # Channel 2
    ch2_input = Input(shape=X_train_ch2[0].shape)
    signal_2 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(ch2_input)
    signal_2 = Convolution1D(16, kernel_size=5, activation=activations.relu, padding="valid")(signal_2)
    signal_2 = MaxPool1D(pool_size=2)(signal_2)
    signal_2 = SpatialDropout1D(rate=0.1)(signal_2)
    signal_2 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(signal_2)
    signal_2 = Convolution1D(32, kernel_size=3, activation=activations.relu, padding="valid")(signal_2)
    signal_2 = MaxPool1D(pool_size=2)(signal_2)
    signal_2 = SpatialDropout1D(rate=0.2)(signal_2)
    signal_2 = Convolution1D(64, kernel_size=3, activation=activations.relu, padding="valid")(signal_2)
    signal_2 = Convolution1D(64, kernel_size=3, activation=activations.relu, padding="valid")(signal_2)
    flatten_2 = Flatten()(signal_2)

    # Merge the CNN branches trained on each channel
    merged = concatenate([flatten_1, flatten_2])

    # Output
    dense_1 = Dropout(rate=0.15)(Dense(64, activation=activations.relu, name="dense_1")(merged))
    #dense_1 = Dense(, activation=activations.relu)(dense_1)
    dense_1 = Dropout(rate=0.25)(Dense(32, activation=activations.relu, name="dense_2")(dense_1))
    dense_1 = Dense(nclass, activation=activations.softmax, name="dense_3")(dense_1)

    # Compile model
    model = Model(inputs=[ch1_input, ch2_input], outputs=dense_1)
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    model.summary()
    return model

#  --------------------------- Create train data and model
sequence = Standardise_and_Augment(sequence)  
X_train_ch1, X_train_ch2, X_test_ch1, X_test_ch2, X_test, y_train, y_test = Process_data()  
y_flat = np.argmax(y_train, axis=1) 
model = get_model()

#  ---------------------------  Run model 
ch_model = model.fit([X_train_ch1, X_train_ch2], y_train, epochs=20, batch_size=32,
                     validation_split=0.2, class_weight='auto', shuffle=True)

#  ---------------------------  Get class breakdown
from sklearn.metrics import classification_report

Y_test = np.argmax(y_test, axis=1) # Convert one-hot to index
y_pred = ch_model.predict_classes(X_test)
print(classification_report(Y_test, y_pred))
Running this gives AttributeError: 'History' object has no attribute 'predict_classes'. I know my model history is being stored, because I can generate plots of the model's performance by running the following:

import matplotlib.pyplot as plt

# Plot model accuracy
plt.subplot(2, 1, 1)
plt.plot(ch_model.history['accuracy'])
plt.plot(ch_model.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.show()
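(For reference: depending on the Keras version, the History dict may store these metrics under 'acc'/'val_acc' rather than 'accuracy'/'val_accuracy'; the keys that were actually recorded can be listed directly, for example:)

# List which metric keys this Keras version stored in the History object
print(ch_model.history.keys())   # e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])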
All of the fixes for this error that I have seen so far relate to using a Sequential model, which I am fairly sure I am using.
I would appreciate it if anyone could point out where I am going wrong, or suggest another way of generating y_pred.

Apparently I misunderstood the problem. As noted above, model.fit returns a History object, which therefore cannot be used to make predictions.

model.fit does not return a model instance on which you can call predict, so you are calling predict on the wrong object. The correct approach is:

model.fit([X_train_ch1, X_train_ch2], y_train, epochs=20, batch_size=32,
          validation_split=0.2, class_weight='auto', shuffle=True)

y_pred = model.predict_classes(X_test)

OK, using the given code I now get a slightly different error: AttributeError: 'Model' object has no attribute 'predict_classes'. Any ideas?
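predict_classes is only defined on Sequential models (and was removed from TensorFlow's Keras entirely in later releases), so a functional Model built as above will not have it. A minimal sketch of the usual workaround, assuming X_test_ch1 and X_test_ch2 are the per-channel test arrays returned by Process_data above (note the model expects a list of two inputs, so passing X_test on its own would also fail):

import numpy as np

# Softmax probabilities from the functional model; one array per input branch
y_prob = model.predict([X_test_ch1, X_test_ch2])

# Class indices: the index of the highest probability for each sample
y_pred = np.argmax(y_prob, axis=-1)

print(classification_report(Y_test, y_pred))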