Python: Training accuracy increases, then occasionally drops suddenly. How to fix? [Keras] [TensorFlow backend]

Tags: python, tensorflow, keras, neural-network, deep-learning

I am doing binary classification.

While training my model, the training accuracy improves, but at certain epochs it suddenly drops. Below is a picture to illustrate this. What am I doing wrong? Why does this happen, what is the cause, and how can I fix it?

In addition, the training accuracy and validation accuracy (especially the validation accuracy) are close to 1 (100%) most of the time, already in the early epochs. Why? Is that good or bad? I don't think it is, am I right?
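Could it simply be that the labels are skewed? A minimal check, assuming the y Series produced by the labeling code below (this check is not part of my script):

#Check how the 0/1 labels are distributed; a heavily skewed split
#(e.g. 90% one class) makes high accuracy trivially achievable.
print(y.value_counts(normalize=True))
#Accuracy of always predicting the majority class, as a baseline:
print('Majority-class baseline:', y.value_counts(normalize=True).max())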

Here is the data:

'Gewicht' is the output, which I convert to 1s and 0s in the code below.

The code below is what I have tried:

# -*- coding: utf-8 -*-
"""
Created on Fri Oct 18 15:44:44 2019

@author: Shahbaz Shah Syed
"""

#Import the required Libraries
from sklearn.metrics import confusion_matrix, precision_score
from sklearn.model_selection import train_test_split
from keras.layers import Dense,Dropout
from keras.models import Sequential
from keras.regularizers import l2
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

##EXTRACT THE DATA AND SPLIT INTO TRAINING AND TEST SETS----------------------

Input = 'DATA_Gewicht.xlsx'
Tabelle = pd.read_excel(Input,names=['Plastzeit Z [s]','Massepolster [mm]',
                                'Zylind. Z11 [°C]','Entformen[s]',
                                'Nachdr Zeit [s]','APC+ Vol. [cm³]',
                                'Energie HptAntr [Wh]','Fläche WkzDr1 [bar*s]',
                                'Fläche Massedr [bar*s]',
                                'Fläche Spritzweg [mm*s]', 'Gewicht'])

Gewicht = Tabelle['Gewicht']


#Set the tolerance
toleranz = 0.5

#Acceptable range for Gewicht
Gewicht_mittel = Gewicht.mean()
Gewicht_abw = Gewicht.std()
Gewicht_tol = Gewicht_abw*toleranz

Gewicht_OG = Gewicht_mittel+Gewicht_tol
Gewicht_UG = Gewicht_mittel-Gewicht_tol


#Assign each Gewicht value to good (1) or bad (0)
G = []
for element in Gewicht:
    if element > Gewicht_OG or element < Gewicht_UG:
        G.append(0)
    else:
        G.append(1)      
G = pd.DataFrame(G)
G=G.rename(columns={0:'Gewicht_Value'})
Gewicht = pd.concat([Gewicht, G], axis=1)

#extracting columns from sheets
Gewicht_Value = Gewicht['Gewicht_Value']



x = Tabelle.drop(columns=['Gewicht'])
y = Gewicht_Value

#Split the train and test/validation set
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size=0.10, random_state=0)
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)


##Creating a Neural Network----------------------------------------------------

#define and use a Sequential model
model = Sequential() #Sequential model is a linear stack of layers

#Hidden Layer-1/Input Layer
model.add(Dense(200,activation='relu',input_dim=10,kernel_regularizer=l2(0.01))) #adding a layer
model.add(Dropout(0.3, noise_shape=None, seed=None))
#Hidden Layer-2
model.add(Dense(200,activation = 'relu',kernel_regularizer=l2(0.01)))
model.add(Dropout(0.3, noise_shape=None, seed=None))
#Output layer
model.add(Dense(1,activation='sigmoid'))

#Compile the Model
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])

#Check the Model summary
model.summary()
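
#(Not part of my script: if the sudden drops persist, would a smaller
#learning rate help? A hedged variant, using the old standalone-Keras
#Adam signature:)
#from keras.optimizers import Adam
#model.compile(loss='binary_crossentropy', optimizer=Adam(lr=1e-4),
#              metrics=['accuracy'])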


##TRAINING the Neural Network--------------------------------------------------

#Train the Model
model_output = model.fit(x_train,y_train,epochs=500,batch_size=20,verbose=1,validation_data=(x_test,y_test))
#Note: these are means over all 500 epochs, not the final-epoch values
print('Training Accuracy : ' , np.mean(model_output.history['accuracy']))
print('Validation Accuracy : ' , np.mean(model_output.history['val_accuracy']))


##CHECKING PREDICTION----------------------------------------------------------

#Do a Prediction and check the Precision
y_pred = model.predict(x_test)
#Round the sigmoid outputs to hard 0/1 class labels
rounded = [round(x[0]) for x in y_pred]
y_pred1 = np.array(rounded,dtype='int64')
print(confusion_matrix(y_test,y_pred1))
print(precision_score(y_test,y_pred1))
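
#Optional, not in my original script: a fuller per-class breakdown
#than precision alone, via sklearn's classification_report.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred1))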


#Plot training & validation accuracy values over epochs
plt.plot(model_output.history['accuracy'])
plt.plot(model_output.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

# Plot training & validation loss values
plt.plot(model_output.history['loss'])
plt.plot(model_output.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
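
Would scaling the inputs help? The columns are raw process measurements on very different scales (seconds, bar*s, Wh, cm³), and unscaled inputs are a common cause of unstable training curves. A minimal sketch using sklearn's StandardScaler (this step is not in my script above):

from sklearn.preprocessing import StandardScaler

#Fit the scaler on the training data only, then reuse its statistics
#on the test data so nothing leaks from the test set.
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_test_scaled = scaler.transform(x_test)

#Training would then use the scaled arrays, e.g.:
#model_output = model.fit(x_train_scaled, y_train, epochs=500, batch_size=20,
#                         verbose=1, validation_data=(x_test_scaled, y_test))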
