
Keras problem with a dual-output neural network in Python


I'm using Keras in Python and I'm running into a problem: when I run the code below, I usually get one of two accuracy results, either 10% or 90%.

import numpy as np
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense

# load the 300x5 dataset and scale it by its global L2 norm
ler = loadtxt(r'C:\Users\Mateus\Desktop\Nova\artigo.csv')
ler_norm = ler / np.sqrt(np.sum(ler**2))

# first 3 columns are the inputs, last 2 columns are the outputs
entrada = ler_norm[:, 0:3]
saida = ler[:, 3:5]

model = Sequential()
model.add(Dense(units=3, input_dim=3, activation='relu'))
model.add(Dense(units=2, activation='sigmoid'))
model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
model.fit(entrada, saida, epochs=100, batch_size=10)

_, accuracy = model.evaluate(entrada, saida)
print('Accuracy: {:.2f}%'.format(accuracy * 100))
Some of the values used in "entrada" and "saida" (the original database is 300x5):

The final epochs of training:

Epoch 85/100
300/300 [==============================] - 0s 177us/step - loss: 77.1145 - acc: 0.4867
Epoch 86/100
300/300 [==============================] - 0s 167us/step - loss: 77.1126 - acc: 0.5400
Epoch 87/100
300/300 [==============================] - 0s 157us/step - loss: 77.1108 - acc: 0.5600
Epoch 88/100
300/300 [==============================] - 0s 159us/step - loss: 77.1091 - acc: 0.6200
Epoch 89/100
300/300 [==============================] - 0s 167us/step - loss: 77.1073 - acc: 0.6733
Epoch 90/100
300/300 [==============================] - 0s 171us/step - loss: 77.1057 - acc: 0.5333
Epoch 91/100
300/300 [==============================] - 0s 157us/step - loss: 77.1040 - acc: 0.4600
Epoch 92/100
300/300 [==============================] - 0s 164us/step - loss: 77.1024 - acc: 0.5333
Epoch 93/100
300/300 [==============================] - 0s 176us/step - loss: 77.1008 - acc: 0.4800
Epoch 94/100
300/300 [==============================] - 0s 160us/step - loss: 77.0992 - acc: 0.5400
Epoch 95/100
300/300 [==============================] - 0s 150us/step - loss: 77.0977 - acc: 0.6067
Epoch 96/100
300/300 [==============================] - 0s 166us/step - loss: 77.0962 - acc: 0.5133
Epoch 97/100
300/300 [==============================] - 0s 168us/step - loss: 77.0947 - acc: 0.5400
Epoch 98/100
300/300 [==============================] - 0s 150us/step - loss: 77.0933 - acc: 0.4067
Epoch 99/100
300/300 [==============================] - 0s 164us/step - loss: 77.0919 - acc: 0.5267
Epoch 100/100
300/300 [==============================] - 0s 166us/step - loss: 77.0905 - acc: 0.5067
Does anyone know what's wrong?

Thanks in advance.

You are trying to predict a continuous value (around 80), but with a sigmoid activation the output is squashed between 0 and 1. Try a linear or relu activation on the output layer instead:

model.add(Dense(units=2, activation='linear'))
# or
model.add(Dense(units=2, activation='relu'))
Also, accuracy makes no sense as a metric in a regression problem; try changing the metric to something like mae:

model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['mae'])
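For reference, mae is just the average absolute gap between predictions and targets; a minimal sketch with made-up values (not the asker's data):

```python
# mean absolute error computed by hand on hypothetical values
y_true = [80.0, 75.0, 90.0, 85.0]
y_pred = [78.0, 77.0, 92.0, 83.0]

mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(mae)  # 2.0 -> on average, predictions miss the true value by 2.0
```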

Thanks, Thibault. You're right, I had forgotten that (using the accuracy metric in regression). I tried changing sigmoid to 'relu' or 'linear' and the metric to 'mae', and got an error of 54.1555 at the last epoch, but I don't know how to interpret that value, whether it is good or bad. Would it be useful to normalize the database and then use sigmoid, or would I lose important information that way? One last thing: I mentioned it in the question but forgot to update the code, I'm using Keras with two outputs. Just in case, since it might change your reading of my problem.

Here you will find information on why sigmoid is a bad idea: You can interpret it like this: it is how far your model's predictions miss on average, so here your model "misses" the correct value by about ±54. You therefore want to minimize this value, the lower the better. Try reading up on losses and metrics for regression; you can also try MSE as both loss and metric, and increase your number of epochs. The two outputs change nothing here. Try changing all activations to linear or relu and see which performs best.
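On the normalization question raised in the comments (a hedged sketch with made-up values, one common alternative rather than the thread's own code): dividing the whole matrix by a single global L2 norm, as in the original code, shrinks every feature by the same factor, whereas per-column min-max scaling maps each input feature to [0, 1] independently and preserves the structure within each column.

```python
import numpy as np

# made-up 4x3 input matrix standing in for 'entrada'
X = np.array([[1.0, 10.0, 100.0],
              [2.0, 20.0, 200.0],
              [3.0, 30.0, 300.0],
              [4.0, 40.0, 400.0]])

# per-column min-max scaling: each feature mapped to [0, 1] on its own
X_min = X.min(axis=0)
X_max = X.max(axis=0)
X_scaled = (X - X_min) / (X_max - X_min)

print(X_scaled.min(axis=0))  # [0. 0. 0.]
print(X_scaled.max(axis=0))  # [1. 1. 1.]
```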