Python: How do I train my neural network correctly?


My neural network is meant to solve a nonlinear problem, but the test loss is very high. When I use a network with no hidden layers, the test loss is lower than with hidden layers, but still high. Does anyone know why? How can I improve the loss?

#data

    train_X = data_in[0:9001, :]
    train_Y = data_out[0:9001, :]
    test_X = data_in[9000:10001, :]
    test_Y = data_out[9000:10001, :]
    n = train_X.shape[1] 
    m = train_X.shape[0]
    d = train_Y.shape[1]  
    l = test_X.shape[0]

#parameters

    trainX = tf.placeholder(tf.float32, [batch_size, n])
    trainY = tf.placeholder(tf.float32, [batch_size, d])
    testX = tf.placeholder(tf.float32, [l, n])
    testY = tf.placeholder(tf.float32, [l, d])
    def multilayer(trainX, h1, h2, hout, b1, b2, bout):
        layer_1 = tf.matmul(trainX, h1) + b1
        layer_1 = tf.nn.sigmoid(layer_1)
        layer_2 = tf.matmul(layer_1, h2) + b2
        layer_2 = tf.nn.sigmoid(layer_2)
        out_layer = tf.matmul(layer_2, hout) + bout
        return out_layer
    h1 = tf.Variable(tf.zeros([n, n_hidden_1]))
    h2 = tf.Variable(tf.zeros([n_hidden_1, n_hidden_2]))
    hout = tf.Variable(tf.zeros([n_hidden_2, d]))
    b1 = tf.Variable(tf.zeros([n_hidden_1]))
    b2 = tf.Variable(tf.zeros([n_hidden_2]))
    bout = tf.Variable(tf.zeros([d]))
    pred = multilayer(trainX, h1, h2, hout, b1, b2, bout)
    predtest = multilayer(testX, h1, h2, hout, b1, b2, bout)
    loss = tf.reduce_sum(tf.pow(pred - trainY, 2)) / (batch_size)
    losstest = tf.reduce_sum(tf.pow(predtest - testY, 2)) / (l)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

# Initializing the variables

    init = tf.global_variables_initializer()
    a = np.linspace(0, m - batch_size, m // batch_size, dtype=np.int32)
    with tf.Session() as sess:
        sess.run(init)
        for i in a:
            x = train_X[i:i + batch_size, :]
            y = train_Y[i:i + batch_size, :]
            for epoch in range(training_epochs):
                sess.run(optimizer, feed_dict={trainX: np.asarray(x), trainY: np.asarray(y)})
                c = sess.run(loss, feed_dict={trainX: np.asarray(x), trainY: np.asarray(y)})
                print("Batch:", '%04d' % (i / batch_size + 1), "Epoch:", '%04d' % (epoch + 1),
                      "loss=", "{:.9f}".format(c))
# Testing
    print("Testing... (Mean square loss Comparison)")
    testing_loss = sess.run(losstest, feed_dict={testX: np.asarray(test_X), testY: np.asarray(test_Y)})
    pred_y_vals = sess.run(predtest, feed_dict={testX: test_X})
    print("Testing loss=", testing_loss)

From what I can see in your training loop, you are iterating over epochs before iterating over batches. This means your network is trained on the same batch many times (`training_epochs` times) before moving on to the next batch, and it never revisits earlier batches.

Intuitively, I would say your network ends up badly overfitting the last batch it saw during training, which explains the high loss at test time.


Swap the two loops in your training and you should be fine.
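The corrected loop order can be sketched in plain Python (here `train_step` is a hypothetical stand-in for the `sess.run(optimizer, feed_dict=...)` call, so the nesting is visible without any TensorFlow machinery):

```python
# Minimal sketch of the corrected loop order: epochs on the outside,
# batches on the inside, so every epoch sees every batch.
def train(num_epochs, num_batches, train_step):
    for epoch in range(num_epochs):        # outer loop: epochs
        for batch in range(num_batches):   # inner loop: batches
            train_step(epoch, batch)       # one optimizer step per batch

# Record the visit order to show each batch is revisited every epoch.
visits = []
train(2, 3, lambda e, b: visits.append((e, b)))
print(visits)  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```

With this ordering, the weights are nudged by every batch in turn each epoch, instead of being fitted exhaustively to one batch at a time.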

I think your training is wrong: you have swapped epochs and batches. That is, you train on the same batch many times, and only then move on to a new batch. As others have already mentioned, your epochs and batches are most likely inverted. Also, a high loss on the test set suggests overfitting. If you still see the same problem after swapping the loops, try regularization.

Can you recommend the best regularization for this network, and how do I apply it in TensorFlow?
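One common choice is L2 (weight-decay) regularization: a penalty proportional to the squared magnitude of the weights is added to the data loss. Here is a NumPy sketch of the idea (the function name `regularized_loss` and the coefficient `lam` are illustrative, not from the original code); in TF1 the equivalent would be adding `lam * (tf.nn.l2_loss(h1) + tf.nn.l2_loss(h2) + tf.nn.l2_loss(hout))` to `loss`, keeping in mind that `tf.nn.l2_loss(w)` computes `sum(w**2) / 2`:

```python
import numpy as np

def regularized_loss(pred, target, weights, lam=1e-3):
    """MSE data term plus an L2 penalty on all weight matrices."""
    mse = np.mean((pred - target) ** 2)             # data term
    penalty = sum(np.sum(w ** 2) for w in weights)  # L2 term: sum of squares
    return mse + lam * penalty

pred = np.array([1.0, 2.0])
target = np.array([1.0, 2.0])          # perfect fit: data term is 0
w = [np.array([[1.0, -1.0]])]          # sum of squares = 2
print(regularized_loss(pred, target, w, lam=0.5))  # 0.0 + 0.5 * 2 = 1.0
```

The penalty discourages large weights, which tends to reduce overfitting; `lam` trades off fit against smoothness and is usually tuned on a validation set.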