
Python: wrong results when trying to rebuild an autoencoder neural network with tf.GradientTape()


I'm new to TensorFlow; I used to build neural networks with Keras. When I tried to rebuild the training process myself, I ran into problems with the results. The code runs fine and gives a decent result when I don't use any activation function, but as soon as I add a sigmoid or relu to the forward pass, the model gets stuck at every epoch, which goes against my understanding. Please help me find what's wrong.

Here is my code:

```python
import numpy as np
import pandas as pd
from tensorflow.keras.datasets import mnist
import tensorflow as tf
tf.enable_eager_execution()  # TF 1.15: eager execution must be enabled explicitly

# target_shape = 13*13

def sigmoid(x):
    return 1 / (1 + tf.math.exp(-x))  # the sigmoid equation

def relu(x):
    compare = tf.zeros(x.shape)
    # maximum replaces every value below the reference tensor with the reference value
    return tf.math.maximum(compare, x)


def forwardpass(input_data, W1, W2):
    """
    Arguments:
    input_data -- input data of size (n_x, m)
    W1, W2 -- weight matrices of the two layers

    Returns:
    A2 -- the sigmoid output of the second layer
    """
    Z1 = tf.matmul(W1, input_data)
    # A1 = tf.nn.relu(Z1)
    Z2 = tf.matmul(W2, Z1)
    A2 = tf.sigmoid(Z2)

    return A2

def nn_model(input_data, target, hidden, epoches, learning_rate=0.08, print_cost=False):
    input_data = tf.constant(input_data)
    target = tf.constant(target)

    input_shape = input_data.shape[1]
    target_shape = target.shape[1]
    W1 = tf.ones((hidden, input_shape), dtype=tf.float32) * 0.1
    W2 = tf.ones((target_shape, hidden), dtype=tf.float32) * 0.1
    W1 = tf.Variable(W1)
    W2 = tf.Variable(W2)
    # learning_rate = tf.constant(learning_rate)

    for i in range(epoches):
        for j in range(500):
            input_data2 = input_data[j, :]
            # print(input_data2.shape)
            target2 = target[j, :]
            with tf.GradientTape() as tape:
                error = tf.reduce_mean(tf.square(input_data2 - forwardpass(input_data2, W1, W2)), axis=0)

            [W2_back, W1_back] = tape.gradient(error, [W2, W1])
            del tape

            W2.assign(W2 - learning_rate * W2_back)
            # W2 = tf.Variable(W2)
            W1.assign(W1 - learning_rate * W1_back)
            # W1 = tf.Variable(W1)
            # result = tf.reduce_mean(error, axis=0)
            print(error)
    return forwardpass(input_data[0, :], W1, W2).numpy()


(train_data, train_tag), (test_data, test_tag) = mnist.load_data()

train_data = train_data.reshape(-1, 28*28, 1).astype('float32') / 255.
test_data = test_data.reshape(-1, 28*28, 1).astype('float32') / 255.

result = nn_model(train_data, train_data, 800, epoches=500, learning_rate=0.0001, print_cost=False)

result = result.reshape(28, 28)
pd.DataFrame(result).to_csv("result.csv", index=False, header=False)
```
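One detail worth checking (my own observation, not from the original post): in `nn_model` every weight is initialized to the same constant (`tf.ones(...) * 0.1`), so every hidden unit computes the same value and receives the same gradient, and the network cannot break that symmetry once a nonlinearity is added. A minimal NumPy sketch of the effect, using hypothetical shapes that match the post (784 inputs, 800 hidden units):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = rng.random((784, 1)).astype(np.float32)  # one flattened MNIST-sized image

# Constant init, as in the question: every row of W1 is identical,
# so every hidden activation is identical.
W1_const = np.full((800, 784), 0.1, dtype=np.float32)
h_const = sigmoid(W1_const @ x)
print(np.allclose(h_const, h_const[0]))  # -> True

# Small random init gives each hidden unit a different activation.
W1_rand = (rng.standard_normal((800, 784)) * 0.01).astype(np.float32)
h_rand = sigmoid(W1_rand @ x)
print(np.allclose(h_rand, h_rand[0]))  # -> False
```

With constant init the pre-activations are also large (roughly 0.1 times the sum of 784 pixel values), which pushes a sigmoid deep into saturation, so a small random initialization is the usual remedy.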
I'm using the mnist dataset for this training, with tensorflow 1.15.0 as the backend.