
Python TensorFlow: ReLU unexpectedly normalizes


I thought a rectified linear unit was supposed to compute the following function:

relu(x) = max(x, 0)
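That elementwise definition can be sketched in plain NumPy, independently of TensorFlow (a minimal reference, not the TensorFlow implementation):

```python
import numpy as np

def relu(x):
    # Elementwise max(x, 0): negative entries become 0,
    # non-negative entries pass through unchanged.
    return np.maximum(x, 0)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # negatives clipped to 0; positives unchanged
```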
However, this does not seem to be the case for tf.nn.relu:

import tensorflow as tf
import numpy as np

# 10x3 matrix of large positive and negative values
rand_large = np.random.randn(10, 3) * 100
X = tf.placeholder(tf.float32, [10, 3])
sess = tf.Session()
sess.run(tf.nn.relu(X), feed_dict={X: rand_large})
The random matrix looks like this:

>>> rand_large
array([[  21.94064161,  -82.16632876,   16.25152777],
   [  55.54897693,  -93.15235155,  118.99166126],
   [ -13.36452239,   39.36508285,   65.42844521],
   [-193.34041145,  -97.08632376,   99.22162259],
   [  87.02924619,    2.04134891,  -27.29975745],
   [-181.11406687,   43.55952393,   42.29312993],
   [ -29.81242188,   93.5764354 , -165.62711447],
   [  17.78380711, -171.30536766, -197.20709038],
   [ 105.94903623,   34.07995616,   -7.27568839],
   [-100.59533697, -189.88957685,   -7.52421816]])
The output of the relu function looks like this:

>>> sess.run(tf.nn.relu(X), feed_dict={X:rand_large})
array([[ 1. ,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5],
   [ 0.5,  0.5,  0.5]], dtype=float32)
So, if I am seeing this correctly, tf.nn.relu performs some kind of normalization, right? And if so, why isn't that mentioned in the documentation?

OK, I found that the whole problem was related to my TensorFlow installation, which appears to have been corrupted. On another machine I do get the expected results.
Thanks for the help and the valuable comments.

tf.nn.relu does not normalize the data. For example, if I run

import tensorflow as tf
import numpy as np

X = tf.placeholder(tf.float32, [2, 3])
relu_X = tf.nn.relu(X)

sess = tf.Session()
mat = np.array([[-1, 2, 3], [2, -5, 1]])
sess.run(relu_X, feed_dict={X: mat})
the result is

array([[ 0.,  2.,  3.],
       [ 2.,  0.,  1.]], dtype=float32)
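As a sanity check against a possibly broken installation, the same matrix can be run through a plain NumPy reference; the tf.nn.relu result above should match it exactly (a sketch, not TensorFlow code):

```python
import numpy as np

mat = np.array([[-1.0, 2.0, 3.0], [2.0, -5.0, 1.0]])

# Reference ReLU: zero out negatives, keep positives.
reference = np.maximum(mat, 0)
print(reference)

# This agrees with the tf.nn.relu result shown for this matrix.
expected = np.array([[0.0, 2.0, 3.0], [2.0, 0.0, 1.0]])
assert np.array_equal(reference, expected)
```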

The output you mention is wrong: negative values should become zero and positive values should be preserved. When I run your code, I do not get the output you describe. That is strange... maybe it is my TF installation?