GaussianDropout vs. Dropout vs. GaussianNoise in Python Keras


Can someone explain the difference between the different dropout types? Based on the formula, I assumed that GaussianDropout does not drop some units to zero (dropout) but instead multiplies those units by some distribution. However, when testing in practice, all units are touched. The result looks more like classic GaussianNoise:

import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
layer = tf.keras.layers.GaussianDropout(.05, input_shape=(2,))
data = np.arange(10).reshape(5, 2).astype(np.float32)
print(data)

outputs = layer(data, training=True)
print(outputs)
Result:

[[0. 1.]
 [2. 3.]
 [4. 5.]
 [6. 7.]
 [8. 9.]]
tf.Tensor(
[[0.    1.399]
 [1.771 2.533]
 [4.759 3.973]
 [5.562 5.94 ]
 [8.882 9.891]], shape=(5, 2), dtype=float32)
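
This matches the documented behaviour: GaussianDropout applies multiplicative 1-centered Gaussian noise with stddev = sqrt(rate / (1 - rate)), so every unit is rescaled rather than zeroed. A quick sketch to confirm this (the zero-free test data is my own choice, so the multiplicative factors can be recovered by division):

rate = 0.05
stddev = np.sqrt(rate / (1.0 - rate))
print(stddev)  # ~0.229: spread of the multiplicative noise factors

nonzero = np.arange(1, 11).reshape(5, 2).astype(np.float32)  # no zeros
out = tf.keras.layers.GaussianDropout(rate)(nonzero, training=True)
print((out / nonzero).numpy())  # per-element factors, all scattered around 1.0
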
EDIT:

Apparently, this is what I was looking for:

from tensorflow.keras import backend as K

def RealGaussianDropout(x, rate, stddev):

    # Keep each element with probability 1 - rate ...
    random_tensor = tf.random.uniform(tf.shape(x))
    keep_mask = tf.cast(random_tensor >= rate, tf.float32)
    # ... and add N(0, stddev^2) noise to the dropped fraction.
    noised = x + K.random_normal(tf.shape(x), mean=0.0, stddev=stddev)
    ret = tf.multiply(x, keep_mask) + tf.multiply(noised, 1 - keep_mask)

    return ret


outputs = RealGaussianDropout(data, 0.2, 0.1)
print(outputs)
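
A quick sanity check of this helper (my own addition, not part of the original post): with rate=0.2, roughly 80% of the entries should pass through exactly unchanged, since continuous noise virtually never maps a value onto itself:

big = tf.ones((1000, 1000))
noised = RealGaussianDropout(big, 0.2, 0.1)
print(tf.reduce_mean(tf.cast(noised == big, tf.float32)).numpy())  # ~0.8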

You are right... GaussianDropout and GaussianNoise are very similar. You can test all the similarities by reproducing them on your own:

def dropout(x, rate):

    # Scale the survivors by 1 / keep_prob, then zero out a `rate` fraction.
    keep_prob = 1 - rate
    scale = 1 / keep_prob
    ret = tf.multiply(x, scale)
    random_tensor = tf.random.uniform(tf.shape(x))
    keep_mask = random_tensor >= rate
    ret = tf.multiply(ret, tf.cast(keep_mask, tf.float32))

    return ret

def gaussian_dropout(x, rate):

    # Multiplicative 1-centered noise with stddev = sqrt(rate / (1 - rate)).
    stddev = np.sqrt(rate / (1.0 - rate))
    ret = x * K.random_normal(tf.shape(x), mean=1.0, stddev=stddev)

    return ret

def gaussian_noise(x, stddev):

    # Additive 0-centered noise; every element is touched.
    ret = x + K.random_normal(tf.shape(x), mean=0.0, stddev=stddev)

    return ret
GaussianNoise simply adds random values drawn from a normal distribution with mean 0, while GaussianDropout simply multiplies by random values drawn from a normal distribution with mean 1. These operations involve every element of the input. Classic Dropout, in contrast, sets some input elements to 0 and applies a scaling to the others.
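
Despite the different mechanisms, all three transforms preserve the input in expectation, which is why the layers can simply be disabled at inference time. A quick check of this (my own addition), reusing the functions above:

x = tf.fill((100000,), 2.0)
print(tf.reduce_mean(dropout(x, 0.4)).numpy())           # ~2.0
print(tf.reduce_mean(gaussian_dropout(x, 0.4)).numpy())  # ~2.0
print(tf.reduce_mean(gaussian_noise(x, 0.3)).numpy())    # ~2.0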

Dropout

data = np.arange(10).reshape(5, 2).astype(np.float32)

set_seed(0)
layer = tf.keras.layers.Dropout(.4)
out1 = layer(data, training=True)
set_seed(0)
out2 = dropout(data, .4)
print(tf.reduce_all(out1 == out2).numpy()) # TRUE
GaussianDropout

data = np.arange(10).reshape(5, 2).astype(np.float32)

set_seed(0)
layer = tf.keras.layers.GaussianDropout(.05)
out1 = layer(data, training=True)
set_seed(0)
out2 = gaussian_dropout(data, .05)
print(tf.reduce_all(out1 == out2).numpy()) # TRUE
GaussianNoise

data = np.arange(10).reshape(5, 2).astype(np.float32)

set_seed(0)
layer = tf.keras.layers.GaussianNoise(.3)
out1 = layer(data, training=True)
set_seed(0)
out2 = gaussian_noise(data, .3)
print(tf.reduce_all(out1 == out2).numpy()) # TRUE
To grant reproducibility, we used a set_seed helper (TF2).
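A minimal sketch of such a helper, consistent with how set_seed(0) is called above (assumption: it seeds TensorFlow, NumPy, and Python's built-in random module; the exact definition is not shown here):

import os
import random

def set_seed(seed):
    tf.random.set_seed(seed)                  # TensorFlow / Keras layers
    np.random.seed(seed)                      # NumPy
    random.seed(seed)                         # Python stdlib
    os.environ['PYTHONHASHSEED'] = str(seed)  # hash randomization
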
Comment: Thank you for the detailed answer. It's strange that they chose to define two such similar functions. I'm still interested in a layer that works like Dropout but randomly spreads noise over 20% of the data. Has nobody tried that yet?

Reply: Yes, I don't know why... but if you keep the masking ability of the classic dropout function/layer and combine it with the additive noise of the Gaussian approach, you can build your own dropout function/layer, as sketched below.
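
Following that suggestion, a minimal sketch of such a layer (the name NoisyDropout and the implementation are my own, not a built-in Keras layer): with probability rate an element receives additive zero-mean Gaussian noise, otherwise it passes through untouched, and like Dropout it becomes the identity at inference time.

class NoisyDropout(tf.keras.layers.Layer):
    """Adds N(0, stddev^2) noise to a random `rate` fraction of the inputs."""

    def __init__(self, rate, stddev, **kwargs):
        super().__init__(**kwargs)
        self.rate = rate
        self.stddev = stddev

    def call(self, inputs, training=None):
        if not training:
            return inputs  # identity at inference time, like Dropout
        # Mask selecting the ~`rate` fraction of elements that get noised.
        noise_mask = tf.cast(
            tf.random.uniform(tf.shape(inputs)) < self.rate, inputs.dtype)
        noise = tf.random.normal(
            tf.shape(inputs), mean=0.0, stddev=self.stddev)
        return inputs + noise_mask * noise

layer = NoisyDropout(rate=0.2, stddev=0.1)
print(layer(tf.ones((2, 3)), training=True))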