Python: adding a truncation layer to a functional API model
I am trying to add a truncation layer after the Conv2D layer in the following code:
input_layer = Input(shape=(256, 256, 1))
conv = Conv2D(8, (5, 5), padding='same', strides=1, use_bias=False)(input_layer)
# lambda_layer is the truncation layer I want to build, applied to conv
output_layer = Activation(activation='tanh')(lambda_layer)
output_layer = AveragePooling2D(pool_size=(5, 5), strides=2)(output_layer)
output_layer = BatchNormalization()(output_layer)
The truncation layer must satisfy:

f(x) = −T   if x < −T
       x    if −T ≤ x ≤ T
       T    if x > T

where T is a threshold value and x is the output of the convolution layer.
Can anyone help me build this layer? Thanks.

You can use tf.clip_by_value together with tf.stop_gradient to keep the gradient flowing, since tf.clip_by_value has zero gradient outside the clipping range. Finally, wrap it in a Lambda layer:
import functools
import tensorflow as tf

def clip_preserve_grad(inp, clip_min, clip_max):
    # forward pass: clipped value; backward pass: identity gradient,
    # because the clipping correction is wrapped in tf.stop_gradient
    return inp + tf.stop_gradient(tf.clip_by_value(inp, clip_min, clip_max) - inp)

T = 0.5
trunc_func = functools.partial(clip_preserve_grad, clip_min=-T, clip_max=T)
trunc = tf.keras.layers.Lambda(trunc_func)
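As a sketch (my wiring, assuming T = 0.5 and the standard tensorflow.keras imports), this Lambda layer can be dropped into the model from the question right after the Conv2D layer:

from tensorflow.keras.layers import Input, Conv2D, Lambda, Activation, AveragePooling2D, BatchNormalization
from tensorflow.keras.models import Model

input_layer = Input(shape=(256, 256, 1))
conv = Conv2D(8, (5, 5), padding='same', strides=1, use_bias=False)(input_layer)
# truncation layer: clips conv outputs to [-T, T] while passing the gradient through
lambda_layer = Lambda(trunc_func)(conv)
output_layer = Activation(activation='tanh')(lambda_layer)
output_layer = AveragePooling2D(pool_size=(5, 5), strides=2)(output_layer)
output_layer = BatchNormalization()(output_layer)
model = Model(input_layer, output_layer)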
Using the Lambda layer:
>>> a = tf.random.normal((1,10))
>>> a
<tf.Tensor: shape=(1, 10), dtype=float32, numpy=
array([[-1.8041286 , -0.11153453, -0.84555113, 0.8489615 , 0.12237629,
1.3350475 , 0.619644 , -0.5498301 , -0.6082269 , 0.8465021 ]],
dtype=float32)>
>>> trunc(a)
<tf.Tensor: shape=(1, 10), dtype=float32, numpy=
array([[-0.5 , -0.11153453, -0.5 , 0.5 , 0.12237629,
0.5 , 0.5 , -0.5 , -0.5 , 0.5 ]],
dtype=float32)>
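As a quick check (my addition, not part of the original answer), a tf.GradientTape run shows that the wrapped version keeps a nonzero gradient on the clipped entries, while a plain tf.clip_by_value does not:

with tf.GradientTape(persistent=True) as tape:
    tape.watch(a)
    y_preserve = clip_preserve_grad(a, -T, T)   # straight-through clipping
    y_plain = tf.clip_by_value(a, -T, T)        # ordinary clipping
print(tape.gradient(y_preserve, a))  # ones everywhere
print(tape.gradient(y_plain, a))     # zeros where |a| > T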
You can use tensorflow.keras.backend.switch to build the required function and wrap it in a Lambda layer.

Build and test the function:
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

T = 5
X = tf.constant(np.random.uniform(-10, 10, (3, 5)))

def switch_func(X, T):
    zeros = tf.zeros_like(X)
    T_matrix = tf.ones_like(X) * T
    # -T where X < -T, 0 elsewhere
    cond1 = K.switch(X < -T_matrix, -T_matrix, zeros)
    # T where X > T, 0 elsewhere
    cond2 = K.switch(X > T_matrix, T_matrix, zeros)
    # X where it was not clipped, 0 elsewhere
    cond3 = K.switch(tf.abs(cond1 + cond2) == T, zeros, X)
    res = cond1 + cond2 + cond3
    return res

switch_func(X, T)
<tf.Tensor: shape=(3, 5), dtype=float64, numpy=
array([[-5. , 0.65807168, -4.93481499, -5. , -2.94954848],
[-1.25114075, -5. , 2.97657545, 5. , -0.8958152 ],
[-1.26611956, 5. , -3.38477137, 5. , -3.53358454]])>
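As a cross-check (my addition), the result agrees element-wise with a plain clip of X to [-T, T]:

print(tf.reduce_all(tf.equal(switch_func(X, T), tf.clip_by_value(X, -5.0, 5.0))))
# expected: tf.Tensor(True, shape=(), dtype=bool)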
Thank you, Marco Cerliani, your answer is exactly what I needed.
import numpy as np
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

X = np.random.uniform(0, 1, (100, 10))
y = np.random.uniform(0, 1, (100,))

inp = Input((10,))
x = Dense(8)(inp)
# truncation layer built from switch_func with threshold T = 0.5
x = Lambda(lambda x: switch_func(x, T=0.5))(x)
out = Dense(1)(x)

model = Model(inp, out)
model.compile('adam', 'mse')
model.fit(X, y, epochs=3)