Python: how to keep the absolute value of each dimension of a sparse gradient from getting too large?


Consider the following code:

import tensorflow as tf

inputs=tf.placeholder(tf.int32, [None])
labels=tf.placeholder(tf.int32, [None])

with tf.variable_scope('embedding'):
    embedding=tf.get_variable('embedding', shape=[2000000, 300], dtype=tf.float32)

layer1=tf.nn.embedding_lookup(embedding, inputs)
logits=tf.layers.dense(layer1, 2000000)

loss=tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
cost=tf.reduce_sum(loss)

optimizer=tf.train.GradientDescentOptimizer(0.01)
grads, vars=zip(*optimizer.compute_gradients(cost))
for g in grads:
    print(0, g)

grads1=[tf.clip_by_value(g, -100, 100) for g in grads]
for g in grads1:
    print(1, g)

grads2, _=tf.clip_by_global_norm(grads, 10)
for g in grads2:
    print(2, g)
The output is:

0 IndexedSlices(indices=Tensor("gradients/embedding_lookup_grad/Reshape_1:0", shape=(?,), dtype=int32), values=Tensor("gradients/embedding_lookup_grad/Reshape:0", shape=(?, 300), dtype=float32), dense_shape=Tensor("gradients/embedding_lookup_grad/ToInt32:0", shape=(2,), dtype=int32))
0 Tensor("gradients/dense/MatMul_grad/tuple/control_dependency_1:0", shape=(300, 2000000), dtype=float32)
0 Tensor("gradients/dense/BiasAdd_grad/tuple/control_dependency_1:0", shape=(2000000,), dtype=float32)
C:\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py:97: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 600000000 elements. This may consume a large amount of memory.
  num_elements)
1 Tensor("clip_by_value:0", shape=(?, 300), dtype=float32)
1 Tensor("clip_by_value_1:0", shape=(300, 2000000), dtype=float32)
1 Tensor("clip_by_value_2:0", shape=(2000000,), dtype=float32)
2 IndexedSlices(indices=Tensor("gradients/embedding_lookup_grad/Reshape_1:0", shape=(?,), dtype=int32), values=Tensor("clip_by_global_norm/clip_by_global_norm/_0:0", shape=(?, 300), dtype=float32), dense_shape=Tensor("gradients/embedding_lookup_grad/ToInt32:0", shape=(2,), dtype=int32))
2 Tensor("clip_by_global_norm/clip_by_global_norm/_1:0", shape=(300, 2000000), dtype=float32)
2 Tensor("clip_by_global_norm/clip_by_global_norm/_2:0", shape=(2000000,), dtype=float32)
I know there are two ways to keep gradients from getting too large: tf.clip_by_value, which clips each dimension independently, and tf.clip_by_global_norm, which rescales the gradients based on their global norm.
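As a quick side-by-side of the two ops (a toy snippet of my own, not from the question; the tensor t and the bounds are made up for illustration):

import tensorflow as tf

t = tf.constant([[3.0, -4.0]])                    # global (L2) norm is 5
by_value = tf.clip_by_value(t, -2.0, 2.0)         # element-wise: [[2., -2.]]
(by_norm,), _ = tf.clip_by_global_norm([t], 2.5)  # scaled by 2.5/5: [[1.5, -2.]]

with tf.Session() as sess:
    print(sess.run(by_value))  # [[ 2. -2.]]
    print(sess.run(by_norm))   # [[ 1.5 -2. ]]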

However, tf.clip_by_value converts a sparse gradient into a dense one, which greatly increases memory usage and hurts computational efficiency, as the warning above shows, while tf.clip_by_global_norm does not. I can understand why it was designed this way, but how can I limit the absolute value of each dimension of a sparse gradient without losing efficiency?
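To make the densification concrete, here is a toy sketch of my own (names and sizes are made up; it reflects the TF 1.x behavior shown in the output above, where tf.clip_by_value first converts an IndexedSlices into a dense Tensor):

import tensorflow as tf

emb = tf.ones([10, 4])                        # stand-in for a large embedding table
loss = tf.reduce_sum(tf.gather(emb, [1, 3]))  # like the embedding_lookup above

grad, = tf.gradients(loss, [emb])
print(type(grad).__name__)                    # IndexedSlices: only rows 1 and 3 stored

clipped = tf.clip_by_value(grad, -1.0, 1.0)   # implicitly densified first
print(type(clipped).__name__)                 # Tensor: all 10 rows materialized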


Please don't just tell me to use tf.clip_by_global_norm. I know it is fine in most cases, but it is not what I want.

For now I use the following, and it works well:

grads=[
    tf.IndexedSlices(tf.clip_by_value(g.values, -max_grad_value, max_grad_value),
                     g.indices, g.dense_shape)
    if isinstance(g, tf.IndexedSlices)
    else tf.clip_by_value(g, -max_grad_value, max_grad_value)
    for g in grads
]
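
For completeness, here is the same idea wrapped in a self-contained helper; the function name clip_grads_by_value and the apply_gradients usage are my additions for illustration, not part of the original question:

import tensorflow as tf

def clip_grads_by_value(grads, max_grad_value):
    # Clip element-wise; for an IndexedSlices, clip only its .values
    # tensor and rewrap, so the gradient is never densified.
    clipped = []
    for g in grads:
        if isinstance(g, tf.IndexedSlices):
            clipped.append(tf.IndexedSlices(
                tf.clip_by_value(g.values, -max_grad_value, max_grad_value),
                g.indices, g.dense_shape))
        else:
            clipped.append(tf.clip_by_value(g, -max_grad_value, max_grad_value))
    return clipped

Used with the graph from the question:

grads, vars=zip(*optimizer.compute_gradients(cost))
train_op=optimizer.apply_gradients(zip(clip_grads_by_value(grads, 100), vars))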