Python: avoiding TensorFlow session graph growth


I have a function that uses Keras with the TensorFlow backend. In a loop I add operations to the session graph and then run the session. The problem is that the graph seems to grow considerably after several calls to the function: after 4-5 calls, evaluating the function takes twice as long.

Here is the function:

def attack_fgsm(self, x, y, epsilon=1e-2):
    sess = K.get_session()
    nabla_x = np.zeros(x.shape)

    for (weak_classi, alpha) in zip(self.models, self.alphas):
        grads = K.gradients(K.categorical_crossentropy(y, weak_classi.model.output), weak_classi.model.input)[0]
        grads = sess.run(grads, feed_dict={weak_classi.model.input: x})
        nabla_x += alpha*grads

    x_adv = x + epsilon*np.sign(nabla_x)

    return x_adv
So the question is: how can I optimize this function so that the graph does not keep growing?
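For reference, the update the loop above ultimately computes is the standard FGSM perturbation step, `x + epsilon * sign(grad)`. A minimal NumPy sketch of just that step (with made-up example values, independent of any model):

```python
import numpy as np

def fgsm_step(x, grad, epsilon=1e-2):
    # Move each input component by epsilon in the direction
    # that increases the loss (the sign of the gradient).
    return x + epsilon * np.sign(grad)

x = np.array([0.5, 0.5])
grad = np.array([0.3, -0.2])   # hypothetical loss gradient w.r.t. x
x_adv = fgsm_step(x, grad, epsilon=0.1)
# x_adv -> [0.6, 0.4]
```

The expensive part of the original function is not this arithmetic but producing `grad`, which is where the graph growth comes from.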

After some research, it seemed I needed to use placeholders to solve this, so I came up with the following:

def attack_fgsm(self, x, y, epsilon=1e-2):
    sess = K.get_session()
    nabla_x = np.zeros(x.shape)
    y_ph = K.placeholder(y.shape)
    model_in = K.placeholder(x.shape, dtype="float")

    for (weak_classi, alpha) in zip(self.models, self.alphas):
        grads = K.gradients(K.categorical_crossentropy(y_ph, weak_classi.model.output), weak_classi.model.input)[0]
        grads = sess.run(grads, feed_dict={y_ph:y, model_in:x})
        nabla_x += alpha*grads

    x_adv = x + epsilon*np.sign(nabla_x)
    #K.clear_session()
    return x_adv
Which results in:

Traceback (most recent call last):
  File "/home/simond/adversarialboosting/src/scripts/robustness_study.py", line 93, in <module>
    x_att_ada = adaboost.attack_fgsm(x_test, y_test, epsilon=eps)
  File "/home/simond/adversarialboosting/src/classes/AdvBoostM1.py", line 308, in attack_fgsm
    grads = sess.run(grads, feed_dict={y_ph:y, model_in:x})
  File "/home/simond/miniconda3/envs/keras/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/simond/miniconda3/envs/keras/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1158, in _run
    self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
  File "/home/simond/miniconda3/envs/keras/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 474, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/home/simond/miniconda3/envs/keras/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 261, in for_fetch
    type(fetch)))
TypeError: Fetch argument None has invalid type <class 'NoneType'>

The problem is that this line runs every time the function is called:

grads = K.gradients(K.categorical_crossentropy(y, weak_classi.model.output), weak_classi.model.input)[0]
This adds the symbolic gradient computation to the graph, and it does not need to run more than once per `weak_classi` instance, so you can split the function in two. This part should run only once, at initialization time:

# self.y_ph is a label placeholder created once alongside the gradient ops,
# e.g. self.y_ph = K.placeholder(shape=y.shape)
self.weak_classi_grads = []
for (weak_classi, alpha) in zip(self.models, self.alphas):
    grads = K.gradients(K.categorical_crossentropy(self.y_ph, weak_classi.model.output), weak_classi.model.input)[0]
    self.weak_classi_grads.append(grads)
Then you can rewrite the evaluation function as:

def attack_fgsm(self, x, y, epsilon=1e-2):
    sess = K.get_session()
    nabla_x = np.zeros(x.shape)

    for (weak_classi, alpha, grads) in zip(self.models, self.alphas, self.weak_classi_grads):
        grads = sess.run(grads, feed_dict={weak_classi.model.input: x, self.y_ph: y})
        nabla_x += alpha*grads

    x_adv = x + epsilon*np.sign(nabla_x)

    return x_adv
This way the graph contains only one instance of the gradient computation per model, and you only need to run the session to evaluate those gradients with different inputs.
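The pattern at work here is "build once, run many times". A plain-Python analogy (no TensorFlow; the class, `_build`, and the `scale * x` stand-in for a compiled gradient are all hypothetical) makes the fix easy to check: the expensive build step happens exactly once per model, no matter how many times the attack is called.

```python
# Hypothetical sketch: build the expensive "symbolic" object once at
# initialization, then only run it per call, mirroring how the answer
# moves K.gradients out of attack_fgsm.
class GradCache:
    def __init__(self, models):
        self.build_count = 0
        # Built once, analogous to self.weak_classi_grads.
        self.grad_fns = [self._build(m) for m in models]

    def _build(self, scale):
        # Stand-in for K.gradients: returns a "compiled" gradient function.
        self.build_count += 1
        return lambda x: scale * x

    def attack(self, x):
        # Only *runs* the prebuilt functions; nothing new is built per call.
        return sum(fn(x) for fn in self.grad_fns)

cache = GradCache([1.0, 2.0, 3.0])
for _ in range(5):
    cache.attack(10.0)
assert cache.build_count == 3  # unchanged after repeated calls
```

In the original code, every call to `attack_fgsm` played the role of `_build`, which is exactly why the graph, and the evaluation time, kept growing.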