
How do I fix the "You must feed a value for placeholder tensor 'Placeholder_2' with dtype float and shape [?,1,680,1]" error in Python?

Tags: Python, Tensorflow, Generative Adversarial Network, Cleverhans

I'm having a problem running modified code from the Cleverhans library. I'm trying to run a modified version of the mnist_blackbox.py example. The input is 3*680*1 instead of 28*28*1. x_adv_sub is a 1*680*1 tensor that gets combined with x_test_rest_tf, a 2*680*1 tensor, to produce the 3*680*1 concat_adv tensor passed to the model_eval function:

accuracy = model_eval(sess, x, y, model.get_logits(concat_adv),
                        x_test, y_test, args=eval_params) 
However, when it runs, it throws the following error:

"tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_2' with dtype float and shape [?,1,680,1]"

I even tried concatenating three copies of the same 1*680*1 x_adv_sub tensor to feed into the model_eval function, but I still get the same error:

concat_adv = tf.concat([x_adv_sub, x_adv_sub, x_adv_sub], 1)

I don't know how to properly concatenate tensors that are defined from placeholders. I'd really appreciate it if someone could help me.
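As a side note on the concatenation itself: in TF 1.x, concatenating a placeholder-derived tensor with a constant tensor works fine; the concatenated tensor just still depends on the placeholder, so it can only be evaluated when that placeholder is fed. A minimal sketch with made-up shapes (not part of the original script):

import numpy as np
import tensorflow as tf

# A placeholder concatenated with a constant tensor along axis 1.
p = tf.placeholder(tf.float32, shape=(None, 1, 680, 1))
rest = tf.convert_to_tensor(np.zeros((4, 2, 680, 1), dtype=np.float32))
combined = tf.concat([p, rest], 1)  # shape (4, 3, 680, 1), still depends on p

with tf.Session() as sess:
  # Works, because p is fed:
  sess.run(combined, feed_dict={p: np.zeros((4, 1, 680, 1), dtype=np.float32)})
  # Raises the same "must feed a value for placeholder tensor" error, because p is not fed:
  # sess.run(combined)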

Here is part of the code:

def mnist_blackbox(train_start=0, train_end=1000, test_start=0,
                   test_end=200, nb_classes=NB_CLASSES,
                   batch_size=BATCH_SIZE, learning_rate=LEARNING_RATE,
                   nb_epochs=NB_EPOCHS, holdout=HOLDOUT, data_aug=DATA_AUG,
                   nb_epochs_s=NB_EPOCHS_S, lmbda=LMBDA,
                   aug_batch_size=AUG_BATCH_SIZE):
  """
  MNIST tutorial for the black-box attack from arxiv.org/abs/1602.02697
  :param train_start: index of first training set example
  :param train_end: index of last training set example
  :param test_start: index of first test set example
  :param test_end: index of last test set example
  :return: a dictionary with:
           * black-box model accuracy on test set
           * substitute model accuracy on test set
           * black-box model accuracy on adversarial examples transferred
             from the substitute model
  """

  # Set logging level to see debug information
  set_log_level(logging.DEBUG)

  # Dictionary used to keep track and return key accuracies
  accuracies = {}

  # Perform tutorial setup
  assert setup_tutorial()

  # Create TF session
  sess = tf.Session()

  # Get data

  with open('X.pickle', 'rb') as pickle_in:
    x_all = pickle.load(pickle_in)
    x_all = np.divide(x_all, 255)

  with open('y.pickle', 'rb') as pickle_in:
    y_all = pickle.load(pickle_in)

  # Convert to float32
  x_all = np.float32(x_all)
  y_all = np.float32(y_all)

  num_class = 3
  class_labels = np.zeros((len(y_all), num_class))

  # Make the y dataset a one-hot matrix (each row marks the class label)
  for index in range(len(y_all)):
    if y_all[index] == 0:
      class_labels[index][0] = 1
    elif y_all[index] == 1:
      class_labels[index][1] = 1
    elif y_all[index] == 2:
      class_labels[index][2] = 1

  y_all = class_labels

  #splitting data set to train/test randomly
  x_train, x_test, y_train, y_test = train_test_split(x_all, y_all, test_size=0.2)


  """
  # Get MNIST data
  mnist = MNIST(train_start=train_start, train_end=train_end,
                test_start=test_start, test_end=test_end)
  x_train, y_train = mnist.get_set('train')
  x_test, y_test = mnist.get_set('test')
  """

  # Initialize substitute training set reserved for adversary
  x_sub = x_test[:holdout]
  y_sub = y_test[:holdout]

  x_sub_1=x_sub[:,:1,:,:]


  # Redefine test set as remaining samples unavailable to adversaries
  x_test = x_test[holdout:]
  y_test = y_test[holdout:]
  x_test_1=x_test[:,:1,:,:]

  # Obtain Image parameters
  img_rows, img_cols, nchannels = x_train.shape[1:4]
  nb_classes = y_train.shape[1]

  # Define input TF placeholder
  x = tf.placeholder(tf.float32, shape=(None, img_rows, img_cols,
                                        nchannels))
  y = tf.placeholder(tf.float32, shape=(None, nb_classes))

  # Define input TF placeholder for X-sub Vulnerability Analysis 

  x_vul = tf.placeholder(tf.float32, shape=(None, 1, img_cols,
                                        nchannels))

  # Seed random number generator so tutorial is reproducible
  rng = np.random.RandomState([2017, 8, 30])

  # Simulate the black-box model locally
  # You could replace this by a remote labeling API for instance
  print("Preparing the black-box model.")
  prep_bbox_out = prep_bbox(sess, x, y, x_train, y_train, x_test, y_test,
                            nb_epochs, batch_size, learning_rate,
                            rng, nb_classes, img_rows, img_cols, nchannels)
  model, bbox_preds, accuracies['bbox'] = prep_bbox_out

  # Train substitute using method from https://arxiv.org/abs/1602.02697
  print("Training the substitute model.")
  train_sub_out = train_sub(sess, x_vul, y, x_sub_1, y_sub, x_test_1, y_test,
                            nb_epochs, batch_size, learning_rate,
                            rng, nb_classes, 1, img_cols, nchannels)
  model_sub, preds_sub, accuracies['sub'] = train_sub_out

  # Initialize the Fast Gradient Sign Method (FGSM) attack object.
  fgsm_par = {'eps': 0.1, 'ord': np.inf, 'clip_min': 0., 'clip_max': 1.}
  fgsm = FastGradientMethod(model_sub, sess=sess)

  # Craft adversarial examples using the substitute
  eval_params = {'batch_size': batch_size}
  x_adv_sub = fgsm.generate(x_vul, **fgsm_par)

  print(x_adv_sub.get_shape())
  print(type(x_adv_sub))
  x_test_rest=x_test[:,1:,:,:]
  x_test_rest_tf=tf.convert_to_tensor(x_test_rest)
  concat_adv=tf.concat([x_adv_sub, x_test_rest_tf], 1)
  print(concat_adv.get_shape())
  print(type(concat_adv))


  # Evaluate the accuracy of the "black-box" model on adversarial examples
  accuracy = model_eval(sess, x, y, model.get_logits(concat_adv),
                        x_test, y_test, args=eval_params)
  print('Test accuracy of oracle on adversarial examples generated '
        'using the substitute: ' + str(accuracy))
  accuracies['bbox_on_sub_adv_ex'] = accuracy

  return accuracies

This might be related to an issue in the Cleverhans repo.

Before calling model.fit() or model.eval(), try inserting the following command:
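Presumably this refers to the standalone-Keras equivalent of the tf.keras call shown below, i.e. something along the lines of:

import keras.backend as K
K.set_learning_phase(False)  # assumed: standalone-Keras equivalent of the tf.keras variant below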

Or, if you are using tf.keras:

import tensorflow.keras.backend as K
K.set_learning_phase(False)

Unfortunately, it is not clear from the question exactly why this resolves the problem, or how it might affect training/evaluation, so monitor your results carefully after making this change.
