Python 2.7 TypeError: 'function' object is not subscriptable (TensorFlow)


I am trying to run the code from the RFHO repository, more specifically its starting example.ipynb. The only thing I want to change is to compute the hypergradient in forward mode instead of reverse mode. This is the code after the change:

import tensorflow as tf
import rfho as rf

from rfho.datasets import load_mnist


mnist = load_mnist(partitions=(.05, .01)) # 5% of data in training set, 1% in validation
# remaining in test set (change these percentages and see the effect on regularization hyperparameter)

x, y = tf.placeholder(tf.float32, name='x'), tf.placeholder(tf.float32, name='y')
# define the model (here use a linear model from rfho.models)
model = rf.LinearModel(x, mnist.train.dim_data, mnist.train.dim_target)
# vectorize the model and build the state vector (augment=0 here, since plain
# gradient descent without momentum is used)
s, out, w_matrix = rf.vectorize_model(model.var_list, model.inp[-1], model.Ws[0],
                                      augment=0)
# (this function will print also some tensorflow infos and warnings about variables
# collections... we'll solve this)

# define error
error = tf.reduce_mean(rf.cross_entropy_loss(labels=y, logits=out), name='error')

constraints = []

# define training error by error + L2 weights penalty
rho = tf.Variable(0., name='rho')  # regularization hyperparameter
training_error = error + rho*tf.reduce_sum(tf.pow(w_matrix, 2))
constraints.append(rf.positivity(rho))  # regularization coefficient should be positive

accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(out, 1), tf.argmax(y, 1)),
                                  "float"), name='accuracy')

# define learning rates and momentum factor as variables, to be optimized
eta = tf.Variable(.01, name='eta')
#mu = tf.Variable(.5, name='mu')
# now define the training dynamics (similar to tf.train.Optimizer)
optimizer = rf.GradientDescentOptimizer.create(s, eta, loss=training_error)

# add the optimizer's natural constraints for its hyperparameters (here the learning rate)
constraints += optimizer.get_natural_hyperparameter_constraints()

# we want to optimize the weights w.r.t. training_error
# and hyperparameters w.r.t. validation error (that in this case is
# error evaluated on the validation set)
# we are going to use forward mode (ForwardHG)
hyper_dict = {error: [rho, eta]}
hyper_opt = rf.HyperOptimizer(optimizer, hyper_dict, method=rf.ForwardHG)

# define helper for stochastic descent
ev_data = rf.ExampleVisiting(mnist.train, batch_size=2**8, epochs=200)
tr_suppl = ev_data.create_supplier(x, y)
val_supplier = mnist.validation.create_supplier(x, y)
test_supplier = mnist.test.create_supplier(x, y)

# Run all for some hyper-iterations and print progresses
def run(hyper_iterations):
    with tf.Session().as_default() as ss:
        ev_data.generate_visiting_scheme()  # needed for remembering the example visited in forward pass
        for hyper_step in range(hyper_iterations):
            hyper_opt.initialize()  # initializes all variables or reset weights to initial state
            hyper_opt.run(ev_data.T, train_feed_dict_supplier=tr_suppl,
                          val_feed_dict_suppliers=val_supplier,
                          hyper_constraints_ops=constraints)
        #
        # print('Concluded hyper-iteration', hyper_step)
        # print('Test accuracy:', ss.run(accuracy, feed_dict=test_supplier()))
        # print('Validation error:', ss.run(error, feed_dict=val_supplier()))

saver = rf.Saver('Staring example', collect_data=False)
with saver.record(rf.Records.tensors('error', fd=('x', 'y', mnist.validation), rec_name='valid'),
                  rf.Records.tensors('error', fd=('x', 'y', mnist.test), rec_name='test'),
                  rf.Records.tensors('accuracy', fd=('x', 'y', mnist.validation), rec_name='valid'),
                  rf.Records.tensors('accuracy', fd=('x', 'y', mnist.test), rec_name='test'),
                  rf.Records.hyperparameters(),
                  rf.Records.hypergradients(),
                  ):  # a context to print some statistics.
    # If you execute again any cell containing the model construction,
    # restart the notebook or reset tensorflow graph in order to prevent errors
    # due to tensor namings
    run(20)  # this will take some time... run it for less hyper-iterations for a quicker look
The problem is that I get a TypeError: 'function' object is not subscriptable after the first iteration:

Traceback (most recent call last):
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_run_in_console.py", line 52, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/examples/simply_example.py", line 80, in <module>
    run(20)  # this will take some time... run it for less hyper-iterations for a quicker look
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/examples/simply_example.py", line 63, in run
    hyper_constraints_ops=constraints)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/save_and_load.py", line 624, in _saver_wrapped
    res = f(*args, **kwargs)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/hyper_gradients.py", line 689, in run
    hyper_batch_step=self.hyper_batch_step.eval())
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/hyper_gradients.py", line 581, in run_all
    return self.hyper_gradients(val_feed_dict_suppliers, hyper_batch_step)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/hyper_gradients.py", line 551, in hyper_gradients
    val_sup_lst.append(val_feed_dict_supplier[k])
TypeError: 'function' object is not subscriptable

You might want to change val_feed_dict_supplier[k] to val_feed_dict_supplier(k).

That doesn't work...

There is too much code here to evaluate it all, but your error means that val_feed_dict_supplier is a function and you are trying to subscript it with [x]. Either you have another variable named val_feed_dict_supplier that is not a function and you have overwritten it, or you meant to call val_feed_dict_supplier(x) instead of val_feed_dict_supplier[x]. Also, the part of the code that actually raises the error is not in your post.
Since your question still needs an answer: you have included a lot of code that is not relevant to this error. As others have commented, you are passing a value of the wrong type as the val_feed_dict_suppliers argument to hyper_opt.run on line 63. It is not clear what kind of argument should be used there; I suggest looking at the examples provided by rfho. You probably want to use something like val_feed_dict_suppliers={error: val_supplier}. I could not find any proper documentation for rfho, only some confusing example code, so this is just a guess. You might have better luck asking this in rfho's GitHub issue tracker.