
Python: InvalidArgumentError when using Hyperopt with TensorFlow


In the code below, I have modified the Deep MNIST example from the official TensorFlow tutorial.

Modification: weight decay is added to the loss function, and the weight updates are changed accordingly. (Please let me know if this is incorrect.)

Hyperopt is used to tune the hyperparameters (the weight-decay factor and the dropout probability).

The code runs fine for a single TPE run, but if the number of trials is increased, the following error is reported:

self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Shape [-1,784] has negative dimensions
         [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,784], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

The problem is most likely that each call to `build_and_optimize()` adds nodes to the same TensorFlow graph, and `tf.train.AdamOptimizer` then tries to optimize the variables from all previous graphs in addition to the current one. To fix this, modify `build_and_optimize()` so that it runs `main()` in a fresh TensorFlow graph:

def build_and_optimize(hp_space):
    global Flags2
    Flags2 = {}
    Flags2['dp'] = hp_space['dropout_global']
    Flags2['wd'] = hp_space['wd']

    # Create a new, empty graph for each trial to avoid interference from
    # previous trials.
    with tf.Graph().as_default():
        res = main(Flags2)

    results = {
        'loss': res,
        'status': STATUS_OK
    }
    return results