Python 3.x adaptive learning rate with a fixed schedule


I am trying to implement a convolutional neural network with an adaptive learning rate and Adam gradient-based optimization. I have the following code:

# learning rate schedule
schedule = np.array([0.0005, 0.0005,
       0.0002, 0.0002, 0.0002,
       0.0001, 0.0001, 0.0001,
       0.00005, 0.00005, 0.00005, 0.00005,
       0.00001, 0.00001, 0.00001, 0.00001, 0.00001, 0.00001, 0.00001, 0.00001])

# define placeholder for variable learning rate
learning_rates = tf.placeholder(tf.float32, (None), name='learning_rate')

# training operation
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
                                                        labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rates)
training_operation = optimizer.minimize(loss_operation)
The code that runs the session:

.
.
.
_, loss = sess.run([training_operation, loss_operation], 
               feed_dict={x: batch_x, y: batch_y, learning_rate: schedule[i]})
.
.
.
i is the epoch counter, initialized to 0, so technically the first value in the schedule should be used.
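
For context, here is a minimal sketch of how the placeholder-driven schedule is meant to be used, assuming EPOCHS, BATCH_SIZE, num_examples, shuffle and the training arrays from the surrounding code; note that the feed_dict key must be the Python variable that holds the placeholder (learning_rates above):

for i in range(EPOCHS):
    XX_train, yy_train = shuffle(X_train, y_train)

    for offset in range(0, num_examples, BATCH_SIZE):
        end = offset + BATCH_SIZE
        batch_x, batch_y = XX_train[offset:end], yy_train[offset:end]

        # feed the per-epoch rate through the placeholder defined above;
        # the dict key is the placeholder variable itself
        _, loss = sess.run([training_operation, loss_operation],
                           feed_dict={x: batch_x, y: batch_y, learning_rates: schedule[i]})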

Whenever I try to run this, I get the following error:

InvalidArgumentError: You must feed a value for placeholder tensor 'learning_rate_2' with dtype float
     [[Node: learning_rate_2 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]


Has anyone run into the same problem? I have tried re-initializing the session and renaming the variables, without success.
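
One quick way to see which placeholders actually exist in the current graph is to list them; a name like learning_rate_2 usually means the graph-building code was executed more than once (for example in a notebook), leaving stale copies behind. A small TensorFlow 1.x sketch:

import tensorflow as tf

# list every placeholder in the default graph; repeated graph construction
# shows up as learning_rate, learning_rate_1, learning_rate_2, ...
for op in tf.get_default_graph().get_operations():
    if op.type == 'Placeholder':
        print(op.name, op.outputs[0].dtype, op.outputs[0].get_shape())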

I found an intermediate solution:

.
.
.
for i in range(EPOCHS):
    XX_train, yy_train = shuffle(X_train, y_train)

    # code for adaptive rate
    optimizer = tf.train.AdamOptimizer(learning_rate = schedule[i])

    for offset in range(0, num_examples, BATCH_SIZE):
        end = offset + BATCH_SIZE
        batch_x, batch_y = XX_train[offset:end], yy_train[offset:end]
        _, loss = sess.run([training_operation, loss_operation], feed_dict={x: batch_x, y: batch_y})
.
.
.

Not very elegant, but at least it works.
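
A cleaner alternative, which avoids both the placeholder and recreating the optimizer, is tf.train.piecewise_constant from TensorFlow 1.x: it turns the same per-epoch schedule into a learning-rate tensor driven by a global step. A sketch, assuming num_examples, BATCH_SIZE and loss_operation from the code above:

import tensorflow as tf

steps_per_epoch = num_examples // BATCH_SIZE  # assumed known from the data pipeline

# one learning-rate value per epoch, taken from the schedule above
values = [float(lr) for lr in schedule]
# global steps at which the rate switches to the next value
# (piecewise_constant expects len(values) == len(boundaries) + 1)
boundaries = [e * steps_per_epoch for e in range(1, len(values))]

global_step = tf.Variable(0, trainable=False, name='global_step')
learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
# passing global_step makes minimize() increment it after every batch
training_operation = optimizer.minimize(loss_operation, global_step=global_step)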


Try this: define the schedule inside the session.

Hi Ali, that did not work either, but I found another way. I removed the learning_rates placeholder and copied the optimizer into my training loop. Not very elegant, but it works.