Python: logits and labels must be the same size: logits_size=[1200,7] labels_size=[600,7]


This is my first time asking a question on Stack Overflow, so I may not manage to give every detail of the problem in a single post.

I am trying to apply a CNN to an activity-recognition dataset, but I am currently hitting the error that logits and labels must be the same size: logits_size=[1200,7] labels_size=[600,7].

Here is the code I use to run the model:

test_acc = []
test_loss = []

train_acc = []
train_loss = []

# with graph.as_default():
saver = tf.train.Saver()

# with tf.Session(graph=graph) as sess:
# with tf.Session() as sess:

sess = tf.Session()
sess.run(tf.global_variables_initializer())
# writer = tf.summary.FileWriter("logs/", sess.graph)

iteration = 1

for e in range(epochs):
#     tf.set_random_seed(123)
    # Loop over batches
    for x,y in get_batches(X_train, y_train, batch_size):

        # Feed dictionary
        feed = {inputs_ : x, labels_ : y, keep_prob_ : 0.5, learning_rate_ : learning_rate}

        # Loss
        loss, _ , acc = sess.run([cost, optimizer, accuracy], feed_dict = feed)
        train_acc.append(acc)
        train_loss.append(loss)

        # Print every 5 iterations
        if (iteration % 5 == 0):
            print("Epoch: {}/{}".format(e, epochs),
                  "Iteration: {:d}".format(iteration),
                  "Train loss: {:.6f}".format(loss),
                  "Train acc: {:.6f}".format(acc))

        # Compute validation loss every 10 iterations
        if (iteration%10 == 0):                
            val_acc_ = []
            val_loss_ = []

            for x_t, y_t in get_batches(X_test, y_test, batch_size):
                # Feed
                feed = {inputs_ : x_t, labels_ : y_t, keep_prob_ : 1.0}  

                # Loss
                loss_v, acc_v = sess.run([cost, accuracy], feed_dict = feed)                    
                val_acc_.append(acc_v)
                val_loss_.append(loss_v)

            # Print info
            print("Epoch: {}/{}".format(e, epochs),
                  "Iteration: {:d}".format(iteration),
                  "Validation loss: {:.6f}".format(np.mean(val_loss_)),
                  "Validation acc: {:.6f}".format(np.mean(val_acc_)))

            # Store
            test_acc.append(np.mean(val_acc_))
            test_loss.append(np.mean(val_loss_))

        # Iterate 
        iteration += 1

    print("Optimization Finished!")
print("Ended!")

Any help would be greatly appreciated. Thanks in advance.

I think the problem is in the reshape. The output of pool_3 can have shape 4×1×72. When you use SAME padding, the zeros are padded at the end.

You need to change the reshape layer to flat = tf.reshape(pool_3, (-1, 4*72)).
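To see why the wrong flatten size produces logits_size=[1200,7] against labels_size=[600,7], here is a minimal NumPy sketch. The pool_3 shape [600, 4, 1, 72] is an assumption based on the batch size in the question and the 4×1×72 shape mentioned above:

```python
import numpy as np

# Hypothetical pool_3 output for a batch of 600 examples: shape [600, 4, 1, 72].
pool_3 = np.zeros((600, 4, 1, 72))

# Flattening with only half of each example's features (2*72 instead of 4*72)
# splits every example into two rows, so the inferred batch dimension doubles:
# 600 * 4 * 72 / (2 * 72) = 1200.
wrong = pool_3.reshape(-1, 2 * 72)
print(wrong.shape)  # (1200, 144) -> logits batch of 1200 vs. 600 labels

# Using the full per-example size keeps one row per example.
flat = pool_3.reshape(-1, 4 * 72)
print(flat.shape)   # (600, 288) -> logits batch matches the 600 labels
```

The dense layer after the flatten then produces one logits row per reshape row, which is why the logits batch dimension ends up at 1200 instead of 600.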

Thank you, that really solved my problem, but why does it need to be 4? On another, similar dataset I used a parameter of 2*72 and it worked there, though.

That depends on the input shape and the layers you use in the network.

Would you mind helping me take a look at this post: