Training a CNTK model in Python with np arrays as input

I have been trying to rewrite a simple classifier in CNTK, but all the examples I have come across use the built-in readers and input maps. My data needs substantial modification after it is read, so I can't use the data-loading approach most examples demonstrate. The code I came across appeared to show how to train directly from plain np arrays, but it doesn't seem to actually train anything.

Minimal working example showing the problem:

import cntk as C
import numpy as np
from cntk.ops import relu
from cntk.layers import Dense, Convolution2D

outputs = 10

input_var = C.input_variable((7, 19, 19), name='features')
label_var = C.input_variable((outputs))

epochs = 20
minibatchSize = 100

cc = C.layers.Convolution2D((3,3), 64, activation=relu)(input_var)
net = C.layers.Dense(outputs)(cc)

loss = C.cross_entropy_with_softmax(net, label_var)

learner = C.adam(net.parameters, 0.0018, 0.9, minibatch_size=minibatchSize)

progressPrinter = C.logging.ProgressPrinter(tag='Training', num_epochs=epochs)

for i in range(epochs):
    X = np.zeros((minibatchSize, 7, 19, 19), dtype=np.float32)
    Y = np.ones((minibatchSize, outputs), dtype=np.float32)

    train_summary = loss.train((X, Y), parameter_learners=[learner], callbacks=[progressPrinter])

Sample output:

Learning rate per 100 samples: 0.0018
Finished Epoch[1 of 20]: [Training] loss = 2.302410 * 100, metric = 0.00% * 100 0.835s (119.8 samples/s);
Finished Epoch[2 of 20]: [Training] loss = 0.000000 * 0, metric = 0.00% * 0 0.003s (  0.0 samples/s);
Finished Epoch[3 of 20]: [Training] loss = 0.000000 * 0, metric = 0.00% * 0 0.001s (  0.0 samples/s);

There is probably a very obvious reason why this happens, but I haven't been able to figure it out. Any ideas on how to remedy it would be greatly appreciated.

It turns out the solution is quite simple: you can build the input dictionary yourself without any reader. Here is the complete code that gets training working:

import cntk as C
import numpy as np
from cntk.ops import relu
from cntk.layers import Dense, Convolution2D

outputs = 10

input_var = C.input_variable((7, 19, 19), name='features')
label_var = C.input_variable((outputs))

epochs = 20
minibatchSize = 100

cc = C.layers.Convolution2D((3,3), 64, activation=relu)(input_var)
net = C.layers.Dense(outputs)(cc)

loss = C.cross_entropy_with_softmax(net, label_var)
pe = C.classification_error(net, label_var)    

learner = C.adam(net.parameters, 0.0018, 0.9, minibatch_size=minibatchSize)

progressPrinter = C.logging.ProgressPrinter(tag='Training', num_epochs=epochs)
# The Trainer ties together the model, the criteria (loss, metric), the learner and logging
trainer = C.Trainer(net, (loss, pe), learner, progressPrinter)

for i in range(epochs):
    # Dummy data standing in for the real, pre-processed arrays
    X = np.zeros((minibatchSize, 7, 19, 19), dtype=np.float32)
    Y = np.ones((minibatchSize, outputs), dtype=np.float32)

    # Feed each minibatch as a plain dict mapping input variables to np arrays
    trainer.train_minibatch({input_var : X, label_var : Y})

    trainer.summarize_training_progress()
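
For completeness, the same dictionary-style feed also works for evaluation and prediction, not just training. A minimal sketch, assuming the trainer, net, input_var and label_var defined above; the held-out arrays X_test / Y_test are hypothetical placeholders for your own data:

# Hypothetical held-out arrays with the same shapes as the training data
X_test = np.zeros((minibatchSize, 7, 19, 19), dtype=np.float32)
Y_test = np.ones((minibatchSize, outputs), dtype=np.float32)

# Average metric on a batch, without updating any parameters
test_metric = trainer.test_minibatch({input_var: X_test, label_var: Y_test})
print('classification error on test batch:', test_metric)

# Raw network outputs (pre-softmax, since cross_entropy_with_softmax applies
# softmax internally); only the features need to be fed here
predictions = net.eval({input_var: X_test})
print(predictions.shape)   # (minibatchSize, outputs)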