Python Keras fit_generator: unexpected use of the __getitem__ method

Tags: python, neural-network, keras

I am using the fit_generator function to train my model and want to verify that my data is built and consumed as expected. My class, derived from keras.utils.Sequence(), implements the methods __getitem__, __len__ and on_epoch_end as follows:

class PairwiseSequence(Sequence):
    """Generator that returns a combination of simulations (over a
    parametrizable amount of timesteps) and the corresponding metric distance.

    pair_list: List of pairwise combinations of simulations
    results: dictionary with results for the metric distance between
             simulation pairs
    sim_files: List of filenames representing single timesteps
    batch_size: number of samples to process in a single inference run
    """

    def __init__(self, pair_list, results, mean, std, train=False,
                 sim_files=None, batch_size=1):
        self.pair_list = pair_list
        self.results = results
        self.batch_size = batch_size
        self.sim_files = sim_files
        self.mean = mean
        self.std = std
        self.train = train

    def __len__(self):
        # Number of batches per epoch.
        return math.ceil(len(self.pair_list) / self.batch_size)

    def __getitem__(self, idx):
        # Load one sample to determine the shape of the batch arrays.
        dummy = LOADING_METHOD(self.pair_list[0][0], self.sim_files)
        x_1 = np.zeros((self.batch_size,) + dummy.shape)
        x_2 = np.zeros((self.batch_size,) + dummy.shape)
        y = np.zeros((self.batch_size, 1))

        if self.train:
            #print((idx * self.batch_size + i) % len(self.pair_list), ',')
            print("training idx:", idx)
        else:
            print("validation idx:", idx)

        for i in range(self.batch_size):
            (sim1, sim2) = self.pair_list[(idx * self.batch_size + i) %
                                          len(self.pair_list)]
            x_1[i] = LOADING_METHOD(sim1, self.sim_files) - self.mean[0]
            x_1[i] /= self.std[0]
            x_2[i] = LOADING_METHOD(sim2, self.sim_files) - self.mean[1]
            x_2[i] /= self.std[1]
            y[i] = self.results[frozenset((sim1.ensemble, sim2.ensemble))]
        return [x_1, x_2], y

    def on_epoch_end(self):
        if self.train:
            print("training generator: epoch end")
        else:
            print("validation generator: epoch end")
        #random.shuffle(self.pair_list)
This class is used as the generator for both the training and the validation data (two separate instances).

As you can see, I am printing the idx argument of __getitem__ and printing a notification to the console at the end of each epoch. I call fit_generator as follows (with multiprocessing turned on):
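
The exact call is not preserved on this page. A minimal sketch of what such a call looks like; the model, the epoch count, the callback list and the PairwiseSequence constructor arguments (train_pairs, valid_pairs, results, mean, std, sim_files) are placeholders, and max_queue_size is shown at its default of 10:

train_gen = PairwiseSequence(train_pairs, results, mean, std, train=True,
                             sim_files=sim_files, batch_size=12)
valid_gen = PairwiseSequence(valid_pairs, results, mean, std, train=False,
                             sim_files=sim_files, batch_size=12)

# model and callbacks are placeholders; max_queue_size defaults to 10.
model.fit_generator(train_gen,
                    validation_data=valid_gen,
                    epochs=2,
                    use_multiprocessing=True,
                    workers=1,
                    max_queue_size=10,
                    callbacks=callbacks)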

I also take care of the shuffling of the data myself. With this configuration I would expect idx to run from 0 to len(generator) - 1, and on_epoch_end to be called afterwards. I have 372 samples for training and 93 samples for validation, with a batch size of 12, so idx should run from 0 to 30 for the training data and from 0 to 7 for the validation data. But __getitem__ is called more often than I expected, and on_epoch_end is even called in between! This is what the console output looks like:

batch_size: 12
len(train_gen): 31
len(valid_gen): 8
2018-02-14 08:45:09.041929: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
training idx: 0
training idx: 1
training idx: 2
training idx: 3
training idx: 4
training idx: 5
training idx: 6
training idx: 7
training idx: 8
training idx: 9
training idx: 10
training idx: 11
training idx: 12
training idx: 13
training idx: 14
training idx: 15
training idx: 16
training idx: 17
training idx: 18
training idx: 19
training idx: 20
training generator: epoch end
training idx: 21
training idx: 22
training idx: 23
training idx: 24
training idx: 25
training idx: 26
training idx: 27
training idx: 28
training idx: 29
training idx: 30
training idx: 0
validation generator: epoch end
validation idx: 0
training idx: 1
validation idx: 1
training idx: 2
validation idx: 2
training idx: 3
validation idx: 3
training idx: 4
validation idx: 4
training idx: 5
validation idx: 5
validation generator: epoch end
training idx: 6
validation idx: 6
training idx: 7
validation idx: 7
training idx: 8
validation idx: 0
training idx: 9
validation idx: 1
training idx: 10
validation idx: 2
validation idx: 3
validation idx: 4
validation idx: 5
validation idx: 6
validation idx: 7
validation idx: 0
validation idx: 1
validation idx: 2
Epoch 00000: val_loss improved from inf to 10512.69922, saving model to /home/stefan/vcs/MA/code/results/test/TB_dummy_distance_10513.hdf5
training idx: 11
training idx: 12
training idx: 13
training idx: 14
training idx: 15
training idx: 16
training idx: 17
training idx: 18
training idx: 19
training idx: 20
training generator: epoch end
training idx: 21
training idx: 22
training idx: 23
training idx: 24
training idx: 25
training idx: 26
training idx: 27
training idx: 28
training idx: 29
training idx: 30
training idx: 0
validation generator: epoch end
validation idx: 0
training idx: 1
validation idx: 1
training idx: 2
validation idx: 2
training idx: 3
validation idx: 3
training idx: 4
validation idx: 4
training idx: 5
validation idx: 5
validation generator: epoch end
training idx: 6
validation idx: 6
training idx: 7
validation idx: 7
validation idx: 0
training idx: 8
validation idx: 1
training idx: 9
validation idx: 2
training idx: 10
validation idx: 3
validation idx: 4
validation idx: 5
validation idx: 6
validation idx: 7
validation idx: 0
validation idx: 1
validation idx: 2
Epoch 00001: val_loss improved from 10512.69922 to 5905.95929, saving model to /home/stefan/vcs/MA/code/results/test/TB_dummy_distance_5906.hdf5
How does fit_generator use the __getitem__ and on_epoch_end methods? Does it also call these methods before the first epoch starts, to fetch some sample data for weight initialization? Is this behavior caused by some kind of caching?

Any help is greatly appreciated! Thanks in advance.
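
As a side note, the batch construction itself can be verified independently of fit_generator by driving the Sequence by hand. A short sketch, reusing the train_gen instance from the sketch above:

# Sanity check outside of fit_generator: iterate the Sequence manually.
for idx in range(len(train_gen)):     # idx runs 0..len(train_gen)-1, in order
    [x_1, x_2], y = train_gen[idx]
    assert x_1.shape[0] == 12 and y.shape == (12, 1)
train_gen.on_epoch_end()              # called exactly once per epoch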

Update: For testing purposes I changed the max_queue_size parameter of fit_generator to 1. This is the resulting terminal output:

batch_size: 12
len(train_gen): 31
len(valid_gen): 8
2018-02-14 10:10:40.001065: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
training idx: 0
training idx: 1
training idx: 2
training idx: 3
training idx: 4
training idx: 5
training idx: 6
training idx: 7
training idx: 8
training idx: 9
training idx: 10
training idx: 11
training idx: 12
training idx: 13
training idx: 14
training idx: 15
training idx: 16
training idx: 17
training idx: 18
training idx: 19
training idx: 20
training idx: 21
training idx: 22
training idx: 23
training idx: 24
training idx: 25
training idx: 26
training idx: 27
training idx: 28
training idx: 29
training idx: 30
training generator: epoch end
training idx: 0
training idx: 1
validation idx: 0
validation idx: 1
validation idx: 2
validation idx: 3
validation idx: 4
validation idx: 5
validation idx: 6
validation generator: epoch end
validation idx: 7
validation idx: 0
validation idx: 1
Epoch 00000: val_loss improved from inf to 18090.34473, saving model to /home/stefan/vcs/MA/code/results/test/TB_dummy_distance_18090.hdf5
training idx: 2
training idx: 3
training idx: 4
training idx: 5
training idx: 6
training idx: 7
training idx: 8
training idx: 9
training idx: 10
training idx: 11
training idx: 12
training idx: 13
training idx: 14
training idx: 15
training idx: 16
training idx: 17
training idx: 18
training idx: 19
training idx: 20
training idx: 21
training idx: 22
training idx: 23
training idx: 24
training idx: 25
training idx: 26
training idx: 27
training idx: 28
training idx: 29
training idx: 30
training generator: epoch end
training idx: 0
training idx: 1
validation idx: 0
validation idx: 1
validation idx: 2
validation idx: 3
validation idx: 4
validation idx: 5
validation idx: 6
validation generator: epoch end
validation idx: 7
validation idx: 0
validation idx: 1
Epoch 00001: val_loss did not improve

Now, at least in the first epoch, all training samples are queried. But for the validation data, and for the training data in the second epoch, on_epoch_end is still called too early.
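
The pattern in both logs is what a prefetching queue produces: a background producer keeps calling __getitem__ (and on_epoch_end once the epoch's indices are exhausted) while the training loop is still consuming earlier batches. The following is a self-contained illustration of that producer/consumer behavior, not Keras's actual implementation:

import queue
import threading
import time


class DummySequence:
    """Stand-in for a keras.utils.Sequence with 4 batches."""
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        print("produced idx:", idx, flush=True)
        return idx

    def on_epoch_end(self):
        print("epoch end fired", flush=True)


def prefetching_loop(sequence, max_queue_size=1, epochs=2):
    # The producer blocks only when the queue is full, so it runs up to
    # max_queue_size batches ahead of the consumer.
    q = queue.Queue(maxsize=max_queue_size)

    def producer():
        for _ in range(epochs):
            for idx in range(len(sequence)):
                q.put(sequence[idx])   # calls __getitem__(idx)
            sequence.on_epoch_end()    # fires when the index sequence is
                                       # exhausted, not when training on
                                       # those batches has finished

    threading.Thread(target=producer, daemon=True).start()

    for _ in range(epochs * len(sequence)):
        batch = q.get()
        time.sleep(0.05)               # simulate a slow training step
        print("consumed batch:", batch, flush=True)


prefetching_loop(DummySequence())

Even with max_queue_size=1 the producer can compute one batch ahead of the one being trained on, which matches the output above: on_epoch_end fires right after the last index of an epoch has been fetched, while training is still a couple of batches behind.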

The code below will work for you:

import numpy as np

def gen(train_data):
    print('generator initiated')
    # Define a batch size
    batch_size = 64

    # Complete length of the data
    data_size = len(train_data)

    # Total number of batches that will be created
    num_batches = int(data_size / batch_size)
    if (num_batches * batch_size) < data_size:
        num_batches += 1

    # Loop forever; fit_generator stops pulling batches after
    # steps_per_epoch * epochs steps.
    while True:
        for i in range(num_batches):
            start_index = i * batch_size
            end_index = min((i + 1) * batch_size, data_size)
            x_train = train_data[start_index:end_index]

            # Do some preprocessing (add_pad, pad and y_train_padded are the
            # answerer's own helpers/data and are not defined in the answer)
            x_train_padded = np.array(add_pad(x_train, 3, pad))

            yield (x_train_padded, y_train_padded)


fun_model.fit_generator(gen(train_data),
                        steps_per_epoch=int(len(train_data) / 64),
                        epochs=50, callbacks=callbacks_list,
                        verbose=2, shuffle=True)

The question about the extra batches has been answered before. Why ask here when you can read the code?

I cannot reproduce it: on_epoch_end always shows up at the correct position on my machine. What is your platform? Which version of Keras do you use? Does the problem persist if you add flush=True to the print calls?

@Yu-Yang: I am using the Keras from tensorflow.python.keras together with TensorFlow 1.4.0. I just tried it on another machine with the same TensorFlow version, and there on_epoch_end also shows up at the correct position; using flush=True also fixed the output on my first machine! Thank you very much!!!

Thanks for your answer, but I have to use a keras.utils.Sequence() instance instead of a plain generator to avoid duplicated data when using multiprocessing. I only deactivated multiprocessing for debugging purposes. Besides, this does not explain why __getitem__ is called so often (and on_epoch_end in between).