Python Keras: network doesn't train with fit_generator()

Tags: python, deep-learning, keras

I'm using Keras on a large dataset (music auto-tagging with the MagnaTagATune dataset), so I'm trying to use the fit_generator() function with a custom data generator. But the loss and metric values do not change during training; it looks like my network is not training at all.

When I use the fit() function instead of fit_generator(), everything works fine, but I can't keep the whole dataset in memory.

I have tried both the Theano and TensorFlow backends.
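
(For context: with the Keras 1.x API used below, fit_generator() expects a generator or iterator that yields (X, y) batch tuples indefinitely, while samples_per_epoch and nb_val_samples tell Keras how many samples make up an epoch. A minimal sketch of that contract, with made-up shapes standing in for the real mel-spectrogram data:)

import numpy as np

def toy_batch_generator(batch_size=32):
    # Must loop forever: Keras keeps pulling batches from it across epochs.
    while True:
        X = np.random.rand(batch_size, 1, 96, 1366).astype(np.float32)  # fake mel-spectrograms (shape is illustrative)
        y = np.random.randint(0, 2, size=(batch_size, 50))              # fake binary vectors for the 50 tags
        yield X, y

# model.fit_generator(toy_batch_generator(), samples_per_epoch=750, nb_epoch=80)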

Main code:

if __name__ == '__main__':
    model = models.FCN4()
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'categorical_accuracy', 'precision', 'recall'])
    gen = mttutils.generator_v2(csv_path, melgrams_dir)
    history = model.fit_generator(gen.generate(0,750),
                                  samples_per_epoch=750,
                                  nb_epoch=80,
                                  validation_data=gen.generate(750,1000,False),
                                  nb_val_samples=250)
    # RESULTS SAVING
    np.save(output_history, history.history)
    model.save(output_model)
Class generator_v2:

genres = ['guitar', 'classical', 'slow', 'techno', 'strings', 'drums', 'electronic', 'rock', 'fast',
        'piano', 'ambient', 'beat', 'violin', 'vocal', 'synth', 'female', 'indian', 'opera', 'male', 'singing',
        'vocals', 'no vocals', 'harpsichord', 'loud', 'quiet', 'flute', 'woman', 'male vocal', 'no vocal',
        'pop', 'soft', 'sitar', 'solo', 'man', 'classic', 'choir', 'voice', 'new age', 'dance', 'male voice',
        'female vocal', 'beats', 'harp', 'cello', 'no voice', 'weird', 'country', 'metal', 'female voice', 'choral']

def __init__(self, csv_path, melgrams_dir):

    def get_dict_vals(dictionary, keys):
        vals = []
        for key in keys:
            vals.append(dictionary[key])
        return vals

    self.melgrams_dir = melgrams_dir
    with open(csv_path, newline='') as csvfile:
        reader = csv.DictReader(csvfile, dialect='excel-tab')
        self.labels = []
        for row in reader:
            labels_arr = np.array(get_dict_vals(
                row, self.genres)).astype(np.int)
            labels_arr = labels_arr.reshape((1, labels_arr.shape[0]))
            if (np.sum(labels_arr) > 0):
                self.labels.append((row['mp3_path'], labels_arr))
        self.size = len(self.labels)


def generate(self, begin, end):
    while(1):
        for count in range(begin, end):
            try:
                item = self.labels[count]
                mels = np.load(os.path.join(
                    self.melgrams_dir, item[0] + '.npy'))
                tags = item[1]
                yield((mels, tags))
            except FileNotFoundError:
                continue
To prepare the arrays for the fit() function, I use the following code:

def TEST_get_data_array(csv_path, melgrams_dir):
    # Builds the full (x, y) arrays in memory for fit()
    gen = generator_v2(csv_path, melgrams_dir).generate(0, 100)
    item = next(gen)
    x = np.array(item[0])
    y = np.array(item[1])
    for i in range(0, 100):
        item = next(gen)
        x = np.concatenate((x, item[0]), axis=0)
        y = np.concatenate((y, item[1]), axis=0)
    return (x, y)
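
(For comparison, a rough sketch of the corresponding fit() call that works with these in-memory arrays; the batch_size value and variable names here are assumptions, not taken from the original code:)

x_train, y_train = TEST_get_data_array(csv_path, melgrams_dir)
model.fit(x_train, y_train,
          batch_size=32,      # assumed value
          nb_epoch=80)        # Keras 1.x argument name, as elsewhere in this post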
Sorry if my code style is bad. Thank you!

UPD 1: I tried using return(X, y) instead of yield(X, y), but nothing changed.

Part of my new generator class:

def generate(self):  
    if((self.count < self.begin) or (self.count >= self.end)):
        self.count = self.begin
    item = self.labels[self.count]
    mels = np.load(os.path.join(self.melgrams_dir, item[0] + '.npy'))
    tags = item[1]
    self.count = self.count + 1
    return((mels, tags))

def __next__(self):   # fit_generator() uses this method
    return self.generate() 
history = model.fit_generator(tr_gen,
                              samples_per_epoch = tr_gen.size,
                              nb_epoch = 120,
                              validation_data = val_gen,
                              nb_val_samples = val_gen.size)
Logs:

Epoch 1/120
10554/10554 [==============================] - 545s - loss: 1.7240 - acc: 0.8922 
Epoch 2/120
10554/10554 [==============================] - 526s - loss: 1.8922 - acc: 0.8820 
Epoch 3/120
10554/10554 [==============================] - 526s - loss: 1.8922 - acc: 0.8820 
Epoch 4/120
10554/10554 [==============================] - 526s - loss: 1.8922 - acc: 0.8820 
... etc (loss is always 1.8922; acc is always 0.8820)

In the method 'generate' there is a while statement:

def generate(self, begin, end):
    while(1): # this
        for count in range(begin, end):
            try:
                # something
                yield(...)

            except FileNotFoundError:
                continue
I think this statement is unnecessary, so:

def generate(self, begin, end):
    for count in range(begin, end):
        try:
            # something
            yield(...)

        except FileNotFoundError:
            continue

I had the same problem with the yield approach, so I just stored the current index and used a return statement to return one batch per call.

So I just used return(X, y) instead of yield(X, y), and it worked. I don't know why; it would be cool if someone could explain it.

EDIT: You need to pass the generator to the function, not just call the function. Something like this:

model.fit_generator(gen, samples_per_epoch=750,
                                  nb_epoch=80,
                                  validation_data=gen,
                                  nb_val_samples=250)

Keras will call your __next__ function while training on the data.
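
(For illustration, a minimal sketch of an iterator-style class that fit_generator() can consume this way; the class name and wrap-around logic are hypothetical, not taken from the question's code, and it assumes the same per-sample (mels, tags) pairs as above:)

class SampleIterator(object):
    def __init__(self, samples):
        self.samples = samples   # list of (mels, tags) numpy pairs
        self.count = 0

    def __iter__(self):
        return self

    def __next__(self):          # next(obj) lands here in Python 3
        item = self.samples[self.count]
        self.count = (self.count + 1) % len(self.samples)  # wrap around so it never runs out
        return item

    next = __next__              # Python 2 alias

# tr_gen = SampleIterator(training_pairs)
# model.fit_generator(tr_gen, samples_per_epoch=len(training_pairs), nb_epoch=120)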

Comments:

It raises an exception: File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 1528, in fit_generator, str(generator_output)) ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None. So the generator has to be endless, since it must return the next batch of data on every call, but nothing changed. Please check whether I understood you correctly (my code with the return statement is at the end of the main post). Thank you very much.

It should work when the generator is passed like this. If it doesn't, could you post the error message?

Yes, I am passing my generator to the fit_generator function as shown. There are no exceptions or errors. The problem is that the value of the loss function does not change during training (I have added the logs to the main post). It looks like the network is not updating its weights. It can't be a bug in the model, because the fit function (with arrays instead of a generator) works fine.

Try passing more elements to the model in each iteration, i.e. have your __next__ method return, for example, 32 elements. Maybe the variance within your classes is too high to use a batch size of 1.

Were you able to find a solution to the problem? You could shuffle the data before for count in range(begin, end).

@Ladislao I am facing the same problem. Could you tell me what procedure you followed to solve it? Thanks in advance.

@prasanna, as mentioned in the comments on the top answer, I just put more elements in one batch, and that helped.
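
(Building on those comments, a hedged sketch of a batched, shuffled variant of the question's generate() method; batch_size=32, the shuffling, and the axis-0 stacking are assumptions layered on top of the original code, and it assumes each saved .npy melgram has a leading batch axis of 1, as the tag arrays do via the reshape in __init__:)

import os
import random
import numpy as np

def generate_batches(self, begin, end, batch_size=32):
    indices = list(range(begin, end))
    while True:                   # endless, as fit_generator expects
        random.shuffle(indices)   # shuffle the order each pass, as suggested above
        mels_batch, tags_batch = [], []
        for count in indices:
            try:
                item = self.labels[count]
                mels = np.load(os.path.join(self.melgrams_dir, item[0] + '.npy'))
            except FileNotFoundError:
                continue
            mels_batch.append(mels)
            tags_batch.append(item[1])
            if len(mels_batch) == batch_size:
                # stack the leading batch axes into (batch_size, ...) arrays
                yield (np.concatenate(mels_batch, axis=0),
                       np.concatenate(tags_batch, axis=0))
                mels_batch, tags_batch = [], []

With something like this, samples_per_epoch would still count individual samples, so the fit_generator() arguments shown earlier would not need to change.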