Python 3.x tuple index out of range - training an audio model
I am trying to analyze an audio file and train a model on the features extracted from it, but while fitting the model I get a "tuple index out of range" error. I have noted the shape of every array I am using in a comment next to its print statement. Could you help me understand how to define the dimensions when building the model? Please let me know if you need more details.
import glob
import numpy as np
import pandas as pd
import random
import librosa
import librosa.display
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from tensorflow.keras.layers import LSTM, Dense, Dropout, Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
X, sample_rate = librosa.load(r'C:\Users\Sumanth\Desktop\voice\Speaker-275-3.wav', res_type='kaiser_fast')
print(X.shape) # Shape is (439238,)
#extracting the MFCC feature from Audio signal
mfccs = librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40)
print(mfccs.shape) # Shape is (40, 858)
#manually assigning the label as 275
z = np.asarray(275)
#Validation data
val_x, sample_rate = librosa.load(r'C:\Users\Sumanth\Desktop\voice\Speaker-275-2.wav', res_type='kaiser_fast')
print(val_x.shape) # Shape is (292826,)
val_y=np.asarray(275)
#Building the model
model = Sequential()
model.add(Dense(256, input_shape=(858,),activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(275,activation='softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
#training our model
model.fit(mfccs, z, epochs=5, validation_data=(val_x, val_y))
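One way to see the shape mismatch: Keras treats axis 0 as the sample axis, so `mfccs` with shape `(40, 858)` looks like 40 samples of 858 features, while `z = np.asarray(275)` has no sample axis at all. A minimal sketch of reconciling the shapes, using random stand-ins for the librosa MFCC matrices (the real arrays would come from `librosa.feature.mfcc` on the wav files):

```python
import numpy as np

# Stand-ins for the MFCC matrices, shape (n_mfcc, n_frames)
mfccs = np.random.rand(40, 858)
val_mfccs = np.random.rand(40, 572)

# Average over the time axis so each clip becomes one 40-dim feature
# vector, then add an explicit sample axis of length 1.
x_train = mfccs.mean(axis=1)[np.newaxis, :]      # shape (1, 40)
x_val = val_mfccs.mean(axis=1)[np.newaxis, :]    # shape (1, 40)

# One-hot labels whose first dimension matches the sample count;
# with speaker IDs up to 275 that means 276 classes.
num_classes = 276
y_train = np.zeros((1, num_classes))
y_train[0, 275] = 1.0
y_val = y_train.copy()

print(x_train.shape, y_train.shape)  # (1, 40) (1, 276)
```

The first `Dense` layer would then take `input_shape=(40,)` and the output layer `Dense(276, activation='softmax')`. With a single training clip per speaker this only demonstrates the shapes, not a workable training setup; real training needs many clips per class.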
------------------- Error ------------------------------------------------------
Please add the full traceback. — I have edited my post to add the full traceback. Thank you. Hi @MatiasValdenegro, could you please help me with this error?
IndexError Traceback (most recent call last)
<ipython-input-31-adaf98404d0e> in <module>
40 model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
41 #training our model
---> 42 model.fit(mfccs, z, epochs=5, validation_data=(val_x, val_y))
43
44
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
726 max_queue_size=max_queue_size,
727 workers=workers,
--> 728 use_multiprocessing=use_multiprocessing)
729
730 def evaluate(self,
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
222 validation_data=validation_data,
223 validation_steps=validation_steps,
--> 224 distribution_strategy=strategy)
225
226 total_samples = _get_total_number_of_samples(training_data_adapter)
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
545 max_queue_size=max_queue_size,
546 workers=workers,
--> 547 use_multiprocessing=use_multiprocessing)
548 val_adapter = None
549 if validation_data:
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
592 batch_size=batch_size,
593 check_steps=False,
--> 594 steps=steps)
595 adapter = adapter_cls(
596 x,
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2532 # Check that all arrays have the same length.
2533 if not self._distribution_strategy:
-> 2534 training_utils.check_array_lengths(x, y, sample_weights)
2535 if self._is_graph_network and not self.run_eagerly:
2536 # Additional checks to avoid users mistakenly using improper loss fns.
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_utils.py in check_array_lengths(inputs, targets, weights)
661
662 set_x = set_of_lengths(inputs)
--> 663 set_y = set_of_lengths(targets)
664 set_w = set_of_lengths(weights)
665 if len(set_x) > 1:
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_utils.py in set_of_lengths(x)
656 return set([
657 y.shape[0]
--> 658 for y in x
659 if y is not None and not is_tensor_or_composite_tensor(y)
660 ])
~\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_utils.py in <listcomp>(.0)
657 y.shape[0]
658 for y in x
--> 659 if y is not None and not is_tensor_or_composite_tensor(y)
660 ])
661
IndexError: tuple index out of range
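The failing expression `y.shape[0]` in the last frame points at the labels: `np.asarray(275)` produces a 0-dimensional array whose `.shape` is the empty tuple, so indexing it raises exactly this error. A minimal reproduction:

```python
import numpy as np

# A bare scalar becomes a 0-d array with an empty shape tuple
z = np.asarray(275)
print(z.shape)        # ()

# Keras' check_array_lengths evaluates y.shape[0] on each target
# array, which fails on an empty shape tuple:
try:
    z.shape[0]
except IndexError as err:
    print(err)        # tuple index out of range

# Wrapping the scalar in a list yields a length-1 sample axis instead
z_fixed = np.asarray([275])
print(z_fixed.shape)  # (1,)
```

So the immediate crash comes from the 0-d label arrays `z` and `val_y`, independent of the model architecture; the inputs and labels still need matching sample counts after that is fixed.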