Python audio processing with Conv1D in Keras

Tags: python, tensorflow, keras, neural-network, conv-neural-network

I am learning Keras through an audio classification task; in practice, I am adapting existing code with Keras.

The shapes of the dataset are:

X_train shape = (800, 32, 1)
y_train shape = (800, 10)
X_test shape = (200, 32, 1)
y_test shape = (200, 10)
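
(As a quick shape check, arrays with these shapes can be mocked up with random NumPy data; the values below are placeholders for illustration, not the real dataset.)

import numpy as np

# Placeholder arrays with the same shapes, only to verify the model builds and fits
X_train = np.random.rand(800, 32, 1).astype("float32")
y_train = np.eye(10)[np.random.randint(0, 10, 800)]  # one-hot labels, shape (800, 10)
X_test = np.random.rand(200, 32, 1).astype("float32")
y_test = np.eye(10)[np.random.randint(0, 10, 200)]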
The model:

# Imports assumed by this snippet (tf.keras, as used in Colab)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, BatchNormalization, MaxPooling1D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

model = Sequential()

model.add(Conv1D(filters=256, kernel_size=5, input_shape=(32,1),     activation="relu"))
model.add(BatchNormalization(momentum=0.9))
model.add(MaxPooling1D(2))
model.add(Dropout(0.5))
model.add(Conv1D(filters=256, kernel_size=5, activation="relu"))
model.add(BatchNormalization(momentum=0.9))
model.add(MaxPooling1D(2))
model.add(Dropout(0.5))

model.add(Flatten())
model.add(Dense(128, activation="relu", ))
model.add(Dense(10, activation='softmax'))

model.compile(
    loss='categorical_crossentropy',
    optimizer = Adam(lr=0.001),
    metrics = ['accuracy'],
)
model.summary()

red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=2, factor=0.5, min_delta=0.01)
check = ModelCheckpoint(filepath=r'/content/drive/My Drive/Colab Notebooks/gen/cnn.hdf5', verbose=1, save_best_only=True)

History = model.fit(X_train,
                y_train,
                epochs=100,
                #batch_size=512,
                validation_data = (X_test, y_test),
                verbose = 2,
                callbacks=[check, red_lr],
                shuffle=True )

I do not understand why val_acc stays in the 70% range. I have tried modifying the model architecture, including the optimizer, but there was no improvement.

Also, is it okay that there is such a big difference between loss and val_loss?

How can I improve the accuracy to above 80%... any help?


Thank you.

I figured it out: I used Keras's concatenate function to join all the convolutional layers, and that gave the best performance.
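
A minimal sketch of what that could look like with the functional API and Concatenate (the branch count and kernel sizes below are assumptions for illustration, not the exact architecture used):

from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Dense, Concatenate
from tensorflow.keras.models import Model

inputs = Input(shape=(32, 1))

branches = []
for kernel_size in (3, 5, 7):  # one Conv1D branch per receptive-field size (illustrative)
    x = Conv1D(128, kernel_size, padding="same", activation="relu")(inputs)
    x = MaxPooling1D(2)(x)
    x = Flatten()(x)
    branches.append(x)

merged = Concatenate()(branches)  # join all convolutional branches
x = Dense(128, activation="relu")(merged)
outputs = Dense(10, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])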

What happens if you increase the patience to 20 and remove red_lr?

I checked, and it returns a val_acc of 52%...
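
For readers trying that suggestion, a rough sketch of the two possible readings (reusing the model and the check callback defined above; purely illustrative):

# Variant 1: keep ReduceLROnPlateau but with a much longer patience
red_lr = ReduceLROnPlateau(monitor='val_loss', patience=20, verbose=2, factor=0.5, min_delta=0.01)

# Variant 2: drop the scheduler and train with the checkpoint callback only
history = model.fit(X_train, y_train,
                    epochs=100,
                    validation_data=(X_test, y_test),
                    verbose=2,
                    callbacks=[check],  # no red_lr
                    shuffle=True)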