Training a Keras model takes a very long time


I am working on a multi-class classification problem. The dataset looks like this:

| feature 1 | feature 3 | feature 4 | feature 2 |
|-----------|-----------|-----------|-----------|
| 1.302     | 102.987   | 1.298     | 99.8      |
| 1.318     | 102.587   | 1.998     | 199.8     |
The four features are floats, and my target variable takes the classes 1, 2, or 3. When I build the following models and train them, they take a very long time to converge (24 hours in and still running).

I used a Keras model, as follows:

def create_model(optimizer='adam', init='uniform'):
    # create model
    if verbose: print("**Create model with optimizer: %s; init: %s" % (optimizer, init) )
    model = Sequential()
    model.add(Dense(16, input_dim=X.shape[1], kernel_initializer=init, activation='relu'))
    model.add(Dense(8, kernel_initializer=init, activation='relu'))
    model.add(Dense(4, kernel_initializer=init, activation='relu'))
    model.add(Dense(1, kernel_initializer=init, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
Fitting the model:

best_epochs = 200
best_batch_size = 5
best_init = 'glorot_uniform'
best_optimizer = 'rmsprop'
verbose=0
model_pred = KerasClassifier(build_fn=create_model, optimizer=best_optimizer, init=best_init, epochs=best_epochs, batch_size=best_batch_size, verbose=verbose)
model_pred.fit(X_train,y_train)
I followed this tutorial:

And a fast.ai model, as follows:

cont_names = [ 'feature1', 'feature2', 'feature3', 'feature4']
procs = [FillMissing, Categorify, Normalize]
test = TabularList.from_df(test,cont_names=cont_names, procs=procs)
data = (TabularList.from_df(train, path='.', cont_names=cont_names, procs=procs)
                        .random_split_by_pct(valid_pct=0.2, seed=43)
                        .label_from_df(cols = dep_var)
                        .add_test(test, label=0)
                        .databunch())

learn = tabular_learner(data, layers=[1000, 200, 15], metrics=accuracy, emb_drop=0.1, callback_fns=ShowGraph)
I followed the tutorial below.


I don't understand why both models take so long to run. Is there an error in my input? Any help is greatly appreciated.

With 200 epochs and more than 138k training examples (plus almost 35k test examples), you are processing a total of 34,626,800 (~35M) examples shown to the network. Those are big numbers. Assuming you are training on a CPU, this can take several hours or even days, depending on your hardware.
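As a rough sanity check on those numbers (only the 34,626,800 total and the question's `batch_size=5` come from this page; the per-epoch count is derived from them):

```python
# Back-of-the-envelope check of the training volume described above.
epochs = 200
total_shown = 34_626_800                    # figure quoted in the answer
examples_per_epoch = total_shown // epochs  # ~138k train + ~35k test rows
print(examples_per_epoch)  # 173134 examples pass through the network per epoch

# With batch_size=5 (from the question's best_batch_size), each epoch also
# triggers a very large number of weight updates, which adds per-step overhead:
updates = (examples_per_epoch // 5) * epochs
print(updates)
```

A larger batch size (e.g. 64 or 128) alone usually cuts wall-clock time dramatically on datasets of this size.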
One thing you can do is reduce the number of epochs and check whether an earlier model is already acceptable.
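That "stop earlier" idea can also be automated. Here is a pure-Python sketch of the patience logic behind it (in Keras itself you would instead pass a `keras.callbacks.EarlyStopping(patience=...)` callback to `fit`; the loss history below is made up for illustration):

```python
def stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training would stop early,
    or the last epoch if the loss keeps improving."""
    best, wait = float('inf'), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0   # new best: reset the patience counter
        else:
            wait += 1              # no improvement this epoch
            if wait >= patience:
                return epoch       # stop: no improvement for `patience` epochs
    return len(val_losses) - 1

# Validation loss plateaus after epoch 3, so training stops at epoch 6
# instead of running all 200 epochs.
history = [1.0, 0.8, 0.7, 0.65, 0.66, 0.66, 0.67, 0.68]
print(stop_epoch(history))
```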

How many training examples do you have? How many epochs? Could you add the code block with the `fit` call? – alan.elkin

@alan.elkin I added it to the question. Thanks.

It looks like you have several classes (3, I would guess from the description above). If this is multi-class classification, why are you using `binary_crossentropy`? You need to use `categorical_crossentropy` or, depending on how your target labels are encoded, `sparse_categorical_crossentropy`.

@Ricky Does this answer your question?
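To make that last comment concrete: with target classes 1, 2 and 3 you would replace the final `Dense(1, activation='sigmoid')` with `Dense(3, activation='softmax')` and compile with `loss='sparse_categorical_crossentropy'` on 0-based labels. A small NumPy sketch of that loss (the class scores here are invented for illustration):

```python
import numpy as np

y = np.array([1, 2, 3, 1])   # target classes as in the question
y_sparse = y - 1             # 0-based labels for sparse_categorical_crossentropy

# One row of 3 class scores per example; a Dense(3, activation='softmax')
# layer would produce the probabilities directly.
logits = np.array([[2.0, 0.1, 0.1],
                   [0.1, 2.0, 0.1],
                   [0.1, 0.1, 2.0],
                   [2.0, 0.1, 0.1]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
# Sparse categorical cross-entropy: -log probability of the true class.
loss = -np.log(probs[np.arange(len(y_sparse)), y_sparse]).mean()
print(float(loss))
```

With a single sigmoid output, labels 2 and 3 cannot even be represented, which also makes convergence look much slower than it is.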