
Why isn't my neural network learning? (Python TensorFlow CNN)


I am trying to solve a binary classification problem on DNA sequences that are roughly 2 million bases long.

I decided to one-hot encode the input DNA sequences.
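As a sketch of what that encoding could look like (the A/C/G/T channel order and the helper name are my assumptions, not from the original post):

```python
import numpy as np

# Assumed channel order; the post does not specify its encoding.
BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot_dna(seq):
    """One-hot encode a DNA string into an array of shape (len(seq), 4)."""
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        out[i, BASE_INDEX[base]] = 1.0
    return out
```

Each position becomes a length-4 vector with a single 1, which matches the model's expected input shape of `(None, 4)`.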

I am using TensorFlow and Keras (Python).

I use the Adam optimizer:

optimizer = keras.optimizers.Adam(learning_rate=learningrate, name="Adam")
A very simple architecture:

from tensorflow import keras
from tensorflow.keras.layers import Conv1D, Dense, GlobalAvgPool1D

ishape = (None, 4)
model = keras.Sequential()
model.add(Conv1D(filternumber, ksize, activation='relu', input_shape=ishape))
model.add(GlobalAvgPool1D(data_format="channels_last"))
model.add(Dense(2, activation='sigmoid'))
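One thing worth noting: `tf.nn.weighted_cross_entropy_with_logits` (used in the training loop) expects raw logits, while this model already applies a sigmoid. A minimal sketch of a pairing consistent with that loss, assuming binary classification (the filter count of 32 and kernel size of 3 here are placeholders):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, Dense, GlobalAvgPool1D

model = keras.Sequential([
    keras.Input(shape=(None, 4)),      # variable-length one-hot DNA input
    Conv1D(32, 3, activation="relu"),  # placeholder filter count / kernel size
    GlobalAvgPool1D(),
    Dense(1),  # linear output: *_with_logits losses apply the sigmoid internally
])
```

With a sigmoid already in the model, the loss effectively applies the sigmoid twice, which flattens gradients and can stall learning.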
This is the training loop:

for epoch in range(epochsize):
    print("Epoch number " + str(epoch) + "_____________")
    batchnumber = 0
    batchavgloss = []
    for batch in batchlist:
        loss_value = tf.constant(0.)
        mini_batch_losses = []
        with tf.GradientTape() as tape:
            for seqref in batch:
                seqref = int(seqref)
                X_train, y_train = loadvalue(seqref)  # load the elements
                logits = model(X_train, training=True)
                loss_value = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(y_train, logits, class_weights))
                mini_batch_losses.append(loss_value)
            loss_avg = tf.reduce_mean(mini_batch_losses)
        print("batch " + str(batchnumber + 1) + " losses:" + str(loss_avg.numpy()))
        batchavgloss.append(loss_avg.numpy())
        batchnumber += 1
        grads = tape.gradient(loss_avg, model.trainable_weights)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.trainable_weights))

    epochavgloss = sum(batchavgloss) / len(batchavgloss)
    if epochavgloss < bestepochloss:
        bestepochloss = epochavgloss
        model.save(savepath)
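The epoch-level bookkeeping at the end of the loop (average the per-batch losses, checkpoint only when the epoch improves) can be isolated as a small pure-Python helper; `save_fn` here is a hypothetical stand-in for `model.save`:

```python
def update_best(batch_avg_losses, best_so_far, save_fn):
    """Average this epoch's batch losses; save and return the new best
    if it improves on best_so_far, otherwise return best_so_far unchanged."""
    epoch_avg = sum(batch_avg_losses) / len(batch_avg_losses)
    if epoch_avg < best_so_far:
        save_fn()
        return epoch_avg
    return best_so_far
```

Starting with `best_so_far = float("inf")` makes the first epoch always save, matching the loop's behavior when `bestepochloss` is initialized high.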
Here is an example of the loss values across the epochs: 0.8655851910114288, then 0.854682110786438 repeated unchanged for every remaining epoch.

Can anyone help me?

These are the hyperparameter combinations I tried:

Learning rate 0.1 Batch 2 ksize 3
Learning rate 0.1 Batch 2 ksize 32
Learning rate 0.1 Batch 16 ksize 3
Learning rate 0.1 Batch 16 ksize 32

Learning rate 0.01 Batch 2 ksize 3
Learning rate 0.01 Batch 2 ksize 32
Learning rate 0.01 Batch 16 ksize 3
Learning rate 0.01 Batch 16 ksize 32
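The eight runs above form a small grid over learning rate, batch size, and kernel size; a sketch of enumerating the same combinations:

```python
from itertools import product

# 2 learning rates x 2 batch sizes x 2 kernel sizes = 8 runs,
# matching the combinations listed above.
grid = list(product([0.1, 0.01], [2, 16], [3, 32]))
for lr, batch, ksize in grid:
    print(f"Learning rate {lr} Batch {batch} ksize {ksize}")
```

If the loss is identically flat across all eight, that points toward a structural bug (e.g. the loss/activation mismatch) rather than poor hyperparameter choice.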