TensorFlow Keras isn't learning anything

Tags: tensorflow, keras, conv-neural-network, python-3.7

I am trying to learn Keras, and none of the code I run is learning anything, from the example code in Deep Learning with Python to the code from the last link. With that last one I could not use the full 10000-image dataset, but even with a 1589-image training set the accuracy stays at .5.

I am almost starting to think the problem is my overclocked CPU and RAM, but that is more of a wild guess.

I initially thought the problem was that I had tensorflow 2.0.0-alpha installed. However, even after switching to the regular tensorflow-gpu, nothing is learning.
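
For completeness, a quick way to confirm which builds are actually being imported (a minimal sketch, assuming the standalone keras package that the code below uses):

    # Print the versions picked up at import time, to make sure the
    # tensorflow-2.0.0-alpha install is not shadowing tensorflow-gpu.
    import tensorflow as tf
    import keras

    print("TensorFlow:", tf.__version__)
    print("Keras:", keras.__version__)

If this still prints the 2.0.0-alpha build, the environment rather than the model code would be the first thing to check.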

    # Convolutional Neural Network

    # Importing the Keras libraries and packages
    from keras.models import Sequential
    from keras.layers import Convolution2D
    from keras.layers import MaxPooling2D
    from keras.layers import Flatten
    from keras.layers import Dense
    from keras.models import model_from_json
    import os
    # Initialize the CNN
    classifier = Sequential()

    # Step 1: Convolution
    classifier.add(Convolution2D(32, 3, 3, input_shape = (64, 64, 3), activation = 'relu'))

    # Step 2: Pooling
    classifier.add(MaxPooling2D(pool_size = (2,2)))


    # Step 3: Flattening
    classifier.add(Flatten())

    # Step 4: Full connection
    classifier.add(Dense(output_dim = 128, activation = 'relu'))
    classifier.add(Dense(output_dim = 64, activation = 'relu'))
    classifier.add(Dense(output_dim = 32, activation = 'relu'))
    classifier.add(Dense(output_dim = 1, activation = 'sigmoid'))

    # Compiling the CNN
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

    # Part 2: Fitting the CNN to the images
    from keras.preprocessing.image import ImageDataGenerator

    train_datagen = ImageDataGenerator(
        rescale=1/.255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

    test_datagen = ImageDataGenerator(rescale=1./255)

    training_set = train_datagen.flow_from_directory(
        'dataset/training_set',
        target_size=(64, 64),
        batch_size=32,
        class_mode='binary')

    test_set = test_datagen.flow_from_directory(
        'dataset/test_set',
        target_size=(64, 64),
        batch_size=32,
        class_mode='binary')

    from IPython.display import display
    from PIL import Image

    classifier.fit_generator(
        training_set,
        steps_per_epoch=1589,
        epochs=10,
        validation_data=test_set,
        validation_steps=378)

    import numpy as np
    from keras.preprocessing import image
    test_image = image.load_img('dataset/test_set/cats/cat.4012.jpg', target_size = (64,64))
    test_image = image.img_to_array(test_image)
    test_image = np.expand_dims(test_image, axis = 0)
    result = classifier.predict(test_image)
    training_set.class_indices
    if result[0][0] >= 0.5:
        prediction = 'dog'
    else:
        prediction = 'cat'
    print(prediction)
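
One detail worth flagging in the generator setup above: the training generator uses rescale=1/.255 (roughly 3.92) while the test generator uses rescale=1./255, and steps_per_epoch is set to the number of training images rather than the number of batches. A corrected sketch of just that part, assuming the intent is to scale pixels into [0, 1] and make one pass over the 1589 training images per epoch:

    # Both generators now scale pixel values into [0, 1].
    train_datagen = ImageDataGenerator(
        rescale=1./255,      # was 1/.255, which multiplies pixels by ~3.92
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

    training_set = train_datagen.flow_from_directory(
        'dataset/training_set',
        target_size=(64, 64),
        batch_size=32,
        class_mode='binary')

    # One pass over the data per epoch: steps = samples // batch_size.
    classifier.fit_generator(
        training_set,
        steps_per_epoch=1589 // 32,
        epochs=10,
        validation_data=test_set,
        validation_steps=378 // 32)   # assuming 378 is the number of validation images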
The Deep Learning with Python example:

    from keras.datasets import imdb

    (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
    import numpy as np

    def vectorize_sequences(sequences, dimension=10000):
        results = np.zeros((len(sequences), dimension))
        for i,sequence in enumerate(sequences):
            results[i, sequence]=1.
        return results
    x_train = vectorize_sequences(train_data)
    x_test =  vectorize_sequences(test_data)

    y_train = np.asarray(train_labels).astype('float32')
    y_test = np.asarray(test_labels).astype('float32')

    from keras import models
    from keras import layers

    model = models.Sequential()
    model.add(layers.Dense(16, activation='relu',input_shape=(10000,)))
    model.add(layers.Dense(16, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    x_val = x_train[:10000]
    partial_x_train = x_train[10000:]
    y_val = y_train[:10000]
    partial_y_train = y_train[10000:]
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['acc'])
    history = model.fit(partial_x_train,
                        partial_y_train,
                        epochs=20,
                        batch_size=512,
                        validation_data=(x_val, y_val))
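
Since the IMDB model also flat-lines at chance accuracy, a quick sanity check (a debugging sketch, not part of the book's example) is to confirm the labels are roughly balanced and that the network can overfit a small slice of the training data; if it cannot, the problem is more likely in the environment or optimizer setup than in the data:

    # Labels should be close to a 50/50 split.
    print("label mean:", y_train.mean())

    # A network this size should overfit 512 examples almost perfectly.
    tiny_x, tiny_y = x_train[:512], y_train[:512]
    model.fit(tiny_x, tiny_y, epochs=20, batch_size=64, verbose=0)
    loss, acc = model.evaluate(tiny_x, tiny_y, verbose=0)
    print("accuracy on the 512 training examples:", acc)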
Dogs vs. cats output:

 Epoch 1/10
1589/1589 [==============================] - 112s 70ms/step - loss: 7.8736 - acc: 0.5115 - val_loss: 7.9528 - val_acc: 0.4976
Epoch 2/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8697 - acc: 0.5117 - val_loss: 7.9606 - val_acc: 0.4971
Epoch 3/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8740 - acc: 0.5115 - val_loss: 7.9499 - val_acc: 0.4978
Epoch 4/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8674 - acc: 0.5119 - val_loss: 7.9634 - val_acc: 0.4969
Epoch 5/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8765 - acc: 0.5113 - val_loss: 7.9499 - val_acc: 0.4977
Epoch 6/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8737 - acc: 0.5115 - val_loss: 7.9634 - val_acc: 0.4970
Epoch 7/10
1589/1589 [==============================] - 129s 81ms/step - loss: 7.8623 - acc: 0.5122 - val_loss: 7.9626 - val_acc: 0.4970
Epoch 8/10
1589/1589 [==============================] - 112s 71ms/step - loss: 7.8758 - acc: 0.5114 - val_loss: 7.9508 - val_acc: 0.4977
Epoch 9/10
1589/1589 [==============================] - 115s 72ms/step - loss: 7.8708 - acc: 0.5117 - val_loss: 7.9519 - val_acc: 0.4976
Epoch 10/10
1589/1589 [==============================] - 112s 70ms/step - loss: 7.8738 - acc: 0.5115 - val_loss: 7.9614 - val_acc: 0.4971
cat
The Deep Learning with Python IMDB example output:

WARNING:tensorflow:From C:\Users\Mike\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 15000 samples, validate on 10000 samples

Epoch 1/20
15000/15000 [==============================] - 4s 246us/step - loss: 0.6932 - acc: 0.4982 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 2/20
15000/15000 [==============================] - 2s 115us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 3/20
15000/15000 [==============================] - 2s 115us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 4/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 5/20
15000/15000 [==============================] - 2s 120us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 6/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 7/20
15000/15000 [==============================] - 2s 113us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 8/20
15000/15000 [==============================] - 2s 113us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 9/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 10/20
15000/15000 [==============================] - 2s 122us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 11/20
15000/15000 [==============================] - 2s 116us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 12/20
15000/15000 [==============================] - 2s 116us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 13/20
15000/15000 [==============================] - 2s 121us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 14/20
15000/15000 [==============================] - 2s 127us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 15/20
15000/15000 [==============================] - 2s 121us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 16/20
15000/15000 [==============================] - 2s 113us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 17/20
15000/15000 [==============================] - 2s 115us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 18/20
15000/15000 [==============================] - 2s 114us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 19/20
15000/15000 [==============================] - 2s 114us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 20/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947

Why are there two models and two completely different datasets in the code? I assume this is a copy-paste error. Please edit your code properly first. Is there a reason y_train and y_test are missing from your IMDB code? It is not reproducible as it is. Check your code first...