Python: Keras mixed model gives the same result in every epoch


I created a mixed model that combines text and images. When I train it, I get exactly the same result in every epoch. Here is my code:

import tensorflow as tf
import pandas as pd
import numpy as np

base_dir = "D:/Dataset/xxxx/datasets/xxx/xx/xxxxx/"

import os

train_dir = os.path.join(base_dir,"trin.jsonl")
test_dir = os.path.join(base_dir,"tst.jsonl")
dev_dir = os.path.join(base_dir,"dv.jsonl")

df_train = pd.read_json(train_dir,lines=True)
df_test = pd.read_json(test_dir,lines=True)
df_dev = pd.read_json(dev_dir,lines=True)

df_train=df_train.set_index('id')
df_dev=df_dev.set_index('id')
df_test=df_test.set_index('id')

from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import re
import spacy
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

nlp = spacy.load('en_core_web_md')

train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

label_map = {1:"Hate",0:"No_Hate"}
df_dev['label']=df_dev['label'].map(label_map)
df_train['label']=df_train['label'].map(label_map)

# Load the training images as one large batch (batch_size=8500) in file order
# (shuffle=False); img_path, the image directory, is defined elsewhere
train_generator = train_datagen.flow_from_dataframe(dataframe=df_train,directory=img_path,x_col="img",y_col="label",target_size=(224,224),batch_size=8500,class_mode="binary",shuffle=False)

def spacy_tokenizer(sentence):
    # Keep alphanumerics only, lemmatize, and drop stop-words, whitespace and single characters
    sentence = re.sub(r"[^a-zA-Z0-9]+"," ",sentence)
    sentence_list = [word.lemma_ for word in nlp(sentence) if not (word.is_space or word.is_stop or len(word)==1)]
    return ' '.join(sentence_list)
    
# The generator returns images in filename order; recover each file's numeric id
# from its filename and reindex the dataframe so the text rows line up with the
# image order before pulling the single 8500-image batch.
image_files = pd.Series(train_generator.filenames)
image_files = image_files.str.split('/', expand=True)[1].str[:-4]
image_files = list(map(int, image_files))

df_sorted = df_train.reindex(image_files)
df_sorted.head(1)

images,labels = next(train_generator)

# Fit a 10,000-word tokenizer on the cleaned text and pad/truncate every
# sequence to maxlen (maxlen is defined elsewhere)
tokenizer = Tokenizer(num_words=10000)

tokenizer.fit_on_texts(df_sorted['new_text'].values)
sequences = tokenizer.texts_to_sequences(df_sorted['new_text'].values)
train_padd = pad_sequences(sequences,maxlen=maxlen,padding='post',truncating='post')

from tensorflow.keras.models import Model
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras.layers import Embedding, Flatten, Dense
from tensorflow.keras.layers import Dense, LSTM, Embedding,Dropout,SpatialDropout1D,Conv1D,MaxPooling1D,GRU,BatchNormalization
from tensorflow.keras.layers import Input,Bidirectional,GlobalAveragePooling1D,GlobalMaxPooling1D,concatenate,LeakyReLU

def create_nlp():
    # Text branch: frozen pre-trained embeddings (text_embedding, built elsewhere)
    # feeding two Conv1D/MaxPooling1D blocks
    sequence_input=Input(shape=(maxlen))
    embedding_layer=Embedding(input_dim=text_embedding.shape[0],output_dim=text_embedding.shape[1],weights=[text_embedding],input_length=maxlen,trainable=False)
    embedded_sequence = embedding_layer(sequence_input)
    l_conv_1=Conv1D(128,5,activation='relu')(embedded_sequence)
    l_pool_1=MaxPooling1D(5)(l_conv_1)
    l_conv_2=Conv1D(128,5,activation='relu')(l_pool_1)
    l_pool_2=MaxPooling1D(5)(l_conv_2)
    l_flat = Flatten()(l_pool_2)
    model=Model(sequence_input,l_flat)
    return model
    
    
from tensorflow.keras.applications import VGG16
from tensorflow.keras import optimizers

def create_img():
    # Image branch: frozen VGG16 convolutional base with a dense layer on top
    img_input=Input(shape=(224,224,3))
    conv_base = VGG16(weights='imagenet',include_top=False,input_shape=(224, 224, 3))
    conv_base.trainable = False
    conv_l_1=conv_base(img_input)
    flat_l = Flatten()(conv_l_1)
    dense_l = Dense(256,activation='relu')(flat_l)
    model = Model(img_input,dense_l)
    return model

nlp_1=create_nlp()
img_cnn=create_img()
combinedInput = concatenate([nlp_1.output, img_cnn.output])

x = Dense(4, activation="relu")(combinedInput)
x = Dense(1, activation="sigmoid")(x)
model1 = Model(inputs=[nlp_1.input, img_cnn.input], outputs=x)
opt = optimizers.Adam(lr=1e-3, decay=1e-3 / 200)
model1.compile(loss="binary_crossentropy", metrics=['acc'], optimizer=opt)

model1_history = model1.fit([train_padd, images], train_y, epochs=15, batch_size=16)
Here are my training results:

Epoch 1/15
532/532 [==============================] - 104s 196ms/step - loss: 0.6528 - acc: 0.6412
Epoch 2/15
532/532 [==============================] - 103s 193ms/step - loss: 0.6528 - acc: 0.6412
Epoch 3/15
532/532 [==============================] - 103s 195ms/step - loss: 0.6528 - acc: 0.6412
Epoch 4/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 5/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 6/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 7/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 8/15
532/532 [==============================] - 104s 195ms/step - loss: 0.6528 - acc: 0.6412
Epoch 9/15
532/532 [==============================] - 106s 200ms/step - loss: 0.6528 - acc: 0.6412
Epoch 10/15
532/532 [==============================] - 109s 204ms/step - loss: 0.6528 - acc: 0.6412
Epoch 11/15
532/532 [==============================] - 104s 196ms/step - loss: 0.6528 - acc: 0.6412
Epoch 12/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 13/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 14/15
532/532 [==============================] - 104s 195ms/step - loss: 0.6528 - acc: 0.6412
Epoch 15/15
532/532 [==============================] - 103s 193ms/step - loss: 0.6528 - acc: 0.6412
In addition, I get the following message in the terminal log:

Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.36GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.


Have a look: you may simply be using an unsuitable optimizer. If that does not help, I would try a batch size of 1 to see whether anything changes, at least in the first runs. The learning rate could also be the problem; try playing with its value and see whether the accuracy changes.
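A minimal sketch of what that might look like, reusing model1, train_padd, images and train_y from the question above (the learning-rate value is only an illustrative guess, not a tuned setting):

from tensorflow.keras import optimizers

# Recompile with a smaller learning rate (or swap in another optimizer such as
# optimizers.RMSprop or optimizers.SGD) and re-run a couple of epochs.
opt = optimizers.Adam(lr=1e-4)
model1.compile(loss="binary_crossentropy", metrics=['acc'], optimizer=opt)

# Optionally sanity-check with batch_size=1 first to see whether the loss moves at all.
model1_history = model1.fit([train_padd, images], train_y, epochs=2, batch_size=1)

If the loss starts changing between epochs under these settings, the optimizer configuration was the likely culprit.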

Thank you, it worked. Could you help me with one more thing? Do you know whether there is anything like ImageDataGenerator for text? Training the model with fit_generator would really help, and it would also solve the memory problem.
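Keras has no built-in text counterpart to ImageDataGenerator, but a custom tf.keras.utils.Sequence that yields matching text/image batches can be passed to fit (or fit_generator in older versions) and keeps only one batch of images in memory at a time. A rough sketch, assuming a hypothetical list img_paths of image file paths aligned row-for-row with train_padd and train_y:

import math
import numpy as np
from tensorflow.keras.utils import Sequence
from tensorflow.keras.preprocessing.image import load_img, img_to_array

class TextImageSequence(Sequence):
    # Yields ([text_batch, image_batch], label_batch), loading images on demand
    def __init__(self, text_seqs, img_paths, labels, batch_size=16):
        self.text_seqs = text_seqs
        self.img_paths = img_paths
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.labels) / self.batch_size)

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        # Read and rescale only this batch's images from disk
        imgs = np.stack([
            img_to_array(load_img(p, target_size=(224, 224))) / 255.0
            for p in self.img_paths[lo:hi]
        ])
        return [np.asarray(self.text_seqs[lo:hi]), imgs], np.asarray(self.labels[lo:hi])

# model1.fit(TextImageSequence(train_padd, img_paths, train_y, batch_size=16), epochs=15)

This is only a sketch: in a real run you would apply the same rescaling/augmentation as the ImageDataGenerator and add a second Sequence for validation data.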