Python Keras: got an unexpected keyword argument 'class_mode'


I am trying to replicate a sentence classification model from this GitHub repository.

Here is my code:

data = [ ( row["sentence"] , row["label"]  ) for row in csv.DictReader(open("./test-data.txt"), delimiter='\t', quoting=csv.QUOTE_NONE) ]
random.shuffle( data )
train_size = int(len(data) * percent)
train_texts = [ txt.lower() for ( txt, label ) in data[0:train_size] ]
test_texts = [ txt.lower() for ( txt, label ) in data[train_size:-1] ]
train_labels = [ label for ( txt , label ) in data[0:train_size] ]
test_labels = [ label for ( txt , label ) in data[train_size:-1] ]
num_classes = len( set( train_labels + test_labels ) )
tokenizer = Tokenizer(nb_words=max_features, lower=True, split=" ")
tokenizer.fit_on_texts(train_texts)
train_sequences = sequence.pad_sequences( tokenizer.texts_to_sequences( train_texts ) , maxlen=max_sent_len )
test_sequences = sequence.pad_sequences( tokenizer.texts_to_sequences( test_texts ) , maxlen=max_sent_len )
train_matrix = tokenizer.texts_to_matrix( train_texts )
test_matrix = tokenizer.texts_to_matrix( test_texts )
embedding_weights = np.zeros( ( max_features , embeddings_dim ) )
for word,index in tokenizer.word_index.items():
    if index < max_features:
        try: embedding_weights[index,:] = embeddings[word]
        except: embedding_weights[index,:] = np.random.rand( 1 , embeddings_dim )
le = preprocessing.LabelEncoder( )
le.fit( train_labels + test_labels )
train_labels = le.transform( train_labels )
test_labels = le.transform( test_labels )
model = Sequential()
model.add(Embedding(max_features, embeddings_dim, input_length=max_sent_len, mask_zero=True, weights=[embedding_weights] ))
model.add(Dropout(0.25))
model.add(LSTM(output_dim=embeddings_dim , activation='sigmoid', inner_activation='hard_sigmoid', return_sequences=True))
model.add(Dropout(0.25))
model.add(LSTM(output_dim=embeddings_dim , activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dropout(0.25))
model.add(Dense(1))
model.add(Activation('sigmoid'))
if num_classes == 2: model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')
else: model.compile(loss='categorical_crossentropy', optimizer='adam')  
model.fit( train_sequences , train_labels , nb_epoch=30, batch_size=32)
I get this error:

TypeError: run() got an unexpected keyword argument 'class_mode'

Full error traceback:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     12 if num_classes == 2: model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')
     13 else: model.compile(loss='categorical_crossentropy', optimizer='adam')
---> 14 model.fit( train_sequences , train_labels , nb_epoch=30, batch_size=32)
     15 results = model.predict_classes( test_sequences )
     16 print ("Accuracy = " + repr( sklearn.metrics.accuracy_score( test_labels , results ) ))

~\Anaconda3\lib\site-packages\keras\models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
    961                          initial_epoch=initial_epoch,
    962                          steps_per_epoch=steps_per_epoch,
--> 963                          validation_steps=validation_steps)
    964
    965     def evaluate(self, x=None, y=None,

~\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1703                          initial_epoch=initial_epoch,
   1704                          steps_per_epoch=steps_per_epoch,
-> 1705                          validation_steps=validation_steps)
   1706
   1707     def evaluate(self, x=None, y=None,

~\Anaconda3\lib\site-packages\keras\engine\training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
   1233                     ins_batch[i] = ins_batch[i].toarray()
   1234
-> 1235                 outs = f(ins_batch)
   1236                 if not isinstance(outs, list):
   1237                     outs = [outs]

~\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py in __call__(self, inputs)
   2476         session = get_session()
   2477         updated = session.run(fetches=fetches, feed_dict=feed_dict,
-> 2478                               **self.session_kwargs)
   2479         return updated[:len(self.outputs)]
   2480

TypeError: run() got an unexpected keyword argument 'class_mode'

model.compile does not accept any argument named 'class_mode'; remove it.
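Note that the traceback names run(), not compile(): in Keras 2, compile() silently collects unrecognized keyword arguments into **kwargs and forwards them all the way down to the TensorFlow session, so the stale 'class_mode' only raises deep inside the backend. A minimal sketch of that forwarding behaviour (the function names below are illustrative stand-ins, not the real Keras internals):

```python
def run(fetches):
    # stand-in for tf.Session.run, which does not know 'class_mode'
    return fetches

def backend_call(fetches, **session_kwargs):
    # stand-in for the backend function object: forwards the stored kwargs
    return run(fetches, **session_kwargs)

def compile_model(loss, optimizer, **kwargs):
    # stand-in for Model.compile: does not validate the extra kwargs,
    # just keeps them around for later backend calls
    return lambda: backend_call("outs", **kwargs)

fit = compile_model(loss="binary_crossentropy", optimizer="adam",
                    class_mode="binary")
try:
    fit()  # the error only appears here, far from the compile call
except TypeError as e:
    print(e)  # run() got an unexpected keyword argument 'class_mode'
```

This is why the message blames run() even though the mistake is in the compile() call.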

See also: 'class_mode' is an argument of the Keras data generators (e.g. flow_from_directory), not of model.compile.