
Python: Is this the correct input for a neural network?


I am trying to build an algorithm that predicts whether an article has misogynistic tendencies. The data comes from a .csv file with the following columns: id, text, label (0 = not misogynistic, 1 = misogynistic). The model itself works, because I tested it on a pre-made dataset with a pre-made dictionary, so the problem must be in how I process the data. I tried building my own dictionary with a bag-of-words approach and then reshaping the data to fit the model, but it stops in the first epoch. The full code is below:

import random
import sklearn
import tensorflow as tf
from tensorflow import keras
from sklearn import svm
from sklearn import metrics
import pandas as pd
import numpy as np
import string
import nltk
from collections import Counter

def get_corpus_vocabulary(corpus):
    counter = Counter()
    for text in corpus:
        tokens=tokenize(text)
        counter.update(tokens)
    return counter

def tokenize(text):
    return nltk.WordPunctTokenizer().tokenize(text)

def get_representation(vocabulary, how_many):
    most_comm = vocabulary.most_common(how_many)
    wd2idx = {}
    idx2wd = {}

    for position, pair in enumerate(most_comm):
        word = pair[0]  # most_common() yields (word, count) tuples
        wd2idx[word] = position
        idx2wd[position] = word
    return wd2idx, idx2wd

def shape_data(data):
    #encoded_data = np.array()
    data_clean = data.copy()
    for line in range(len(data)):
        transtable = str.maketrans('', '', string.punctuation)
        data_clean[line] = data[line].translate(transtable).strip().split(" ")
        encoded_line = encode(data_clean[line])
        encoded_line = keras.preprocessing.sequence.pad_sequences([encoded_line], value=word_index["<PAD>"], padding="post", maxlen=35)
        if line == 0:
            encoded_data = np.array(encoded_line)
        else:
            encoded_data = np.append(encoded_data, encoded_line, axis = 0)
    return encoded_data

def encode(word_list):
    encoded = [1] 
    for word in word_list:
        if word in word_index:
            encoded.append(word_index[word])
        else:
            encoded.append(2)
    return encoded

train_data=pd.read_csv('train.csv')
corpus=train_data['text']
train_labels = train_data['label'].values

all_words=get_corpus_vocabulary(corpus)
word_index, index_word = get_representation(all_words,100000)

# shift every index by 3 to free up 0-3 for the special tokens below
word_index = {k:(v+3) for k,v in word_index.items()} #dictionary
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2
word_index["<UNUSED>"] = 3

encoded_train_data = shape_data(corpus)

#model
model = keras.Sequential()
model.add(keras.layers.Embedding(100, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation="relu"))
model.add(keras.layers.Dense(1, activation="sigmoid"))

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x_val = encoded_train_data[:1000]
x_train = encoded_train_data[1000:]

y_val = train_labels[:1000]
y_train = train_labels[1000:]

fitModel = model.fit(x_train, y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1)
results = model.evaluate(test_data, test_labels)
print(results)
I would also like to mention that x_train, y_train, x_val and y_val are all numpy ndarrays.
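Before calling fit, it can help to sanity-check the arrays themselves: matching sample counts, integer dtype, and the maximum token index they contain. A minimal sketch with hypothetical stand-in arrays (the real ones come from shape_data above):

```python
import numpy as np

# Hypothetical stand-ins for the arrays built in the question:
# 2000 padded sequences of length 35, plus one label per sequence.
x_train = np.zeros((2000, 35), dtype=np.int64)
y_train = np.zeros((2000,), dtype=np.int64)

# Sample counts must match, tokens must be 2-D integer indices.
assert x_train.shape[0] == y_train.shape[0]
assert x_train.ndim == 2 and np.issubdtype(x_train.dtype, np.integer)

# The largest index here is what the Embedding layer must be able to hold.
print(x_train.shape, int(x_train.max()))
```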

The error is:

Traceback (most recent call last):
  File "C:/programare/python/ProiectML/neuronalNetwork.py", line 99, in <module>
    fitModel = model.fit(x_train, y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1)
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\keras\engine\training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\eager\def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\eager\def_function.py", line 840, in _call
    return self._stateless_fn(*args, **kwds)
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\eager\function.py", line 2829, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\eager\function.py", line 1848, in _filtered_call
    cancellation_manager=cancellation_manager)
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\eager\function.py", line 1924, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\eager\function.py", line 550, in call
    ctx=ctx)
  File "C:\Users\User\.conda\envs\tensor\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError:  indices[448,1] = 453 is not in [0, 100)
     [[node sequential/embedding/embedding_lookup (defined at /programare/python/ProiectML/neuronalNetwork.py:99) ]] [Op:__inference_train_function_850]

Errors may have originated from an input operation.
Input Source operations connected to node sequential/embedding/embedding_lookup:
 sequential/embedding/embedding_lookup/575 (defined at \Users\User\.conda\envs\tensor\lib\contextlib.py:112)

Function call stack:
train_function
Change this line so that the embedding's input dimension covers your whole vocabulary instead of the hard-coded 100:

model.add(keras.layers.Embedding(len(word_index), 16))
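The rule behind the fix: Embedding(input_dim, output_dim) requires every index fed into the layer to lie in [0, input_dim), so input_dim must be strictly greater than the largest token index. A minimal sketch with a hypothetical tiny vocabulary:

```python
# Hypothetical small vocabulary to illustrate the Embedding size rule.
word_index = {"<PAD>": 0, "<START>": 1, "<UNK>": 2, "<UNUSED>": 3,
              "this": 4, "is": 5, "fine": 6}

# Safe input_dim: the full dictionary size, as in the answer above.
input_dim = len(word_index)          # 7 here

encoded = [1, 4, 5, 6, 0, 0]         # a padded sample sentence
assert max(encoded) < input_dim      # every index fits in [0, input_dim)
```

With the original hard-coded Embedding(100, 16), any word whose index reached 100 or more (e.g. 453 in the traceback) fell outside [0, 100) and triggered the InvalidArgumentError.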

Please format the error message as code, not text. @FloriTache If this fixed it, please accept the answer - see