Python 3.x: my kernel dies after I get a "maximum recursion depth exceeded in comparison" error


I am new to Stack Overflow. I am currently working through an NLP notebook from Kaggle. I keep getting a "maximum recursion depth exceeded in comparison" error and my kernel dies in the Jupyter notebook. I have already tried importing sys and calling sys.setrecursionlimit(1500), but it did not help. Please help me.
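For reference, this is roughly the workaround I tried (a minimal sketch; I ran it in a cell before the code below):

import sys

# try to raise the recursion limit from the default of 1000; this did not help
sys.setrecursionlimit(1500)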

#Top words after stemming operation
#collect vocabulary count
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

#create the tfidf vectorizer object
tfid_vectorizer = TfidfVectorizer("english")
#fit the vectorizer using the text data
tfid_vectorizer.fit(data['text'])
#collect the vocabulary items used in the vectorizer
dictionary = tfid_vectorizer.vocabulary_.items()
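# note: vocabulary_ maps each term to its feature (column) index in the tf-idf matrix, not to a count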

#Bar plot of top words after stemming
#lists to store the vocab and counts

vocab = []
count = []

#iterate through each vocab/count pair and append the values to the designated lists
for key, value in dictionary:
    vocab.append(key)
    count.append(value)

#store the counts in a pandas Series with vocab as the index
vocab_after_stem = pd.Series(count, index=vocab)

#sort the Series in descending order
vocab_after_stem = vocab_after_stem.sort_values(ascending=False)

#plot of the top vocab
top_vocab = vocab_after_stem.head(20)
top_vocab.plot(kind = 'barh', figsize=(5,10), xlim = (15120,15145))

#Histogram of text length of each writer
def length(text):
    return length(text)

#Apply the function to each example
data['length'] = data['text'].apply(length)
data.head(10)
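Update: I now suspect the culprit is the length helper above: it calls itself instead of the built-in len, so every call recurses until the limit is hit, no matter how high I raise it. Below is a minimal sketch of what I assume the original Kaggle notebook intended (using len(text) is my guess, not something confirmed by the notebook author):

#return the number of characters in each text instead of recursing
def length(text):
    return len(text)

data['length'] = data['text'].apply(length)
#equivalent pandas one-liner: data['length'] = data['text'].str.len()

I also noticed that TfidfVectorizer("english") passes "english" as the first positional argument (input), not as stop_words; I assume the notebook meant TfidfVectorizer(stop_words="english"), although that would be unrelated to the recursion error.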