Python pickle.dump hits RuntimeError: maximum recursion depth exceeded in cmp
I noticed this can be caused by BeautifulSoup or by a recursive data structure. However, the data structure that triggers the error seems fine:
class Movie:
    def __init__(self, name="", dscore=0, mscore=0, durl="", murl=""):
        self.name = name
        self.dscore = float(dscore)
        self.mscore = float(mscore)
        self.durl = durl
        self.murl = murl

    def __str__(self):
        return unicode(self.name) + u' / ' + unicode(self.dscore) + u' / ' + unicode(self.mscore) \
            + u' / ' + unicode(self.durl) + u' / ' + unicode(self.murl)
The statement that causes the problem is:

DataDict['MovieInfo'] = MovieInfo

and the function is as follows:
def SaveData():
    global LinkUrlQueue
    global MovieSet
    global MovieInfo
    global LinkUrlSet
    global MovieUrlQueue
    DataDict = {}
    DataDict['LinkUrlSet'] = LinkUrlSet
    DataDict['MovieSet'] = MovieSet
    #DataDict['MovieInfo'] = MovieInfo
    DataDict['LinkUrlQueue'] = LinkUrlQueue
    DataDict['MovieUrlQueue'] = MovieUrlQueue
    f = open('MovieInfo.txt', 'wb')
    for item in MovieInfo:
        f.write(item.__str__().encode('utf8') + '\n')
    f.close()
    try:
        print 'saving data...'
        f = open('spider.dat', 'wb')
        pickle.dump(DataDict, f, True)
        f.close()
    except IOError as e:
        print 'IOError, error no: %d' % e.errno
        print 'saved to spider2.dat'
        pickle.dump(DataDict, open('spider2.dat', 'wb'))
        time.sleep(10)
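For reference, the "maximum recursion depth exceeded" failure can be reproduced outside the crawler with any sufficiently deeply nested object; a minimal sketch (Python 3 syntax, where the error surfaces as RecursionError rather than Python 2's RuntimeError):

```python
import pickle
import sys

# Build a list nested far deeper than the interpreter's recursion limit;
# pickle descends one call frame per nesting level.
deep = []
cur = deep
for _ in range(sys.getrecursionlimit() * 2):
    nested = []
    cur.append(nested)
    cur = nested

try:
    pickle.dumps(deep)
except RecursionError as exc:  # RuntimeError in Python 2
    print('pickle failed:', exc)
```

A BeautifulSoup tag behaves like this because every node holds references to its parent and siblings, so pickling one node drags in the whole parse tree.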
My full source code:
spider.py:
fetch.py:
You just need to download and run them.
Also, any coding-style suggestions are welcome.

OK... I finally solved this problem myself. The cause is that pickle cannot handle BeautifulSoup objects! In general, it cannot handle HTML parser objects. I realized that when passing values into a function I should convert them with str() or unicode() and then assign, instead of keeping them as BeautifulSoup objects.
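The fix described above can be sketched as coercing every field to a plain built-in type at assignment time (Python 3 syntax for illustration; PlainMovie is a hypothetical name, not from the original spider):

```python
import pickle

class PlainMovie:
    """Stores only plain strings and floats, so instances pickle cleanly."""
    def __init__(self, name="", dscore=0, mscore=0, durl="", murl=""):
        # str() strips away any parser-specific wrapper type (e.g. a
        # BeautifulSoup NavigableString) that would otherwise drag the
        # whole parse tree into the pickle.
        self.name = str(name)
        self.dscore = float(dscore)
        self.mscore = float(mscore)
        self.durl = str(durl)
        self.murl = str(murl)

movie = PlainMovie(name="Example", dscore=8.5, mscore=9.0)
copy = pickle.loads(pickle.dumps(movie))
print(copy.name, copy.dscore)
```

Because every attribute is a built-in type, the pickled payload stays small and the recursion depth stays flat no matter where the values came from.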
Thanks everyone~

Providing a minimal example that reproduces the problem would help. I prefer PEP 8 style: function names all lowercase, class names with an initial capital. Don't return a unicode object from the __str__ method. Return it from __unicode__ instead.
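In Python 2 that means putting the formatting logic in __unicode__ and having __str__ return unicode(self).encode('utf-8'). The sketch below is the Python 3 equivalent, where __str__ already returns text so no separate __unicode__ is needed:

```python
class Movie:
    def __init__(self, name="", dscore=0, mscore=0, durl="", murl=""):
        self.name = name
        self.dscore = float(dscore)
        self.mscore = float(mscore)
        self.durl = durl
        self.murl = murl

    def __str__(self):
        # join() keeps the separator logic in one place instead of
        # repeating u' / ' between every pair of fields.
        return ' / '.join(
            str(part) for part in
            (self.name, self.dscore, self.mscore, self.durl, self.murl))

print(Movie(name="Example", dscore=8, mscore=9))
```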