Python: how to handle AttributeError: 'NoneType' object has no attribute 'findAll'

python, web-scraping, beautifulsoup, web-crawler

While scanning a large number of websites with the function below, I received an error (see below). Are there any steps I can add to the function to handle this kind of error?
async def scrape(url):
    try:
        r = requests.get(url, timeout=(3, 6))
        r.raise_for_status()
        soup = BeautifulSoup(r.content, 'html.parser')
        data = {
            "coming soon": soup.body.findAll(text=re.compile("coming soon", re.I)),
            "Opening Soon": soup.body.findAll(text=re.compile("Opening Soon", re.I)),
            "Under Construction": soup.body.findAll(text=re.compile("Under Construction", re.I)),
            "Currently Unavailable": soup.body.findAll(text=re.compile("Currently Unavailable", re.I)),
            "button": soup.findAll(text=re.compile('button2.js'))}
        results[url] = data
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, requests.exceptions.MissingSchema):
        status[url] = "Connection Error"
    except requests.exceptions.HTTPError:
        status[url] = "Http Error"
    except requests.exceptions.TooManyRedirects:
        status[url] = "Redirects"
    except requests.exceptions.RequestException as err:
        status[url] = "Fatal Error: " + str(err) + url
    else:
        status[url] = "OK"
The error:
Task exception was never retrieved
future: <Task finished name='Task-4782' coro=<scrape() done, defined at crawler.py:47> exception=AttributeError("'NoneType' object has no attribute 'findAll'")>
Traceback (most recent call last):
File "crawler.py", line 53, in scrape
"coming soon": soup.body.findAll(text = re.compile("coming soon", re.I)),
AttributeError: 'NoneType' object has no attribute 'findAll'
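For context, `soup.body` is `None` whenever the fetched document contains no `<body>` element, because `html.parser` does not synthesize the `<html>`/`<body>` wrappers that some other parsers add. A minimal sketch reproducing the failure (the plain-text payload here is just an illustrative stand-in for whatever the failing site returned):

```python
from bs4 import BeautifulSoup

# html.parser does not invent <html>/<body> wrappers, so a non-HTML
# response (plain text, XML, an empty page, ...) has no body element
soup = BeautifulSoup("503 service unavailable", "html.parser")
print(soup.body)  # None
# soup.body.findAll(...) would therefore raise the AttributeError above
```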
This happens because soup.body is None, and we can handle that case with a simple if check:
async def scrape(url):
    try:
        r = requests.get(url, timeout=(3, 6))
        r.raise_for_status()
        soup = BeautifulSoup(r.content, 'html.parser')
        if soup.body:
            data = {
                "coming soon": soup.body.findAll(text=re.compile("coming soon", re.I)),
                "Opening Soon": soup.body.findAll(text=re.compile("Opening Soon", re.I)),
                "Under Construction": soup.body.findAll(text=re.compile("Under Construction", re.I)),
                "Currently Unavailable": soup.body.findAll(text=re.compile("Currently Unavailable", re.I)),
                "button": soup.findAll(text=re.compile('button2.js'))}
            results[url] = data
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, requests.exceptions.MissingSchema):
        status[url] = "Connection Error"
    except requests.exceptions.HTTPError:
        status[url] = "Http Error"
    except requests.exceptions.TooManyRedirects:
        status[url] = "Redirects"
    except requests.exceptions.RequestException as err:
        status[url] = "Fatal Error: " + str(err) + url
    else:
        status[url] = "OK"
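As a side note, `findAll` and the `text=` argument are the older BeautifulSoup spellings; current bs4 prefers `find_all` and `string=`. A small sketch of the same guarded lookup in that style (the sample HTML is made up for illustration):

```python
import re
from bs4 import BeautifulSoup

html = "<html><body><p>Coming soon!</p></body></html>"
soup = BeautifulSoup(html, "html.parser")

# Guard against a missing <body> before searching it
body = soup.body
matches = body.find_all(string=re.compile("coming soon", re.I)) if body else []
print(matches)  # ['Coming soon!']
```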