Python urllib2.HTTPError when web-scraping a huge list

Tags: python, pandas, dataframe, web-scraping, beautifulsoup

The web page has a huge list of journal names and other details. I am trying to scrape the table contents into a dataframe.

#http://www.citefactor.org/journal-impact-factor-list-2015.html

import bs4 as bs 
import urllib  #Using python 2.7
import pandas as pd 

dfs = pd.read_html('http://www.citefactor.org/journal-impact-factor-list-2015.html/', header=0)
for df in dfs:
    print(df)
    df.to_csv('citefactor_list.csv', header=True)
But I am getting the following error. I did try referring to some questions that were already asked, but could not resolve it.

Error:

Traceback (most recent call last):
  File "scrape_impact_factor.py", line 7, in <module>
    dfs = pd.read_html('http://www.citefactor.org/journal-impact-factor-list-2015.html/', header=0)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/html.py", line 896, in read_html
    keep_default_na=keep_default_na)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/html.py", line 733, in _parse
    raise_with_traceback(retained)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/html.py", line 727, in _parse
    tables = p.parse_tables()
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/html.py", line 196, in parse_tables
    tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/html.py", line 450, in _build_doc
    return BeautifulSoup(self._setup_build_doc(), features='html5lib',
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/html.py", line 443, in _setup_build_doc
    raw_text = _read(self.io)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/html.py", line 130, in _read
    with urlopen(obj) as url:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/common.py", line 60, in urlopen
    with closing(_urlopen(*args, **kwargs)) as f:
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error

A 500 Internal Server Error means something went wrong on the server, so it is out of your control.

However, the real problem is that you are using the wrong URL.

If you go to http://www.citefactor.org/journal-impact-factor-list-2015.html/ in a browser, you get a 404 Not Found error. Remove the trailing slash, i.e. http://www.citefactor.org/journal-impact-factor-list-2015.html, and it will work.
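For reference, a minimal sketch of the corrected call, assuming the same environment as in the question; the only substantive change is the removed trailing slash, plus a pd.concat step so that all tables end up in one CSV instead of the loop overwriting the same file:

import pandas as pd

# Same call as in the question, with the trailing slash removed from the URL
dfs = pd.read_html('http://www.citefactor.org/journal-impact-factor-list-2015.html',
                   header=0)

# Writing inside a loop would overwrite citefactor_list.csv for every table,
# so combine the parsed tables first and write once.
combined = pd.concat(dfs, ignore_index=True)
print(combined)
combined.to_csv('citefactor_list.csv', header=True, index=False)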


Is there any possibility that I can search/scrape the dataframe based on the ISSN number or the title, without mentioning any URL to refer to?
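Not part of the original answer, but once the table is in a DataFrame it can be filtered locally without any further URL. A minimal sketch, assuming the scraped table exposes columns named 'ISSN' and 'JOURNAL' (check combined.columns for the real names) and using a hypothetical ISSN value:

# Assumed column names; the ISSN value below is only a placeholder
by_issn = combined[combined['ISSN'] == '1234-5678']
by_title = combined[combined['JOURNAL'].str.contains('Nature', case=False, na=False)]

print(by_issn)
print(by_title)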