Python 3.x ValueError: Missing scheme in request url


I am trying to scrape https://www.skynewsarabia.com/ using Scrapy, but I get this error: ValueError: Missing scheme in request url:
I have tried every solution I could find on Stack Overflow, but none of them worked for me. Here is my spider:

import scrapy

class SkynewsSpider(scrapy.Spider):
    name = 'skynews'
    allowed_domains = ['www.skynewsarabia.com']
    start_urls = ['https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1']

    def parse(self, response):
        link = "https://www.skynewsarabia.com"
        # get the urls of each article
        urls = response.css("a.item-wrapper::attr(href)").extract()
        # for each article make a request to get the text of that article
        for url in urls:
            # get the info of that article using the parse_details function
            yield scrapy.Request(url=link + url, callback=self.parse_details)
        # go and get the link for the next article
        next_article = response.css("a.item-wrapper::attr(href)").extract_first()
        if next_article:
            # keep repeating the process until the bot visits all the links in the website!
            yield scrapy.Request(url=next_article, callback=self.parse)  # keep calling yourself!
Here is the full error:

2019-01-30 11:49:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

2019-01-30 11:49:34 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2019-01-30 11:49:35 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/robots.txt> (referer: None)
2019-01-30 11:49:35 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1> (referer: None)
2019-01-30 11:49:35 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1> (referer: None)
Traceback (most recent call last):
  File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
    for x in result:
  File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
File "C:\Users\HozRifai\Desktop\scraping\articles\articles\spiders\skynews.py", line 28, in parse
   yield scrapy.Request(url=next_article, callback=self.parse)  # keep calling yourself!
 File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
   self._set_url(url)
 File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\http\request\__init__.py", line 62, in _set_url
   raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: /sport/1222754-%D8%A8%D9%8A%D8%B1%D9%86%D9%84%D9%8A-%D9%8A%D8%B6%D8%B9-%D8%AD%D8%AF%D8%A7-%D9%84%D8%B3%D9%84%D8%B3%D9%84%D8%A9-%D8%A7%D9%86%D8%AA%D8%B5%D8%A7%D8%B1%D8%A7%D8%AA-%D8%B3%D9%88%D9%84%D8%B4%D8%A7%D8%B1
2019-01-30 11:49:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/sport/1222754-%D8%A8%D9%8A%D8%B1%D9%86%D9%84%D9%8A-%D9%8A%D8%B6%D8%B9-%D8%AD%D8%AF%D8%A7-%D9%84%D8%B3%D9%84%D8%B3%D9%84%D8%A9-%D8%A7%D9%86%D8%AA%D8%B5%D8%A7%D8%B1%D8%A7%D8%AA-%D8%B3%D9%88%D9%84%D8%B4%D8%A7%D8%B1> (referer: https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1)

Thanks in advance.

Your next_article url has no scheme. Try:

next_article = response.css("a.item-wrapper::attr(href)").get()
if next_article:
    yield scrapy.Request(response.urljoin(next_article))
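As a sanity check (not part of the original answer): Scrapy's response.urljoin delegates to the standard library's urllib.parse.urljoin, so the resolution can be verified without running the spider. The article path below is a hypothetical example:

```python
from urllib.parse import urljoin

# The page being parsed and a scheme-less href extracted from it
page_url = "https://www.skynewsarabia.com/sport/latest-news"
href = "/sport/1222754-example"  # hypothetical article path

# urljoin resolves the relative href against the page URL,
# producing the absolute URL that scrapy.Request requires
print(urljoin(page_url, href))
# https://www.skynewsarabia.com/sport/1222754-example
```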

In your next-article retrieval:

next_article = response.css("a.item-wrapper::attr(href)").extract_first()
are you sure you are getting a full link that starts with http/https?

When we are not sure about the url we received, it is always better to use urljoin:

url = response.urljoin(next_article)     # you can also use this in your above logic.

Did you look at the value in the error message it is complaining about? It is a relative link; you need to prefix it to make it an absolute URL.
The value of next_article is a relative URL (/sport/1222754…). You have to provide an absolute URL.
That is why I have the link variable! I then prepend it to the relative url.
Yes, but you never actually prepend it to next_article.
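To illustrate the point of that exchange (a sketch using a hypothetical article path): manually prepending the site root, as the link variable does, gives the same result as urljoin for root-relative hrefs, so the error arose only because neither was applied to next_article:

```python
from urllib.parse import urljoin

link = "https://www.skynewsarabia.com"
href = "/sport/1222754-example"  # hypothetical root-relative href

manual = link + href  # the question's approach: manual prefixing
resolved = urljoin(link + "/sport/latest-news", href)  # the answer's approach

# For root-relative hrefs, both produce the same absolute URL
print(manual == resolved)  # True
```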