
Python scraping a page does not return the body

Tags: python, selenium, web-scraping, scrapy

I am trying to get the content of this URL:

But the response does not seem to contain any body. There is a script in the head, though. This is what I tried (with the headers copied from the browser, where the page loads fine):
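The Scrapy snippet itself was not preserved in this page dump. The following is only a reconstruction consistent with the log and the "(200, False)" line below; the spider name, header values and the exact print statement are assumptions, not the asker's actual code:

# Reconstruction only -- the original snippet is not preserved in this dump.
# Spider name, header values and the print are assumptions.
# Run with: scrapy runspider alljobs_spider.py
import scrapy

class AllJobsSpider(scrapy.Spider):
    name = "alljobs"

    def start_requests(self):
        url = ("https://www.alljobs.co.il/SearchResultsGuest.aspx"
               "?page=1&position=&type=&freetxt=&city=&region=")
        headers = {
            "User-Agent": "<copied from the browser>",
            "Referer": url,
        }
        yield scrapy.Request(url, headers=headers, callback=self.parse)

    def parse(self, response):
        # Prints the "(200, False)" tuple that shows up in the log below
        print((response.status, "open-board" in response.text))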

The output is:

2020-02-10 09:18:56 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: scrapybot)
2020-02-10 09:18:56 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 2.7.15+ (default, Jul  9 2019, 16:51:35) - [GCC 7.4.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Linux-4.15.0-20-generic-x86_64-with-LinuxMint-19-tara
2020-02-10 09:18:56 [scrapy.crawler] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
2020-02-10 09:18:56 [scrapy.extensions.telnet] INFO: Telnet Password: eec1d23ac0e5b987
2020-02-10 09:18:56 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2020-02-10 09:18:56 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-10 09:18:56 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-10 09:18:56 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-10 09:18:56 [scrapy.core.engine] INFO: Spider opened
2020-02-10 09:18:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-10 09:18:56 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-10 09:18:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.alljobs.co.il/SearchResultsGuest.aspx?page=1&position=&type=&freetxt=&city=&region=> (referer: https://www.alljobs.co.il/SearchResultsGuest.aspx?page=1&position=&type=&freetxt=&city=&region=)
(200, False)
2020-02-10 09:18:57 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-10 09:18:57 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 515,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 34340,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 1.32754,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 2, 10, 3, 33, 57, 430553),
 'log_count/DEBUG': 1,
 'log_count/INFO': 10,
 'memusage/max': 53485568,
 'memusage/startup': 53485568,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 2, 10, 3, 33, 56, 103013)}
2020-02-10 09:18:57 [scrapy.core.engine] INFO: Spider closed (finished)
It shows works=False. I do not understand why this is not working.

Any help would be appreciated. Thanks.

First show response.body to see what you are actually getting - you may see a warning about bots, a captcha, or some other useful information. You are missing the most important header: USER-AGENT. That is the first header a server checks in order to weed out bots.

@furas same for the User-Agent - I edited the question to include it. I do print response.body in my tests; it is not included here because the script in the head is too large. The response body looks like this: the script is here:

Since you have already tested with Selenium and apparently included the correct headers, I am not sure I can offer much more for diagnosing Scrapy, but it may be worth giving it a try. It opens a browser window and visits every URL it scrapes, which means it does not imitate a browser but uses your browser (including extensions, cookies, etc.). If it works, maybe that is enough, or it at least helps narrow the problem down.
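One quick way to act on the "show response.body" advice is to dump whatever Scrapy actually received, either to a file or straight into a local browser window. A small sketch of my own (not code from the thread), using Scrapy's open_in_browser helper:

# Sketch for inspecting the downloaded response (not from the original thread).
from scrapy.utils.response import open_in_browser

def parse(self, response):
    # Save the raw bytes for later inspection...
    with open("response.html", "wb") as f:
        f.write(response.body)
    # ...or open what Scrapy downloaded directly in a local browser.
    open_in_browser(response)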
It looks like JavaScript loads everything there, but Scrapy cannot run JavaScript - you may need Selenium or Splash to run Scrapy with a real web browser.
# The plain-Selenium check from the question; its print produces the "works=" value mentioned above.
from selenium import webdriver

# Path to the local chromedriver binary, as given in the original post
driver = webdriver.Chrome(executable_path=r'/path_to_/chromedriver')
url = "https://www.alljobs.co.il/SearchResultsGuest.aspx?page=1&position=&type=&freetxt=&city=&region="
driver.get(url)

# page_source is the DOM after the browser has executed the page's JavaScript
html = driver.page_source
print("works= {}".format("open-board" in html))