
Python: Problem stepping through HTML forms with Scrapy


The URL I'm trying to scrape: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched

There are three pages: the first selects the school term, the second selects the subject, and the third contains the actual course information.

The problem I'm running into is that once subject() triggers the courses() callback, the HTML in response.body that gets written to the file is the subject page's HTML, not the courses page's. How can I tell whether I'm sending the right form data so that I receive the correct response?

# term():
#   Selects the school term to use. Clicks submit

def term(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"p_term": "201705"},
        clickdata={"type": "submit"},
        callback=self.subject
    )

# subject():
#   Selects the subject to query. Clicks submit

def subject(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"sel_subj": "ART"},
        clickdata={"type": "submit"},
        callback=self.courses
    )

# courses():
#   Currently just saves all the html on the page.

def courses(self, response):
    page = response.url.split("/")[-1]
    filename = 'uvic-%s.html' % page
    with open(filename, 'wb') as f:
        f.write(response.body)
    self.log('Saved file %s' % filename)
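
For reference, a minimal skeleton these callbacks could sit in; the spider name and start URL are taken from the debug output below, while the class name is an assumption for illustration:

import scrapy


class UvicSpider(scrapy.Spider):
    # Spider name and start URL taken from the debug output below;
    # the class name itself is assumed for this sketch.
    name = 'uvic'
    start_urls = ['https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched']

    def parse(self, response):
        # The first response is the term-selection page;
        # hand it off to term() to submit the first form.
        return self.term(response)

    # term(), subject(), and courses() from above go here.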
Debug output:

2017-04-02 01:15:28 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapy4uvic)
2017-04-02 01:15:28 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapy4uvic.spiders', 'SPIDER_MODULES': ['scrapy4uvic.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapy4uvic'}
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-02 01:15:28 [scrapy.core.engine] INFO: Spider opened
2017-04-02 01:15:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-02 01:15:28 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/robots.txt> (referer: None)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched> (referer: None)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date> (referer: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckschd.p_get_crse_unsec> (referer: https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date)
2017-04-02 01:15:30 [uvic] DEBUG: Saved file uvic-bwckschd.p_get_crse_unsec.html
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-02 01:15:30 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2335,
 'downloader/request_count': 4,
 'downloader/request_method_count/GET': 2,
 'downloader/request_method_count/POST': 2,
 'downloader/response_bytes': 105499,
 'downloader/response_count': 4,
 'downloader/response_status_count/200': 4,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 2, 8, 15, 30, 103536),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'request_depth_max': 2,
 'response_received_count': 4,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2017, 4, 2, 8, 15, 28, 987034)}
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Spider closed (finished)

You seem to be missing some form data in subject().

I managed to get it working with the following:

formdata={
    "sel_subj": ["dummy", "ART"],
}
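
Passing a list makes Scrapy post the field twice (sel_subj=dummy&sel_subj=ART), which is what this form expects. Applied to the spider above, only the formdata argument of subject() changes; a sketch:

def subject(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        # The form posts sel_subj twice: a hidden "dummy" value
        # plus the selected subject, so both must be supplied.
        formdata={"sel_subj": ["dummy", "ART"]},
        clickdata={"type": "submit"},
        callback=self.courses
    )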
Here is how I debugged it. First, you don't have to save to a file; you can use inspect_response during the crawl:

def courses(self, response):
    from scrapy.shell import inspect_response
    inspect_response(response, self)
This opens a shell with the response and request objects available, and you can even call view(response) to open the HTML in a browser. It will also use an ipython or bpython shell if either is available; in the example below I use ipython for its convenient formatting.
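
For example, a quick sanity check inside that shell might look like this (a hypothetical session; the XPath is only illustrative):

# Which URL did this response actually come from?
response.url

# Peek at the page title to see whether this is still the
# subject page or already the courses page.
response.xpath('//title/text()').extract_first()

# Open the rendered response in a browser.
view(response)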

Second, I checked in my browser (Firefox) what form data it sends when I click the button, copied it into the shell under the variable bar, and compared it with the request body Scrapy sent:

bar = '''term_in=201705&sel_subj=dummy&sel_day=dummy&sel_schd=dummy&sel_insm=dummy&
      sel_camp=dummy&sel_levl=dummy                                                      
      &sel_sess=dummy&sel_instr=dummy&sel_ptrm=dummy&sel_attr=dummy&sel_subj=ART&sel_crse
      =&sel_title=&sel_schd                                                              
      =%25&sel_insm=%25&sel_from_cred=&sel_to_cred=&sel_camp=%25&sel_levl=%25&sel_ptrm=%2
      5&sel_instr=%25&begin_hh                                                           
      =0&begin_mi=0&begin_ap=a&end_hh=0&end_mi=0&end_ap=a'''
# split into arguments
bar = sorted(bar.split('&'))
# do the same with the request body that was sent by scrapy
foo = sorted(request.body.split('&'))
# now join these together and find the differences!
zip(foo, bar)
[('begin_ap=a', 'begin_ap=a'),
 ('begin_hh=0', 'begin_hh\n=0'),
 ('begin_mi=0', 'begin_mi=0'),
 ('end_ap=a', 'end_ap=a'),
 ('end_hh=0', 'end_hh=0'),
 ('end_mi=0', 'end_mi=0'),
 ('sel_attr=dummy', 'sel_attr=dummy'),
 ('sel_camp=%25', 'sel_camp=%25'),
 ('sel_camp=dummy', 'sel_camp=dummy'),
 ('sel_crse=', 'sel_crse='),
 ('sel_day=dummy', 'sel_day=dummy'),
 ('sel_from_cred=', 'sel_from_cred='),
 ('sel_insm=%25', 'sel_insm=%25'),
 ('sel_insm=dummy', 'sel_insm=dummy'),
 ('sel_instr=%25', 'sel_instr=%25'),
 ('sel_instr=dummy', 'sel_instr=dummy'),
 ('sel_levl=%25', 'sel_levl=%25'),
 ('sel_levl=dummy', 'sel_levl=dummy\n'),
 ('sel_ptrm=%25', 'sel_ptrm=%25'),
 ('sel_ptrm=dummy', 'sel_ptrm=dummy'),
 ('sel_schd=%25', 'sel_schd\n=%25'),
 ('sel_schd=dummy', 'sel_schd=dummy'),
 ('sel_sess=dummy', 'sel_sess=dummy'),
 ('sel_subj=ART', 'sel_subj=ART'),
 ('sel_title=', 'sel_subj=dummy'),
 ('sel_to_cred=', 'sel_title='),
 ('term_in=201705', 'sel_to_cred=')]
As you can see, you were missing 'dummy' in sel_subj, while 'term_in' is present where it shouldn't be but doesn't seem to have any effect :)
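
As a side note, instead of eyeballing the zipped pairs you could take a set difference after stripping the whitespace that crept into the pasted browser body; a minimal sketch, assuming foo and bar as defined above:

# Normalize the newlines and spaces from the pasted string.
browser = {p.replace('\n', '').replace(' ', '') for p in bar}
sent = set(foo)

print(sorted(browser - sent))  # in the browser request but missing from Scrapy's
print(sorted(sent - browser))  # sent by Scrapy but not by the browser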