TypeError in a Python scraper spider

Tags: python, web-scraping, scrapy, web-crawler

Note:

The page I am scraping does not use JavaScript so far. I have also tried scrapy_splash, but I got the same error! I am stuck at the stage of launching the spider.

Problem:

import scrapy
from scrapy import FormRequest


class abcSpider(scrapy.Spider):
    name = 'abc'
    allowed_domains = ['citizen.mahapolice.gov.in']

    def start_requests(self):
        yield scrapy.Request(
            url='http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx',
            headers={
                'Referer': 'https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx'
            },
            callback=self.parse
        )

    def parse(self, response):

        yield FormRequest.from_response(
            response,
            formid='form1',
            formdata={
                '__EVENTTARGET': response.xpath("//input[@name='__EVENTTARGET']/@value"),
                '__EVENTARGUMENT': response.xpath("//*[@id='__EVENTARGUMENT']/@value"),
                '__LASTFOCUS': response.xpath("//*[@id='__LASTFOCUS']/@value"),
                '__VIEWSTATE': response.xpath("//*[@id='__VIEWSTATE']/@value"),
                '__VIEWSTATEGENERATOR': "6F2EA376",
                '__PREVIOUSPAGE': response.xpath("//*[@id='__PREVIOUSPAGE']/@value"),
                '__EVENTVALIDATION': response.xpath("//*[@id='__EVENTVALIDATION']/@value"),
                'ctl00$hdnSessionIdleTime': response.xpath("//*[@id='hdnSessionIdleTime']/@value"),
                'ctl00$hdnUserUniqueId': response.xpath("//*[@id='hdnUserUniqueId']/@value"),
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationFrom_ClientState': response.xpath(
                    "//*[@id='ContentPlaceHolder1_meeDateOfRegistrationFrom_ClientState']/@value"),
                'ctl00$ContentPlaceHolder1$txtDateOfRegistrationFrom': "01/07/2020",
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationTo_ClientState':
                    response.xpath(
                        "//*[@id='ContentPlaceHolder1_meeDateOfRegistrationTo_ClientState']/@value"),
                'ctl00$ContentPlaceHolder1_txtDateOfRegistrationTo': "03/07/2020",
                'ctl00$ContentPlaceHolder1$ddlDistrict': "19409",
                'ctl00$ContentPlaceHolder1$ddlPoliceStation': "",
                'ctl00$ContentPlaceHolder1$txtFirno': "",
                'ctl00$ContentPlaceHolder1$btnSearch': "Search",
                'ctl00$ContentPlaceHolder1$ucRecordView$ddlPageSize': "0",
                'ctl00$ContentPlaceHolder1$ucGridRecordView$txtPageNumber': ""
            },
            callback=(self.after_login),

        )

    def after_login(self, response):

        police_stations = response.xpath(
            '//*[@id="ContentPlaceHolder1_lbltotalrecord"]/text()').get()
        print(police_stations)
Terminal output:

2020-07-15 15:11:37 [scrapy.utils.log] INFO: Scrapy 2.2.0 started (bot: xyz)
2020-07-15 15:11:37 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.2 (default, Apr 27 2020, 15:53:34) - [GCC 9.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f  31 Mar 2020), cryptography 2.8, Platform Linux-5.4.0-40-generic-x86_64-with-glibc2.29
2020-07-15 15:11:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2020-07-15 15:11:37 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'xyz',
 'NEWSPIDER_MODULE': 'xyz.spiders',
 'SPIDER_MODULES': ['xyz.spiders']}
2020-07-15 15:11:38 [scrapy.extensions.telnet] INFO: Telnet Password: db3dd9550774d0ab
2020-07-15 15:11:38 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-07-15 15:11:39 [scrapy.core.engine] INFO: Spider opened
2020-07-15 15:11:39 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-15 15:11:39 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-07-15 15:11:40 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> from <GET http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx>
2020-07-15 15:11:40 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> (referer: https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx)
2020-07-15 15:11:40 [scrapy.core.scraper] ERROR: Spider error processing <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> (referer: https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx)
Traceback (most recent call last):
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/defer.py", line 120, in iter_errback
    yield next(it)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 346, in __next__
    return next(self.data)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 346, in __next__
    return next(self.data)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/Documents/delet/xyz/xyz/spiders/abc.py", line 20, in parse
    yield FormRequest.from_response(
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 58, in from_response
    return cls(url=url, method=method, formdata=formdata, **kwargs)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 31, in __init__
    querystr = _urlencode(items, self.encoding)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 71, in _urlencode
    values = [(to_bytes(k, enc), to_bytes(v, enc))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 71, in <listcomp>
    values = [(to_bytes(k, enc), to_bytes(v, enc))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 104, in to_bytes
    raise TypeError('to_bytes must receive a str or bytes '
TypeError: to_bytes must receive a str or bytes object, got Selector
2020-07-15 15:11:40 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-15 15:11:40 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 648,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 8150,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/302': 1,
 'elapsed_time_seconds': 1.116569,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 7, 15, 9, 41, 40, 607840),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'memusage/max': 52281344,
 'memusage/startup': 52281344,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'spider_exceptions/TypeError': 1,
 'start_time': datetime.datetime(2020, 7, 15, 9, 41, 39, 491271)}
2020-07-15 15:11:40 [scrapy.core.engine] INFO: Spider closed (finished)
The Scrapy spider raises the following error:

raise TypeError('to_bytes must receive a str or bytes '
TypeError: to_bytes must receive a str or bytes object, got Selector
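The message itself points at the cause: `response.xpath(...)` returns a `Selector` (or `SelectorList`), not a string, and Scrapy's form encoding only accepts `str`/`bytes` values. The following minimal sketch reproduces the failure and the fix; `to_bytes` and `FakeSelector` here are simplified stand-ins written for illustration, not Scrapy's real internals:

```python
def to_bytes(value, encoding="utf-8"):
    # Mirrors the check in scrapy.utils.python.to_bytes:
    # only str or bytes are accepted as form values.
    if isinstance(value, bytes):
        return value
    if isinstance(value, str):
        return value.encode(encoding)
    raise TypeError(
        f"to_bytes must receive a str or bytes object, got {type(value).__name__}"
    )


class FakeSelector:
    """Tiny stand-in for the parsel Selector that response.xpath() returns."""

    def __init__(self, value):
        self._value = value

    def get(self, default=None):
        # parsel's .get() returns the first extracted string, or `default`.
        return self._value if self._value is not None else default


sel = FakeSelector("6F2EA376")
try:
    to_bytes(sel)            # what the formdata dict effectively does
except TypeError as exc:
    print(exc)               # same shape of error as in the traceback
print(to_bytes(sel.get()))   # extracting the string first succeeds
```

This is why every `response.xpath(...)` value placed directly into `formdata` triggers the traceback, while the plain string values (e.g. `"6F2EA376"`, `"Search"`) would be fine.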
What I want:

A string as output, which includes the total number of records.

What have I tried?

I looked through some other related questions, but they did not solve the problem I am facing.
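For reference, the usual remedy is to extract a plain string before putting a value into `formdata`, e.g. by appending `.get(default='')` to every `response.xpath(...)` call (note also that `FormRequest.from_response` pre-fills hidden `__VIEWSTATE`-style fields from the page's form on its own, so many of them can simply be omitted). A small helper along these lines sketches the conversion; `as_form_value` and `FakeSelector` are hypothetical names introduced here for illustration, not part of Scrapy:

```python
def as_form_value(value, default=""):
    """Coerce a formdata value to the plain str that form encoding needs.

    Accepts a str as-is, or any Selector-like object exposing .get()
    (which is what response.xpath(...) returns).
    """
    if isinstance(value, str):
        return value
    get = getattr(value, "get", None)
    if callable(get):
        extracted = get()
        # A field missing from the page yields None; fall back to default.
        return extracted if extracted is not None else default
    raise TypeError(f"unsupported form value: {type(value).__name__}")


class FakeSelector:
    """Tiny stand-in for parsel's Selector, for demonstration only."""

    def __init__(self, value):
        self._value = value

    def get(self, default=None):
        return self._value if self._value is not None else default


formdata = {
    "__VIEWSTATEGENERATOR": "6F2EA376",           # already a str
    "__EVENTTARGET": FakeSelector(None),          # field absent on the page
    "__EVENTVALIDATION": FakeSelector("abc123"),  # normal extracted value
}
clean = {k: as_form_value(v) for k, v in formdata.items()}
print(clean)
```

Separately, one key in the posted formdata, `'ctl00$ContentPlaceHolder1_txtDateOfRegistrationTo'`, uses an underscore where its sibling keys use `$`; that looks like a typo worth checking.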

终端输出:

import scrapy
from scrapy import FormRequest


class abcSpider(scrapy.Spider):
    name = 'abc'
    allowed_domains = ['citizen.mahapolice.gov.in']

    def start_requests(self):
        yield scrapy.Request(
            url='http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx',
            headers={
                'Referer': 'https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx'
            },
            callback=self.parse
        )

    def parse(self, response):

        yield FormRequest.from_response(
            response,
            formid='form1',
            formdata={
                '__EVENTTARGET': response.xpath("//input[@name='__EVENTTARGET']/@value"),
                '__EVENTARGUMENT': response.xpath("//*[@id='__EVENTARGUMENT']/@value"),
                '__LASTFOCUS': response.xpath("//*[@id='__LASTFOCUS']/@value"),
                '__VIEWSTATE':response.xpath("//*[@id='__VIEWSTATE']/@value"),
                '__VIEWSTATEGENERATOR': "6F2EA376",
                '__PREVIOUSPAGE': response.xpath("//*[@id='__PREVIOUSPAGE']/@value"),
                '__EVENTVALIDATION': response.xpath("//*[@id='__EVENTVALIDATION']/@value"),
                'ctl00$hdnSessionIdleTime': response.xpath("//*[@id='hdnSessionIdleTime']/@value"),
                'ctl00$hdnUserUniqueId': response.xpath("//*[@id='hdnUserUniqueId']/@value"),
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationFrom_ClientState': response.xpath(
                    "//*[@id='ContentPlaceHolder1_meeDateOfRegistrationFrom_ClientState']/@value"),
                'ctl00$ContentPlaceHolder1$txtDateOfRegistrationFrom': "01/07/2020",
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationTo_ClientState':
                    response.xpath(
                        "//*[@id='ContentPlaceHolder1_meeDateOfRegistrationTo_ClientState']/@value"),
                'ctl00$ContentPlaceHolder1_txtDateOfRegistrationTo': "03/07/2020",
                'ctl00$ContentPlaceHolder1$ddlDistrict': "19409",
                'ctl00$ContentPlaceHolder1$ddlPoliceStation': "",
                'ctl00$ContentPlaceHolder1$txtFirno': "",
                'ctl00$ContentPlaceHolder1$btnSearch': "Search",
                'ctl00$ContentPlaceHolder1$ucRecordView$ddlPageSize': "0",
                'ctl00$ContentPlaceHolder1$ucGridRecordView$txtPageNumber': ""
            },
            callback=self.after_login,
        )

    def after_login(self, response):

        police_stations = response.xpath(
            '//*[@id="ContentPlaceHolder1_lbltotalrecord"]/text()').get()
        print(police_stations)
2020-07-15 15:11:37 [scrapy.utils.log] INFO: Scrapy 2.2.0 started (bot: xyz)
2020-07-15 15:11:37 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.2 (default, Apr 27 2020, 15:53:34) - [GCC 9.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f  31 Mar 2020), cryptography 2.8, Platform Linux-5.4.0-40-generic-x86_64-with-glibc2.29
2020-07-15 15:11:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2020-07-15 15:11:37 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'xyz',
 'NEWSPIDER_MODULE': 'xyz.spiders',
 'SPIDER_MODULES': ['xyz.spiders']}
2020-07-15 15:11:38 [scrapy.extensions.telnet] INFO: Telnet Password: db3dd9550774d0ab
2020-07-15 15:11:38 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-07-15 15:11:39 [scrapy.core.engine] INFO: Spider opened
2020-07-15 15:11:39 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-15 15:11:39 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-07-15 15:11:40 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> from <GET http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx>
2020-07-15 15:11:40 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> (referer: https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx)
2020-07-15 15:11:40 [scrapy.core.scraper] ERROR: Spider error processing <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> (referer: https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx)
Traceback (most recent call last):
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/defer.py", line 120, in iter_errback
    yield next(it)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 346, in __next__
    return next(self.data)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 346, in __next__
    return next(self.data)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/Documents/delet/xyz/xyz/spiders/abc.py", line 20, in parse
    yield FormRequest.from_response(
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 58, in from_response
    return cls(url=url, method=method, formdata=formdata, **kwargs)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 31, in __init__
    querystr = _urlencode(items, self.encoding)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 71, in _urlencode
    values = [(to_bytes(k, enc), to_bytes(v, enc))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 71, in <listcomp>
    values = [(to_bytes(k, enc), to_bytes(v, enc))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 104, in to_bytes
    raise TypeError('to_bytes must receive a str or bytes '
TypeError: to_bytes must receive a str or bytes object, got Selector
2020-07-15 15:11:40 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-15 15:11:40 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 648,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 8150,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/302': 1,
 'elapsed_time_seconds': 1.116569,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 7, 15, 9, 41, 40, 607840),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'memusage/max': 52281344,
 'memusage/startup': 52281344,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'spider_exceptions/TypeError': 1,
 'start_time': datetime.datetime(2020, 7, 15, 9, 41, 39, 491271)}
2020-07-15 15:11:40 [scrapy.core.engine] INFO: Spider closed (finished)
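The TypeError at the bottom of the traceback comes from a type check inside scrapy.utils.python.to_bytes: every formdata value must already be a str (or bytes), but a bare response.xpath(...) call returns a Selector object. A simplified sketch of that check (not Scrapy's exact source, with a minimal stand-in Selector class) shows why calling .get() on the selector result avoids the error:

```python
class Selector:
    """Minimal stand-in for parsel/Scrapy's Selector: .get() yields the string."""
    def __init__(self, value):
        self._value = value

    def get(self, default=None):
        # Return the matched text, or the default when nothing matched
        return self._value if self._value is not None else default


def to_bytes(text, encoding='utf-8'):
    # Simplified mirror of scrapy.utils.python.to_bytes:
    # anything other than str/bytes is rejected with a TypeError.
    if isinstance(text, bytes):
        return text
    if not isinstance(text, str):
        raise TypeError('to_bytes must receive a str or bytes '
                        'object, got %s' % type(text).__name__)
    return text.encode(encoding)


sel = Selector('some_viewstate_value')   # what response.xpath(...) hands back
print(to_bytes(sel.get(default='')))     # str -> encodes fine
try:
    to_bytes(sel)                        # Selector -> the error in the log above
except TypeError as exc:
    print(exc)
```

This is why each formdata entry in the spider should end with .get() (or .get(default='') to fall back to an empty string when the element is missing) before being passed to FormRequest.from_response.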