How to crawl a POST-dependent website with Scrapy


I am trying to crawl the insurance website www.ehealthinsurance.com. Its home page has a POST-dependent form which takes certain values and generates the next page. I am trying to pass the values, but I am unable to see the HTML source of the tags I want. Any suggestions would be a great help.

Inlined below is a rough version of the code:

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy.selector import HtmlXPathSelector


class ehealthSpider(BaseSpider):
    name = "ehealth"
    allowed_domains = ["ehealthinsurance.com/"]
    start_urls = ["http://www.ehealthinsurance.com/individual-health-insurance"]

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formname='main',
                                        formdata={'census.zipCode': '48341',
                                                  'census.requestEffectiveDate': '06/01/2013',
                                                  'census.primary.gender': 'MALE',
                                                  'census.primary.month': '12',
                                                  'census.primary.day': '01',
                                                  'census.primary.year': '1971',
                                                  'census.primary.tobacco': 'No',
                                                  'census.primary.student': 'No'}, callback=self.parseAnnonces)

    def parseAnnonces(self, response):
        hxs = HtmlXPathSelector(response)
        data = hxs.select('//div[@class="main-wrap"]').extract()
        #print encoding
        print data
This is the crawler output in the terminal:

  2013-04-30 16:34:16+0530 [elyse] DEBUG: Crawled (200) <GET http://www.ehealthinsurance.com/individual-health-insurance> (referer: None)
  2013-04-30 16:34:17+0530 [elyse] DEBUG: Filtered offsite request to 'www.ehealthinsurance.com': <POST http://www.ehealthinsurance.com/individual-health-insurance;jsessionid=F5A1123CE731FDDDC1A7A31CD46CC132.prfo23a>
  2013-04-30 16:34:17+0530 [elyse] INFO: Closing spider (finished)
  2013-04-30 16:34:17+0530 [elyse] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 257,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 32561,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 4, 30, 11, 4, 17, 22000),
     'log_count/DEBUG': 8,
     'log_count/INFO': 4,
     'request_depth_max': 1,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2013, 4, 30, 11, 4, 10, 494000)} 

Could you please help me get the required data?

The small trick of adding an intermediate request works, and the form name has also been corrected. Scrapy's best debugging tool is inspect_response(response).
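
As a usage sketch (not part of the original answer; the spider name is made up and the URL is simply reused from the question), inspect_response can be dropped into any callback to open an interactive Scrapy shell with that response preloaded, so the XPath from the question can be tested by hand before being hard-coded:

from scrapy.spider import BaseSpider
from scrapy.shell import inspect_response

class DebugSpider(BaseSpider):
    name = "debug-example"  # hypothetical spider, only to illustrate the call
    start_urls = ["http://www.ehealthinsurance.com/individual-health-insurance"]

    def parse(self, response):
        # Opens an interactive Scrapy shell with `response` preloaded (Scrapy
        # versions of that era also expose an `hxs` selector there), so
        # expressions like hxs.select('//div[@class="main-wrap"]') can be
        # tried by hand before being written into the spider.
        inspect_response(response)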


P.S. Cookies should be enabled in settings.py: COOKIES_ENABLED = True.

Comment: The problem is that you are trying to crawl before the page is actually displayed: an intermediate page is loaded in between, after the form is submitted and before the results are shown, plus a redirect. It looks difficult to scrape.

Asker's reply: Thank you. I did the same research, and the workaround I found is Selenium together with Scrapy, but I don't know how to parse the page loaded by Selenium with Scrapy? Thanks for your help.
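
To hand a Selenium-rendered page over to Scrapy's selectors, one common pattern (a minimal sketch, not from the original answer; the form-filling step is elided and the Firefox driver is only an example) is to wrap driver.page_source in an HtmlResponse and query it with the same XPath:

from selenium import webdriver
from scrapy.http import HtmlResponse
from scrapy.selector import HtmlXPathSelector

driver = webdriver.Firefox()
driver.get("http://www.ehealthinsurance.com/individual-health-insurance")
# ... fill in and submit the census form with Selenium, wait for the results ...

# Wrap the rendered HTML in a Scrapy response object so the usual selectors work.
rendered = HtmlResponse(url=driver.current_url,
                        body=driver.page_source,
                        encoding='utf-8')
hxs = HtmlXPathSelector(rendered)
data = hxs.select('//div[@class="main-wrap"]').extract()
driver.quit()

The corrected spider from the answer then looks like this: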

from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from healthinsspider.items import HealthinsspiderItem
from scrapy.shell import inspect_response
from scrapy.http import FormRequest
from scrapy.http import Request
import time

class EhealthspiderSpider(CrawlSpider):
    name = 'ehealthSpider'
    allowed_domains = ['ehealthinsurance.com']
    start_urls = ["http://www.ehealthinsurance.com/individual-health-insurance"]

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formname='form-census',
                                        formdata={'census.zipCode': '48341',
                                                  'census.requestEffectiveDate': '06/01/2013',
                                                  'census.primary.gender': 'MALE',
                                                  'census.primary.month': '12',
                                                  'census.primary.day': '01',
                                                  'census.primary.year': '1971',
                                                  'census.primary.tobacco': 'No',
                                                  'census.primary.student': 'No'}, callback=self.InterRequest,
                                                  dont_filter=True)

    def InterRequest(self, response):
        # sleep so that our request can be processed by the server, then go to the results
        time.sleep(10)
        return Request(url='https://www.ehealthinsurance.com/ehi/ifp/individual-family-health-insurance!goToScreen?referer=https%3A%2F%2Fwww.ehealthinsurance.com%2Fehi%2Fifp%2Findividual-health-insurance%3FredirectFormHTTP&sourcePage=&edit=false&ajax=false&screenName=best-sellers', dont_filter=True, callback=self.parseAnnonces)

    def parseAnnonces(self, response):
        inspect_response(response)
        hxs = Selector(response)
        data = hxs.select('//div[@class="main-wrap"]').extract()
        #print encoding
        print data
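
For completeness (a sketch under the assumptions implied by the imports, i.e. a project named healthinsspider), the cookie setting mentioned above is a single line in the project's settings.py; it keeps the session created by the form POST available when the results page is requested:

# healthinsspider/settings.py (sketch)
COOKIES_ENABLED = True

The spider can then be run from the project directory with scrapy crawl ehealthSpider.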