Python Scrapy: crawling the start URL causes problems


If I define the homepage in my start URLs, Scrapy does not crawl the target page and the "if" check in the parse_item function is never hit (the check is for 'someurl.com/medical/patient-info'). But when I supply that same page URL as the start URL (i.e. start_urls = 'someurl.com/medical/patient-info'), it does crawl it and the check in parse_item below is hit:

      from scrapy.spider import BaseSpider
      from scrapy.contrib.spiders.init import InitSpider
      from scrapy.http import Request, FormRequest
      from scrapy.selector import HtmlXPathSelector
      from tutorial.items import DmozItem
      from scrapy.contrib.spiders import CrawlSpider, Rule
      from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
      import urlparse
      from scrapy import log

      class MySpider(CrawlSpider):

          items = []
          failed_urls = []
          duplicate_responses = []

          name = 'myspiders'
          allowed_domains = ['someurl.com']
          login_page = 'someurl.com/login_form'
          start_urls = 'someurl.com/'  # facing the problem with this URL here

          rules = [Rule(SgmlLinkExtractor(deny=('logged_out', 'logout',)), follow=True, callback='parse_item')]

          def start_requests(self):
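              # Log in before crawling: fetch the login form first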

              yield Request(
                  url=self.login_page,
                  callback=self.login,
                  dont_filter=False
                  )


          def login(self, response):
              """Generate a login request."""
              return FormRequest.from_response(response,
                formnumber=1,
                formdata={'username': 'username', 'password': 'password' },
                callback=self.check_login_response)


          def check_login_response(self, response):
              """Check the response returned by a login request to see if we are
              successfully logged in.
              """
              if "Logout" in response.body:
                  self.log("Successfully logged in. Let's start crawling! :%s" % response, level=log.INFO)
                  self.log("Response Url : %s" % response.url, level=log.INFO)

                  return Request(url=self.start_urls)
              else:
                  self.log("Bad times :(", loglevel=log.INFO)


          def parse_item(self, response):

              # Scrape data from page
              hxs = HtmlXPathSelector(response)

              self.log('response came in from : %s' % (response), level=log.INFO)

              # check for some important page to crawl
              if response.url == 'someurl.com/medical/patient-info':

                  self.log('yes I am here', level=log.INFO)

                  urls = hxs.select('//a/@href').extract()
                  urls = list(set(urls))

                  for url in urls:

                      self.log('URL extracted : %s' % url, level=log.INFO)

                      item = DmozItem()

                      if response.status == 404 or response.status == 500:
                          self.failed_urls.append(response.url)
                          self.log('failed_url : %s' % self.failed_urls, level=log.INFO)
                          item['failed_urls'] = self.failed_urls

                      else:
                          if url.startswith('http'):
                              if url.startswith('someurl.com'):
                                  item['internal_link'] = url
                                  self.log('internal_link :%s' % url, level=log.INFO)
                              else:
                                  item['external_link'] = url
                                  self.log('external_link :%s' % url, level=log.INFO)

                      self.items.append(item)

                  self.items = list(set(self.items))
                  return self.items
              else:
                  self.log('did not receive expected response', level=log.INFO)

I think start_urls must be a list.

Try the following:

      start_urls = ['http://www.someurl.com/',]
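
For completeness, a minimal sketch (reusing the names from the question, with a hypothetical domain) of how the fix propagates: Scrapy iterates start_urls, so a bare string would be walked character by character, and Request() takes a single absolute URL string, so check_login_response has to iterate the list as well:

      from scrapy.http import Request
      from scrapy.contrib.spiders import CrawlSpider

      class MySpider(CrawlSpider):

          # A list of absolute URLs: Scrapy iterates start_urls, so a
          # plain string would be treated as one-character "URLs"
          start_urls = ['http://www.someurl.com/']

          def check_login_response(self, response):
              """Kick off the real crawl once the login has succeeded."""
              if "Logout" in response.body:
                  # Request() expects one URL string, not the whole list
                  return [Request(url=url) for url in self.start_urls]

Returning a list of Request objects from a callback is fine; Scrapy schedules each of them.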

If you don't want the question to be closed, you need to provide more detail: for example, the code you are using, the exact problem you are facing, what does not work, and so on.

@isdev: I have revised my question and updated the code. Let me know your opinion on my query.