XPath: how to get the whole document with hxs.select in Scrapy

I've been at this for 12 hours now, and I'm hoping someone can lend a hand.

Here is my code. All I want, as each page is crawled, is the anchor text and URL of every link:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.utils.url import urljoin_rfc
from scrapy.utils.response import get_base_url
from urlparse import urljoin

#from scrapy.item import Item
from tutorial.items import DmozItem

class HopitaloneSpider(CrawlSpider):
    name = 'dmoz'
    allowed_domains = ['domain.co.uk']
    start_urls = [
        'http://www.domain.co.uk'
    ]

    rules = (
        #Rule(SgmlLinkExtractor(allow='>example\.org', )),
        Rule(SgmlLinkExtractor(allow=('\w+$', )), callback='parse_item', follow=True),
    )

    user_agent = 'Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))'

    def parse_item(self, response):
        #self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        #print response.url
        sites = hxs.select('//html')
        #item = DmozItem()
        items = []

        for site in sites:
            item = DmozItem()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            items.append(item)

        return items

What am I doing wrong... my eyes hurt by now.

To get all the links on a page:

def parse_item(self, response):
    hxs = HtmlXPathSelector(response)
    items = []
    links = hxs.select("//a")

    for link in links:
        item = DmozItem()
        # relative XPaths are evaluated against each <a> node
        item['title'] = link.select('text()').extract()
        item['link'] = link.select('@href').extract()
        items.append(item)

    return items
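As an aside, since the spider already imports SgmlLinkExtractor, a link extractor can produce the same anchor/URL pairs without any XPath at all. A minimal sketch, assuming the same DmozItem fields from the question:

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from tutorial.items import DmozItem

def parse_item(self, response):
    items = []
    # extract_links() returns Link objects carrying both the
    # base-resolved absolute URL and the anchor text
    for link in SgmlLinkExtractor().extract_links(response):
        item = DmozItem()
        item['title'] = link.text
        item['link'] = link.url
        items.append(item)
    return items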

response.body should be what you want:

def parse_item(self, response):
    #self.log('Hi, this is an item page! %s' % response.url)

    body = response.body
    item = ....
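As a sketch of how that might be filled in, assuming a hypothetical 'html' field that is not part of the question's DmozItem:

from scrapy.selector import HtmlXPathSelector
from tutorial.items import DmozItem

def parse_item(self, response):
    # raw page source, exactly as downloaded
    body = response.body
    # roughly equivalent result via the selector: serialize the <html> node
    hxs = HtmlXPathSelector(response)
    full_doc = hxs.select('//html').extract()[0]

    item = DmozItem()
    item['html'] = body  # 'html' is a hypothetical field for illustration
    return item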

Well, the full HTML lives in response.body; hxs = HtmlXPathSelector(response).

Thanks for answering my silly question, and for taking the time. This is a really steep learning curve for me. Could you spare a moment to tweak the code for me and show how to get the anchors and links from any page? I'm trying to write a generic crawler, and those are the only two elements I need.

Hi, that's exactly what I get from the script above: {'link': [], 'title': []}. Blank titles and links when I run it from the command line.
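For anyone reading this thread on a current Scrapy release: HtmlXPathSelector and SgmlLinkExtractor were deprecated and later removed. A rough equivalent of the whole spider in today's API, as a sketch (the domain and allow pattern are placeholders carried over from the question):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class HopitaloneSpider(CrawlSpider):
    name = 'dmoz'
    allowed_domains = ['domain.co.uk']
    start_urls = ['http://www.domain.co.uk']

    rules = (
        Rule(LinkExtractor(allow=(r'\w+$',)), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # response.xpath replaces HtmlXPathSelector;
        # .get() returns a single string or None
        for link in response.xpath('//a'):
            yield {
                'title': link.xpath('text()').get(),
                'link': link.xpath('@href').get(),
            }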