Python Scrapy: a tiny "scrapy" "spider" within a spider?

So, while I was trying to grab product review information from epinions.com, I found that if the main review text is too long, there is a "read more" link to another page. I grabbed an example; if you look at the first review you will see what I mean.

I was wondering: is it possible to have a tiny spider inside each iteration of the for loop, to grab the URL and scrape the review from the new link? I have the following code, but it does not work for the tiny "spider".

Here is my code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from epinions_test.items import EpinionsTestItem
from scrapy.http import Response, HtmlResponse

class MySpider(BaseSpider):
    name = "epinions"
    allow_domains = ["epinions.com"]
    start_urls = ['http://www.epinions.com/reviews/samsung-galaxy-note-16-gb-cell-phone/pa_~1']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@class="review_info"]')

        items = []
        for sites in sites:
            item = EpinionsTestItem()
            item["title"] = sites.select('h2/a/text()').extract()
            item["star"] = sites.select('span/a/span/@title').extract()
            item["date"] = sites.select('span/span/span/@title').extract()
            item["review"] = sites.select('p/span/text()').extract()
# Everything works fine and I do have those four columns beautifully printed out, until....

            url2 = sites.select('p/span/a/@href').extract()
            url = str("http://www.epinions.com%s" %str(url2)[3:-2])
# This URL is a string. When I print it out, it looks like "http://www.epinions.com/review/samsung-galaxy-note-16-gb-cell-phone/content_624031731332", which looks legit.

            response2 = HtmlResponse(url)
# I tried it in a scrapy shell, and it shows that this is an HtmlResponse...

            hxs2 = HtmlXPathSelector(response2)
            fullReview = hxs2.select('//div[@class = "user_review_full"]')
            item["url"] = fullReview.select('p/text()').extract()
# The three lines above work in an independent spider, where start_urls is changed to the URL just generated.
# However, I got nothing from item["url"] in this code.

            items.append(item)
        return items
Why does item["url"] return nothing?


Thanks!

You should instantiate a new Request in the callback and pass the item along in meta (a hand-built HtmlResponse is never sent through the downloader, so its body is empty and selectors on it find nothing):
from scrapy.http import Request
from scrapy.item import Item, Field
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector


class EpinionsTestItem(Item):
    title = Field()
    star = Field()
    date = Field()
    review = Field()


class MySpider(BaseSpider):
    name = "epinions"
    allowed_domains = ["epinions.com"]
    start_urls = ['http://www.epinions.com/reviews/samsung-galaxy-note-16-gb-cell-phone/pa_~1']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@class="review_info"]')

        for site in sites:
            item = EpinionsTestItem()
            item["title"] = site.select('h2/a/text()').extract()
            item["star"] = site.select('span/a/span/@title').extract()
            item["date"] = site.select('span/span/span/@title').extract()

            # Build the absolute "read more" URL from the extracted relative href.
            url = site.select('p/span/a/@href').extract()
            url = str("http://www.epinions.com%s" % str(url)[3:-2])

            # Schedule a real request for the full-review page and pass the
            # partially filled item along in meta so the callback can complete it.
            yield Request(url=url, callback=self.parse_url2, meta={'item': item})

    def parse_url2(self, response):
        hxs = HtmlXPathSelector(response)

        # Retrieve the item started in parse() and fill in the full review text.
        item = response.meta['item']
        fullReview = hxs.select('//div[@class = "user_review_full"]')
        item["review"] = fullReview.select('p/text()').extract()
        yield item
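
To see why the original approach came up empty (a quick illustration of my own, not part of the original answer): an HtmlResponse built by hand never goes through Scrapy's downloader, so its body stays empty and any XPath run against it matches nothing.

from scrapy.http import HtmlResponse

# Constructing a response object manually does not perform an HTTP request:
# the body defaults to an empty string, so selectors can never match anything.
response2 = HtmlResponse(url="http://www.epinions.com/review/samsung-galaxy-note-16-gb-cell-phone/content_624031731332")
print repr(response2.body)  # '' -- nothing was downloaded

Only Request objects yielded back to the engine are actually downloaded and handed to a callback.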
See also the Scrapy documentation on passing additional data to callback functions.


Hope it helps.
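
As a side note (my own sketch, not from the original answer): the str(url)[3:-2] slicing trick can be replaced with urlparse.urljoin, which joins the first extracted href against response.url and degrades gracefully when a review has no "read more" link. The helper name below is hypothetical:

import urlparse  # Python 2 standard library, consistent with the old Scrapy API used above

def absolute_review_url(response, site):
    # Join the first extracted "read more" href (if any) against the page URL,
    # instead of slicing the string representation of the extracted list.
    hrefs = site.select('p/span/a/@href').extract()
    return urlparse.urljoin(response.url, hrefs[0]) if hrefs else None

Inside parse(), its return value would simply replace the hand-built string before the Request is yielded (skipping the yield when it is None).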


This helps... a lot. Thank you so much!! I'm reading the docs on callbacks and hope I can figure it out too :D