
Python: how do I scrape links across all of my web pages?


So far I have this code, which uses Scrapy to extract the text from each page URL:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "dialpad"

    def start_requests(self):
        urls = [
            'https://help.dialpad.com/hc/en-us/categories/201278063-User-Support',
            'https://www.domo.com/',
            'https://www.zenreach.com/',
            'https://www.trendkite.com/',
            'https://peloton.com/',
            'https://ting.com/',
            'https://www.cedar.com/',
            'https://tophat.com/',
            'https://www.bambora.com/en/ca/',
            'https://www.hoteltonight.com/'
        ]
        for url in urls:
            BASE_URL = url
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Name the output file after the domain of the crawled page
        page = response.url.split("/")[2]
        filename = 'quotes-thing-{}.csv'.format(page)
        BASE_URL = response.url

        # with open(filename, 'wb') as f:
        #     f.write(response.body)

        # Write every text node in the page body to the file
        with open(filename, 'w') as f:
            for selector in response.css('body').xpath('.//text()'):
                selector = selector.extract()
                f.write(selector)
How can I also scrape data from the links on those pages and write it to the file name I create?

You can use CrawlSpider to extract every link and scrape them; your code could look like this:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class QuotesSpider(CrawlSpider):
    name = "dialpad"

    start_urls = [
        'https://help.dialpad.com/hc/en-us/categories/201278063-User-Support',
        'https://www.domo.com/',
        'https://www.zenreach.com/',
        'https://www.trendkite.com/',
        'https://peloton.com/',
        'https://ting.com/',
        'https://www.cedar.com/',
        'https://tophat.com/',
        'https://www.bambora.com/en/ca/',
        'https://www.hoteltonight.com/'
    ]

    rules = [
        Rule(
            LinkExtractor(
                allow=(r'url patterns here to follow'),
                deny=(r'other url patterns to deny'),
            ),
            callback='parse_item',
            follow=True,
        )
    ]

    def parse_item(self, response):
        page = response.url.split("/")[2]
        filename = 'quotes-thing-{}.csv'.format(page)

        with open(filename, 'w') as f:
            for selector in response.css('body').xpath('.//text()'):
                selector = selector.extract()
                f.write(selector)
That said, I'd recommend creating a separate spider for each website and using the allow and deny parameters to pick exactly which links get extracted on each site.
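For example, a spider dedicated just to the Dialpad help centre could look like the sketch below. The class name and the allow/deny patterns are illustrative assumptions and would have to be adjusted to the real URL structure of each site.

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class DialpadHelpSpider(CrawlSpider):
    # One spider per site keeps the allow/deny patterns short and readable.
    name = "dialpad_help"
    allowed_domains = ["help.dialpad.com"]
    start_urls = [
        'https://help.dialpad.com/hc/en-us/categories/201278063-User-Support',
    ]

    rules = [
        Rule(
            LinkExtractor(
                allow=(r'/hc/en-us/articles/',),   # hypothetical: only follow article pages
                deny=(r'/hc/en-us/requests/',),    # hypothetical: skip ticket/request pages
            ),
            callback='parse_item',
            follow=True,
        )
    ]

    def parse_item(self, response):
        page = response.url.split("/")[2]
        filename = 'quotes-thing-{}.csv'.format(page)
        # Append instead of overwrite, since many pages share the same domain
        with open(filename, 'a') as f:
            for text in response.css('body').xpath('.//text()').extract():
                f.write(text)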

Also, it would be better to use it.
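If the suggestion is to let Scrapy itself handle the output rather than opening files by hand in the callback, the usual pattern is to yield items and configure a feed export. A rough sketch, assuming Scrapy 2.1 or newer for the FEEDS setting (older versions use FEED_URI and FEED_FORMAT instead):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class QuotesSpider(CrawlSpider):
    name = "dialpad"
    start_urls = ['https://www.domo.com/']  # one site per spider, as suggested above

    # Assumed feed-export settings: every item this spider yields goes into one CSV file.
    custom_settings = {
        'FEEDS': {
            'domo-text.csv': {'format': 'csv'},
        },
    }

    rules = [
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        # Yield a plain dict instead of writing files manually;
        # the FEEDS setting above turns these into CSV rows.
        yield {
            'url': response.url,
            'text': ' '.join(response.css('body').xpath('.//text()').extract()),
        }

Running scrapy crawl dialpad would then produce domo-text.csv without any explicit file handling in the spider.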