
Python: Unable to scrape data in a CrawlSpider class using a URL obtained from another class

Tags: python, python-3.x, web-scraping, scrapy

My CrawlSpider runs, but it closes before scraping anything. I'm using a URL returned from another function of a class. It might be an import error... not sure. Can anyone help me scrape this data successfully?

IPython shell output:

PS C:\Users\Latitude\Desktop\Shadman\Scrapy_Projects> cd AmazonScrape

PS C:\Users\Latitude\Desktop\Shadman\Scrapy_Projects\AmazonScrape>

PS C:\Users\Latitude\Desktop\Shadman\Scrapy_Projects\AmazonScrape> scrapy crawl Productfeed -o details.json

https://www.amazon.com/Casio-G-Shock-GD350-1C-Black-Resin/dp/B01C71NW9U/ref=sr_1_1?keywords=Casio+Men%27s+G-Shock+GD350-8+Grey+Resin+Sport+Watch&qid=1577591994&sr=8-1

2020-01-05 13:10:00 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: AmazonScrape)

2020-01-05 13:10:00 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.4 (default, Aug  9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.7, Platform Windows-7-6.1.7601-SP1

2020-01-05 13:10:00 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'AmazonScrape', 'FEED_FORMAT': 'json', 'FEED_URI': 'details.json', 'NEWSPIDER_MODULE': 'AmazonScrape.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['AmazonScrape.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36'}

2020-01-05 13:10:00 [scrapy.extensions.telnet] INFO: Telnet Password: 68694b467e765620

2020-01-05 13:10:00 [scrapy.middleware] INFO: Enabled extensions:

['scrapy.extensions.corestats.CoreStats',

'scrapy.extensions.telnet.TelnetConsole',

'scrapy.extensions.feedexport.FeedExporter',

'scrapy.extensions.logstats.LogStats']

2020-01-05 13:10:01 [scrapy.middleware] INFO: Enabled downloader middlewares:

['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',

'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',

'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',

'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',

'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',

'scrapy.downloadermiddlewares.retry.RetryMiddleware',

'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',

'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',

'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',

'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',

'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',

'scrapy.downloadermiddlewares.stats.DownloaderStats']

2020-01-05 13:10:01 [scrapy.middleware] INFO: Enabled spider middlewares:

['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',

'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',

'scrapy.spidermiddlewares.referer.RefererMiddleware',

'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',

'scrapy.spidermiddlewares.depth.DepthMiddleware']

2020-01-05 13:10:01 [scrapy.middleware] INFO: Enabled item pipelines:

[]

2020-01-05 13:10:01 [scrapy.core.engine] INFO: Spider opened

2020-01-05 13:10:01 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

2020-01-05 13:10:01 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023

2020-01-05 13:10:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/robots.txt> (referer: None)

2020-01-05 13:10:04 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/Casio-G-Shock-GD350-1C-Black-Resin/dp/B01C71NW9U/ref=sr_1_1?keywords=Casio+Men%27s+G-Shock+GD350-8+Grey+Resin+Sport+Watch&qid=1577591994&sr=8-1> (referer: None)

2020-01-05 13:10:04 [scrapy.core.engine] INFO: Closing spider (finished)

2020-01-05 13:10:04 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

{'downloader/request_bytes': 741,

'downloader/request_count': 2,

'downloader/request_method_count/GET': 2,

'downloader/response_bytes': 261119,

'downloader/response_count': 2,

'downloader/response_status_count/200': 2,

'elapsed_time_seconds': 3.653209,

'finish_reason': 'finished',

'finish_time': datetime.datetime(2020, 1, 5, 7, 10, 4, 701464),

'log_count/DEBUG': 2,

'log_count/INFO': 10,

'response_received_count': 2,

'robotstxt/request_count': 1,

'robotstxt/response_count': 1,

'robotstxt/response_status_count/200': 1,

'scheduler/dequeued': 1,

'scheduler/dequeued/memory': 1,

'scheduler/enqueued': 1,

'scheduler/enqueued/memory': 1,

'start_time': datetime.datetime(2020, 1, 5, 7, 10, 1, 48255)}

2020-01-05 13:10:04 [scrapy.core.engine] INFO: Spider closed (finished)

PS C:\Users\Latitude\Desktop\Shadman\Scrapy_Projects\AmazonScrape>

Do you get any error message? Show the problematic content as text (not as an image) in the question, not in a comment. First, you can use print() to see what values your variables hold, especially in start_requests. By default, Request uses the method parse to extract data from the HTML. If your method has a different name, such as parse_item, you have to pass it explicitly: Request(…, callback=self.parse_item).
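
For reference, a minimal sketch of the callback wiring described above, assuming a spider named Productfeed (as in the log) whose parsing method is called parse_item; the URL and the selector are placeholders, not the original project code:

import scrapy

class ProductfeedSpider(scrapy.Spider):
    name = 'Productfeed'

    def start_requests(self):
        # Placeholder URL for illustration only.
        url = 'https://www.amazon.com/dp/B01C71NW9U'
        # Without callback=..., Scrapy routes the response to a method named
        # `parse`; since the method here is `parse_item`, pass it explicitly.
        yield scrapy.Request(url, callback=self.parse_item)

    def parse_item(self, response):
        # Minimal extraction just to confirm the callback is reached.
        yield {'title': response.css('title::text').get()}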
Thanks, furas! My problem is solved now. For reference, this is the class that returns the URL:
from difflib import get_close_matches
import json
import yaml
import pprint


class Comparestring:
    def __init__(self):
        self.Purl = ''

        # Load the scraped product database and the product list to match against.
        with open(r'C:\Users\Latitude\Desktop\Shadman\Scrapy_Projects\database.json', encoding='utf-8') as File:
            self.data = json.load(File)

        with open(r'C:\Users\Latitude\Desktop\Shadman\Scrapy_Projects\Product_List.yaml') as file:
            ProductList = yaml.load(file, Loader=yaml.FullLoader)

        pp = pprint.PrettyPrinter(indent=4)

        # Collect every title and model number from the database.
        title_list = []
        model_list = []
        for doc in self.data:
            title_list.append(doc.get('title'))
            model_list.append(doc.get('model_no'))

        # pp.pprint(title_list)
        # pp.pprint(model_list)

        # Take the title and manufacturer model of the first product in the YAML file.
        str1 = ""
        str2 = ""
        for key, value in ProductList.items():
            if key == 'Product_1':
                str1 = value['M_title']
                str2 = value['Manufacturer_Model']
        # str1 = data2.get('Product_1').get('M_title')
        # str2 = 'GD350-1B'  # get this from the 1st product of the yaml file

        # Get the closest title matches, collect their model numbers,
        # and keep the model number that best matches str2.
        matches1 = get_close_matches(str1, title_list, n=3)
        # print(matches1)
        matches2 = []
        for doc in self.data:
            if doc.get('title') in matches1:
                matches2.append(doc.get('model_no'))
        # print(matches2)
        self.matches3 = get_close_matches(str2, matches2, n=1)
        # print(self.matches3)

    def Final_Product(self):
        # Return the product_url of the entry whose model_no is the best match.
        for doc in self.data:
            if doc.get('model_no') in self.matches3:
                self.Purl = doc.get('product_url')
                # print(self.Purl)
        return self.Purl


# product_url = Comparestring()
# URL = product_url.Final_Product()
# print(URL)
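
Putting the pieces together, a rough sketch of how the spider could consume the URL returned by Comparestring; the import path and the parse_item selectors are assumptions, not the original project layout:

import scrapy
from AmazonScrape.compare import Comparestring  # assumed module path

class ProductfeedSpider(scrapy.Spider):
    name = 'Productfeed'

    def start_requests(self):
        # Build the target URL in the helper class and hand it to Scrapy with
        # an explicit callback so the response reaches parse_item.
        url = Comparestring().Final_Product()
        if url:
            yield scrapy.Request(url, callback=self.parse_item)

    def parse_item(self, response):
        # Placeholder extraction; the real selectors depend on the page layout.
        yield {'product_url': response.url, 'title': response.css('title::text').get()}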