
Python: How to loop through start URLs from a CSV file in Scrapy


So, the first time I ran the spider it basically worked, for whatever reason, but after that it only ever scraped one URL.

- My program scrapes the parts I want from a list

- Turns the part list into URLs in a file

- Runs, grabs the data I need, and writes it to a CSV file

The problem: I only get output from one URL, and I don't know where to go from here. I've checked other resources and tried issuing start requests, but the result is still the same.

So, basically, how can I get it to use all of the start_urls and iterate through each of them, instead of just the last one?
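For context, the usual way to loop a file of URLs is either a start_urls list (as in the spider below) or a start_requests override that yields one Request per line. A minimal sketch of the latter, reusing the templist.csv written by the script below:

import scrapy

class DigikeSpider(scrapy.Spider):
    name = 'digike'

    def start_requests(self):
        # Scrapy schedules every Request yielded here and calls
        # self.parse with each response as it arrives
        with open('templist.csv') as f:
            for line in f:
                url = line.strip()
                if url:
                    yield scrapy.Request(url, callback=self.parse)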

Here is the spider:

import csv
import xlrd
import scrapy

wb = xlrd.open_workbook(r'C:\Users\Jatencio\PycharmProjects\testy\test.xlsx')
ws = wb.sheet_by_index(0)
mylist = ws.col_values(0)
print(mylist)

li = []
for el in mylist:
    baseparts = el[:5]
    url1 = 'https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=' + baseparts + '&pageSize=500&pkeyword=' + baseparts
    li.append(url1)
final = list(set(li))
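# note: set() removes duplicate URLs, but the resulting order is arbitrary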


file = open('templist.csv','w+',newline='')
with file:
    write = csv.writer(file, delimiter =',')
    write.writerows(x.split(',') for x in final)
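    # each URL becomes one row; split(',') is effectively a no-op here,
    # since the generated URLs contain no literal commas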

class DigikeSpider(scrapy.Spider):
    name = 'digike'
    allowed_domains = ['digikey.com']
    custom_settings = {
        "USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"

    }

    with open('templist.csv') as file:
        start_urls = [line.strip() for line in file]
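    # note: this runs once, at class-definition time, building one start URL
    # per line of templist.csv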

    def parse(self, response):
        data = {}
        parts1 = []
        # parts=response.css('Table#productTable.productTable')
        for p in response.css('tbody#lnkPart > tr'):

            if p.css('td.tr-mfgPartNumber span::text').get() not in mylist:
                continue

            else:
                parts1 = p.css('td.tr-mfgPartNumber span::text').get()

            if p.css('td.tr-minQty.ptable-param span.desktop::text').get():
                quantity = p.css('td.tr-minQty.ptable-param span.desktop::text').get()
                quantity = quantity.strip()
                cleaned_quantity = int(quantity.replace(',', ''))
            else:
                quantity = 'No quantity'
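                # note: cleaned_quantity is never assigned on this branch, so
                # the yield at the bottom raises UnboundLocalError for such rows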

            if p.css('td.tr-unitPrice.ptable-param center::text').get() == 'Active':
                p.css('td.tr-mfgPartNumber span::text').remove()
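                # note: .remove() on a ::text pseudo-element raises
                # CannotRemoveElementWithoutRoot; the same applies to the
                # two identical calls below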

            else:
                pass

            if p.css('td.tr-unitPrice.ptable-param center::text').get() == 'Obsolete':
                p.css('td.tr-mfgPartNumber span::text').remove()

            else:
                pass

            if p.css('td.tr-unitPrice.ptable-param center::text').get() == 'Discontinued at Digi-Key':
                p.css('td.tr-mfgPartNumber span::text').remove()

            else:
                pass

            if p.css('td.tr-unitPrice.ptable-param span::text').get():
                price = p.css('td.tr-unitPrice.ptable-param span::text').get()
                cleaned_price = price.strip()
            else:
                price = 'No Price'

            if p.css('td.tr-qtyAvailable.ptable-param span.desktop::text').get():
                stock = p.css('td.tr-qtyAvailable.ptable-param span.desktop::text').get()
                cleaned_stock = stock.strip()

            else:
                pass
            if p.css('#part-status ::text').get():
                status = p.css('#part-status ::text').get()
                cleaned_status = status.strip()

            else:
                pass

            yield {
                'Part': parts1,
                'Quantity': cleaned_quantity,
                'Price': cleaned_price,
                'Stock': cleaned_stock,
                'Status': cleaned_status,

            }

Output:

2020-07-30 10:12:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=IS62L&pageSize=500&pkeyword=IS62L> (referer: None)
2020-07-30 10:12:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY622&pageSize=500&pkeyword=CY622> (referer: None)
2020-07-30 10:12:11 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY622&pageSize=500&pkeyword=CY622> (referer: None)
Traceback (most recent call last):
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\utils\defer.py", line 120, in iter_errback
    yield next(it)
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\utils\python.py", line 346, in __next__
    return next(self.data)
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\utils\python.py", line 346, in __next__
    return next(self.data)
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\Jatencio\PycharmProjects\testy\testdigi\testdigi\spiders\digike.py", line 93, in parse
    'Quantity': cleaned_quantity,
UnboundLocalError: local variable 'cleaned_quantity' referenced before assignment
2020-07-30 10:12:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=IS62C&pageSize=500&pkeyword=IS62C> (referer: None)
2020-07-30 10:12:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=IS62W&pageSize=500&pkeyword=IS62W> (referer: None)
2020-07-30 10:12:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY621&pageSize=500&pkeyword=CY621> (referer: None)
2020-07-30 10:12:17 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY621&pageSize=500&pkeyword=CY621>
{'Part': 'CY62128ELL-45SXIT', 'Quantity': 1000, 'Price': '$2.29429', 'Stock': '1,000 - Immediate', 'Status': 'Active'}
2020-07-30 10:12:17 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY621&pageSize=500&pkeyword=CY621>
{'Part': 'CY62157EV30LL-45ZSXIT', 'Quantity': 1000, 'Price': '$6.44254', 'Stock': '2,000 - Immediate', 'Status': 'Active'}
2020-07-30 10:12:17 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY621&pageSize=500&pkeyword=CY621>

2020-07-30 10:12:17 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY621&pageSize=500&pkeyword=CY621> (referer: None)
Traceback (most recent call last):
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\parsel\selector.py", line 368, in remove
    parent = self.root.getparent()
AttributeError: 'str' object has no attribute 'getparent'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\utils\defer.py", line 120, in iter_errback
    yield next(it)
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\utils\python.py", line 346, in __next__
    return next(self.data)
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\utils\python.py", line 346, in __next__
    return next(self.data)
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\Jatencio\PycharmProjects\testy\testdigi\testdigi\spiders\digike.py", line 55, in parse
    p.css('td.tr-mfgPartNumber span::text').remove()
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\parsel\selector.py", line 164, in remove
    x.remove()
  File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\parsel\selector.py", line 371, in remove
    raise CannotRemoveElementWithoutRoot(
parsel.selector.CannotRemoveElementWithoutRoot: The node you're trying to remove has no root, are you trying to remove a pseudo-element? Try to use 'li' as a selector instead of 'li::text' or '//li' instead of '//li/text()', for example.
2020-07-30 10:12:17 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-30 10:12:17 [scrapy.extensions.feedexport] INFO: Stored csv feed (40 items) in: DigiKeyPartsList.csv

Output from a later run:

2020-07-30 12:51:31 [scrapy.utils.log] INFO: Scrapy 2.2.1 started (bot: testdigi)
2020-07-30 12:51:31 [scrapy.utils.log] INFO: Versions: lxml 4.5.2.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 3.0, Platform Windows-10-10.0.17134-SP0
2020-07-30 12:51:31 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-07-30 12:51:31 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'testdigi',
 'NEWSPIDER_MODULE': 'testdigi.spiders',
 'SPIDER_MODULES': ['testdigi.spiders'],
 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
               '(KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'}
2020-07-30 12:51:31 [scrapy.extensions.telnet] INFO: Telnet Password: 4abf97dccc166f2d
2020-07-30 12:51:31 [py.warnings] WARNING: c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\scrapy\extensions\feedexport.py:210: ScrapyDeprecationWarning: The `FEED_URI` and `FEED_FORMAT` settings have been deprecated in favor of the `FEEDS` setting. Please see the `FEEDS` setting docs for more details
  exporter = cls(crawler)

2020-07-30 12:51:31 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2020-07-30 12:51:32 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-07-30 12:51:32 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-07-30 12:51:32 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-07-30 12:51:32 [scrapy.core.engine] INFO: Spider opened
2020-07-30 12:51:32 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-30 12:51:32 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-07-30 12:51:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=IS62L&pageSize=500&pkeyword=IS62L> (referer: None)
2020-07-30 12:51:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=IS62C&pageSize=500&pkeyword=IS62C> (referer: None)
2020-07-30 12:51:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY622&pageSize=500&pkeyword=CY622> (referer: None)
2020-07-30 12:51:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY622&pageSize=500&pkeyword=CY622>
{'Part': 'CY62256NLL-55ZXIT', 'Quantity': 'No quantity', 'Price': '$1.11989', 'Stock': '0', 'Status': 'Obsolete'}
2020-07-30 12:51:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=IS62W&pageSize=500&pkeyword=IS62W> (referer: None)
2020-07-30 12:51:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY622&pageSize=500&pkeyword=CY622>
{'Part': 'CY62256VNLL-70ZXIT', 'Quantity': 'No quantity', 'Price': 'No Price', 'Stock': '0', 'Status': 'Obsolete'}
2020-07-30 12:51:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY622&pageSize=500&pkeyword=CY622>
{'Part': 'CY62256NLL-55SNXIT', 'Quantity': 'No quantity', 'Price': 'No Price', 'Stock': '0', 'Status': 'Obsolete'}
2020-07-30 12:51:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY622&pageSize=500&pkeyword=CY622>
{'Part': 'CY62256VNLL-70SNXIT', 'Quantity': 'No quantity', 'Price': 'No Price', 'Stock': '0', 'Status': 'Obsolete'}
2020-07-30 12:51:34 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikey.com/products/en/integrated-circuits-ics/memory/774?FV=-8%7C774%2C7%7C1&quantity=0&ColumnSort=0&page=1&k=CY621&pageSize=500&pkeyword=CY621> (referer: None)
{'Part': 'CY62148EV30LL-45ZSXIT', 'Quantity': 1000, 'Price': 'No Price', 'Stock': '0', 'Status': 'Active'}
2020-07-30 12:51:34 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-30 12:51:34 [scrapy.extensions.feedexport] INFO: Stored csv feed (46 items) in: DigiKeyPartsList.csv
2020-07-30 12:51:34 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2145,
 'downloader/request_count': 5,
 'downloader/request_method_count/GET': 5,
 'downloader/response_bytes': 289446,
 'downloader/response_count': 5,
 'downloader/response_status_count/200': 5,
 'elapsed_time_seconds': 2.311786,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 7, 30, 16, 51, 34, 681758),
 'item_scraped_count': 46,
 'log_count/DEBUG': 51,
 'log_count/INFO': 11,
 'log_count/WARNING': 1,
 'response_received_count': 5,
 'scheduler/dequeued': 5,
 'scheduler/dequeued/memory': 5,
 'scheduler/enqueued': 5,
 'scheduler/enqueued/memory': 5,
 'start_time': datetime.datetime(2020, 7, 30, 16, 51, 32, 369972)}
2020-07-30 12:51:34 [scrapy.core.engine] INFO: Spider closed (finished)

(venv) C:\Users\Jatencio\PycharmProjects\testy\testdigi\testdigi>

File "C:\Users\Jatencio\PycharmProjects\testy\testdigi\testdigi\spiders\digike.py", line 93, in parse
    'Quantity': cleaned_quantity,
UnboundLocalError: local variable 'cleaned_quantity' referenced before assignment
        if p.css('td.tr-minQty.ptable-param span.desktop::text').get():
            quantity = p.css('td.tr-minQty.ptable-param span.desktop::text').get()
            quantity = quantity.strip()
            cleaned_quantity = int(quantity.replace(',', ''))
        else:
            quantity = 'No quantity'
        yield {
            'Part': parts1,
            'Quantity': cleaned_quantity,
            'Price': cleaned_price,
            'Stock': cleaned_stock,
            'Status': cleaned_status,
        }
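One fix is to assign a default to every cleaned_* field before the branches, so the yield always has a value to use. A minimal sketch against the selectors above (the 'No Stock' and 'No Status' defaults are invented placeholders; substitute whatever you prefer):

        # defaults up front: every field used by the yield below exists
        # even when the corresponding table cell is missing from a row
        cleaned_quantity = 'No quantity'
        cleaned_price = 'No Price'
        cleaned_stock = 'No Stock'    # invented placeholder
        cleaned_status = 'No Status'  # invented placeholder

        quantity = p.css('td.tr-minQty.ptable-param span.desktop::text').get()
        if quantity:
            cleaned_quantity = int(quantity.strip().replace(',', ''))

        price = p.css('td.tr-unitPrice.ptable-param span::text').get()
        if price:
            cleaned_price = price.strip()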
 File "C:\Users\Jatencio\PycharmProjects\testy\testdigi\testdigi\spiders\digike.py", line 55, in parse
    p.css('td.tr-mfgPartNumber span::text').remove()
[...]
 File "c:\users\jatencio\pycharmprojects\testy\venv\lib\site-packages\parsel\selector.py", line 371, in remove
    raise CannotRemoveElementWithoutRoot(
parsel.selector.CannotRemoveElementWithoutRoot: The node you're trying to remove has no root, are you trying to remove a pseudo-element? Try to use 'li' as a selector instead of 'li::text' or '//li' instead of '//li/text()', for example.
p.css('td.tr-mfgPartNumber span::text').remove()
p.css('td.tr-mfgPartNumber span').remove()
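With both changes in place, parse should run to completion on every response, and items from all of the start_urls should reach the CSV feed instead of being dropped when the callback raises.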