Python twisted.internet.error.ReactorNotRestartable with multiple URLs


I am using Python 3 with Scrapy 2.4.1. I made a script that lets you enter some words and then searches for them. The first URL works fine, but then it gives an error. My code:

import csv
import os
import time

import scrapy
from scrapy import Request
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
class Amazon_spiders(scrapy.Spider):
    name = "amazon"
    start = time.time()
    search = []
    starting = []
    parse_number = 0
    custom_settings = {
        'DOWNLOAD_DELAY': 1,
        'LOG_LEVEL': 'INFO',
    }
    # make a folder for the results CSV
    try:
        os.mkdir('results folder')
    except FileExistsError:
        pass
    def __init__(self):
        self.outfile = open(f"results folder/ result date.csv", "w",newline="", encoding="utf-8")
        self.writer = csv.writer(self.outfile)
        self.writer.writerow(['Title', 'Price', 'url', 'Img', 'Ratings', 'Stars'])
        print("***" * 20, "opened")
    def start_requests(self):
        number = input('Enter the number of times to search \n')
        for n in range(int(number)):
            word = input("Enter one sentence to be searched  \n ")
            # Amazon_spiders.search.append(word)
            words = word.strip()
            # replace spaces with + for the Amazon search URL
            words = words.replace(' ', '+')
            url = f'https://www.amazon.com/s?k={words}&ref=nb_sb_noss'
            print(f'current page {url}')
            yield Request(url=url, callback=self.parse)
process = CrawlerProcess(get_project_settings())

# 'amazon' is the name of one of the spiders of the project.
process.crawl('amazon')
process.start()  # the script will block here until the crawling is finished
The error is:

Traceback (most recent call last):
  File "C:/Users/ahmed/PycharmProjects/web scraping/Amazon/Amazon/spiders/amazon.py", line 109, in <module>
    process.start( )  # the script will block here until the crawling is finished
  File "C:\Users\ahmed\PycharmProjects\web scraping\venv\lib\site-packages\scrapy\crawler.py", line 327, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "C:\Users\ahmed\PycharmProjects\web scraping\venv\lib\site-packages\twisted\internet\base.py", line 1282, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "C:\Users\ahmed\PycharmProjects\web scraping\venv\lib\site-packages\twisted\internet\base.py", line 1262, in startRunning
    ReactorBase.startRunning(self)
  File "C:\Users\ahmed\PycharmProjects\web scraping\venv\lib\site-packages\twisted\internet\base.py", line 765, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable

Thanks for your help.

Have you seen this problem? I have not found an answer.
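
For what it's worth, ReactorNotRestartable means Twisted's reactor has already been started once in this process: process.start() can only be called a single time, so the crawl bootstrap must not run again when Scrapy imports the spider module from the project's spiders/ directory. Below is a minimal sketch of one common workaround, under the assumption that the search phrases can all be collected before the crawl starts; the spider name MultiSearchSpider and the queries argument are illustrative, not part of the original project.

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


class MultiSearchSpider(scrapy.Spider):
    # illustrative spider; a real one would keep its CSV writing and parse logic
    name = "amazon_multi"

    def __init__(self, queries=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # receive the search phrases as an argument instead of calling input() here
        self.queries = queries or []

    def start_requests(self):
        for query in self.queries:
            words = query.strip().replace(' ', '+')
            url = f'https://www.amazon.com/s?k={words}&ref=nb_sb_noss'
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        self.logger.info('parsed %s', response.url)


if __name__ == '__main__':
    # collect every search term first, then start the reactor exactly once;
    # the guard keeps this block from re-running when Scrapy imports this module
    number = int(input('Enter the number of times to search\n'))
    queries = [input('Enter one sentence to be searched\n') for _ in range(number)]

    process = CrawlerProcess(get_project_settings())
    process.crawl(MultiSearchSpider, queries=queries)
    process.start()  # blocks; a second call would raise ReactorNotRestartable

This is only a sketch, but the key point is that all requests for every search term are scheduled inside one crawl, so the reactor never has to be restarted.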