Python NameError: name 'create_workers' is not defined


Above is my code. I keep getting:

NameError: name 'create_workers' is not defined, and name 'crawl' is not defined


Any help or suggestions for a beginner?

You have to create an object of the Main class and use its methods through that object: create_workers and crawl are defined inside the class body, so they are attributes of the class rather than module-level names, which is why calling them bare raises the NameError. To fix this, change:

class Main:

    PROJECT_NAME = 'something'
    HOMEPAGE = 'something'
    DOMAIN_NAME = get_domain_name(HOMEPAGE)
    QUEUE_FILE = PROJECT_NAME + '/queue.txt'
    CRAWLED_FILE = PROJECT_NAME + '/crawled.txt'
    DATA_FILE = PROJECT_NAME + '/data.txt'
    NUMBER_OF_THREADS = 20
    queue = Queue()
    Spider(PROJECT_NAME, HOMEPAGE, DOMAIN_NAME)


    # Create worker threads (will die when main exits)
    def create_workers(self):
        for _ in range(self.NUMBER_OF_THREADS):
            t = self.threading.Thread(target=self.work)
            t.daemon = True
            t.start()


    # Do the next job in the queue
    def work(self):
        while True:
            url = self.queue.get()
            Spider.crawl_page(self.threading.current_thread().name, url)
            self.queue.task_done()


    # Each queued link is a new job
    def create_jobs(self):
        for link in self.file_to_set(self.QUEUE_FILE):
            self.queue.put(link)
        self.queue.join()
        self.crawl()


    # Check if there are items in the queue, if so crawl them
    def crawl(self):
        queued_links = self.file_to_set(self.QUEUE_FILE)
        if len(queued_links) > 0:
            print(str(len(queued_links)) + ' links in the queue')
            self.create_jobs()
create_workers()  # NameError: name 'create_workers' is not defined
crawl()           # NameError: name 'crawl' is not defined
to:

m = Main()

m.create_workers()
m.crawl()
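
Note that creating an instance fixes the NameError, but the class will still crash as written: threading is a module, not an attribute of Main, so self.threading.Thread raises AttributeError (and self.file_to_set fails the same way if file_to_set is a plain function). Below is a minimal runnable sketch of the corrected class. It assumes get_domain_name, Spider, and file_to_set live in helper modules named domain, spider, and general; those module names are not shown in the question, so adjust the imports to wherever yours are defined.

import threading
from queue import Queue

from domain import get_domain_name  # assumed helper module
from general import file_to_set     # assumed helper module
from spider import Spider           # assumed helper module


class Main:
    PROJECT_NAME = 'something'
    HOMEPAGE = 'something'
    DOMAIN_NAME = get_domain_name(HOMEPAGE)
    QUEUE_FILE = PROJECT_NAME + '/queue.txt'
    CRAWLED_FILE = PROJECT_NAME + '/crawled.txt'
    DATA_FILE = PROJECT_NAME + '/data.txt'
    NUMBER_OF_THREADS = 20
    queue = Queue()
    Spider(PROJECT_NAME, HOMEPAGE, DOMAIN_NAME)  # runs once when the class is created

    # Create worker threads (they die when the main thread exits)
    def create_workers(self):
        for _ in range(self.NUMBER_OF_THREADS):
            t = threading.Thread(target=self.work)  # module-level threading, not self.threading
            t.daemon = True
            t.start()

    # Do the next job in the queue
    def work(self):
        while True:
            url = self.queue.get()
            Spider.crawl_page(threading.current_thread().name, url)
            self.queue.task_done()

    # Each queued link is a new job
    def create_jobs(self):
        for link in file_to_set(self.QUEUE_FILE):  # plain function call, not self.file_to_set
            self.queue.put(link)
        self.queue.join()  # block until every queued URL has been processed
        self.crawl()

    # Check whether there are items in the queue; if so, crawl them
    def crawl(self):
        queued_links = file_to_set(self.QUEUE_FILE)
        if len(queued_links) > 0:
            print(str(len(queued_links)) + ' links in the queue')
            self.create_jobs()


m = Main()
m.create_workers()
m.crawl()

crawl() and create_jobs() call each other until the queue file stops producing new links, and because the workers are daemon threads they are cleaned up automatically when the main thread finishes.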