
Python: how to give each URL its own thread (python, python-3.x, multithreading)


I've been working on a small PoC and trying to improve my knowledge of threading, but unfortunately I'm stuck, so here I am.

import time
import logging

logger = logging.getLogger(__name__)

found_products = []

site_catalog = [
    "https://www.graffitishop.net/Sneakers",
    "https://www.graffitishop.net/T-shirts",
    "https://www.graffitishop.net/Sweatshirts",
    "https://www.graffitishop.net/Shirts"
]


def threading_feeds():
    # Create its own thread for each URL, as we want them to run concurrently
    for get_links in site_catalog:
        monitor_feed(link=get_links)


def monitor_feed(link: str) -> None:
    old_payload = product_data(...)

    while True:
        new_payload = product_data(...)

        if old_payload != new_payload:
            for found_link in new_payload:
                if found_link not in found_products:
                    logger.info(f'Detected new link -> {found_link} | From -> {link}')
                    # Should run in a new thread: we don't want this function to block,
                    # waiting for filtering() to finish, before continuing the loop
                    filtering(link=found_link)

        else:
            logger.info("Nothing new")
            time.sleep(60)
            continue


def filtering(link):
    ...
1 - I'm currently trying to monitor several links at once, and my plan is for every URL to run concurrently instead of being processed one after another:

def threading_feeds():
    # Create its own thread for each URL, as we want them to run concurrently
    for get_links in site_catalog:
        monitor_feed(link=get_links)

How do I do that?
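
A minimal sketch of one possible approach, assuming a plain threading.Thread per URL and reusing site_catalog and monitor_feed from above (the daemon flag and the join loop are assumptions, not part of the original code):

import threading


def threading_feeds():
    # Start one thread per URL so every feed is monitored concurrently
    workers = []
    for url in site_catalog:
        worker = threading.Thread(target=monitor_feed, kwargs={"link": url}, daemon=True)
        worker.start()
        workers.append(worker)

    # Keep the main thread alive while the monitor threads run forever
    for worker in workers:
        worker.join()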


2 - If a new product seems to show up for a given URL inside monitor_feed, how do I set up a new thread to execute the call filtering(link=found_link)? I don't want to wait for it to finish before looping back to while True; filtering(link=found_link) should run in the background while monitor_feed keeps executing.
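
One way to sketch that, assuming a fire-and-forget threading.Thread is acceptable and reusing product_data(), filtering() and found_products from the snippet above (the found_products.append() and the old_payload update are additions the logic seems to need, not part of the original):

import threading
import time


def monitor_feed(link: str) -> None:
    old_payload = product_data(...)

    while True:
        new_payload = product_data(...)

        if old_payload != new_payload:
            for found_link in new_payload:
                if found_link not in found_products:
                    found_products.append(found_link)  # assumption: remember the link so it is only handled once
                    # Hand filtering() off to its own thread so this loop never waits for it
                    threading.Thread(target=filtering, kwargs={"link": found_link}, daemon=True).start()
            old_payload = new_payload  # assumption: keep the latest payload for the next comparison
        else:
            time.sleep(60)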

import concurrent.futures

with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.map(monitor_feed, site_catalog)

You can use concurrent.futures.ThreadPoolExecutor, as shown above: executor.map() runs monitor_feed for every URL in site_catalog on the pool's worker threads.
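
To cover the second part of the question (not blocking on filtering()), a sketch along the same lines, assuming a second pool dedicated to the short filtering jobs (the pool name and max_workers value are arbitrary choices, and product_data()/filtering()/found_products are the helpers from the question):

import concurrent.futures
import time

# Separate pool for the short-lived filtering jobs, so the long-running
# monitor_feed workers never have to wait for them
filter_pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)


def monitor_feed(link: str) -> None:
    old_payload = product_data(...)

    while True:
        new_payload = product_data(...)

        if old_payload != new_payload:
            for found_link in new_payload:
                if found_link not in found_products:
                    found_products.append(found_link)
                    # submit() schedules filtering() on the pool and returns immediately,
                    # so the while-loop carries on without waiting
                    filter_pool.submit(filtering, link=found_link)
            old_payload = new_payload
        else:
            time.sleep(60)


with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.map(monitor_feed, site_catalog)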

Your question mentions threading, but the code you posted doesn't use any threads. StackOverflow is full of questions about threading. Which examples have you looked at?

@quamrana Oh, sorry, I have looked at threading.Thread(...).start() via docs.python.org/3/library/threading.html, but I think I may have gotten it wrong? I haven't used it before, as my general knowledge of threading is quite low.

Oh right, and what about the call to filtering(link=found_link) inside the monitor_feed function? Won't it wait until that call finishes before continuing the loop?