Arrays Python 3: Question about looping through an array using a csv file with Selenium and Concurrent Futures

Tags: arrays, python-3.x, selenium, loops, concurrent.futures

Python noob here, so I'll try to provide as much detail as possible. I'm experimenting with Python's concurrent.futures module to see if I can speed up some scraping with Selenium. I'll be scraping some financial data from a website using the following URLs, stored in a csv file named 'inputURLS.csv'. We'll keep the stock list short and include one fake ticker to exercise the exception handling. The real URL csv is longer, which is why I want to pull from the csv instead of typing the array into the python script:

https://www.benzinga.com/quote/TSLA
https://www.benzinga.com/quote/AAPL
https://www.benzinga.com/quote/XXXX
https://www.benzinga.com/quote/SNAP
Below is my python code to pull 3 pieces of data: share count, market cap, and P/E ratio. The script works fine outside of concurrent futures.

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import csv
import concurrent.futures
from random import randint
from time import sleep

options = webdriver.ChromeOptions()
#options.add_argument("--headless") #optional headless
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ['enable-automation'])
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(options=options, executable_path=r'D:\SeleniumDrivers\Chrome\chromedriver.exe')
driver.execute_cdp_cmd('Network.setUserAgentOverride',{"userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36'})

OutputFile = open('CSVoutput.csv', 'a')
urlList = []

with open('inputURLS.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        urlList.append(row[0])
    print (urlList) #make array visible in viewer

def extract(theURLS):
    for i in urlList:
        driver.get(i)
        sleep(randint(3, 10)) # random pause
        try:
            bz_shares = driver.find_element_by_css_selector('div.flex:nth-child(10) > div:nth-child(2)').text #get shares number
            print(bz_shares) # to see in viewer
            OutputFile.write(bz_shares) # save number to csv output
        except NoSuchElementException:
            print("N/A") # print N/A if stock does not exist
            OutputFile.write("N/A") # save non value to csv output
        try:
            bz_MktCap = driver.find_element_by_css_selector('div.flex:nth-child(5) > div:nth-child(2)').text #get market cap
            print(bz_MktCap) # to see in viewer
            OutputFile.write("," + bz_MktCap) # save market cap to csv output
        except NoSuchElementException:
            print("N/A") # print N/A if no value
            OutputFile.write(",N/A") # save non value to csv output
        try:
            bz_PE = driver.find_element_by_css_selector('div.flex:nth-child(8) > div:nth-child(2)').text #get PE ratio
            print(bz_PE) # to see in viewer
            OutputFile.write("," + bz_PE) # save PE ratio to csv output
        except NoSuchElementException:
            print("N/A") # print N/A if no value
            OutputFile.write(",N/A") # save non value to csv output
        print(driver.current_url) # see URL screen in viewer
        OutputFile.write("," + driver.current_url + "\n") # save URL to csv output

        return theURLS

with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.map(extract, urlList)
When I run the script, I get the following in my output file:

963.3M,602.9B,624.6,https://www.benzinga.com/quote/TSLA
963.3M,602.9B,624.6,https://www.benzinga.com/quote/TSLA
963.3M,602.9B,624.6,https://www.benzinga.com/quote/TSLA
963.3M,602.9B,624.6,https://www.benzinga.com/quote/TSLA

So the script loops through my csv file, but it gets stuck on the first row. I get 4 rows of data, matching the number of URLs I started with, but it's only the data for the first URL. If I have 8 URLs, the same thing happens 8 times, and so on. I don't think I'm looping through the urlList array correctly inside the function. Any help fixing this would be greatly appreciated. I pieced this together from various sites and youtube videos on concurrent futures, but I'm completely stuck. Thanks very much.
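A stripped-down reproduction of the symptom, with Selenium removed so it runs anywhere (the ticker strings stand in for the real URLs): because `return` sits inside the `for` loop, every call to `extract` handles only the first element of `urlList`, and the argument that `executor.map` passes in is never used.

```python
import concurrent.futures

urlList = ["TSLA", "AAPL", "XXXX", "SNAP"]
results = []

def extract(theURLS):
    # Mirrors the structure of the scraper above: the mapped argument
    # is ignored, and `return` exits on the first loop iteration.
    for i in urlList:
        results.append(i)
        return theURLS  # inside the loop -> only the first URL is processed

with concurrent.futures.ThreadPoolExecutor() as executor:
    list(executor.map(extract, urlList))

print(results)  # the first ticker repeated once per input URL
```

Four threads each run the function once, and each one scrapes only `urlList[0]`, which matches the four identical TSLA rows in the output file.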

Your
extract()
should be a function applied to each item of
urlList
, not one that takes the whole list. You also need a
threading.Lock()
around the write operations. In any case, each thread needs its own driver/browser. For example, you could split the list into 4 chunks and run each chunk concurrently with a fresh webdriver instance.