
Python: How do I save scraped data to a CSV file?

Tags: python, pandas, selenium, selenium-webdriver, beautifulsoup

I'm very new to Python, Selenium, and BeautifulSoup. I've watched a lot of tutorials online, but I'm still confused. Please help me. Basically, this is my Python code:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from bs4 import BeautifulSoup as bs
    
    #import requests
    import time 
    #import csv
    
    passwordStr = '***'
    usernameStr='***'
    
    gecko_path = r'C:\Users\wana isa\geckodriver-v0.26.0-win64\geckodriver.exe'
    browser = webdriver.Firefox(executable_path=gecko_path)
    browser.get('http://*********/')
    
    wait = WebDriverWait(browser,10)
    
    
    # wait for transition then continue to fill items
    #time.sleep(2)
    password = wait.until(EC.presence_of_element_located((By.ID, 'txt_Password')))
    password.send_keys(passwordStr)
    username = wait.until(EC.presence_of_element_located((By.ID, 'txt_Username')))
    username.send_keys(usernameStr)
    
    signInButton = browser.find_element_by_id('button')
    signInButton.click()
    browser.get('http://******')
    
    
    browser.find_element_by_name('mainli_waninfo').click()
    browser.find_element_by_name('subli_bssinfo').click()
    browser.switch_to.frame(browser.find_element_by_id('frameContent'))
    
    html=browser.page_source
    soup=bs(html,'lxml')
    #print(soup.prettify())
    
    # Service Provisioning Status: this is the data that I scrape and need to save into a CSV
    spsList = ['ONT Registration Status', 'OLT Service Configuration Status', 'EMS Configuration Status', 'ACS Registration Status']
    sps_id = ['td1_2', 'td2_2', 'td3_2', 'td4_2']
    for i in range(len(sps_id)):
        elemntValu = browser.find_element_by_id(sps_id[i]).text
        print(spsList[i] + " : " + elemntValu)
        
    browser.close()
This is the output:


I would appreciate any help.

Add the following to your code:

import csv

with open('FileName.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    for i in range(len(sps_id)):
        elemntValu = browser.find_element_by_id(sps_id[i]).text
        print(spsList[i] + " : " + elemntValu)
        writer.writerow([spsList[i], elemntValu])

# no explicit close is needed: the with block closes the file automatically
browser.close()
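Because the file is opened in 'w' mode, the CSV is recreated and overwritten on every run. If you instead wanted to keep the results of previous runs, a minimal sketch using append mode could look like the following (the results.csv filename and the header check are illustrative assumptions; spsList, sps_id and browser are the objects from the question's code):

    import csv
    import os

    # assumption: 'results.csv' is just an example filename
    csv_path = 'results.csv'
    write_header = not os.path.exists(csv_path)  # write the header only on the first run

    # 'a' appends new rows instead of overwriting the file
    with open(csv_path, 'a', newline='') as file:
        writer = csv.writer(file)
        if write_header:
            writer.writerow(['Status item', 'Value'])
        for i in range(len(sps_id)):
            elemntValu = browser.find_element_by_id(sps_id[i]).text
            writer.writerow([spsList[i], elemntValu])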

I ran into this error --> ValueError: I/O operation on closed file.

@Joojoo Edit: I added closing the file, try it and tell me whether it works now.

Thank you very much, I got it working. But I have another question. The scraped data is dynamic; the values change every time I run the script. Is it possible for the data already saved in the CSV to be updated automatically each time I run it?

@Joojoo The file is opened in 'w' mode, which means it is created if it does not exist and overwritten if it does, so the values in the CSV are replaced with the new ones on every run (to keep the old rows as well, you would open the file in 'a' append mode instead).

Thank you so much! This helped me a lot. I'm not used to this kind of coding; this is my first time scraping data and my first time using Python. I have one more question: the data above is in a list. What if I want to turn it into a table? Should I use pandas?
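On that last question: yes, pandas is a convenient way to turn those name/value pairs into a table. A minimal sketch, assuming spsList, sps_id and browser from the question's code (the column names and the FileName.csv filename are illustrative):

    import pandas as pd

    # build one row per status item from the scraped values
    rows = []
    for name, element_id in zip(spsList, sps_id):
        value = browser.find_element_by_id(element_id).text
        rows.append({'Status item': name, 'Value': value})

    df = pd.DataFrame(rows)                  # two-column table
    print(df)                                # display it as a table
    df.to_csv('FileName.csv', index=False)   # and save it to CSV with a header row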