
Python: How to write web-scraped data to CSV?


I have written the following code to extract the table data using BeautifulSoup:

import requests
from bs4 import BeautifulSoup

website = requests.get('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/').text

soup = BeautifulSoup(website, 'lxml')

table = soup.find('table')
table_rows = table.findAll('tr')

for tr in table_rows:
    td = tr.findAll('td')
    rows = [i.text for i in td]
    print(rows)
This is my output:

['Number', '@name', 'Name', 'Followers', 'Influence Rank']
[]
['1', '@mashable', 'Pete Cashmore', '2037840', '59']
[]
['2', '@cnnbrk', 'CNN Breaking News', '3224475', '71']
[]
['3', '@big_picture', 'The Big Picture', '23666', '92']
[]
['4', '@theonion', 'The Onion', '2289939', '116']
[]
['5', '@time', 'TIME.com', '2111832', '143']
[]
['6', '@breakingnews', 'Breaking News', '1795976', '147']
[]
['7', '@bbcbreaking', 'BBC Breaking News', '509756', '168']
[]
['8', '@espn', 'ESPN', '572577', '187']
[]

Please help me write this data to a .csv file (I am new to this kind of task).

Use the csv writer and write each row to the csv file. The empty [] rows in your output come from table rows with no td cells, so the code below skips any row that does not have exactly 5 columns:

import requests
import csv
from bs4 import BeautifulSoup

website = requests.get('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/').text

soup = BeautifulSoup(website, 'lxml')

table = soup.find('table')
table_rows = table.findAll('tr')

csvfile = 'twitterusers2.csv'

# Python 2: open(csvfile, 'wb')
# Python 3: newline='' avoids blank lines between rows on Windows
with open(csvfile, 'w', newline='', encoding='utf-8') as outfile:
    wr = csv.writer(outfile)

    for tr in table_rows:
        td = tr.findAll('td')
        # in Python 2, i.text.encode("utf8") is sometimes needed when playing
        # with twitter data; in Python 3, open the file with encoding='utf-8' instead
        rows = [i.text for i in td]
        # ignore the empty rows and any row whose td count is not 5
        if len(rows) == 5:
            print(rows)
            wr.writerow(rows)
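To verify the result, the file can be read back with csv.reader (a quick check, assuming twitterusers2.csv was written by the script above):

import csv

with open('twitterusers2.csv', newline='', encoding='utf-8') as infile:
    for row in csv.reader(infile):
        print(row)  # each row comes back as a list of 5 strings, header first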

A better solution is to use pandas, since it is faster than the other libraries. Here is the complete code:

import requests
import pandas as pd
from bs4 import BeautifulSoup

website = requests.get('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/').text

soup = BeautifulSoup(website, 'lxml')

table = soup.find('table')
table_rows = table.findAll('tr')

first = True       # the first non-empty row holds the column headers
details_dict = {}  # maps each column header to its list of values
count = 0

for tr in table_rows:
    td = tr.findAll('td')
    rows = [i.text for i in td]

    for i in rows:
        if first:
            # header row: create an empty list per column
            details_dict[i] = []
        else:
            # data row: append each cell to the matching column
            key = list(details_dict.keys())[count]
            details_dict[key].append(i)
            count += 1
    count = 0
    first = False

df = pd.DataFrame(details_dict)
df.to_csv('D:\\Output.csv', index=False)
[Output screenshot]


Hope this helps.
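For what it's worth, the same idea can be written more compactly by building the DataFrame straight from the header row and the data rows (a minimal sketch, assuming the table keeps the 5-column layout shown in the question):

import requests
import pandas as pd
from bs4 import BeautifulSoup

website = requests.get('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/').text
soup = BeautifulSoup(website, 'lxml')

# collect the text of every cell, row by row
rows = [[td.text for td in tr.findAll('td')] for tr in soup.find('table').findAll('tr')]
# keep only the complete 5-column rows (drops the empty spacer rows)
rows = [r for r in rows if len(r) == 5]

# the first remaining row is the header, the rest are data
df = pd.DataFrame(rows[1:], columns=rows[0])
df.to_csv('Output.csv', index=False)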

The easiest way is to use pandas:

# pip install bs4
import pandas as pd

uri = 'https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/'

# read and clean the table
data = pd.read_html(uri, flavor='lxml', skiprows=0, header=0)[0].dropna()

# save to a csv called data.csv
data.to_csv('data.csv', index=False, encoding='utf-8')
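Note that pd.read_html returns a list of every table it finds on the page, which is why the code indexes [0]. A quick way to inspect what came back (a minimal sketch, same URL as above):

import pandas as pd

tables = pd.read_html('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/', flavor='lxml')
print(len(tables))       # number of tables found on the page
print(tables[0].head())  # first few rows of the first table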

Please accept the answer if it solved the problem. I have also updated the code with utf-8 encoding, since it can be necessary when working with twitter data.