Python + Beautiful Soup: export to CSV

I am having some trouble automating the scraping of data from a table in a Wikipedia article. First I got an encoding error. I specified UTF-8 and the error went away, but the scraped data does not display many of the characters correctly. You can probably tell from the code that I am a complete novice:

from bs4 import BeautifulSoup
import urllib2

wiki = "http://en.wikipedia.org/wiki/Anderson_Silva"
header = {'User-Agent': 'Mozilla/5.0'} #Needed to prevent 403 error on Wikipedia
req = urllib2.Request(wiki,headers=header)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)

Result = ""
Record = ""
Opponent = ""
Method = ""
Event = ""
Date = ""
Round = ""
Time = ""
Location = ""
Notes = ""

table = soup.find("table", { "class" : "wikitable sortable" })

f = open('output.csv', 'w')

for row in table.findAll("tr"):
    cells = row.findAll("td")
    #For each "tr", assign each "td" to a variable.
    if len(cells) == 10:
        Result = cells[0].find(text=True)
        Record = cells[1].find(text=True)
        Opponent = cells[2].find(text=True)
        Method = cells[3].find(text=True)
        Event = cells[4].find(text=True)
        Date = cells[5].find(text=True)
        Round = cells[6].find(text=True)
        Time = cells[7].find(text=True)
        Location = cells[8].find(text=True)
        Notes = cells[9].find(text=True)

        write_to_file = Result + "," + Record + "," + Opponent + "," + Method + "," + Event + "," + Date + "," + Round + "," + Time + "," + Location + "\n"
        write_to_unicode = write_to_file.encode('utf-8')
        print write_to_unicode
        f.write(write_to_unicode)

f.close()

As pswaminathan pointed out, using the csv module helps a great deal. Here is how I did it:

import csv

table = soup.find('table', {'class': 'wikitable sortable'})
with open('out2.csv', 'w') as f:
    csvwriter = csv.writer(f)
    for row in table.findAll('tr'):
        # Every cell's text, UTF-8 encoded, in document order.
        cells = [c.text.encode('utf-8') for c in row.findAll('td')]
        if len(cells) == 10:
            csvwriter.writerow(cells)
Discussion
  • Using the csv module, I create a csvwriter object attached to the output file.
  • Thanks to the with statement, I do not have to worry about closing the output file when I am done: it is closed automatically at the end of the with block.
  • In my code, cells is a list of UTF-8 encoded strings extracted from the td tags inside each tr tag.
  • I use the construct c.text, which is more concise than c.find(text=True); a short illustrative snippet of the difference follows below.
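
A quick illustration of that last point (a standalone, hypothetical snippet, not part of the original answer): c.find(text=True) returns only the first text node inside a cell, while c.text concatenates all of the cell's text, which matters when a td contains nested markup such as links.

from bs4 import BeautifulSoup

# Hypothetical table cell with nested markup, similar to what Wikipedia tables contain.
html = '<table><tr><td>Win <a href="/wiki/KO">(KO)</a> round 2</td></tr></table>'
cell = BeautifulSoup(html, 'html.parser').find('td')

print(cell.find(text=True))  # 'Win ' - only the first text node
print(cell.text)             # 'Win (KO) round 2' - all of the cell's text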

Have you tried the csv module? It handles quoting and so on, and its documentation also points you in the right direction for writing text in different encodings. But as for your specific problem: what exactly is displaying incorrectly with UTF-8? According to the meta tag on that page, the charset is UTF-8.
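
To show what "it handles quoting" means in practice, here is a minimal, hypothetical sketch (not from the original exchange): csv.writer automatically quotes a field that contains commas or double quotes, which the manual string concatenation in the question would get wrong.

import csv
import sys

# Hypothetical row: the last field contains a comma and double quotes,
# which would corrupt output built by joining fields with ',' by hand.
row = ['Win', '10-0', 'Some Opponent', 'Decision', 'Notes with a comma, and "quotes"']

writer = csv.writer(sys.stdout)
writer.writerow(row)
# Prints: Win,10-0,Some Opponent,Decision,"Notes with a comma, and ""quotes"""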