Unicode: scraping Traditional Chinese with BeautifulSoup4, output file won't display the Chinese characters


This is the page I am trying to scrape:

The page is encoded in UTF-8.

Here is my code:

import requests as r
from bs4 import BeautifulSoup as soup
import os
import urllib.request

#make a list of all web pages' urls
webpages=['https://zh.wikisource.org/wiki/%E8%AE%80%E9%80%9A%E9%91%92%E8%AB%96/%E5%8D%B701', 'https://zh.wikisource.org/wiki/%E8%AE%80%E9%80%9A%E9%91%92%E8%AB%96/%E5%8D%B702']

#start looping through all pages

for item in webpages:
    headers = {'User-Agent': 'Mozilla/5.0'}
    data = r.get(item, headers=headers)
    data.encoding = 'utf-8'
    page_soup = soup(data.text, 'html5lib')

    with open(r'sample_srape.txt', 'w') as file:
        file.write(str(page_soup.encode('utf-8')))
The output .txt file does not display the Chinese characters at all. They appear as escape sequences like this: "\xe7\x9a\x84\xe5\x9c\x96\xe6\x9b\xb8\xe9\xa4\xa8"
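Those escape sequences come from calling str() on a bytes object: encode() returns bytes, and str() turns the bytes into their literal repr, backslashes and all. A minimal sketch of the effect (using 圖書館, "library", as a stand-in for the scraped text):

```python
# encode() returns bytes; str() on bytes yields their repr, not the text.
text = "圖書館"  # "library", a stand-in for the scraped page text
raw = text.encode("utf-8")
print(str(raw))  # b'\xe5\x9c\x96\xe6\x9b\xb8\xe9\xa4\xa8'
# Writing str(raw) to a file stores the backslash escapes as literal characters.
```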


How to display the Chinese characters

When writing to the file, use decode("unicode-escape") and you will see all the Chinese characters:
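For text that has already been written out as literal escapes, one way to recover it (a sketch of the round trip, not necessarily what the answer above intends) is to undo both layers: interpret the backslash escapes, then re-decode the resulting bytes as UTF-8:

```python
# Escaped text as it appears in the broken output file (a raw string here):
escaped = r"\xe7\x9a\x84\xe5\x9c\x96\xe6\x9b\xb8\xe9\xa4\xa8"
# 1) unicode-escape turns the '\xe7' escapes into code points U+00E7 etc.
# 2) latin-1 maps those code points back to the original UTF-8 bytes.
fixed = (escaped.encode("ascii")
                .decode("unicode-escape")
                .encode("latin-1")
                .decode("utf-8"))
print(fixed)  # 的圖書館
```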

import requests as r
from bs4 import BeautifulSoup as soup

#make a list of all web pages' urls
webpages=['https://zh.wikisource.org/wiki/%E8%AE%80%E9%80%9A%E9%91%92%E8%AB%96/%E5%8D%B701', 'https://zh.wikisource.org/wiki/%E8%AE%80%E9%80%9A%E9%91%92%E8%AB%96/%E5%8D%B702']

#start looping through all pages

for item in webpages:
    headers = {'User-Agent': 'Mozilla/5.0'}
    data = r.get(item, headers=headers)
    data.encoding = 'utf-8'
    page_soup = soup(data.text, 'html5lib')
    #print(page_soup)

    with open(r'sample_srape.txt', 'w') as file:
        file.write(str(page_soup.decode("unicode-escape")))
Final working code:

import requests as r
from bs4 import BeautifulSoup as soup

#make a list of all web pages' urls
webpages=['https://zh.wikisource.org/wiki/%E8%AE%80%E9%80%9A%E9%91%92%E8%AB%96/%E5%8D%B701', 'https://zh.wikisource.org/wiki/%E8%AE%80%E9%80%9A%E9%91%92%E8%AB%96/%E5%8D%B702']

#start looping through all pages

for item in webpages:
    headers = {'User-Agent': 'Mozilla/5.0'}
    data = r.get(item, headers=headers)
    data.encoding = 'utf-8'
    page_soup = soup(data.text, 'html5lib')

    with open(r'sample_srape.txt', 'w', encoding='utf-8') as file:
        file.write(page_soup.decode("unicode-escape"))

I got this error when trying your suggestion: "UnicodeEncodeError: 'charmap' codec can't encode characters in position 115-118: character maps to <undefined>"

Do one thing, try: file.write(page_soup.decode("unicode-escape"))

I just tried that and got the same error. Does it work on your computer? Is my computer missing something?

The problem was solved when I added encoding='utf-8' to open(). I have posted the working code above, in case you have found a solution as well.
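The comments above boil down to a platform difference: without an explicit encoding= argument, open() in text mode uses the platform's preferred encoding (commonly cp1252 on Windows), which has no mapping for CJK characters and raises the 'charmap' UnicodeEncodeError. A minimal sketch of both the failure and the fix (using one of the scraped titles as sample text):

```python
text = "讀通鑑論"  # Traditional Chinese title from the scraped pages

# The failure: cp1252 (a common Windows default) cannot represent CJK text.
try:
    text.encode("cp1252")
except UnicodeEncodeError as exc:
    print(exc)  # 'charmap' codec can't encode characters ...

# The fix: state the encoding explicitly when opening the file.
with open("demo_utf8.txt", "w", encoding="utf-8") as f:
    f.write(text)
with open("demo_utf8.txt", encoding="utf-8") as f:
    assert f.read() == text
```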