Python 3.x: removing ads from web page source code


I have lists of adblock rules ()
How can I apply them to a web page? I download the page source with MechanicalSoup (which is based on BeautifulSoup). I would like to keep the result as a BeautifulSoup object, but an etree would also work.
I tried using , but some pages fail with:

ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
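That ValueError comes from lxml, which refuses str input that still carries an XML declaration naming an encoding; passing bytes instead lets lxml resolve the encoding itself. A minimal reproduction and workaround (a sketch, not part of the solution below):

```python
import lxml.html

# An (X)HTML document whose XML declaration names an encoding.
html = '<?xml version="1.0" encoding="utf-8"?><html><body><p>hi</p></body></html>'

try:
    lxml.html.document_fromstring(html)  # str input with declaration -> ValueError
except ValueError as err:
    print(err)

# Encoding the string to bytes avoids the error; lxml reads the declaration.
tree = lxml.html.document_fromstring(html.encode('utf-8'))
print(tree.findtext('.//p'))
```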

So I came up with the following solution:

import re

import lxml.html
import mechanicalsoup
import requests
from bs4 import BeautifulSoup

# AdRemover is taken from the python-readability issue linked in the answer
from adremover import AdRemover

ADBLOCK_RULES = ['https://easylist-downloads.adblockplus.org/ruadlist+easylist.txt',
                 'https://filters.adtidy.org/extension/chromium/filters/1.txt']

# download each rule list into a file named after the last path segment
for rule in ADBLOCK_RULES:
    r = requests.get(rule)
    with open(rule.rsplit('/', 1)[-1], 'wb') as f:
        f.write(r.content)

url = 'https://example.com'  # placeholder: the page to clean

browser = mechanicalsoup.StatefulBrowser(
    soup_config={'features': 'lxml'},
    raise_on_404=True
)
response = browser.open(url)
webpage = browser.get_current_page()
html_code = re.sub(r'\n+', '\n', str(webpage))

remover = AdRemover(*[rule.rsplit('/', 1)[-1] for rule in ADBLOCK_RULES])
tree = lxml.html.document_fromstring(html_code)
adblocked = remover.remove_ads(tree)  # requires the patched remove_ads() below
webpage = BeautifulSoup(lxml.html.tostring(adblocked).decode(), 'lxml')

You need to use a modified remove_ads() that finishes with

return tree

instead of mutating the tree without returning it.
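For illustration only (the real AdRemover lives in the issue linked below; every name here is made up), the mutate-in-place-then-return pattern that the `return tree` change produces looks like this with a toy remover that drops elements carrying a blocked CSS class:

```python
import lxml.html

def remove_matching(tree, css_class):
    # Hypothetical helper: drop every element whose class list contains
    # css_class, then return the mutated tree so the call can be chained,
    # mirroring the "return tree" edit described above.
    xpath = ('//*[contains(concat(" ", normalize-space(@class), " "), '
             '" %s ")]' % css_class)
    for el in tree.xpath(xpath):
        el.getparent().remove(el)
    return tree

doc = lxml.html.document_fromstring(
    b'<html><body><div class="ad banner">buy now</div><p>content</p></body></html>')
clean = remove_matching(doc, 'ad')
print(lxml.html.tostring(clean).decode())
```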

This is almost the same code as in Nikita's answer, but I wanted to share it with all the imports included, rather than leaving them for anyone who wants to try it to work out:

from lxml.etree import tostring
import lxml.html
import requests

# take AdRemover code from here:
# https://github.com/buriy/python-readability/issues/43#issuecomment-321174825
from adremover import AdRemover

url = 'https://google.com'  # replace it with a url you want to apply the rules to  
rule_urls = ['https://easylist-downloads.adblockplus.org/ruadlist+easylist.txt',
             'https://filters.adtidy.org/extension/chromium/filters/1.txt']

rule_files = [url.rpartition('/')[-1] for url in rule_urls]


# download files containing rules
for rule_url, rule_file in zip(rule_urls, rule_files):
    r = requests.get(rule_url)
    with open(rule_file, 'w') as f:
        print(r.text, file=f)


remover = AdRemover(*rule_files)

html = requests.get(url).text
document = lxml.html.document_fromstring(html)
remover.remove_ads(document)
clean_html = tostring(document).decode("utf-8")
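Since the question asked for the result as a BeautifulSoup object, the cleaned string can simply be parsed back with bs4 (a sketch; the literal string below stands in for the `clean_html` produced above):

```python
from bs4 import BeautifulSoup

clean_html = '<html><body><p>content</p></body></html>'  # stand-in for tostring() output
soup = BeautifulSoup(clean_html, 'lxml')
print(soup.p.get_text())
```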

Please update your question with a URL that fails.