Python BeautifulSoup: 'NoneType' object is not callable


I am trying to do this:

req = urllib.request.Request("http://en.wikipedia.org/wiki/Philosophy")
content = urllib.request.urlopen(req).read()
soup = bs4.BeautifulSoup(content, "html.parser")
content = strip_brackets(soup.find('div', id="bodyContent").p)

for link in bs4.BeautifulSoup(content, "html.parser").findAll("a"):
    print(link.get("href"))
If I instead write the loop like this:

for link in soup.findAll("a"):
    print(link.get("href"))
I no longer get the error, but I want to first strip the brackets from the content and then get all of its links.

Line 36 in the traceback is the line with the for loop:

Traceback (most recent call last):
  File "....py", line 36, in <module>
    for link in bs4.BeautifulSoup(content, "html.parser").findAll("a"):
  File "C:\Users\...\AppData\Local\Programs\Python\Python35-32\lib\site-packages\bs4\__init__.py", line 191, in __init__
    markup = markup.read()
TypeError: 'NoneType' object is not callable
What am I doing wrong?
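(The asker's strip_brackets is not shown and, per the comments below, is unfinished. Purely as an illustrative guess at such a helper, note that returning a plain string rather than a Tag is what lets the result be passed back into the BeautifulSoup constructor:)

```python
import re
import bs4

def strip_brackets(tag):
    # Hypothetical helper, not the asker's actual code: drop
    # parenthesized runs from the tag's HTML and return a *string*,
    # so that bs4.BeautifulSoup(content, ...) receives text, not a Tag.
    return re.sub(r"\([^()]*\)", "", str(tag))

html = '<p>Philosophy (from Greek) is the <a href="/wiki/Study">study</a> of ideas.</p>'
p = bs4.BeautifulSoup(html, "html.parser").p
content = strip_brackets(p)

for link in bs4.BeautifulSoup(content, "html.parser").findAll("a"):
    print(link.get("href"))
```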

Don't use `for link in bs4.BeautifulSoup(content, "html.parser").findAll("a"):` — try `for link in content.findAll('a'):` instead.
There is no need to re-parse the content.
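That is, the result of find() is already a parsed Tag and can be searched in place. A minimal sketch (with inline HTML standing in for the Wikipedia page):

```python
import bs4

html = '<div id="bodyContent"><p>See <a href="/wiki/A">A</a> and <a href="/wiki/B">B</a>.</p></div>'
soup = bs4.BeautifulSoup(html, "html.parser")
content = soup.find("div", id="bodyContent").p  # a Tag, already parsed

# No second BeautifulSoup(...) pass is needed: search the Tag directly.
links = [link.get("href") for link in content.findAll("a")]
print(links)
```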

Your end goal is to get the list of links, right?

This will give you the links:

from urllib.request import urlopen
from bs4 import BeautifulSoup

content = urlopen('http://en.wikipedia.org/wiki/Philosophy')
soup = BeautifulSoup(content, "html.parser")
base = soup.find('div', id="bodyContent")

for link in BeautifulSoup(str(base), "html.parser").findAll("a"):
    if 'href' in link.attrs:
        print(link['href'])

What do you want to strip out? You can do it like this:

from bs4 import BeautifulSoup as bs
from urllib.request import urlopen

url = "http://en.wikipedia.org/wiki/Philosophy"
soup = bs(urlopen(url), "html.parser")
links = soup.find('div', id="bodyContent").p.findAll("a")
for link in links:
    print(link.get("href"))

I don't understand what exactly you want. With your code:

import urllib.request
import bs4

req = urllib.request.Request("http://en.wikipedia.org/wiki/Philosophy")
content = urllib.request.urlopen(req).read()
soup = bs4.BeautifulSoup(content, "html.parser")

for link in soup.findAll("a"):
    print(link.get("href"))

https://zh.wikipedia.org/wiki/%E5%93%B2%E5%AD%A6
https://www.wikidata.org/wiki/Q5891#sitelinks-wikipedia
//en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License
//creativecommons.org/licenses/by-sa/3.0/
//wikimediafoundation.org/wiki/Terms_of_Use
//wikimediafoundation.org/wiki/Privacy_policy
//www.wikimediafoundation.org/
https://wikimediafoundation.org/wiki/Privacy_policy
/wiki/Wikipedia:About
/wiki/Wikipedia:General_disclaimer
//en.wikipedia.org/wiki/Wikipedia:Contact_us
https://www.mediawiki.org/wiki/Special:MyLanguage/How_to_contribute
https://wikimediafoundation.org/wiki/Cookie_statement
//en.m.wikipedia.org/w/index.php?title=Philosophy&mobileaction=toggle_view_mobile
https://wikimediafoundation.org/
//www.mediawiki.org/
1847



With Dmitry's code:

/wiki/Help:Category
/wiki/Category:Philosophy
/wiki/Category:CS1_maint:_Uses_editors_parameter
/wiki/Category:Pages_using_ISBN_magic_links
/wiki/Category:Wikipedia_indefinitely_semi-protected_pages
/wiki/Category:Use_dmy_dates_from_April_2016
/wiki/Category:Articles_containing_Ancient_Greek-language_text
/wiki/Category:Articles_containing_Sanskrit-language_text
/wiki/Category:All_articles_with_unsourced_statements
/wiki/Category:Articles_with_unsourced_statements_from_May_2016
/wiki/Category:Articles_containing_potentially_dated_statements_from_2016
/wiki/Category:All_articles_containing_potentially_dated_statements
/wiki/Category:Articles_with_DMOZ_links
/wiki/Category:Wikipedia_articles_with_LCCN_identifiers
/wiki/Category:Wikipedia_articles_with_GND_identifiers
1592
I used this command with both programs:

python s2.py | tee >(wc -l)

The second part counts the number of lines printed to the screen.

Paste the full code and show the full error message. Which part causes the error?

This is more or less the full code; I also import urllib.request, bs4 and re. strip_brackets currently returns the text it is given unchanged, because I haven't finished it yet.

What does strip_brackets do? Evidently strip_brackets is not as harmless as you think: if it simply returned its input, the for loop would not raise the error you report. Since the only difference between the two loops is the call to strip_brackets, the problem must be inside that function.

It's the other way around: find_all only works in Beautiful Soup 4; before that, findAll was the actual method. @DmitriyFilkovskiy Thanks, I've corrected the answer.

Shouldn't it work without re-parsing base? We could use something like base = soup.find('div', {'id': 'bodyContent'}) and then for link in base.findAll('a'):? I always use it that way… I added it just in case =) I assumed it would work with find =) but none of my code would run with findAll/find_all unless I added str.

And there is no need to re-parse or to call .read(). From the docs: to parse a document, pass it into the BeautifulSoup constructor; you can pass in a string or an open filehandle. @MD.KhairulBasar
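The underlying reason for the TypeError (my reading of bs4's behavior, worth checking against your version): attribute lookup on a Tag falls back to a child-tag search, so a nonexistent attribute like .read resolves to None instead of raising AttributeError. The BeautifulSoup constructor then treats the Tag as a file-like object and calls its None .read. Serializing the Tag back to a string first sidesteps this:

```python
import bs4

soup = bs4.BeautifulSoup('<div><p>hi <a href="/x">x</a></p></div>', "html.parser")
tag = soup.find("div").p

# A Tag answers unknown attributes by searching for a child tag of
# that name, so .read comes back as None rather than being missing --
# which the BeautifulSoup constructor then tries to call.
print(tag.read)

# Re-parsing works once the Tag is serialized back to a string:
reparsed = bs4.BeautifulSoup(str(tag), "html.parser")
print([a.get("href") for a in reparsed.findAll("a")])
```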