
Python: Scraping links from a list on a web page using BeautifulSoup


I'm new to BeautifulSoup and I'm trying to use it to scrape the links from a list of links on a wiki page. Below is the code I'm using, but it doesn't seem to output anything:

import requests
from bs4 import BeautifulSoup

url = "https://wiki.cerner.com/pages/viewpage.action?spaceKey=reference&title=Clinical%20Content%20PowerForms%20Pages" 

r = requests.get(url)

soup = BeautifulSoup(r.content)

links = soup.find_all("PowerForm")

for link in links:
     if "PowerForm" in link.get("href"):
             print ("<a href='%s'>%s</a>" %(link.get("href"), link.text))

This is my first Stack Overflow post, so any suggestions for improving my post would also be appreciated.


Thanks in advance.
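
A side note on the snippet above: `soup.find_all("PowerForm")` looks for an HTML tag literally named `<PowerForm>`, which the page's markup won't contain; the usual BeautifulSoup pattern is to collect `<a>` tags and filter on their `href` attributes. A minimal sketch of that pattern, assuming the page HTML were reachable without a login (which, per the replies below, it is not):

import requests
from bs4 import BeautifulSoup

url = "https://wiki.cerner.com/pages/viewpage.action?spaceKey=reference&title=Clinical%20Content%20PowerForms%20Pages"

r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")  # explicit parser avoids the bs4 warning

# Collect every anchor tag that has an href attribute, then keep only
# the ones whose href mentions "PowerForm".
for link in soup.find_all("a", href=True):
    if "PowerForm" in link["href"]:
        print("<a href='%s'>%s</a>" % (link["href"], link.text))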

The page appears to be asking for login credentials.

It's an internal wiki page at my company. Is there a way to get past the login with my own credentials?

Please see @MendelG

Awesome, thank you so much!
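
On the login question: if the wiki allows it, credentials can sometimes be passed directly to requests via HTTP Basic auth (or by logging in through a requests.Session). A minimal sketch, assuming Basic auth is enabled and the site is not behind single sign-on; the username and password below are placeholders:

import requests
from requests.auth import HTTPBasicAuth
from bs4 import BeautifulSoup

url = "https://wiki.cerner.com/pages/viewpage.action?spaceKey=reference&title=Clinical%20Content%20PowerForms%20Pages"

# Placeholder credentials -- this assumes the wiki accepts HTTP Basic auth
# and is not sitting behind single sign-on.
r = requests.get(url, auth=HTTPBasicAuth("your_username", "your_password"))
r.raise_for_status()  # fail loudly if authentication did not succeed

soup = BeautifulSoup(r.content, "html.parser")
for link in soup.find_all("a", href=True):
    if "PowerForm" in link["href"]:
        print(link["href"], link.text)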