
Python: BeautifulSoup can't find a tag


I'm working on a project using BeautifulSoup:

from bs4 import BeautifulSoup as soup
from requests import get



url = "https://www.yelp.com/search?find_desc=&find_loc=New+York%2C+NY&ns=1"
clnt = get(url)
page=soup(clnt.text,"html.parser")
container = page.findAll("div",{"class":"lemon--div__373c0__1mboc container__373c0__ZB8u4 hoverable__373c0__3CcYQ margin-t3__373c0__1l90z margin-b3__373c0__q1DuY padding-t3__373c0__1gw9E padding-r3__373c0__57InZ padding-b3__373c0__342DA padding-l3__373c0__1scQ0 border--top__373c0__3gXLy border--right__373c0__1n3Iv border--bottom__373c0__3qNtD border--left__373c0__d1B7K border-color--default__373c0__3-ifU"})

container = container[1]
url2= "https://www.yelp.com"+container.a["href"]
clnt2 = get(url2)
page2 = soup(clnt2.text, 'html.parser')

info = page2.find("div",{"class":"lemon--div__373c0__1mboc island__373c0__3fs6U u-padding-t1 u-padding-r1 u-padding-b1 u-padding-l1 border--top__373c0__19Owr border--right__373c0__22AHO border--bottom__373c0__uPbXS border--left__373c0__1SjJs border-color--default__373c0__2oFDT background-color--white__373c0__GVEnp"})

contact = info.div  # example contact variable
In this "info" variable I get a div that contains all the contact information, and I want to extract the contact phone number from that div.

When I print the "info" variable, it shows that the contact number is present in it, along with the other details, but when I traverse the div to get the contact number, I can't find it. I also tried getting all the child divs, and even used the class of the div itself, but I still couldn't get it.

The first URL given is the Yelp search page used above; the second URL, url2, is the business page that contains the contact details.


Any solutions?
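One quick way to see why the traversal fails (a debugging sketch, not from the original post, assuming the info variable above is not None) is to list every descendant tag of info together with its classes:

# Sketch: dump every tag inside `info` with its classes
for tag in info.find_all(True):   # True matches any tag name
    print(tag.name, tag.get("class"))

If the phone number appears in that output, the tag is in the parsed HTML and the problem is the selector rather than the parse.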

You can get the contact number by its class name, but I seriously doubt this will work for any arbitrary page, because the class names appear to be generated dynamically. Still, you can give it a try:

from bs4 import BeautifulSoup as soup
from requests import get

url = "https://www.yelp.com/search?find_desc=&find_loc=New+York%2C+NY&ns=1"
clnt = get(url)
page=soup(clnt.text,"html.parser")
container = page.findAll("div",{"class":"lemon--div__373c0__1mboc container__373c0__ZB8u4 hoverable__373c0__3CcYQ margin-t3__373c0__1l90z margin-b3__373c0__q1DuY padding-t3__373c0__1gw9E padding-r3__373c0__57InZ padding-b3__373c0__342DA padding-l3__373c0__1scQ0 border--top__373c0__3gXLy border--right__373c0__1n3Iv border--bottom__373c0__3qNtD border--left__373c0__d1B7K border-color--default__373c0__3-ifU"})

container = container[1]
url2= "https://www.yelp.com"+container.a["href"]
clnt2 = get(url2)
page2 = soup(clnt2.text, 'html.parser')

info = page2.find("div",{"class":"lemon--div__373c0__1mboc island__373c0__3fs6U u-padding-t1 u-padding-r1 u-padding-b1 u-padding-l1 border--top__373c0__19Owr border--right__373c0__22AHO border--bottom__373c0__uPbXS border--left__373c0__1SjJs border-color--default__373c0__2oFDT background-color--white__373c0__GVEnp"})

ContactNumber = info.find("p",{"class":"lemon--p__373c0__3Qnnj text__373c0__2pB8f text-color--normal__373c0__K_MKN text-align--left__373c0__2pnx_"})

print(ContactNumber.text)
Output:

(917) 464-3769
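Since those class names look auto-generated and may change, a less brittle variant (a sketch only, reusing the page2 object from the code above) is to scan every p tag on the business page for a phone-shaped string:

import re

# Sketch: pick the first <p> whose text looks like a US phone number,
# instead of depending on Yelp's generated class names.
phone_pattern = re.compile(r"\(\d{3}\)\s*\d{3}-\d{4}")
for p in page2.find_all("p"):
    match = phone_pattern.search(p.get_text())
    if match:
        print(match.group())
        break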

Try this. I tried the second URL and it returns a contact number:

import requests
from bs4 import BeautifulSoup

url = 'https://www.yelp.com/biz/levain-bakery-new-york'
page = requests.get(url)
soup1 = BeautifulSoup(page.content, "lxml")

info = soup1.find_all("div", {
    "class": "lemon--div__373c0__1mboc island-section__373c0__3vKXy border--top__373c0__19Owr border-color--default__373c0__2oFDT"})

contact_number = info[1].find("p", attrs={
    'class': 'lemon--p__373c0__3Qnnj text__373c0__2pB8f text-color--normal__373c0__K_MKN text-align--left__373c0__2pnx_'}).text

print(contact_number)

If the answer helped, please accept it; it may help others as well.

The following worked for me:

from bs4 import BeautifulSoup as soup
from requests import get

url = "https://www.yelp.com/search?find_desc=&find_loc=New+York%2C+NY&ns=1"
clnt = get(url)
page=soup(clnt.text,"html.parser")
container = page.findAll("div",{"class":"lemon--div__373c0__1mboc container__373c0__ZB8u4 hoverable__373c0__3CcYQ margin-t3__373c0__1l90z margin-b3__373c0__q1DuY padding-t3__373c0__1gw9E padding-r3__373c0__57InZ padding-b3__373c0__342DA padding-l3__373c0__1scQ0 border--top__373c0__3gXLy border--right__373c0__1n3Iv border--bottom__373c0__3qNtD border--left__373c0__d1B7K border-color--default__373c0__3-ifU"})

container = container[1]
url2= "https://www.yelp.com"+container.a["href"]
clnt2 = get(url2)
page2 = soup(clnt2.text, 'html.parser')

info = page2.find("div",{"class":"lemon--div__373c0__1mboc island__373c0__3fs6U u-padding-t1 u-padding-r1 u-padding-b1 u-padding-l1 border--top__373c0__19Owr border--right__373c0__22AHO border--bottom__373c0__uPbXS border--left__373c0__1SjJs border-color--default__373c0__2oFDT background-color--white__373c0__GVEnp"})

p_tags_of_info = info.find_all("p")
print(p_tags_of_info[2].text)

I simply extracted all the p tags from the "info" variable and then took the text of the third p tag.
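Hard-coding index 2 assumes the page layout never changes; a small check (again just a sketch) is to print every p tag with its index first and confirm which one actually holds the number:

# Sketch: list the p tags with their indices before relying on p_tags_of_info[2]
for i, p in enumerate(p_tags_of_info):
    print(i, repr(p.text))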

requests does not run JavaScript, so you will not get any dynamically rendered content for BeautifulSoup to parse; use something like Selenium instead of requests.

It's working now! But let's see! Thanks a lot; if it proves useful I'll mark the answer as accepted.

Sorry to hear that. What does it print for you?
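A minimal Selenium sketch along the lines of that comment (it assumes Chrome and a matching chromedriver are installed locally; the BeautifulSoup parsing stays the same as above) could look like this:

from bs4 import BeautifulSoup
from selenium import webdriver

# Sketch: render the page in a real browser so any JavaScript runs,
# then hand the resulting HTML to BeautifulSoup as before.
driver = webdriver.Chrome()   # assumes Chrome + chromedriver are installed
driver.get("https://www.yelp.com/biz/levain-bakery-new-york")
html = driver.page_source
driver.quit()

page = BeautifulSoup(html, "html.parser")
print(page.title.text)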