Python can't scrape different fields from two different depths at the same time
I've written a script in Python that scrapes the name, address, and phone number of different restaurants from the landing page of a webpage, and parses the author and review from each restaurant's inner page.

I want to use yield in the get_additional_info(link) function to produce its results, and have them printed along with the other results in the get_links(link) function.

This is what I have so far:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = "https://www.yellowpages.com/search?search_terms=restaurant&geo_location_terms=San+Francisco%2C+CA"
base = "https://www.yellowpages.com"

def get_links(link):
    res = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for item in soup.select(".v-card"):
        inner_link = item.select_one("a.business-name")
        author, review = get_additional_info(urljoin(base, inner_link.get('href')))
        title = inner_link.text
        address = item.select_one("p.adr").get_text(strip=True)
        phone = item.select_one(".phone").text
        yield title, address, phone, author, review

def get_additional_info(link):
    res = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for elem in soup.select("article[class='clearfix']"):
        try:
            author = elem.select_one(".review-info a.author").text
        except AttributeError:
            author = ""
        try:
            review = elem.select_one(".review-response > p").text
        except AttributeError:
            review = ""
        yield author, review

if __name__ == '__main__':
    for item in get_links(url):
        print(item)
When I run the script above, it throws the following error, pointing at the line author, review = get_additional_info(urljoin(base, inner_link.get('href'))):

Traceback (most recent call last):
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\demo.py", line 36, in <module>
    for item in get_links(url):
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\demo.py", line 14, in get_links
    author, review = get_additional_info(urljoin(base, inner_link.get('href')))
ValueError: too many values to unpack (expected 2)
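The error comes from unpacking the generator object itself rather than the pairs it yields: `author, review = get_additional_info(...)` iterates the generator and fails as soon as it produces more than two items. A minimal sketch with a toy generator (not the scraper itself) reproduces it:

```python
def pairs():
    # stand-in for get_additional_info: yields several (author, review) tuples
    yield "Mark I.", "Their food is good"
    yield "Cathy L.", "My go-to place"
    yield "Mary C.", "Worth going"

# Unpacking the generator object spreads ALL yielded items over
# two names, so three yields raise ValueError.
try:
    author, review = pairs()
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

# Iterating instead unpacks one yielded tuple at a time.
for author, review in pairs():
    print(author, review)
```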
All the fields I'm trying to grab have correctly defined selectors; producing each listing's fields joined with each of its reviews is what I'm after.

PS: I'd like to stick with the approach I've already tried, meaning I don't want to parse everything from the inner pages, because most of that data is useless to me.
If I understand you correctly, you want to "join" the links and the additional information. One way to do it:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from textwrap import shorten

url = "https://www.yellowpages.com/search?search_terms=restaurant&geo_location_terms=San+Francisco%2C+CA"
base = "https://www.yellowpages.com"

def get_links(session, link):
    res = session.get(link, headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for item in soup.select(".v-card"):
        inner_link = item.select_one("a.business-name")
        title = inner_link.text
        address = item.select_one("p.adr").get_text(strip=True)
        phone = item.select_one(".phone").text
        for author, review in get_additional_info(session, urljoin(base, inner_link.get('href'))):
            yield title, address, phone, author, review

def get_additional_info(session, link):
    res = session.get(link, headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for elem in soup.select("article[class='clearfix']"):
        try:
            author = elem.select_one(".review-info a.author").text
        except AttributeError:
            author = ""
        try:
            review = elem.select_one(".review-response > p").text
        except AttributeError:
            review = ""
        yield author, review

if __name__ == '__main__':
    with requests.session() as s:
        # this sets all cookies
        res = s.get("https://www.yellowpages.com", headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0'}).text
        for title, address, phone, author, review in get_links(s, url):
            print('{: <30}{: <30}{: <20}{: <20}{}'.format(shorten(title, 30), shorten(address, 30), shorten(phone, 20), shorten(author, 20), shorten(review, 60)))
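The key change here is the nested loop: for every listing, the inner generator is driven to exhaustion, and the listing's fields are repeated on each yielded row. The same fan-out pattern, stripped of the HTTP calls (toy data and hypothetical function names, just to show the shape):

```python
def get_reviews(name):
    # stand-in for get_additional_info: pretend each page has two reviews
    for i in (1, 2):
        yield f"author{i} of {name}", f"review{i}"

def get_rows(listings):
    # stand-in for get_links: repeat the outer fields for every inner row
    for name, phone in listings:
        for author, review in get_reviews(name):
            yield name, phone, author, review

rows = list(get_rows([("El Toreador", "415"), ("Prime Rib", "415-2")]))
print(len(rows))  # 2 listings x 2 reviews each = 4 rows
```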
Comments:

The get_additional_info() function returns a generator; you have to consume this generator to get the items, or, if you don't need a generator, change the function to return the items instead.

Could I return all of those items from get_additional_info() and still print them the same way in get_links(link)?

If you return the values as they are, the function will only remember the last iteration. You could append all the items to a list, but that can be avoided. To unpack the generator you would have to know how many items it will return, which may not be possible in this case. What you can do instead is iterate over the generator and unpack each item, as is done in Andrej's answer.

Thanks a lot, sir @t.m.adam. That's just what I needed to know.

You could also shorten for title, address, phone, author, review in get_links(s, url): to a single loop variable, e.g. for row in get_links(s, url):, and unpack row afterwards; it still produces the same result.
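As the comment notes, a return-based variant would have to collect the pairs into a list before returning. A sketch of that alternative (toy data and a hypothetical function name), for when a generator is not wanted:

```python
def get_additional_info_list(reviews):
    # collect every (author, review) pair instead of yielding them one by one
    result = []
    for author, review in reviews:
        result.append((author or "", review or ""))
    return result  # a plain list: it has a length and can be iterated repeatedly

pairs = get_additional_info_list([("Mark I.", "Good food"), (None, "Busy place")])
print(len(pairs))   # 2
print(pairs[1][0])  # empty string: a missing author was normalized
```

A list is simpler to reason about but holds every pair in memory at once; the generator in the answer streams one pair at a time, which scales better across many pages.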
Sample output of the answer's script:
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Mark I. Their food is good but i think they need to improve on [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Cathy L. This place is pretty much my go to place is I want [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Mary C. They have so many things in here worth going in here [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Claude R. The appetizers in here are enough to make you ask for [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Felicia M. How can this be? This place looks like magic and their [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Jose H. I feel like I just got from Mexico, we went here last [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Authentic Mexican. Always busy and the house salsa is [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 I'm disappointed. The decor is ecclectic and fun, the [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 This used to be one of my favorite restaurants until I [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 I came to this restarnt for a birthday of a friend of [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 The reviews here, which I consulted before going, were [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 I have been told to give it a try.Food is on [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Great food... love the empalmada... sort of like a [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Definitely the best Mexican restaurant in town!... [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 This place has been consistenly good for a few years. [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 So-so Mexican food served by a vaguely condescending, [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 since the place is small, it gets crowded quickly and [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Go early if you don't want to wait. They don't take [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 A great place where you belong like part of the [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Keith Y. Loved this place. Food and service was amazing
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Quintrell P. Was really hungry and needed a place to get some [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Len K. I'm not usually a fan of red meat, but I'm definitely [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Emm C. I haven't been able to see San Francisco, one of my [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 James O. For me, it`s one of the best ribs in town, I give [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Jing H. This is one of the best places if you are craving for [...]
...etc.