Scraping Yellow Pages hrefs with Python

python, parsing, beautifulsoup

I recently posted a question about extracting data from Yellow Pages, and @alecxe showed me some new ways to pull the data out, which helped a lot. Now I'm stuck again: I want to follow each link in the Yellow Pages results so I can scrape each listing's own Yellow Pages page, which holds more data. I'd like to add a variable called "url" that holds each business's href, meaning its Yellow Pages listing page rather than the business's actual website. I've tried various things, but nothing seems to work. The href sits under the element with the "business-name" class.
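A minimal sketch of the idea being asked about, assuming each search result renders the business name as an <a class="business-name"> anchor whose href points at the listing's Yellow Pages page (the selector and markup here are assumptions, not verified against the live site):

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

search_url = "http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%20rouge+LA&page=1"

page = requests.get(search_url, headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(page.text, "html.parser")

for anchor in soup.select("a.business-name"):
    # the href is relative, so join it with the page URL to get the full listing URL
    url = urljoin(page.url, anchor["href"])
    print(url)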


You should implement several things:

  • extract the business link from the href attribute of the "business-name" element; in BeautifulSoup this can be done by treating the element like a dictionary (see the short illustration after this list)
  • use urljoin() to turn the relative href into an absolute URL
  • make the request to the business page while maintaining the web-scraping session
  • parse the business page with BeautifulSoup and extract the desired information
  • add a time delay to avoid hitting the site too often
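A quick illustration of the dictionary-style attribute access and urljoin() mentioned in the first two points, using a made-up snippet of markup rather than the real page:

from urllib.parse import urljoin

from bs4 import BeautifulSoup

html = '<a class="business-name" href="/baton-rouge-la/mip/some-business-12345">Some Business</a>'
anchor = BeautifulSoup(html, "html.parser").select_one("a.business-name")

print(anchor["href"])      # dictionary-style attribute access: /baton-rouge-la/mip/some-business-12345
print(anchor.get("href"))  # same idea, but returns None instead of raising KeyError when the attribute is missing

# urljoin() turns the relative href into an absolute link
print(urljoin("http://www.yellowpages.com/search?search_terms=businesses", anchor["href"]))
# http://www.yellowpages.com/baton-rouge-la/mip/some-business-12345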
Complete working example that prints out the business name from the search results page and the business description from the business profile page:

from urllib.parse import urljoin  

import requests
import time
from bs4 import BeautifulSoup


url = "http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%rouge+LA&page=1"


with requests.Session() as session:
    session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'}

    page = session.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
    for result in soup.select(".search-results .result"):
        business_name_element = result.select_one(".business-name")
        name = business_name_element.get_text(strip=True, separator=" ")

        link = urljoin(page.url, business_name_element["href"])

        # extract additional business information
        business_page = session.get(link)
        business_soup = BeautifulSoup(business_page.text, "html.parser")
        description = business_soup.select_one("dd.description").text

        print(name, description)

        time.sleep(1)  # time delay to not hit the site too often

Awesome! I'm still quite new to Python and programming. Your solution works brilliantly; the only small change I made was adding business_name_element = result.select_one(".business-name") and link = urljoin(page.url, business_name_element["href"]). As I read through your code I reverse-engineered it so it made sense to me. Thanks for the support! I'm running your code as-is, but I get an error in the description part.
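The error in the description part is most likely select_one("dd.description") returning None for listings that have no description block, which makes the following .text access raise an AttributeError. One way to guard against that, sketched as a hypothetical helper (extract_description is not part of the code above, and the "no description" fallback is arbitrary):

from bs4 import BeautifulSoup

def extract_description(html):
    """Return the listing description, or a placeholder if the page has none."""
    business_soup = BeautifulSoup(html, "html.parser")
    description_element = business_soup.select_one("dd.description")
    # select_one() returns None when nothing matches, so check before reading the text
    if description_element is None:
        return "no description"
    return description_element.get_text(strip=True)

Inside the loop above, the description line would then become description = extract_description(business_page.text).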