Python web scraping question: using Beautiful Soup with Python

I need to get the name, address, phone number, and description with this code. This is what I have so far, but now I'm stuck. Please help a web-scraping newbie.

from IPython.core.display import display, HTML

display(HTML("<style>.container { width:100% !important; }</style>"))

from bs4 import BeautifulSoup as soup
import urllib.request
import pandas as pd

with urllib.request.urlopen("http://buildingcongress.org/list/category/architects-6") as url:
    s = url.read()

page_soup = soup(s, 'html.parser')

listings = []

for rows in page_soup.find_all("div"):
    if ("mn-list-item-odd" in rows["mn-listing mn-nonsponsor mn-search-result-priority-highlight-30"]) or ("mn-list-item-even" in rows["mn-listing mn-nonsponsor mn-search-result-priority-highlight-30"]):
        name = rows.find("div", class_="mn-title").a.get_text()

I'm getting an error in my for loop and I'm stuck. Please help.

You can search for the classes with a regular expression and then iterate:

import re
import requests
from bs4 import BeautifulSoup

url = "http://buildingcongress.org/list/category/architects-6"

res = requests.get(url)
soup = BeautifulSoup(res.text,"lxml")
for rows in soup.find_all('div',class_=re.compile('mn-list-item-odd|mn-list-item-even')):
    name = rows.find("div", class_="mn-title").find('a').text
    print(name)
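A side note on why the regex works: BeautifulSoup tests a `class_` pattern against each class token of a tag separately, so a div whose class attribute holds several classes still matches. A minimal, self-contained sketch (the HTML snippet here is made up for illustration):

```python
import re
from bs4 import BeautifulSoup

html = """
<div class="mn-list-item-odd mn-listing">odd</div>
<div class="mn-list-item-even mn-listing">even</div>
<div class="mn-other">skip</div>
"""
soup = BeautifulSoup(html, "html.parser")

# class_ with a compiled regex is checked against each class token,
# so both odd and even rows match while the other div is skipped
matches = [div.text for div in
           soup.find_all("div", class_=re.compile("mn-list-item-odd|mn-list-item-even"))]
print(matches)  # ['odd', 'even']
```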

When you need to visit each detail page, you can use something like the following:

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
import re

results = []
with requests.Session() as s:  # reuse one connection for all requests
    r = s.get('http://buildingcongress.org/list/category/architects-6')
    soup = bs(r.content, 'lxml')
    # collect the link to each business's detail page from the listing page
    links = [item['href'] for item in soup.select('.mn-title a')]
    for link in links:
        r = s.get(link)
        soup = bs(r.content, 'lxml')
        name = soup.select_one('[itemprop="name"]').text
        # join the address parts and collapse newlines into spaces
        address = re.sub(r'\n|\r', ' ', ' '.join([item.text.strip() for item in soup.select('.mn-address1, .mn-citystatezip')]))
        tel = soup.select_one('.mn-member-phone1').text
        # the description section may be absent, so fall back to a placeholder
        desc = re.sub(r'\n|\r', '', soup.select_one('#about .mn-section-content').text) if soup.select_one('#about .mn-section-content') else 'No desc'
        row = [name, address, tel, desc]
        results.append(row)
df = pd.DataFrame(results, columns = ['name', 'address', 'tel', 'desc'])
print(df)
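Once the DataFrame is built, it is usually worth persisting it rather than only printing it. A minimal sketch using pandas' `to_csv` (the sample row and the `architects.csv` filename are placeholders, not part of the answer above):

```python
import pandas as pd

# a stand-in row shaped like the scraper's output
df = pd.DataFrame(
    [["Acme Architects", "1 Main St City ST 00000", "555-0100", "Example firm"]],
    columns=["name", "address", "tel", "desc"],
)
df.to_csv("architects.csv", index=False)  # index=False keeps row numbers out of the file
```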

Please provide the error message. You need to collect the initial links from the landing page for each business, then visit each page to get all the info.