How do I scrape a page with BeautifulSoup and Python?

Tags: python, python-2.7, web-scraping

I'm trying to pull information from the BBC Good Food website, but I'm having some trouble narrowing down the data I collect.

Here is what I have so far:

from bs4 import BeautifulSoup
import requests

webpage = requests.get('http://www.bbcgoodfood.com/search/recipes?query=tomato')
soup = BeautifulSoup(webpage.content, 'html.parser')
links = soup.find_all("a")

for anchor in links:
    print(anchor.get('href'), anchor.text)
This returns all of the links on the page along with each link's text, but I only want the links inside the page's "article" elements, since those point to the individual recipes.


With some experimentation I managed to return the text from the articles, but I can't seem to extract the links.

The two things I can see attached to the article tags are href and img.src:

from bs4 import BeautifulSoup
import requests

webpage = requests.get('http://www.bbcgoodfood.com/search/recipes?query=tomato')
soup = BeautifulSoup(webpage.content, 'html.parser')
links = soup.find_all("article")

for ele in links:
    print(ele.a["href"])
    print(ele.img["src"])
The links are located under the elements with class="node-title".

The href values are relative, so to request them you need to prepend http://www.bbcgoodfood.com:

for l in links:
    print(requests.get("http://www.bbcgoodfood.com{}".format(l.a["href"])).status_code)
200
200
200
200
200
200
200
200
200
200
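
Building on that, here is a minimal sketch of how the full recipe URLs could be collected into a list rather than only checked for a 200 status. It assumes the same markup as above (an anchor inside each article element); the urljoin handling and the href=True filter are my additions:

from bs4 import BeautifulSoup
import requests
try:
    from urllib.parse import urljoin   # Python 3
except ImportError:
    from urlparse import urljoin       # Python 2.7

webpage = requests.get('http://www.bbcgoodfood.com/search/recipes?query=tomato')
soup = BeautifulSoup(webpage.content, 'html.parser')

# Collect the absolute URL of every recipe card on the search page
recipe_urls = []
for article in soup.find_all("article"):
    anchor = article.find("a", href=True)
    if anchor is not None:
        recipe_urls.append(urljoin("http://www.bbcgoodfood.com", anchor["href"]))

print(recipe_urls)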

The structure of the BBC Good Food site has since changed.

I managed to adapt the code to the following; it's not perfect, but it can be built upon:

import numpy as np
import requests
from bs4 import BeautifulSoup

# Collect recipe URLs for each ingredient across the first few result pages
listofurls = []
pages = np.arange(1, 10, 1)
ingredientlist = ['milk', 'eggs', 'flour']
for ingredient in ingredientlist:
    for page in pages:
        response = requests.get('https://www.bbcgoodfood.com/search/recipes/page/' + str(page) + '/?q=' + ingredient + '&sort=-relevance')
        soup = BeautifulSoup(response.content, 'html.parser')
        for link in soup.findAll(class_="standard-card-new__article-title"):
            listofurls.append("https://www.bbcgoodfood.com" + link.get('href'))
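
As one way to build on it, here is a small sketch that follows on from the block above (it reuses listofurls and requests from there); the de-duplication and the time.sleep pause are my additions rather than part of the original answer:

import time

# De-duplicate the collected URLs while keeping their original order
seen = set()
unique_urls = []
for url in listofurls:
    if url not in seen:
        seen.add(url)
        unique_urls.append(url)

# Fetch each recipe page with a short pause so the site is not hammered
for url in unique_urls:
    recipe_page = requests.get(url)
    print(url, recipe_page.status_code)
    time.sleep(1)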