Web scraping: how do I get the 'link of each page' from 'each page'?


I want to get the 'link of each page' from 'each page' using Python 3.

In my code, each page is reached through BaseUrl, and the link on each page is inside the page body,

where

BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page='

select body = '#listCompanies > div > div.section_group > section:nth-child(1) > div > div > dl.content_col2_3.cominfo > dt > a'
Please check my code. I want to collect the link of every page and store them as a list of link URLs (linkUrl). What is wrong with it?

from bs4 import BeautifulSoup
import csv
import os
import re
import requests
import json

# jobplanet
BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page='


for i in range(1, 5, 1):
        url = BaseUrl + str(i)
        r = requests.get(url)
        soup = BeautifulSoup(r.text,'lxml')
        body = soup.select('#listCompanies > div > div.section_group > section:nth-child(1) > div > div > dl.content_col2_3.cominfo > dt > a')
        #print(body)

        linkUrl = []
        for item in body:
            link = item.get('href')
            linkUrl.append(link)
print(linkUrl)

The CSS selector you chose returns only one record per page. Below is a simpler CSS selector that returns all 10 records on each page.

You also need to define the list outside the loop:

from bs4 import BeautifulSoup
import requests

linkUrl = []
BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page={}'
for i in range(1, 6):
    url = BaseUrl.format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'lxml')
    links = soup.select(".us_titb_l3 > a")  # one anchor per company entry on the page
    for item in links:
        link = item.get('href')
        linkUrl.append(link)

print(linkUrl)
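
The hrefs collected this way appear to be site-relative paths; a minimal follow-up sketch, assuming the site root is https://www.jobplanet.co.kr, that joins them into absolute URLs:

from urllib.parse import urljoin

site_root = 'https://www.jobplanet.co.kr'  # assumed base URL for the relative hrefs
absolute_links = [urljoin(site_root, href) for href in linkUrl]
print(absolute_links)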

Your CSS selector was wrong, and I have added pagination handling:

from bs4 import BeautifulSoup
import csv
import os
import re
import requests
import json
from urllib import parse

# jobplanet
BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page={}'
source = requests.get(BaseUrl.format(1))
soup = BeautifulSoup(source.text, 'lxml')
last_page_index = soup.select('a[class="btn_pglast"]')  # anchor pointing at the last results page
last_page_index = int(last_page_index[0].get('href').split('page=')[1]) if last_page_index else 1

linkUrl = []  # define the list outside the loop so links accumulate across pages
for i in range(1, last_page_index + 1):
    print('## Getting Page {} out of {}'.format(i, last_page_index))
    if i > 1:  # page 1 was already fetched above; avoid requesting it again
        url = BaseUrl.format(i)
        r = requests.get(url)
        soup = BeautifulSoup(r.text, 'lxml')
    body = soup.select('dt[class="us_titb_l3"] a')
    for item in body:
        link = item.get('href')
        link = parse.urljoin(BaseUrl, link)  # turn the relative href into an absolute URL
        linkUrl.append(link)
print(linkUrl)
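
Since the original script already imports csv, here is a minimal sketch of how the collected links could be written to a file afterwards; the file name links.csv is only an example:

import csv

with open('links.csv', 'w', newline='', encoding='utf-8') as f:  # example output file
    writer = csv.writer(f)
    writer.writerow(['link'])                      # header row
    writer.writerows([link] for link in linkUrl)   # one row per collected link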