Scraping two identical-looking pages — why does only one work?

Tags: web, web-scraping, beautifulsoup

I want to scrape the `header` element from these two links. To me, both pages look exactly the same (see the screenshots below).

Why does scraping work for the second link but not for the first?

import time
import requests
from bs4 import BeautifulSoup

# not working
link = "https://apps.apple.com/us/app/bingo-story-live-bingo-games/id1179108009?uo=4"
page = requests.get(link)
time.sleep(1)
soup = BeautifulSoup(page.content, "html.parser")
erg = soup.find("header")
print(f"First Link: {erg}")

# working
link = "https://apps.apple.com/us/app/jackpot-boom-casino-slots/id1554995201?uo=4"
page = requests.get(link)
time.sleep(1)
soup = BeautifulSoup(page.content, "html.parser")
erg = soup.find("header")
print(f"Second Link: {len(erg)}")
Working: (screenshot)

Not working: (screenshot)

Sometimes the page is rendered by JavaScript, which `requests` does not execute, so the server may send back a version without the content. You can use a `while` loop that re-fetches the page, checks whether the `header` appears in the `soup`, and then `break`s once it does.

import requests
from bs4 import BeautifulSoup


# A browser-like User-Agent; per the comments below it may not actually be needed.
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36"
}
link = "https://apps.apple.com/us/app/bingo-story-live-bingo-games/id1179108009?uo=4"

# Re-request the page until the server returns a version containing <header>.
while True:
    soup = BeautifulSoup(requests.get(link, headers=headers).content, "html.parser")
    header = soup.find("header")
    if header:
        break

print(header)
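As the comments note, the server doesn't always return the rendered page, so an unbounded `while True` can spin forever. The retry idea above can be sketched as a small bounded helper that takes any page-fetching callable (the function name and the retry cap are illustrative choices, not part of the original answer):

```python
from bs4 import BeautifulSoup


def fetch_header(get_page, max_tries=10):
    """Re-fetch a page until it contains a <header> tag, up to max_tries.

    get_page: a zero-argument callable returning raw HTML, e.g.
              lambda: requests.get(link).content
    Returns the <header> Tag, or None if every attempt served a JS-only page.
    """
    for _ in range(max_tries):
        soup = BeautifulSoup(get_page(), "html.parser")
        header = soup.find("header")
        if header:
            return header
    return None
```

Calling `fetch_header(lambda: requests.get(link).content)` behaves like the loop above, but gives up after ten attempts instead of hanging when the server never sends the rendered version.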


Try this to grab whatever field you want from those links. Right now it gets the title; you can modify `res.json()['data'][0]['attributes']['name']` to pull any field you are interested in. Make sure to put your URLs in the `urls_to_scrape` set.

import json
import requests
from bs4 import BeautifulSoup
from urllib.parse import unquote

urls_to_scrape = {
    'https://apps.apple.com/us/app/bingo-story-live-bingo-games/id1179108009?uo=4',
    'https://apps.apple.com/us/app/jackpot-boom-casino-slots/id1554995201?uo=4'
}

base_url = 'https://apps.apple.com/us/app/bingo-story-live-bingo-games/id1179108009?uo=4'
link = 'https://amp-api.apps.apple.com/v1/catalog/US/apps/{}'

params = {
    'platform': 'web',
    'additionalPlatforms': 'appletv,ipad,iphone,mac',
    'extend': 'customPromotionalText,customScreenshotsByType,description,developerInfo,distributionKind,editorialVideo,fileSizeByDevice,messagesScreenshots,privacy,privacyPolicyText,privacyPolicyUrl,requirementsByDeviceFamily,supportURLForLanguage,versionHistory,websiteUrl',
    'include': 'genres,developer,reviews,merchandised-in-apps,customers-also-bought-apps,developer-other-apps,app-bundles,top-in-apps,related-editorial-items',
    'l': 'en-us',
    'limit[merchandised-in-apps]': '20',
    'omit[resource]': 'autos',
    'sparseLimit[apps:related-editorial-items]': '5'
}

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.101 Safari/537.36'
    # Fetch one app page and pull out the bearer token the web app
    # uses to talk to its JSON API.
    res = s.get(base_url)
    soup = BeautifulSoup(res.text, "lxml")
    token_raw = soup.select_one("[name='web-experience-app/config/environment']").get("content")
    token = json.loads(unquote(token_raw))['MEDIA_API']['token']
    s.headers['Accept'] = 'application/json'
    s.headers['Referer'] = 'https://apps.apple.com/'
    s.headers['Authorization'] = f'Bearer {token}'

    for url in urls_to_scrape:
        # e.g. ".../id1179108009?uo=4" -> "1179108009"
        id_ = url.split("/")[-1].strip("id").split("?")[0]
        res = s.get(link.format(id_), params=params)
        title = res.json()['data'][0]['attributes']['name']
        print(title)


Comments:

It's not 100% reliable — sometimes the server returns a JS-only page. You need to repeat the request until the server sends the proper version.

@AndrejKesely That's very interesting, because I tested it several times with JS disabled in the browser. Feel free to post an answer instead of me; I'll upvote. The first thing I did was set a `User-Agent`... but when I had already written the answer and tried to reproduce the output, I ran the script again and it failed :/ So no magic :)

@AndrejKesely I updated my answer to check the response. It seems a `User-Agent` isn't needed at all.

Thanks — it seems to work.

What do you want to grab from those links?