Python: error when creating parameters for requests.get


I am iterating over this website and trying to scrape the links of its news articles.

First, I need to get the links of the pages, so I use the following code:

import requests

def scrape(url):
    user_agent = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko'}
    request = 0
    params = {
        'q': 'China%20COVID-19',
        'gsc.tab': '0',
        'gsc.q': 'China%20COVID-19',
    }
    pagelinks = []
    
    myarticle = []
    for page_no in range(1,3):
        try:  # to avoid "No connection adapters were found for" error
            params['gsc.page'] = page_no
            response = requests.get(url=url,
                                    headers=user_agent,
                                    params=params) 
            print(response.request.url)
            
        except Exception as e:
            print(e)
However, the resulting URLs are not correct:

https://www.usnews.com/search?q=China%252520COVID-19&gsc.tab=0&gsc.q=China%252520COVID-19&gsc.page=1
https://www.usnews.com/search?q=China%252520COVID-19&gsc.tab=0&gsc.q=China%252520COVID-19&gsc.page=2
The expected results are:

https://www.usnews.com/search?q=China%20COVID-19#gsc.tab=0&gsc.q=China%20COVID-19&gsc.page=1
https://www.usnews.com/search?q=China%20COVID-19#gsc.tab=0&gsc.q=China%20COVID-19&gsc.page=2

Can anyone help me fix this error? I would really appreciate it.

If you open the URL you got in a browser, you will see that the search string is China%2520COVID-19, not the China COVID-19 you expected.

The %20 you see in your query string is the URL-encoded space character. If you URL-decode China%2520COVID-19, you get China%20COVID-19, from which you can see that %25 is the encoded percent character: your already-encoded value was percent-encoded a second time.
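To see the double encoding in isolation, here is a minimal sketch using only the standard library (this demonstration is mine, not part of the original answer):

from urllib.parse import quote, unquote

value = 'China%20COVID-19'              # already percent-encoded
print(quote(value))                     # China%2520COVID-19: the '%' itself was escaped as %25
print(unquote('China%2520COVID-19'))    # China%20COVID-19: one decode undoes one encode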

Most likely requests already URL-encodes your query-string values, so you do not need to do it yourself. To make this work, change your params to use the decoded values, e.g. an actual space instead of %20:

params = {
    'q': 'China COVID-19',
    'gsc.tab': '0',
    'gsc.q': 'China COVID-19',
}
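To confirm that requests performs the percent-encoding itself, a quick check (a hedged sketch of mine; https://httpbin.org/get is used only as a neutral echo endpoint and does not appear in the original answer):

import requests

params = {'q': 'China COVID-19'}
response = requests.get('https://httpbin.org/get', params=params)
print(response.request.url)  # https://httpbin.org/get?q=China+COVID-19 (encoded exactly once)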
Searching in the browser produces a "hash" URL (https://www.usnews.com/search#...), which you need to build yourself. Passing the parameters to requests via requests.get(..., params=params) creates a regular query string (https://www.usnews.com/search?...), which loads the wrong page: always the first one.

import requests
from urllib.parse import urlencode, unquote

def scrape(url):
    user_agent = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko'}
    url = 'https://www.usnews.com/search'  # base URL the fragment is appended to
    params = {
        'q': 'China COVID-19',
        'gsc.tab': '0',
        'gsc.q': 'China COVID-19'
    }
    pagelinks = []
    myarticle = []

    for page_no in range(1,4):
        params['gsc.page'] = page_no
        # join the base URL and the encoded params with '#' instead of '?'
        _url = '%s#%s' % (url, urlencode(params))

        try:  # to avoid "No connection adapters were found for" error
            response = requests.get(url=_url,
                                    headers=user_agent)
            print(_url, '>>', _url == unquote(response.request.url))

        except Exception as e:
            print(e)

scrape('https://www.usnews.com/search/')
Output:

https://www.usnews.com/search#q=China+COVID-19&gsc.tab=0&gsc.q=China+COVID-19&gsc.page=1 >> True
https://www.usnews.com/search#q=China+COVID-19&gsc.tab=0&gsc.q=China+COVID-19&gsc.page=2 >> True
https://www.usnews.com/search#q=China+COVID-19&gsc.tab=0&gsc.q=China+COVID-19&gsc.page=3 >> True

I don't know anything about requests, but shouldn't your params be unencoded, i.e. 'q': 'China COVID-19' with an actual space rather than a space encoded as %20?

Thanks! But that answer does nothing about the hash in the URL; it still returns a correct-looking URL, yet the page is always the same.

@Yue Peng: I just posted this because the other answer did not work for me. Copying its URL into a browser always shows the first page.

You're right! Thanks for looking into it! I wonder, could you also parse the URLs you get? I need to scrape the article links from each page, something like:

soup_page = bs(response, 'html.parser')
# select all articles for a single page:
containers = soup_page.findAll('div', {'class': 'usn site search item container'})
# scrape the links of the articles in the containers:
for i in containers:
    url = i.find('a')
    pagelinks.append(url.get('href'))
print(pagelinks)

@Yue Peng: The search results are loaded by JavaScript, and the requests library cannot execute it. Even in the browser's page source there are no search results in the DOM (see the sketch after these comments).

This code does not produce the expected results. Copying the URL into a browser always shows the first page.
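As the last comments point out, requests only fetches the raw HTML and cannot run the JavaScript that injects the search results. A hedged sketch of the usual workaround with Selenium (not from the original thread; the CSS selector div.gsc-result a.gs-title is an assumption about the Google Custom Search markup the page appears to use):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless=new')      # render pages without opening a window
driver = webdriver.Chrome(options=options)  # assumes a matching chromedriver is available
try:
    # the hash URL built above, with gsc.page selecting the result page
    driver.get('https://www.usnews.com/search#q=China%20COVID-19&gsc.tab=0'
               '&gsc.q=China%20COVID-19&gsc.page=2')
    driver.implicitly_wait(10)  # allow the JavaScript time to render the results
    # hypothetical selector for the Google Custom Search result links
    links = [a.get_attribute('href')
             for a in driver.find_elements(By.CSS_SELECTOR, 'div.gsc-result a.gs-title')]
    print(links)
finally:
    driver.quit()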