Python 3.x: requests.get gives an InvalidSchema error



I'm trying to scrape a list of URLs from a CSV file. Here's my code:

from bs4 import BeautifulSoup
import requests
import csv

with open('TeamRankingsURLs.csv', newline='') as f_urls, open('TeamRankingsOutput.csv', 'w', newline='') as f_output:
    csv_urls = csv.reader(f_urls)
    csv_output = csv.writer(f_output)


    for line in csv_urls:
        page = requests.get(line[0]).text
        soup = BeautifulSoup(page, 'html.parser')
        results = soup.findAll('div', {'class' :'LineScoreCard__lineScoreColumnElement--1byQk'})

        for r in range(len(results)):
            csv_output.writerow([results[r].text])
…which gives me the following error:

Traceback (most recent call last):
  File "TeamRankingsScraper.py", line 11, in <module>
    page = requests.get(line[0]).text
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 616, in send
    adapter = self.get_adapter(url=request.url)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 707, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'ï»¿https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15'
My CSV file is just a list of a few URLs in column A (i.e. …)

(The div class I'm trying to scrape doesn't exist on that page, but that's not the problem. At least I don't think so. I just need to update it as I read from the CSV file.)


Any suggestions? This code works on another project, but for some reason I'm having problems with this new list of URLs. Thanks a lot!

From the traceback:

requests.exceptions.InvalidSchema: No connection adapters were found for 'ï»¿https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15'

notice the stray characters in front of the url. It should start from
https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15

So first parse the csv and use a regex to strip any stray characters before the http/https. That should fix your problem.

If you want to fix the current issue for this specific url as it's read from the csv, do the following:

import re

# the url as it comes out of the csv, with the stray characters in front
strin = "ï»¿https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15"

# drop everything up to the scheme; .* is greedy, so the match ends at the
# last occurrence of 'http', which here is the real scheme
print(re.sub(r'.*http', 'http', strin))
# https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15
This gives you a correct url that requests can work with.

And since you asked for a complete fix that covers every url read in the loop, you can do the following:

from bs4 import BeautifulSoup
import requests
import csv
import re

with open('TeamRankingsURLs.csv', newline='') as f_urls, open('TeamRankingsOutput.csv', 'w', newline='') as f_output:
    csv_urls = csv.reader(f_urls)
    csv_output = csv.writer(f_output)

    for line in csv_urls:
        # strip any stray characters before the scheme so requests accepts the url
        url = re.sub(r'.*http', 'http', line[0])
        page = requests.get(url).text
        soup = BeautifulSoup(page, 'html.parser')
        results = soup.find_all('div', {'class': 'LineScoreCard__lineScoreColumnElement--1byQk'})

        for result in results:
            csv_output.writerow([result.text])
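
As an aside: if the stray text is always this BOM-style debris, plain str.lstrip does the same job without a regex. A hypothetical variant, assuming the junk characters are limited to '\ufeff' and its mojibake form 'ï»¿':

# regex-free cleanup (assumption: the junk is only BOM debris, not arbitrary text)
raw = "ï»¿https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15"
print(raw.lstrip("\ufeffï»¿"))  # https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15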

Thanks for the reply. I don't know why these special characters end up in front of the URLs I'm pulling from the CSV, since there's no sign of them in the CSV itself? Anyway, is there a way to adapt this to work in the loop over all the URLs, rather than just the one? When I try the requests.get part I get a similar error. Thanks!

Yes, that's possible, let me edit the answer. If this answer works for you, could you accept it?

Awesome! Thank you so much for doing this for me. I was going around in circles, but now I get it. I'll accept it!
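
For what it's worth, 'ï»¿' is exactly how a UTF-8 byte-order mark (the bytes EF BB BF) renders when it gets decoded as Latin-1/cp1252, and Excel routinely writes one at the start of CSV files it saves; that would explain why the characters are invisible when you look at the CSV. If that's where they come from, you can fix the problem at the source instead of patching each url: open the file with the utf-8-sig codec, which strips the BOM as the file is read. A minimal sketch, assuming the same TeamRankingsURLs.csv:

import csv

# 'utf-8-sig' transparently removes a leading UTF-8 byte-order mark if present;
# plain 'utf-8' (or the Windows default cp1252) would leave it glued to the first url
with open('TeamRankingsURLs.csv', newline='', encoding='utf-8-sig') as f_urls:
    for line in csv.reader(f_urls):
        print(repr(line[0]))  # should show a clean 'https://...' with no prefix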