Python: why doesn't urlopen work for some websites?


I'm very new to Python, and I'm trying to pull some basic data from a client's website. I've used the same approach on other sites and gotten the expected results. This is what I have so far:

from urllib.request import urlopen
from bs4 import BeautifulSoup

main_url = 'https://www.grainger.com/category/pipe-hose-tube-fittings/hose-products/hose-fittings-couplings/cam-groove-fittings-gaskets/metal-cam-groove-fittings/stainless-steel-cam-groove-fittings'

# Open the URL and read the raw HTML
uClient = urlopen(main_url)
main_html = uClient.read()
uClient.close()
Even this simple call to read the site results in a timeout error. As I said, I've used exactly the same code successfully on other websites. The error is:

Traceback (most recent call last):
  File "Pricing_Tool.py", line 6, in <module>
    uClient = uReq(main_url)
  File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 543, in _open
    '_open', req)
  File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 1362, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "C:\Users\Brian Knoll\anaconda3\lib\urllib\request.py", line 1322, in do_open
    r = h.getresponse()
  File "C:\Users\Brian Knoll\anaconda3\lib\http\client.py", line 1344, in getresponse
    response.begin()
  File "C:\Users\Brian Knoll\anaconda3\lib\http\client.py", line 306, in begin
    version, status, reason = self._read_status()
  File "C:\Users\Brian Knoll\anaconda3\lib\http\client.py", line 267, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "C:\Users\Brian Knoll\anaconda3\lib\socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "C:\Users\Brian Knoll\anaconda3\lib\ssl.py", line 1071, in recv_into
    return self.read(nbytes, buffer)
  File "C:\Users\Brian Knoll\anaconda3\lib\ssl.py", line 929, in read
    return self._sslobj.read(len, buffer)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Is it possible that this website is just too big to handle?
Any help would be greatly appreciated. Thanks!

Usually a website returns a response when a request is sent to it. But some websites require specific headers, such as User-Agent, Cookie, and so on, and this is one such website. You have to send a User-Agent header so that the site sees the request as coming from a browser. The following code should return a response with status code 200:

import requests
headers = {"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"}
res = requests.get("https://www.grainger.com/category/pipe-hose-tube-fittings/hose-products/hose-fittings-couplings/cam-groove-fittings-gaskets/metal-cam-groove-fittings/stainless-steel-cam-groove-fittings", headers=headers)
print(res.status_code)
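
If you'd rather stay with urllib.request from the original question instead of switching to requests, the same idea should apply: attach the header to a Request object before calling urlopen. This is a minimal sketch under that assumption, reusing the main_url variable and the User-Agent string shown above:

from urllib.request import Request, urlopen

headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"}
# Build a Request carrying the browser-like User-Agent, then open it as before
req = Request(main_url, headers=headers)
uClient = urlopen(req)
main_html = uClient.read()
uClient.close()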
Update:

from bs4 import BeautifulSoup
soup = BeautifulSoup(res.text, "lxml")
print(soup.find_all("a"))
This will give all the anchor tags.
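
If you only want the link targets rather than the full tags, a small follow-up sketch using the same soup object could look like this:

# Collect the href attribute of every anchor that actually has one
links = [a["href"] for a in soup.find_all("a", href=True)]
print(links)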