
HTTP Error 404: Not Found - BeautifulSoup and Python


I have a script to scrape a site, but I keep getting "urllib.error.HTTPError: HTTP Error 404: Not Found". I have tried adding a User-Agent to the headers and running the script, but I still get the same error. Here is my code:

from urllib.request import urlopen, Request
from bs4 import BeautifulSoup as soup
import json

atd_url = 'https://courses.lumenlearning.com/catalog/achievingthedream'

#opening up connection and grabbing page
res = Request(atd_url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'})
uClient = urlopen(res)
page_html = uClient.read()
uClient.close()

#html parsing
page_soup = soup(page_html, "html.parser")

#grabs info for each textbook
containers = page_soup.findAll("div",{"class":"book-info"})

data = []
for container in containers:
    item = {}
    item['type'] = "Course"
    item['title'] = container.h2.text
    item['author'] = container.p.text
    item['link'] = container.p.a["href"]
    item['source'] = "Achieving the Dream Courses"
    item['base_url'] = "https://courses.lumenlearning.com/catalog/achievingthedream"
    data.append(item) # add the item to the list

with open("./json/atd-lumen.json", "w") as writeJSON:
    json.dump(data, writeJSON, ensure_ascii=False)
This is the full error message I get every time I run the script:

Traceback (most recent call last):
  File "atd-lumen.py", line 9, in <module>
    uClient = urlopen(res)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 532, in open
    response = meth(req, response)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 570, in error
    return self._call_chain(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

Any suggestions on how to fix this? It is a valid link when entered in a browser.

Use the requests library instead; this works:

import requests
from bs4 import BeautifulSoup as soup

atd_url = 'https://courses.lumenlearning.com/catalog/achievingthedream'

#opening up connection and grabbing page
response = requests.get(atd_url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'})

#html parsing
page_soup = soup(response.content, "html.parser")
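One caveat worth noting (not from the original answer): unlike urllib, requests does not raise on HTTP error status codes, so a 404 would silently hand the error page to BeautifulSoup instead of producing a traceback. Calling `raise_for_status()` restores that behaviour. A minimal sketch, building the `Response` by hand so it runs without a network connection:

```python
import requests

# Stand-in for `response = requests.get(atd_url, ...)` returning a 404;
# constructed by hand so the example does not need network access.
response = requests.models.Response()
response.status_code = 404
response.reason = "Not Found"
response.url = "https://courses.lumenlearning.com/catalog/achievingthedream"

try:
    # Raises requests.exceptions.HTTPError for 4xx/5xx status codes;
    # does nothing for successful responses.
    response.raise_for_status()
except requests.exceptions.HTTPError as err:
    print(err)  # 404 Client Error: Not Found for url: ...
```

In the real script, `response.raise_for_status()` placed right after `requests.get(...)` would surface the 404 instead of parsing an error page.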