Python - Scraping images using requests


I am unable to save/download the images to the target location. The code looks correct to me, but I can't figure out what is going wrong.

I am using the requests library to scrape the images.

import os
import urllib
import urllib.request
from bs4 import BeautifulSoup
import requests
import re

from lxml.html import fromstring

r = requests.get("https://www.scoopwhoop.com/subreddit-nature/#.lce3tjfci")
data = r.text
soup = BeautifulSoup(data, "lxml")

title = fromstring(r.content).findtext('.//title')

#print(title)


newPath = r'C:\Users\Vicky\Desktop\ScrappedImages\ ' + title

for link in soup.find_all('img'):
    image = link.get('src')
    if 'http' in image:
        print(image)
        imageName = os.path.split(image)[1]
        print(imageName)

        r2 = requests.get(image)

        if not os.path.exists(newPath):
            os.makedirs(newPath)
            with open(imageName, "wb") as f:
                f.write(r2.content)

Try wrapping your
r = requests.get("https://www.scoopwhoop.com/subreddit-nature/#.lce3tjfci")
in a try: except: statement to make sure the website you are scraping is returning a 200 response; it could be that the site is timing out or not serving your request.
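
A minimal sketch of that check, assuming a plain try/except around the request (the timeout value and the raise_for_status() call are my additions, not part of the answer):

import requests

url = "https://www.scoopwhoop.com/subreddit-nature/#.lce3tjfci"
try:
    # timeout value here is an assumption
    r = requests.get(url, timeout=10)
    # raise_for_status() raises an HTTPError for any non-2xx response
    r.raise_for_status()
except requests.exceptions.RequestException as e:
    print("Request failed:", e)
else:
    print("Got", r.status_code, "-", len(r.content), "bytes")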

What errors are you getting, if any? You have to add an else branch: as written, if the path already exists the loop never writes anything. Possible duplicate.
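
A minimal sketch of that fix, assuming the intent is to create the folder once and write every image into it (moving the folder creation outside the loop and the os.path.join call are my changes to the question's code; the title suffix on the folder name is left out for brevity):

import os
import requests
from bs4 import BeautifulSoup

r = requests.get("https://www.scoopwhoop.com/subreddit-nature/#.lce3tjfci")
soup = BeautifulSoup(r.text, "lxml")

# target folder; created once so the existence check no longer gates the writes
newPath = r'C:\Users\Vicky\Desktop\ScrappedImages'
if not os.path.exists(newPath):
    os.makedirs(newPath)

for link in soup.find_all('img'):
    image = link.get('src')
    if image and 'http' in image:
        imageName = os.path.split(image)[1]
        r2 = requests.get(image)
        # always write, and join with newPath so the file lands in that folder
        with open(os.path.join(newPath, imageName), "wb") as f:
            f.write(r2.content)
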
import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from urllib.request import urlretrieve

r = requests.get("https://www.scoopwhoop.com/subreddit-nature/#.lce3tjfci")
data = r.text
soup = BeautifulSoup(data, "lxml")

for link in soup.find_all('img'):
    image = link.get('src')
    # keep only absolute URLs, i.e. ones that carry a network location
    if image and bool(urlparse(image).netloc):
        print(image)
        imageName = image[image.rfind("/")+1:]
        print(imageName)

        # download the image into the current working directory
        urlretrieve(image, imageName)