Python 3.x: Why do the downloaded image files have zero bytes, and why do I get an AttributeError when I run my script?

I'm still new to Python and trying to practice by building some scripts. This one is supposed to find the hot submissions in an image subreddit and download those images into a redditpics directory, naming each file after the basename of the submission URL. I'm using Python 3.7. First, I tried this:

import praw, requests, os, bs4

reddit = praw.Reddit(client_id='xxxx', 
                      client_secret='xxxx',
                      user_agent='picture downloader',
                      username='xxxx',
                      password='xxxx'
                      ) 
print(reddit.read_only)

os.makedirs('redditpics', exist_ok=True) 
for submission in reddit.subreddit('earthporn').hot(limit=50):
    url = submission.url
    print(url)
    imageFile = open(os.path.join('redditpics', os.path.basename(url)), 'wb')
print('Done')
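The zero-byte files in this first attempt can be reproduced in isolation: opening a file in "wb" mode truncates it immediately, and since the loop never writes anything to the handle, every file stays empty. A minimal sketch (the filename is hypothetical):

```python
import os

# Opening a file in "wb" mode immediately truncates it to zero bytes;
# if nothing is ever written to it, the file on disk stays empty.
f = open("example.bin", "wb")  # hypothetical filename
f.close()
print(os.path.getsize("example.bin"))  # 0
```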
The downloaded images had zero bytes of data. So then I added the following, adapted from Automate the Boring Stuff:

imageFile = open(os.path.join('redditpics', os.path.basename(url)), 'wb')
for chunk in url.iter_content(100000):
    print("saving " + imageFile)

    imageFile.write(chunk)
imageFile.close()
print('Done.')

But I got the following error:

AttributeError: 'str' object has no attribute 'iter_content'
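The error makes sense in hindsight: submission.url is a plain string, and iter_content only exists on the Response object that requests.get() returns. A quick check (the URL here is just an illustrative string, never fetched):

```python
import requests

url = "https://example.com/image.jpg"  # hypothetical URL string, never requested
print(hasattr(url, "iter_content"))                       # False: str has no such method
print(hasattr(requests.models.Response, "iter_content"))  # True: Response objects do
```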

After some trial and error, using Automate the Boring Stuff and searching online, this finally worked:

for submission in reddit.subreddit('earthporn').hot(limit=50):
    print(submission.url)
    url = requests.get(submission.url)
    imageFile = open(os.path.join('redditpics', os.path.basename(submission.url)), 'wb')
    for chunk in url.iter_content(100000):
        print("saving " + str(imageFile))
        imageFile.write(chunk)
    imageFile.close()
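As a possible tidy-up of that working version (a sketch, not part of the original question; download_image is a hypothetical helper name): raise_for_status() makes failed requests fail loudly instead of silently saving an error page, and a with block guarantees the file is closed even if a write raises.

```python
import os
import requests

def download_image(image_url, dest_dir="redditpics"):
    """Download one image into dest_dir, named after the URL's basename."""
    os.makedirs(dest_dir, exist_ok=True)
    response = requests.get(image_url, timeout=10)
    response.raise_for_status()  # raise an exception on 404/500 instead of saving an error page
    path = os.path.join(dest_dir, os.path.basename(image_url))
    with open(path, "wb") as image_file:  # "with" closes the file even on errors
        for chunk in response.iter_content(100000):
            image_file.write(chunk)
    return path
```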