Python 3: retrying a urllib download when it fails


I have a program that downloads files from a server. They range from 2 MB to 8 MB. It runs a loop and grabs however many files I request. The problem is that my internet connection is terrible out here in the middle of the flippin' desert. Most of the time everything runs fine, but sometimes the connection drops during the urllib.request.urlretrieve call and the program freezes. I need a way for urllib to detect when the network has gone down and retry the file until the connection comes back. Any help is appreciated.

An example of what I'm doing:

try:
    numimgs = len(imgsToGet)

    path1 = "LEVEL II" #HIGHEST FOLDER
    self.fn = imgs.split('/')[-1] #SPLIT OUT NAME FROM LEFT
    path2 = self.fn[:4] #SPLIT OUT KICX
    path3 = self.fn.split('_')[1] #SPLIT OUT DATE
    savepath = os.path.join(path1, path2, path3) #LEVEL II / RADAR / DATE PATH

    if not os.path.isdir(savepath): #See if it exists
        os.makedirs(savepath) #If not, make it

    fileSavePath = os.path.join(path1, path2, path3, self.fn)

    if os.path.isfile(fileSavePath): #check to see if image path already exists
        self.time['text'] = self.fn + ' exists \n'
        continue

    #DOWNLOAD PROGRESS
    def reporthook(blocknum, blocksize, totalsize):
        percent = 0
        readsofar = blocknum * blocksize
        if totalsize > 0:
            percent = readsofar * 1e2 / totalsize
            if percent >= 100:
                percent = 100

            s = "\r%5.1f%% %*d / %d" % (
                percent, len(str(totalsize)), readsofar, totalsize)

            self.time['text'] = 'Downloading File: '+str(curimg)+ ' of '+str(numimgs)+' '+self.fn+'' + s

            if readsofar >= totalsize: # near the end
                self.time['text'] = "Saving File..."
        else: # total size is unknown
            self.time['text'] = "read %d\n" % (readsofar)

        #UPDATE PROGRESSBAR
        self.pb.config(mode="determinate")
        if percent > 0:
            self.dl_p = round(percent,0)
            self.pb['value'] = self.dl_p
            self.pb.update()
        if percent >= 100: #percent is clamped to 100 above, so use >= to reset the bar when done
            self.pb['value'] = 0
            self.pb.update()

    urllib.request.urlretrieve(imgs, fileSavePath, reporthook)

except urllib.error.HTTPError as err: #catch 404 not found and continue
    if err.code == 404:
        self.time['text'] = ' Not Found'
        continue
Cheers,


David

You can put the code inside a try-except block with a counter. Here's what I did:

import time
from urllib.request import urlretrieve

remaining_download_tries = 15

while remaining_download_tries > 0:
    try:
        urlretrieve(CaseLawURL, File_Path_and_Name)
        print("successfully downloaded: " + CaseLawURL)
        time.sleep(0.1)
    except Exception:
        print("error downloading " + CaseLawURL + " on trial no: " + str(16 - remaining_download_tries))
        remaining_download_tries = remaining_download_tries - 1
        continue
    else:
        break
I hope the code is self-explanatory. Regards
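A variant of the same idea, as a sketch rather than the answerer's exact code: urlretrieve has no timeout parameter, so setting a global socket timeout makes a dead connection raise an exception instead of freezing the program, and the retry loop can catch network errors specifically (URLError, timeouts) rather than using a bare except. The function name and parameters below are illustrative:

```python
import socket
import time
import urllib.error
import urllib.request

def download_with_retries(url, dest, max_tries=15, delay=5, timeout=30):
    """Download url to dest, retrying on network errors.

    Returns True on success, False if every attempt failed.
    """
    # urlretrieve has no timeout argument; a global socket timeout
    # makes a dropped connection raise instead of hanging forever.
    socket.setdefaulttimeout(timeout)
    for attempt in range(1, max_tries + 1):
        try:
            urllib.request.urlretrieve(url, dest)
            return True
        except (urllib.error.URLError, socket.timeout, ConnectionError) as err:
            print("attempt %d/%d failed: %s" % (attempt, max_tries, err))
            time.sleep(delay)  # wait a bit for the connection to come back
    return False
```

The reporthook from the question can still be passed as the third argument to urlretrieve inside the loop, so the progress bar keeps working across retries.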