
Python: adding a pause inside a for-in loop


I have a script that hits a website more than 100 times, and I would like to add a short delay between each request. If possible, I would also like a countdown or progress bar showing the status or the number of items remaining. Here is a code sample:

import pandas as pd

urls = ['https://vpic.nhtsa.dot.gov/api/vehicles/GetModelsForMakeIdYear/makeId/440/vehicletype/car?format=csv', 
    'https://vpic.nhtsa.dot.gov/api/vehicles/GetModelsForMakeIdYear/makeId/441/vehicletype/car?format=csv', 
    'https://vpic.nhtsa.dot.gov/api/vehicles/GetModelsForMakeIdYear/makeId/442/vehicletype/car?format=csv', 
    'https://vpic.nhtsa.dot.gov/api/vehicles/GetModelsForMakeIdYear/makeId/443/vehicletype/car?format=csv', 
    'https://vpic.nhtsa.dot.gov/api/vehicles/GetModelsForMakeIdYear/makeId/445/vehicletype/car?format=csv', 
    'https://vpic.nhtsa.dot.gov/api/vehicles/GetModelsForMakeIdYear/makeId/448/vehicletype/car?format=csv']             


dfs = [pd.read_csv(url) for url in urls]
df = pd.concat(dfs, ignore_index=True)                                                                                      
df.to_csv('foo.csv')                                                                                                        

Read up on tqdm for progress bars and on time.sleep for pausing execution.

Just curious: why add a pause?

@Parfait, because with this many requests the script hits the same server so quickly that the server ends up blocking it.

Awesome! Works great, @KyleRichards :)
import time

dfs = []
num = len(urls)
for i, url in enumerate(urls):
    dfs.append(pd.read_csv(url))
    print('Processed {}, {} left'.format(i + 1, num - i - 1))
    time.sleep(2)  # pause two seconds between requests
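The loop above can also be wrapped in a small reusable helper. This is a stdlib-only sketch of the same idea; `fetch_all`, the `fetch` callable, and the `delay` value are placeholder names of my own (in the script above, `fetch` would be `pd.read_csv` and `items` would be `urls`):

```python
import time


def fetch_all(fetch, items, delay=2.0):
    """Call `fetch` on each item, sleeping `delay` seconds between
    calls and printing how many items remain."""
    results = []
    total = len(items)
    for i, item in enumerate(items, start=1):
        results.append(fetch(item))
        print('Processed {}/{}, {} left'.format(i, total, total - i))
        if i < total:  # no need to sleep after the last item
            time.sleep(delay)
    return results
```

With pandas this would be called as `dfs = fetch_all(pd.read_csv, urls)`; for a nicer progress bar, the comments' suggestion of wrapping the iterable in `tqdm` works the same way inside the loop.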