
Writing multiple .txt files / Python loop


I'm scraping multiple URLs of a website with BeautifulSoup and want to generate a file for each URL:

from requests import get
from bs4 import BeautifulSoup

categories = ["NEWS_AND_MAGAZINES", "ART_AND_DESIGN",...,"FAMILY"]
subcategories = ["topselling_free",...,"topgrossing"]
urls = []

for i in range(0, len(categories)):
    for j in range(0, len(subcategories)):
        url = categories_url_prefix + categories[i] + '/collection/' + subcategories[j]
        urls.append(url)

for i in urls:
    response = get(i)
    html_soup = BeautifulSoup(response.text, 'html.parser')
    app_container = html_soup.find_all('div', class_="card no-rationale square-cover apps small")
    file = open("apps.txt", "a+")
    for i in range(0, len(app_container)):
        print(app_container[i].div['data-docid'])
        file.write(app_container[i].div['data-docid'] + "\n")

file.close()
This produces only a single file, "apps.txt". How can I generate one file per URL? Thanks!

Just replace your second loop with this:

for n, i in enumerate(urls):
  response = get(i)
  html_soup = BeautifulSoup(response.text, 'html.parser')
  app_container = html_soup.find_all('div', class_="card no-rationale square-cover apps small")
  with open("file{}.txt".format(n),"a+") as f:
    for i in range(0, len(app_container)):
      print(app_container[i].div['data-docid'])
      f.write(app_container[i].div['data-docid'] + "\n")
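As a variant (not part of the answer above), the filename can be derived from the URL itself rather than an index, so each output file is named after its category and subcategory. A minimal sketch, assuming the URLs keep the `.../<CATEGORY>/collection/<SUBCATEGORY>` shape built in the question:

```python
def filename_for(url):
    """Build a per-URL filename from the category/subcategory path segments."""
    parts = url.rstrip("/").split("/")
    # parts[-1] is the subcategory, parts[-3] the category
    # (parts[-2] is the literal "collection" segment)
    return "{}_{}.txt".format(parts[-3], parts[-1])
```

Then `with open(filename_for(i), "a+") as f:` gives self-describing files such as `NEWS_AND_MAGAZINES_topselling_free.txt` instead of `file0.txt`.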

A "No such file or directory: URL.txt" error? That's strange! If you run the program, you should end up with file0.txt, file1.txt, ... in your working directory. Make sure the .txt comes after the file{} in the format string.
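If the files seem to be missing, a quick sanity check (a small sketch, not part of the answer) is to list what actually landed in the working directory:

```python
import glob
import os

def generated_files(pattern="file*.txt"):
    """List output files matching pattern in the current working directory."""
    return sorted(glob.glob(os.path.join(os.getcwd(), pattern)))
```

Printing `os.getcwd()` alongside `generated_files()` usually reveals the mismatch: the script may be running from a different directory than the one being inspected.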