How do I read URLs from a text file on my computer with Python?


I have a Python script that scrapes data from a website. The code works fine, but I want to change the URL source to a list in a text file on my desktop, with each URL on its own line. How would you suggest I read this file and loop through the URLs? Thanks in advance for your time.

import csv
import requests
from bs4 import BeautifulSoup
csv_file = open('cms_scrape.csv', 'w')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['name', 'link', 'price'])
for x in range(0, 70):
    try:
        urls = 'https://www.meisamatr.com/fa/product/cat/2-%D8%A2%D8%B1%D8%A7%DB%8C%D8%B4%DB%8C.html&pagesize[]=24&order[]=new&stock[]=1&page[]=' + str(x + 1) + '&ajax=ok?_=1561559181560'
        source = requests.get(urls).text
        soup = BeautifulSoup(source, 'lxml')
        print('Page: %s' % (x + 1))
        for figcaption in soup.find_all('figcaption'):
            price = figcaption.find('span', {'class': 'new_price'}).text.strip()
            name = figcaption.find('a', class_='title').text
            link = figcaption.find('a', class_='title')['href']
            print('%s\n%s\n%s' % (price, name, link))
            csv_writer.writerow([name, link, price])
    except:
        break
csv_file.close()

If there aren't too many URLs in that text file (url.txt in my example), the snippet below should do what you need.

import requests

# read all the URLs in at once
with open("url.txt", "r") as f:
    urls = f.read().splitlines()

# loop over them
for url in urls:
    try:
        source = requests.get(url).text
    except Exception as e:
        print(e)
        break
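If url.txt is large, you don't have to load the whole thing into memory: a file object can be iterated line by line. A minimal variant of the snippet above, under the same assumptions:

import requests

# Iterating over the file object yields one line at a time,
# so the whole file never has to sit in memory.
with open("url.txt", "r") as f:
    for line in f:
        url = line.strip()  # drop the trailing newline
        if not url:
            continue  # skip blank lines
        try:
            source = requests.get(url).text
        except Exception as e:
            print(e)
            break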

Suppose you have a file named input.txt that looks like this:

url1
url2
url3
url4
.
.
.

Then we simply open this input.txt file and split it on the newline character ('\n'). That should give us a list of URLs, like so:
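For instance, a quick sketch using the input.txt above:

with open('input.txt', 'r') as f:
    urls = f.read().split('\n')

print(urls)  # ['url1', 'url2', 'url3', 'url4', ...]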

Then you can simply loop over that list and crawl each page.

Here's the complete script:

# crawler.py

import csv
import requests
from bs4 import BeautifulSoup

with open('input.txt', 'r') as f:
    urls = f.read().split()  # here we get a list of urls

csv_file = open('cms_scrape.csv', 'w')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['name', 'link', 'price'])

for url in urls:
    try:
        source = requests.get(url).text
        soup = BeautifulSoup(source, 'lxml')
        for figcaption in soup.find_all('figcaption'):
            price = figcaption.find('span', {'class': 'new_price'}).text.strip()
            name = figcaption.find('a', class_='title').text
            link = figcaption.find('a', class_='title')['href']
            print('%s\n%s\n%s' % (price, name, link))
            csv_writer.writerow([name, link, price])
    except Exception as e:
        print(e)
        break

csv_file.close()
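Two optional hardening tweaks, my additions rather than part of the answer above: pass a timeout to requests.get so one unresponsive host can't hang the whole crawl, and open the CSV in a with block so the file gets closed even if an exception escapes the loop:

source = requests.get(url, timeout=10).text  # give up after 10 seconds

with open('cms_scrape.csv', 'w', newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    # ... rest of the script, indented inside the with block ...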