
Python: web scraping across multiple web addresses

I'm trying to iterate over several web pages in one script. However, it only pulls data from the last URL in my list.

Here is my current code:

from bs4 import BeautifulSoup # BeautifulSoup is in bs4 package 
import requests

URLS = ['https://sc2replaystats.com/replay/playerStats/11116819/1809336', 'https://sc2replaystats.com/replay/playerStats/11116819/1809336']

for URL in URLS:
  response = requests.get(URL)
soup = BeautifulSoup(response.content, 'html.parser')

tb = soup.find('table', class_='table table-striped table-condensed')
for link in tb.find_all('tr'):
    name = link.find('span')
    if name is not None:
        print(name['title'])
The result is:

Commandcenter
Supplydepot
Barracks
Refinery
Orbitalcommand
Commandcenter
Barracksreactor
Supplydepot
Factory
Refinery
Factorytechlab
Orbitalcommand
Starport
Bunker
Supplydepot
Supplydepot
Starporttechlab
Supplydepot
Barracks
Refinery
Supplydepot
Barracks
Engineeringbay
Refinery
Starportreactor
Factorytechlab
Supplydepot
Barracks
Supplydepot
Supplydepot
Supplydepot
Supplydepot
Supplydepot
Commandcenter
Barrackstechlab
Barracks
Barracks
Engineeringbay
Supplydepot
Barracksreactor
Barracksreactor
Supplydepot
Armory
Supplydepot
Supplydepot
Supplydepot
Orbitalcommand
Factory
Refinery
Refinery
Supplydepot
Factoryreactor
Supplydepot
Commandcenter
Barracks
Barrackstechlab
Planetaryfortress
Supplydepot
Supplydepot
While I expected:

Nexus
Pylon
Gateway
Assimilator
Cyberneticscore
Pylon
Assimilator
Nexus
Roboticsfacility
Pylon
Shieldbattery
Gateway
Gateway
Commandcenter
Supplydepot
Barracks
Refinery
Orbitalcommand
Commandcenter
Barracksreactor
Supplydepot
Factory
Refinery
Factorytechlab
Orbitalcommand
Starport
Bunker
Supplydepot
Supplydepot
Starporttechlab
Supplydepot
Barracks
Refinery
Supplydepot
Barracks
Engineeringbay
Refinery
Starportreactor
Factorytechlab
Supplydepot
Barracks
Supplydepot
Supplydepot
Supplydepot
Supplydepot
Supplydepot
Commandcenter
Barrackstechlab
Barracks
Barracks
Engineeringbay
Supplydepot
Barracksreactor
Barracksreactor
Supplydepot
Armory
Supplydepot
Supplydepot
Supplydepot
Orbitalcommand
Factory
Refinery
Refinery
Supplydepot
Factoryreactor
Supplydepot
Commandcenter
Barracks
Barrackstechlab
Planetaryfortress
Supplydepot
Supplydepot

As @RomanPerekhrest pointed out, in the for loop

for URL in URLS:
  response = requests.get(URL) 
only the request is inside the loop body, which means the response is overwritten on every iteration. One way to fix this is to create a list called responses and append each response to it, like so:

responses = []
for URL in URLS:
    response = requests.get(URL)
    responses.append(response)

for response in responses:
    soup = BeautifulSoup(response.content, 'html.parser')

    tb = soup.find('table', class_='table table-striped table-condensed')
    for link in tb.find_all('tr'):
        name = link.find('span')
        if name is not None:
            print(name['title'])
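Buffering all responses works, but it holds every page in memory. An alternative sketch (not part of the original answer) is to parse each response inside the same loop that fetches it. The parsing is factored into a helper, here called `extract_titles` (a name introduced for illustration); the table class and span lookup match the question's code:

```python
from bs4 import BeautifulSoup
import requests

def extract_titles(html):
    """Return the 'title' attribute of the first <span> in each table row."""
    soup = BeautifulSoup(html, 'html.parser')
    tb = soup.find('table', class_='table table-striped table-condensed')
    titles = []
    for row in tb.find_all('tr'):
        span = row.find('span')
        if span is not None:
            titles.append(span['title'])
    return titles

def print_titles(urls):
    # Fetch and parse each page inside the same loop, so nothing is overwritten.
    for url in urls:
        response = requests.get(url)
        for title in extract_titles(response.content):
            print(title)
```

Because the parsing lives in its own function, it can also be tested against a small HTML snippet without any network access.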


Your code overwrites response on every loop iteration, which is why only the last URL's data gets printed.
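The overwrite-versus-append distinction can be shown with a few lines of plain Python (a toy sketch, independent of the scraping code):

```python
items = ['a', 'b', 'c']

# Overwriting: the variable is reassigned each pass, so only the last value survives.
last = None
for item in items:
    last = item
# last == 'c'

# Appending: every value is kept, in order.
collected = []
for item in items:
    collected.append(item)
# collected == ['a', 'b', 'c']
```

The original script follows the first pattern: only `response = requests.get(URL)` is indented under the loop, so by the time the parsing code runs, `response` holds just the final page.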