Python: can't understand why "list index out of range"

I'm new to Python and just trying to make a web scraper. I can't understand why my index says the list is out of range when the variable is set to 0 at the first index, before the list even starts being built:
import requests
from bs4 import BeautifulSoup

def kijiji_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://www.kijiji.ca/b-cars-trucks/alberta/convertible__coupe__hatchback__other+body+type__sedan__wagon/page-" + str(page) + "/c174l9003a138?price=__5000"
        sourcecode = requests.get(url)
        plain_text = sourcecode.text
        soup = BeautifulSoup(plain_text)
        a = 0
        lista = []
        for link in soup.find_all("a", {"class": "title"}):
            if a == 0:
                href = "|http://www.kijiji.ca" + link.get("href")
                lista.append(href)
            elif a != 0:
                href = "http://www.kijiji.ca" + link.get("href")
                lista.append(href)
            a += 1
        i = 0
        listb = []
        for link in soup.find_all("a", {"class": "title"}):
            title = link.string
            listb[i] = listb[i] + "|" + title.strip()
            i += 1
        x = 0
        listc = []
        for other in soup.find_all("td", {"class": "price"}):
            price = other.string
            listc[x] = listc[x] + "|" + price.strip()
            x += 1
        page += 1
    print(lista)
    print(listb)
    print(listc)

kijiji_spider(1)
Your listb is empty, and then you try to access item 0 of it. Since it is empty, there is nothing to access, so you get an IndexError exception:
i = 0
listb = []
for link in soup.find_all("a", {"class": "title"}):
    title = link.string
    listb[i] = listb[i] + "|" + title.strip()
    i += 1
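To see the difference in isolation, here is a minimal sketch with made-up stand-in titles (no scraping involved): assigning to an index of an empty list raises IndexError, while append() grows the list safely.

```python
titles = ["2009 Honda Civic", "2012 Ford Focus"]  # hypothetical scraped titles

listb = []
try:
    listb[0] = titles[0]  # fails: an empty list has no index 0
except IndexError as e:
    print("IndexError:", e)

for title in titles:
    listb.append(title.strip())  # append() creates the slot as it goes
print(listb)
```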
I think what you are trying to do here is append to the values of the first list you created (lista), so you probably want listb.append(lista[i] + "|" + title.strip()). In Python you don't need a counter for a list; just append to it and it grows automatically.

I don't know why you are adding | in front of the URL, but your entire code can be simplified to the following:
def kijiji_spider(max_pages):
    page = 1
    collected_urls = []  # store all the URLs from each "run"
    while page <= max_pages:
        url = "http://www.kijiji.ca/b-cars-trucks/alberta/convertible__coupe__hatchback__other+body+type__sedan__wagon/page-" + str(page) + "/c174l9003a138?price=__5000"
        sourcecode = requests.get(url)
        plain_text = sourcecode.text
        soup = BeautifulSoup(plain_text)
        links = [i.get('href') for i in soup.find_all('a', {'class': 'title'})]
        titles = [i.string.strip() for i in soup.find_all('a', {'class': 'title'})]
        prices = [i.string.strip() for i in soup.find_all("td", {"class": "price"})]
        results = zip(links, titles, prices)
        collected_urls.append(results)
        page += 1
    return collected_urls  # without a return, data below would be None

data = kijiji_spider(5)
for results in data:
    for link, title, price in results:
        print('http://www.kijiji.ca{} | {} | {}'.format(link, title, price))
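One thing to watch with this pattern: in Python 3, zip() returns a one-shot iterator, so each stored result can only be looped over once; wrapping it in list() makes it reusable. A minimal sketch with made-up stand-in data:

```python
# Hypothetical stand-in data for one scraped page.
links = ["/v-cars/1", "/v-cars/2"]
titles = ["Civic", "Focus"]
prices = ["$4,500", "$3,200"]

# list() materializes the zip so it can be iterated more than once.
results = list(zip(links, titles, prices))
for link, title, price in results:
    print('http://www.kijiji.ca{} | {} | {}'.format(link, title, price))
```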