
Python stuck on web scraping code


I have the following code, with which I want to visit a web page, grab all of the relevant comics from the site, and store them on my computer. The first image downloads fine, but something seems to go wrong when looping back through the previous pages of the site. If someone could look over the code and help, it would be much appreciated. The error I get is:

Traceback (most recent call last):
  File "C:\Users\528000\Desktop\kids print\Comic-gather.py", line 41, in <module>
    prevLink = soup.select('a[class="prevLink"]')[0]
IndexError: list index out of range


import requests, os, bs4

url = 'http://darklegacycomics.com'
os.makedirs('darklegacy', exist_ok=True)
while not url.endswith('#'):
    # Download the page.
    print('Downloading page %s...' % url)
    res = requests.get(url)
    res.raise_for_status()

    soup = bs4.BeautifulSoup(res.text)
    comicElem = soup.select('.comic img')
    if comicElem == []:
        print('Could not find comic image.')
    else:
        try:
            comicUrl = 'http://darklegacycomics.com' + comicElem[0].get('src')
            # Download the image.
            print('Downloading image %s...' % (comicUrl))
            res = requests.get(comicUrl)
            res.raise_for_status()
        except requests.exceptions.MissingSchema:
            # Skip this comic.
            prevLink = soup.select('.prevlink')[0]
            url = 'http://darklegacycomics.com' + prevLink.get('href')
            continue

        # Save the image to ./darklegacy.
        imageFile = open(os.path.join('darklegacy', os.path.basename(comicUrl)), 'wb')
        for chunk in res.iter_content(100000):
            imageFile.write(chunk)
        imageFile.close()

    # Get the Prev button's url.
    prevLink = soup.select('a[class="prevLink"]')[0]
    url = 'http://darklegacycomics.com' + prevLink.get('href')
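
For reference, the IndexError in the traceback means that soup.select('a[class="prevLink"]') returned an empty list, so indexing it with [0] fails; this typically happens once the loop reaches the oldest comic, which has no Prev link. A minimal guard, sketched under the assumption that reaching that page should simply end the loop (prevLinks is an illustrative name, not from the original code):

# Stop cleanly when the page has no Prev link instead of crashing on [0].
prevLinks = soup.select('a[class="prevLink"]')
if not prevLinks:
    print('No previous page; done.')
    break
url = 'http://darklegacycomics.com' + prevLinks[0].get('href')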

This will grab all of your images:

import requests, os, bs4
from urllib.parse import urljoin

url = 'http://darklegacycomics.com'

soup = bs4.BeautifulSoup(requests.get(url).content, 'html.parser')

# Get all img links whose src value starts with /image.
links = soup.select('.comic img[src^="/image"]')

for img in links:
    # Extract the link.
    src = img['src']
    # Use the image name as the file name; the response is bytes, so open in binary mode.
    with open(os.path.basename(src), 'wb') as f:
        # Join the base and image url and write the content to disk.
        f.write(requests.get(urljoin(url, src)).content)
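
The attribute selector img[src^="/image"] keeps only the comic images by matching on the src prefix, and urljoin resolves those relative paths against the site root. If the strips are spread across several pages rather than all listed on the front page, the same approach can be combined with the prevLink walk from the question. A minimal sketch, assuming each page carries at most one a.prevLink anchor and that the loop should stop when none is found:

import os
import requests, bs4
from urllib.parse import urljoin

url = 'http://darklegacycomics.com'
os.makedirs('darklegacy', exist_ok=True)

while url:
    soup = bs4.BeautifulSoup(requests.get(url).content, 'html.parser')
    # Save every comic image on the current page.
    for img in soup.select('.comic img[src^="/image"]'):
        src = img['src']
        with open(os.path.join('darklegacy', os.path.basename(src)), 'wb') as f:
            f.write(requests.get(urljoin(url, src)).content)
    # Follow the Prev link; stop when select() finds none (the oldest comic).
    prev = soup.select('a.prevLink')
    url = urljoin(url, prev[0].get('href')) if prev else None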