
Python crawler stops partway through scraping


I'm trying to fetch a product list with BeautifulSoup. The site lists 80 products, but the script runs only up to the 32nd product and then stops. How can I scrape all of the products?

import requests
from bs4 import BeautifulSoup

from pymongo import MongoClient
client = MongoClient('localhost', 27017)
db = client.dbsparta

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'}
data = requests.get('https://www.stories.com/kr_krw/top-sellers/top-sellers.html', headers=headers)

soup = BeautifulSoup(data.text, 'html.parser')
#image = #category-list > div:nth-child(1) > a > div.product-image > div > img.a-image.default-image -> src attr.
#name = #category-list > div:nth-child(1) > a > div.description > div.product-title > label -> text
#price = #category-list > div:nth-child(1) > a > div.description > div.m-product-price > label -> text

products = soup.select('#category-list > div.o-product')

for product in products:
    image = product.select_one('div.product-image > div > img.a-image.default-image')['src']
    name = product.select_one('div.description > div.product-title > label').text
    price = product.select_one('div.description > div.m-product-price > label').text
    print(image,name,price)

The data is loaded dynamically via JavaScript, but you can simulate those requests with the requests module.

For example:

import requests
from bs4 import BeautifulSoup

url = 'https://www.stories.com/kr_krw/top-sellers/top-sellers.html'
ajax_url = 'https://www.stories.com/kr_krw/dpa/aosCtgrItemAddList.html'

soup = BeautifulSoup(requests.get(url).content, 'html.parser')

dispLcatCd = soup.select_one('#dispLcatCd')['value']
dispMcatCd = soup.select_one('#dispMcatCd')['value']

data = {
    'sect_id': dispMcatCd,
    'dispLcatCd': dispLcatCd,
    'dispMcatCd': dispMcatCd,
    'pageNum': 1,
    'viewCnt': 32,
    }

while True:
    print('Processing page {}...'.format(data['pageNum']))
    soup = BeautifulSoup(requests.post(ajax_url, data=data).content, 'html.parser')

    if not soup.select('.o-product'):
        break

    for title, img, price in zip(soup.select('.product-title'),
                                 soup.select('.default-image'),
                                 soup.select('.price')):
        print('{:<50} {:<10} {}'.format(title.get_text(strip=True), price.get_text(strip=True), img['src']))

    data['pageNum'] += 1

The HTML fetched with requests represents the initial state of the page, which contains only the first 32 listed items. As you scroll down, the HTML is updated via JavaScript. You can use Selenium, or requests with a session. This question may help.
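Both answers reduce to the same pattern: keep requesting pages until an empty page comes back. A minimal sketch of that loop, with a stubbed fetch function standing in for the real `requests.post` call so the logic can be run without network access:

```python
def scrape_all(fetch_page):
    """Collect items page by page until a page comes back empty."""
    items = []
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:          # empty page -> no more products
            break
        items.extend(batch)
        page += 1
    return items

# Fake backend: 80 products served 32 at a time, mimicking the
# Stories page's lazy loading (names here are illustrative only).
PRODUCTS = ['product-{}'.format(i) for i in range(1, 81)]

def fake_fetch(page, size=32):
    start = (page - 1) * size
    return PRODUCTS[start:start + size]

all_items = scrape_all(fake_fetch)
print(len(all_items))  # 80
```

In the real script, `fetch_page` would POST to the AJAX endpoint with the current `pageNum` and return the parsed `.o-product` elements; the termination condition is the same empty-result check.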