
Python Selenium scraper


I am using the Python script below to scrape information from Amazon.

At some point it stopped returning page results. The script still starts up and cycles through the keywords/pages, but the only output I get is the header row:

Keyword Rank Title ASIN Score Reviews Prime Date

I suspect the problem is in the line below, because the tag it targets no longer exists on the page, so the results variable never gets any values:

results = soup.findAll('div', attrs={'class': 's-item-container'})
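A quick way to confirm that is to count the matches and dump the page for manual inspection. This is only a sketch, assuming soup has already been built from driver.page_source as in the full script below; the debug_page.html filename is just an example:

results = soup.findAll('div', attrs={'class': 's-item-container'})
print(len(results))  # 0 means the 's-item-container' markup is no longer on the page

# Save the raw HTML so you can search it by hand for the class name
with open('debug_page.html', 'w', encoding='utf-8') as f:
    f.write(soup.prettify())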

Here is the full code:

from bs4 import BeautifulSoup
import time
from selenium import webdriver
import re
import datetime
from collections import deque
import logging
import csv


class AmazonScaper(object):

    def __init__(self,keywords, output_file='example.csv',sleep=2):

        self.browser = webdriver.Chrome(executable_path='/Users/willcecil/Dropbox/Python/chromedriver')  #Add path to your Chromedriver
        self.keyword_queue = deque(keywords)  #Add the start URL to our list of URLs to crawl
        self.output_file = output_file
        self.sleep = sleep
        self.results = []


    def get_page(self, keyword):
        try:
            self.browser.get('https://www.amazon.co.uk/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords={a}'.format(a=keyword))
            return self.browser.page_source
        except Exception as e:
            logging.exception(e)
            return

    def get_soup(self, html):
        if html is not None:
            soup = BeautifulSoup(html, 'lxml')
            return soup
        else:
            return

    def get_data(self,soup,keyword):

        try:
            results = soup.findAll('div', attrs={'class': 's-item-container'})
            for a, b in enumerate(results):
                soup = b
                header = soup.find('h2')
                result = a + 1
                title = header.text
                try:
                    link = soup.find('a', attrs={'class': 'a-link-normal a-text-normal'})
                    url = link['href']
                    url = re.sub(r'/ref=.*', '', str(url))
                except:
                    url = "None"

                # Extract the ASIN from the URL - ASIN is the breaking point to filter out if the position is sponsored

                ASIN = re.sub(r'.*amazon.co.uk.*/dp/', '', str(url))

                # Extract Score Data using ASIN number to find the span class

                score = soup.find('span', attrs={'name': ASIN})
                try:
                    score = score.text
                    score = score.strip('\n')
                    score = re.sub(r' .*', '', str(score))
                except:
                    score = "None"

                # Extract Number of Reviews in the same way
                reviews = soup.find('a', href=re.compile(r'.*#customerReviews'))
                try:
                    reviews = reviews.text
                except:
                    reviews = "None"

                # And again for Prime

                PRIME = soup.find('i', attrs={'aria-label': 'Prime'})
                try:
                    PRIME = PRIME.text
                except:
                    PRIME = "None"

                data = {keyword:[keyword,str(result),title,ASIN,score,reviews,PRIME,datetime.datetime.today().strftime("%B %d, %Y")]}
                self.results.append(data)

        except Exception as e:
            print(e)

        return 1

    def csv_output(self):
        keys = ['Keyword','Rank','Title','ASIN','Score','Reviews','Prime','Date']
        print(self.results)
        with open(self.output_file, 'a', encoding='utf-8') as outputfile:
            dict_writer = csv.DictWriter(outputfile, keys)
            dict_writer.writeheader()
            for item in self.results:
                for key,value in item.items():
                    print(".".join(value))
                    outputfile.write(",".join('"' + item + '"' for item in value)+"\n") # Add "" quote character so the CSV accepts commas

    def run_crawler(self):
        while len(self.keyword_queue): #If we have keywords to check
            keyword = self.keyword_queue.popleft() #We grab a keyword from the left of the list
            html = self.get_page(keyword)
            soup = self.get_soup(html)
            time.sleep(self.sleep) # Wait for the specified time
            if soup is not None:  #If we have soup - parse and save data
                self.get_data(soup,keyword)
        self.browser.quit()
        self.csv_output() # Save the object data to csv


if __name__ == "__main__":
    keywords = [str.replace(line.rstrip('\n'), ' ', '+') for line in
                open('keywords.txt')]  # Use our file of keywords & replace spaces with +
    ranker = AmazonScaper(keywords)  # Create the object
    ranker.run_crawler()  # Run the rank checker
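The script expects a keywords.txt file alongside it, one search phrase per line; spaces are turned into + before the phrase is substituted into the search URL. A minimal example file (contents purely illustrative) could be:

blue skateboard
red skateboard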
The output should look like this (I have trimmed the titles for clarity):

Keyword Rank Title ASIN Score Reviews Prime Date

Blue+Skateboard 3  Osprey Complete Beginn  B00IL1JMF4 3.7 40  Prime February 21, 2019
Blue+Skateboard 4  ENKEEO Complete Mini C  B078J9Y1DG 4.5 42  Prime February 21, 2019
Blue+Skateboard 5  Skatro - Mini Cruiser   B00K93PIXM 4.8 223 Prime February 21, 2019
Blue+Skateboard 7  Vinsani Retro Cruiser   B00CSV72AK 4.4 8   Prime February 21, 2019
Blue+Skateboard 8  Ridge Retro Cruiser Bo  B00CA33ISQ 4.1 207 Prime February 21, 2019
Blue+Skateboard 9  Xootz Kids Complete Be  B01B2YNSJM 3.6 32  Prime February 21, 2019
Blue+Skateboard 10 Enuff Pyro II Skateboa  B00MGRGX2Y 4.3 68  Prime February 21, 2019


The following shows some of the changes you could make. In some places I have switched to using CSS selectors.

The main result set to loop over is retrieved with soup.select('.s-result-list [data-asin]'). This selects elements that carry a data-asin attribute and sit inside an element with the class name .s-result-list, and it matches the (currently) 60 items on the page.

I swapped the Prime selection over to an attribute = value selector.

Headers are now h5, i.e. header = soup.select_one('h5').
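To illustrate how those selectors behave, here is a minimal, self-contained sketch run against a made-up HTML fragment (a simplified stand-in, not Amazon's real markup):

from bs4 import BeautifulSoup

# Simplified, invented markup purely to demonstrate the selectors used below
html = """
<ul class="s-result-list">
  <li data-asin="B00IL1JMF4">
    <h5><a href="/Osprey-Complete/dp/B00IL1JMF4/ref=sr_1_3">Osprey Complete Beginner</a></h5>
    <span class="a-icon-alt">3.7 out of 5 stars</span>
    <a href="#customerReviews">40</a>
    <i aria-label="Amazon Prime"></i>
  </li>
</ul>
"""
soup = BeautifulSoup(html, 'lxml')

# Descendant + attribute selector: elements carrying data-asin inside .s-result-list
for item in soup.select('.s-result-list [data-asin]'):
    print(item.select_one('h5').text.strip())                          # title
    print(item.select_one('h5 > a')['href'])                           # product URL
    print(item.select_one('.a-icon-alt').text)                         # score text
    print(item.select_one("[href*='#customerReviews']").text)          # review count
    print(item.select_one('[aria-label="Amazon Prime"]') is not None)  # Prime flag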


Example code:

import datetime
from bs4 import BeautifulSoup
import time
from selenium import webdriver
import re

keyword = 'blue+skateboard'
driver = webdriver.Chrome()

url = 'https://www.amazon.co.uk/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords={}'

driver.get(url.format(keyword))
soup = BeautifulSoup(driver.page_source, 'lxml')
results = soup.select('.s-result-list [data-asin]')

for a, b in enumerate(results):
    soup = b
    header = soup.select_one('h5')
    result = a + 1
    title = header.text.strip()

    try:
        link = soup.select_one('h5 > a')
        url = link['href']
        url = re.sub(r'/ref=.*', '', str(url))
    except:
        url = "None"

    if url !='/gp/slredirect/picassoRedirect.html':
        ASIN = re.sub(r'.*/dp/', '', str(url))
        #print(ASIN)

        try:
            score = soup.select_one('.a-icon-alt')
            score = score.text
            score = score.strip('\n')
            score = re.sub(r' .*', '', str(score))
        except:
            score = "None"

        try:
            reviews = soup.select_one("href*='#customerReviews']")
            reviews = reviews.text.strip()
        except:
            reviews = "None"

        try:
            PRIME = soup.select_one('[aria-label="Amazon Prime"]')
            PRIME = PRIME['aria-label']
        except:
            PRIME = "None"
        data = {keyword:[keyword,str(result),title,ASIN,score,reviews,PRIME,datetime.datetime.today().strftime("%B %d, %Y")]}
        print(data)

Example output:
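Each print(data) call emits one dict keyed by the keyword, roughly of this shape (values copied from the sample rows above, purely for illustration):

{'blue+skateboard': ['blue+skateboard', '3', 'Osprey Complete Beginner',
                     'B00IL1JMF4', '3.7', '40', 'Amazon Prime',
                     'February 21, 2019']}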


The first thing to check is the raw page being returned. Try inserting import pdb; pdb.set_trace() just before soup = BeautifulSoup(html, 'lxml') and inspect the html by hand to see whether the data is actually there. It is important to run this check on the same machine the crawl runs on.
That is a good approach, I will definitely use it.
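Applied to the class in the question, that debugging step is a one-line addition to get_soup (sketch only; remove the pdb line again once you have checked the page):

    def get_soup(self, html):
        if html is not None:
            # Pause here and inspect the raw page, e.g. (Pdb) 's-item-container' in html
            import pdb; pdb.set_trace()
            soup = BeautifulSoup(html, 'lxml')
            return soup
        else:
            return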