Python web scraper prints 'None', how can I fix it?

I'm trying to do a project for school where I take a given stock ticker name and find the number of people "following" it on SeekingAlpha, but when I try to print it I keep getting a 'None' value. How can I fix this?

This is my first attempt at web scraping, but I did some research on BeautifulSoup and figured it was the best option. I'm also using an Anaconda environment. In my code I try to find the full company name for a ticker, as well as the number of people following it on SeekingAlpha. For some reason I can retrieve the company name for the ticker, but when I try to print the number of followers it shows 'None'. I've tried every variation I can think of to find the followers, and none of them yields a result.

Here's my code:
import requests
import urllib.request as urllib2
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
from lxml import etree

listOfTickers = ["ATVI", "GOOG", "AAPL", "AMZN", "BRK.B", "BRK.A", "NFLX", "SNAP"]

for i in range(len(listOfTickers)):
    ticker = listOfTickers[i]
    quotePage = Request("https://seekingalpha.com/symbol/" + ticker, headers={'User-Agent': 'Mozilla/5.0'})
    page = urlopen(quotePage).read()
    soup = BeautifulSoup(page, "lxml")

    company_name = soup.find("div", {"class": "ticker-title"})
    followers_number = soup.find('div', {"class": "followers-number"})

    company = company_name.text.strip()
    #followers = followers_number.text.strip()

    print(followers_number)
    print(company)
Try the following to get your desired output. The content you want to grab is generated dynamically, so neither the requests module nor urllib will be of any help. You either go for a browser simulator or pull this trick. There is also no need to use BeautifulSoup; however, I kept it only because you were using it in the first place.

from requests_html import HTMLSession
from bs4 import BeautifulSoup

tickers = ["ATVI", "GOOG", "AAPL", "AMZN"]

with HTMLSession() as session:
    for i in range(len(tickers)):
        quotePage = session.get("https://seekingalpha.com/symbol/{}".format(tickers[i]))
        # Render the page so the JavaScript-generated content is available.
        quotePage.html.render(5)
        soup = BeautifulSoup(quotePage.html.html, "lxml")
        followers_number = soup.find(class_="followers-number")
        print(followers_number)
You may get output like the following:

<div class="followers-number">(<span>83,532</span> followers)</div>
<div class="followers-number" title="1,032,510">(<span>1.03M</span> followers)</div>
<div class="followers-number" title="2,065,199">(<span>2.07M</span> followers)</div>
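If you only need the numeric count, a small hypothetical helper (not part of the answer above) could post-process one matched div; it assumes the sample markup shown, where abbreviated counts such as 1.03M carry the exact figure in the title attribute:

# Hypothetical helper: turn one matched followers-number div into an int.
def parse_followers(div):
    if div is None:
        return None
    exact = div.get('title')            # e.g. "1,032,510" when the text is abbreviated
    if exact:
        return int(exact.replace(',', ''))
    span = div.find('span')             # e.g. <span>83,532</span>
    return int(span.text.replace(',', '')) if span else None

Calling parse_followers(followers_number) inside the loop above would then print plain integers instead of raw tags.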

Just use the same endpoint the page itself uses to retrieve the subscription info, including the # of followers:

import requests

tickers = [ "atvi", "goog", "aapl", "amzn", "brk.b", "brk.a", "nflx", "snap"]

with requests.Session() as s:
    for ticker in tickers:
        r = s.get('https://seekingalpha.com/memcached2/get_subscribe_data/{}?id={}'.format(ticker, ticker)).json()
        print(ticker, r['portfolio_count'])
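If the endpoint ever answers with something other than JSON (see the JSONDecodeError reported in the comment at the end of this page), a defensive variant of the same idea might look like this; the User-Agent value is only an illustrative assumption, not something this answer prescribes:

import requests

tickers = ["atvi", "goog", "aapl", "amzn"]

with requests.Session() as s:
    # Some sites serve an HTML error page to clients without a browser-like UA.
    s.headers.update({'User-Agent': 'Mozilla/5.0'})
    for ticker in tickers:
        r = s.get('https://seekingalpha.com/memcached2/get_subscribe_data/{}?id={}'.format(ticker, ticker))
        try:
            print(ticker, r.json()['portfolio_count'])
        except ValueError:
            # Body was not JSON (e.g. an HTML block page); report and move on.
            print(ticker, 'no JSON payload, status', r.status_code)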


Since the followers count is loaded via ajax, BeautifulSoup cannot access its value. With a headless browser such as Selenium/PhantomJS you can get the full HTML including the JavaScript-generated content. Another approach is to make an extra request to the endpoint from which JavaScript renders that particular part of the page. Here is a working solution:

import requests
import urllib.request as urllib2
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
from lxml import etree

listOfTickers = ["ATVI", "GOOG", "AAPL", "AMZN", "BRK.B", "BRK.A", "NFLX", "SNAP"]

def getFollowersCount(ticker):

    # Build the endpoint URL.
    url = 'https://seekingalpha.com/memcached2/get_subscribe_data/{}?id={}'.format(ticker.lower(), ticker.lower())

    # Use the requests module here, not urllib.request.
    counter = requests.get(url)

    # If the response is valid JSON, return portfolio_count; otherwise return 0.
    try:
        return counter.json()['portfolio_count']
    except (ValueError, KeyError):
        return 0

for ticker in listOfTickers:

    quotePage = Request("https://seekingalpha.com/symbol/" + ticker, headers = {'User-Agent': 'Mozilla/5.0'})

    page = urlopen(quotePage).read()

    soup = BeautifulSoup(page, "lxml")

    company_name = soup.find("div", {"class" :"ticker-title"})
    #followers_number = soup.find('div', {"class":"followers-number"})
    followers_number = getFollowersCount(ticker)

    company= company_name.text.strip()
    #followers = followers_number.text.strip()

    print(followers_number)
    print(company)
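For completeness, the headless-browser route mentioned above could look roughly like the sketch below; it assumes Selenium with a local Chrome/chromedriver install, which the original answer does not ship:

# Sketch of the headless-browser alternative (assumes Selenium + chromedriver).
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

options = Options()
options.add_argument('--headless')

driver = webdriver.Chrome(options=options)
try:
    driver.get('https://seekingalpha.com/symbol/AAPL')
    # page_source now contains the JavaScript-rendered markup.
    soup = BeautifulSoup(driver.page_source, 'lxml')
    followers = soup.find('div', {'class': 'followers-number'})
    print(followers.text.strip() if followers else 'followers div not found')
finally:
    driver.quit()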

The best way is to monitor the network. Press Shift+Ctrl+I (on Windows) and watch how the page sends and receives data. :) You will see that the data comes from the get_subscribe_data endpoint used below, so this will do the job for you:

from collections import defaultdict
from requests import Session

tickers = ['atvi', 'goog','aapl', 'amzn']
storage = defaultdict(str) # storing data

URL = 'https://seekingalpha.com/memcached2/get_subscribe_data'

# Start a session. Here you can add headers and/or cookies.
curl = Session()

for tick in tickers:
    param = {'id':tick}
    response = curl.get(f'{URL}/{tick}', params=param).json()
    storage[tick] = response['portfolio_count']

# show the results
print(storage)
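If the requests succeed, the final print shows all counts in one mapping, roughly like this (the ATVI figure is taken from the sample output in the first answer; the rest will vary): defaultdict(<class 'str'>, {'atvi': 83532, 'goog': ..., 'aapl': ..., 'amzn': ...})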

I tried using this, but for some reason I get the following error: File "/anaconda3/lib/python3.7/json/decoder.py", line 355, in raw_decode ... json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)