Python: Beautiful Soup only returns 100 rows from Yahoo! Finance

Tags: python, web-scraping, beautifulsoup, yahoo-finance

I've just started with web scraping and thought I was making good progress parsing simple Yahoo Finance data with BeautifulSoup using the script below. The script works fine, but it only returns 100 rows even though I requested a full year's worth. I found some SO posts suggesting adding a data parameter to the request, setting AJAX and mobile to no, but that didn't work either. I also tried passing different header information, with no success. Does Beautiful Soup have an argument I'm missing that would return the full list? When I print the full HTML content from the request, the complete results are there, so I'm stumped.

from datetime import datetime, timedelta
import time
import requests
from bs4 import BeautifulSoup

def format_date_int_as_str(date_datetime):
    date_timetuple = date_datetime.timetuple()
    date_mktime = time.mktime(date_timetuple)
    date_int = int(date_mktime)
    date_str = str(date_int)
    return date_str

def subdomain(symbol, start, end, filter='history'):
    subdoma = "/quote/{0}/history?period1={1}&period2={2}&interval=1d&filter={3}&frequency=1d"
    subdomain = subdoma.format(symbol, start, end, filter)
    return subdomain

def header_function(subdomain):
    hdrs = {"authority": "finance.yahoo.com",
            "method": "GET",
            "path": subdomain,
            "scheme": "https",
            "accept": "text/html",
            "accept-encoding": "gzip, deflate, br",
            "accept-language": "en-US,en;q=0.9",
            "cache-control": "no-cache",
            "dnt": "1",
            "pragma": "no-cache",
            "sec-fetch-mode": "navigate",
            "sec-fetch-site": "same-origin",
            "sec-fetch-user": "?1",
            "upgrade-insecure-requests": "1",
            "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64)"}
    return hdrs

if __name__ == '__main__':
    symbol = 'AAPL'

    dt_start = datetime.today() - timedelta(days=365)
    dt_end = datetime.today() - timedelta(days=100)

    start = format_date_int_as_str(dt_start)
    end = format_date_int_as_str(dt_end)

    sub = subdomain(symbol, start, end)
    header = header_function(sub)
    url = "https://finance.yahoo.com" + sub

    print("\nREQUESTING: " + url + "\n" + str(dt_start) + "  to  " + str(dt_end))

    # Build HTML request and content for BS
    r = requests.get(url, headers=header)
    c = r.content
    # Note: "$seperatorColor" is Yahoo's own (misspelled) class token, not a typo here
    classNameTr = "BdT Bdc($seperatorColor) Ta(end) Fz(s) Whs(nw)"
    classNameDt = "Py(10px) Ta(start) Pend(10px)"
    classNameTd = "Py(10px) Pstart(10px)"
    className = " Pb(10px) Ovx(a) W(100%)"

    soup = BeautifulSoup(c, "html.parser")
    rows = soup.find_all("tr", {"class": classNameTr})
    print("LIST LENGTH: " + str(len(rows)))
    # https://finance.yahoo.com/quote/AAPL/history?period1=1572244075&period2=1615447675&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true
    # https://finance.yahoo.com/quote/AAPL/history?period1=1572244075&period2=1615447675&interval=1d&filter=history&frequency=1d
    # Why aren't we getting the full lookback list? Check whether it's in the HTML but just not the soup. Does soup have a 100-item limit?
    for stockDay in rows:
        dt = stockDay.find("td", {"class": classNameDt})
        td = stockDay.find_all("td", {"class": classNameTd})
        if len(td) == 6:
            print(dt.text + " --|-- OPEN:" + td[0].text +
                  " --|-- HIGH:" + td[1].text +
                  " --|-- LOW:" + td[2].text +
                  " --|-- CLOSE:" + td[3].text +
                  " --|-- ADJCLOSE:" + td[4].text +
                  " --|-- VOLUME:" + td[5].text + " --|-- ")
        else:
            print(dt.text + " --|-- Skipping -- this is the dividend date!")

The reason you are getting 100 results is that the web page dynamically loads the remaining rows as you scroll down. Unfortunately, what you are asking for is not possible with Beautiful Soup alone, since it only parses the HTML that the initial request returns. I'd suggest looking into Selenium.

That said, if you would still prefer to use bs here, I believe you could filter the period down into time spans short enough to scrape, and scrape the data without scrolling. That could be an option.
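A minimal sketch of that idea: split the overall date range into windows short enough to stay under the observed 100-row limit, then request each window separately with the existing `subdomain()` / `header_function()` helpers and concatenate the parsed rows. The `split_range` helper and the 90-day window size are assumptions for illustration, not part of the original script.

```python
from datetime import datetime, timedelta

def split_range(dt_start, dt_end, max_days=90):
    """Split [dt_start, dt_end] into consecutive windows of at most max_days.

    90 days is an assumed safe size: ~60-65 trading days per window,
    well under the 100 rows the page renders before lazy-loading kicks in.
    """
    windows = []
    cursor = dt_start
    while cursor < dt_end:
        window_end = min(cursor + timedelta(days=max_days), dt_end)
        windows.append((cursor, window_end))
        cursor = window_end
    return windows

# One full year becomes a handful of separately scrapable requests;
# each (start, end) pair would be run through format_date_int_as_str()
# and subdomain() from the question's script, one GET per window.
windows = split_range(datetime(2020, 1, 1), datetime(2021, 1, 1))
print(len(windows))  # 5
```

Each window's rows can then be appended to one combined list, giving the full year without any scrolling.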

More information can be found here: