Scraping AJAX-loaded content with Python?
So I have a function that is called when I click a button; it looks like this:
var min_news_id = "68feb985-1d08-4f5d-8855-cb35ae6c3e93-1";
function loadMoreNews(){
    $("#load-more-btn").hide();
    $("#load-more-gif").show();
    $.post("/en/ajax/more_news", {'category':'', 'news_offset': min_news_id}, function(data){
        data = JSON.parse(data);
        min_news_id = data.min_news_id || min_news_id;
        $(".card-stack").append(data.html);
    })
    .fail(function(){ alert("Error : unable to load more news"); })
    .always(function(){ $("#load-more-btn").show(); $("#load-more-gif").hide(); });
}
jQuery.scrollDepth();
Now, I don't have much experience with JavaScript, but I assume this returns some JSON data from some kind of API at "/en/ajax/more_news".
Is there a way to call this API directly and get the JSON data from a Python script? If so, how?
If not, how can I scrape the content that is being generated?

You need to POST the news id you see in the script; here is an example using the following:
js gives you all the HTML; you just need to access js["html"].
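Pulling min_news_id out of the page's inline script is a one-line regex match. A minimal offline sketch, with the script text hard-coded as a stand-in for what BeautifulSoup would return from the live page:

```python
import re

# Stand-in for the <script> text BeautifulSoup would find on the page:
script_text = 'var min_news_id = "68feb985-1d08-4f5d-8855-cb35ae6c3e93-1";'

# Capture whatever sits between the quotes after the assignment.
patt = re.compile(r'var min_news_id\s+=\s+"(.*?)"')
news_id = patt.search(script_text).group(1)
print(news_id)  # 68feb985-1d08-4f5d-8855-cb35ae6c3e93-1
```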
Here is the script; it will automatically loop through all the pages on inshorts.com:

from bs4 import BeautifulSoup
from newspaper import Article
import requests
import re

# Matches the id embedded in the page's inline script.
patt = re.compile(r'var min_news_id\s+=\s+"(.*?)"')

i = 0
with requests.Session() as s:
    while True:
        if i == 0:
            # First iteration: pull the starting id from the page itself.
            soup = BeautifulSoup(s.get("https://www.inshorts.com/en/read").content, "lxml")
            new_id_scr = soup.find("script", text=re.compile(r"var\s+min_news_id"))
            news_id = patt.search(new_id_scr.text).group(1)
        js = s.post("https://www.inshorts.com/en/ajax/more_news", data={"news_offset": news_id})
        jsonToPython = js.json()
        news_id = jsonToPython["min_news_id"]  # cursor for the next request
        data = jsonToPython["html"]
        i += 1
        soup = BeautifulSoup(data, "lxml")
        for tag in soup.find_all("div", {"class": "news-card"}):
            main_text = tag.find("div", {"itemprop": "articleBody"})
            summ_text = main_text.text.replace("\n", " ")
            result = tag.find("a", {"class": "source"})
            art_url = result.get("href")
            if "www.youtube.com" in art_url:
                print("Nothing")
            else:
                art_url = art_url[:-1]  # drop the trailing slash
                article = Article(art_url)
                article.download()
                if article.is_downloaded:
                    article.parse()
                    article_text = article.text.replace("\n", " ")
                    print(article_text + "\n")
                    print(summ_text + "\n")
It gives the summary from inshorts.com and the full news from the respective news channel.

Use urllib2 to retrieve the data from the API, and json.loads to parse the JSON into a Python dictionary.
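The json.loads half of that suggestion, shown on a canned response shaped like the endpoint's (the string below is a made-up stand-in, not a real server reply):

```python
import json

# Made-up stand-in for the raw response text from /en/ajax/more_news:
raw = '{"min_news_id": "vxy8k83f-1", "html": "<div class=\\"news-card\\">...</div>"}'

data = json.loads(raw)  # JSON text -> Python dict
print(data["min_news_id"])  # vxy8k83f-1
print(data["html"])         # <div class="news-card">...</div>
```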
@Barmar what exactly do I need to send? Are you suggesting something like this: r = requests.post('http://inshorts.com/en/ajax/more_news', json={'category':'', 'news_offset': min_news_id})

Yes, pretty much that. Then use json.loads(r) to parse the JSON response, and r['html'] will contain the HTML from the response.

@Barmar I tried that, but it just redirects me to the home page: import json; import requests; min_news_id = "68feb985-1d08-4f5d-8855-cb35ae6c3e93-1"; r = requests.post('http://inshorts.com/en/ajax/more_news', json={'category':'', 'news_offset': min_news_id}); print(r.url)
Actually, it should be data=, not json=.

With that it gives an empty result, O/P: {'html': '\n\n'}
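The data= vs json= distinction matters because jQuery's $.post sends a form-encoded body, which is what the server expects; requests only reproduces that with data=, while json= sends a JSON body instead. You can inspect both bodies without hitting the network by preparing the requests (the offset value here is just a placeholder):

```python
import requests

payload = {"category": "", "news_offset": "vxy8k83f-1"}
url = "https://www.inshorts.com/en/ajax/more_news"

# data= -> form-encoded body, like jQuery's $.post
form = requests.Request("POST", url, data=payload).prepare()
print(form.headers["Content-Type"])  # application/x-www-form-urlencoded
print(form.body)                     # category=&news_offset=vxy8k83f-1

# json= -> JSON body, which this endpoint does not parse
as_json = requests.Request("POST", url, json=payload).prepare()
print(as_json.headers["Content-Type"])  # application/json
print(as_json.body)
```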
You need to change the code like this: news_id = news_id.split(" "), then js = s.post("https://www.inshorts.com/en/ajax/more_news", data={"news_offset": news_id[1]}). In your code news_id shows var min_news_id = "vxy8k83f-1", so I just extracted the news id value from it. Now it is working properly.

@SalmanMohammad just use patt.search(new_id_scr.text).group(1)

Yes, patt.search(new_id_scr.text).group(1). It gives the plain news id, like vxy8k83f-1. For how long will it keep getting data from "load more"? As in, will it only get the results from the one page that appears after clicking the "load more" button, or will it iteratively fetch results from the "load more" option?
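To answer that last question: the script's infinite loop keeps POSTing the min_news_id cursor it got back from the previous response, so it pages through "load more" indefinitely. If you only want a fixed number of pages, bound the loop. A sketch with a stubbed fetch function (paginate and fake_fetch are made up for illustration; fake_fetch stands in for s.post(...).json() so this runs offline):

```python
def paginate(fetch, first_id, max_pages=3):
    """Follow the min_news_id cursor for at most max_pages requests."""
    news_id, pages = first_id, []
    for _ in range(max_pages):
        resp = fetch(news_id)          # mirrors s.post(..., data={"news_offset": news_id}).json()
        pages.append(resp["html"])
        news_id = resp["min_news_id"]  # cursor for the next "load more" page
    return pages

def fake_fetch(offset):
    # Offline stand-in for the /en/ajax/more_news endpoint.
    n = int(offset)
    return {"min_news_id": str(n + 1), "html": "<div>page %d</div>" % n}

print(paginate(fake_fetch, "0"))
# ['<div>page 0</div>', '<div>page 1</div>', '<div>page 2</div>']
```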