Extracting a title from an HTML <script> tag with BeautifulSoup in Python 3


I have an HTML page and I want to extract the title that sits inside the object _BFD.BFD_INFO in a <script> tag. I can already reach all the data in that script, but it also contains a lot of other data such as links, and I don't know how to get at just the title I want. Please help. The code I have written so far is:

import bs4 as bs
import requests

sauce = requests.get('https://www.meishij.net/zuofa/huaguluobodunpaigutang.html')
print(sauce.status_code)
soup = bs.BeautifulSoup(sauce.content, 'html.parser')
# the tenth <script type="text/javascript"> on the page holds the _BFD.BFD_INFO object
print(soup.find_all("script", type="text/javascript")[9])
This is the HTML:


_czc.push(["_trackEvent","pc","pc_news"]);
_czc.push(["_trackEvent","pc","pc_news_class_6"]);
window["_BFD"] = window["_BFD"] || {};
_BFD.BFD_INFO = {
"title" :"花菇萝卜炖排骨汤",

I am not great with regular expressions (which could be used to find "title" in a single line), but I think the code below should work:

import requests
from bs4 import BeautifulSoup

url = 'https://www.meishij.net/zuofa/huaguluobodunpaigutang.html'
headers = requests.utils.default_headers()
headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
})

link = requests.get(url, headers=headers)
soup = BeautifulSoup(link.content, "lxml")
scripts = soup.find_all("script")
for script in scripts:
    if "_BFD.BFD_INFO" in script.text:
        text = script.text
        # the third chunk after splitting on '=' starts with the BFD_INFO object: ' {"title" :"..."'
        m_text = text.split('=')
        # the second chunk after splitting on ':' starts with the quoted title value
        m_text = m_text[2].split(":")
        # the title ends at the first comma
        m_text = m_text[1].split(',')
        print(m_text[0])
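
Since a regex could also find "title" in one line, here is a minimal sketch of that idea (an assumption on my part, not code from the answer; it assumes the page is served as UTF-8 and that the title value contains no escaped double quotes):

import re
import requests

url = 'https://www.meishij.net/zuofa/huaguluobodunpaigutang.html'
resp = requests.get(url)
resp.encoding = 'utf-8'  # assumption: the page is UTF-8

# anchor on the _BFD.BFD_INFO assignment and capture the first "title" value;
# assumes the value itself contains no double quotes
match = re.search(r'_BFD\.BFD_INFO\s*=\s*\{[^}]*?"title"\s*:\s*"([^"]+)"', resp.text)
if match:
    print(match.group(1))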
Update for getting the pic:

for script in scripts:
    text = script.text
    # split each script on commas and print any chunk that mentions 'pic'
    m_text = text.split(',')
    for n in m_text:
        if 'pic' in n:
            print(n)
Output:

C:\Users\siva\Desktop>python test.py

"pic" :"http://s1.st.meishij.net/r/216/197/6174466/a6174466_152117574296827.jpg"
Update 2:

for script in scripts:
    text = script.text
    # split on the object name; the chunk after it starts with ' = {"title" :...'
    m_text = text.split('_BFD.BFD_INFO')
    for t in m_text:
        if "title" in t:
            print(t.split(","))
Output:

C:\Users\SSubra02\Desktop>python test.py
[' = {\r\n"title" :"????????"', '\r\n"pic" :"http://s1.st.meishij.net/r/216/197/
6174466/a6174466_152117574296827.jpg"', '\r\n"id" :"1883528"', '\r\n"url" :"http
s://www.meishij.net/zuofa/huaguluobodunpaigutang.html"', '\r\n"category" :[["??"
', '"https://www.meishij.net/chufang/diy/recaipu/"]', '["??"', '"https://www.mei
shij.net/chufang/diy/tangbaocaipu/"]', '["???"', '"https://www.meishij.net/chufa
ng/diy/jiangchangcaipu/"]', '["??"', '"https://www.meishij.net/chufang/diy/wucan
/"]', '["??"', '"https://www.meishij.net/chufang/diy/wancan/"]]', '\r\n"tag" :["
??"', '"??"', '"??"', '"????"', '"????"', '"????"]', '\r\n"author":"????"', '\r\
n"pinglun":"3"', '\r\n"renqi":"4868"', '\r\n"step":"7?"', '\r\n"gongyi":"?"', '\
r\n"nandu":"????"', '\r\n"renshu":"4??"', '\r\n"kouwei":"???"', '\r\n"zbshijian"
:"10??"', '\r\n"prshijian":"<90??"', '\r\n"page_type" :"detail"\r\n};window["_BF
D"] = window["_BFD"] || {};_BFD.client_id = "Cmeishijie";_BFD.script = document.
createElement("script");_BFD.script.type = "text/javascript";_BFD.script.async =
 true;_BFD.script.charset = "utf-8";_BFD.script.src =((\'https:\' == document.lo
cation.protocol?\'https://ssl-static1\':\'http://static1\')+\'.baifendian.com/se
rvice/meishijie/meishijie.js\');']

Comments:

Is the title in this script tag similar to the one you are trying to print?
No, inside it the title looks like this: _czc.push(["_trackEvent","pc","pc_news"]); _czc.push(["_trackEvent","pc","pc_news_class_6"]); window["_BFD"] = window["_BFD"] || {}; _BFD.BFD_INFO = {"title" :"花菇萝卜炖排骨汤", "pic" :"...", "id" :"1883528", "url" :"...", "category" :[["热菜", ...
It worked, thank you so much. I just edited that line to scripts = soup.find_all("script").
@Tehseen yes, that is a better way; I will update it in my code.
Yes. What if I want to access another element in "_BFD.BFD_INFO"? Like the "pic" and "url" I have in the HTML?
@Tehseen you can check the updated code for getting the pic.
Thanks @Siva, you really helped me a lot. I am new to this stuff, so I may ask for help again if I run into any problems.
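
Following up on the last comment about reaching other fields such as "pic" and "url": since _BFD.BFD_INFO is written as a JSON-style object literal, one option (a sketch of my own, not code from the answer) is to cut the whole object out with a regex and load it with json; this assumes the literal is valid JSON with no nested braces, which holds for the output shown above.

import json
import re

import requests
from bs4 import BeautifulSoup

url = 'https://www.meishij.net/zuofa/huaguluobodunpaigutang.html'
soup = BeautifulSoup(requests.get(url).content, 'lxml')

for script in soup.find_all('script'):
    text = script.text
    if '_BFD.BFD_INFO' not in text:
        continue
    # capture everything between "_BFD.BFD_INFO =" and the first "};"
    # (assumes the object literal is valid JSON and contains no nested {...})
    m = re.search(r'_BFD\.BFD_INFO\s*=\s*(\{.*?\})\s*;', text, re.S)
    if m:
        info = json.loads(m.group(1))
        print(info.get('title'), info.get('pic'), info.get('url'))
    break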