Python: how to retrieve data from an iframe on an HTTPS-secured site behind a login
I have been trying to write a script that will let me scrape my grades from our online intranet page. The page I want to retrieve the data from is the one given as theurl in the code below. I have tried to do this with Python, but whenever I log in I cannot work out how to reach that page from the script; simply requesting it is not enough. It seems I get redirected after logging in. This is what I have so far:
import urllib2
theurl = 'https://intranet.ku.dk/Selvbetjening/Sider/default.aspx'
username = 'MYUSERNAME'
password = 'MYPASSWORD'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
pagehandle = urllib2.urlopen(theurl)
for elm in pagehandle:
    print elm
Thanks, everyone!

Whenever the response status is 301 or 302 (which means a redirect), you get the redirect URL in the "Location" header. You then need to issue a second request to that URL. Keep in mind that this URL expects the user to be logged in, so you also need to pass along all the cookies. What you are actually doing is scraping this site to retrieve data from it. You need to do something like the following:
import httplib
import urllib

class scraper():
    def __init__(self):
        self.cookies = {}   # cookie jar, filled in by saveCookies()
        self.headers = {}

    def somefunc(self, postDataDict):
        self.host = "intranet.ku.dk"
        self.url = "https://intranet.ku.dk/Selvbetjening/Sider/default.aspx"
        self.data = urllib.urlencode(postDataDict)   # the login form fields
        self.headers = { # You can fill these values in by looking at what the browser sends.
            'Accept': 'text/html; */*',
            'Accept-Language': '',
            'Accept-Encoding': 'identity',
            'Connection': 'keep-alive',
            'Content-Type': 'application/x-www-form-urlencoded'}
        response = self.makeRequest(self.host, self.url, self.data)
        if response.status == 302:   # follow the post-login redirect
            url = '/' + response.getheader("Location").split('/')[3]
            response = self.makeRequest(self.host, url, '')
    def makeRequest(self, host, url, data):
        cookies = ''
        for key in self.cookies:
            cookies = cookies + key + '=' + self.cookies[key] + '; '
        self.headers['Cookie'] = cookies
        conn = httplib.HTTPSConnection(host)
        conn.request("POST", url, data, self.headers)
        response = conn.getresponse()
        self.saveCookies(response.getheader("Set-Cookie"))
        responseVal = response.read()   # read the body before closing the connection
        conn.close()
        self.headers['Referer'] = 'https://' + host + url   # Referer for the next request
        return response
    def saveCookies(self, cookies):
        # 'cookies' is the raw Set-Cookie header value; keep only name=value
        # pairs and skip the cookie attributes (expires, Max-Age, Path, Domain).
        if cookies is not None:
            values = cookies.split()
            for value in values:
                parts = value.split('=')
                if len(parts) > 1:
                    if parts[0] not in ('expires', 'Max-Age', 'Path', 'path', 'Domain'):
                        self.cookies[parts[0]] = parts[1].rstrip(';')
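The cookie bookkeeping above is easy to get wrong, so here is a small standalone sketch of the same parsing (function names are mine) that can be checked without any network access:

```python
# Standalone version of the Set-Cookie parsing done in saveCookies above;
# only name=value pairs are kept, cookie attributes are skipped.
ATTRIBUTES = ('expires', 'Max-Age', 'Path', 'path', 'Domain')

def save_cookies(set_cookie_header, jar):
    # 'set_cookie_header' is a raw Set-Cookie header value, or None
    if set_cookie_header is not None:
        for value in set_cookie_header.split():
            parts = value.split('=')
            if len(parts) > 1 and parts[0] not in ATTRIBUTES:
                jar[parts[0]] = parts[1].rstrip(';')
    return jar

def cookie_header(jar):
    # Build the Cookie request header that makeRequest sends back
    return '; '.join(k + '=' + v for k, v in sorted(jar.items()))

jar = save_cookies('ASP.NET_SessionId=abc123; Path=/; HttpOnly', {})
print(cookie_header(jar))   # ASP.NET_SessionId=abc123
```

Note that this (like the class above) splits on whitespace and ignores cookie values that contain spaces; it is a sketch, not a full Set-Cookie parser.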
P.S. I have adapted this from my own code to make it more generic, so please check it for mistakes. @Jonas Lomholdt, please confirm whether it works for you so the question can be closed.
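One thing to be aware of: the `'/' + location.split('/')[3]` trick in the redirect handling keeps only the first path segment of the Location URL. A quick check of what it produces (using the URL from the question):

```python
# split('/') on an absolute URL gives ['https:', '', host, seg1, seg2, ...],
# so index 3 is the first path segment.
def redirect_path(location):
    return '/' + location.split('/')[3]

print(redirect_path('https://intranet.ku.dk/Selvbetjening/Sider/default.aspx'))
# -> /Selvbetjening
```

So the rest of the path ('/Sider/default.aspx' here) is dropped, which may or may not be what the redirect target needs; if your Location header points deeper into the site, keep the whole path instead.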