
Logging into a website with Python 3 and BeautifulSoup


I need some help with a small project for learning web scraping in Python.

from bs4 import BeautifulSoup
import urllib.request
import urllib.parse
import http.cookiejar

base_url = "https://login.yahoo.com/config/login?.src=flickrsignin&.pc=8190&.scrumb=0&.pd=c%3DH6T9XcS72e4mRnW3NpTAiU8ZkA--&.intl=in&.lang=en&mg=1&.done=https%3A%2F%2Flogin.yahoo.com%2Fconfig%2Fvalidate%3F.src%3Dflickrsignin%26.pc%3D8190%26.scrumb%3D0%26.pd%3Dc%253DJvVF95K62e6PzdPu7MBv2V8-%26.intl%3Din%26.done%3Dhttps%253A%252F%252Fwww.flickr.com%252Fsignin%252Fyahoo%252F%253Fredir%253Dhttps%25253A%25252F%25252Fwww.flickr.com%25252F"
login_action = "/config/login?.src=flickrsignin&.pc=8190&.scrumb=0&.pd=c%3DH6T9XcS72e4mRnW3NpTAiU8ZkA--&.intl=in&.lang=en&mg=1&.done=https%3A%2F%2Flogin.yahoo.com%2Fconfig%2Fvalidate%3F.src%3Dflickrsignin%26.pc%3D8190%26.scrumb%3D0%26.pd%3Dc%253DJvVF95K62e6PzdPu7MBv2V8-%26.intl%3Din%26.done%3Dhttps%253A%252F%252Fwww.flickr.com%252Fsignin%252Fyahoo%252F%253Fredir%253Dhttps%25253A%25252F%25252Fwww.flickr.com%25252F"

# Cookie-aware opener so cookies set by the server are kept between requests
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
opener.addheaders = [('User-agent',
    ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) '
     'AppleWebKit/535.1 (KHTML, like Gecko) '
     'Chrome/13.0.782.13 Safari/535.1'))
]

# Form fields posted to the login endpoint
login_data = urllib.parse.urlencode({
    'login-username': 'username',
    'login-passwd': 'password',
    'remember_me': True
})
login_data = login_data.encode('ascii')
login_url = base_url + login_action
response = opener.open(login_url, login_data)
print(response.read())

I have tried to log in, but the response I get back is the same HTML as the login page. Can anyone help me log in to this site?

You are not storing the session token that you receive when you log in. You can use mechanize to handle the login session instead of storing it manually.

Here is a good article on how to do this.
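
As a rough illustration of what "keeping the session token" means with the standard library the question already uses, the sketch below fetches the login page, posts the credentials, and then requests a protected page through the same cookie-aware opener, so the session cookie stored in the CookieJar is sent back automatically. The URLs and field names are placeholders, not the real Flickr/Yahoo ones.

import urllib.request
import urllib.parse
import http.cookiejar

# One cookie jar shared by every request made through this opener;
# any session cookie set by the server is replayed on later requests.
cookie_jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookie_jar))

login_url = "https://example.com/login"        # placeholder, not the real login URL
protected_url = "https://example.com/account"  # placeholder page that requires a login

# 1. GET the login page first so any pre-login cookies are stored.
opener.open(login_url)

# 2. POST the credentials with the same opener (placeholder field names).
form = urllib.parse.urlencode({'username': 'me', 'password': 'secret'}).encode('ascii')
opener.open(login_url, form)

# 3. Later requests through the same opener carry the session cookie.
page = opener.open(protected_url).read()
print(page[:200])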

Try using BeautifulSoup to read further content as required. User[email] is just the input name used for the username and User[password] the input name used for the password. Note that the code below can only log in to sites without csrf_token protection.

import requests
from requests.packages.urllib3 import add_stderr_logger
from bs4 import BeautifulSoup

# Log urllib3 activity to stderr so the login request/response can be inspected
add_stderr_logger()

url = 'https://example.com/login'  # replace with the login URL of the site you are scraping

session = requests.Session()
# Post the login form; 'User[email]' and 'User[password]' must match the
# name attributes of the site's login <input> fields
per_session = session.post(url,
    data={'User[email]': 'your_email', 'User[password]': 'your_password'})

# You can now combine requests with BeautifulSoup: the session keeps the login
# cookies, so .get() fetches any page of your choice as a logged-in user
try:
    bsObj = BeautifulSoup(session.get(url).content, 'lxml')
except requests.exceptions.RequestException as e:
    print(e)
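
If the site does use a csrf token, one common approach (not part of the answer above; the field name 'csrf_token' and the URL are assumptions, so inspect the actual form) is to GET the login page first, read the hidden token out of the form with BeautifulSoup, and include it in the POST:

import requests
from bs4 import BeautifulSoup

login_url = 'https://example.com/login'  # placeholder URL

session = requests.Session()

# Fetch the login form and pull the hidden csrf field out of it.
# 'csrf_token' is a guess; check the real name attribute in the page source.
login_page = BeautifulSoup(session.get(login_url).content, 'lxml')
token_input = login_page.find('input', {'name': 'csrf_token'})
token = token_input['value'] if token_input else ''

# Post the credentials together with the token, reusing the same session
# so the cookie the token was issued against is sent back.
session.post(login_url, data={
    'User[email]': 'your_email',
    'User[password]': 'your_password',
    'csrf_token': token,
})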

mechanize is not compatible with Python 3; take a look at MechanicalSoup or RoboBrowser instead.
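
For completeness, a minimal MechanicalSoup sketch of the same login flow; the URL, the form selector, and the field names are placeholders, so check the actual form on the site you target.

import mechanicalsoup

# StatefulBrowser keeps cookies between requests, much like mechanize did
browser = mechanicalsoup.StatefulBrowser()
browser.open("https://example.com/login")  # placeholder login URL

# Select the login form and fill it in; the selector and field names
# are assumptions - inspect the page to find the real ones
browser.select_form('form')
browser["username"] = "your_username"
browser["password"] = "your_password"
browser.submit_selected()

# After submitting, the current page is already a BeautifulSoup object
soup = browser.get_current_page()
print(soup.title)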