Python 2.7: looping through payloads


There is a website I connect to, but I need to log in four times with different usernames and passwords.

Is there any way I can do this by looping through the usernames and passwords in the payload?

This is my first time doing something like this, so I'm not really sure how to go about it. The code works fine if I only post a single username and password.

I'm using Python 2.7 with BeautifulSoup and Requests.

Here is my code:

import requests
import zipfile, StringIO
from bs4 import BeautifulSoup

# Here we add the login details to be submitted to the login form.
payload = [
    {'USERNAME': 'xxxxxx', 'PASSWORD': 'xxxxxx', 'option': 'login'},
    {'USERNAME': 'xxxxxx', 'PASSWORD': 'xxxxxxx', 'option': 'login'},
    {'USERNAME': 'xxxxx', 'PASSWORD': 'xxxxx', 'option': 'login'},
    {'USERNAME': 'xxxxxx', 'PASSWORD': 'xxxxxx', 'option': 'login'},
]
# Possibly need headers later.
headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'}
base_url = "https://service.rl360.com/scripts/customer.cgi/SC/servicing/"

with requests.Session() as s:
    p = s.post('https://service.rl360.com/scripts/customer.cgi?option=login', data=payload)

    # Get the download page to scrape.
    r = s.get('https://service.rl360.com/scripts/customer.cgi/SC/servicing/downloads.php?Folder=DataDownloads&SortField=ExpiryDays&SortOrder=Ascending', stream=True)
    content = r.text
    soup = BeautifulSoup(content, 'lxml')
    # Now I get the most recent download URL.
    download_url = soup.find_all("a", {'class': 'tabletd'})[-1]['href']
    # Now we join the base URL with the download URL.
    download_docs = s.get(base_url + download_url, stream=True)
    print "Checking Content"
    content_type = download_docs.headers['content-type']
    print content_type
    print "Checking Filename"
    content_name = download_docs.headers['content-disposition']
    print content_name
    print "Checking Download Size"
    content_size = download_docs.headers['content-length']
    print content_size

    # This is where we extract and download the specified xml files.
    z = zipfile.ZipFile(StringIO.StringIO(download_docs.content))
    print "---------------------------------"
    print "Downloading........."
    # Now we save the files to the specified location.
    z.extractall(r'C:\Temp')
    print "Download Complete"

Just use a for loop. You may need to adjust the download directory if the files would otherwise be overwritten.

payloads = [
    {'USERNAME': 'xxxxxx1', 'PASSWORD': 'xxxxxx', 'option': 'login'},
    {'USERNAME': 'xxxxxx2', 'PASSWORD': 'xxxxxxx', 'option': 'login'},
    {'USERNAME': 'xxxxx3', 'PASSWORD': 'xxxxx', 'option': 'login'},
    {'USERNAME': 'xxxxxx4', 'PASSWORD': 'xxxxxx', 'option': 'login'},
]

....

for payload in payloads:
    with requests.Session() as s:
        p = s.post('https://service.rl360.com/scripts/customer.cgi?option=login', data=payload)
        ...
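
For example, here is a minimal sketch of the full loop under the same assumptions as your original script. It extracts each login's files into its own sub-folder so the downloads don't overwrite one another; the per-USERNAME folder name is just an illustrative choice, not something the site requires:

import os
import zipfile, StringIO
import requests
from bs4 import BeautifulSoup

base_url = "https://service.rl360.com/scripts/customer.cgi/SC/servicing/"

payloads = [
    # ... one dict per set of credentials, as above ...
]

for payload in payloads:
    with requests.Session() as s:
        # Log in with the current set of credentials.
        s.post('https://service.rl360.com/scripts/customer.cgi?option=login', data=payload)

        # Scrape the downloads page and grab the most recent download link.
        r = s.get('https://service.rl360.com/scripts/customer.cgi/SC/servicing/downloads.php?Folder=DataDownloads&SortField=ExpiryDays&SortOrder=Ascending')
        soup = BeautifulSoup(r.text, 'lxml')
        download_url = soup.find_all("a", {'class': 'tabletd'})[-1]['href']

        # Download the zip and extract it into a per-user folder,
        # e.g. C:\Temp\<USERNAME>, so each login's files are kept separate.
        download_docs = s.get(base_url + download_url)
        target_dir = os.path.join(r'C:\Temp', payload['USERNAME'])
        z = zipfile.ZipFile(StringIO.StringIO(download_docs.content))
        z.extractall(target_dir)
        print "Downloaded files for %s to %s" % (payload['USERNAME'], target_dir)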


If this answers your question, feel free to accept my answer using the button to the left of the answer :)