
Python: extracting a .csv result after submitting data to a form with mechanize


I'm new to extracting data from the web with Python. Thanks to some other posts, I figured out how to submit data to a form using the `mechanize` module.

Now I'm stuck on how to retrieve the results. Submitting the form produces many different outputs, but it would be perfect if I could access the csv file. I assume the `re` module is needed, but how do I actually download the result with Python?

After the job runs, the csv file is reachable via: Summary => Results => Download Heavy Chain Table (you can simply click "load example" to see how the page works).

When I print `content`, the lines I'm interested in look like this:

<h2>Results</h2><br>
Predictions for Heavy Chain:
<a href='u17003I9f1/Prob_Heavy.csv'>Download Heavy Chain Table</a><br>
Predictions for Light Chain:
<a href='u17003I9f1/Prob_Light.csv'>Download Light Chain Table</a><br>

So the question is: given that the format is always the same, how can I download/access `href='u17003I9f1/Prob_Heavy.csv'`, even though parsing HTML with a regular expression is a hack?

One suggestion was to extract the links with:

result=re.findall("<a href='([^']*)'>",contents)
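Applied to the snippet from the question, that pattern captures both relative paths; here is a small self-contained check:

```python
import re

# The HTML fragment shown in the question
contents = """<h2>Results</h2><br>
Predictions for Heavy Chain:
<a href='u17003I9f1/Prob_Heavy.csv'>Download Heavy Chain Table</a><br>
Predictions for Light Chain:
<a href='u17003I9f1/Prob_Light.csv'>Download Light Chain Table</a><br>"""

# Capture everything between the single quotes of each href
result = re.findall("<a href='([^']*)'>", contents)
print(result)  # ['u17003I9f1/Prob_Heavy.csv', 'u17003I9f1/Prob_Light.csv']
```

Note that each captured path already includes the `.csv` extension.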

In Python 2 (which you appear to be using), use `urllib2`:

>>> import urllib2
>>> URL = "http://circe.med.uniroma1.it/proABC/u17003I9f1/Prob_Heavy.csv"
>>> urllib2.urlopen(URL).read()
Or, if you are trying to build the URL dynamically from the `href`, you can do something like:

>>> import urllib2
>>> href='u17003I9f1/Prob_Heavy.csv'
>>> URL = 'http://circe.med.uniroma1.it/proABC/' + href
>>> urllib2.urlopen(URL).read()
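For reference (the comments below mention a Python 3 variant), the equivalent in Python 3 uses `urllib.request`; the network call itself is left commented out here:

```python
from urllib.request import urlopen

href = 'u17003I9f1/Prob_Heavy.csv'
URL = 'http://circe.med.uniroma1.it/proABC/' + href
print(URL)  # http://circe.med.uniroma1.it/proABC/u17003I9f1/Prob_Heavy.csv
# data = urlopen(URL).read()  # performs the actual download
```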

Both of the previous answers work well... provided the page exists. However, while the job is running, the server takes time to produce the files (about 30 seconds), so I worked around it by pausing the program with the `time` module:

from urllib2 import urlopen
import time

print "Job running..."
time.sleep(60)

csv_files = []

for href in result:
    URL = "http://circe.med.uniroma1.it/proABC/" + href  # href already ends in .csv
    print("downloading {}".format(URL))
    csv_files.append(urlopen(URL).read())

print "Job finished"
print csv_files

I'm not sure this is the most elegant solution, but it does work in this case.

Here is a quick-and-dirty example that uses `BeautifulSoup` and `requests` to avoid parsing the HTML with regular expressions. Run `sudo pip install bs4` if you have `pip` installed but not `BeautifulSoup`:

import re
import mechanize
from bs4 import BeautifulSoup as bs
import requests
import time


br = mechanize.Browser()
br.set_handle_robots(False)   # ignore robots
br.set_handle_refresh(False)  # can sometimes hang without this

url_base = "http://circe.med.uniroma1.it/proABC/"
url_index = url_base + "index.php"

response = br.open(url_index)

br.form = list(br.forms())[1]

# Controls can be found by name
control1 = br.form.find_control("light")

# Text controls can be set as a string
br["light"] = "DIQMTQSPASLSASVGETVTITCRASGNIHNYLAWYQQKQGKSPQLLVYYTTTLADGVPSRFSGSGSGTQYSLKINSLQPEDFGSYYCQHFWSTPRTFGGGTKLEIKRADAAPTVSIFPPSSEQLTSGGASVVCFLNNFYPKDINVKWKIDGSERQNGVLNSWTDQDSKDSTYSMSSTLTLTKDEYERHNSYTCEATHKTSTSPIVKSFNRNEC" 
br["heavy"] = "QVQLKESGPGLVAPSQSLSITCTVSGFSLTGYGVNWVRQPPGKGLEWLGMIWGDGNTDYNSALKSRLSISKDNSKSQVFLKMNSLHTDDTARYYCARERDYRLDYWGQGTTLTVSSASTTPPSVFPLAPGSAAQTNSMVTLGCLVKGYFPEPVTVTWNSGSLSSGVHTFPAVLQSDLYTLSSSVTVPSSPRPSETVTCNVAHPASSTKVDKKIVPRDC"

# To submit form
response = br.submit()
content = response.read()
# print content

soup = bs(content, "html.parser")
# href=True skips anchors without an href attribute (avoids a None comparison)
urls_csv = [x.get("href") for x in soup.findAll("a", href=True) if ".csv" in x.get("href")]
for file_path in urls_csv:
    status_code = 404
    retries = 0
    url_csv = url_base + file_path
    file_name = url_csv.split("/")[-1]
    while status_code == 404 and retries < 10:
        print "{} not ready yet".format(file_name)
        req = requests.get(url_csv)
        status_code = req.status_code
        retries += 1  # give up after 10 attempts
        time.sleep(5)
    print "{} ready. Saving.".format(file_name)
    with open(file_name, "wb") as f:
        f.write(req.content)

Thanks a lot, this looks close to the answer, but the problem is that the downloaded files are empty. How can we fix that?

It's a shame the results don't seem to be stored on the web. I'm sure wget would fail if the file couldn't be found, so maybe they really are empty; or you could try mixing my answer with Christopher's answer (which uses `urllib` in Python 3) and see whether it makes a difference.

That's exactly what I tried: mixing your two answers, which seems like a good option.

If you end up using Christopher's answer, let me know / suggest an edit so my answer stays correct. Do you mean the files have zero length? What happens if you paste the URL directly into Firefox?
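To guard against saving empty files, one option (a hypothetical tweak, not part of the answers above) is to treat an empty response body the same way as a 404 in the retry loop:

```python
def is_ready(status_code, body):
    # Hypothetical helper: the file counts as ready only when the
    # request succeeded AND the body is non-empty.
    return status_code == 200 and len(body) > 0

print(is_ready(200, b"Res1,Res2\n0.5,0.5\n"))  # True
print(is_ready(200, b""))                      # False (empty download)
print(is_ready(404, b"Not Found"))             # False (job still running)
```

The loop's `status_code == 404` test could then be replaced by `not is_ready(status_code, req.content)`.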
Sample output:

Prob_Heavy.csv not ready yet
Prob_Heavy.csv not ready yet
Prob_Heavy.csv not ready yet
Prob_Heavy.csv ready. Saving.
Prob_Light.csv not ready yet
Prob_Light.csv ready. Saving.