Python: Making an HTTP POST Request

I'm trying to make a POST request to retrieve information about a book. Here is the code, which comes back with HTTP status 302, Moved:

import httplib, urllib
params = urllib.urlencode({
    'isbn' : '9780131185838',
    'catalogId' : '10001',
    'schoolStoreId' : '15828',
    'search' : 'Search'
    })
headers = {"Content-type": "application/x-www-form-urlencoded",
           "Accept": "text/plain"}
conn = httplib.HTTPConnection("bkstr.com:80")
conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch",
             params, headers)
response = conn.getresponse()
print response.status, response.reason
data = response.read()
conn.close()
When I try it from the browser, from this page, it works. What is my code missing?

Edit: here is what I get when I call print response.msg:

302 Moved
Date: Tue, 07 Sep 2010 16:54:29 GMT
Vary: Host,Accept-Encoding,User-Agent
Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
X-UA-Compatible: IE=EmulateIE7
Content-Length: 0
Content-Type: text/plain; charset=utf-8
It seems that this Location points to the same path I was trying to access in the first place (though on www.bkstr.com rather than bkstr.com).

Edit 2:

I have tried using urllib2 as suggested here. The code is below:

import urllib, urllib2

url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
values = {'isbn' : '9780131185838',
          'catalogId' : '10001',
          'schoolStoreId' : '15828',
          'search' : 'Search' }


data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
print response.geturl()
print response.info()
the_page = response.read()
print the_page
Here is the output:

http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
Date: Tue, 07 Sep 2010 16:58:35 GMT
Pragma: No-cache
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
Vary: Accept-Encoding,User-Agent
X-UA-Compatible: IE=EmulateIE7
Content-Length: 0
Connection: close
Content-Type: text/html; charset=utf-8
Content-Language: en-US
Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
  • Maybe that is exactly what the browser gets as well, and you just need to follow the 302 redirect; a rough sketch of doing that by hand follows below.
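    A minimal sketch of following the redirect manually with httplib, assuming the POST should simply be re-sent to the host and path given in the Location header (a 302 strictly allows the client to switch to GET, so treat this as an illustration, not a definitive fix):

    import httplib, urllib, urlparse

    params = urllib.urlencode({'isbn': '9780131185838',
                               'catalogId': '10001',
                               'schoolStoreId': '15828',
                               'search': 'Search'})
    headers = {"Content-type": "application/x-www-form-urlencoded",
               "Accept": "text/plain"}

    conn = httplib.HTTPConnection("bkstr.com:80")
    conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch",
                 params, headers)
    response = conn.getresponse()

    if response.status in (301, 302):
        # The Location header says where the server moved the resource to;
        # open a new connection to that host and repeat the request there.
        location = response.getheader('Location')
        parsed = urlparse.urlsplit(location)
        conn.close()
        conn = httplib.HTTPConnection(parsed.netloc)
        conn.request("POST", parsed.path, params, headers)
        response = conn.getresponse()

    print response.status, response.reason
    print response.read()
    conn.close()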

  • If all else fails, you can use Firebug, tcpdump, or Wireshark to monitor the conversation between Firefox and the web server and see which HTTP headers differ. It may just be the User-Agent: header; a sketch of setting it is shown below.
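    A minimal sketch of sending a browser-like User-Agent with urllib2 (the header value here is only an illustrative string, not necessarily what your browser actually sends):

    import urllib, urllib2

    url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
    data = urllib.urlencode({'isbn': '9780131185838',
                             'catalogId': '10001',
                             'schoolStoreId': '15828',
                             'search': 'Search'})

    req = urllib2.Request(url, data)
    # Override the default Python-urllib/2.x User-Agent with a browser-style one.
    req.add_header('User-Agent',
                   'Mozilla/5.0 (Windows NT 5.1; rv:1.9.2) Gecko/20100101 Firefox/3.6')
    response = urllib2.urlopen(req)
    print response.geturl()
    print response.info()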


  • You may want to use a different module than httplib for this. Below is an example using urllib2.

    Their server seems to want you to acquire the proper cookie first. This works:

    import urllib, urllib2, cookielib
    
    cookie_jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))
    urllib2.install_opener(opener)
    
    # acquire cookie
    url_1 = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828'
    req = urllib2.Request(url_1)
    rsp = urllib2.urlopen(req)
    
    # do POST
    url_2 = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
    values = dict(isbn='9780131185838', schoolStoreId='15828', catalogId='10001')
    data = urllib.urlencode(values)
    req = urllib2.Request(url_2, data)
    rsp = urllib2.urlopen(req)
    content = rsp.read()
    
    # print result
    import re
    pat = re.compile('Title:.*')
    print pat.search(content).group()
    
    # OUTPUT: Title:&nbsp;&nbsp;Statics & Strength of Materials for Arch (w/CD)<br />
    

    The 302 response also tells you the location it was moved to: find that URL and use it.

    @infrared: glad to help. I should probably add that one way to tackle this kind of problem is to run an HTTP proxy that shows you a trace of the requests and responses. Then, using both the browser and your code, compare the two traces. Usually you are looking for differences in cookies or headers. It can take some trial and error. I like Fiddler, but any such tool will do.
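    As a lighter-weight alternative to a full proxy, urllib2 can also print the raw request and response headers it exchanges. A minimal sketch, assuming the same cookie-handling opener as above, with debuglevel=1 turned on so the traffic is dumped to stdout for comparison with the browser's trace:

    import urllib, urllib2, cookielib

    # Opener that both handles cookies and logs the HTTP conversation to stdout.
    cookie_jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPHandler(debuglevel=1),
                                  urllib2.HTTPCookieProcessor(cookie_jar))
    urllib2.install_opener(opener)

    url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
    data = urllib.urlencode({'isbn': '9780131185838',
                             'catalogId': '10001',
                             'schoolStoreId': '15828',
                             'search': 'Search'})
    rsp = urllib2.urlopen(url, data)
    print rsp.info()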