
Python: getting the value I need using json.dumps()


I'm still trying to get my head around json.loads and json.dumps in order to extract the data I want from a webpage. The data I'm after comes in the following format:

data:{
                url: 'stage-player-stat'
            },
            defaultParams: {
                stageId: 9155,
                teamId: 32,
                playerId: -1,
                field: 2
            },
The code I'm using is as follows:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import log
from scrapy.cmdline import execute
from scrapy.utils.markup import remove_tags
import time
import re
import json
import requests

class ExampleSpider(CrawlSpider):
    name = "goal2"
    allowed_domains = ["whoscored.com"]
    start_urls = ["http://www.whoscored.com/Teams/32/"]

    # Raw string so the backslash/path pattern is not treated as an escape sequence
    rules = [Rule(SgmlLinkExtractor(allow=(r'/Teams', ), deny=()), follow=False, callback='parse_item')]

    def parse_item(self, response):
        # Match the defaultParams block and capture its first key/value pair
        stagematch = re.compile(r"data:\s*{\s*url:\s*'stage-player-stat'\s*},\s*defaultParams:\s*{\s*(.*?),.*},", re.S)

        stagematch2 = re.search(stagematch, response.body)

        if stagematch2 is not None:
            stagematch3 = stagematch2.group(1)

            stageid = json.dumps(stagematch3)

            print "stageid = ", stageid

# Run the spider from module level, not from inside the class body
if __name__ == '__main__':
    execute(['scrapy', 'crawl', 'goal2'])
In this example, `stageid` resolves to `'stageId: 9155'`. What I want it to resolve to is `9155`. I tried indexing into `stageid` with `stageId = stageId[0]`, as if it were a dictionary, but that doesn't work. What am I doing wrong?

Thanks
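A minimal illustration of what `json.dumps` is doing here may help: it serializes a Python object *into* JSON text, it does not parse JSON, so calling it on the matched text just wraps it in quotes. (The sample strings below are taken from the question.)

```python
import json

# json.dumps serializes a Python object into a JSON string; it does not parse.
matched = "stageId: 9155"           # what the regex capture group returns
print(json.dumps(matched))          # "stageId: 9155"  (the same text, quoted)

# json.loads goes the other way, but only works on valid JSON:
print(json.loads('{"stageId": 9155}')["stageId"])   # 9155
```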

If you want, you can convert it back to a str:

stageid = str(stageid)

There are plenty of other ways to approach your problem. One of them is to use a simpler regexp and then parse the match with `json.loads`.
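A sketch of that simpler-regexp route; the capture pattern below is an assumption based on the snippet in the question, not code from the answer:

```python
import json
import re

# The raw JavaScript fragment from the question.
body = """
data: {
    url: 'stage-player-stat'
},
defaultParams: {
    stageId: 9155,
    teamId: 32,
    playerId: -1,
    field: 2
},
"""

# Capture just the number after "stageId:" instead of the whole block.
match = re.search(r"stageId:\s*(\d+)", body)
if match:
    stage_id = json.loads(match.group(1))   # parses "9155" into the int 9155
    print(stage_id)                         # 9155
```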

A solution using the following approach:

  • Get the contents of all the `<script>` elements
  • Parse each one with js2xml: it returns an lxml tree
  • Use XPath on the lxml document to find `var defaultTeamPlayerStatsConfigParams` and get its init object
  • Use `js2xml.jsonlike.make_dict()` to get a Python dict from it
Here is how it plays out, as shown in this scrapy shell session:

$ scrapy shell http://www.whoscored.com/Teams/32/
2014-09-08 11:17:31+0200 [scrapy] INFO: Scrapy 0.24.4 started (bot: scrapybot)
...
2014-09-08 11:17:32+0200 [default] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/32/> (referer: None)
[s] Available Scrapy objects:
[s]   crawler    <scrapy.crawler.Crawler object at 0x7f88f0605990>
[s]   item       {}
[s]   request    <GET http://www.whoscored.com/Teams/32/>
[s]   response   <200 http://www.whoscored.com/Teams/32/>
[s]   settings   <scrapy.settings.Settings object at 0x7f88f6046450>
[s]   spider     <Spider 'default' at 0x7f88efdaff50>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

In [1]: import pprint

In [2]: import js2xml

In [3]: for script in response.xpath('//script/text()').extract():
    jstree = js2xml.parse(script)
    params = jstree.xpath('//var[@name="defaultTeamPlayerStatsConfigParams"]/object')
    if params:
        pprint.pprint(js2xml.jsonlike.make_dict(params[0]))
   ...:         
{'data': {'url': 'stage-player-stat'},
 'defaultParams': {'field': 2, 'playerId': -1, 'stageId': 9155, 'teamId': 32},
 'fitText': {'container': '.grid .team-link, .grid .player-link',
             'options': {'width': 150}},
 'fixZeros': True}

In [4]: for script in response.xpath('//script/text()').extract():
    jstree = js2xml.parse(script)
    params = jstree.xpath('//var[@name="defaultTeamPlayerStatsConfigParams"]/object')
    if params:
        params = js2xml.jsonlike.make_dict(params[0])
   ...:         print params["defaultParams"]["stageId"]
   ...:         
9155

In [5]: 