
Extracting values from a JSON API with Python


I am using an API to fetch information about a particular share:

{
  "Meta Data": {
    "1. Information": "Daily Prices (open, high, low, close) and Volumes",
    "2. Symbol": "MSFT",
    "3. Last Refreshed": "2020-05-22",
    "4. Output Size": "Compact",
    "5. Time Zone": "US/Eastern"
  },
  "Time Series (Daily)": {
    "2020-05-22": {
      "1. open": "183.1900",
      "2. high": "184.4600",
      "3. low": "182.5400",
      "4. close": "183.5100",
      "5. volume": "20826898"
    },
    "2020-05-21": {
      "1. open": "185.4000",
      "2. high": "186.6700",
      "3. low": "183.2900",
      "4. close": "183.4300",
      "5. volume": "29032741"
    }, and more...
Now I only want to extract the date, open, and high values into a CSV.

import requests
import json

url = "https://alpha-vantage.p.rapidapi.com/query"

querystring = {"outputsize":"compact","datatype":"JSON","function":"TIME_SERIES_DAILY","symbol":"MSFT"}

headers = {
    'x-rapidapi-host': "alpha-vantage.p.rapidapi.com",
    'x-rapidapi-key': "API KEY"
    }

response = requests.request("GET", url, headers=headers, params=querystring)

info = response.json()

with open('data.json', 'w') as fp:
    json.dump(info, fp)

f = open('data.json')

data = json.load(f)

meta = data["Meta Data"]

for i in data['Meta Data']:
    print(i)

# Closing file
f.close()
Output:

1. Information
2. Symbol
3. Last Refreshed
4. Output Size
5. Time Zone
I thought the information was inside "Meta Data", but apparently it isn't.
Can someone explain exactly what I am doing wrong?
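For context, the output above follows directly from how Python iterates dictionaries: a bare `for ... in dict` walks the keys only, never the values. A minimal sketch using the shape of the "Meta Data" object (trimmed to two entries for illustration):

```python
meta = {
    "1. Information": "Daily Prices (open, high, low, close) and Volumes",
    "2. Symbol": "MSFT",
}

# Iterating a dict yields its keys -- this is why the loop in the
# question printed "1. Information", "2. Symbol", and so on.
for key in meta:
    print(key)

# Use .items() (or index with meta[key]) to reach the values as well.
for key, value in meta.items():
    print(f"{key}: {value}")
```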

There are several dicts nested inside your dict:

meta = data["Meta Data"]

# Contains dict

 "1. Information": "Daily Prices (open, high, low, close) and Volumes",
 "2. Symbol": "MSFT",
 "3. Last Refreshed": "2020-05-22",
 "4. Output Size": "Compact",
 "5. Time Zone": "US/Eastern"

timeSeries = data['Time Series (Daily)']['2020-05-22']

# Contains dict


  "1. open": "183.1900",
  "2. high": "184.4600",
  "3. low": "182.5400",
  "4. close": "183.5100",
  "5. volume": "20826898"
You simply access a value like this:

eventOpen = data['Time Series (Daily)']['2020-05-22']['1. open']

Play around with these objects and print their types and contents so you understand what they contain and how to access them.
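Combining that advice with the original goal (date, open, high into a CSV), one possible sketch uses the standard-library `csv` module. The dict below stands in for the parsed API response from the question, trimmed to two days, and the output filename `prices.csv` is an arbitrary choice:

```python
import csv

# Stand-in for response.json(), trimmed for illustration.
data = {
    "Time Series (Daily)": {
        "2020-05-22": {"1. open": "183.1900", "2. high": "184.4600"},
        "2020-05-21": {"1. open": "185.4000", "2. high": "186.6700"},
    }
}

with open("prices.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["date", "open", "high"])
    # .items() yields (date, prices-dict) pairs, so both the key
    # and the nested values are available in the loop body.
    for date, prices in sorted(data["Time Series (Daily)"].items()):
        writer.writerow([date, prices["1. open"], prices["2. high"]])
```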

The result for "Time Series (Daily)" is quite messy; I can't convert it into a proper CSV format.

Using pandas, I formatted the data and was able to convert it into a proper CSV format:

import datetime
import json

import pandas as pd

with open('data.json') as f:
    meta = json.load(f)
data = meta['Time Series (Daily)']

df = pd.DataFrame(columns=['date', 'open', 'high', 'low', 'close', 'volume'])
for d, p in data.items():
    date = datetime.datetime.strptime(d, '%Y-%m-%d').date()
    data_row = [date, float(p['1. open']), float(p['2. high']),
                float(p['3. low']), float(p['4. close']), int(p['5. volume'])]
    df.loc[-1, :] = data_row
    df.index = df.index + 1
df = df.sort_values('date')
df.to_csv('test.csv')
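As an alternative to building the frame row by row, `pd.DataFrame.from_dict` with `orient="index"` can construct it in one step from the nested dict. This is a sketch, not the answer above; the two-day dict and the filename `test.csv` are stand-ins for the real API response:

```python
import pandas as pd

# Stand-in for data['Time Series (Daily)'], trimmed to two days.
series = {
    "2020-05-22": {"1. open": "183.1900", "2. high": "184.4600",
                   "3. low": "182.5400", "4. close": "183.5100",
                   "5. volume": "20826898"},
    "2020-05-21": {"1. open": "185.4000", "2. high": "186.6700",
                   "3. low": "183.2900", "4. close": "183.4300",
                   "5. volume": "29032741"},
}

# Outer keys (dates) become the index; inner keys become columns.
df = pd.DataFrame.from_dict(series, orient="index")
df.columns = ["open", "high", "low", "close", "volume"]
df = df.astype({"open": float, "high": float, "low": float,
                "close": float, "volume": int})
df.index = pd.to_datetime(df.index)
df = df.sort_index()
df.to_csv("test.csv", index_label="date")
```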

Thanks for your answer, but if I loop through "Time Series (Daily)" I only get the dates. — Yes, my mistake, I mixed up your brackets; I'm updating the answer.
          date     open     high      low    close    volume
0   2019-12-27  3776.82  3794.93  3775.26  3782.27  24437900
1   2019-12-30  3780.44  3780.44  3748.47  3748.47  18684000
2   2020-01-03  3787.57  3787.57  3745.54  3773.37  30343400
3   2020-01-06  3764.34  3764.34  3710.94  3752.52  28339400
4   2020-01-07  3760.09  3784.42  3748.13  3759.25  29853300
...