Python: reading data from a JSON link into a DataFrame

Tags: python, json, python-3.x, pandas, dataframe

I have a JSON link, like this:

URL = ""


How can I download this data into a pandas DataFrame?
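Since the link itself is not shown in the question, here is a minimal sketch of the general pattern: once the JSON body has been parsed into a Python dict (e.g. with `requests.get(url).json()`), a list of records maps directly onto DataFrame rows. The payload and its field names here are invented placeholders, not the real API response:

```python
import pandas as pd

# Hypothetical payload, standing in for the parsed JSON from the link.
# In practice you would obtain it with: payload = requests.get(url).json()
payload = {
    "legends": [["gainers", "Top Gainers"]],
    "gainers": {"data": [{"symbol": "ABC", "netPrice": 1.5},
                         {"symbol": "XYZ", "netPrice": 2.0}]},
}

# a list of dicts becomes one DataFrame row per dict, keys become columns
df = pd.DataFrame(payload["gainers"]["data"])
print(df)
```

The same constructor works for any JSON array of flat objects; deeply nested payloads usually call for `pd.json_normalize` instead.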

Here is how I downloaded all the data into a DataFrame. I added a new column called "legend" so that you can look at each set of data separately if you need to:

import pandas as pd
import requests

# the endpoint rejects requests without a browser-like User-Agent header
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/75.0.3770.80 Safari/537.36'}

URI = 'https://www.nseindia.com/api/live-analysis-variations?index=gainers'
# the response body is JSON, so .json() parses it straight into a dict
data = requests.get(URI, headers=headers).json()

print(data['legends'])

# each legend carries its own rows; collect them and tag each with its legend
frames = []
for legend, _ in data['legends']:
    df = pd.DataFrame(data[legend]['data'])
    df['legend'] = legend
    frames.append(df)

# DataFrame.append was removed in pandas 2.0; pd.concat is its replacement
dfs = pd.concat(frames, ignore_index=True)

print(dfs)


Comment: I get "405 Method Not Allowed" when fetching the URL. — Sir, I have now changed the link, because the previous one had some errors.