
Python - Appending JSON files (python, json, python-3.x)


I am trying to append the JSON files in a folder into a variable so that I can parse it later. Here is my code:

# Importing dependencies
import os
import shutil
import glob
from zipfile import ZipFile
from datetime import datetime
import zipfile
import json
from pandas.io.json import json_normalize
import urllib
import sqlalchemy as sa

# Define the folder sources and destinations
MainDir = 'C:/Test/'
LoadingDir =  'C:/Test/Loading/'
ArchiveDir = 'C:/Test/Archive/'

glob_data = []
# Look for all json files in directory
for file in glob.glob(LoadingDir + '*.json'):
    with open(file) as json_file:
        # Load each json file and append it
        data = json.load(json_file)
        i = 0
        while i < len(data):
            glob_data.append(data[i])
            i += 1
with open(LoadingDir + 'Combined.json', 'w') as f:
    json.dump(glob_data, f, indent=4)
# Load Json file for parsing
file = open(LoadingDir + 'Combined.json')
data = json.load(file)
# Parsing of data
df = json_normalize(data,meta=['timestamp'])
df.to_csv(LoadingDir + "Combined.csv",sep=',', encoding='utf-8')
try:
    df.to_csv(LoadingDir + "Combined.csv",sep=',', encoding='utf-8')
except:
    pass
So I noticed that this JSON file does not start with [ (meaning it is not a list of dicts), but my code works when the JSON does start with [. How can I adjust this to work for this JSON example? (A truncated sample of one of the files is reproduced at the bottom of the page.)
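For reference, a minimal illustration of the difference (the JSON strings here are made up): json.load() returns a list for a document that starts with [ and a dict for one that starts with {, and integer-indexing the dict is what makes the while loop above fail:

import json

# A document that starts with "[" parses to a list ...
as_list = json.loads('[{"a": 1}, {"a": 2}]')
print(type(as_list), as_list[0])   # <class 'list'> {'a': 1}

# ... while one that starts with "{" parses to a dict.
as_dict = json.loads('{"status": {"code": "OK"}}')
print(type(as_dict))               # <class 'dict'>
# as_dict[0] raises KeyError: 0, which is why the loop over data[i] breaks on these files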

Change the code to:

import os
import shutil
import glob
from zipfile import ZipFile
from datetime import datetime
import zipfile
import json
from pandas.io.json import json_normalize
import urllib
import sqlalchemy as sa

# Define the folder sources and destinations
MainDir = 'C:/Test/'
LoadingDir =  'C:/Test/Loading/'
ArchiveDir = 'C:/Test/Archive/'

glob_data = []
# Look for all json files in directory
for file in glob.glob(LoadingDir + '*.json'):
    with open(file) as json_file:
        # Load each json file and append it
        data = json.load(json_file)
        glob_data.append(data)
with open(LoadingDir + 'Combined.json', 'w') as f:
    json.dump(glob_data, f, indent=4)
# Load Json file for parsing
file = open(LoadingDir + 'Combined.json')
data = json.load(file)
# Parsing of data
df = json_normalize(data,meta=['timestamp'])
df.to_csv(LoadingDir + "Combined.csv",sep=',', encoding='utf-8')
try:
    df.to_csv(LoadingDir + "Combined.csv",sep=',', encoding='utf-8')
except:
    pass

You do not need to iterate over the value returned by json.load(); it has already been parsed into a dict, so just append it directly.
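If the folder could ever contain a mix of both formats, one way to handle it (a sketch building on the answer above, not part of it) is to branch on the type json.load() returns: extend the combined list when it is a list, append when it is a dict:

import glob
import json

LoadingDir = 'C:/Test/Loading/'   # same folder as in the question

glob_data = []
for path in glob.glob(LoadingDir + '*.json'):
    with open(path) as json_file:
        data = json.load(json_file)
        if isinstance(data, list):
            # File starts with "[": already a list of records
            glob_data.extend(data)
        else:
            # File starts with "{": a single record, appended whole
            glob_data.append(data)

with open(LoadingDir + 'Combined.json', 'w') as f:
    json.dump(glob_data, f, indent=4)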

Comments:

So you have some JSON files that start with [ while others don't?

The files I'm using in this project don't start with [. I was only using the other format as an example: if a file did start with [, the code worked.

Added an answer, hope it helps; if not, please let me know.

For reference, here is a (truncated) sample of one of the JSON files from the question:

{
  "sensor-time" : {
    "timezone" : "America/Los_Angeles",
    "time" : "2019-11-05T14:18:36-08:00"
  },
  "status" : {
    "code" : "OK"
  },
  "content" : {
    "element" : [ {
      "element-id" : 0,
      "element-name" : "Line 0",
      "sensor-type" : "SINGLE_SENSOR",
      "data-type" : "LINE",
      "from" : "2019-11-01T00:00:00-07:00",
      "to" : "2019-11-05T15:00:00-08:00",
      "resolution" : "ONE_HOUR",
      "measurement" : [ {
        "from" : "2019-11-01T00:00:00-07:00",
        "to" : "2019-11-01T01:00:00-07:00",
        "value" : [ {
          "value" : 0,
          "label" : "fw"
        }, {
          "value" : 0,
          "label" : "bw"
        } ]
      }, {
        "from" : "2019-11-01T01:00:00-07:00",
        "to" : "2019-11-01T02:00:00-07:00",
        "value" : [ {
          "value" : 0,
          "label" : "fw"
        }, {
          "value" : 0,
          "label" : "bw"
        } ]
      }, {
        "from" : "2019-11-01T02:00:00-07:00",
        "to" : "2019-11-01T03:00:00-07:00",
        "value" : [ {
          "value" : 0,
          "label" : "fw"
        }, {
          "value" : 0,
          "label" : "bw"
        } ]
      },
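Since each file nests the hourly readings under content -> element -> measurement, a possible next step after building Combined.json is to flatten those records with json_normalize's record_path and meta arguments. This is only a sketch of one way to do the parsing step, not part of the accepted answer, and it assumes the structure shown in the truncated sample above:

import json
from pandas import json_normalize   # on older pandas: from pandas.io.json import json_normalize

LoadingDir = 'C:/Test/Loading/'     # same folder as in the question

with open(LoadingDir + 'Combined.json') as f:
    combined = json.load(f)         # a list with one dict per original file

# One row per hourly measurement; the element id and name are repeated as metadata columns
df = json_normalize(
    combined,
    record_path=['content', 'element', 'measurement'],
    meta=[
        ['content', 'element', 'element-id'],
        ['content', 'element', 'element-name'],
    ],
)
# The 'value' column still holds the list of {"value": ..., "label": ...} dicts
# and can be expanded further if needed.
print(df.head())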