
Converting XML to a DataFrame across multiple time periods


I am trying to query an API that returns data in XML format, and then put that data into a DataFrame.

I pieced together code from other posters on Stack Overflow, which works when fromUtc and untilUtc in the URL below are only one hour apart. However, I would like to be able to query several days or weeks of data, not just one hour at a time.

Here is the one-hour time range (which my code works with):

However, I can't figure out how to pull all of the data into a DataFrame when the URL spans several days, like this:

url = "https://platform.aggm.at/mgm/api/timeseriesList.do?key=b73a4778a543fadd3f72bc9ebfe42d4c&fromUtc=2018-01-01T06&untilUtc=2018-04-01T06&group=904"
Here is the working code for the one-hour time range:

import xml.etree.ElementTree as et
import requests
import pandas as pd


class XmlListConfig(list):
    def __init__(self, aList):
        for element in aList:
            if element:
                # treat like dict
                if len(element) == 1 or element[0].tag != element[1].tag:
                    self.append(XmlDictConfig(element))
                # treat like list
                elif element[0].tag == element[1].tag:
                    self.append(XmlListConfig(element))
            elif element.text:
                text = element.text.strip()
                if text:
                    self.append(text)


class XmlDictConfig(dict):
    def __init__(self, parent_element):
        if parent_element.items():
            self.update(dict(parent_element.items()))
        for element in parent_element:
            if element:
                # treat like dict - we assume that if the first two tags
                # in a series are different, then they are all different.
                if len(element) == 1 or element[0].tag != element[1].tag:
                    aDict = XmlDictConfig(element)
                # treat like list - we assume that if the first two tags
                # in a series are the same, then the rest are the same.
                else:
                    # here, we put the list in dictionary; the key is the
                    # tag name the list elements all share in common, and
                    # the value is the list itself 
                    aDict = {element[0].tag: XmlListConfig(element)}
                # if the tag has attributes, add those to the dict
                if element.items():
                    aDict.update(dict(element.items()))
                self.update({element.tag: aDict})
            # this assumes that if you've got an attribute in a tag,
            # you won't be having any text. This may or may not be a 
            # good idea -- time will tell. It works for the way we are
            # currently doing XML configuration files...
            elif element.items():
                self.update({element.tag: dict(element.items())})
                # when there is one child to an element with attributes AND text
                #The line just below this was added.
                self[element.tag].update({"TSO-Value":element.text})
            # finally, if there are no child tags and no attributes, extract
            # the text
            else:
                self.update({element.tag: element.text})

url = "https://platform.aggm.at/mgm/api/timeseriesList.do?key=b73a4778a543fadd3f72bc9ebfe42d4c&fromUtc=2018-01-01T06&untilUtc=2018-01-01T07&group=904"
response = requests.get(url)
root = et.fromstring(response.content)
xmldict = XmlDictConfig(root)

#https://stackoverflow.com/questions/32855045/splitting-nested-dictionary
#retrieve one of the values inside the dictionary
inner = xmldict['TimeseriesList']
df = pd.DataFrame.from_dict(inner)

new_inner = inner['Timeseries']
print(new_inner)
df2 = pd.DataFrame.from_dict(new_inner)


values = new_inner # initial data


def getValueOrDefault(v):
    if v is None:
        return {'FromUTC': None, 'UntilUTC': None, 'TSO-Value': None}
    return v['Value']

values = [{**value['Header'], **getValueOrDefault(value['Values'])} for value in values]
print(values)
df3 = pd.DataFrame(values)
When I query one hour of data, I get the following two dictionaries in df2. Header:

Values:

{'Value': {'FromUTC': '2018-01-01T06:00:00.000Z', 'UntilUTC': '2018-01-01T07:00:00.000Z', 'TSO-Value': '10128309'}}
I put these into a DataFrame with the following function:

def getValueOrDefault(v):
    if v is None:
        return {'FromUTC': None, 'UntilUTC': None, 'TSO-Value': None}
    return v['Value']

values = [{**value['Header'], **getValueOrDefault(value['Values'])} for value in values]
print(values)
df3 = pd.DataFrame(values)
This returns a DataFrame like the following:

But when I increase the time period of the query, my code cannot handle it.

This time my df2 contains:

{'TimeserieId': '1501', 'ObjectID': 'NominierterEKVOst', 'Unit': 'kWh/h', 'Granularity': 'HOUR', 'Name': 'Nominated Consumption East', 'LastUpdate': '2019-11-19T15:25:00.000Z'}
followed by the values below, which do not include the FromUTC and UntilUTC dates:

{'Value': ['10128309', '10090691', '9991207.0', '10025856', '10030502', '10158945', '10158071', '10302802', '10838279', '10853112', '11108562', '11046172', '11216328', '11278472', '11288031', '11241307', '11164816', '11017874', '10808995', '10664421', '10498511', '10648369', '11028336', '12492439', '12492750', '12447412', '12365682', '12250841', '12225688', '12207470', '12321979', '12349964', '12303415', '12198112', '12237306', '12242819', '12216428', '12250504', '12265349', '11978096', '11936941', '11876989', '11298411', '11067736', '11134122', '11064653', '11351798', '12602242', '12910271', '12874984', '12790243', '12896733', '12871346', '12800547', '13204986', '13050597', '13225956', '13388547', '13510211', '13519767', '13262630', '12817374', '12323831', '12137506', '11946898', '11625450', '11540814', '11521041', '11586489', '12000038', '12391238', '12601717', '13231766', '13210762', '12947699', '13028445', '13555487', '12936937', '13038339', '13033435', '13078160', '13330834', '13441336', '13205542', '13142700', '13115554', '12055131', '11601545', '11415094', '11323713', '11282856', '11256287', '11244198', '11984312', '12134719', '13009439', '14598346', '14885711', '14849889', '14490393', '14312574', '13654674', '13051538', '12533006', '12614777', '12618908', '12594414', '12603372', '12639542', '12583482', '12523456', '12379896', '11692829', '11149465', '11120051', '11135499', '11130259', '11080760', '11271191', '10909230', '10962510', '11520114', '12022168', '12079581', '12077174', '11948640', '11895253', '11917234', '11946389', '12056458', '11995725', '11985354', '12008127', '11924274', '11783698', '11548238', '11135481', '10679563', '10750011', '10076521', '10470355', '10709176', '10756600', '10320698', '10491483', '10538155', '10650800', '10899565', '10890840', '10881940', '10856757', '10686689', '10798309', '10830784', '10953838', '10960305', '10959465', '11078191', '11001972', '10868302', '10550175', '10373976', '10470765', '10463628', '10651108', 
'10688276', '11069214', '12540496', '12974473']}
My goal is to get the values above into a DataFrame with the corresponding time periods alongside them. Currently the time periods are not included, and I cannot figure out why.

If there is an easier way to pull the XML into a DataFrame, any help or advice would be greatly appreciated.
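As an aside, since each Value tag carries FromUTC and UntilUTC attributes, the pairing can also be done in a direct ElementTree pass without the dictionary-conversion classes. This is only a sketch: the element names below are inferred from the dictionaries shown above and from the API's XML, so treat the sample document and field names as assumptions.

```python
# Sketch: walk each Timeseries, merge its Header fields with each Value's
# FromUTC/UntilUTC attributes and text. Element names are assumptions
# based on the structures shown in the question.
import xml.etree.ElementTree as et
import pandas as pd

sample = """<Data>
  <TimeseriesList>
    <Timeseries>
      <Header><TimeserieId>1501</TimeserieId><Unit>kWh/h</Unit></Header>
      <Values>
        <Value FromUTC="2018-01-01T06:00:00.000Z" UntilUTC="2018-01-01T07:00:00.000Z">10128309</Value>
        <Value FromUTC="2018-01-01T07:00:00.000Z" UntilUTC="2018-01-01T08:00:00.000Z">10090691</Value>
      </Values>
    </Timeseries>
  </TimeseriesList>
</Data>"""

root = et.fromstring(sample)
rows = []
for ts in root.iter("Timeseries"):
    # one dict of header fields per series, reused for every value row
    header = {h.tag: h.text for h in ts.find("Header")}
    for v in ts.iter("Value"):
        rows.append({**header, **v.attrib, "TSO-Value": v.text})

df = pd.DataFrame(rows)
```

The `{**header, **v.attrib, ...}` merge is the same dictionary-unpacking trick already used in the question, applied per Value element so the timestamps stay attached to their readings.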

Consider XSLT, the special-purpose language designed to transform XML files into other formats, including tabular CSV files! Python can run XSLT 1.0 with the third-party lxml library, a feature-rich and easy-to-use module that extends the built-in ElementTree API. Alternatively, Python can call an external XSLT processor to run the script.

From there, pandas can read the resulting tree directly with StringIO, or with read_csv from a file. With this approach, both URL versions work.
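The StringIO step works because read_csv accepts any file-like object, so a transformed CSV string can be loaded without touching disk. A minimal self-contained illustration (the values here are made up):

```python
# read_csv from an in-memory string via StringIO -- no temp file needed
from io import StringIO
import pandas as pd

csv_text = "FromUTC,UntilUTC,TSO_Value\n2018-01-01T06,2018-01-01T07,10128309\n"
df = pd.read_csv(StringIO(csv_text))
```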

XSLT (save as a .xsl file or embed as a string)

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="text" omit-xml-declaration="yes" indent="yes"/>
    <xsl:strip-space elements="*"/>

    <xsl:template match="/Data">
       <!-- HEADERS -->         
       <xsl:text>TimeserieId,ObjectID,Unit,Granularity,Name,LastUpdate,</xsl:text>
       <xsl:text>FromUTC,UntilUTC,TSO_Value&#xa;</xsl:text>
       <xsl:apply-templates select="descendant::Value"/>
    </xsl:template>

    <xsl:template match="Value">
       <!-- DATA -->
       <xsl:value-of select="concat(ancestor::Timeseries/Header/TimeserieId, ',',
                                    ancestor::Timeseries/Header/ObjectID, ',',
                                    ancestor::Timeseries/Header/Unit, ',',
                                    ancestor::Timeseries/Header/Granularity, ',',
                                    ancestor::Timeseries/Header/Name, ',',
                                    ancestor::Timeseries/Header/LastUpdate, ',',
                                    @FromUTC, ',',
                                    @UntilUTC, ',',
                                    text())" />
       <xsl:text>&#xa;</xsl:text>
    </xsl:template>    

</xsl:stylesheet>
Python

from io import StringIO
import requests as rq
import lxml.etree as et
import pandas as pd

# RETRIEVE WEB CONTENT
url = ("https://platform.aggm.at/mgm/api/timeseriesList.do?"
       "key=b73a4778a543fadd3f72bc9ebfe42d4c&"
       "fromUtc=2018-01-01T06&untilUtc=2018-04-01T06&group=904")
response = rq.get(url)

# LOAD XML AND XSL
doc = et.fromstring(response.content)    
style = et.fromstring(xslt_str)            # xslt_str: the XSLT above, embedded as a string
# style = et.parse("/path/to/Script.xsl")  # or load the saved .xsl file

# TRANSFORM
transform = et.XSLT(style)
result = transform(doc)
# STRING READ
time_series_df = pd.read_csv(StringIO(str(result)))

time_series_df.head(10)    
#    TimeserieId           ObjectID   Unit Granularity                        Name                LastUpdate                   FromUTC                  UntilUTC   TSO_Value
# 0         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T06:00:00.000Z  2018-01-01T07:00:00.000Z  10128309.0
# 1         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T07:00:00.000Z  2018-01-01T08:00:00.000Z  10090691.0
# 2         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T08:00:00.000Z  2018-01-01T09:00:00.000Z   9991207.0
# 3         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T09:00:00.000Z  2018-01-01T10:00:00.000Z  10025856.0
# 4         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T10:00:00.000Z  2018-01-01T11:00:00.000Z  10030502.0
# 5         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T11:00:00.000Z  2018-01-01T12:00:00.000Z  10158945.0
# 6         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T12:00:00.000Z  2018-01-01T13:00:00.000Z  10158071.0
# 7         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T13:00:00.000Z  2018-01-01T14:00:00.000Z  10302802.0
# 8         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T14:00:00.000Z  2018-01-01T15:00:00.000Z  10838279.0
# 9         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T15:00:00.000Z  2018-01-01T16:00:00.000Z  10853112.0

# IO FILE WRITE / READ
with open('Output.csv', 'wb') as f:
    f.write(bytes(result))   # an lxml XSLT result serializes via bytes()/str()

time_series_df = pd.read_csv('Output.csv')

time_series_df.head(10)        
#   TimeserieId           ObjectID   Unit Granularity                        Name                LastUpdate                   FromUTC                  UntilUTC   TSO_Value
# 0         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T06:00:00.000Z  2018-01-01T07:00:00.000Z  10128309.0
# 1         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T07:00:00.000Z  2018-01-01T08:00:00.000Z  10090691.0
# 2         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T08:00:00.000Z  2018-01-01T09:00:00.000Z   9991207.0
# 3         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T09:00:00.000Z  2018-01-01T10:00:00.000Z  10025856.0
# 4         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T10:00:00.000Z  2018-01-01T11:00:00.000Z  10030502.0
# 5         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T11:00:00.000Z  2018-01-01T12:00:00.000Z  10158945.0
# 6         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T12:00:00.000Z  2018-01-01T13:00:00.000Z  10158071.0
# 7         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T13:00:00.000Z  2018-01-01T14:00:00.000Z  10302802.0
# 8         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T14:00:00.000Z  2018-01-01T15:00:00.000Z  10838279.0
# 9         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T15:00:00.000Z  2018-01-01T16:00:00.000Z  10853112.0
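Finally, if the endpoint ever rejects very long ranges, the query itself can be windowed and the per-window frames concatenated. This is a sketch only: whether the API accepts arbitrary fromUtc/untilUtc pairs, and the 7-day window size, are assumptions.

```python
# Sketch: split [start, end) into fixed-size windows and yield the
# fromUtc/untilUtc strings for each request. The 7-day default and the
# API's tolerance for these parameters are assumptions.
import pandas as pd

def window_bounds(start, end, freq="7D"):
    """Yield (fromUtc, untilUtc) string pairs covering [start, end)."""
    edges = pd.date_range(start, end, freq=freq)
    if edges[-1] < pd.Timestamp(end):
        # make sure the final partial window reaches the requested end
        edges = edges.append(pd.DatetimeIndex([end]))
    for a, b in zip(edges[:-1], edges[1:]):
        yield a.strftime("%Y-%m-%dT%H"), b.strftime("%Y-%m-%dT%H")

# frames = []
# for from_utc, until_utc in window_bounds("2018-01-01T06", "2018-04-01T06"):
#     url = ("https://platform.aggm.at/mgm/api/timeseriesList.do?"
#            "key=b73a4778a543fadd3f72bc9ebfe42d4c&"
#            f"fromUtc={from_utc}&untilUtc={until_utc}&group=904")
#     # ... fetch, transform with the XSLT, read into a DataFrame as above ...
#     # frames.append(time_series_df)
# full_df = pd.concat(frames, ignore_index=True)
```

Concatenating with `ignore_index=True` gives one continuous frame regardless of how many windows the range was split into.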