
Python: propagating SQL data forward into new rows

Tags: python, sql, sqlite, sqlalchemy

So I'm getting data at five-minute intervals from one source and at one-minute intervals from another. What I do is load the five-minute data into some sqlite tables, with relationships between the various data types. The tables also have a column for the one-minute data. Now, what I want to do is find the one-minute reading that matches a five-minute row, update that row, and then propagate the five-minute data forward into new rows for the subsequent one-minute readings.

That is:

            DB before                          DB after
    row  [time] [1-min] [5-min]        row  [time] [1-min] [5-min]
          5-t0    null    d0                 5-t0     m0      d0
          5-t1    null    d1                 1-t1     m1      d0
                  ...                        1-t2     m2      d0
                                             1-t3     m3      d0
                                             5-t1     m4      d1
…and so on.
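The intended transformation can be sketched in plain Python with hypothetical sample data (the timestamps and `d0`/`m0`-style labels below are stand-ins, not the poster's actual values): each one-minute reading is rounded down to its enclosing five-minute boundary, and the five-minute value at that boundary is carried forward.

```python
from datetime import datetime, timedelta

# Hypothetical sample data: five-minute values already in the DB,
# keyed by their timestamp.
five_min = {
    datetime(2020, 1, 1, 0, 0): 'd0',
    datetime(2020, 1, 1, 0, 5): 'd1',
}

# Incoming one-minute readings (stand-in for the CSV source).
one_min = [(datetime(2020, 1, 1, 0, 0) + timedelta(minutes=i), 'm{}'.format(i))
           for i in range(6)]

rows = []
for t, m in one_min:
    # Round down to the enclosing five-minute boundary and carry
    # that boundary's value forward.
    anchor = t - timedelta(minutes=t.minute % 5)
    rows.append((t, m, five_min.get(anchor)))

# `rows` now pairs every one-minute reading with the propagated
# five-minute value, matching the "DB after" table above.
```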

The problem is that the function I'm using to do this is painfully slow. Slower than molasses on a cold winter day. I'm not familiar with SQL operations or SQLAlchemy, so any criticism of my function would be much appreciated. Here is what I have:

import glob
import gc
import csv
from datetime import datetime, timedelta
from sqlalchemy import create_engine, and_
from sqlalchemy.orm import sessionmaker, exc, lazyload
from metar import Metar
from db_models import Base, BaseObservation, SkyObservation

# NOTE: `session` (a SQLAlchemy Session bound to the engine) and `keepers`
# (the collection of column names to carry forward) are referenced below
# but defined elsewhere at module level.
def w_convertion(w_str):
    if w_str == 'No Data':
        return None
    else:
        return float(w_str)

def db_update(W_PATH):
    with open(W_PATH) as f:
        reader = csv.DictReader(f)

        for line in reader:
            time = datetime.strptime(line['time'], '%m/%d/%Y %H:%M')
            qry = session.query(BaseObservation).\
                  options(lazyload(BaseObservation.sky_observations))

            try:
                result = qry.filter_by(time=time).one()
                if result.W:
                    pass  # no-op: W is overwritten unconditionally below
                setattr(result, 'W', w_convertion(line['W']))
                session.commit()

            except exc.MultipleResultsFound:
                print('More than one result. Skipped {}'.format(time))
                pass

            except exc.NoResultFound:       
                dt = timedelta(minutes=(time.minute % 5))
                t = (time - dt)

                # find previous observation
                try:
                    result = qry.filter_by(time=t).one()
                    keeper_dict = dict( (k, v) for k,v in result.__dict__.items() if k in keepers )

                    # make new observation with previous data
                    obs = BaseObservation(**keeper_dict)
                    setattr(obs, 'time', time)
                    setattr(obs, 'W', w_convertion(line['W']))
                    session.add(obs)
                    session.flush()

                    # link new observation to sky observations
                    for sky_obs in result.sky_observations:
                        sky_obs.base_observations.append(obs)

                    session.commit()

                except exc.NoResultFound:
                    print('No previous weather obs found for {}. Point skipped.'.format(time))
                    pass


        gc.collect()
        print('W additions done.')

1. If you need performance, use SQLAlchemy Core instead of the ORM. 2. You need to batch the queries together, i.e. issue one or two queries per `n` rows of `reader` (`n` = 100, 1000, etc.) instead of a query for every row. And why are you calling `commit()` on every row?
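For illustration, here is one way that batching could look: a sketch, not the poster's actual schema. It uses the stdlib `sqlite3` module to stay dependency-free (the same shape works with SQLAlchemy Core's `insert()`/`update()` and `executemany`-style parameter lists). The table name `obs`, its columns, and the seed values are all assumptions made up for the example. The idea: one query to load every existing timestamp, build the updates and inserts in memory, then issue two batched statements and a single `commit()`.

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE obs (time TEXT PRIMARY KEY, one_min REAL, five_min TEXT)')
# Seed with hypothetical five-minute rows (one-minute column still NULL).
conn.executemany('INSERT INTO obs VALUES (?, NULL, ?)',
                 [('2020-01-01 00:00', 'd0'), ('2020-01-01 00:05', 'd1')])

# 1) ONE query to load every existing observation, instead of one per CSV row.
five_min = dict(conn.execute('SELECT time, five_min FROM obs'))

# Incoming one-minute readings (stand-in for the CSV reader).
readings = [(datetime(2020, 1, 1, 0, m), float(m)) for m in range(6)]

updates, inserts = [], []
for t, w in readings:
    key = t.strftime('%Y-%m-%d %H:%M')
    if key in five_min:
        updates.append((w, key))  # matching row exists: just set the 1-min value
    else:
        # Round down to the five-minute boundary and propagate its value forward.
        anchor = (t - timedelta(minutes=t.minute % 5)).strftime('%Y-%m-%d %H:%M')
        if anchor in five_min:
            inserts.append((key, w, five_min[anchor]))

# 2) Two batched statements and a SINGLE commit, instead of one per row.
conn.executemany('UPDATE obs SET one_min = ? WHERE time = ?', updates)
conn.executemany('INSERT INTO obs VALUES (?, ?, ?)', inserts)
conn.commit()
```

Committing once per batch rather than once per row avoids paying sqlite's per-transaction fsync cost for every line of the CSV, which is usually where most of the time goes.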