Python: memory leak using Django Channels, APScheduler and Google oauth2client



My code pulls data every X seconds and pushes it to a WebSocket frontend via Django Channels. I am using APScheduler to run a tick function at the specified interval. But when I run it, memory usage grows at a fairly steady rate, even though all the function does is authorize the OAuth API. Sample code from consumers.py:

from channels import Group
from channels.sessions import channel_session
from django.conf import settings
from apscheduler.schedulers.background import BackgroundScheduler
from oauth2client import transport
from apiclient.discovery import build

scheduler = BackgroundScheduler()

def tick(group_id):
    user = GoogleUser.objects.all()[0]
    # get_credentials is an app-specific helper that returns a DjangoORMStorage instance
    credentials = get_credentials(user).get()

    # THESE TWO LINES SEEM TO CAUSE THE MEMORY LEAK
    oauth_http = credentials.authorize(transport.get_http_object())
    analytics = build('analytics', 'v3', http=oauth_http)

@channel_session
def ws_connect(message):
    # accept socket connection and add channel to group
    message.reply_channel.send({"accept": True})

    # add channel to websocket channel group
    redis_group = Group(group_id, channel_layer=None)  # group_id is defined elsewhere in the original code
    redis_group.add(message.reply_channel)

    # schedule the job (slug is defined elsewhere in the original code; re-adding
    # a job with the same id raises ConflictingIdError unless replace_existing=True)
    scheduler.add_job(tick, 'interval', id=slug, kwargs={
        'group_id': group_id,
    }, seconds=settings.INTERVAL)
    if not scheduler.running:  # start() raises SchedulerAlreadyRunningError if called twice
        scheduler.start()
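One common mitigation for this kind of growth (not confirmed as the fix here) is to authorize and build the service object once and reuse it across ticks, instead of creating a fresh HTTP object and rediscovering the API on every scheduler run. Below is a minimal, dependency-free sketch of that caching pattern; `build_fn` and `credential_fn` are hypothetical injected stand-ins for `build('analytics', 'v3', http=...)` and `get_credentials(...)`, so the caching logic can be shown without the Google libraries:

```python
# Sketch: cache the built service per user so authorize/build run only once.
# build_fn and credential_fn are hypothetical stand-ins for the question's
# build(...) and get_credentials(...) calls.

_service_cache = {}

def get_analytics_service(user_id, build_fn, credential_fn):
    """Return a cached service object for user_id, building it on first use."""
    if user_id not in _service_cache:
        credentials = credential_fn(user_id)
        _service_cache[user_id] = build_fn(credentials)
    return _service_cache[user_id]
```

Applied to the question's code, this would mean constructing `analytics` once (at module level or on the first tick) and reusing it in subsequent runs; if the credentials can expire, refresh them in place rather than rebuilding the whole service object each tick.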