
Python 2.7: function only receives one log from logservice.fetch()

python-2.7, google-cloud-platform

I created a function that uses logservice.fetch() to retrieve 5 minutes' worth of logs from Google App Engine. It then builds a dictionary from each log so that I can pass them to ML Engine for online prediction. The problem I'm running into is that the function only ever seems to receive a single log. I've confirmed that there is more than one log in the 5-minute window, so I believe the problem is in how I wrote the function. I'm still new to Python and have run out of ideas for getting this to work. How do I receive all the logs from the 5-minute window?

Code:

#retrieve and store timestamp for bigquery query
def timestamp():
    global_settings = GlobalSettings.all().get()

    logsml_last_updated = global_settings.logsml_last_updated
    if not logsml_last_updated:
        logsml_last_updated = datetime.datetime.now() - datetime.timedelta(minutes=5)

    ret_logs = logs(logsml_last_updated, offset=None)

    results = upload_logs(ret_logs, logsml_last_updated)

    global_settings.logsml_last_updated = datetime.datetime.now()

    global_settings.put()

    return results

#retrieve logs from logservice
def logs(timestamp, offset=None):
    CSV_COLUMNS = 'resource,place_id,status,end_time,device,device_os,device_os_version,latency,megacycles,cost,device_brand,device_family,browser_version,app,ua_parse'.split(
        ',')

    start_time = timestamp
    end_time = start_time + datetime.timedelta(minutes=5)
    # MAX_LOGS_TO_READ = 500

    logging.info("start_time")
    logging.info(start_time)
    logging.info(start_time.strftime('%s'))

    ret_logs = logservice.fetch(
        start_time=long(start_time.strftime('%s')),
        end_time=long(end_time.strftime('%s')),
        offset=offset,
        minimum_log_level=logservice.LOG_LEVEL_INFO,
        include_app_logs=True)

    for line in ret_logs:
        combined = ""
        splitted = line.combined.split('"')
        if len(splitted) > 3:
            splitted_again = splitted[3].split('/')
            if len(splitted_again) > 1:
                combined = splitted_again[1].split(' ')[0]
        user_agent = user_agents.parse(line.user_agent or "")
        row_data = [line.resource.split('?')[0][1:], get_param_from_url(line.resource, 'place_id'), line.status,
                    datetime.datetime.fromtimestamp(line.end_time),
                    user_agent.device.model, user_agent.os.family, user_agent.os.version_string,
                    line.latency, line.mcycles, line.cost,
                    user_agent.device.brand, user_agent.device.family,
                    user_agent.browser.version_string,
                    get_param_from_url(line.resource, 'session_id'),
                    line.version_id or "", combined]
        row_string = [x if isinstance(x, basestring) else '' if not x else str(x) for x in row_data]
        logging.info(row_string)

        l1 = dict(zip(CSV_COLUMNS, row_string))
        logging.info(l1)
        l1.update({str(k): float(v) if k == 'megacycles' else v for k, v in l1.items()})
        l1.update({str(k): float(v) if k == 'latency' else v for k, v in l1.items()})
        l1.update({k: v if v is not '' else '0' for k, v in l1.items()})
        l1['key'] = "%s-%s-%s" % (l1['megacycles'], l1['end_time'], l1['latency'])

        ret = {'instances': []}
        ret['orig'] = []
        ret['orig'].append(dict(l1))
        l1.pop('place_id')
        l1.pop('resource')
        l1.pop('status')
        ret['instances'].append(l1)
        logging.info(ret)

        return ret

Thanks in advance.

You may have already found a solution to this by now, but anyway:

%s (note the lowercase s) can't be found anywhere in the strftime() documentation. There is a %S, which returns seconds in the range [0, 61]; in most cases that is not enough to represent the time since the start of the Unix epoch, which is what logservice.fetch() requires:

    Args:
      start_time: The earliest request completion or last-update time that
        results should be fetched for, in seconds since the Unix epoch.
      end_time: The latest request completion or last-update time that
        results should be fetched for, in seconds since the Unix epoch.

However, time() does return exactly that:

    the time in seconds since the epoch as a floating point number

Simply replace strftime('%s') with time().
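For what it's worth, a minimal sketch of that replacement, assuming the question's 5-minute window and the standard google.appengine.api import; since the question stores the window start as a datetime, time.mktime() is used here to convert it (time() alone only gives the current moment):

import time
import datetime

from google.appengine.api import logservice

# time.time() already returns float seconds since the Unix epoch, which is
# what logservice.fetch() expects. For the datetime held by the caller,
# time.mktime() performs the equivalent conversion portably, whereas
# strftime('%s') is a platform-specific extension.
start_dt = datetime.datetime.now() - datetime.timedelta(minutes=5)
start_time = time.mktime(start_dt.timetuple())
end_time = time.time()

ret_logs = logservice.fetch(
    start_time=start_time,
    end_time=end_time,
    minimum_log_level=logservice.LOG_LEVEL_INFO,
    include_app_logs=True)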

@a-queue, thanks for your response. I ended up going with a different approach to solve this, but left the question up in the hope that someone could answer it for this method. That said, time() did the trick, so I'll upvote and accept it as the solution to this specific question. Thanks!