How do I get more than 100,000 rows in the response using the Google BigQuery Python API?


Right now I query BigQuery from the Python API with this script:

import argparse
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
bigquery_service = build('bigquery', 'v2', credentials=credentials)
project = 'my-project-id'  # GCP project ID

def request(query):
    query_request = bigquery_service.jobs()
    query_data = {'query': query, 'timeoutMs': 100000}
    query_response = query_request.query(projectId=project, body=query_data).execute()
    return query_response

query = """
select domain
from 
[logs.compressed_v40_20170313]
limit 150000"""

respond = request(query)
The result I get is:

print respond['totalRows']  # total number of rows in the result
u'150000'

print len(respond['rows'])  # actual number of rows returned
100000

Question: how do I receive the remaining 50,000 rows?

To get results beyond the first page, you need to call getQueryResults.

In your case, you need to grab the job ID and the page token from the response:

query_response = query_request.query(projectId=project, body=query_data).execute()
page_token = query_response['pageToken']
job_id = query_response['jobReference']['jobId']
next_page = bigquery_service.jobs().getQueryResults(
    projectId=project, jobId=job_id, pageToken=page_token).execute()
Continue this in a loop until you have fetched all of the query results.
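The loop described above could be sketched like this. This is a sketch rather than the answerer's exact code: `fetch_all_rows` is a hypothetical helper name, and it assumes a `bigquery_service` object built as in the question and the field names of the BigQuery v2 REST API (`rows`, `pageToken`, `jobReference`):

```python
def fetch_all_rows(service, project_id, query):
    """Run a query, then page through getQueryResults until no pageToken remains."""
    jobs = service.jobs()
    response = jobs.query(projectId=project_id, body={'query': query}).execute()
    job_id = response['jobReference']['jobId']
    rows = list(response.get('rows', []))       # first page of results
    page_token = response.get('pageToken')      # absent when all rows fit in one page
    while page_token:
        page = jobs.getQueryResults(
            projectId=project_id, jobId=job_id, pageToken=page_token).execute()
        rows.extend(page.get('rows', []))
        page_token = page.get('pageToken')
    return rows
```

Using `.get('pageToken')` rather than indexing matters here: the key is simply absent on the final page, and that is what terminates the loop.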

Note: the call to query may time out, but the query will still be running in the background. Instead of using the query method, we recommend that you create an explicit job ID and insert the job manually, as shown in the async-query sample in the docs. Note: that sample's name is misleading, since it does in fact wait for the query to complete.
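The "explicit job ID" approach could be sketched as follows. `run_query_job` is a hypothetical helper, assuming the `jobs().insert` and `jobs().get` methods of the BigQuery v2 API; the point is that if the client times out, you still hold the job ID and can resume polling instead of re-running the query:

```python
import time
import uuid

def run_query_job(service, project_id, query):
    """Insert a query job under an ID we choose, then poll until it is DONE."""
    job_id = 'my_query_job_' + uuid.uuid4().hex  # explicit, client-generated job ID
    body = {
        'jobReference': {'projectId': project_id, 'jobId': job_id},
        'configuration': {'query': {'query': query}},
    }
    service.jobs().insert(projectId=project_id, body=body).execute()
    while True:
        job = service.jobs().get(projectId=project_id, jobId=job_id).execute()
        if job['status']['state'] == 'DONE':
            return job_id  # pass this to getQueryResults to page through the rows
        time.sleep(1)  # back off before polling again
```

Once the job is DONE, the same `getQueryResults`/`pageToken` loop as above retrieves all the rows.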