How to best handle data stored in different locations in Google BigQuery?
My current workflow in BigQuery is as follows:

(1) query data in a public repository (stored in the US), (2) write the result to a table in my own dataset, (3) export that table as a CSV to a Cloud Storage bucket, (4) download the CSV onto the server I work on, and (5) process the CSV on that server.

The problem is that the server I work on is located in the EU, so I pay quite a bit for transferring data between my US bucket and my EU server. I could go ahead and locate my bucket in the EU, but then I would still be transferring data from the US (BigQuery) to the EU (bucket). I could also set my BigQuery dataset to be located in the EU, but then I could no longer run my queries, because the data in the public repository is stored in the US and queries across different locations are not allowed. Does anyone have an idea how to deal with this?
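For reference, here is roughly what that workflow looks like in code. This is only a sketch: all project, dataset, table, and bucket names are placeholders, and it assumes reasonably recent google-cloud-bigquery and google-cloud-storage client libraries.

from google.cloud import bigquery, storage

bq = bigquery.Client(project='my-project')   # placeholder project
gcs = storage.Client(project='my-project')

# (1) + (2): query the public dataset (US) into a table in my own dataset,
# which therefore also has to live in the US.
dest = bigquery.DatasetReference('my-project', 'my_dataset').table('results')
query_config = bigquery.QueryJobConfig(destination=dest)
bq.query(
    'SELECT * FROM `bigquery-public-data.some_dataset.some_table` LIMIT 1000',
    job_config=query_config).result()

# (3): export the result table as CSV into a Cloud Storage bucket.
bq.extract_table(dest, 'gs://my-us-bucket/results.csv').result()

# (4) + (5): download the CSV onto the server where it is processed.
gcs.bucket('my-us-bucket').blob('results.csv').download_to_filename('results.csv')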
Any way you look at it, you have data in the US that you need in the EU, so as I see it you have two options: move the processing to the data (a server in the US), or move the data to the processing (pay to copy it to the EU).
Maybe Google has some magic way of making this better, but as far as I can tell, you are producing a pile of data on one side of the Atlantic, you need it on the other side, and moving it across that line costs money.

One way to copy a BigQuery dataset from one region to another is to take advantage of the Storage Transfer Service. It doesn't get around the fact that you still have to pay for the transfer, but it may save you some CPU time on copying the data to your EU server.

The flow would be: (1) extract all the tables of the dataset into a Cloud Storage bucket in the same (US) location, (2) run a storage transfer job to copy the extracted files to a bucket in the destination (EU) location, and (3) load the files into a BigQuery dataset in the destination location. The script below does exactly that:
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import sys
import time
import googleapiclient.discovery
from google.cloud import bigquery
import json
import pytz
PROJECT_ID = 'swast-scratch' # TODO: set this to your project name
FROM_LOCATION = 'US' # TODO: set this to the BigQuery location
FROM_DATASET = 'workflow_test_us' # TODO: set to BQ dataset name
FROM_BUCKET = 'swast-scratch-us' # TODO: set to bucket name in same location
TO_LOCATION = 'EU' # TODO: set this to the destination BigQuery location
TO_DATASET = 'workflow_test_eu' # TODO: set to destination dataset name
TO_BUCKET = 'swast-scratch-eu' # TODO: set to bucket name in destination loc
# Construct API clients.
bq_client = bigquery.Client(project=PROJECT_ID)
transfer_client = googleapiclient.discovery.build('storagetransfer', 'v1')
def extract_tables():
    # Extract all tables in a dataset to a Cloud Storage bucket.
    print('Extracting {}:{} to bucket {}'.format(
        PROJECT_ID, FROM_DATASET, FROM_BUCKET))

    tables = list(bq_client.list_tables(bq_client.dataset(FROM_DATASET)))
    extract_jobs = []
    for table in tables:
        job_config = bigquery.ExtractJobConfig()
        job_config.destination_format = bigquery.DestinationFormat.AVRO
        extract_job = bq_client.extract_table(
            table.reference,
            ['gs://{}/{}.avro'.format(FROM_BUCKET, table.table_id)],
            location=FROM_LOCATION,  # Available in 0.32.0 library.
            job_config=job_config)  # Starts the extract job.
        extract_jobs.append(extract_job)

    for job in extract_jobs:
        job.result()

    return tables
def transfer_buckets():
    # Transfer files from one region to another using storage transfer service.
    print('Transferring bucket {} to {}'.format(FROM_BUCKET, TO_BUCKET))
    now = datetime.datetime.now(pytz.utc)
    transfer_job = {
        'description': '{}-{}-{}_once'.format(
            PROJECT_ID, FROM_BUCKET, TO_BUCKET),
        'status': 'ENABLED',
        'projectId': PROJECT_ID,
        'transferSpec': {
            'transferOptions': {
                'overwriteObjectsAlreadyExistingInSink': True,
            },
            'gcsDataSource': {
                'bucketName': FROM_BUCKET,
            },
            'gcsDataSink': {
                'bucketName': TO_BUCKET,
            },
        },
        # Set start and end date to today (UTC) without a time part to start
        # the job immediately.
        'schedule': {
            'scheduleStartDate': {
                'year': now.year,
                'month': now.month,
                'day': now.day,
            },
            'scheduleEndDate': {
                'year': now.year,
                'month': now.month,
                'day': now.day,
            },
        },
    }
    transfer_job = transfer_client.transferJobs().create(
        body=transfer_job).execute()
    print('Returned transferJob: {}'.format(
        json.dumps(transfer_job, indent=4)))

    # Find the operation created for the job.
    job_filter = {
        'project_id': PROJECT_ID,
        'job_names': [transfer_job['name']],
    }

    # Wait until the operation has started.
    response = {}
    while ('operations' not in response) or (not response['operations']):
        time.sleep(1)
        response = transfer_client.transferOperations().list(
            name='transferOperations', filter=json.dumps(job_filter)).execute()

    operation = response['operations'][0]
    print('Returned transferOperation: {}'.format(
        json.dumps(operation, indent=4)))

    # Wait for the transfer to complete.
    print('Waiting ', end='')
    while operation['metadata']['status'] == 'IN_PROGRESS':
        print('.', end='')
        sys.stdout.flush()
        time.sleep(5)
        operation = transfer_client.transferOperations().get(
            name=operation['name']).execute()
    print()
    print('Finished transferOperation: {}'.format(
        json.dumps(operation, indent=4)))
def load_tables(tables):
    # Load all tables into the new dataset.
    print('Loading tables from bucket {} to {}:{}'.format(
        TO_BUCKET, PROJECT_ID, TO_DATASET))

    load_jobs = []
    for table in tables:
        dest_table = bq_client.dataset(TO_DATASET).table(table.table_id)
        job_config = bigquery.LoadJobConfig()
        job_config.source_format = bigquery.SourceFormat.AVRO
        load_job = bq_client.load_table_from_uri(
            ['gs://{}/{}.avro'.format(TO_BUCKET, table.table_id)],
            dest_table,
            location=TO_LOCATION,  # Available in 0.32.0 library.
            job_config=job_config)  # Starts the load job.
        load_jobs.append(load_job)

    for job in load_jobs:
        job.result()
# Actually run the script.
tables = extract_tables()
transfer_buckets()
load_tables(tables)
The preceding sample uses the google-cloud-bigquery library for the BigQuery API and google-api-python-client for the Storage Data Transfer API.
Note that this sample does not account for partitioned tables (a possible extension is sketched after the comments below).

Follow-up comments:

Q: How much data is it, and roughly how much are you hoping to avoid paying?
A: The CSV data I end up transferring is about 3 GB per run, but I do this regularly, so it adds up day by day.

Q: What are you actually paying for this?
A: The "inter-continental" transfer rates listed in the pricing page. The daily amount depends on how often I query; yesterday, for example, it was about $2.

Q: Wouldn't it be easy enough to get a cheap VM from some no-name provider that is located in the US, where this pricing doesn't apply, and use it as an intermediate step?

Some other options (using Composer and Dataflow) are now listed in this SO question ->
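As noted above, the script treats every table as a plain, non-partitioned table. If some of the source tables are ingestion-time partitioned and the partitions should be preserved, one possible (untested) extension is to extract them partition by partition via partition decorators. The helper below is only a sketch: it reuses the constants and bq_client from the script and assumes a google-cloud-bigquery version that provides Client.list_partitions, which the 0.32.0 release used above may not.

def extract_partitioned_table(table):
    # Sketch: extract an ingestion-time partitioned table one partition at a
    # time ("table$YYYYMMDD"), so each file can later be loaded back into the
    # matching partition of the destination table.
    jobs = []
    for partition_id in bq_client.list_partitions(table.reference):
        source = '{}.{}.{}${}'.format(
            PROJECT_ID, FROM_DATASET, table.table_id, partition_id)
        uri = 'gs://{}/{}_{}.avro'.format(
            FROM_BUCKET, table.table_id, partition_id)
        job_config = bigquery.ExtractJobConfig()
        job_config.destination_format = bigquery.DestinationFormat.AVRO
        jobs.append(bq_client.extract_table(
            source, [uri], location=FROM_LOCATION, job_config=job_config))
    for job in jobs:
        job.result()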