Google Cloud Dataproc cannot access Hive metastore on Cloud SQL with --scopes=cloud-platform


I have created two Dataproc clusters. The requirement is to use one Hive metastore that both clusters can access. The first is an ETL cluster, created with --scopes=sql-admin; the second, for ML users, was created with --scopes=cloud-platform. The databases and tables created from the ETL cluster are not accessible from the ML cluster. Can someone help: do I have to add --scopes=sql-admin to every cluster?

ETL cluster creation command:

gcloud dataproc clusters create amlgcbuatbi-report \
    --project=${PROJECT} \
    --master-machine-type n1-standard-1 --worker-machine-type n1-standard-1 \
    --master-boot-disk-size 50 --worker-boot-disk-size 50 \
    --zone=${ZONE} \
    --num-workers=${WORKERS} \
    --scopes=sql-admin \
    --image-version=1.3 \
    --initialization-actions=gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh \
    --properties=hive:hive.metastore.warehouse.dir=gs://gftat/data \
    --metadata="hive-metastore-instance=$PROJECT:$REGION:metaore-dev001"

Output:

0: jdbc:hive2://localhost:10000/default> show databases;
+------------------+
|  database_name   |
+------------------+
| default          |
| gcb_dw           |
| l1_gcb_trxn_raw  |
+------------------+

ML cluster creation command:

gcloud dataproc clusters create amlgcbuatbi-ml \
    --project=${PROJECT} \
    --master-machine-type n1-standard-1 --worker-machine-type n1-standard-1 \
    --master-boot-disk-size 50 --worker-boot-disk-size 50 \
    --zone=${ZONE} \
    --num-workers=${WORKERS} \
    --scopes=cloud-platform \
    --image-version=1.3 \
    --optional-components=PRESTO \
    --initialization-actions=gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh \
    --initialization-actions=gs://dataproc-initialization-actions/presto/presto.sh \
    --metadata="hive-metastore-instance=$PROJECT:$REGION:metaore-dev001"

Output: here I cannot see the databases and tables created by the ETL cluster

0: jdbc:hive2://localhost:10000/default> show databases;
+----------------+
| database_name  |
+----------------+
| default        |
+----------------+
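
To confirm which initialization actions actually ran on the ML cluster, Dataproc writes each action's output to a numbered log file on the cluster nodes. A minimal sketch of the check (the default <cluster-name>-m master naming and the standard init-action log path are assumptions here):

    # SSH into the ML cluster's master node (default master name: <cluster>-m).
    gcloud compute ssh amlgcbuatbi-ml-m --zone=${ZONE}

    # On the master: one log file per initialization action that ran. If only
    # one file is present, only one of the two actions executed; its contents
    # show which one.
    ls /var/log/dataproc-initialization-script-*.log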
The --initialization-actions flag expects a comma-separated list; repeating the flag does not append additional initialization actions to the list. Try

--initialization-actions=gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh,gs://dataproc-initialization-actions/presto/presto.sh

instead of two separate --initialization-actions flags.
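
A sketch of the corrected ML cluster create command, unchanged from the question except for the single comma-separated --initialization-actions flag:

    # Both actions go in one flag, so the cloud-sql-proxy action is not dropped:
    gcloud dataproc clusters create amlgcbuatbi-ml \
        --project=${PROJECT} \
        --master-machine-type n1-standard-1 --worker-machine-type n1-standard-1 \
        --master-boot-disk-size 50 --worker-boot-disk-size 50 \
        --zone=${ZONE} \
        --num-workers=${WORKERS} \
        --scopes=cloud-platform \
        --image-version=1.3 \
        --optional-components=PRESTO \
        --initialization-actions=gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh,gs://dataproc-initialization-actions/presto/presto.sh \
        --metadata="hive-metastore-instance=$PROJECT:$REGION:metaore-dev001"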

Are you sure this was caused by --scopes=cloud-platform?
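
If in doubt, you can inspect the scopes a cluster actually received. A sketch, assuming the current Dataproc API field names for the --format path:

    # Show the OAuth scopes attached to the ML cluster's VMs.
    gcloud dataproc clusters describe amlgcbuatbi-ml \
        --region=${REGION} \
        --format="value(config.gceClusterConfig.serviceAccountScopes)"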