Permission issue with Sqoop import using HCatalog

I am trying to use Sqoop import with HCatalog integration to ingest data from Teradata into Hive. Below is my Sqoop import command:

sqoop import -libjars /path/tdgssconfig.jar \
         -Dmapreduce.job.queuename=${queue} \
         -Dmapreduce.map.java.opts=-Xmx16g \
         -Dmapreduce.map.memory.mb=20480 \
         --driver com.teradata.jdbc.TeraDriver \
         --connect jdbc:teradata:<db-url>,charset=ASCII,LOGMECH=LDAP \
         --username ${srcDbUsr} \
         --password-file ${srcDbPassFile} \
         --verbose \
         --query "${query} AND \$CONDITIONS" \
         --split-by ${splitBy} \
         --fetch-size ${fetchSize} \
         --null-string '\\N' \
         --null-non-string '\\N' \
         --fields-terminated-by , \
         --hcatalog-database ${tgtDbName} \
         --hcatalog-table ${tgtTblName} \
         --hcatalog-partition-keys ${partitionKey} \
         --hcatalog-partition-values "${partitionValue}"

I ran into the following error - Error adding partition to metastore. Permission denied:

18/07/03 12:14:02 INFO mapreduce.Job: Job job_1530241180113_6487 failed with state FAILED due to: Job commit failed: org.apache.hive.hcatalog.common.HCatException : 2006 : Error adding partition to metastore. Cause : org.apache.hadoop.security.AccessControlException: Permission denied. user=<usr-name> is not the owner of inode=<partition-key=partition-value>
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkOwner(DefaultAuthorizationProvider.java:195)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:181)
    at org.apache.sentry.hdfs.SentryAuthorizationProvider.checkPermission(SentryAuthorizationProvider.java:178)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3560)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3543)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:3508)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6559)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1807)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1787)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:654)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setPermission(AuthorizationProviderProxyClientProtocol.java:174)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:454)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2141)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2135)

    at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.registerPartitions(FileOutputCommitterContainer.java:969)
    at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.commitJob(FileOutputCommitterContainer.java:249)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

How can I resolve this permission issue?

Solved this issue. Sqoop's HCatalog integration could not add files to the Hive internal (managed) table because the table directory lives inside the Hive warehouse and is owned by the hive user, not the specific user running the job; as the checkOwner call in the stack trace shows, job commit tries to set permissions on the new partition directory, and only the directory's owner may do that. The solution is to create an external table, so that the underlying directory is owned by the user (and not by hive). A minimal sketch of that fix follows.
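
The sketch below assumes a hypothetical target table my_db.my_table partitioned by a single string column load_date; the database name, table name, columns, storage format, and HDFS paths are illustrative placeholders, not taken from the original job:

    # Confirm who owns the table directory; for a managed table this is
    # typically the hive user, which is what makes the job commit fail.
    hdfs dfs -ls /user/hive/warehouse/my_db.db

    # Recreate the target as an EXTERNAL table whose location sits under
    # the importing user's own HDFS space, so partition directories created
    # by the Sqoop job are owned by that user rather than by hive.
    hive -e "
    CREATE EXTERNAL TABLE my_db.my_table (
      id BIGINT,
      name STRING
    )
    PARTITIONED BY (load_date STRING)
    STORED AS ORC
    LOCATION '/user/importing_user/my_db/my_table';
    "

The sqoop import command itself stays the same: --hcatalog-database and --hcatalog-table simply point at the external table, and HCatalog then registers each new partition under the user-owned location instead of the warehouse directory.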