Hadoop Sqoop query cannot import table


I am trying to execute the Sqoop import below:

sqoop import --connect 'jdbc:sqlserver://server-IP;database=db_name' --username xxx --password xxx --table xxx --hive-import --hive-table amit_hive --target-dir /user/hive/amitesh123 -m 1
I need to import the DB table directly into the desired location. As far as I understand, the Sqoop command-line syntax above is written correctly, but when I execute it I get the following error:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=xxx, access=EXECUTE, inode="/user/hive/amitesh123":hive:hdfs:drwx
Someone told me that the Hive database name has to be mentioned in the Sqoop command above. Is that true? If so, can someone help me with the parameter? As far as I know, we only need to specify --table to bring a table from the DB into a Hive table. Please advise.

For further testing, I created a new folder and granted it 777 permissions, but I still hit the same error. I have now added the Hive DB.Hive-table name to --hive-table, so the new Sqoop import looks like this:

sqoop import --connect 'jdbc:sqlserver://server-IP;database=db_name' --username xxx --password xxx --table xxx --hive-import --hive-table amitesh_db.amit_hive --target-dir /amitesh012345/amitesh -m 1
However, the permission-denied error still persists:

INFO mapreduce.Job: Job job_1486315054135_2834 failed with state FAILED due to: Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=xxx, access=WRITE, inode="/amitesh012345/amitesh/_temporary/1":hdfs:hdfs:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1704)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1687)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3890)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:983)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
        at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
        at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:305)
        at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobSetup(CommitterEventHandler.java:254)
        at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:234)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=at732615, access=WRITE, inode="/amitesh012345/amitesh/_temporary/1":hdfs:hdfs:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1704)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1687)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3890)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:983)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
        at org.apache.hadoop.ipc.Client.call(Client.java:1475)
        at org.apache.hadoop.ipc.Client.call(Client.java:1412)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:55
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
        ... 13 more
17/03/14 05:23:38 INFO mapreduce.Job: Counters: 2
        Job Counters
                Total time spent by all maps in occupied slots (ms)=0
                Total time spent by all reduces in occupied slots (ms)=0
17/03/14 05:23:38 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
17/03/14 05:23:38 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 24.4698 seconds (0 bytes/sec)
17/03/14 05:23:38 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
17/03/14 05:23:38 INFO mapreduce.ImportJobBase: Retrieved 0 records.
17/03/14 05:23:38 ERROR tool.ImportTool: Error during import: Import job failed!
Second full stack trace

+++++++++++++

Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/03/14 05:38:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6_IBM_27
17/03/14 05:38:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/14 05:38:02 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
17/03/14 05:38:02 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
17/03/14 05:38:02 INFO manager.SqlManager: Using default fetchSize of 1000
17/03/14 05:38:02 INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:path_to/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:path_to/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/03/14 05:38:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [T_VND] AS t WHERE 1=0
17/03/14 05:38:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is path_to/hadoop
Note: path_to/T_VND.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/03/14 05:38:04 INFO orm.CompilationManager: Writing jar file: path_to/T_VND.jar
17/03/14 05:38:04 INFO mapreduce.ImportJobBase: Beginning import of T_VND
17/03/14 05:38:05 INFO impl.TimelineClientImpl: Timeline service address: http://xxxxxx/
17/03/14 05:38:05 INFO client.RMProxy: Connecting to ResourceManager at xxxxxx/server-IP:port
17/03/14 05:38:06 INFO db.DBInputFormat: Using read commited transaction isolation
17/03/14 05:38:07 INFO mapreduce.JobSubmitter: number of splits:1
17/03/14 05:38:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1486315054135_2836
17/03/14 05:38:07 INFO impl.YarnClientImpl: Submitted application application_1486315054135_2836
17/03/14 05:38:07 INFO mapreduce.Job: The url to track the job: http://xxxxxx/server-IP:port/proxy/application_1486315054135_2836/
17/03/14 05:38:07 INFO mapreduce.Job: Running job: job_1486315054135_2836
17/03/14 05:38:13 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=FAILED. Redirecting to job history server
17/03/14 05:38:13 INFO mapreduce.Job: Job job_1486315054135_2836 running in uber mode : false
17/03/14 05:38:13 INFO mapreduce.Job:  map 0% reduce 100%
17/03/14 05:38:13 INFO mapreduce.Job: Job job_1486315054135_2836 failed with state FAILED due to:
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: The MapReduce job has already been retired. Performance
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: counters are unavailable. To get this information,
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: you will need to enable the completed job store on
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: the jobtracker with:
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.active = true
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.hours = 1
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: A jobtracker restart is required for these settings
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: to take effect.
17/03/14 05:38:13 ERROR tool.ImportTool: Error during import: Import job failed!

Sqoop imports the data into --target-dir and only afterwards loads it into the Hive table. The user running the sqoop command therefore needs permissions on both the target directory and the Hive warehouse directory. In the first command, sqoop had no access to the --target-dir itself. Could you post the stack trace for the second command? The user at732615 does not have write access to the --target-dir.
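
A minimal sketch of how those permissions might be inspected and handed over from the command line; the warehouse path /user/hive/warehouse and the use of the hdfs superuser via sudo are assumptions that vary per cluster, and xxx stands for the importing user from the question:

# check ownership and mode of the staging directory and the Hive warehouse directory
hadoop fs -ls -d /amitesh012345/amitesh
hadoop fs -ls -d /user/hive/warehouse    # assumed default; check hive.metastore.warehouse.dir

# as the HDFS superuser, give the staging directory to the importing user
sudo -u hdfs hadoop fs -chown -R xxx:hdfs /amitesh012345/amitesh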
One thing I want to point out: in the second command, in /amitesh012345/amitesh, "amitesh" did not exist, so I removed it from the command, and this time the error is different. The second full stack trace has been edited into the original post. Please excuse my mistake.

Please close the thread, I have resolved the issue. I simply removed the "/" from amitesh012345 and that solved the problem. It makes sense: earlier I was using /amitesh012345 as my --target-dir, and when I ran "hadoop fs -ls /" I did not find amitesh012345 there. Thanks everyone for your help and time.
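
Putting the resolution together: dropping the leading "/" turns --target-dir into a relative path, which HDFS resolves under the user's home directory (/user/<username>), where the importing user normally already has write access. A hedged sketch of what the final working command likely looked like (the thread does not show the exact final form):

sqoop import \
  --connect 'jdbc:sqlserver://server-IP;database=db_name' \
  --username xxx --password xxx \
  --table xxx \
  --hive-import \
  --hive-table amitesh_db.amit_hive \
  --target-dir amitesh012345 \
  -m 1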