TDCH fails when importing from Teradata into a Hive table

hadoop jar /usr/lib/tdch/1.4/lib/teradata-connector-1.4.4.jar com.teradata.connector.common.tool.ConnectorImportTool \
    -url jdbc:teradata://192.168.2.128/DATABASE=db_1 \
    -username dbc \
    -password dbc \
    -jobtype hive \
    -fileformat textfile \
    -sourcetable employee \
    -nummappers 1 \
    -targettable td_employee \
    -targettableschema "emp_id int, firstname string, lastname string"

Here is the log. I have already added the Hive SerDe jar to HADOOP_CLASSPATH.
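
For reference, this is roughly how the classpath was set up before launching the job. It is only a sketch: the paths assume the Hortonworks sandbox layout, and the jar version number is illustrative, so adjust both for your install:

    # Sketch only: sandbox paths; the hive-serde jar version is illustrative
    export HIVE_HOME=/usr/hdp/current/hive-client
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/conf:$HIVE_HOME/lib/hive-serde-1.2.1.jar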

17/04/20 04:26:56 INFO tool.ConnectorImportTool: ConnectorImportTool starts at 1492687616920
17/04/20 04:26:58 INFO common.ConnectorPlugin: load plugins in jar:file:/usr/lib/tdch/1.4/lib/teradata-connector-1.4.4.jar!/teradata.connector.plugins.xml
17/04/20 04:26:59 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/04/20 04:26:59 INFO metastore.ObjectStore: ObjectStore, initialize called
17/04/20 04:26:59 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
17/04/20 04:26:59 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/04/20 04:27:03 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/04/20 04:27:03 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/04/20 04:27:05 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/04/20 04:27:05 INFO metastore.ObjectStore: Initialized ObjectStore
17/04/20 04:27:06 INFO metastore.HiveMetaStore: Added admin role in metastore
17/04/20 04:27:06 INFO metastore.HiveMetaStore: Added public role in metastore
17/04/20 04:27:06 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
17/04/20 04:27:06 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=td_employee
17/04/20 04:27:06 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_table : db=default tbl=td_employee  
17/04/20 04:27:06 INFO processor.TeradataInputProcessor: input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at:  1492687626978
17/04/20 04:27:08 INFO utils.TeradataUtils: the input database product is Teradata
17/04/20 04:27:08 INFO utils.TeradataUtils: the input database version is 16.0
17/04/20 04:27:08 INFO utils.TeradataUtils: the jdbc driver version is 15.0
17/04/20 04:27:08 INFO processor.TeradataInputProcessor: the teradata connector for hadoop version is: 1.4.4
17/04/20 04:27:08 INFO processor.TeradataInputProcessor: input jdbc properties are jdbc:teradata://192.168.2.128/DATABASE=db_1
17/04/20 04:27:09 INFO processor.TeradataInputProcessor: the number of mappers are 1
17/04/20 04:27:09 INFO processor.TeradataInputProcessor: input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at:  1492687629069
17/04/20 04:27:09 INFO processor.TeradataInputProcessor: the total elapsed time of input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 2s
17/04/20 04:27:10 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
17/04/20 04:27:10 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=td_employee
17/04/20 04:27:10 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_table : db=default tbl=td_employee  
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/04/20 04:27:10 INFO metastore.ObjectStore: ObjectStore, initialize called
17/04/20 04:27:10 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
17/04/20 04:27:10 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/04/20 04:27:10 INFO metastore.ObjectStore: Initialized ObjectStore
17/04/20 04:27:10 INFO processor.HiveOutputProcessor: hive table default.td_employee does not exist
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
17/04/20 04:27:10 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=Shutting down the object store...   
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
17/04/20 04:27:10 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=Metastore shutdown complete.    
17/04/20 04:27:11 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
17/04/20 04:27:13 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
17/04/20 04:27:13 WARN mapred.ResourceMgrDelegate: getBlacklistedTrackers - Not implemented yet
17/04/20 04:27:13 INFO mapreduce.JobSubmitter: number of splits:1
17/04/20 04:27:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1492661647325_0001
17/04/20 04:27:14 INFO impl.YarnClientImpl: Submitted application application_1492661647325_0001
17/04/20 04:27:14 INFO mapreduce.Job: The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1492661647325_0001/
17/04/20 04:27:14 INFO mapreduce.Job: Running job: job_1492661647325_0001
17/04/20 04:27:34 INFO mapreduce.Job: Job job_1492661647325_0001 running in uber mode : false
17/04/20 04:27:34 INFO mapreduce.Job:  map 0% reduce 0%
17/04/20 04:27:49 INFO mapreduce.Job: Task Id : attempt_1492661647325_0001_m_000000_0, Status : FAILED
Error: java.lang.ClassNotFoundException: org.apache.hadoop.hive.serde2.SerDeException
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:190)
    at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:91)
    at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

17/04/20 04:28:00 INFO mapreduce.Job: Task Id : attempt_1492661647325_0001_m_000000_1, Status : FAILED
Error: org.apache.hadoop.fs.FileAlreadyExistsException: /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1604)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1465)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1390)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:394)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:390)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:390)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:334)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:132)
    at com.teradata.connector.hive.HiveTextFileOutputFormat.getRecordWriter(HiveTextFileOutputFormat.java:22)
    at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89)
    at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy15.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:258)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1600)
    ... 22 more

17/04/20 04:28:05 INFO mapreduce.Job: Task Id : attempt_1492661647325_0001_m_000000_2, Status : FAILED
Error: org.apache.hadoop.fs.FileAlreadyExistsException: /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1604)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1465)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1390)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:394)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:390)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:390)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:334)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:132)
    at com.teradata.connector.hive.HiveTextFileOutputFormat.getRecordWriter(HiveTextFileOutputFormat.java:22)
    at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89)
    at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy15.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:258)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1600)
    ... 22 more

17/04/20 04:28:13 INFO mapreduce.Job:  map 100% reduce 0%
17/04/20 04:28:14 INFO mapreduce.Job: Job job_1492661647325_0001 failed with state FAILED due to: Task failed task_1492661647325_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

17/04/20 04:28:14 INFO mapreduce.Job: Counters: 12
    Job Counters 
        Failed map tasks=4
        Launched map tasks=4
        Other local map tasks=3
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=30868
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=30868
        Total vcore-seconds taken by all map tasks=30868
        Total megabyte-seconds taken by all map tasks=7717000
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
17/04/20 04:28:14 WARN tool.ConnectorJobRunner: com.teradata.connector.common.exception.ConnectorException: The output post processor returns 1
17/04/20 04:28:14 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at:  1492687694783
17/04/20 04:28:15 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at:  1492687694783
17/04/20 04:28:15 INFO processor.TeradataInputProcessor: the total elapsed time of input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 0s
17/04/20 04:28:15 INFO tool.ConnectorImportTool: ConnectorImportTool ends at 1492687695150
17/04/20 04:28:15 INFO tool.ConnectorImportTool: ConnectorImportTool time is 78s
17/04/20 04:28:15 INFO tool.ConnectorImportTool: job completed with exit code 1
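
A note on reading the log: only the first attempt (attempt_1492661647325_0001_m_000000_0) fails with the ClassNotFoundException; the later retries fail with FileAlreadyExistsException simply because the first attempt had already created /user/root/temp_042710/part-m-00000 before dying. Before re-running the job, the stale staging output can be cleared, for example:

    # remove the map output left behind by the failed first attempt
    hdfs dfs -rm -r /user/root/temp_042710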