
Hadoop not working with LXC and Ubuntu 16.04


I am running Hadoop 2.6.0 in an LXC container. The host PC's OS is Ubuntu 16.04, and the container's OS is also Ubuntu 16.04.

Has anyone been able to get Hadoop running on LXC (Ubuntu 16.04)?

The following lines show the error output from running Hadoop:

hadoop@master:/tmp$ ./restart-hadoop-dfs.sh
+ stop-dfs.sh
Stopping namenodes on [master]
master: stopping namenode
cluster-slave02: no datanode to stop
master: stopping datanode
cluster01-slave04: no datanode to stop
cluster01-slave03: no datanode to stop
cluster-slave01: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
+ stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
cluster-slave02: stopping nodemanager
master: stopping nodemanager
cluster-slave01: stopping nodemanager
cluster01-slave04: stopping nodemanager
cluster01-slave03: stopping nodemanager
master: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
+ sudo rm -rf /var/hadoop/hdfs/datanode /var/hadoop/hdfs/namenode
+ hdfs namenode -format
16/07/01 09:33:58 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/157.82.3.142
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_91
************************************************************/
16/07/01 09:33:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/07/01 09:33:58 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-dff9fad1-84cc-448d-a135-77ab870488a6
16/07/01 09:33:58 INFO namenode.FSNamesystem: No KeyProvider found.
16/07/01 09:33:58 INFO namenode.FSNamesystem: fsLock is fair:true
16/07/01 09:33:58 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/07/01 09:33:58 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/07/01 09:33:58 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/07/01 09:33:58 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Jul 01 09:33:58
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map BlocksMap
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/07/01 09:33:58 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/07/01 09:33:58 INFO blockmanagement.BlockManager: defaultReplication         = 2
16/07/01 09:33:58 INFO blockmanagement.BlockManager: maxReplication             = 512
16/07/01 09:33:58 INFO blockmanagement.BlockManager: minReplication             = 1
16/07/01 09:33:58 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/07/01 09:33:58 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/07/01 09:33:58 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/07/01 09:33:58 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/07/01 09:33:58 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/07/01 09:33:58 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
16/07/01 09:33:58 INFO namenode.FSNamesystem: supergroup          = supergroup
16/07/01 09:33:58 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/07/01 09:33:58 INFO namenode.FSNamesystem: HA Enabled: false
16/07/01 09:33:58 INFO namenode.FSNamesystem: Append Enabled: true
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map INodeMap
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/07/01 09:33:58 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map cachedBlocks
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/07/01 09:33:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/07/01 09:33:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/07/01 09:33:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/07/01 09:33:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/07/01 09:33:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/07/01 09:33:58 INFO namenode.NNConf: ACLs enabled? false
16/07/01 09:33:58 INFO namenode.NNConf: XAttrs enabled? true
16/07/01 09:33:58 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/07/01 09:33:58 INFO namenode.FSImage: Allocated new BlockPoolId: BP-275974701-157.82.3.142-1467365638786
16/07/01 09:33:58 INFO common.Storage: Storage directory /var/hadoop/hdfs/namenode has been successfully formatted.
16/07/01 09:33:59 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/07/01 09:33:59 INFO util.ExitUtil: Exiting with status 0
16/07/01 09:33:59 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/157.82.3.142
************************************************************/
+ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-namenode-master.out
cluster-slave02: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster-slave02.out
cluster-slave01: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster-slave01.out
cluster01-slave03: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster01-slave03.out
master: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-master.out
cluster01-slave04: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster01-slave04.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-master.out
+ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-master.out
cluster-slave01: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster-slave01.out
cluster01-slave03: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster01-slave03.out
cluster01-slave04: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster01-slave04.out
cluster-slave02: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster-slave02.out
master: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-master.out
+ hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 3 3
Number of Maps  = 3
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
16/07/01 09:34:24 INFO client.RMProxy: Connecting to ResourceManager at master/157.82.3.142:8032
16/07/01 09:34:25 INFO input.FileInputFormat: Total input paths to process : 3
16/07/01 09:34:25 INFO mapreduce.JobSubmitter: number of splits:3
16/07/01 09:34:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467365657240_0001
16/07/01 09:34:25 INFO impl.YarnClientImpl: Submitted application application_1467365657240_0001
16/07/01 09:34:26 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1467365657240_0001/
16/07/01 09:34:26 INFO mapreduce.Job: Running job: job_1467365657240_0001
16/07/01 09:34:35 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:38101. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:36 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:38101. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:37 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:38101. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:40 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:36772. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:41 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:36772. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:42 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:36772. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:42 INFO mapreduce.Job: Job job_1467365657240_0001 running in uber mode : false
16/07/01 09:34:42 INFO mapreduce.Job:  map 0% reduce 0%
16/07/01 09:34:42 INFO mapreduce.Job: Job job_1467365657240_0001 failed with state FAILED due to: Application application_1467365657240_0001 failed 2 times due to AM Container for appattempt_1467365657240_0001_000002 exited with  exitCode: 255
For more detailed output, check application tracking page:http://master:8088/proxy/application_1467365657240_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1467365657240_0001_02_000001
Exit code: 255
Stack trace: ExitCodeException exitCode=255:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 255
Failing this attempt. Failing the application.
16/07/01 09:34:42 INFO mapreduce.Job: Counters: 0
Job Finished in 17.93 seconds
java.io.FileNotFoundException: File does not exist: hdfs://master:9000/user/hadoop/QuasiMonteCarlo_1467365661919_1251182664/out/reduce-out
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
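
The restart-hadoop-dfs.sh script itself is not shown in the question, but the `+` trace lines in the output above show exactly what it executes. Here is a minimal sketch reconstructed from that trace; the shebang and the set -x line are assumptions, while the commands are copied verbatim from the trace:

#!/bin/bash
# Sketch of restart-hadoop-dfs.sh, reconstructed from the "+" trace lines
# above (the script evidently runs with command tracing enabled).
set -x

stop-dfs.sh      # stop NameNode, DataNodes, and SecondaryNameNode
stop-yarn.sh     # stop ResourceManager and NodeManagers

# Wipe the HDFS storage directories on the master, then re-format HDFS
# (a fresh cluster ID is generated on each format).
sudo rm -rf /var/hadoop/hdfs/datanode /var/hadoop/hdfs/namenode
hdfs namenode -format

start-dfs.sh     # restart the HDFS daemons
start-yarn.sh    # restart the YARN daemons

# Smoke test: estimate pi with 3 maps and 3 samples per map.
hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 3 3

To see why the ApplicationMaster container exited with code 255, the container logs are usually more informative than the job counters. If log aggregation is enabled, they can be pulled with the standard YARN CLI, for example:

yarn logs -applicationId application_1467365657240_0001

Otherwise, the stdout/stderr of the failed attempt can typically be read directly in the NodeManager's local userlogs directory on the node that ran the ApplicationMaster.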