Hadoop cluster in safe mode (Namenode is in safe mode): what resources do I need to free to get it out of safe mode?

I have taken a snapshot of the cluster. Here is what I found:

Safe mode is ON
Configured Capacity: 47430737653760 (43.14 TB)
Present Capacity: 20590420062208 (18.73 TB)
DFS Remaining: 19343468953600 (17.59 TB)
DFS Used: 1246951108608 (1.13 TB)
DFS Used%: 6.06%
Under replicated blocks: 2
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (5):

Name: 10.0.70.144:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 9486147530752 (8.63 TB)
DFS Used: 209829912576 (195.42 GB)
Non DFS Used: 4733044670464 (4.30 TB)
DFS Remaining: 4543272947712 (4.13 TB)
DFS Used%: 2.21%
DFS Remaining%: 47.89%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Oct 13 16:57:21 IST 2018


Name: 10.0.70.143:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 9486147530752 (8.63 TB)
DFS Used: 206771748864 (192.57 GB)
Non DFS Used: 4070449033216 (3.70 TB)
DFS Remaining: 5208926748672 (4.74 TB)
DFS Used%: 2.18%
DFS Remaining%: 54.91%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Oct 13 16:57:21 IST 2018


Name: 10.0.70.145:50010 (slave3)
Hostname: slave3
Decommission Status : Normal
Configured Capacity: 9486147530752 (8.63 TB)
DFS Used: 205542408192 (191.43 GB)
Non DFS Used: 5523446423552 (5.02 TB)
DFS Remaining: 3757158699008 (3.42 TB)
DFS Used%: 2.17%
DFS Remaining%: 39.61%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Oct 13 16:57:21 IST 2018


Name: 10.0.70.147:50010 (slave5)
Hostname: slave5
Decommission Status : Normal
Configured Capacity: 9486147530752 (8.63 TB)
DFS Used: 209182961664 (194.82 GB)
Non DFS Used: 5516635717632 (5.02 TB)
DFS Remaining: 3760328851456 (3.42 TB)
DFS Used%: 2.21%
DFS Remaining%: 39.64%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Oct 13 16:57:22 IST 2018


Name: 10.0.70.146:50010 (slave4)
Hostname: slave4
Decommission Status : Normal
Configured Capacity: 9486147530752 (8.63 TB)
DFS Used: 415624077312 (387.08 GB)
Non DFS Used: 6996741746688 (6.36 TB)
DFS Remaining: 2073781706752 (1.89 TB)
DFS Used%: 4.38%
DFS Remaining%: 21.86%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Oct 13 16:57:21 IST 2018

What resources do I need to free so that the NN leaves safe mode? I could simply use the dfsadmin safemode leave command, but I need to make sure the MR jobs will not fail the next time they run. I am trying to process 400 GB of text data with MR jobs, and I still have another 3.2 TB of data to process. Please help me process this data efficiently.
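
For reference, a minimal sketch of the safemode commands involved (assuming the hdfs client is on the PATH and the commands are run as the HDFS superuser):

hdfs dfsadmin -safemode get      # report whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode leave    # force the NameNode out of safe mode once the underlying problem is fixed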

The Namenode had a disk space problem: /dev/mapper/centos-root was 100% full. I deleted some files from it to create some space. Here is the current snapshot:

[hduser@secondary ~]$ df 
Filesystem               1K-blocks      Used Available Use% Mounted on
/dev/mapper/centos-root 1116838084 557248752 502833928  53% /
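
To see which directories are actually filling the root filesystem before deleting anything, a rough sketch using standard GNU tools (the depth and output count are illustrative):

du -xh --max-depth=2 / 2>/dev/null | sort -hr | head -20   # largest directories on /; -x stays on this filesystem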

After creating the space, I left safe mode with the command hadoop dfsadmin -safemode leave and ran the MR job, which completed successfully.

MR jobs don't just generically "process data efficiently". What jobs are you trying to run, and what errors do you get when they fail? Storage is clearly not the cause here, and YARN jobs do not normally push the namenode into safe mode... go look at its logs.

This is what the logs say when I do a simple ls on the raw data directory in HDFS. The raw data directory has 1.8 million files:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
Right, Hadoop does not work well with millions of small files. You should compress and archive them all, or you will have to increase the namenode heap size. Still, I don't think that is the problem here. The main issue is:
[hduser@secondary ~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  1.1T 1011G   19M 100% /
centos-root is 100% full. I need to delete some of the files that are taking up most of the space.
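
Following up on the small-files comment above, a hedged sketch of the two suggested mitigations; the archive name, paths, and heap size below are illustrative, not taken from the original post:

# Pack the millions of small files into a single Hadoop Archive (HAR) so the NameNode tracks far fewer objects
hadoop archive -archiveName rawdata.har -p /user/hduser rawdata /user/hduser/archived

# Alternatively, give the NameNode a larger heap, e.g. in etc/hadoop/hadoop-env.sh
export HADOOP_NAMENODE_OPTS="-Xmx8g ${HADOOP_NAMENODE_OPTS}"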