Hadoop unable to close file because the last block does not have enough number of replicas

I ran into the following error. The health test result for HDFS_CANARY_HEALTH has become bad: Canary test failed to write file in directory /tmp/.cloudera_health_monitoring_canary_files

2019-07-14 00:02:13,312 INFO hive.metastore: Trying to connect to metastore with URI xxxx:abc
2019-07-14 00:02:13,322 INFO hive.metastore: Opened a connection to metastore, current connections: 1

2019-07-14 00:02:13,322 INFO hive.metastore: Connected to metastore.

2019-07-14 00:02:14,255 INFO hive.metastore: Closed a connection to metastore, current connections: 0
2019-07-14 00:02:21,444 INFO com.cloudera.cmon.tstore.leveldb.LDBTimeSeriesRollupManager: Finished rollup: duration=PT21.280S, numStreamsChecked=607071, numStreamsRolledUp=199173
2019-07-14 00:03:04,172 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2659d409 connecting to ZooKeeper ensemble=___
2019-07-14 00:03:04,182 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=ReplicationAdmin connecting to ZooKeeper ensemble=____
2019-07-14 00:03:04,198 INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x16b95f7c324bcf3
2019-07-14 00:03:08,346 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2fc2fce5 connecting to ZooKeeper ensemble=___
2019-07-14 00:03:08,354 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=ReplicationAdmin connecting to ZooKeeper ensemble=___
2019-07-14 00:03:08,365 INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x26b8ea7e7c6cfde
2019-07-14 00:03:11,897 INFO org.apache.hadoop.hdfs.DFSClient: Could not complete /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2019_07_14-00_03_03 retrying...
2019-07-14 00:03:18,388 INFO org.apache.hadoop.hdfs.DFSClient: Could not complete /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2019_07_14-00_03_03 retrying...
2019-07-14 00:03:18,562 ERROR com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary: com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary@5b4e2d83 for hdfs://hdfs-cluster_name Failed to write to /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2019_07_14-00_03_03. Error: java.io.IOException: Unable to close file because the last block BP-972893351-10.14.208.100-1438206436439:blk_4349006744_3275530539 does not have enough number of replicas.
My question is: which "file" is this error referring to in "Unable to close file because the last block does not have enough number of replicas"?
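
For context, the canary essentially creates a small file under /tmp/.cloudera_health_monitoring_canary_files, writes to it, and closes it. Below is a minimal Java sketch of that write-and-close pattern (the file name, payload, and retry value are illustrative, not the canary's actual code). The close() call is where this exception surfaces: close() asks the NameNode to complete the file, and if the last block has not yet been reported by enough DataNodes, the client logs "Could not complete ... retrying" and eventually throws the "Unable to close file" IOException seen above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CanaryWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // dfs.client.block.write.locateFollowingBlock.retries (default 5) bounds
            // how many times the client retries completing the file before giving up
            // with "Unable to close file because the last block ... does not have
            // enough number of replicas". Raising it is a common mitigation; 10 here
            // is an illustrative value, not a recommendation.
            conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);

            FileSystem fs = FileSystem.get(conf);
            // Hypothetical file name mirroring the canary path from the log above.
            Path p = new Path("/tmp/.cloudera_health_monitoring_canary_files/.canary_file_example");

            try (FSDataOutputStream out = fs.create(p, true)) {
                out.writeBytes("canary");
            } // close() runs here; this is the call that can throw the IOException above
            fs.delete(p, false);
        }
    }

One way to inspect the replication state of a path that fails like this is hdfs fsck <path> -files -blocks -locations.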