Hadoop shuts down the NameNode at startup


I have Hadoop running on my laptop. To start Hadoop I execute the command `start-all.cmd`. This should launch 4 daemons, but the cmd window shows only 3 of the 4 processes, and the NameNode exits with:

SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-T7R9JV1/192.168.1.100

How can I avoid this?


    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = DESKTOP-T7R9JV1/192.168.1.101
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 2.9.1
    19/09/08 22:03:13 INFO namenode.NameNode: createNameNode []
    19/09/08 22:03:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    19/09/08 22:03:14 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
    19/09/08 22:03:14 INFO impl.MetricsSystemImpl: NameNode metrics system started
    19/09/08 22:03:14 INFO namenode.NameNode: fs.defaultFS is hdfs://0.0.0.0:19000
    19/09/08 22:03:14 INFO namenode.NameNode: Clients are to use 0.0.0.0:19000 to access this namenode/service.
    19/09/08 22:03:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    19/09/08 22:03:15 INFO util.JvmPauseMonitor: Starting JVM pause monitor
    19/09/08 22:03:15 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
    19/09/08 22:03:15 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
    19/09/08 22:03:15 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
    19/09/08 22:03:15 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
    19/09/08 22:03:15 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
    19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
    19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
    19/09/08 22:03:16 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
    19/09/08 22:03:16 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
    19/09/08 22:03:16 INFO http.HttpServer2: Jetty bound to port 50070
    19/09/08 22:03:16 INFO mortbay.log: jetty-6.1.26
    19/09/08 22:03:16 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
    19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
            at java.net.URI$Parser.fail(URI.java:2848)
            at java.net.URI$Parser.checkChars(URI.java:3021)
            at java.net.URI$Parser.parse(URI.java:3058)
            at java.net.URI.<init>(URI.java:588)
            at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
            at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1417)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:617)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
    19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
            at java.net.URI$Parser.fail(URI.java:2848)
            at java.net.URI$Parser.checkChars(URI.java:3021)
            at java.net.URI$Parser.parse(URI.java:3058)
            at java.net.URI.<init>(URI.java:588)
            at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
            at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1507)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1476)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:619)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
    19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    19/09/08 22:03:17 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
    19/09/08 22:03:17 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
    19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
            at java.net.URI$Parser.fail(URI.java:2848)
            at java.net.URI$Parser.checkChars(URI.java:3021)
            at java.net.URI$Parser.parse(URI.java:3058)
            at java.net.URI.<init>(URI.java:588)
            at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
            at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1417)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
    19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
    java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
            at java.net.URI$Parser.fail(URI.java:2848)
            at java.net.URI$Parser.checkChars(URI.java:3021)
            at java.net.URI$Parser.parse(URI.java:3058)
            at java.net.URI.<init>(URI.java:588)
            at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
            at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1507)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1476)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
    19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    19/09/08 22:03:17 INFO namenode.FSEditLog: Edit logging is async:true
    19/09/08 22:03:17 INFO namenode.FSNamesystem: KeyProvider: null
    19/09/08 22:03:17 INFO namenode.FSNamesystem: fsLock is fair: true
    19/09/08 22:03:17 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
    19/09/08 22:03:17 INFO namenode.FSNamesystem: fsOwner             = User (auth:SIMPLE)
    19/09/08 22:03:17 INFO namenode.FSNamesystem: supergroup          = supergroup
    19/09/08 22:03:17 INFO namenode.FSNamesystem: isPermissionEnabled = true
    19/09/08 22:03:17 INFO namenode.FSNamesystem: HA Enabled: false
    19/09/08 22:03:17 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
    19/09/08 22:03:17 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
    19/09/08 22:03:17 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Sep 08 22:03:17
    19/09/08 22:03:17 INFO util.GSet: Computing capacity for map BlocksMap
    19/09/08 22:03:17 INFO util.GSet: VM type       = 32-bit
    19/09/08 22:03:17 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
    19/09/08 22:03:17 INFO util.GSet: capacity      = 2^22 = 4194304 entries
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
    19/09/08 22:03:17 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
    19/09/08 22:03:17 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
    19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
    19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
    19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: defaultReplication         = 1
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxReplication             = 512
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: minReplication             = 1
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
    19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
    19/09/08 22:03:17 INFO namenode.FSNamesystem: Append Enabled: true
    19/09/08 22:03:17 INFO util.GSet: Computing capacity for map INodeMap
    19/09/08 22:03:17 INFO util.GSet: VM type       = 32-bit
    19/09/08 22:03:17 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
    19/09/08 22:03:17 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    19/09/08 22:03:17 INFO namenode.FSDirectory: ACLs enabled? false
    19/09/08 22:03:17 INFO namenode.FSDirectory: XAttrs enabled? true
    19/09/08 22:03:17 INFO namenode.NameNode: Caching file names occurring more than 10 times
    19/09/08 22:03:17 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
    19/09/08 22:03:17 INFO util.GSet: Computing capacity for map cachedBlocks
    19/09/08 22:03:17 INFO util.GSet: VM type       = 32-bit
    19/09/08 22:03:17 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
    19/09/08 22:03:17 INFO util.GSet: capacity      = 2^19 = 524288 entries
    19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
    19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
    19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
    19/09/08 22:03:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
    19/09/08 22:03:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
    19/09/08 22:03:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
    19/09/08 22:03:17 INFO util.GSet: VM type       = 32-bit
    19/09/08 22:03:17 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
    19/09/08 22:03:17 INFO util.GSet: capacity      = 2^16 = 65536 entries
    19/09/08 22:03:17 ERROR namenode.NameNode: Failed to start namenode.
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
            at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
            at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:606)
            at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:1006)
            at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:558)
            at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:518)
            at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:370)
            at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:226)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1048)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
    19/09/08 22:03:17 INFO util.ExitUtil: Exiting with status 1: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    19/09/08 22:03:17 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-T7R9JV1/192.168.1.101
    ************************************************************/
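The repeated `URISyntaxException` warnings come from the storage directory being given as a bare Windows path. They can be addressed by declaring the directory as a `file:` URI in `hdfs-site.xml`. A minimal sketch, assuming the directory layout shown in the log:

```xml
<!-- hdfs-site.xml: give the NameNode storage directory as a file: URI,
     not a bare Windows path such as C:\BigData\hadoop-2.9.1\data\namenode -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/C:/BigData/hadoop-2.9.1/data/namenode</value>
  </property>
</configuration>
```

Note that these warnings are non-fatal; the actual shutdown (`Exiting with status 1`) is caused by the `UnsatisfiedLinkError` on `NativeIO$Windows.access0`, which on Windows usually means `winutils.exe` and a `hadoop.dll` matching this Hadoop build are missing from `%HADOOP_HOME%\bin` or not on the `PATH`.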
The trailing lines show the storage path in its two forms: the plain Windows path from the configuration, which triggers the `URISyntaxException` warnings, and the `file:` URI form that Hadoop expects:

 C:\BigData\hadoop-2.9.1\data\namenode
file:/C:/BigData/hadoop-2.9.1/data/namenode
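The "Illegal character in opaque part at index 2" message can be reproduced with `java.net.URI` directly, which is what Hadoop's `Util.stringAsURI` tries first: the drive letter `C:` is parsed as a URI scheme, so the backslash that follows is an illegal character in the opaque part. A minimal sketch (the class and method names here are illustrative, not Hadoop's):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class HdfsPathCheck {
    // Returns true if the string parses as a java.net.URI.
    static boolean parsesAsUri(String s) {
        try {
            new URI(s);
            return true;
        } catch (URISyntaxException e) {
            // e.g. "Illegal character in opaque part at index 2: C:\BigData\..."
            return false;
        }
    }

    public static void main(String[] args) {
        // Bare Windows path: "C:" is read as a scheme, backslash is illegal.
        System.out.println(parsesAsUri("C:\\BigData\\hadoop-2.9.1\\data\\namenode"));
        // file: URI with forward slashes parses cleanly.
        System.out.println(parsesAsUri("file:/C:/BigData/hadoop-2.9.1/data/namenode"));
    }
}
```

This is why the log asks for the path "specified as a URI in configuration files": the `file:/C:/...` spelling parses, the `C:\...` spelling does not.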