
Unable to run Apache Kylin inside Docker on CentOS 7

I am using the image from Kylin's repository. Then I run it, as all the tutorials suggest:

docker run -d \
-m 8G \
-p 7070:7070 \
-p 8088:8088 \
-p 50070:50070 \
-p 8032:8032 \
-p 8042:8042 \
-p 60010:60010 \
apache-kylin-standalone
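Before starting the container, it is worth confirming that none of the published host ports are already bound. A minimal sketch, assuming `ss` from iproute2 is available (substitute `netstat -tlnp` on older systems):

```shell
# Loop over the host ports published by the docker run command above
# and report whether each one is already bound by another process.
for port in 7070 8088 50070 8032 8042 60010; do
  if ss -tln 2>/dev/null | grep -q ":$port "; then
    echo "port $port is already in use"
  else
    echo "port $port is free"
  fi
done
```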
None of the ports listed in this command are used by any other service, so that is not the problem. However, I see some error messages in the Docker logs:

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r d4c8d4d4d203c934e8074b31289a28724c0842cf; compiled by 'jenkins' on 2015-04-10T18:40Z
STARTUP_MSG:   java = 1.8.0_141
************************************************************/
20/02/24 13:34:07 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/02/24 13:34:07 INFO namenode.NameNode: createNameNode [-format]
20/02/24 13:34:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:34:08 WARN common.Util: Path /data/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
20/02/24 13:34:08 WARN common.Util: Path /data/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-bcf44896-4c6d-47e7-97f2-0302ba9963fd
20/02/24 13:34:08 INFO namenode.FSNamesystem: No KeyProvider found.
20/02/24 13:34:08 INFO namenode.FSNamesystem: fsLock is fair:true
20/02/24 13:34:08 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
20/02/24 13:34:08 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
20/02/24 13:34:08 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20/02/24 13:34:08 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Feb 24 13:34:08
20/02/24 13:34:08 INFO util.GSet: Computing capacity for map BlocksMap
20/02/24 13:34:08 INFO util.GSet: VM type       = 64-bit
20/02/24 13:34:08 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
20/02/24 13:34:08 INFO util.GSet: capacity      = 2^21 = 2097152 entries
20/02/24 13:34:08 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
20/02/24 13:34:08 INFO blockmanagement.BlockManager: defaultReplication         = 1
20/02/24 13:34:08 INFO blockmanagement.BlockManager: maxReplication             = 512
20/02/24 13:34:08 INFO blockmanagement.BlockManager: minReplication             = 1
20/02/24 13:34:08 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
20/02/24 13:34:08 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
20/02/24 13:34:08 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
20/02/24 13:34:08 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
20/02/24 13:34:08 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
20/02/24 13:34:08 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
20/02/24 13:34:08 INFO namenode.FSNamesystem: supergroup          = supergroup
20/02/24 13:34:08 INFO namenode.FSNamesystem: isPermissionEnabled = true
20/02/24 13:34:08 INFO namenode.FSNamesystem: HA Enabled: false
20/02/24 13:34:08 INFO namenode.FSNamesystem: Append Enabled: true
20/02/24 13:34:08 INFO util.GSet: Computing capacity for map INodeMap
20/02/24 13:34:08 INFO util.GSet: VM type       = 64-bit
20/02/24 13:34:08 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
20/02/24 13:34:08 INFO util.GSet: capacity      = 2^20 = 1048576 entries
20/02/24 13:34:08 INFO namenode.FSDirectory: ACLs enabled? false
20/02/24 13:34:08 INFO namenode.FSDirectory: XAttrs enabled? true
20/02/24 13:34:08 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
20/02/24 13:34:08 INFO namenode.NameNode: Caching file names occuring more than 10 times
20/02/24 13:34:08 INFO util.GSet: Computing capacity for map cachedBlocks
20/02/24 13:34:08 INFO util.GSet: VM type       = 64-bit
20/02/24 13:34:08 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
20/02/24 13:34:08 INFO util.GSet: capacity      = 2^18 = 262144 entries
20/02/24 13:34:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/02/24 13:34:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
20/02/24 13:34:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
20/02/24 13:34:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
20/02/24 13:34:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
20/02/24 13:34:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20/02/24 13:34:08 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/02/24 13:34:08 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/02/24 13:34:08 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/02/24 13:34:08 INFO util.GSet: VM type       = 64-bit
20/02/24 13:34:08 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
20/02/24 13:34:08 INFO util.GSet: capacity      = 2^15 = 32768 entries
20/02/24 13:34:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1695907627-172.23.0.2-1582551248773
20/02/24 13:34:08 INFO common.Storage: Storage directory /data/hadoop/dfs/name has been successfully formatted.
20/02/24 13:34:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/02/24 13:34:08 INFO util.ExitUtil: Exiting with status 0
20/02/24 13:34:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ee0fc3002cae/172.23.0.2
************************************************************/
chown: missing operand after `/home/admin/hadoop-2.7.0/logs'
Try `chown --help' for more information.
starting namenode, logging to /home/admin/hadoop-2.7.0/logs/hadoop--namenode-ee0fc3002cae.out
starting datanode, logging to /home/admin/hadoop-2.7.0/logs/hadoop--datanode-ee0fc3002cae.out
starting resourcemanager, logging to /home/admin/hadoop-2.7.0/logs/yarn--resourcemanager-ee0fc3002cae.out
starting nodemanager, logging to /home/admin/hadoop-2.7.0/logs/yarn--nodemanager-ee0fc3002cae.out
chown: missing operand after `/home/admin/hadoop-2.7.0/logs'
Try `chown --help' for more information.
starting historyserver, logging to /home/admin/hadoop-2.7.0/logs/mapred--historyserver-ee0fc3002cae.out
starting master, logging to /home/admin/hbase-1.1.2/logs/hbase--master-ee0fc3002cae.out
20/02/24 13:34:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:34:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:34:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:34:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:34:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:35:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:35:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:35:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/24 13:35:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `/lib/kylin-job-3.0.0-alpha2.jar': No such file or directory
starting /home/admin/jdk1.8.0_141/bin/java  -cp /home/admin/apache-livy-0.6.0-incubating-bin/jars/*:/home/admin/apache-livy-0.6.0-incubating-bin/conf:/home/admin/spark-2.3.1-bin-hadoop2.6/conf:/home/admin/hadoop-2.7.0/etc/hadoop: org.apache.livy.server.LivyServer, logging to /home/admin/apache-livy-0.6.0-incubating-bin/logs/livy--server.out
I am not sure whether this matters, but I also see in the logs:

/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ee0fc3002cae/172.23.0.2
************************************************************/
chown: missing operand after `/home/admin/hadoop-2.7.0/logs'
To sum up: when I go to
http://127.0.0.1:7070/
http://127.0.0.1:60010/
those pages do not load. However,
http://127.0.0.1:50070/
http://127.0.0.1:8088/
work as expected. What could be wrong here, and how can I fix it?

put: `/lib/kylin-job-3.0.0-alpha2.jar': No such file or directory
That is the problem. It comes from the
entrypoint.sh
script, on this line:
hdfs dfs -put -f $KYLIN_HOME/lib/kylin-job-$KYLIN_VERSION.jar hdfs://localhost:9000/kylin/livy/
What is your
KYLIN_VERSION set to? It is looking for
3.0.0-alpha2
, but the master-branch Dockerfile has
KYLIN_VERSION=3.1.0
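One way to confirm the mismatch is to compare the version the entrypoint expects with the jar the image actually ships. A rough sketch (the container name `kylin` and the `/home/admin` layout are assumptions based on the log paths above):

```shell
# Print the KYLIN_VERSION baked into the container's environment
docker exec kylin bash -c 'echo "KYLIN_VERSION=$KYLIN_VERSION"'
# List the kylin-job jar(s) actually present under the Kylin lib directory
docker exec kylin bash -c 'ls /home/admin/*/lib/kylin-job-*.jar'
```

If the version in the environment does not match the jar on disk, the `hdfs dfs -put` in entrypoint.sh fails exactly as shown in the log.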

You can ignore the
SHUTDOWN_MSG
and
chown
errors; they are unrelated. I used Kylin's setup to install
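If the version mismatch is indeed the cause, one possible fix is to rebuild the image so that KYLIN_VERSION matches the jar that actually exists in it. This is a sketch that assumes the Dockerfile exposes KYLIN_VERSION as a build argument; if it is a plain ENV, edit the Dockerfile directly instead:

```shell
# Rebuild with KYLIN_VERSION pinned to the shipped jar (value is an example)
docker build --build-arg KYLIN_VERSION=3.0.0-alpha2 -t apache-kylin-standalone .
```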