Apache Spark: how to handle logs in Spark cluster mode

Tags: apache-spark, apache-spark-sql, log4j, bigdata, hadoop2

I am new to Spark and cannot figure out how to handle logs in Spark cluster mode. I added the following properties in my Spark script:

spark.conf.set("yarn.log-aggregation-enable","true")
spark.conf.set("yarn.nodemanager.log-dirs","HDFS_LOCATION")
spark.conf.set("yarn.nodemanager.remote-app-log-dir","HDFS_LOCATION")
spark.conf.set("spark.eventLog.enabled", "true")
spark.conf.set("spark.eventLog.dir", "HDFS_LOCATION")
spark.conf.set("spark.scheduler.mode", "FAIR")
When running spark-submit, I added the following option:

--driver-java-options "-Dlog4j.debug=true -Dlog4j.configuration=$LOCATION/log4j.properties"
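
From what I have read, -Dlog4j.configuration expects the file to exist on the node where the JVM starts, so in cluster mode (where the driver runs inside the cluster) the file is usually shipped with --files and referenced relative to the container working directory. A sketch of what I am trying, where the appender setup mirrors Spark's bundled log4j template and my-app.jar is a placeholder:

# log4j.properties: send everything to stderr so it ends up in the YARN container logs
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# ship the file and point both driver and executors at it
spark-submit \
  --files $LOCATION/log4j.properties \
  --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:log4j.properties \
  --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=file:log4j.properties \
  my-app.jar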

But I get the following exception:

Exception in thread "main" org.apache.spark.SparkException: Application
I also cannot find any logs at the HDFS log location.
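
From what I have read, when log aggregation is enabled on the cluster side, the aggregated container logs are meant to be fetched with the YARN CLI rather than browsed in HDFS directly; the application ID below is a placeholder:

yarn logs -applicationId application_1234567890_0001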

Please help me; I am stuck on this.