Scala error MapRZKRMFinderUtils: Unable to determine ResourceManager service address from Zookeeper

I am getting the following error while trying to create a DataFrame from a CSV file with this command:

val auctionDataFrame = spark.read.format("csv")
  .option("inferSchema", true)
  .load("/apps/auctiondata.csv")
  .toDF("auctionid", "bid", "bidtime", "bidder", "bidderrate", "openbid", "price", "item", "daystolive")
I run spark-shell with: /opt/mapr/spark/spark-2.1.0/bin/spark-shell

Could you help me resolve this error? Thanks.


Abir

I ran into a similar problem when my Spark Streaming application had been compiled against an older version of MapR and its dependencies.

However, after I replaced some of those dependencies with the "latest" versions and resubmitted the Spark application, it ran.

Make sure the versions of the jars at compile time and at runtime are the same.
This includes Spark 2.1.0, the hadoop jars, and so on.
I don't quite follow this. Could you explain a bit more?
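To unpack that advice: this error typically appears when the application bundles Hadoop/Spark client jars that do not match the ones the MapR cluster actually runs, so the client cannot resolve the ResourceManager address that the cluster registered in ZooKeeper. One way to avoid the mismatch is to mark those artifacts as "provided", so the cluster's own jars under /opt/mapr are the ones used at runtime. A minimal build.sbt sketch, assuming sbt and MapR's Maven repository (the exact MapR version suffix and repository URL are illustrative, not taken from this thread):

// build.sbt (sketch): keep compile-time jar versions aligned with the cluster's runtime jars.
name := "auction-app"                 // hypothetical project name
scalaVersion := "2.11.8"              // Scala line used by Spark 2.1.0

resolvers += "mapr-releases" at "https://repository.mapr.com/maven/"

libraryDependencies ++= Seq(
  // "provided" keeps these out of the application jar, so the versions shipped
  // under /opt/mapr/spark/spark-2.1.0 are the ones actually loaded at runtime.
  "org.apache.spark" %% "spark-core" % "2.1.0-mapr-1703" % "provided",
  "org.apache.spark" %% "spark-sql"  % "2.1.0-mapr-1703" % "provided"
)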
20/05/06 15:27:14 WARN ZKDataRetrieval: Can not get children of /services/resourcemanager/master with error: KeeperErrorCode = NoNode for /services/resourcemanager/master
20/05/06 15:27:14 ERROR MapRZKRMFinderUtils: Unable to determine ResourceManager service address from Zookeeper at node1:5181,node2:5181,node3:5181
java.lang.RuntimeException: Unable to determine ResourceManager service address from Zookeeper at node1:5181,node2:5181,node3:5181
  at org.apache.hadoop.yarn.client.MapRZKRMFinderUtils.mapRZkBasedRMFinder(MapRZKRMFinderUtils.java:121)
  at org.apache.hadoop.yarn.client.MapRZKBasedRMAddressFinder.getRMAddress(MapRZKBasedRMAddressFinder.java:43)
  at org.apache.hadoop.yarn.conf.HAUtil.getCurrentRMAddress(HAUtil.java:72)
  at org.apache.hadoop.mapred.Master.getMasterAddress(Master.java:60)
  at org.apache.hadoop.mapred.Master.getMasterPrincipal(Master.java:74)
  at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:114)
  at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
  at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
  at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
  at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:317)
  at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:206)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1333)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
  at org.apache.spark.rdd.RDD.take(RDD.scala:1327)
  at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1368)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
  at org.apache.spark.rdd.RDD.first(RDD.scala:1367)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.findFirstLine(CSVFileFormat.scala:206)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:60)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:184)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:184)
  at scala.Option.orElse(Option.scala:289)
  at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$getOrInferFileFormatSchema(DataSource.scala:183)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:387)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:135)
  ... 48 elided
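For what it's worth, the trace shows which job fails: CSVFileFormat.inferSchema calls RDD.first(), which needs the ResourceManager address before it can run. Supplying the schema explicitly defers that read-time job (any later action will still need a working ResourceManager, so the jar fix above is the real remedy). A sketch, with column names mirroring the toDF call in the question and types that are assumptions:

import org.apache.spark.sql.types._

// Explicit schema: the read no longer triggers a schema-inference job.
// Names mirror the toDF(...) call above; the types are guesses, not from the question.
val auctionSchema = StructType(Seq(
  StructField("auctionid", StringType),
  StructField("bid", DoubleType),
  StructField("bidtime", DoubleType),
  StructField("bidder", StringType),
  StructField("bidderrate", IntegerType),
  StructField("openbid", DoubleType),
  StructField("price", DoubleType),
  StructField("item", StringType),
  StructField("daystolive", IntegerType)
))

val auctionDataFrame = spark.read.format("csv")
  .schema(auctionSchema)
  .load("/apps/auctiondata.csv")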