How do I start the master node of a Spark cluster from R on Windows?

Tags: r, apache-spark, apache-spark-standalone

The book "Mastering Spark with R" shows how to start the master node of a Spark standalone cluster from R:

library(sparklyr)

# Retrieve the Spark installation directory
spark_home <- spark_home_dir()

# Build the path to the spark-class launcher script
spark_path <- file.path(spark_home, "bin", "spark-class")

# Start the cluster manager master node -- fails on Windows?!
system2(spark_path, "org.apache.spark.deploy.master.Master", wait = FALSE)
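
On Windows, spark-class is a POSIX shell script, so system2() cannot run it directly. One variant I have considered (an untested sketch; spark-class2.cmd is the batch launcher that ships in the same bin directory) is:

# Untested sketch: point system2() at the Windows batch launcher that ships
# alongside the POSIX spark-class script, instead of the shell script itself.
spark_class_cmd <- file.path(spark_home, "bin", "spark-class2.cmd")
system2(spark_class_cmd, "org.apache.spark.deploy.master.Master", wait = FALSE)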
and then to add some worker nodes with:

spark-class org.apache.spark.deploy.worker.Worker -i 10.0.75.1 -p 7077
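
For completeness, the same worker could presumably be launched from R in the style of the master call above (a sketch; the master URL spark://10.0.75.1:7077 is my assumption about the intended target):

# Sketch: start a worker from R and point it at the (assumed) master URL.
system2(spark_path,
        c("org.apache.spark.deploy.worker.Worker", "spark://10.0.75.1:7077"),
        wait = FALSE)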
But when connecting to the master from R with:

sc <- spark_connect(
  spark_home = spark_install_find(version = "2.4.3")$sparkVersionDir,
  master = "spark://localhost:7077"
)

# Remove any existing local "codes_ages" directory before re-importing
if (file.exists("codes_ages")) unlink("codes_ages", TRUE)

codes.ages.df <- spark_read_csv(
  sc,
  name = "codes_ages",
  path = paste0(datadir, "/age.txt"),
  header = FALSE,
  delimiter = " "
)

sc
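
Once the connection succeeds, a quick sanity check would look something like this (a sketch using standard sparklyr/dplyr calls):

# Sketch: basic sanity checks once spark_connect() succeeds.
library(dplyr)

sdf_nrow(codes.ages.df)   # row count of the imported Spark DataFrame
head(codes.ages.df)       # peek at the first rows

spark_disconnect(sc)      # close the session when done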