Scala: Exception when reading input data from a system directory


I am trying to read files from a system folder, and I get the following exception when reading from the directory:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: null
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:372)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:135)
at org.directory.spark.filter.sparksql$.run(sparksql.scala:47)
at org.directory.spark.filter.WisilicaSanitizerDataDriver$$anonfun$main$2.apply(WisilicaSanitizerDataDriver.scala:57)
at org.directory.spark.filter.WisilicaSanitizerDataDriver$$anonfun$main$2.apply(WisilicaSanitizerDataDriver.scala:56)
at scala.Option.map(Option.scala:146)
at org.directory.spark.filter.WisilicaSanitizerDataDriver$.main(WisilicaSanitizerDataDriver.scala:56)
at org.directory.spark.filter.WisilicaSanitizerDataDriver.main(WisilicaSanitizerDataDriver.scala)
Here is my code:

while (currentDate.isBefore(endDate) || currentDate.isEqual(endDate)) {
  val (inpath_tag, outpath) = buildPaths(currentDate, sc)

  val df = sqlContext.read.format("com.databricks.spark.csv")
    .option("header", "false")     // the input files have no header row
    .option("inferSchema", "true") // automatically infer data types
    .option("delimiter", ":")
    .load(inpath_tag.toString())
}

inpath_tag is built as follows:

val inpath_tag = new Path(
  makePath("/", Some("") :: Some("/home/rakshi/workspace1/spark/spark-warehouse/") :: Some(year) :: Some(month) :: Some(day) :: Some(hour) :: Nil))

Any help would be appreciated.

Could you print the output of makePath() and see whether that location actually exists? Have you checked it?
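A sketch of how one might act on that suggestion: print the constructed path, drop the empty leading segment, and give the path an explicit scheme so Hadoop no longer sees a null one. The file:// prefix assumes the data really lives on the local filesystem rather than on HDFS:

import org.apache.hadoop.fs.Path

// Build the path without the empty first segment, then log it.
val raw = makePath("/", Some("/home/rakshi/workspace1/spark/spark-warehouse/") :: Some(year) :: Some(month) :: Some(day) :: Some(hour) :: Nil)
println(s"input path: $raw") // verify this location actually exists

// An explicit scheme yields file:///home/... instead of a scheme-less "//..."
// URI, avoiding "No FileSystem for scheme: null".
val inpath_tag = new Path("file://" + raw)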