Java: Exception when reading a file from FTP using Spark

I get the following error when trying to read data from FTP using Spark:

    WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Seek not supported
        at org.apache.hadoop.fs.ftp.FTPInputStream.seek(FTPInputStream.java:62)
        at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:62)
        at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:127)
        at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:245)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
It looks like the FTP server does not support seek, while Spark by default tries to seek internally so it can split the file into smaller input splits.
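For reference, the failing read was presumably a plain textFile call like the following (a minimal reproduction sketch; the URL and app name are placeholders, not from the original question). The stack trace goes through TextInputFormat and LineRecordReader, which is the code path textFile uses:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class FtpReadRepro {
        public static void main(String[] args) {
            // textFile uses TextInputFormat, whose LineRecordReader seeks to the
            // start of each split; the Hadoop FTP filesystem cannot seek, so the
            // task fails with "java.io.IOException: Seek not supported".
            JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("ftp-read"));
            sc.textFile("ftp://user:pwd@host/path/input.txt").count(); // fails at task execution
            sc.stop();
        }
    }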


How can I read the file from FTP without running into this problem?

The simplest way is to read the file as a whole instead of in splits, so no seek is needed.

Here is the answer in Java:

 String dataSource = "ftp://user:pwd/host/path/input.txt";
 sparkContext.wholeTextFiles(dataSource).values().saveAsTextFile("/Users/parmarh/git/spark-rdd-dataframe-dataset/output/ftp/");
The downside is that this is very slow if the file is large, since wholeTextFiles loads each file as a single record in a single task.
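If the file is too large for wholeTextFiles, one alternative (a sketch under assumptions, not part of the original answer; the URLs and paths are placeholders) is to first copy the file from FTP onto a seekable filesystem such as HDFS, then read it with textFile so Spark can split it normally:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class FtpToHdfsCopy {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Source: the FTP file (non-seekable). Destination: HDFS (seekable).
            Path src = new Path("ftp://user:pwd@host/path/input.txt");
            Path dst = new Path("hdfs:///tmp/input.txt");
            FileSystem srcFs = src.getFileSystem(conf);
            FileSystem dstFs = dst.getFileSystem(conf);
            // Stream the file once, sequentially, from FTP to HDFS.
            FileUtil.copy(srcFs, src, dstFs, dst, /* deleteSource = */ false, conf);
            // Afterwards, Spark can split and seek freely:
            // sparkContext.textFile("hdfs:///tmp/input.txt")...
        }
    }

This works because the copy is a single sequential read, which the FTP filesystem does support; it is only random access (seek) that it lacks.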