How to write to HDFS using Scala


I am learning Scala and I need to write a custom file to HDFS. I am running my own HDFS on a Cloudera image with VMware Fusion on my laptop.

This is my current code:

package org.glassfish.samples

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.PrintWriter;

/**
* @author ${user.name}
*/
object App {

  def main(args: Array[String]) {
    println("Trying to write to HDFS...")
    val conf = new Configuration()
    val fs = FileSystem.get(conf)
    val output = fs.create(new Path("hdfs://quickstart.cloudera:8020/tmp/mySample.txt"))
    val writer = new PrintWriter(output)
    try {
      writer.write("this is a test")
      writer.write("\n")
    } finally {
      writer.close()
    }
    print("Done!")
  }

}
I get an exception:

Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://quickstart.cloudera:8020/tmp, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:414)
at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:439)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
at org.glassfish.samples.App$.main(App.scala:19)
at org.glassfish.samples.App.main(App.scala)
... 6 more
I can access HDFS from the terminal and from Hue:

[cloudera@quickstart ~]$ hdfs dfs -ls /tmp
Found 3 items
drwxr-xr-x   - hdfs     supergroup          0 2015-06-09 17:54 /tmp/hadoop-yarn
drwx-wx-wx   - hive     supergroup          0 2015-08-17 15:24 /tmp/hive
drwxr-xr-x   - cloudera supergroup          0 2015-08-17 16:50 /tmp/labdata

I run the project with the following command:

mvn clean package scala:run

What am I doing wrong? Thanks in advance!
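For reference, the `scala:run` goal assumes the scala-maven-plugin is configured in the build, and the `org.apache.hadoop` imports need `hadoop-client` on the classpath. A minimal sketch of the relevant pom.xml entries (the group/artifact ids are real; the version numbers are assumptions):

```xml
<!-- Sketch only: version numbers are assumptions, adjust to your cluster. -->
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.0</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>3.2.2</version>
    </plugin>
  </plugins>
</build>
```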

EDIT, after @jeroenr's suggestion:

This is the current code:

package org.glassfish.samples

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.PrintWriter;

/**
* @author ${user.name}
*/
object App {

  //def foo(x : Array[String]) = x.foldLeft("")((a,b) => a + b)

  def main(args: Array[String]) {
    println("Trying to write to HDFS...")
    val conf = new Configuration()
    //conf.set("fs.defaultFS", "hdfs://quickstart.cloudera:8020")
    conf.set("fs.defaultFS", "hdfs://192.168.30.147:8020")
    val fs = FileSystem.get(conf)
    val output = fs.create(new Path("/tmp/mySample.txt"))
    val writer = new PrintWriter(output)
    try {
      writer.write("this is a test")
      writer.write("\n")
    } finally {
      writer.close()
      println("Closed!")
    }
    println("Done!")
  }

}
Have a look at this. I think the problem is that you are not configuring the default file system:

conf.set("fs.defaultFS", "hdfs://quickstart.cloudera:8020")
and that you should pass a relative path, like this:

fs.create(new Path("/tmp/mySample.txt"))

To write to the file, call write directly on the output stream returned by fs.create, like this:

val os = fs.create(new Path("/tmp/mySample.txt"))
os.write("This is a test".getBytes)
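The stream returned by fs.create is an FSDataOutputStream, i.e. a plain java.io.OutputStream, so the write-bytes-then-close pattern works the same as for any local stream. A minimal local sketch of the pattern (FileOutputStream standing in for the HDFS stream, file name is arbitrary):

```scala
import java.io.{File, FileOutputStream}
import scala.io.Source

object WriteDemo {
  def main(args: Array[String]): Unit = {
    // FileOutputStream stands in here for the FSDataOutputStream
    // that fs.create(new Path("/tmp/mySample.txt")) would return.
    val target = File.createTempFile("mySample", ".txt")
    val os = new FileOutputStream(target)
    try {
      os.write("This is a test".getBytes("UTF-8"))
    } finally {
      os.close() // the data is only guaranteed visible after close
    }
    println(Source.fromFile(target).mkString) // prints "This is a test"
  }
}
```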

Hi @jeroenr, thanks for the suggestion. With this patch I can see the new file in HDFS, but it has no content; shouldn't it? I do see the "Closed!" and "Done!" messages in the terminal.

@aironman I'm not sure about the port, but I think you should call write directly on the return value of fs.create(new Path("/tmp/mySample.txt")). So: val output = fs.create(new Path("/tmp/mySample.txt")); output.write("this is a test".getBytes)

@aironman the default port is 7180, and that is now a dead link.

In my case I had to call .close() for the write to actually happen; neither .hflush() nor .hsync() performed the write to the remote location.