Hive unable to load data from a Parquet file into a Hive external table


I have written the Scala code below to create a Parquet file:

scala> case class Person(name:String,age:Int,sex:String)
defined class Person

scala> val data = Seq(Person("jack",25,"m"),Person("john",26,"m"),Person("anu",27,"f"))
data: Seq[Person] = List(Person(jack,25,m), Person(john,26,m), Person(anu,27,f))

scala> import sqlContext.implicits._
import sqlContext.implicits._

scala> import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.SaveMode

scala> df.select("name","age","sex").write.format("parquet").mode("overwrite").save("sparksqloutput/person")
HDFS status:

[cloudera@quickstart ~]$ hadoop fs -ls sparksqloutput/person
Found 4 items
-rw-r--r--   1 cloudera cloudera          0 2017-08-14 23:03 sparksqloutput/person/_SUCCESS
-rw-r--r--   1 cloudera cloudera        394 2017-08-14 23:03 sparksqloutput/person/_common_metadata
-rw-r--r--   1 cloudera cloudera        721 2017-08-14 23:03 sparksqloutput/person/_metadata
-rw-r--r--   1 cloudera cloudera        773 2017-08-14 23:03 sparksqloutput/person/part-r-00000-2dd2f334-1985-42d6-9dbf-16b0a51e53a8.gz.parquet
Then I created an external Hive table using the command below:

hive> CREATE EXTERNAL TABLE person (name STRING,age INT,sex STRING) STORED AS PARQUET LOCATION '/sparksqlouput/person/';
OK
Time taken: 0.174 seconds
hive> select * from person
    > ;
OK
Time taken: 0.125 seconds

But the select query above returned no rows. Could someone please help?

Generally, a Hive 'select * from' statement simply locates the table's directory, i.e. where the table data resides, and dumps the file contents from that HDFS directory.

In your case, 'select *' returns nothing, which means the location is incorrect.
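One way to check this (the commands below are a sketch for the Cloudera quickstart environment from the question; they assume a running Hive/HDFS cluster) is to compare the location Hive has recorded for the table with what actually exists in HDFS:

```shell
# Show the LOCATION Hive recorded for the external table
hive -e "DESCRIBE FORMATTED person;" | grep -i location

# List what Spark actually wrote (absolute path under the user's HDFS home)
hadoop fs -ls /user/cloudera/sparksqloutput/person
```

If the two paths differ, the table points at the wrong (or nonexistent) directory.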

Note that in Scala your last statement contains save("sparksqloutput/person"), where "sparksqloutput/person" is a relative path that expands to "/user/<username>/sparksqloutput/person" (i.e. "/user/cloudera/sparksqloutput/person").

So when creating the Hive table you should use "/user/cloudera/sparksqloutput/person" instead of "/sparksqloutput/person". In fact, "/sparksqloutput/person" does not exist, which is why select * from person returned no output.
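The resolution rule can be sketched as a small shell function (an illustration of the convention, not HDFS's actual implementation): a path starting with "/" is taken as-is, anything else is resolved against the user's HDFS home directory /user/<username>:

```shell
# Sketch: how HDFS resolves a relative path against the user's home directory
# (assumption: the default home directory is /user/<username>)
resolve_hdfs_path() {
  local path="$1" user="$2"
  case "$path" in
    /*) printf '%s\n' "$path" ;;                   # already absolute: keep as-is
    *)  printf '/user/%s/%s\n' "$user" "$path" ;;  # relative: prepend home dir
  esac
}

resolve_hdfs_path "sparksqloutput/person" "cloudera"
```

This is why the Spark save and the Hive LOCATION clause ended up pointing at two different directories.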

The Scala statement used to create the DataFrame df is missing. Could you please add it?