Unable to load the HBase Spark SQL data source


I want to fetch data from an HBase table using Spark SQL, but I get a ClassNotFoundException when creating the DataFrame. Here is the exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/types/NativeType
    at org.apache.hadoop.hbase.spark.DefaultSource$$anonfun$generateSchemaMappingMap$1.apply(DefaultSource.scala:127)
    at org.apache.hadoop.hbase.spark.DefaultSource$$anonfun$generateSchemaMappingMap$1.apply(DefaultSource.scala:116)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    at org.apache.hadoop.hbase.spark.DefaultSource.generateSchemaMappingMap(DefaultSource.scala:116)
    at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:97)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at com.apache.spark.gettingStarted.SparkSQLOnHBaseTable.createTableAndPutData(SparkSQLOnHBaseTable.java:146)
    at com.apache.spark.gettingStarted.SparkSQLOnHBaseTable.main(SparkSQLOnHBaseTable.java:154)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.types.NativeType
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 14 more
Have any of you run into this problem? How did you solve it?

Here is my code:

// initializing spark context
    SparkConf sconf = new SparkConf().setMaster("local").setAppName("Test");
    // SparkContext sc = new SparkContext("local", "test", sconf);
    Configuration conf = HBaseConfiguration.create();
    JavaSparkContext jsc = new JavaSparkContext(sconf);
    try {
        HBaseAdmin.checkHBaseAvailable(conf);
        System.out.println("HBase is running");
    } catch (ServiceException e) {
        System.out.println("HBase is not running");
        e.printStackTrace();
    }
    SQLContext sqlContext = new SQLContext(jsc);

    String sqlMapping = "KEY_FIELD STRING :key" + " sql_city STRING personal:city" + ","
            + "sql_name STRING personal:name" + "," + "sql_designation STRING professional:designation" + ","
            + "sql_salary STRING professional:salary";

    HashMap<String, String> colMap = new HashMap<String, String>();
    colMap.put("hbase.columns.mapping", sqlMapping);
    colMap.put("hbase.table", "emp");

    // DataFrame dfJail =
    DataFrame df = sqlContext.read().format("org.apache.hadoop.hbase.spark").options(colMap).load();
    //DataFrame df = sqlContext.load("org.apache.hadoop.hbase.spark", colMap);

    // This is useful when issuing SQL text queries directly against the
    // sqlContext object.
    df.registerTempTable("temp_emp");

    DataFrame result = sqlContext.sql("SELECT count(*) from temp_emp");
    System.out.println("df  " + df);
    System.out.println("result " + result);
Here are the pom.xml dependencies:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.1.3</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-spark</artifactId>
        <version>2.0.0-SNAPSHOT</version>
    </dependency>
</dependencies>


NativeType no longer exists :( (and neither does dataTypes.scala).

It did exist in Spark 1.3.1, in dataTypes.scala.

You can see in the Spark source that NativeType has since been made protected.

You are probably working from an old example.
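
If you want to confirm that this is what is going wrong, a minimal sketch is below. It only probes the runtime classpath for the class named in the stack trace; NativeTypeCheck is a hypothetical helper written for this answer, not part of Spark or hbase-spark.

    // Hypothetical sketch: check whether the Spark SQL type class that the
    // hbase-spark connector was compiled against is present at runtime.
    public class NativeTypeCheck {
        public static void main(String[] args) {
            try {
                Class.forName("org.apache.spark.sql.types.NativeType");
                System.out.println("NativeType found - a connector built against Spark 1.3.x can load it");
            } catch (ClassNotFoundException e) {
                // NativeType existed up to Spark 1.3.1; with Spark 1.6.x on the classpath it is gone,
                // so hbase-spark code compiled against the old API fails with NoClassDefFoundError.
                System.out.println("NativeType not found - use an hbase-spark build that matches your Spark version");
            }
        }
    }

The stack trace shows hbase-spark's DefaultSource still referencing NativeType, so the 2.0.0-SNAPSHOT you pulled in appears to have been compiled against a pre-1.4 Spark API; you need a connector build (or a newer example) that matches the Spark 1.6.x declared in your pom.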


While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference; link-only answers can become invalid if the linked page changes. Thanks Megatron, I have changed it to a screenshot.