Apache Spark java.lang.NoClassDefFoundError: org/apache/spark/sql/types/UTF8String$ when reading Cassandra table data with DataFrames


I am using Spark 1.6.1 and trying to read Cassandra table data into a DataFrame with the Spark Java code below:

DataFrame df = sqlContext.read()
        .format("org.apache.spark.sql.cassandra")
        .option("keyspace", "myKeySpace")
        .option("table", "myTable")
        .load()
        .select(col("Column1"), col("column2"));
The data types of these two columns in the Cassandra table are Column1: bigint and Column2: text.
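For context, here is a minimal, self-contained sketch of the same read, with the imports and setup the one-liner above assumes. The keyspace, table, and column names are the placeholders from the question, and spark.cassandra.connection.host is an assumed setting for your cluster:

import static org.apache.spark.sql.functions.col;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class CassandraReadExample {
    public static void main(String[] args) {
        // Assumed Cassandra contact point; replace with your cluster's host.
        SparkConf conf = new SparkConf()
                .setAppName("CassandraReadExample")
                .set("spark.cassandra.connection.host", "127.0.0.1");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Same read as above: load the table through the Cassandra data source
        // and project the two columns.
        DataFrame df = sqlContext.read()
                .format("org.apache.spark.sql.cassandra")
                .option("keyspace", "myKeySpace")
                .option("table", "myTable")
                .load()
                .select(col("Column1"), col("column2"));

        df.show();
        sc.stop();
    }
}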

When I select only Column1, the program runs fine; as soon as Column2 is included, it throws the NoClassDefFoundError shown in the full stack trace at the end of this post.
Please advise how to handle the Cassandra text type with DataFrames.

Regards,
Rishabh

I think you are missing the spark-sql JAR file. Check whether import org.apache.spark.sql.types.UTF8String resolves in your project.

Comment (Rishabh): I tried adding my spark-sql JAR, spark-sql_2.10-1.6.1-mapr-1604.jar. That version of spark-sql does not contain org.apache.spark.sql.types.UTF8String; it has org.apache.spark.unsafe.types.UTF8String instead, which I have already imported.

Comment: Which versions of Spark and the Spark Cassandra Connector are you using? Please include the Scala version as well.

Comment (Rishabh): com.datastax.spark : spark-cassandra-connector_2.10 : 1.4.3 and org.apache.spark : spark-core_2.10 : 1.6.1-mapr-1604. I am not using Scala; the code is written in Spark Java.
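For reference, the dependencies quoted in the comments correspond to a pom.xml roughly like the following (reconstructed from the coordinates above, not the asker's actual file):

<!-- Dependencies as described in the comments. -->
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <!-- 1.4.x is built against Spark 1.4; on Spark 1.6 consider a 1.6.x
         connector release (e.g. 1.6.0) so the versions line up. -->
    <version>1.4.3</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.6.1-mapr-1604</version>
</dependency>

This combination likely explains the error: UTF8String lived in org.apache.spark.sql.types up to Spark 1.4 and moved to org.apache.spark.unsafe.types in Spark 1.5, so a connector compiled against Spark 1.4 still references the old location, which no longer exists on Spark 1.6.1. That matches the missing org/apache/spark/sql/types/UTF8String$ in the stack trace below.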
**java.lang.NoClassDefFoundError: org/apache/spark/sql/types/UTF8String$**
        at org.apache.spark.sql.cassandra.CassandraSQLRow$.org$apache$spark$sql$cassandra$CassandraSQLRow$$toSparkSqlType(CassandraSQLRow.scala:68)
        at org.apache.spark.sql.cassandra.CassandraSQLRow$$anonfun$fromJavaDriverRow$1.apply$mcVI$sp(CassandraSQLRow.scala:51)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at org.apache.spark.sql.cassandra.CassandraSQLRow$.fromJavaDriverRow(CassandraSQLRow.scala:49)
        at org.apache.spark.sql.cassandra.CassandraSQLRow$CassandraSQLRowReader$.read(CassandraSQLRow.scala:59)
        at org.apache.spark.sql.cassandra.CassandraSQLRow$CassandraSQLRowReader$.read(CassandraSQLRow.scala:56)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$12.apply(CassandraTableScanRDD.scala:210)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$12.apply(CassandraTableScanRDD.scala:210)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$13.next(Iterator.scala:372)
        at com.datastax.spark.connector.util.CountingIterator.next(CountingIterator.scala:16)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:140)
        at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:130)
        at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:285)
        at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.types.UTF8String$
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 34 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
        at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
        at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
        at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
        at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
        at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
        at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
        at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
        at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
        at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
        at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
        at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
        at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
        at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
        at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
        at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
        at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
        at com.cisco.cmccntrpt.CMCFieldLevelCountPublisher.main(**MyClassName**.java:141)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:742)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/sql/types/UTF8String$