scala.MatchError thrown by a Java program on the Spark Job Server (Apache Spark)
I am using the DSE Spark Job Server. What I am trying to accomplish: a Spark job I wrote in Java is expected to fetch some data from a Cassandra database, and it will be deployed on a DSE Analytics cluster. The code looks like this:
package com.symantec.nsp.analytics;

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowTo;

import java.io.Serializable;
import java.util.List;
import java.util.UUID;

import org.apache.commons.lang.StringUtils;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;

import spark.jobserver.JavaSparkJob;
import spark.jobserver.SparkJobInvalid;
import spark.jobserver.SparkJobValid$;
import spark.jobserver.SparkJobValidation;

import com.symantec.nsp.analytics.model.Bucket;
import com.typesafe.config.Config;

public class JavaSparkJobBasicQuery extends JavaSparkJob {

    public String runJob(JavaSparkContext jsc, Config config) {
        try {
            List<UUID> bucketRecords = javaFunctions(jsc)
                    .cassandraTable("nsp_storage", "bucket", mapRowTo(Bucket.class))
                    .select("id", "deleted")
                    .filter(s -> s.getDeleted())
                    .map(s -> s.getId())
                    .collect();
            System.out.println(">>>>>>>> Total Buckets getting scanned by Spark :" + bucketRecords.size());
            return bucketRecords.toString();
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    public SparkJobValidation validate(SparkContext sc, Config config) {
        return null;
    }

    public String invalidate(JavaSparkContext jsc, Config config) {
        return null;
    }
}
Can anyone help resolve this?

Note: I have tried cleaning the /tmp folder several times; that does not fix the problem. The DSE version I am using is 4.8.10.

Comment: I am not sure you want to return null on an exception in runJob. I would let it propagate.
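Per the comment above, a sketch of runJob with the try/catch removed, so the failure propagates instead of being swallowed into a null result (a fragment, not runnable standalone; it needs the DSE cluster and the connector on the classpath):

```java
public String runJob(JavaSparkContext jsc, Config config) {
    // No try/catch: if the Cassandra read fails, the exception propagates
    // and the job server reports the real error instead of a null result.
    List<UUID> bucketRecords = javaFunctions(jsc)
            .cassandraTable("nsp_storage", "bucket", mapRowTo(Bucket.class))
            .select("id", "deleted")
            .filter(s -> s.getDeleted())
            .map(s -> s.getId())
            .collect();
    return bucketRecords.toString();
}
```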
"status": "ERROR",
"result":
"message": "null",
"errorClass": "scala.MatchError",
"stack": ["spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:244)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)", "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)", "java.lang.Thread.run(Thread.java:745)"]