How to resolve java.io.InvalidClassException: org.apache.solr.client.solrj.SolrQuery

Tags: java, apache-spark, solr, solrj

I wrote the following code to query Solr and then perform some operations on the resulting document set using Spark and SolrJ:

// Build the Solr query from the job parameters
SolrQuery sq = new SolrQuery();
sq.set(key, JobUtils.removeFrontEndQuotesWithBackSlash(queryParams.get(key).render()));

// Query the shards and collect the matching documents as an RDD
JavaRDD<SolrDocument> tempRDD = solrRDD.queryShardsBIL(sq,
        paramsObj.get("splitField").render().replaceAll("\"", ""),
        Integer.parseInt(paramsObj.get("splitsPerShard").render().replaceAll("\"", "")),
        paramsObj.get("exportHandler").render().replaceAll("\"", ""));

// Merge with the results of earlier queries
combinedRDD = combinedRDD.union(tempRDD);

// Map, reduce, and index the combined results back into Solr
combinedRDD.mapToPair(new SolrJobMapper1(jobConfig))
        .reduceByKey(new SolrJobReducer1(jobConfig))
        .foreachPartition(new SolrJobPartitionIndexer1(
                JobUtils.removeFrontEndQuotes(paramsObj.get("zkHost").render()),
                JobUtils.removeFrontEndQuotes(paramsObj.get("solrCollection").render()),
                Boolean.parseBoolean(JobUtils.removeFrontEndQuotes(paramsObj.get("doCommit").render())),
                accum,
                JobUtils.removeFrontEndQuotes(paramsObj.get("uniqueIdField").render())));

This fails on the cluster, yet when I run it locally from the main method it works fine, and I am using the same solrj-6.1.0 in both environments. What am I missing?
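
A serialVersionUID mismatch like the one in the trace below means the JVM that deserializes SolrQuery has loaded a different version of the class than the JVM that serialized it, even if both builds declare solrj-6.1.0, so a useful first step is to confirm which jar each side actually loads. A minimal diagnostic sketch using only standard JDK APIs (the class name is illustrative; run it once in the driver and once inside a task):

import java.io.ObjectStreamClass;
import org.apache.solr.client.solrj.SolrQuery;

public class SolrQueryVersionCheck {
    public static void main(String[] args) {
        // The jar this JVM actually loaded SolrQuery from
        // (getCodeSource() can be null for bootstrap classes, but not for a jar)
        System.out.println(SolrQuery.class.getProtectionDomain()
                .getCodeSource().getLocation());
        // The serialVersionUID Java serialization will use for the class
        System.out.println(ObjectStreamClass.lookup(SolrQuery.class)
                .getSerialVersionUID());
    }
}

If driver and executor print different locations or different UIDs, a second solrj jar is on the cluster classpath.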

I suspect that SolrQuery cannot be serialized by Spark.
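
If that suspicion is right, a common workaround is to keep the SolrQuery instance out of anything Spark serializes and ship only plain strings, rebuilding the query on the executor. A hedged sketch, assuming the query can be reduced to a field/value pair (the class and method names are illustrative, not from the original job):

import java.io.Serializable;
import org.apache.solr.client.solrj.SolrQuery;

// Carries only Strings across the wire; the SolrQuery itself is
// constructed lazily on whichever executor calls toQuery(), so it
// never passes through Java serialization.
public class SolrQuerySpec implements Serializable {
    private final String field;
    private final String value;

    public SolrQuerySpec(String field, String value) {
        this.field = field;
        this.value = value;
    }

    public SolrQuery toQuery() {
        SolrQuery sq = new SolrQuery();
        sq.set(field, value);
        return sq;
    }
}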


Not sure how you have implemented the Solr query; it would be interesting if you posted the source code of the SolrJobMapper1 and SolrJobPartitionIndexer1 classes, but for this kind of work I use and strongly recommend …

Have you checked the points mentioned here? Please post the source code of the SolrJobMapper1 and SolrJobPartitionIndexer1 classes.
java.io.InvalidClassException: org.apache.solr.client.solrj.SolrQuery; local class incompatible: stream classdesc serialVersionUID = -323500251212286545, local class serialVersionUID = -7606622609766730986
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1623)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
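
Since the stream and local serialVersionUIDs in the trace differ, the executors are most likely loading a second, older solrj jar (for example one bundled with the cluster's Spark or Hadoop distribution) ahead of the application's solrj-6.1.0. A hedged sketch of forcing the application's jars to take precedence, using Spark's documented (experimental) userClassPathFirst settings; verify them against your Spark version:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class JobLauncher {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("solr-indexing-job")
                // Prefer jars shipped with the application over versions
                // bundled with the cluster's Spark/Hadoop install
                .set("spark.driver.userClassPathFirst", "true")
                .set("spark.executor.userClassPathFirst", "true");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // ... build the RDDs and run the job as above ...
        jsc.stop();
    }
}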