json4s scala.MatchError (of class scala.Tuple2)


I have a custom class that I want to convert to JSON, but I am hitting a strange error here:

Exception in thread "main" scala.MatchError: (23,com.xxx.dts.dq.common.utils.DQOpsStoreProfileStatus@5f275ae4) (of class scala.Tuple2)
Here is the code:

implicit val formats = org.json4s.DefaultFormats
val A = Serialization.write(resultsMap)
println(A)
Now, if I do a foreach instead:

 resultsMap.foreach(x => println(Serialization.write(x)))
I get some output, but it does not look right:

{"_1":23,"_2":{}}
{"_1":32,"_2":{}}
The tuples are missing their core information. I assume the custom class we are using is causing some problem? Is there any way around this?

If I extract the second element of the map and convert it to JSON on its own, it looks like this:

{"errorCode":null,"id":null,"fieldType":"STRING","fieldIndex":0,"datasetFieldName":"RECORD_ID","datasetFieldSum":0.0,"datasetFieldMin":0.0,"datasetFieldMax":0.0,"datasetFieldMean":0.0,"datasetFieldSigma":0.0,"datasetFieldNullCount":0.0,"datasetFieldObsCount":0.0,"datasetFieldKurtosis":0.0,"datasetFieldSkewness":0.0,"frequencyDistribution":"(D,4488)","runStatusId":null,"lakeHdfsPath":"/user/jvy234/20140817_011500_zoot_kohls_offer_init.dat"}
One more thing worth noting: this class is written in Java, in case that might be the culprit.

Full stack trace:

Exception in thread "main" scala.MatchError: (0,com.xxx.dts.dq.common.utils.DQOpsStoreProfileStatus@315a29f4) (of class scala.Tuple2)
    at org.json4s.Extraction$.internalDecomposeWithBuilder(Extraction.scala:132)
    at org.json4s.Extraction$.decomposeWithBuilder(Extraction.scala:67)
    at org.json4s.Extraction$.decompose(Extraction.scala:194)
    at org.json4s.jackson.Serialization$.write(Serialization.scala:22)
    at com.xxx.dts.toolset.jsonWrite$.jsonClob(jsonWrite.scala:16)
    at com.xxx.dts.dq.profiling.DQProfilingEngine.profile(DQProfilingEngine.scala:255)
    at com.xxx.dts.dq.profiling.Profiler$.main(DQProfilingEngine.scala:64)
    at com.xxx.dts.dq.profiling.Profiler.main(DQProfilingEngine.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I think you only have two options here:

  • Write a custom serializer for Tuple2

  • Convert it to a list of maps instead, for example (see the sketch at the end of this answer):
    resultsMap.map(Map(_)).foreach(…)

  • Update: for the serializer, you could use something like this:

    {"errorCode":null,"id":null,"fieldType":"STRING","fieldIndex":0,"datasetFieldName":"RECORD_ID","datasetFieldSum":0.0,"datasetFieldMin":0.0,"datasetFieldMax":0.0,"datasetFieldMean":0.0,"datasetFieldSigma":0.0,"datasetFieldNullCount":0.0,"datasetFieldObsCount":0.0,"datasetFieldKurtosis":0.0,"datasetFieldSkewness":0.0,"frequencyDistribution":"(D,4488)","runStatusId":null,"lakeHdfsPath":"/user/jvy234/20140817_011500_zoot_kohls_offer_init.dat"}
    
    import org.json4s._

    // Maps a (String, Int) tuple to a single-field JSON object and back.
    class Tuple2Serializer extends CustomSerializer[(String, Int)](format => (
      {
        // Reading: { "key": 1 } becomes ("key", 1)
        case JObject(List(JField(k, JInt(v)))) => (k, v.toInt)
      },
      {
        // Writing: ("key", 1) becomes { "key": 1 }
        case (s: String, t: Int) => JObject(List(JField(s, JInt(t))))
      }
    ))

    implicit val formats = org.json4s.DefaultFormats + new Tuple2Serializer
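
    With that serializer in scope, a (String, Int) tuple should be written as a single-field object rather than the "_1"/"_2" form seen in the question. A quick, hypothetical sanity check (the tuple value here is made up):

    // Assumes the implicit formats defined above.
    println(org.json4s.jackson.Serialization.write(("profileCount", 23)))
    // expected: {"profileCount":23} instead of {"_1":"profileCount","_2":23}

    Note that the map in the question actually holds (Int, DQOpsStoreProfileStatus) entries, so in practice the CustomSerializer would have to target that tuple type rather than (String, Int).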
    

You should see a stack trace after the error; always include it in a post like this.
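
Regarding the second option above (converting the map before writing it), here is a minimal sketch of one way to do it. ProfileStatus is a hypothetical stand-in for the Java DQOpsStoreProfileStatus class; the idea is simply to re-key the map with String keys so json4s can emit it as a plain JSON object:

    import org.json4s.DefaultFormats
    import org.json4s.jackson.Serialization

    // Hypothetical stand-in for the Java status class from the question.
    case class ProfileStatus(datasetFieldName: String, datasetFieldObsCount: Double)

    implicit val formats = DefaultFormats

    val resultsMap: Map[Int, ProfileStatus] =
      Map(23 -> ProfileStatus("RECORD_ID", 4488.0), 32 -> ProfileStatus("OFFER_ID", 1200.0))

    // Re-key with String keys before writing, so each entry becomes a normal JSON field.
    val json = Serialization.write(resultsMap.map { case (k, v) => k.toString -> v })
    println(json)
    // e.g. {"23":{"datasetFieldName":"RECORD_ID","datasetFieldObsCount":4488.0},"32":{...}}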