How do I get the exception type from a nested exception in Java?


If my code encounters
org.apache.kafka.clients.consumer.OffsetOutOfRangeException
, I want to perform some action. I tried this check:

if (e.getCause().getCause() instanceof OffsetOutOfRangeException)

but I still get a SparkException instead of OffsetOutOfRangeException:

ERROR Driver:86 - Error in executing stream
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 11, localhost, executor 0): org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {dns_data-0=23245772}
        at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:588)
        at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:354)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1000)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:136)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:68)
        at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:271)
        at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:231)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$10.next(Iterator.scala:393)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {dns_data-0=23245772}
        at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:588)
        at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:354)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1000)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:136)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:68)
        at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:271)
        at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:231)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)

Try the following condition:

e.getCause().getClass().equals(OffsetOutOfRangeException.class)

For all intents and purposes, this is the same as what the OP tried in the question. It works perfectly. I was able to test for OffsetOutOfRangeException in the catch block and apply some action. Thank you very much for sharing your knowledge. @anandbabu 1) That is not the complete stack trace. There are no stack frames! Show the complete stack trace... including all of the "Caused by" subsidiary traces. 2) Put extra information into your question, not into comments. (Use the "edit" button!) There is a reason we ask to see the real stack trace. It is... to see whether an
OffsetOutOfRangeException
is actually nested inside what you have. Yes, we can see the name in the message, but that doesn't prove anything. For the people guessing at an answer, the
instanceof
and
e.getClass()
versions are testing the same thing. The real question is whether the
OffsetOutOfRangeException
actually is a nested exception of
e
... and how deeply it is nested. That is why we need the stack trace. How did this question get downvoted so heavily? It is not a silly question at all, and more to the point: everyone has their own answer, and they aren't all the same! At the end of the day we have an OP whose code (apparently) works without his understanding why. Future readers will be left completely in the dark. The real purpose of downvotes is to filter out questions that are not helpful to the average reader; i.e., to curate the StackOverflow knowledge base. From that perspective, the downvotes are justified.
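As the comments point out, the fragile part of e.getCause().getCause() is that it hard-codes the nesting depth, which can vary between Spark versions and failure modes. A minimal sketch of a more robust approach is to walk the whole cause chain with Throwable.getCause() until the wanted type is found. The class and method names (CauseChainDemo, findCause) are hypothetical, and IllegalStateException stands in for the Kafka exception so the example is self-contained without Kafka on the classpath:

```java
// Hypothetical sketch: walk the cause chain instead of hard-coding
// getCause().getCause(), since the real nesting depth may vary.
public class CauseChainDemo {

    // Returns the first throwable in the cause chain (including t itself)
    // that is an instance of the given type, or null if none is found.
    static <T extends Throwable> T findCause(Throwable t, Class<T> type) {
        while (t != null) {
            if (type.isInstance(t)) {
                return type.cast(t);
            }
            t = t.getCause();
        }
        return null;
    }

    public static void main(String[] args) {
        // Simulate Spark wrapping a consumer exception two levels deep.
        // IllegalStateException stands in for OffsetOutOfRangeException here.
        Exception kafka = new IllegalStateException("Offsets out of range");
        Exception spark = new RuntimeException("Job aborted",
                new RuntimeException("Task failed", kafka));

        IllegalStateException found =
                findCause(spark, IllegalStateException.class);
        if (found != null) {
            System.out.println("found: " + found.getMessage());
            // ... apply the recovery action here ...
        } else {
            System.out.println("not found");
        }
    }
}
```

Note that instanceof (and type.isInstance above) also matches subclasses, while getClass().equals(...) requires the exact class; for this use case either works, since the goal is just to detect the Kafka exception anywhere in the chain.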