
Kafka consumer throws java.lang.OutOfMemoryError: Direct buffer memory


I am running a single-node Kafka broker (0.10.2) and a single-node ZooKeeper (3.4.9). My consumer server has a single core and 1.5 GB of RAM. Whenever I run a process with 5 or more threads, my consumer's threads are killed after throwing these exceptions:

  • Exception 1

        java.lang.OutOfMemoryError: Java heap space
            at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
            at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
            at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
            at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
            at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169)
            at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150)
            at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355)
            at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
            at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:263)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:887)

  • Exception 2

        Uncaught exception in kafka coordinator heartbeat thread | topic1:
        java.lang.OutOfMemoryError: Direct buffer memory
            at java.nio.Bits.reserveMemory(Bits.java:693)
            at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
            at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
            at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
            at sun.nio.ch.IOUtil.read(IOUtil.java:195)
            at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
            at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:110)
            at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:97)
            at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
            at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169)
            at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150)
            at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355)
            at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
            at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:263)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:887)

    I googled and applied the JVM arguments mentioned below, but the same exceptions still occur:

    -XX:MaxDirectMemorySize=768m

    -Xms512m

    How can I resolve this? Is any other JVM argument tuning required?
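
    For reference, below is a minimal sketch of consumer-side caps that bound how much memory each fetch can pin (kafkaConsumerProperties is the Properties object built in the code further down). Both stack traces end in NetworkReceive reading a fetch response, and on clients 0.10.1+ each in-flight fetch may buffer up to fetch.max.bytes (default 50 MB), so five threads on a 1.5 GB host can run out of memory with defaults alone. The values here are illustrative assumptions, not tested settings:

    // Sketch only: lower the fetch-size ceilings from their defaults.
    // The values are illustrative assumptions for a 1.5 GB host.
    kafkaConsumerProperties.put("fetch.max.bytes", "10485760")          // 10 MB per fetch response (default 50 MB)
    kafkaConsumerProperties.put("max.partition.fetch.bytes", "524288")  // 512 KB per partition per fetch (default 1 MB)
    kafkaConsumerProperties.put("receive.buffer.bytes", "32768")        // 32 KB socket receive buffer (default 64 KB)

    Note also that -Xms only sets the initial heap; capping the maximum explicitly would look like this (consumer-app.jar is a placeholder name):

    java -Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m -jar consumer-app.jar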

    My Kafka consumer code is:

    import com.mongodb.DBObject
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener
    import org.apache.kafka.clients.consumer.ConsumerRecord
    import org.apache.kafka.clients.consumer.ConsumerRecords
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.clients.consumer.OffsetAndMetadata
    import org.apache.kafka.clients.consumer.OffsetCommitCallback
    import org.apache.kafka.common.TopicPartition
    import org.apache.kafka.common.errors.InterruptException
    import org.apache.kafka.common.errors.WakeupException
    import org.slf4j.Logger
    import org.slf4j.LoggerFactory
    import java.util.regex.Pattern
    
    class KafkaPollingConsumer implements Runnable {
    private static final Logger logger = LoggerFactory.getLogger(KafkaPollingConsumer.class)
    private static final String TAG = "[KafkaPollingConsumer]"
    private final KafkaConsumer<String, byte []> kafkaConsumer
    private Map<TopicPartition,OffsetAndMetadata> currentOffsetsMap = new HashMap<>()
    List topicNameList
    Map kafkaTopicConfigMap = new HashMap<String,Object>()
    Map kafkaTopicMessageListMap = new HashMap<String,List>()
    Boolean isRebalancingTriggered = false
    private final Long REBALANCING_SLEEP_TIME = 1000
    
    public KafkaPollingConsumer(String serverType, String groupName, String topicNameRegex, Integer batchSize, Integer maxPollTime, Integer requestTime){
        logger.debug("{} [Constructor] [Enter] Thread Name {} serverType group Name TopicNameRegex",TAG,Thread.currentThread().getName(),serverType,groupName,topicNameRegex)
        logger.debug("Populating Property for kafak consumer")
        logger.debug("BatchSize {}",batchSize)
        Properties kafkaConsumerProperties = new Properties()
        kafkaConsumerProperties.put("group.id", groupName)
        kafkaConsumerProperties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        kafkaConsumerProperties.put("value.deserializer", "com.custom.kafkaconsumerv2.deserializer.CustomObjectDeserializer")
        switch(serverType){
            case KafkaTopicConfigEntity.KAFKA_NODE_TYPE_ENUM.Priority.toString() :
                kafkaConsumerProperties.put("bootstrap.servers",ConfigLoader.conf.kafkaServer.priority.kafkaNode)
                kafkaConsumerProperties.put("enable.auto.commit",ConfigLoader.conf.kafkaServer.priority.consumer.enable.auto.commit)
                kafkaConsumerProperties.put("auto.offset.reset",ConfigLoader.conf.kafkaServer.priority.consumer.auto.offset.reset)
                break
            case KafkaTopicConfigEntity.KAFKA_NODE_TYPE_ENUM.Bulk.toString() :
                kafkaConsumerProperties.put("bootstrap.servers",ConfigLoader.conf.kafkaServer.bulk.kafkaNode)
                kafkaConsumerProperties.put("enable.auto.commit",ConfigLoader.conf.kafkaServer.bulk.consumer.enable.auto.commit)
                kafkaConsumerProperties.put("auto.offset.reset",ConfigLoader.conf.kafkaServer.bulk.consumer.auto.offset.reset)
                kafkaConsumerProperties.put("max.poll.records",1)
                kafkaConsumerProperties.put("max.poll.interval.ms",600000)
                kafkaConsumerProperties.put("request.timeout.ms",600005)
                break
            default :
                throw new IllegalArgumentException("Invalid server type")
        }
        logger.debug("{} [Constructor] KafkaConsumer Property Populated {}",properties.toString())
        kafkaConsumer = new KafkaConsumer<String, byte []>(kafkaConsumerProperties)
        topicNameList = topicNameRegex.split(Pattern.quote('|'))
        logger.debug("{} [Constructor] Kafkatopic List {}",topicNameList.toString())
        logger.debug("{} [Constructor] Exit",TAG)
    }
    
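    // Rebalance listener: on revocation, flush batched messages and commit
    // offsets before the partitions move; on assignment, resume polling.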
    private class HandleRebalance implements ConsumerRebalanceListener {
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            logger.error('{} In onPartitionsAssigned setting isRebalancingTriggered to false',TAG)
            isRebalancingTriggered = false
        }
    
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            logger.error("{} In onPartitionsRevoked setting osRebalancingTriggered to true",TAG)
            isRebalancingTriggered = true
            publishAllKafkaTopicBatchMessages()
            commitOffset()
    
        }
    }
    
    private class AsyncCommitCallBack implements OffsetCommitCallback{
    
        @Override
        void onComplete(Map<TopicPartition, OffsetAndMetadata> map, Exception e) {
    
        }
    }
    
    @Override
    void run() {
        logger.debug("{} Starting Thread ThreadName {}",TAG,Thread.currentThread().getName())
        populateKafkaConfigMap()
        initializeKafkaTopicMessageListMap()
        String topicName
        String consumerClassName
        String consumerMethodName
        Boolean isBatchJob
        Integer batchSize = 0
        final Thread mainThread = Thread.currentThread()
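        // Shutdown hook: wake the consumer out of poll() and wait for the
        // polling thread to run its finally block before the JVM exits.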
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                logger.error("{},gracefully shutdowning thread {}",TAG,mainThread.getName())
                kafkaConsumer.wakeup()
                try {
                    mainThread.join()
                } catch (InterruptedException exception) {
                    logger.error("{} Error : {}",TAG,exception.getStackTrace().join("\n"))
                }
            }
        })
        kafkaConsumer.subscribe(topicNameList , new HandleRebalance())
        try{
            while(true){
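                // Each cycle: poll (unless a rebalance is in flight), dispatch every
                // record to its configured consumer class, then commit and flush batches.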
                logger.debug("{} Starting Consumer with polling time in ms 100",TAG)
                ConsumerRecords kafkaRecords
                if(isRebalancingTriggered == false) {
                    kafkaRecords = kafkaConsumer.poll(100)
                }
                else{
                    logger.error("{} in rebalancing going to sleep",TAG)
                    Thread.sleep(REBALANCING_SLEEP_TIME)
                    continue
                }
                for(ConsumerRecord record in kafkaRecords){
                    if(isRebalancingTriggered == true){
                        break
                    }
                    currentOffsetsMap.put(new TopicPartition(record.topic(), record.partition()),new OffsetAndMetadata(record.offset() +1))
                    topicName = record.topic()
                    DBObject kafkaTopicConfigDBObject = kafkaTopicConfigMap.get(topicName)
                    consumerClassName = kafkaTopicConfigDBObject.get(KafkaTopicConfigEntity.CLASS_NAME_KEY)
                    consumerMethodName = kafkaTopicConfigDBObject.get(KafkaTopicConfigEntity.METHOD_NAME_KEY)
                    isBatchJob = kafkaTopicConfigDBObject.get(KafkaTopicConfigEntity.IS_BATCH_JOB_KEY)
                    logger.debug("Details about Message")
                    logger.debug("Thread {}",mainThread.getName())
                    logger.debug("Topic {}",topicName)
                    logger.debug("Partition {}",record.partition().toString())
                    logger.debug("Offset {}",record.offset().toString())
                    logger.debug("clasName {}",consumerClassName)
                    logger.debug("methodName {}",consumerMethodName)
                    logger.debug("isBatchJob {}",isBatchJob.toString())
                    Object message = record.value()
                    logger.debug("message {}",message.toString())
                    if(isBatchJob == true){
                        prepareMessagesBatch(topicName,message)
                        //batchSize = Integer.parseInt(kafkaTopicConfigDBObject.get(KafkaTopicConfigEntity.BATCH_SIZE_KEY).toString())
                        //logger.debug("batchSize {}",batchSize.toString())
                    }
                    else{
                        publishMessageToNonBatchConsumer(consumerClassName,consumerMethodName,message)
                    }
                    //publishMessageToConsumers(consumerClassName,consumerMethodName,isBatchJob,batchSize,message,topicName)
                    //try {
                    //  kafkaConsumer.commitAsync(currentOffsetsMap,new AsyncCommitCallBack())
                    logger.debug("{} Commiting Messages to Kafka",TAG)
                    //}
                    /*catch(Exception exception){
                        kafkaConsumer.commitSync(currentOffsetsMap)
                        currentOffsetsMap.clear()
                        logger.error("{} Error while commiting async so commiting in sync {}",TAG,exception.getStackTrace().join("\n"))
                    }*/
                }
                commitOffset()
                publishAllKafkaTopicBatchMessages()
            }
        }
        catch(InterruptException exception){
            logger.error("{} In InterruptException",TAG)
            logger.error("{} In Exception exception message {}",TAG,exception.getMessage())
            logger.error("{} Exception {}",TAG,exception.getStackTrace().join("\n"))
        }
        catch (WakeupException exception) {
            logger.error("{} In WakeUp Exception",TAG)
            logger.error("{} In Exception exception message {}",TAG,exception.getMessage())
            logger.error("{} Exception {}",TAG,exception.getStackTrace().join("\n"))
        }
        catch(Exception exception){
            logger.error("{} In Exception",TAG)
            logger.error("{} In Exception exception message {}",TAG,exception.getMessage())
            logger.error("{} Exception {}",TAG,exception.getStackTrace().join("\n"))
        }
        finally {
            logger.error("{} In finally commiting remaining offset ",TAG)
            publishAllKafkaTopicBatchMessages()
            //kafkaConsumer.commitSync(currentOffsetsMap)
            kafkaConsumer.close()
            logger.error("{} Exiting Consumer",TAG)
        }
    }
    
    private void commitOffset(){
        logger.debug("{} [commitOffset] Enter")
        logger.debug("{} currentOffsetMap {}",currentOffsetsMap.toString())
        if(currentOffsetsMap.size() > 0) {
            kafkaConsumer.commitSync(currentOffsetsMap)
            currentOffsetsMap.clear()
        }
        logger.debug("{} [commitOffset] Exit")
    
    }
    
    private void publishMessageToConsumers(String consumerClassName,String consumerMethodName,Boolean isBatchJob,Integer batchSize,Object message, String topicName){
        logger.debug("{} [publishMessageToConsumer] Enter",TAG)
        if(isBatchJob == true){
            publishMessageToBatchConsumer(consumerClassName, consumerMethodName,batchSize, message, topicName)
        }
        else{
            publishMessageToNonBatchConsumer(consumerClassName, consumerMethodName, message)
        }
        logger.debug("{} [publishMessageToConsumer] Exit",TAG)
    }
    
    private void publishMessageToNonBatchConsumer(String consumerClassName, String consumerMethodName, message){
        logger.debug("{} [publishMessageToNonBatchConsumer] Enter",TAG)
        executeConsumerMethod(consumerClassName,consumerMethodName,message)
        logger.debug("{} [publishMessageToNonBatchConsumer] Exit",TAG)
    }
    
    private void publishMessageToBatchConsumer(String consumerClassName, String consumerMethodName, Integer batchSize, Object message, String topicName){
        logger.debug("{} [publishMessageToBatchConsumer] Enter",TAG)
        List consumerMessageList = kafkaTopicMessageListMap.get(topicName)
        consumerMessageList.add(message)
        if(consumerMessageList.size() == batchSize){
            logger.debug("{} [publishMessageToBatchConsumer] Pushing Messages In Batches",TAG)
            executeConsumerMethod(consumerClassName, consumerMethodName, consumerMessageList)
            consumerMessageList.clear()
        }
        kafkaTopicMessageListMap.put(topicName,consumerMessageList)
        logger.debug("{} [publishMessageToBatchConsumer] Exit",TAG)
    }
    
    private void populateKafkaConfigMap(){
        logger.debug("{} [populateKafkaConfigMap] Enter",TAG)
        KafkaTopicConfigDBService kafkaTopicConfigDBService = KafkaTopicConfigDBService.getInstance()
        topicNameList.each { topicName ->
            DBObject kafkaTopicDBObject = kafkaTopicConfigDBService.findByTopicName(topicName)
            kafkaTopicConfigMap.put(topicName,kafkaTopicDBObject)
        }
        logger.debug("{} [populateKafkaConfigMap] kafkaConfigMap {}",TAG,kafkaTopicConfigMap.toString())
        logger.debug("{} [populateKafkaConfigMap] Exit",TAG)
    }
    
    private void initializeKafkaTopicMessageListMap(){
        logger.debug("{} [initializeKafkaTopicMessageListMap] Enter",TAG)
        topicNameList.each { topicName ->
            kafkaTopicMessageListMap.put(topicName,[])
        }
        logger.debug("{} [populateKafkaConfigMap] kafkaTopicMessageListMap {}",TAG,kafkaTopicMessageListMap.toString())
        logger.debug("{} [initializeKafkaTopicMessageListMap] Exit",TAG)
    }
    
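    // Dynamically invokes a static method on the configured class using
    // Groovy's dynamic dispatch: Class.forName(className)."$methodName"(messages).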
    private void executeConsumerMethod(String className, String methodName, def messages){
        try{
            logger.debug("{} [executeConsumerMethod] Enter",TAG)
            logger.debug("{} [executeConsumerMethod] className  {} methodName {} messages {}",TAG,className,methodName,messages.toString())
            Class.forName(className)."$methodName"(messages)
        } catch (Exception exception){
            logger.error("{} [{}] Error while executing method : {} of class: {} with params : {} - {}", TAG, Thread.currentThread().getName(), methodName,
                    className, messages.toString(), exception.getStackTrace().join("\n"))
        }
        logger.debug("{} [executeConsumerMethod] Exit",TAG)
    }
    
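    // Flushes every topic's accumulated batch; called on rebalance, at the
    // end of each poll cycle, and during shutdown.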
    private void publishAllKafkaTopicBatchMessages(){
        logger.debug("{} [publishAllKafkaTopicBatchMessages] Enter",TAG)
        String consumerClassName = null
        String consumerMethodName = null
        kafkaTopicMessageListMap.each { topicName, messageList ->
            if (messageList != null && messageList.size() > 0) {
                DBObject kafkaTopicDBObject = kafkaTopicConfigMap.get(topicName)
                consumerClassName = kafkaTopicDBObject.get(KafkaTopicConfigEntity.CLASS_NAME_KEY)
                consumerMethodName = kafkaTopicDBObject.get(KafkaTopicConfigEntity.METHOD_NAME_KEY)
                logger.debug("{} Pushing message in topic {} className {} methodName {} ", TAG, topicName, consumerClassName, consumerMethodName)
                if (messageList != null && messageList.size() > 0) {
                    executeConsumerMethod(consumerClassName, consumerMethodName, messageList)
                    messageList.clear()
                    kafkaTopicMessageListMap.put(topicName, messageList)
    
                }
            }
        }
        logger.debug("{} [publishAllKafkaTopicBatchMessages] Exit",TAG)
    }
    
    private void prepareMessagesBatch(String topicName,Object message){
        logger.debug("{} [prepareMessagesBatch] Enter",TAG)
        logger.debug("{} [prepareMessagesBatch] preparing batch for topic {}",TAG,topicName)
        logger.debug("{} [prepareMessagesBatch] preparting batch for message {}",TAG,message.toString())
        List consumerMessageList = kafkaTopicMessageListMap.get(topicName)
        consumerMessageList.add(message)
        kafkaTopicMessageListMap.put(topicName,consumerMessageList)
    
    }
    }
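
    For context, here is a hypothetical launcher (not part of the original post) showing how several of these consumers might be run on the threads the question mentions. The serverType value is taken from the switch above; the group name, topic regex, pool size, and numeric arguments are placeholders:

    import java.util.concurrent.ExecutorService
    import java.util.concurrent.Executors

    // Hypothetical: one KafkaPollingConsumer per thread. Five threads, each
    // buffering its own fetch responses, is the load that triggers the OOM
    // errors described above.
    ExecutorService pool = Executors.newFixedThreadPool(5)
    5.times {
        pool.submit(new KafkaPollingConsumer(
                KafkaTopicConfigEntity.KAFKA_NODE_TYPE_ENUM.Priority.toString(), // serverType
                "consumer-group-1",  // placeholder group name
                "topicA|topicB",     // placeholder topic regex
                10,                  // placeholder batchSize
                600000,              // placeholder maxPollTime (ms)
                600005))             // placeholder requestTime (ms)
    }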
    