How to reduce Kafka consumer/producer lag in Scala


I am looking for improvements to my Scala Kafka code. What should I do in the consumer and producer to reduce the lag? This is code I got from someone else. I know the code is not difficult, but I have never seen Scala code before and I have only just started learning Kafka, so I am having trouble finding the problem.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

import scala.util.Try

class KafkaMessenger(val servers: String, val sender: String) {
  val props = new Properties()
  props.put("bootstrap.servers", servers)
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  // Note: "producer.type" belongs to the old Scala producer; the new
  // KafkaProducer ignores it (sends are already asynchronous).

  val producer = new KafkaProducer[String, String](props)

  def send(topic: String, message: Any): Try[Unit] = Try {
    producer.send(new ProducerRecord(topic, message.toString))
  }

  def close(): Unit = producer.close()
}

object KafkaMessenger {
  def apply(host: String, topic: String, sender: String, message: String): Unit = {
    val messenger = new KafkaMessenger(host, sender)
    messenger.send(topic, message)
    messenger.close()
  }
}
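`KafkaMessenger.apply` above creates and closes a new `KafkaProducer` for every message, so each send pays the full connection and metadata setup cost. A common pattern is to hold one long-lived producer and reuse it across sends (`KafkaProducer` is thread-safe). A minimal sketch, assuming a `localhost:9092` broker and illustrative batching values:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// One producer per JVM, reused for every send.
object SharedKafkaMessenger {
  private lazy val producer: KafkaProducer[String, String] = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumption: adjust to your cluster
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    // Batching lets the producer group records into fewer, larger requests;
    // these values are illustrative, not tuned.
    props.put("linger.ms", "5")
    props.put("batch.size", "32768")
    new KafkaProducer[String, String](props)
  }

  def send(topic: String, message: String): Unit =
    // send() is asynchronous; keep the returned Future if you need
    // delivery confirmation.
    producer.send(new ProducerRecord[String, String](topic, message))

  def close(): Unit = producer.close()
}
```

With this shape, each `KafkaMessenger(host, topic, sender, message)` call becomes `SharedKafkaMessenger.send(topic, message)`, and `close()` is called once at shutdown instead of after every message.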
Here is the consumer code:

import java.util.Properties
import java.util.concurrent.Executors

import com.satreci.g2gs.common.impl.utils.KafkaMessageTypes._
import kafka.admin.AdminUtils
import kafka.consumer._
import kafka.utils.ZkUtils
import org.I0Itec.zkclient.{ZkClient, ZkConnection}
import org.slf4j.LoggerFactory

import scala.language.postfixOps

class KafkaListener(val zookeeper: String,
                    val groupId: String,
                    val topic: String,
                    val handleMessage: ByteArrayMessage => Unit,
                    val workJson: String = ""
                   ) extends AutoCloseable {
  private lazy val logger = LoggerFactory.getLogger(this.getClass)
  val config: ConsumerConfig = createConsumerConfig(zookeeper, groupId)
  val consumer: ConsumerConnector = Consumer.create(config)
  val sessionTimeoutMs: Int = 10 * 1000
  val connectionTimeoutMs: Int = 8 * 1000
  val zkClient: ZkClient = ZkUtils.createZkClient(zookeeper, sessionTimeoutMs, connectionTimeoutMs)
  val zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeper), false)

  def createConsumerConfig(zookeeper: String, groupId: String): ConsumerConfig = {
    val props = new Properties()
    props.put("zookeeper.connect", zookeeper)
    props.put("group.id", groupId)
    props.put("auto.offset.reset", "smallest")
    props.put("zookeeper.session.timeout.ms", "5000") 
    props.put("zookeeper.sync.time.ms", "200")
    props.put("auto.commit.interval.ms", "1000")
    props.put("partition.assignment.strategy", "roundrobin")
    new ConsumerConfig(props)
  }

  def run(threadCount: Int = 1): Unit = {
    // Make sure the topic exists before opening the streams.
    if (!AdminUtils.topicExists(zkUtils, topic)) {
      AdminUtils.createTopic(zkUtils, topic, 1, 1)
    }

    val streams = consumer.createMessageStreamsByFilter(Whitelist(topic), threadCount)

    val executor = Executors.newFixedThreadPool(threadCount)
    for (stream <- streams) {
      executor.submit(new MessageConsumer(stream))
    }
    logger.debug(s"KafkaListener started with $threadCount thread(s) (topic=$topic)")
  }

  override def close(): Unit = {
    consumer.shutdown()
    logger.debug(s"$topic Listener close")
  }

  class MessageConsumer(val stream: MessageStream) extends Runnable {
    override def run(): Unit = {
      val it = stream.iterator()
      while (it.hasNext()) {
        val message = it.next().message()
        if (workJson == "") {
          handleMessage(message)
        }
        else {
          val strMessage = new String(message)
          val newMessage = s"$strMessage/#/$workJson"
          // getBytes preserves multi-byte characters; mapping chars to
          // bytes truncates anything outside single-byte range.
          handleMessage(newMessage.getBytes)
        }
      }
    }
  }
}
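On the consumer side, `run()` already supports multiple stream threads; lag usually drops when the thread count matches the topic's partition count, since each thread can then own one partition. A hypothetical caller (broker address, group, topic, and thread count are all assumptions):

```scala
// Assumption: the topic has at least 4 partitions; with the old high-level
// consumer, threads beyond the partition count sit idle.
val listener = new KafkaListener(
  zookeeper = "localhost:2181",   // assumption: adjust to your ensemble
  groupId = "my-group",
  topic = "my-topic",
  handleMessage = bytes => println(new String(bytes))
)
listener.run(threadCount = 4)
```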

Specifically, I would like to change the structure in which a KafkaProducer object is created every time a message is sent. There also seem to be many other improvements that could reduce the lag.

Increase the number of consumer (KafkaListener) instances with the same group id.
This will raise the consumption rate, and eventually the gap between your producer and consumers will be minimized.
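Note that adding consumers in the same group only helps if the topic has more than one partition, and `run()` above creates missing topics with a single partition (`AdminUtils.createTopic(zkUtils, topic, 1, 1)`). A sketch of creating the topic with more partitions instead (the counts are assumptions, not recommendations):

```scala
// Create the topic with 4 partitions and replication factor 1, so up to
// 4 consumers in the same group can each own one partition.
if (!AdminUtils.topicExists(zkUtils, topic)) {
  AdminUtils.createTopic(zkUtils, topic, 4, 1)
}
```

Existing topics keep their partition count; to repartition an existing topic you would use the topic-alteration tooling rather than `createTopic`.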


As your answer assumes, this requires more than one partition to consume. I will consider it.