Apache Kafka: connecting Akka Streams Kafka and Akka HTTP
I am currently working with Akka Streams and tried the following example: when a certain HTTP endpoint is requested, fetch the first element from Kafka. This is the code I wrote, and it works:
get {
  path("ticket" / IntNumber) { ticketNr =>
    val future = Consumer.plainSource(consumerSettings, Subscriptions.topics("tickets"))
      .take(1)
      .completionTimeout(5.seconds)
      .runWith(Sink.head)
    onComplete(future) {
      case Success(record) => complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, record.value()))
      case _               => complete(HttpResponse(StatusCodes.NotFound))
    }
  }
}
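For completeness, the snippet above assumes a `consumerSettings` value is in scope. A minimal sketch of how it might be built with Alpakka Kafka's `ConsumerSettings` (the broker address, group id, and system name here are placeholders, not from the original post):

```scala
import akka.actor.ActorSystem
import akka.kafka.ConsumerSettings
import org.apache.kafka.common.serialization.StringDeserializer

implicit val system: ActorSystem = ActorSystem("ticket-service")

// Hypothetical configuration: adjust bootstrap servers and group id
// to match your own Kafka setup.
val consumerSettings: ConsumerSettings[String, String] =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("ticket-service")
```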
I am just wondering whether this is an idiomatic way of working with (Akka) streams.
So, is there a more "direct" way to connect a Kafka stream to an HTTP response stream?
For example, on POST I do this:
val kafkaTicketsSink = Flow[String]
  .map(new ProducerRecord[Array[Byte], String]("tickets", _))
  .toMat(Producer.plainSink(producerSettings))(Keep.right)

post {
  path("ticket") {
    (entity(as[Ticket]) & extractMaterializer) { (ticket, mat) =>
      val f = Source.single(ticket).map(t => t.description).runWith(kafkaTicketsSink)(mat)
      onComplete(f) { _ =>
        val locationHeader = headers.Location(s"/ticket/${ticket.id}")
        complete(HttpResponse(StatusCodes.Created, headers = List(locationHeader)))
      }
    }
  }
}
Maybe this could be improved as well?

You can use `Sink.queue` to keep a single backpressured stream alive. Every time a request comes in, you pull an element from the materialized queue. This returns an element if one is available, and otherwise backpressures.

Roughly like this:
val queue = Consumer.plainSource(consumerSettings, Subscriptions.topics("tickets"))
  .runWith(Sink.queue())

get {
  path("ticket" / IntNumber) { ticketNr =>
    val future: Future[Option[ConsumerRecord[String, String]]] = queue.pull()
    onComplete(future) {
      case Success(Some(record)) => complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, record.value()))
      case _                     => complete(HttpResponse(StatusCodes.NotFound))
    }
  }
}
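To illustrate the pull semantics of `Sink.queue` in isolation, here is a minimal sketch using an in-memory `Source` instead of Kafka (the element values and system name are made up for the demo; it assumes akka-stream is on the classpath). Each `pull()` returns `Some(element)` while the stream has elements, and `None` once it completes:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Await
import scala.concurrent.duration._

object SinkQueueDemo extends App {
  implicit val system: ActorSystem = ActorSystem("demo")

  // Materialize the stream once; elements are delivered on demand via pull().
  val queue = Source(List("a", "b")).runWith(Sink.queue())

  println(Await.result(queue.pull(), 3.seconds)) // Some(a)
  println(Await.result(queue.pull(), 3.seconds)) // Some(b)
  println(Await.result(queue.pull(), 3.seconds)) // None: stream completed

  system.terminate()
}
```

Because the upstream only produces elements as fast as they are pulled, unanswered HTTP requests translate directly into backpressure on the Kafka consumer.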
For more information on `Sink.queue`, see the Akka Streams documentation.