Scala Lagom framework: Kafka topic not created


I am trying to write a small microservice using the Lagom framework, with the read side backed by MySQL.

The goal of this service is to expose an API to create, update, and read employees.

However, when it runs, the project never creates the Kafka topic or publishes messages to it. I have tried debugging, reading the documentation, and referring to several similar projects, but with no luck so far.

The Lagom documentation and similar projects are the only sources of help for this fairly new technology. I really need help debugging and understanding this problem; let me know if this is the right platform to ask for such help.

The steps I follow to create an employee, and hopefully see the Kafka topic created, are:

#1. sbt runAll

#2. curl -X POST \
  http://localhost:9000/api/employees \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "128",
    "name": "Shivam",
    "gender": "M",
    "doj": "2017-01-16",
    "pfn": "PFKN110"
}'

#3. /opt/kafka_2.12-2.3.0/bin/kafka-topics.sh --list --zookeeper localhost:2181

#4. /opt/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic employee --from-beginning

The employee service, to which I added a method getEmployees:
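The question does not include the service trait itself, but it matters here: Lagom only creates and publishes to a Kafka topic if the API trait declares a topic and the implementation provides it. A sketch of what such a descriptor could look like follows (the names EmployeeService, employeeTopic, and the message type are assumptions, not taken from the question; this fragment depends on Lagom and is not compiled here):

```scala
import akka.{Done, NotUsed}
import com.lightbend.lagom.scaladsl.api.broker.Topic
import com.lightbend.lagom.scaladsl.api.transport.Method
import com.lightbend.lagom.scaladsl.api.{Descriptor, Service, ServiceCall}
import com.lightbend.lagom.scaladsl.api.Service._

trait EmployeeService extends Service {

  def addEmployee(): ServiceCall[Employee, Done]
  def getEmployees(): ServiceCall[NotUsed, Vector[Employee]]

  // Without a topic declared here (and implemented in EmployeeServiceImpl),
  // no Kafka topic is ever created for the service.
  def employeeTopic(): Topic[EmployeeKafkaEvent]

  override final def descriptor: Descriptor =
    named("employee")
      .withCalls(
        restCall(Method.POST, "/api/employees", addEmployee _),
        restCall(Method.GET, "/api/employees", getEmployees _)
      )
      .withTopics(
        topic("employee", employeeTopic _)
      )
      .withAutoAcl(true)
}
```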

I added a line to the application config, so the Cassandra settings look like this:

cassandra-journal.keyspace = ${employees.cassandra.keyspace}
cassandra-snapshot-store.keyspace = ${employees.cassandra.keyspace}
lagom.persistence.read-side.cassandra.keyspace = ${employees.cassandra.keyspace}

EmployeeApplication looks like this:

abstract class EmployeeApplication(context: LagomApplicationContext)
  extends LagomApplication(context)
    with LagomKafkaComponents
    with CassandraPersistenceComponents
    with HikariCPComponents
    with AhcWSComponents {
  // (service, repository, and read-side processor wiring not shown in the question)
}

The following method was added to EmployeeServiceImpl:

  override def getEmployees(): ServiceCall[NotUsed, Vector[Employee]] = ServiceCall { _ =>
    employeeRepository.getEmployees()
  }
The EmployeeRepository I have written as follows:

package com.codingkapoor.employee.persistence.read

import java.time.LocalDate

import akka.Done
import com.codingkapoor.employee.api.Employee
import com.lightbend.lagom.scaladsl.persistence.cassandra.CassandraSession

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

class EmployeeRepository(session: CassandraSession) {

  def createTable: Future[Done] = {
    for {
      r <- session.executeCreateTable("CREATE TABLE IF NOT EXISTS employees(id text, name text, gender text, PRIMARY KEY (id))")
    } yield r
  }

  def getEmployees(): Future[Vector[Employee]] = {
    session.selectAll("SELECT * FROM employees").map(rows =>
      rows.map(r => Employee(
        id = r.getString("id"),
        name = r.getString("name"),
        gender = r.getString("gender"),
        // doj and pfn are not stored in the read-side table, so placeholders are returned
        doj = LocalDate.now(),
        pfn = "pfn")).toVector)
  }
}

The event processor looks like this:

package com.codingkapoor.employee.persistence.read

import akka.Done
import com.codingkapoor.employee.persistence.write.{EmployeeAdded, EmployeeEvent}
import com.datastax.driver.core.{BoundStatement, PreparedStatement}
import com.lightbend.lagom.scaladsl.persistence.cassandra.{CassandraReadSide, CassandraSession}
import com.lightbend.lagom.scaladsl.persistence.{AggregateEventTag, EventStreamElement, ReadSideProcessor}

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Future, Promise}

class EmployeeEventProcessor(readSide: CassandraReadSide, employeeRepository: EmployeeRepository, session: CassandraSession)
  extends ReadSideProcessor[EmployeeEvent] {

  override def buildHandler(): ReadSideProcessor.ReadSideHandler[EmployeeEvent] =
    readSide
      .builder[EmployeeEvent]("employeeoffset")
      .setGlobalPrepare(() => employeeRepository.createTable)
      .setPrepare(_ => prepare())
      .setEventHandler[EmployeeAdded](processEmployeeAdded)
      .build()

  // The INSERT statement is prepared exactly once; this shared Promise hands the
  // resulting PreparedStatement to every event handler invocation.
  private val createPromise = Promise[PreparedStatement]

  private def createFuture: Future[PreparedStatement] = createPromise.future

  override def aggregateTags: Set[AggregateEventTag[EmployeeEvent]] = Set(EmployeeEvent.Tag)


  private def prepare(query: String, promise: Promise[PreparedStatement]): Future[Done] = {
    val f = session.prepare(query)
    promise.completeWith(f)
    f.map(_ => Done)
  }

  def prepare(): Future[Done] = {
    for {
      r <- prepare("INSERT INTO employees (id, name, gender) VALUES (?, ?, ?)", createPromise)
    } yield r
  }

  private def processEmployeeAdded(eventElement: EventStreamElement[EmployeeAdded]): Future[List[BoundStatement]] = {
    createFuture.map { ps =>
      val bindCreate = ps.bind()
      bindCreate.setString("id", eventElement.event.id)
      bindCreate.setString("name", eventElement.event.name)
      bindCreate.setString("gender", eventElement.event.gender)

      List(bindCreate)
    }
  }

}
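The Promise-based setup in the processor is easy to misread, so here is the pattern in isolation: prepare() fulfils a single shared Promise with the prepared statement, and every later event handler reuses that future. This is a self-contained simulation with a fake statement type, so it runs without Cassandra:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stand-in for com.datastax.driver.core.PreparedStatement so this runs without a database.
final case class FakePreparedStatement(query: String)

// One shared Promise: prepare() fulfils it exactly once, handlers reuse the future.
val createPromise = Promise[FakePreparedStatement]()

def createFuture: Future[FakePreparedStatement] = createPromise.future

def prepare(query: String): Future[Unit] = {
  val f = Future(FakePreparedStatement(query)) // session.prepare(query) in the real processor
  createPromise.completeWith(f)
  f.map(_ => ())
}

Await.result(prepare("INSERT INTO employees (id, name, gender) VALUES (?, ?, ?)"), 5.seconds)
val prepared = Await.result(createFuture, 5.seconds)
println(prepared.query.takeWhile(_ != ' ')) // prints "INSERT"
```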


I added the getEmployees method to check whether the read side is working correctly. Also note that after sending the create-employee request, you need to wait 10-20 seconds before the employee shows up in the database; after that you can fetch it from the read side.
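Because the read side catches up asynchronously, a test client has to poll rather than read immediately after the write. A minimal, self-contained retry helper illustrating the idea (the names and timings are illustrative; nothing here is Lagom API):

```scala
import scala.annotation.tailrec

// Retry a read until it returns a value or the attempts run out.
@tailrec
def pollUntil[A](attempts: Int, delayMs: Long)(read: () => Option[A]): Option[A] =
  read() match {
    case found @ Some(_)      => found
    case None if attempts > 1 => Thread.sleep(delayMs); pollUntil(attempts - 1, delayMs)(read)
    case None                 => None
  }

// Simulate a read side that only becomes consistent on the third poll.
var polls = 0
val employee = pollUntil(attempts = 10, delayMs = 10) { () =>
  polls += 1
  if (polls >= 3) Some("Shivam") else None
}
println(employee) // prints "Some(Shivam)"
```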

After some effort, I was able to resolve the issue. There were essentially two problems:

  • The first problem was with the traits
    ReadSideJdbcPersistenceComponents
    and
    WriteSideCassandraPersistenceComponents
    that are extended to create the
    EmployeeApplication
    . Because of a bug in Lagom, the order in which these two traits are mixed in matters: the combination only works when ReadSideJdbcPersistenceComponents is mixed in before WriteSideCassandraPersistenceComponents.

    Take a look at the Lagom samples.
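Applied to the EmployeeApplication from the question, the fix amounts to swapping CassandraPersistenceComponents for the split read/write traits in this exact order (a sketch against the Lagom scaladsl API; not compiled here):

```scala
abstract class EmployeeApplication(context: LagomApplicationContext)
  extends LagomApplication(context)
    with LagomKafkaComponents
    // Because of the Lagom bug mentioned above, the JDBC read-side trait must be
    // mixed in BEFORE the Cassandra write-side trait:
    with ReadSideJdbcPersistenceComponents
    with WriteSideCassandraPersistenceComponents
    with HikariCPComponents
    with AhcWSComponents {
  // (service and read-side wiring unchanged)
}
```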

  • In addition, I had not implemented polymorphic event streams correctly, as explained in the Lagom documentation.

  • I have now put up a working GitHub project that you can refer to.

Comments:

  • I have run your code and I see the topic in Kafka. Can you explain what exactly isn't working?
  • @VladislavKievski First of all, thank you very much for taking the time and effort to help me understand this issue. I have updated the steps to create an employee and to see the Kafka topic created for employees. In step 3 above, I don't see any topic named "employee" being created. Also, I would like to know whether you saw the Kafka topic created before issuing the curl command? Did you set up MySQL for this project? Which topics did you see? Please let me know if I can explain any of this better.
  • @VladislavKievski Also wondering whether you see data in Kafka and in the MySQL "employee" table.
  • I see the Kafka topic created after executing the curl command. No, I removed MySQL and replaced it with Cassandra where needed; that is easier for me to test. I only see one topic. All events should be stored in the database. Did I answer your questions?
  • @VladislavKievski That doesn't seem to work for me, and I can't figure out why. If you configured the read side with Cassandra, do you see data in the "employee" Cassandra keyspace? Please post the commands and results from Kafka and from the Cassandra "employee" keyspace. Your solution looks fine, but I still can't find the problem in mine. I want to try this project with the read side implemented over JDBC using Slick. However, I would like to point you to two important findings while debugging this issue. #1. I