How do I store DStream data (JSON) into Cassandra?


The DStream (JSON) I am receiving looks like this:

       import org.apache.spark.SparkConf
       import org.apache.spark.streaming.{Seconds, StreamingContext}
       import org.apache.spark.streaming.kafka.KafkaUtils

       val topics = "test"
       val zkQuorum = "localhost:2181"
       val group = "test-consumer-group"
       val numThreads = 1  // threads per topic; was undefined in the original
       val sparkConf = new SparkConf()
          .setAppName("XXXXX")
          .setMaster("local[*]")
          // the spark-cassandra-connector expects the "spark." prefix on these keys
          .set("spark.cassandra.connection.host", "127.0.0.1")
          .set("spark.cassandra.connection.port", "9042")

        val ssc = new StreamingContext(sparkConf, Seconds(2))
        ssc.checkpoint("checkpoint")
        val topicMap = topics.split(",").map((_, numThreads)).toMap

        // _._2 keeps only the message value (the JSON string)
        val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
With the program above I get the JSON data in a DStream.
How do I process this DStream and store it in Cassandra or Elasticsearch? That is, how do I take the data out of the DStream (in JSON format) and save it to Cassandra?

You need to import com.datastax.spark.connector._ and convert the elements of the DStream into an appropriate case class. Given JSON records such as

    [{"id":100,"firstName":"Beulah","lastName":"Fleming","gender":"female","ethnicity":"SpEd","height":167,"address":27,"createdDate":1494489672243,"lastUpdatedDate":1494489672244,"isDeleted":0},
     {"id":101,"firstName":"Traci","lastName":"Summers","gender":"female","ethnicity":"Frp","height":181,"address":544,"createdDate":1494510639611,"lastUpdatedDate":1494510639611,"isDeleted":0}]
map each element to that case class and save it with the implicit saveToCassandra function:

    case class Record(id: String, firstName: String, ...)
    val columns = SomeColumns("id", "first_name", ...)
    val mapped = lines.map(whateverDataYouHave => functionThatReturnsARecordObject)
    mapped.saveToCassandra(KEYSPACE_NAME, TABLE_NAME, columns)

For more information, check the spark-cassandra-connector documentation.
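As a concrete sketch of the "convert to a case class" step, here is one way the mapping function could look once a JSON record has been decoded into a Map by whatever JSON library you use. The `Record` fields and the `toRecord` helper are assumptions based on the sample JSON above, not part of the connector's API:

```scala
// Hypothetical mapping of one decoded JSON record into the case class
// that saveToCassandra will serialize. Only a subset of the sample
// fields is shown; extend Record with the columns your table needs.
case class Record(id: Int, firstName: String, lastName: String, gender: String)

def toRecord(m: Map[String, Any]): Record =
  Record(
    m("id").toString.toInt,   // JSON numbers may arrive as Int or Long
    m("firstName").toString,
    m("lastName").toString,
    m("gender").toString
  )

val sample = Map(
  "id" -> 100, "firstName" -> "Beulah",
  "lastName" -> "Fleming", "gender" -> "female"
)
println(toRecord(sample))  // Record(100,Beulah,Fleming,female)
```

In the streaming job this helper would sit inside `lines.map(...)` after the JSON parsing step, so each micro-batch yields an RDD of `Record` objects ready for `saveToCassandra`.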