
Spark Streaming from Kafka and comparing with MemSQL records (count is incorrect)


We are receiving records from Kafka. In Spark Streaming we take the card number from each Kafka record and compare it against the MemSQL records, selecting card_number and count(*) grouped by card_number. However, the counts that come back in Spark Streaming are not correct.

For example, when we execute the query at the MemSQL command prompt, it gives the following output:

memsql> select card_number, count(*) from cardnumberalert5
        where inserted_time <= now()
          and inserted_time >= NOW() - INTERVAL 10 MINUTE
        group by card_number;
+------------------+----------+
| card_number      | count(*) |
+------------------+----------+
| 4556655960290527 |        2 |
| 6011255715328120 |        4 |
| 4532133676538232 |        2 |
| 6011614607071620 |        2 |
| 4024007117099605 |        2 |
| 347138718258304  |        4 |
+------------------+----------+
When the same SQL is executed from Spark Streaming, it prints the output as:

RECORDS FOUNDS ****************************************
CARDNUMBER KAFKA ############### 4024007117099605
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
Here MemSQL shows a count of 2 for card number 4556655960290527, but in Spark Streaming we instead get two records for that card number, each with a count of 1.
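This matches what the comments at the end of this thread diagnose: the group by is pushed down and executed on each MemSQL partition separately instead of running through the aggregator, so a card number whose rows live on two partitions comes back as two rows of count 1 rather than one row of count 2. A hypothetical illustration of the arithmetic in plain Scala (the partition layout is invented):

// Hypothetical layout: the two rows for this card number sit on two
// different MemSQL partitions, so each local group-by reports count 1.
val perPartitionGroupBy = Seq(
  Seq(("4556655960290527", 1L)), // partition 1's local result
  Seq(("4556655960290527", 1L))  // partition 2's local result
)

// With partition pushdown, Spark receives the unmerged local results.
val pushedDown = perPartitionGroupBy.flatten
// => Seq(("4556655960290527", 1), ("4556655960290527", 1))

// Going through the aggregator merges the local results into a final count.
val throughAggregator = pushedDown
  .groupBy(_._1)
  .map { case (card, counts) => (card, counts.map(_._2).sum) }
// => Map("4556655960290527" -> 2)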



Output printed in Spark Streaming:
RECORDS FOUNDS ****************************************
CARDNUMBER KAFKA ############### 4024007117099605
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011255715328120
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4532133676538232
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011614607071620
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4024007117099605
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 347138718258304
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011255715328120
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4532133676538232
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011614607071620
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4024007117099605
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 347138718258304
COUNT MEMSQL ############### 2
Spark Streaming program:

import java.util.HashMap

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.spark.SparkContext
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

class SparkKafkaConsumer11(val ssc: StreamingContext,
                           val sc: SparkContext,
                           val spark: SparkSession,
                           val topics: Array[String],
                           val kafkaParam: scala.collection.immutable.Map[String, Object]) {

  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParam)
  )

  // Take only the value from each (key, value) Kafka record for processing.
  val recordStream = stream.map(record => record.value)

  recordStream.foreachRDD { rdd =>

    val brokers = "174.24.154.244:9092" // Kafka broker for the alert topic
    val props = new HashMap[String, Object]()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.CLIENT_ID_CONFIG, "SparkKafkaConsumer__11")
    val producer = new KafkaProducer[String, String](props)

    // Card-number counts over the last 10 minutes, read from MemSQL.
    val result = spark.read
      .format("com.memsql.spark.connector")
      .options(Map(
        "query" -> "select card_number,count(*) from cardnumberalert5 where inserted_time <= now() and inserted_time >= NOW() - INTERVAL 10 MINUTE group by card_number",
        "database" -> "fraud"))
      .load()

    // Split each pipe-delimited Kafka record into an array of fields.
    val record = rdd.map(line => line.split("\\|"))

    record.collect().foreach { recordFields =>
      val now = new java.sql.Timestamp(System.currentTimeMillis)
      val cardnumber_kafka = recordFields(13)
      val sessionID = recordFields(1)
      println("RECORDS FOUNDS ****************************************")
      println("CARDNUMBER KAFKA ############### " + cardnumber_kafka)

      result.collect().foreach { t =>
        val resm1 = t.getAs[String]("card_number")
        println("CARDNUMBER MEMSQL ############### " + resm1)
        val resm2 = t.getAs[Long]("count(*)")
        println("COUNT MEMSQL ############### " + resm2)

        // Alert when this card number has been seen three or more times.
        if (resm1.equals(cardnumber_kafka) && resm2 > 2) {
          println("INSIDE IF CONDITION FOR MORE THAN 3 COUNT" + now)
          val messageToKafka = "---- THIRD OR MORE OCCURRENCE ---- " + cardnumber_kafka
          val message = new ProducerRecord[String, String]("output1", 0, sessionID, messageToKafka)
          try {
            producer.send(message)
          } catch {
            case e: Exception =>
              e.printStackTrace()
              System.exit(1)
          }
        }
      }
    }

    producer.close()
  }
}

Not sure how to fix this; any suggestions or help would be greatly appreciated.

Thanks in advance.
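As an aside, independent of the counting problem: the nested collect() calls in the program above pull both the Kafka batch and the MemSQL result to the driver and rescan every MemSQL row once per Kafka record. A possible leaner shape for the comparison, a sketch reusing the program's names rather than the author's code:

// Build a card_number -> count lookup once per micro-batch, then do
// constant-time lookups per Kafka record instead of a nested scan.
val countsByCard: Map[String, Long] = result.collect()
  .map(row => row.getAs[String]("card_number") -> row.getAs[Long]("count(*)"))
  .toMap

record.collect().foreach { fields =>
  val cardnumberKafka = fields(13)
  val sessionID = fields(1)
  if (countsByCard.get(cardnumberKafka).exists(_ > 2)) {
    val messageToKafka = "---- THIRD OR MORE OCCURRENCE ---- " + cardnumberKafka
    producer.send(new ProducerRecord[String, String]("output1", 0, sessionID, messageToKafka))
  }
}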
Comments:

Carl: Hi Bhavesh, it looks like the query is being incorrectly pushed down to the MemSQL partitions instead of running through the aggregator. To check whether this is happening, could you check how many partitions the result contains? I believe you can check with result.rdd.getNumPartitions. Also, which versions of MemSQL, Spark, and the MemSQL Spark connector are you using?

Bhavesh: Hi Carl, thank you very much for the reply. We checked the number of partitions of the result and got 4 partitions. We are using MemSQL 5.5.8 (MemSQL source distribution), Spark 2.1.0, and the MemSQL Spark 2.0 connector (com.memsql memsql-connector_2.11 2.0.2).

Carl: Sorry for the late reply, Bhavesh. I will file a task against our connector, since this looks like a bug in the pushdown logic. Thanks for bringing it up!

Answer:

We were able to resolve this issue by setting the following property in the Spark configuration.

Code:

.set("spark.memsql.disablePartitionPushdown","true")
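For context, a minimal sketch of where this property sits; the SparkConf and StreamingContext setup around it is an assumption, since the question does not show the driver code:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical driver setup; only spark.memsql.disablePartitionPushdown
// is taken from the fix above.
val conf = new SparkConf()
  .setAppName("SparkKafkaConsumer11")
  // Run MemSQL queries through the aggregator instead of pushing them down
  // to individual leaf partitions, so group-by aggregates such as count(*)
  // are computed over the whole table rather than once per partition.
  .set("spark.memsql.disablePartitionPushdown", "true")

val ssc = new StreamingContext(conf, Seconds(10)) // batch interval assumed

// With pushdown disabled, the DataFrame read from MemSQL should come back
// as a single partition; the comments above reported 4 with pushdown on:
// result.rdd.getNumPartitions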