Problem pushing a large number of messages into Cassandra from Spark Streaming using the Cassandra Spark Connector (Java)

Tags: java, apache-kafka, spark-streaming, spark-cassandra-connector

I have been trying to push a large number of JSON messages (each around 2 KB) from Kafka into Cassandra using Spark Streaming.

Simulator --> Kafka --> Spark Streaming --> Cassandra

Each component runs on a separate EC2 instance with 30 GB of RAM and an 8-core processor, as a standalone single-node setup.

When I try to push around 5 million messages from the simulator, Cassandra stops inserting after roughly 100,000 messages, while the Spark Streaming job just keeps creating batches (as shown in the Spark Streaming web UI). I even checked the logs but did not find any errors.
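For symptoms like this (batches queueing while the sink stalls), one knob worth checking is the ingestion rate. Below is a minimal sketch of throttling both the Kafka receiver and the connector's writes; the values are hypothetical, spark.streaming.receiver.maxRate is a standard Spark setting, and the spark.cassandra.output.* keys come from the connector's reference documentation (their exact behavior under 1.4.0-M3 should be verified):

import org.apache.spark.SparkConf;

public class ThrottledConfSketch {

    // Build a SparkConf that rate-limits ingestion and Cassandra writes.
    public static SparkConf build() {
        SparkConf conf = new SparkConf().setAppName("SparkStreamingKafka");
        // Cap records/sec accepted by each Kafka receiver, so batches
        // cannot accumulate faster than Cassandra can absorb them.
        conf.set("spark.streaming.receiver.maxRate", "10000");          // hypothetical value
        // Connector-side write throttling (keys from the
        // spark-cassandra-connector docs; verify support in 1.4.0-M3).
        conf.set("spark.cassandra.output.concurrent.writes", "2");      // hypothetical value
        conf.set("spark.cassandra.output.batch.size.rows", "200");      // hypothetical value
        conf.set("spark.cassandra.output.throughput_mb_per_sec", "5");  // hypothetical value
        return conf;
    }
}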

Also, I am not sure whether the way I am using the Spark connector in my code to write to Cassandra is correct.

Please see the code below.

/**
 * Spark Streaming to Cassandra code
 */
package org.sparkexample;

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import com.datastax.spark.connector.japi.CassandraJavaUtil;
import com.datastax.spark.connector.japi.CassandraStreamingJavaUtil;

import scala.Tuple2;

public class SparkStreamingKafkaTest {

    private SparkStreamingKafkaTest() {
    }

    public static void main(String[] args) {
        if (args.length < 6) {
            System.err.println("Usage: SparkStreamingKafkaTest <zkQuorum> <group> <topics> <numThreads> <concurrentWrites> <cassandraHost>");
            System.exit(1);
        }

        SparkConf sparkConf = new SparkConf().setAppName("SparkStreamingKafka");

        // Cassandra-specific settings: concurrent writes and contact point.
        sparkConf.set("spark.cassandra.output.concurrent.writes", args[4]);
        sparkConf.set("spark.cassandra.connection.host", args[5]);

        // Create the context with a 2 second batch size.
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));

        int numThreads = Integer.parseInt(args[3]);
        Map<String, Integer> topicMap = new HashMap<String, Integer>();
        String[] topics = args[2].split(",");
        for (String topic : topics) {
            topicMap.put(topic, numThreads);
        }

        // Receiver-based Kafka stream of (key, message) pairs.
        JavaPairReceiverInputDStream<String, String> messages =
                KafkaUtils.createStream(jssc, args[0], args[1], topicMap);

        // Wrap each Kafka message in a WordCount bean, generating a
        // (timestamp + random) string as the row key.
        JavaDStream<WordCount> wc = messages.map(new Function<Tuple2<String, String>, WordCount>() {
            @Override
            public WordCount call(Tuple2<String, String> tuple2) {
                String key = System.currentTimeMillis() + "_" + Math.random();
                return new WordCount(key, tuple2._2());
            }
        });

        // Map bean properties to Cassandra column names.
        Map<String, String> map = new HashMap<String, String>();
        map.put("word", "word");
        map.put("count", "count");

        // Save every batch of the stream to mykeyspace.wordcount.
        CassandraStreamingJavaUtil.javaFunctions(wc)
                .writerBuilder("mykeyspace", "wordcount", CassandraJavaUtil.mapToRow(WordCount.class, map))
                .saveToCassandra();

        jssc.start();
        jssc.awaitTermination();
    }
}
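For comparison, the same write can be expressed per batch through the RDD-level Java API with foreachRDD. This is a sketch assuming the wc stream and the map column mapping from the code above (both would need to be declared final to be captured by the anonymous class, and org.apache.spark.api.java.JavaRDD must be imported):

// Per-batch variant of the stream write above (Spark 1.4-era API,
// where foreachRDD takes a Function<JavaRDD<T>, Void>).
wc.foreachRDD(new Function<JavaRDD<WordCount>, Void>() {
    @Override
    public Void call(JavaRDD<WordCount> rdd) {
        CassandraJavaUtil.javaFunctions(rdd)
                .writerBuilder("mykeyspace", "wordcount",
                        CassandraJavaUtil.mapToRow(WordCount.class, map))
                .saveToCassandra();
        return null;
    }
});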
I have been using the default cassandra.yaml, and the project has the following main dependencies:

  • spark-cassandra-connector_2.10-1.4.0-M3
  • spark-cassandra-connector-java_2.10-1.4.0-M3
  • cassandra-driver-core-2.1.7.1
  • spark-streaming-kafka_2.10-1.4.1
  • spark-streaming_2.10-1.4.1
  • spark-core_2.10-1.4.1
Please suggest what the problem might be.

The output of nodetool info and nodetool tpstats is shown below.

For reference, here is the WordCount bean used above:

package org.sparkexample;

import java.io.Serializable;

/** Simple serializable bean mapped to the mykeyspace.wordcount table. */
public class WordCount implements Serializable {

    private String word;
    private String count;

    public WordCount() {
    }

    public WordCount(String key, String count) {
        this.word = key;
        this.count = count;
    }

    public String getWord() {
        return word;
    }

    public void setWord(String word) {
        this.word = word;
    }

    public String getCount() {
        return count;
    }

    public void setCount(String count) {
        this.count = count;
    }
}
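The column mapping above implies a table with text columns word and count. The question does not show the schema, so the following is a sketch of one that would match, created with the bundled Java driver (cassandra-driver-core 2.1.x is already a dependency); the contact point and replication settings are assumptions:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CreateWordCountTable {
    public static void main(String[] args) {
        // Contact point is hypothetical; use the Cassandra EC2 host.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // Single-node setup, so SimpleStrategy with RF=1 (assumption).
        session.execute("CREATE KEYSPACE IF NOT EXISTS mykeyspace "
                + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
        // Both columns are text because WordCount stores them as String;
        // the actual schema in the question may differ.
        session.execute("CREATE TABLE IF NOT EXISTS mykeyspace.wordcount ("
                + "word text PRIMARY KEY, count text)");
        cluster.close();
    }
}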