Problem connecting Spark to Cassandra in Java


I have a server running docker, on which I created 3 Cassandra nodes, 2 Spark worker nodes and one Spark master node. Now I want to connect to Spark from my laptop through a Java application. My Java code is:

public SparkTestPanel(String id, User user) {
    super(id);
    form = new Form("form");
    form.setOutputMarkupId(true);
    this.add(form);
    SparkConf conf = new SparkConf(true);
    conf.setAppName("Spark Test");
    conf.setMaster("spark://172.11.100.156:9050");
    conf.set("spark.cassandra.connection.host", "cassandra-0");
    conf.set("spark.cassandra.connection.port", "9042");
    conf.set("spark.cassandra.auth.username", "cassandra");
    conf.set("spark.cassandra.auth.password", "cassandra"); 
    JavaSparkContext sc = null;
    try {
        sc = new JavaSparkContext(conf);
        CassandraTableScanJavaRDD<com.datastax.spark.connector.japi.CassandraRow> cassandraTable = javaFunctions(sc).cassandraTable("test", "test_table");

        List<com.datastax.spark.connector.japi.CassandraRow> collect = cassandraTable.collect();
        for(com.datastax.spark.connector.japi.CassandraRow cassandraRow : collect){
            Logger.getLogger(SparkTestPanel.class).error(cassandraRow.toString());
        }
    } finally {
        if (sc != null) { // the JavaSparkContext constructor may throw before sc is assigned
            sc.stop();
        }
    }

}
And I get the following error:

Caused by: java.lang.IllegalArgumentException: Cannot build a cluster without contact points
at com.datastax.driver.core.Cluster.checkNotEmpty(Cluster.java:119)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:112)
at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:178)
at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:1335)
at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:131)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:159)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:32)
at com.datastax.spark.connector.cql.RefCountedCache.syncAcquire(RefCountedCache.scala:69)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:57)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:79)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:122)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:330)
at com.datastax.spark.connector.cql.Schema$.tableFromCassandra(Schema.scala:350)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:50)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:137)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:262)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
What is happening in my application that causes this error?
Can anyone help?
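For reference, the comments below suggest that the connection succeeds once the driver is given the docker host's address and the published CQL port instead of the docker-internal container name. A hedged sketch of the corresponding settings as a spark-defaults.conf fragment (the IP 172.11.100.156 and port 7005 are taken from the discussion; adjust them to your own port mapping):

```
# spark-defaults.conf — addresses the driver machine can actually reach,
# not docker-internal names (values from the question's comments)
spark.cassandra.connection.host    172.11.100.156
spark.cassandra.connection.port    7005
spark.cassandra.auth.username      cassandra
spark.cassandra.auth.password      cassandra
```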

Have you tried putting cassandra-0's IP address in "spark.cassandra.connection.host"? cassandra-0 has to be resolvable on the master and the worker nodes.

Yes, I tried the IP address as seen inside docker. But when I use the server's IP address ("172.11.100.156") with spark.cassandra.connection.port=7005 (the host port I forward to the cassandra-0 container's 9042 port), it connects and I don't see the error. @addmeaning cassandra-0 is resolvable on the master and the workers, but my application (which does not run inside docker) cannot resolve it.

@Undefined_variable Post the configuration used to create the cluster. Here is the log:
2017-08-17 12:14:31,906 ERROR CassandraConnectorConf:72 - Unknown host 'cassandra-0'
java.net.UnknownHostException: cassandra-0: nodename nor servname provided, or not known
...
Caused by: java.lang.IllegalArgumentException: Cannot build a cluster without contact points
... (same stack trace as above)
root@708d210056af:/# ping cassandra-0
PING cassandra-0 (21.1.0.21): 56 data bytes
64 bytes from 21.1.0.21: icmp_seq=0 ttl=64 time=0.554 ms
64 bytes from 21.1.0.21: icmp_seq=1 ttl=64 time=0.117 ms
64 bytes from 21.1.0.21: icmp_seq=2 ttl=64 time=0.116 ms
64 bytes from 21.1.0.21: icmp_seq=3 ttl=64 time=0.093 ms
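The ping above shows that cassandra-0 resolves inside the docker network, while the driver running on the laptop fails at exactly that lookup (`Unknown host 'cassandra-0'`), leaving the connector with no contact points, hence "Cannot build a cluster without contact points". A small self-contained check (a sketch; "cassandra-0" is just the container name from the question) makes it easy to verify what a given machine can resolve:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {

    // True if this machine's resolver can turn the name into an address.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "cassandra-0" is the container name from the question; it is
        // expected to resolve inside the docker network but not outside it.
        String host = args.length > 0 ? args[0] : "cassandra-0";
        System.out.println(host + " resolvable from here: " + resolves(host));
    }
}
```

If the name does not resolve from the laptop, either add a hosts-file entry mapping cassandra-0 to the docker host's address (assuming the relevant ports are published), or simply pass an address the driver can reach in `spark.cassandra.connection.host`.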