
Apache Spark WARN Session: Error creating pool to /xxx.xxx.xxx.xxx:28730

Tags: apache-spark, ibm-cloud, compose, scylla, analytics-engine

I am trying to connect from Spark 2.3 running on IBM Analytics Engine to a Scylla database running on IBM Cloud.

I am starting the spark-shell like this:

$ spark-shell --master local[1] \
--files jaas.conf \
--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0,datastax:spark-cassandra-connector:2.3.0-s_2.11,commons-configuration:commons-configuration:1.10 \
--conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf" \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf" \
--conf spark.cassandra.connection.host=xxx1.composedb.com,xxx2.composedb.com,xxx3.composedb.com \
--conf spark.cassandra.connection.port=28730 \
--conf spark.cassandra.auth.username=scylla \
--conf spark.cassandra.auth.password=SECRET \
--conf spark.cassandra.connection.ssl.enabled=true \
--num-executors 1 \
--executor-cores 1
Then I execute the following Spark Scala code:

import com.datastax.spark.connector._
import org.apache.spark.sql.cassandra._

val stocksRdd = sc.cassandraTable("stocks", "stocks")
stocksRdd.count()
However, I see a series of warnings:

18/08/23 10:11:01 WARN Cluster: You listed xxx1.composedb.com/xxx.xxx.xxx.xxx:28730 in your contact points, but it wasn't found in the control host's system.peers at startup
18/08/23 10:11:01 WARN Cluster: You listed xxx1.composedb.com/xxx.xxx.xxx.xxx:28730 in your contact points, but it wasn't found in the control host's system.peers at startup
18/08/23 10:11:06 WARN Session: Error creating pool to /xxx.xxx.xxx.xxx:28730
com.datastax.driver.core.exceptions.ConnectionException: [/xxx.xxx.xxx.xxx:28730] Pool was closed during initialization
...
However, after the stack trace in the warnings, I see the expected output:

res2: Long = 4 
If I navigate to the Compose UI, I see a mapping JSON:

[
  {"xxx.xxx.xxx.xxx:9042":"xxx1.composedb.com:28730"},
  {"xxx.xxx.xxx.xxx:9042":"xxx2.composedb.com:28730"},
  {"xxx.xxx.xxx.xxx:9042":"xxx3.composedb.com:28730"}
]
It seems the warnings are related to this map file.

What does this warning mean? Can I ignore it?



Note: I have seen similar questions, but I believe this one is different because of the map file, and because I have no control over how the Scylla cluster is set up by Compose.

This is just a warning. It is raised because the IPs Spark tries to reach are not known to Scylla itself. Apparently Spark is connecting to the cluster and retrieving the expected information anyway, so you should be fine.
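To see why the contact points are not recognized, you can compare the Compose map file with what the control host itself advertises. The following is a minimal sketch (not from the original post) that queries system.peers through the connector's CassandraConnector; the addresses it prints are the internal 9042 endpoints from the map JSON rather than the public composedb.com:28730 contact points, which is exactly the mismatch the driver warns about.

import scala.collection.JavaConverters._
import com.datastax.spark.connector.cql.CassandraConnector

// Sketch: list the peers the control host advertises and compare them with
// the contact points passed via spark.cassandra.connection.host.
CassandraConnector(sc.getConf).withSessionDo { session =>
  session.execute("SELECT peer, rpc_address FROM system.peers").all().asScala.foreach { row =>
    println(s"peer=${row.getInet("peer")} rpc_address=${row.getInet("rpc_address")}")
  }
}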

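If the noise bothers you, one option (my suggestion, not part of the original answer) is to raise the log level of the driver's Cluster and Session loggers once you are satisfied the warnings are benign. Assuming Spark's default log4j 1.x binding, this can be done directly in the spark-shell session:

import org.apache.log4j.{Level, Logger}

// Silence the benign contact-point / pool-creation warnings from the DataStax Java driver.
Logger.getLogger("com.datastax.driver.core.Cluster").setLevel(Level.ERROR)
Logger.getLogger("com.datastax.driver.core.Session").setLevel(Level.ERROR)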