Python: How can I develop in the Spyder IDE but have the processing done on a Spark cluster?

So I have been searching for a way to develop code on my local machine (Ubuntu 16.04 in my case) using the IPython console in the Spyder IDE (which ships with Anaconda), while having the processing done on a cluster (for example, one created on Azure HDInsight). I can run PySpark locally without problems (both via spark-shell and from Spyder), but I would like to know whether it is possible to run the code on a Spark/YARN(?) cluster to speed up processing, and still see the results in the IPython console in Spyder. I found a post on Stack Overflow () that seemed to address exactly this, but I get an error. The error appears both when I start Spyder "normally" and when I start it via the "spark-submit spyder.py" command:

 sc = SparkContext(conf=conf)
Traceback (most recent call last):
  File "<ipython-input-3-6b825dbb354c>", line 1, in <module>
    sc = SparkContext(conf=conf)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 115, in __init__
    conf, jsc, profiler_cls)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 172, in _do_init
    self._jsc = jsc or self._initialize_context(self._conf._jconf)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 235, in _initialize_context
    return self._jvm.JavaSparkContext(jconf)
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 1062, in __call__
    answer = self._gateway_client.send_command(command)
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 631, in send_command
    response = self.send_command(command)
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 624, in send_command
    connection = self._get_connection()
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 579, in _get_connection
    connection = self._create_connection()
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 585, in _create_connection
    connection.start()
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 697, in start
    raise Py4JNetworkError(msg, e)
Py4JNetworkError: An error occurred while trying to connect to the Java server
I created the cluster on Azure HDInsight, and I am not sure whether I retrieved the IP and port from the right place, or whether I first have to create an SSH tunnel. It is quite confusing. (The Py4JNetworkError itself just says the Python driver could not reach its local JVM gateway, which typically happens when the gateway JVM fails at startup, for example because it cannot connect to the configured master.)
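For reference, a minimal sketch of what such an SSH tunnel could look like from Python, assuming the third-party sshtunnel package is installed and that the Spark master listens on port 7077; the host name, user, and password below are hypothetical placeholders, not values from this post:

from sshtunnel import SSHTunnelForwarder

# Forward a local port to the Spark master port on the cluster head node.
# Host, username and password are placeholders (assumptions, not from the post).
tunnel = SSHTunnelForwarder(
    ('cluster-headnode.example.com', 22),     # SSH endpoint of the cluster
    ssh_username='sshuser',
    ssh_password='password',
    remote_bind_address=('127.0.0.1', 7077),  # master port as seen from the head node
    local_bind_address=('127.0.0.1', 7077),   # local port to point SparkConf at
)
tunnel.start()
# With the tunnel up, setMaster('spark://127.0.0.1:7077') would go through it.
# Call tunnel.stop() when done.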


I hope someone can help me. Thanks in advance.

Did you ever find an answer to this question? Not really, no. I have not used Spyder since. However, you can easily use Spark (or any other library) from inside the cluster via a Jupyter notebook: create the cluster, install Jupyter and the packages you need on it, start it, and your browser can reach it at the cluster's IP (assuming you have configured port forwarding on the cluster correctly). A short sketch of this appears after the code below.
import os
import sys

# Point PySpark at the local Java and Spark installations.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-oracle/"
os.environ["SPARK_HOME"] = "/usr/local/spark"
os.environ["PYLIB"] = os.environ["SPARK_HOME"] + "/python/lib"
os.environ["PYSPARK_PYTHON"] = "python2.7"

# Make the bundled py4j and pyspark packages importable.
sys.path.insert(0, os.environ["PYLIB"] + "/py4j-0.9-src.zip")
sys.path.insert(0, os.environ["PYLIB"] + "/pyspark.zip")
############################################################################

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

# Point the driver at the (remote) Spark master; the IP and port are redacted here.
conf = SparkConf().setMaster('spark://xx.x.x.xx:xxxxx').setAppName("building a warehouse")
sc = SparkContext(conf=conf)
sqlCtx = SQLContext(sc)

from pyspark.ml.feature import HashingTF, IDF, Tokenizer

sentenceData = sqlCtx.createDataFrame([
    (0, "Hi I heard about Spark"),
    (0, "I wish Java could use case classes"),
    (1, "Logistic regression models are neat")
], ["label", "sentence"])

# Split each sentence into words, hash the words into a fixed-size
# term-frequency vector, then rescale by inverse document frequency.
tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
wordsData = tokenizer.transform(sentenceData)
hashingTF = HashingTF(inputCol="words", outputCol="rawFeatures", numFeatures=20)
featurizedData = hashingTF.transform(wordsData)
idf = IDF(inputCol="rawFeatures", outputCol="features")
idfModel = idf.fit(featurizedData)
rescaledData = idfModel.transform(featurizedData)

# Print the first few TF-IDF vectors.
for features_label in rescaledData.select("features", "label").take(3):
    print(features_label)
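
If the code instead runs on the cluster itself (for example from a Jupyter notebook, as suggested above), the driver would normally attach to YARN rather than to a spark:// URL, and no IP or tunnel is needed. A minimal sketch under that assumption; the "yarn-client" master string matches Spark 1.x-era PySpark, which is what the py4j-0.9 paths above suggest:

from pyspark import SparkConf, SparkContext

# On an HDInsight node the Hadoop/YARN configuration is already in place,
# so pointing the master at YARN is enough.
conf = SparkConf().setMaster("yarn-client").setAppName("notebook on cluster")
sc = SparkContext(conf=conf)

# Quick sanity check that work is actually distributed across the cluster.
print(sc.parallelize(range(1000)).sum())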