Apache Spark: running PySpark and Kafka in a Jupyter notebook


I can run this in the terminal. My terminal command is:

bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 examples/src/main/python/sql/streaming/structured_kafka_wordcount.py localhost:9092 subscribe test
Now I want to run it in a Jupyter Python notebook. I tried to follow this (I can run the code in the link), but in my case it fails. Here is my code:

import os
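# Must be set before the JVM starts; once a SparkContext exists these args are ignored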
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 pyspark-shell"

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode
from pyspark.sql.functions import split

bootstrapServers = "localhost:9092"
subscribeType = "subscribe"
topics = "test"

spark = SparkSession\
    .builder\
    .appName("StructuredKafkaWordCount")\
    .getOrCreate()

# Create DataSet representing the stream of input lines from kafka
lines = spark\
    .readStream\
    .format("kafka")\
    .option("kafka.bootstrap.servers", bootstrapServers)\
    .option(subscribeType, topics)\
    .load()\
    .selectExpr("CAST(value AS STRING)")

# Split the lines into words
words = lines.select(
    # explode turns each item in an array into a separate row
    explode(
        split(lines.value, ' ')
    ).alias('word')
)

# Generate running word count
wordCounts = words.groupBy('word').count()

# Start running the query that prints the running counts to the console
query = wordCounts\
    .writeStream\
    .outputMode('complete')\
    .format('console')\
    .start()

query.awaitTermination()
The error message is:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-1-0344129c7d54> in <module>()
     14 
     15 # Create DataSet representing the stream of input lines from kafka
---> 16 lines = spark    .readStream    .format("kafka")    .option("kafka.bootstrap.servers", bootstrapServers)    .option(subscribeType, topics)    .load()    .selectExpr("CAST(value AS STRING)")
     ...

Py4JJavaError: An error occurred while calling o31.load.
: java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/StreamWriteSupport
    at java.base/java.lang.ClassLoader.defineClass1(Native Method)
    ...
Then I updated my kernel.json file with the following:

{
    "display_name": "PySpark",
    "language": "python",
    "argv": [ "/<usr>/anaconda3/bin/python", "-m", "ipykernel", "-f", "{connection_file}" ],
    "env": {
        "SPARK_HOME": "/<usr>/projects/spark-2.3.0",
        "PYSPARK_PYTHON": "/<usr>/anaconda3/bin/python",
        "PYTHONPATH": "/<usr>/projects/spark-2.3.0/spark/python/:/<usr>/projects/spark-2.3.0/spark/python/lib/py4j-0.10.6-src.zip",
        "PYTHONSTARTUP": "/<usr>/projects/spark-2.3.0/python/pyspark/shell.py",
        "PYSPARK_SUBMIT_ARGS": "--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 pyspark-shell"
    }
}
Then I get errors like the following:

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/<usr>/projects/spark-2.3.0/assembly/target/scala-2.11/jars/hadoop-auth-2.6.5.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil

As @user6910411 said, PYSPARK_SUBMIT_ARGS only works if it is set before the sparkContext is instantiated.

In that example, they probably use a plain Python kernel for the Jupyter notebook and instantiate the Spark context themselves with the pyspark library.
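
For comparison, here is a minimal sketch of that working order in a plain Python kernel (same package coordinates as in the question; illustrative only, not the exact linked code):

import os

# Nothing has started a JVM yet in a plain Python kernel,
# so the submit args still take effect.
os.environ["PYSPARK_SUBMIT_ARGS"] = \
    "--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 pyspark-shell"

from pyspark.sql import SparkSession

# The JVM is launched here and picks up the Kafka package.
spark = SparkSession.builder.appName("StructuredKafkaWordCount").getOrCreate()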

I guess you are using a pyspark kernel, so:

spark = SparkSession\
    .builder\
    .appName("StructuredKafkaWordCount")\
    .getOrCreate()

does not start a new sparkSession but only gets the already existing one.
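
One way to see this from inside a notebook on the pyspark kernel (a minimal check, assuming the sc object that shell.py pre-creates; spark-submit surfaces --packages as the spark.jars.packages conf entry):

# `sc` already exists in a pyspark kernel (created by shell.py).
# If the Kafka package was not in the submit args when the JVM started,
# this prints "not set".
print(sc.getConf().get("spark.jars.packages", "not set"))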

You can pass arguments to the spark-submit run by Jupyter in the kernel.json file, so that the libraries are loaded every time you start a new notebook:

{
    "display_name": "PySpark",
    "language": "python",
    "argv": [ "/opt/anaconda3/bin/python", "-m", "ipykernel", "-f", "{connection_file}" ],
    "env": {
        "SPARK_HOME": "/usr/iop/current/spark-client",
        "PYSPARK_PYTHON": "/opt/anaconda3/bin/python3",
        "PYTHONPATH": "/usr/iop/current/spark-client/python/:/usr/iop/current/spark-client/python/lib/py4j-0.9-src.zip",
        "PYTHONSTARTUP": "/usr/iop/current/spark-client/python/pyspark/shell.py",
        "PYSPARK_SUBMIT_ARGS": "--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 pyspark-shell"
    }
}
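
After editing the file, restart Jupyter and open a notebook with that kernel; a quick sanity check (a hypothetical check, not part of the original answer) is to confirm the env block reached the kernel process:

import os

# Should print the --packages string configured in kernel.json.
print(os.environ.get("PYSPARK_SUBMIT_ARGS"))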

PYSPARK_SUBMIT_ARGS only works if the JVM is initialized after it is set.

@user6910411 I used it below; in that example it worked. I followed your suggestion but still get the error (I have updated the question with it).

Only add the "PYSPARK_SUBMIT_ARGS" part to your existing pyspark kernel.json file, if one exists. To list the available kernels you can call jupyter kernelspec list