ModuleNotFoundError in worker.py with PySpark (Jupyter)


I just need the adblockparser module in my Python script. When I run it on PySpark configured with Jupyter, it returns the following log:

PythonException: 
  An exception was thrown from the Python worker. Please see the stack trace below.
Traceback (most recent call last):
  File "/home/student/spark/spark-3.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 589, in main
    func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type)
  File "/home/student/spark/spark-3.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 447, in read_udfs
    udfs.append(read_single_udf(pickleSer, infile, eval_type, runner_conf, udf_index=i))
  File "/home/student/spark/spark-3.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 254, in read_single_udf
    f, return_type = read_command(pickleSer, infile)
  File "/home/student/spark/spark-3.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 76, in read_command
    command = serializer.loads(command.value)
  File "/home/student/spark/spark-3.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 458, in loads
    return pickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'adblockparser'
even though I have already installed this module in the Jupyter notebook using:

!pip install adblockparser
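A notebook-level !pip install only installs into the interpreter that the Jupyter kernel (the Spark driver) runs on; the worker processes resolve imports against their own PYSPARK_PYTHON environment. A minimal check of the driver side, assuming nothing about the cluster:

```python
import importlib.util
import sys

# The driver's interpreter: this is where `!pip install` in Jupyter installs to.
print("driver python:", sys.executable)

# Check whether the module resolves in *this* interpreter. On the cluster, each
# worker performs the same lookup against its own PYSPARK_PYTHON environment,
# which may be a different interpreter with different site-packages.
spec = importlib.util.find_spec("adblockparser")
print("found on driver:", spec is not None)
```

If this prints True on the driver while workers still raise ModuleNotFoundError, the driver and worker interpreters differ.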

我的bashrc文件如下所示:

export SPARK_HOME=/home/student/spark/spark-3.0.2-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
export PYSPARK_PYTHON=python3
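One common cause with this kind of setup: PYSPARK_PYTHON=python3 may resolve to a different interpreter than the one backing the Jupyter kernel. A hedged sketch of a bashrc adjustment, assuming the package was installed into the kernel's interpreter (the concrete path below is a placeholder; the kernel's actual path can be read from sys.executable in the notebook):

```shell
# Point the workers at the same interpreter as the Jupyter kernel so that
# packages installed from the notebook are visible to worker.py as well.
# /usr/bin/python3 is an assumed path -- substitute the kernel's sys.executable.
export PYSPARK_PYTHON=/usr/bin/python3
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
```

Alternatively, the package can be installed on every worker node with the same pip, or shipped to executors via SparkContext.addPyFile; either way, driver and workers must see the same site-packages.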