Python: cannot import pyspark from a pipenv virtualenv because it cannot find py4j


I have built a docker image containing Spark and pipenv. If I run python inside the pipenv virtualenv and try to import pyspark, it fails with the error "ModuleNotFoundError: No module named 'py4j'".

I admit I copied much of my Dockerfile from elsewhere, so I don't fully understand how it all fits together under the covers. I was hoping that having py4j on the PYTHONPATH would be enough, but apparently not. I can confirm that it is on the PYTHONPATH and that it exists:

root@4d0ae585a52a:/tmp# pipenv run python -c "import os;print(os.environ['PYTHONPATH'])"
/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:
root@4d0ae585a52a:/tmp# pipenv run ls /opt/spark/python/lib/py4j*
/opt/spark/python/lib/py4j-0.10.4-src.zip
Can anyone suggest how I can make py4j available to the python interpreter in my virtualenv?
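
One detail worth noting in the output above: the PYTHONPATH entry names py4j-0.10.7-src.zip, while the only archive that actually exists under /opt/spark/python/lib is py4j-0.10.4-src.zip, so that zipimport entry points at a file that is not there. A minimal diagnostic sketch (the commands are illustrative, not taken from the original session):

# Print each PYTHONPATH entry together with whether it exists on disk.
pipenv run python -c "import os; [print(p, os.path.exists(p)) for p in os.environ['PYTHONPATH'].split(':') if p]"

# List the py4j archives that this Spark distribution actually ships.
ls /opt/spark/python/lib/py4j-*-src.zip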


Here is the Dockerfile. We pull artifacts (docker images, apt packages, pypi packages etc.) from our local jfrog Artifactory cache, hence all the artifactory references in it:

FROM images.artifactory.our.org.com/python3-7-pipenv:1.0

WORKDIR /tmp

ENV SPARK_VERSION=2.2.1
ENV HADOOP_VERSION=2.8.4

ARG ARTIFACTORY_USER
ARG ARTIFACTORY_ENCRYPTED_PASSWORD
ARG ARTIFACTORY_PATH=artifactory.our.org.com/artifactory/generic-dev/ceng/external-dependencies
ARG SPARK_BINARY_PATH=https://${ARTIFACTORY_PATH}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
ARG HADOOP_BINARY_PATH=https://${ARTIFACTORY_PATH}/hadoop-${HADOOP_VERSION}.tar.gz


ADD apt-transport-https_1.4.8_amd64.deb /tmp

RUN echo "deb https://username:password@artifactory.our.org.com/artifactory/debian-main-remote stretch main" >/etc/apt/sources.list.d/main.list &&\
    echo "deb https://username:password@artifactory.our.org.com/artifactory/maria-db-debian stretch main" >>/etc/apt/sources.list.d/main.list &&\
    echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/02update &&\
    echo 'Acquire::http::Timeout "10";' > /etc/apt/apt.conf.d/99timeout &&\
    echo 'Acquire::ftp::Timeout "10";' >> /etc/apt/apt.conf.d/99timeout &&\
    dpkg -i /tmp/apt-transport-https_1.4.8_amd64.deb &&\
    apt-get install --allow-unauthenticated -y /tmp/apt-transport-https_1.4.8_amd64.deb &&\
    apt-get update --allow-unauthenticated -y -o Dir::Etc::sourcelist="sources.list.d/main.list" -o Dir::Etc::sourceparts="-" -o APT::Get::List-Cleanup="0"


RUN apt-get update && \
    apt-get -y install default-jdk

# Detect JAVA_HOME and export in bashrc.
# This will result in something like this being added to /etc/bash.bashrc
#   export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN echo export JAVA_HOME="$(readlink -f /usr/bin/java | sed "s:/jre/bin/java::")" >> /etc/bash.bashrc

# Configure Spark-${SPARK_VERSION}
# Not using tar -v because including verbose output causes ci logs to exceed max length
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${SPARK_BINARY_PATH}" -o /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && cd /opt \
    && tar -xzf /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 spark \
    && sed -i '/log4j.rootCategory=INFO, console/c\log4j.rootCategory=CRITICAL, console' /opt/spark/conf/log4j.properties.template \
    && mv /opt/spark/conf/log4j.properties.template /opt/spark/conf/log4j.properties \
    && mkdir /opt/spark-optional-jars/ \
    && mv /opt/spark/conf/spark-defaults.conf.template /opt/spark/conf/spark-defaults.conf \
    && printf "spark.driver.extraClassPath /opt/spark-optional-jars/*\nspark.executor.extraClassPath /opt/spark-optional-jars/*\n">>/opt/spark/conf/spark-defaults.conf \
    && printf "spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby" >> /opt/spark/conf/spark-defaults.conf

# Configure Hadoop-${HADOOP_VERSION}
# Not using tar -v because including verbose output causes ci logs to exceed max length
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${HADOOP_BINARY_PATH}" -o /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && cd /opt \
    && tar -xzf /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && rm /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && ln -s hadoop-${HADOOP_VERSION} hadoop

# Set Environment Variables.
ENV SPARK_HOME="/opt/spark" \
    HADOOP_HOME="/opt/hadoop" \
    PYSPARK_SUBMIT_ARGS="--master=local[*] pyspark-shell --executor-memory 1g --driver-memory 1g --conf spark.ui.enabled=false spark.executor.extrajavaoptions=-Xmx=1024m" \
    PYTHONPATH="/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH" \
    PATH="$PATH:/opt/spark/bin:/opt/hadoop/bin" \
    PYSPARK_DRIVER_PYTHON="/usr/local/bin/python" \
    PYSPARK_PYTHON="/usr/local/bin/python"

# Upgrade pip and setuptools
RUN pip install --index-url https://username:password@artifactory.our.org.com/artifactory/api/pypi/pypi-virtual-all/simple --upgrade pip setuptools

I think I have fixed this by installing py4j standalone:

$ docker run --rm -it images.artifactory.our.org.com/myimage:mytag bash
root@1d6a0ec725f0:/tmp# pipenv install py4j
Installing py4j…
✔ Installation Succeeded
Pipfile.lock (49f1d8) out of date, updating to (dfdbd6)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success!
Updated Pipfile.lock (49f1d8)!
Installing dependencies from Pipfile.lock (49f1d8)…
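
To confirm the fix from inside the virtualenv, a quick check along these lines should now succeed (illustrative command, not from the original session):

pipenv run python -c "import py4j, pyspark; print(pyspark.__version__)"

An alternative would have been to make the hard-coded PYTHONPATH in the Dockerfile match the py4j archive this Spark distribution actually ships (0.10.4, per the ls output above, rather than 0.10.7). A hedged, untested sketch of that change:

ENV PYTHONPATH="/opt/spark/python:/opt/spark/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH"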

Please show your Dockerfile. @pissall Yes, that would help, wouldn't it. Done.