
Can't add jars to pyspark in Jupyter on Google Dataproc


I have a Jupyter notebook on Dataproc and I need a jar to run some jobs. I know that editing spark-defaults.conf, or passing --jars=gs://spark-lib/bigquery/spark-bigquery-latest.jar when submitting the job from the command line, both work fine (a rough sketch of those is below for reference). However, when I try to add the jar directly from the Jupyter notebook, both of the following attempts fail.
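A sketch of the two approaches that do work; the cluster name, job script name, and spark-defaults.conf location are assumptions for illustration, not taken from the question:

# Option A: add the jar to /etc/spark/conf/spark-defaults.conf on the cluster
#   spark.jars gs://spark-lib/bigquery/spark-bigquery-latest.jar

# Option B: pass the jar per job when submitting from the command line
gcloud dataproc jobs submit pyspark my_job.py \
    --cluster=my-cluster \
    --jars=gs://spark-lib/bigquery/spark-bigquery-latest.jar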

Method 1:

import os

# Attempt 1: point PYSPARK_SUBMIT_ARGS at the connector jar
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars gs://spark-lib/bigquery/spark-bigquery-latest.jar pyspark-shell'
Method 2:

from pyspark.sql import SparkSession

# Attempt 2: request the jar through the session builder's spark.jars config
spark = SparkSession.builder.appName('Shakespeare WordCount')\
    .config('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar')\
    .getOrCreate()
Both fail with the same error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-1-2b7692efb32b> in <module>()
     19 # Read BQ data into spark dataframe
     20 # This method reads from BQ directly, does not use GCS for intermediate results
---> 21 df = spark.read.format('bigquery').option('table', table).load()
     22 
     23 df.show(5)

/usr/lib/spark/python/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
    170             return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
    171         else:
--> 172             return self._df(self._jreader.load())
    173 
    174     @since(1.4)

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o81.load.
: java.lang.ClassNotFoundException: Failed to find data source: bigquery. Please find packages at http://spark.apache.org/third-party-projects.html
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: bigquery.DefaultSource
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
    at scala.util.Try.orElse(Try.scala:84)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:634)
    ... 13 more

I know there are many similar questions and answers, but they either don't work or don't fit my needs. I need some specific jars, and I don't want to keep all of them in the default configuration. I'd like the flexibility to add jars on the fly. How can I solve this? Thanks!

Unfortunately, there isn't a built-in way to do this dynamically without effectively editing spark-defaults.conf and restarting the kernel; there is an open feature request in Spark for this.

Zeppelin has some usability features for adding jars through the UI, but even in Zeppelin you have to restart the interpreter afterwards for the Spark context to pick them up in its classloader, and those options also require the jar files to already be staged on the local filesystem; you can't just refer to remote file paths or URLs.

One workaround is to create an init action that sets up a systemd service which periodically polls an HDFS directory and syncs any new jars into one of the existing classpath directories, such as /usr/lib/spark/jars:

#!/bin/bash
# Sets up continuous sync'ing of an HDFS directory into /usr/lib/spark/jars

# Manually copy jars into this HDFS directory to have them sync into
# ${LOCAL_DIR} on all nodes.
HDFS_DROPZONE='hdfs:///usr/lib/jars'
LOCAL_DIR='file:///usr/lib/spark/jars'

readonly ROLE="$(/usr/share/google/get_metadata_value attributes/dataproc-role)"
if [[ "${ROLE}" == 'Master' ]]; then
  hdfs dfs -mkdir -p "${HDFS_DROPZONE}"
fi

SYNC_SCRIPT='/usr/lib/hadoop/libexec/periodic-sync-jars.sh'
cat << EOF > "${SYNC_SCRIPT}"
#!/bin/bash
while true; do
  sleep 5
  hdfs dfs -ls ${HDFS_DROPZONE}/*.jar 2>/dev/null | grep hdfs: | \
    sed 's/.*hdfs:/hdfs:/' | xargs -n 1 basename 2>/dev/null | sort \
    > /tmp/hdfs_files.txt
  hdfs dfs -ls ${LOCAL_DIR}/*.jar 2>/dev/null | grep file: | \
    sed 's/.*file:/file:/' | xargs -n 1 basename 2>/dev/null | sort \
    > /tmp/local_files.txt
  comm -23 /tmp/hdfs_files.txt /tmp/local_files.txt > /tmp/diff_files.txt
  if [ -s /tmp/diff_files.txt ]; then
    for FILE in \$(cat /tmp/diff_files.txt); do
      echo "$(date): Copying \${FILE} from ${HDFS_DROPZONE} into ${LOCAL_DIR}"
      hdfs dfs -cp "${HDFS_DROPZONE}/\${FILE}" "${LOCAL_DIR}/\${FILE}"
    done
  fi
done
EOF

chmod 755 "${SYNC_SCRIPT}"

SERVICE_CONF='/usr/lib/systemd/system/sync-jars.service'
cat << EOF > "${SERVICE_CONF}"
[Unit]
Description=Periodic Jar Sync
[Service]
Type=simple
ExecStart=/bin/bash -c '${SYNC_SCRIPT} &>> /var/log/periodic-sync-jars.log'
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

chmod a+rw "${SERVICE_CONF}"

systemctl daemon-reload
systemctl enable sync-jars
systemctl restart sync-jars
systemctl status sync-jars
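To wire this up, stage the script in a GCS bucket and pass it as an initialization action when creating the cluster. A minimal sketch, with the bucket and script names as placeholders:

gcloud dataproc clusters create my-cluster \
    --initialization-actions=gs://my-bucket/sync-jars-init.sh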
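Then, whenever you need a jar, copy it into the HDFS dropzone (hdfs:///usr/lib/jars), either over SSH with hdfs dfs -cp or directly from the notebook via subprocess: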
import subprocess

# Copy the BigQuery connector jar from GCS into the HDFS dropzone watched by the sync service
sp = subprocess.Popen(
    ['hdfs', 'dfs', '-cp',
     'gs://spark-lib/bigquery/spark-bigquery-latest.jar',
     'hdfs:///usr/lib/jars/spark-bigquery-latest.jar'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
out, err = sp.communicate()
print(out)
print(err)
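The poller copies the jar into /usr/lib/spark/jars within its 5-second sleep interval; after that, restart the Jupyter kernel so the new jar lands on the driver's classpath, and spark.read.format('bigquery') should then resolve.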