
Apache Spark: How do I save a DataFrame to Elasticsearch in PySpark?


I have a Spark DataFrame that I'm trying to push to AWS Elasticsearch, but first I tested this sample snippet to push to ES:

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('ES_indexer').getOrCreate()
# Python 2 here (the traceback below is from python2.7); under Python 3 this would be range(10)
df = spark.createDataFrame([{'num': i} for i in xrange(10)])
df = df.drop('_id')
df.write.format(
    'org.elasticsearch.spark.sql'
).option(
    'es.nodes', 'http://spark-data-push-adertadaltdpioy124.us-west-2.es.amazonaws.com'
).option(
    'es.port', 9200
).option(
    'es.resource', '%s/%s' % ('index_name', 'doc_type_name'),
).save()
I get an error saying:

java.lang.ClassNotFoundException: Failed to find data source: org.elasticsearch.spark.sql. Please find packages at http://spark.apache.org/third-party-projects.html

Any suggestions would be greatly appreciated.

Error trace:

Traceback (most recent call last):
  File "es_3.py", line 12, in <module>
    'es.resource', '%s/%s' % ('index_name', 'doc_type_name'),
  File "/usr/local/lib/python2.7/site-packages/pyspark/sql/readwriter.py", line 732, in save
    self._jwrite.save()
  File "/usr/local/lib/python2.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/local/lib/python2.7/site-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/local/lib/python2.7/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o46.save.
: java.lang.ClassNotFoundException: Failed to find data source: org.elasticsearch.spark.sql. Please find packages at http://spark.apache.org/third-party-projects.html
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:245)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.spark.sql.DefaultSource
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
        at scala.util.Try.orElse(Try.scala:84)
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:634)
        ... 12 more

tl;dr Use pyspark --packages org.elasticsearch:elasticsearch-hadoop:7.2.0 and reference the connector with format("es").
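
Put together, a minimal sketch (the endpoint and index/type names are the placeholders from the question):

# launch PySpark with the connector on the classpath
pyspark --packages org.elasticsearch:elasticsearch-hadoop:7.2.0

# then, inside the session, reference the connector by its short alias
df.write.format('es') \
    .option('es.nodes', 'http://spark-data-push-adertadaltdpioy124.us-west-2.es.amazonaws.com') \
    .option('es.port', '9200') \
    .save('index_name/doc_type_name')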


Quoting the official documentation of Elasticsearch for Apache Hadoop:

Just like other libraries, elasticsearch-hadoop needs to be available in Spark's classpath.

And later:

elasticsearch-hadoop supports both Spark SQL 1.3-1.6 and Spark SQL 2.0, through two different jars:

elasticsearch-spark-1.x-<version>.jar
elasticsearch-hadoop-<version>.jar

elasticsearch-spark-2.0-<version>.jar supports Spark SQL 2.0

That looks like an issue with the documentation itself (as the two jar files use different versioning schemes), but it does mean that you have to use the correct jar file on the classpath of your Spark application.
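
For example, two ways to get the connector onto the classpath when submitting the question's script (the jar filename is an assumption: the Spark 2.x / Scala 2.11 build of version 7.2.0, downloaded locally):

# pass a locally downloaded connector jar explicitly...
spark-submit --jars elasticsearch-spark-20_2.11-7.2.0.jar es_3.py
# ...or let Spark resolve the elasticsearch-hadoop artifact from Maven Central
spark-submit --packages org.elasticsearch:elasticsearch-hadoop:7.2.0 es_3.py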

Later in the same document:

Spark SQL support is available under the org.elasticsearch.spark.sql package.

That simply confirms that the format (in df.write.format('org.elasticsearch.spark.sql')) is correct.

Further down, you find out that you can even use the alias df.write.format("es") (!)
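
So, once the jar is on the classpath, the two format strings below should be interchangeable (a sketch reusing the question's index/type placeholders):

# fully qualified data source name
df.write.format('org.elasticsearch.spark.sql').save('index_name/doc_type_name')
# short alias registered by the connector
df.write.format('es').save('index_name/doc_type_name')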


I found the corresponding section in the project's repository on GitHub more readable and up to date.

Update: As of June 2020, the current ES-Hadoop package is 7.7.1, so I used pyspark --packages org.elasticsearch:elasticsearch-hadoop:7.7.1 instead.
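
A minimal end-to-end rework of the question's snippet under that setup, as a sketch: Python 3 (so range instead of xrange), the "es" alias, and the question's endpoint and index/type placeholders:

# launch with: pyspark --packages org.elasticsearch:elasticsearch-hadoop:7.7.1
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('ES_indexer').getOrCreate()
# build a small DataFrame with a single 'num' column
df = spark.createDataFrame([(i,) for i in range(10)], ['num'])
df.write.format('es') \
    .option('es.nodes', 'http://spark-data-push-adertadaltdpioy124.us-west-2.es.amazonaws.com') \
    .option('es.port', '9200') \
    .mode('append') \
    .save('index_name/doc_type_name')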