Python: writing data from PySpark to Elasticsearch

I am following this to send some data to AWS ES, using the elasticsearch-hadoop jar. Here is my script:

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
if __name__ == "__main__":
    conf = SparkConf().setAppName("WriteToES")
    sc = SparkContext(conf=conf)
    sqlContext = SQLContext(sc)
    es_conf = {"es.nodes" : "https://search-elasticsearchdomaine.region.es.amazonaws.com/",
    "es.port" : "9200","es.nodes.client.only" : "true","es.resource" : "sensor_counts/metrics"}
    es_df_p = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("output/part-00000-c353bb29-f189-4189-b35b-f7f1af717355.csv")
    es_df_pf = es_df_p.groupBy("network_key")
    es_df_pf.saveAsNewAPIHadoopFile(
    path='-',
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_conf)
Then I run the following command line:

spark-submit --jars elasticsearch-spark-20_2.11-5.3.1.jar write_to_es.py
where write_to_es.py is the script above.

Here is the error I get:

17/05/05 17:51:52 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/05/05 17:51:52 INFO HadoopRDD: Input split: file:/home/user/spark-2.1.0-bin-hadoop2.7/output/part-00000-c353bb29-f189-4189-b35b-f7f1af717355.csv:0+178633
17/05/05 17:51:52 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1143 bytes result sent to driver
17/05/05 17:51:52 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 11 ms on localhost (executor driver) (1/1)
17/05/05 17:51:52 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
17/05/05 17:51:52 INFO DAGScheduler: ResultStage 1 (load at NativeMethodAccessorImpl.java:0) finished in 0,011 s
17/05/05 17:51:52 INFO DAGScheduler: Job 1 finished: load at NativeMethodAccessorImpl.java:0, took 0,018727 s
17/05/05 17:51:52 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.1.26:39609 in memory (size: 2.1 KB, free: 366.3 MB)
17/05/05 17:51:52 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 192.168.1.26:39609 in memory (size: 22.9 KB, free: 366.3 MB)
17/05/05 17:51:52 INFO BlockManagerInfo: Removed broadcast_3_piece0 on 192.168.1.26:39609 in memory (size: 2.1 KB, free: 366.3 MB)
Traceback (most recent call last):
  File "/home/user/spark-2.1.0-bin-hadoop2.7/write_to_es.py", line 11, in <module>
    es_df_pf.saveAsNewAPIHadoopFile(
  File "/home/user/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 964, in __getattr__
AttributeError: 'DataFrame' object has no attribute 'saveAsNewAPIHadoopFile'
17/05/05 17:51:53 INFO SparkContext: Invoking stop() from shutdown hook
17/05/05 17:51:53 INFO SparkUI: Stopped Spark web UI at http://192.168.1.26:4040
17/05/05 17:51:53 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/05/05 17:51:53 INFO MemoryStore: MemoryStore cleared
17/05/05 17:51:53 INFO BlockManager: BlockManager stopped
17/05/05 17:51:53 INFO BlockManagerMaster: BlockManagerMaster stopped
17/05/05 17:51:53 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/05/05 17:51:53 INFO SparkContext: Successfully stopped SparkContext
17/05/05 17:51:53 INFO ShutdownHookManager: Shutdown hook called
17/05/05 17:51:53 INFO ShutdownHookManager: Deleting directory /tmp/spark-501c4efa-5402-430e-93c1-aaff4caddef0
17/05/05 17:51:53 INFO ShutdownHookManager: Deleting directory /tmp/spark-501c4efa-5402-430e-93c1-aaff4caddef0/pyspark-52406fa8-e8d1-4aca-bcb6-91748dc87507

Any help or advice would be much appreciated.

saveAsNewAPIHadoopFile is a method on RDD, not on DataFrame.

I think that line should be:

es_df_pf.rdd.saveAsNewAPIHadoopFile
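To make this work end to end, the rows also have to be turned into (key, value) pairs whose values are plain Python dicts, since the connector builds a LinkedMapWritable from each value. Here is a minimal, untested sketch built on the question's own es_conf and DataFrame (the groupBy is dropped because it returns a GroupedData and is not needed just to index rows; the 'key' placeholder mirrors the answer below and is effectively ignored by the connector):

# Sketch only: convert each Row to a plain dict, then save through the RDD API.
doc_rdd = es_df_p.rdd.map(lambda row: ('key', row.asDict()))

doc_rdd.saveAsNewAPIHadoopFile(
    path='-',
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_conf)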

I had the same problem.

After reading this, I found the answer.

You have to convert it into a PythonRDD, like this:

>>> type(df)
<class 'pyspark.sql.dataframe.DataFrame'>

>>> type(df.rdd)
<class 'pyspark.rdd.RDD'>

>>> df.rdd.saveAsNewAPIHadoopFile(...) # Got the same error message

>>> df.printSchema() # My schema
root
 |-- id: string (nullable = true)
 ...

# Let's convert to PythonRDD
>>> python_rdd = df.map(lambda item: ('key', {
... 'id': item['id'],
    ...
... }))

>>> python_rdd
PythonRDD[42] at RDD at PythonRDD.scala:43

>>> python_rdd.saveAsNewAPIHadoopFile(...) # Now, success
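One caveat for Spark 2.x, which the question is running: DataFrame no longer has a map method there, so the same conversion has to go through df.rdd. A short sketch of that step, using Row.asDict() instead of building the dict by hand (column names are whatever your DataFrame actually contains):

# Spark 2.x sketch: map over df.rdd, since DataFrame.map was removed;
# asDict() yields a plain dict, and the saveAsNewAPIHadoopFile call
# stays exactly as in the question.
python_rdd = df.rdd.map(lambda row: ('key', row.asDict()))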

When I try it, it gives me a huge error:
17/05/09 09:40:21 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2) net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
What is the difference between a PythonRDD and an RDD, and why does the RDD raise this error while the PythonRDD does not?
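As far as I can tell, the difference that matters here is the element type rather than the RDD class itself: df.rdd still holds pyspark.sql.Row objects, which are pickled through pyspark.sql.types._create_row and cannot be rebuilt on the JVM side by the connector (hence the ClassDict error), whereas the mapped PythonRDD holds plain tuples and dicts that it can handle. A tiny illustration with a hypothetical one-column Row:

from pyspark.sql import Row

r = Row(id='42')   # what df.rdd contains: Row objects, serialized via
                   # pyspark.sql.types._create_row, which the JVM-side
                   # unpickler used by EsOutputFormat cannot reconstruct
d = r.asDict()     # {'id': '42'} - a plain dict, which it can handle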