Apache Spark: how do I use GraphFrames with PySpark on AWS EMR?
I am trying to use the graphframes package from PySpark in a Jupyter notebook on AWS EMR (using SageMaker and sparkmagic). While creating the EMR cluster in the AWS console, I tried adding a configuration option:
[{"classification":"spark-defaults", "properties":{"spark.jars.packages":"graphframes:graphframes:0.7.0-spark2.4-s_2.11"}, "configurations":[]}]
But I still get an error when I try to use the graphframes package from my PySpark code in the Jupyter notebook.
Here is my code, taken from the graphframes examples (it is reproduced in full at the end of this post), and here is the output/error:
ImportError: No module named graphframes
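As an aside, sparkmagic also lets you attach these properties to a single notebook session, instead of baking them into the cluster, via its %%configure cell magic (the -f flag forces the Livy session to restart with the new settings). A sketch, using the package coordinate and repository that appear later in this thread:

```
%%configure -f
{
    "conf": {
        "spark.jars.packages": "graphframes:graphframes:0.8.0-spark2.4-s_2.11",
        "spark.jars.repositories": "https://repos.spark-packages.org/"
    }
}
```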
I read through that, but every workaround I could find looked very complicated and required SSHing into the master node of the EMR cluster. I eventually came across an issue thread that helped; using it, I put together a bootstrap action, although with a few modifications of my own. Here is what I did to get graphframes working on EMR:
- In step 1, I added the Maven coordinates of the graphframes package in the "Edit software settings" textbox (the spark-defaults JSON is reproduced near the end of this post)
- In step 3 (General Cluster Settings), I went to the bootstrap actions section
- There I added a new custom bootstrap action with:
- an arbitrary name
- the S3 location of my install_jupyter_libraries_emr.sh script
- no optional arguments
- Then I started the cluster creation. Once the cluster was up, the import worked and my code produced the expected output:
+---+--------+
| id|inDegree|
+---+--------+
| c| 1|
| b| 2|
+---+--------+
+---+------------------+
| id| pagerank|
+---+------------------+
| b|1.0905890109440908|
| a| 0.01|
| c|1.8994109890559092|
+---+------------------+
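The in-degree table and the follow count above can be sanity-checked without a cluster; this small pure-Python check mirrors what g.inDegrees and the edge filter compute on the example graph:

```python
from collections import Counter

# The edges of the example graph: (src, dst, relationship)
edges = [("a", "b", "friend"), ("b", "c", "follow"), ("c", "b", "follow")]

# In-degree: how many edges arrive at each vertex (what g.inDegrees shows)
in_degree = Counter(dst for _src, dst, _rel in edges)
print(dict(in_degree))  # -> {'b': 2, 'c': 1}

# Number of "follow" edges (what g.edges.filter(...).count() returns)
follow_count = sum(1 for _src, _dst, rel in edges if rel == "follow")
print(follow_count)  # -> 2
```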
The answer above is great, but GraphFrames' packages are now hosted at https://repos.spark-packages.org/. So to make it work, the classification should be changed to:
[
{
"classification":"spark-defaults",
"properties":{
"spark.jars.packages":"graphframes:graphframes:0.8.0-spark2.4-s_2.11",
"spark.jars.repositories":"https://repos.spark-packages.org/"
}
}
]
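If you create the cluster from code instead of the console, the same classification goes into the Configurations parameter of boto3's EMR run_job_flow call; note that boto3 capitalizes the Classification/Properties keys, unlike the console's JSON. A sketch (the API call itself is left as a comment, since running it would actually launch a cluster):

```python
import json

# The spark-defaults classification in the shape boto3's EMR client expects;
# keys are capitalized, unlike the lowercase console JSON above.
configurations = [
    {
        "Classification": "spark-defaults",
        "Properties": {
            "spark.jars.packages": "graphframes:graphframes:0.8.0-spark2.4-s_2.11",
            "spark.jars.repositories": "https://repos.spark-packages.org/",
        },
    }
]

# e.g. boto3.client("emr").run_job_flow(..., Configurations=configurations)
print(json.dumps(configurations, indent=2))
```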
Great answer, and I appreciate you coming back to post your solution. If it were up to me, I would give you all the fake internet points. Thank you very much!

With the latest AWS EMR cluster, I had to use "sudo pip-3.6 install graphframes" to make it work (instead of plain pip).
The spark-defaults configuration referred to in step 1 above:

[{"classification":"spark-defaults","properties":{"spark.jars.packages":"graphframes:graphframes:0.7.0-spark2.4-s_2.11"}}]
The code from the question (taken from the graphframes examples):

# GraphFrame is provided by the graphframes package configured above
from graphframes import GraphFrame

# Create a vertex DataFrame with a unique ID column "id"
v = spark.createDataFrame([
    ("a", "Alice", 34),
    ("b", "Bob", 36),
    ("c", "Charlie", 30),
], ["id", "name", "age"])

# Create an edge DataFrame with "src" and "dst" columns
e = spark.createDataFrame([
    ("a", "b", "friend"),
    ("b", "c", "follow"),
    ("c", "b", "follow"),
], ["src", "dst", "relationship"])

# Create a GraphFrame
g = GraphFrame(v, e)

# Query: get the in-degree of each vertex.
g.inDegrees.show()

# Query: count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run the PageRank algorithm and show the results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()
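For local prototyping, the same dependency can also be pulled in at launch time with Spark's --packages and --repositories flags (a sketch; it assumes a local Spark 2.4 installation whose Scala version matches the _2.11 suffix):

```
pyspark --packages graphframes:graphframes:0.8.0-spark2.4-s_2.11 \
        --repositories https://repos.spark-packages.org/
```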