Amazon Web Services: AWS EMR Zeppelin missing MySQL interpreter
I launched a brand-new AWS EMR Spark cluster with Zeppelin on AWS to query a MySQL database. When I try to add a MySQL interpreter in Zeppelin, the option does not exist. I searched Google for a way to make the interpreter show up, but I could not find a solution. How can I get a MySQL interpreter in Zeppelin so that I can query a MySQL database?
Spark SQL supports much of SQL:2003 and SQL:2011 [1][2], so you could consider doing this through Spark on Zeppelin by adding the MySQL JDBC dependency:
/* Database Configuration */
val jdbcURL = s"jdbc:mysql://${HOST}/${DATABASE}"
val jdbcUsername = s"${USERNAME}"
val jdbcPassword = s"${PASSWORD}"
import java.util.Properties
val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword)
connectionProperties.put("driver", "com.mysql.cj.jdbc.Driver")
/* Read Data from MySQL */
val desiredData = spark.read.jdbc(jdbcURL, s"${TABLE_NAME}", connectionProperties)
desiredData.printSchema
/* Data Manipulation */
desiredData.createOrReplaceTempView("desiredData")
val query = s"""
SELECT COUNT(*) AS `Record Number`
FROM desiredData
"""
spark.sql(query).show
val query2 = s"""
SELECT ROW_NUMBER() OVER (PARTITION BY column1 ORDER BY column1, column2) AS column3
FROM desiredData
"""
spark.sql(query2).show
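The code above assumes the MySQL JDBC driver is already on the Spark classpath. If it is not, one option (a sketch; the connector version below is an assumption, pick the release that matches your MySQL server) is to load it through Zeppelin's dependency loader in a paragraph run before the Spark context starts:

```
%dep
// Load the MySQL Connector/J artifact from Maven Central.
// Version 8.0.33 is an example; substitute the version you need.
z.load("mysql:mysql-connector-java:8.0.33")
```

Alternatively, the same Maven coordinates can be added under the Spark interpreter's Dependencies section in Zeppelin's interpreter settings, or passed to Spark via the spark.jars.packages property.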