Create a table directly from SQL for a JDBC source
Since PySpark has a read function for the jdbc source like this:
sparkSession.read.format("jdbc") \
    .option("url", jdbcUrl) \
    .option("query", "select c1, c2 from t1") \
    .option("user", username) \
    .option("password", password) \
    .load()
I'd like to know whether PySpark has a similar write function that can create a table directly from a SQL query (without first creating a DataFrame), like this:
sparkSession.write.format("jdbc") \
    .option("url", jdbcUrl) \
    .option("query", "create table db_name.table_name as select c1, c2 from t1") \
    .option("user", username) \
    .option("password", password) \
    .save()