Python PySpark: dynamic union of DataFrames with different columns
Consider the arrays shown below. I have 3 sets of arrays:
Array 1:
C1 C2 C3
1 2 3
9 5 6
Array 2:
C2 C3 C4
11 12 13
10 15 16
Array 3:
C1 C4
111 112
110 115
I need the output shown below. As input I can receive any of the values C1, ..., C4, but after the union I need the correct values, and any column that does not exist in a source should be filled with zero.
Expected output:
C1 C2 C3 C4
1 2 3 0
9 5 6 0
0 11 12 13
0 10 15 16
111 0 0 112
110 0 0 115
I have already written the PySpark code below, but I have hardcoded the new columns and their values. I need to convert this into something like method overloading so that I can use the script as an automated script for any set of input columns. I want to use only Python/PySpark, not pandas.
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import lit
sqlContext = SQLContext(SparkContext())
df01 = sqlContext.createDataFrame([(1, 2, 3), (9, 5, 6)], ("C1", "C2", "C3"))
df02 = sqlContext.createDataFrame([(11, 12, 13), (10, 15, 16)], ("C2", "C3", "C4"))
df03 = sqlContext.createDataFrame([(111, 112), (110, 115)], ("C1", "C4"))
df01_add = df01.withColumn("C4", lit(0)).select("C1", "C2", "C3", "C4")
df02_add = df02.withColumn("C1", lit(0)).select("C1", "C2", "C3", "C4")
df03_add = df03.withColumn("C2", lit(0)).withColumn("C3", lit(0)).select("C1", "C2", "C3", "C4")
df_uni = df01_add.union(df02_add).union(df03_add)
df_uni.show()
Method overloading example:
class Student:
    def __init__(self, m1, m2):
        self.m1 = m1
        self.m2 = m2

    def sum(self, c1=None, c2=None, c3=None, c4=None):
        s = 0
        if c1 is not None and c2 is not None and c3 is not None:
            s = c1 + c2 + c3
        elif c1 is not None and c2 is not None:
            s = c1 + c2
        else:
            s = c1
        return s

s1 = Student(1, 2)  # an instance is needed before calling sum
print(s1.sum(55, 65, 23))
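Python has no true method overloading; optional parameters (as above) or a variadic *args signature emulate it. For the DataFrame problem a variadic helper is a natural fit. Below is a minimal sketch along those lines; the name union_all and the choice of 0 as the fill value are illustrative assumptions, not from the original post:
from functools import reduce
from pyspark.sql.functions import lit

def union_all(*dfs):
    # Collect every column name that appears in any input frame
    all_cols = sorted({c for df in dfs for c in df.columns})
    aligned = []
    for df in dfs:
        for c in all_cols:
            if c not in df.columns:
                df = df.withColumn(c, lit(0))  # fill missing columns with 0
        aligned.append(df.select(all_cols))    # enforce a common column order
    return reduce(lambda a, b: a.union(b), aligned)

# Usage: union_all(df01, df02, df03).show()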
There are probably much better ways to do this, but perhaps the following will be useful to someone in the future.
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder\
    .appName("DynamicFrame")\
    .getOrCreate()

df01 = spark.createDataFrame([(1, 2, 3), (9, 5, 6)], ("C1", "C2", "C3"))
df02 = spark.createDataFrame([(11, 12, 13), (10, 15, 16)], ("C2", "C3", "C4"))
df03 = spark.createDataFrame([(111, 112), (110, 115)], ("C1", "C4"))

dataframes = [df01, df02, df03]

# Create a list of all the column names and sort them
cols = set()
for df in dataframes:
    for x in df.columns:
        cols.add(x)
cols = sorted(cols)

# Create a dictionary with all the dataframes
dfs = {}
for i, d in enumerate(dataframes):
    new_name = 'df' + str(i)  # New name for the key; the dataframe is the value
    dfs[new_name] = d
    # Loop through all column names. Add the missing columns to the dataframe (with value 0)
    for x in cols:
        if x not in d.columns:
            dfs[new_name] = dfs[new_name].withColumn(x, lit(0))
    dfs[new_name] = dfs[new_name].select(cols)  # Use 'select' to get the columns sorted

# Now put it all together with a loop (union)
result = dfs['df0']            # Take the first dataframe, add the others to it
dfs_to_add = list(dfs.keys())  # List of all the dataframes in the dictionary
dfs_to_add.remove('df0')       # Remove the first one, because it is already in the result
for x in dfs_to_add:
    result = result.union(dfs[x])
result.show()
Output:
+---+---+---+---+
| C1| C2| C3| C4|
+---+---+---+---+
| 1| 2| 3| 0|
| 9| 5| 6| 0|
| 0| 11| 12| 13|
| 0| 10| 15| 16|
|111| 0| 0|112|
|110| 0| 0|115|
+---+---+---+---+
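As a side note (not part of the original answer): on Spark 3.1+ the built-in DataFrame.unionByName with allowMissingColumns=True handles the column alignment itself, leaving nulls that na.fill(0) can turn into zeros. A minimal sketch, reusing df01/df02/df03 from above:
from functools import reduce

result = reduce(
    lambda a, b: a.unionByName(b, allowMissingColumns=True),  # missing columns become null
    [df01, df02, df03]
).na.fill(0)  # replace those nulls with 0
result.select(sorted(result.columns)).show()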
I would try
df = df1.join(df2, ['each', 'shared', 'col'], how='full')
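Expanding on that one-liner (my own sketch, not from the comment): join each pair of frames on the columns they share, then fill the gaps with 0. Note that a full outer join merges rows whose shared-column values happen to match, so it is not always equivalent to the union approach above:
from functools import reduce

def full_join(a, b):
    shared = sorted(set(a.columns) & set(b.columns))  # columns present in both frames
    return a.join(b, shared, how='full')

result = reduce(full_join, [df01, df02, df03]).na.fill(0)
result.select(sorted(result.columns)).show()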
You say there may be better ways, but I haven't come across code like this before, so thank you for this new code! Have you done something similar in Scala, or seen code that does this in Scala? @JoshuaJames - here is the version in Scala -