How do I change the data type of a PySpark dataframe column?


I am looking for a way to change the type of a PySpark dataframe column, starting from the schema shown by

df.printSchema()


Thanks in advance for your help.

You have to replace the column with one that carries the new schema. ArrayType takes two parameters, elementType and containsNull:

from pyspark.sql.types import StructType, StructField, StringType, ArrayType
from pyspark.sql.functions import udf

x = [("a", ["b", "c", "d", "e"]), ("g", ["h", "h", "d", "e"])]
schema = StructType([StructField("key", StringType(), nullable=True),
                     StructField("values", ArrayType(StringType(), containsNull=False))])

df = spark.createDataFrame(x, schema=schema)
df.printSchema()

# An identity UDF re-emits the column under the new element schema
new_schema = ArrayType(StringType(), containsNull=True)
udf_foo = udf(lambda x: x, new_schema)
df.withColumn("values", udf_foo("values")).printSchema()



root
 |-- key: string (nullable = true)
 |-- values: array (nullable = true)
 |    |-- element: string (containsNull = false)

root
 |-- key: string (nullable = true)
 |-- values: array (nullable = true)
 |    |-- element: string (containsNull = true)

Here is a useful example in which you can change the schema of every column at once, assuming you want them all to have the same type:

from pyspark.sql import Row
from pyspark.sql.functions import col

df = sc.parallelize([
    Row(isbn=1, count=1, average=10.6666666),
    Row(isbn=2, count=1, average=11.1111111)
]).toDF()

df.printSchema()
# Cast every column to float. Note that printSchema() returns None,
# so call it separately instead of assigning its result back to df.
df = df.select(*[col(x).cast('float') for x in df.columns])
df.printSchema()
Output:

root
 |-- average: double (nullable = true)
 |-- count: long (nullable = true)
 |-- isbn: long (nullable = true)

root
 |-- average: float (nullable = true)
 |-- count: float (nullable = true)
 |-- isbn: float (nullable = true)