Python: Modify all values of a PySpark dataframe column

I am new to PySpark dataframes and previously worked with RDDs. I have a dataframe like this:

date        path
2017-01-01  /A/B/C/D
2017-01-01  /X
2017-01-01  /X/Y
and want to convert it to the following:

date        path
2017-01-01  /A/B
2017-01-01  /X
2017-01-01  /X/Y

Basically, I want to get rid of everything after and including the third '/'. So previously, when working with RDDs, I had the following:

from urllib import quote_plus

path_levels = df['path'].split('/')
filtered_path_levels = []
# df_size is presumably len(path_levels); index 0 is the empty string
# before the leading '/'
for _level in range(min(df_size, 3)):
    # Take only the top 2 levels of path
    filtered_path_levels.append(quote_plus(path_levels[_level]))

df['path'] = '/'.join(map(str, filtered_path_levels))
Things are more complicated with PySpark, I would say. Here is what I have so far:

path_levels = split(results_df['path'], '/')
filtered_path_levels = []
for _level in range(size(df_size, 3)):
    # Take only the top 2 levels of path
    filtered_path_levels.append(quote_plus(path_levels[_level]))

df['path'] = '/'.join(map(str, filtered_path_levels))
This gives me the following error:

ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
Any help with this would be much appreciated. Let me know if more information or explanation is needed.
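
For context, the ValueError comes from mixing Column expressions with plain Python control flow: split and size from pyspark.sql.functions return lazily evaluated Column objects, not Python lists or ints, and range()/min() try to coerce them. A minimal sketch of what goes wrong (variable names mine, df as in the question):

from pyspark.sql.functions import size, split

path_levels = split(df['path'], '/')   # a Column expression, not a Python list
n_levels = size(path_levels)           # also a Column, not an int
# min(n_levels, 3) compares a Column with an int, which forces bool()
# on a Column and raises exactly the ValueError shown above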

Use a udf:

from urllib.parse import quote_plus  # on Python 2: from urllib import quote_plus

from pyspark.sql.functions import udf, lit

@udf
def quote_string_(path, size):
    # Keep the first `size` segments of the split path, URL-quoting each one
    if path:
        return "/".join(quote_plus(x) for x in path.split("/")[:size])

df.withColumn("foo", quote_string_("path", lit(2)))

I solved the problem with the following code:

from pyspark.sql.functions import split, col, lit, concat

split_col = split(df['path'], '/')
# getItem(0) would be the empty string before the leading '/', so the
# first two path levels sit at indices 1 and 2
df = df.withColumn('l1_path', split_col.getItem(1))
df = df.withColumn('l2_path', split_col.getItem(2))
df = df.withColumn('path', concat(col('l1_path'), lit('/'), col('l2_path')))
df = df.drop('l1_path', 'l2_path')
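
One caveat (my observation, not part of the original answer): concat returns null as soon as any argument is null, so a single-level path such as /X becomes null here, and the leading '/' is lost since the empty segment at index 0 is skipped. concat_ws ignores null arguments and handles both cases; a sketch:

from pyspark.sql.functions import split, lit, concat_ws

split_col = split(df['path'], '/')
# concat_ws skips null arguments, so '/X' stays '/X' instead of becoming
# null; the leading lit('') re-creates the leading '/'
df = df.withColumn('path', concat_ws('/', lit(''), split_col.getItem(1), split_col.getItem(2)))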

Thanks, but the problem with udfs is that they are very slow, especially when dealing with very large amounts of data (terabytes in my case), so I decided to use other built-in PySpark functions instead.
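
For a single built-in expression with no intermediate columns, regexp_extract can keep at most the first two levels in one pass (a sketch; the pattern assumes paths always start with '/', and it does not apply the quote_plus URL-encoding from the RDD version):

from pyspark.sql.functions import regexp_extract

# Match up to two '/segment' groups anchored at the start:
# '/A/B/C/D' -> '/A/B', '/X' -> '/X', '/X/Y' -> '/X/Y'
df = df.withColumn('path', regexp_extract(df['path'], r'^((?:/[^/]+){1,2})', 1))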