Python PySpark: filter by the value at a given SparseVector() index
I'm running into trouble with something that seems like it should be simple. I have a DF in the following format:
+---------+---------------------+
|id |feat_vctr |
+---------+---------------------+
|XXXXXXXX |(4,[],[]) |
|XXXXXXXX |(4,[0],[5.0]) |
|XXXXXXXX |(4,[2,3],[25.0,15.0])|
+---------+---------------------+
where feat_vctr is a pyspark.ml.linalg.SparseVector. Note that printSchema() only reports it as a vector, but it is stored in sparse format.

Anyway, I want to split this into 4 DFs, where each DataFrame is a filtered version of the above with all rows that have no value at the given index filtered out.
I'm trying to use:
filtered_df_idx_0 = df.filter(df.feat_vctr[0] > 0.0)
filtered_df_idx_1 = df.filter(df.feat_vctr[1] > 0.0)
filtered_df_idx_2 = df.filter(df.feat_vctr[2] > 0.0)
filtered_df_idx_3 = df.filter(df.feat_vctr[3] > 0.0)
but I get this error:
Py4JJavaError: An error occurred while calling o1089.filter.
: org.apache.spark.sql.AnalysisException: Can't extract value from feat_vctr#1007: need struct type but got struct<type:tinyint,size:int,indices:array<int>,values:array<double>>;
On Spark 2.3 (in a Jupyter notebook), I wasn't able to do that inside the filter function. It seems a UDF is needed to achieve this:
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

# Filter out the empty SparseVectors
def no_empty_vector(value):
    return value.indices.size > 0

no_empty_vector_udf = udf(no_empty_vector, BooleanType())
df = df.filter(no_empty_vector_udf('feat_vctr'))
df.show()