Is there an equivalent of Alteryx's REGEX_CountMatches for PySpark dataframes?

I'm migrating some Alteryx workflows to PySpark jobs, and as part of this I've run into the following filter condition:

length([acc_id]) = 9
AND 
(REGEX_CountMatches(right([acc_id],7),"[[:alpha:]]")=0 AND 
REGEX_CountMatches(left([acc_id],2),"[[:alpha:]]")=2)
OR
(REGEX_CountMatches(right([acc_id],7),"[[:alpha:]]")=0 AND 
REGEX_CountMatches(left([acc_id],1),"[[:alpha:]]")=1 AND 
REGEX_CountMatches(right(left([acc_id],2),1), '9')=1 
)

Could someone help me rewrite this condition for a PySpark dataframe?

You can use size and split. You will also need to use '[a-zA-Z]' for the regex, because POSIX character classes like '[[:alpha:]]' are not supported in Spark.

For example,

REGEX_CountMatches(right([acc_id],7),"[[:alpha:]]")=0

should be equivalent to (in Spark SQL):

size(split(right(acc_id, 7), '[a-zA-Z]')) - 1 = 0

You can put the Spark SQL string directly into the filter clause of a Spark dataframe:

df2 = df.filter("size(split(right(acc_id, 7), '[a-zA-Z]')) - 1 = 0")
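
Extending the same size/split trick to the whole Alteryx expression, the full filter could be written as a single Spark SQL string. The sketch below is my reconstruction, assuming the original's AND-over-OR precedence and a dataframe df with an acc_id column:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("AB1234567",), ("A91234567",), ("AD234XG1234TT5",)], ["acc_id"]
)

# each size(split(...)) - 1 term counts regex matches, like REGEX_CountMatches
df2 = df.filter("""
    length(acc_id) = 9
    AND (size(split(right(acc_id, 7), '[a-zA-Z]')) - 1 = 0
         AND size(split(left(acc_id, 2), '[a-zA-Z]')) - 1 = 2)
    OR  (size(split(right(acc_id, 7), '[a-zA-Z]')) - 1 = 0
         AND size(split(left(acc_id, 1), '[a-zA-Z]')) - 1 = 1
         AND size(split(right(left(acc_id, 2), 1), '9')) - 1 = 1)
""")
df2.show()  # AB1234567 and A91234567 satisfy the condition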
You can get the equivalent of Alteryx's REGEX_CountMatches function by using length together with regexp_replace:

REGEX_CountMatches(right([acc_id],7),"[[:alpha:]]")=0 
becomes:

# replace all non-alphabetic characters with '' and then take the length
F.length(F.regexp_replace(F.expr("right(acc_id, 7)"), '[^A-Za-z]', '')) == 0
The right and left functions are only available in Spark SQL, so you can use them via expr.
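
As a side note (not from the original answer), if you prefer to stay in the Python API, left and right can also be expressed with F.substring, which accepts a negative start position to count from the end of the string:

from pyspark.sql import functions as F

# left(acc_id, 2): the first two characters
left_2 = F.substring("acc_id", 1, 2)

# right(acc_id, 7): the last seven characters (negative pos counts from the end)
right_7 = F.substring("acc_id", -7, 7)

# e.g. count the alphabetic characters in the last seven characters
alpha_count = F.length(F.regexp_replace(right_7, '[^A-Za-z]', ''))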

Full example:

from pyspark.sql import Column
from pyspark.sql import functions as F


df = spark.createDataFrame([("AB1234567",), ("AD234XG1234TT5",)], ["acc_id"])

def regex_count_matches(c: Column, regex: str) -> Column:
    """
    Helper equivalent to REGEX_CountMatches: removes everything that
    matches `regex` and returns the length of what remains. Pass the
    complement of the class you want to count, e.g. '[^A-Za-z]' to
    count alphabetic characters.
    """
    return F.length(F.regexp_replace(c, regex, ''))


df.filter(
    (F.length("acc_id") == 9) &
    (
      (regex_count_matches(F.expr("right(acc_id, 7)"), '[^A-Za-z]') == 0)
      & (regex_count_matches(F.expr("left(acc_id, 2)"), '[^A-Za-z]') == 2)
    ) | (
      (regex_count_matches(F.expr("right(acc_id, 7)"), '[^A-Za-z]') == 0)
      & (regex_count_matches(F.expr("left(acc_id, 1)"), '[^A-Za-z]') == 1)
      & (regex_count_matches(F.expr("right(left(acc_id, 2), 1)"), '[^9]') == 1)
    )
).show()

#+---------+
#|   acc_id|
#+---------+
#|AB1234567|
#+---------+
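
One caveat worth flagging (my addition): in Python, & binds more tightly than |, so the filter above evaluates as (length check AND first group) OR second group, which happens to match Alteryx's AND-over-OR precedence. If the intent was length(acc_id) = 9 AND (group1 OR group2), parenthesize explicitly:

cond1 = (
    (regex_count_matches(F.expr("right(acc_id, 7)"), '[^A-Za-z]') == 0)
    & (regex_count_matches(F.expr("left(acc_id, 2)"), '[^A-Za-z]') == 2)
)
cond2 = (
    (regex_count_matches(F.expr("right(acc_id, 7)"), '[^A-Za-z]') == 0)
    & (regex_count_matches(F.expr("left(acc_id, 1)"), '[^A-Za-z]') == 1)
    & (regex_count_matches(F.expr("right(left(acc_id, 2), 1)"), '[^9]') == 1)
)

df.filter((F.length("acc_id") == 9) & (cond1 | cond2)).show()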