How to perform a switch statement with Apache Spark DataFrames (Python)

I'm trying to perform an operation on my data where, if a value matches one of several conditions, it is mapped to one of a list of predefined values, and otherwise to a default value.

This would be the equivalent SQL:

CASE
    WHEN user_agent LIKE '%CanvasAPI%' THEN 'api'
    WHEN user_agent LIKE '%candroid%' THEN 'mobile_app_android'
    WHEN user_agent LIKE '%iCanvas%' THEN 'mobile_app_ios'
    WHEN user_agent LIKE '%CanvasKit%' THEN 'mobile_app_ios'
    WHEN user_agent LIKE '%Windows NT%' THEN 'desktop'
    WHEN user_agent LIKE '%MacBook%' THEN 'desktop'
    WHEN user_agent LIKE '%iPhone%' THEN 'mobile'
    WHEN user_agent LIKE '%iPod Touch%' THEN 'mobile'
    WHEN user_agent LIKE '%iPad%' THEN 'mobile'
    WHEN user_agent LIKE '%iOS%' THEN 'mobile'
    WHEN user_agent LIKE '%CrOS%' THEN 'desktop'
    WHEN user_agent LIKE '%Android%' THEN 'mobile'
    WHEN user_agent LIKE '%Linux%' THEN 'desktop'
    WHEN user_agent LIKE '%Mac OS%' THEN 'desktop'
    WHEN user_agent LIKE '%Macintosh%' THEN 'desktop'
    ELSE 'other_unknown'
END AS user_agent_type
I'm new to Spark, so my first attempt at this program used a lookup dictionary and adjusted the values line by line in the RDD, along the lines shown below.
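(The original RDD attempt was not preserved in this post; the sketch below is a reconstruction for illustration. It assumes the USER_AGENT_VALS dictionary defined further down, and note that a plain dict does not guarantee the top-to-bottom order of the CASE branches.)

# Reconstruction of the row-by-row RDD approach (not the original code).
# Returns the value for the first dictionary key contained in the
# user agent string, falling back to a default.
def classify_user_agent(user_agent):
    for needle, agent_type in USER_AGENT_VALS.items():
        if needle in user_agent:
            return agent_type
    return 'other_unknown'

# Append the derived value to each row; the result is an RDD of tuples.
typed_rdd = df.rdd.map(
    lambda row: tuple(row) + (classify_user_agent(row['user_agent']),))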

My current code has the data in a DataFrame, and I'm not sure of the most efficient way to perform the operation above. I know DataFrames are immutable, so the result will need to be returned as a new DataFrame, but my question is how best to do that. Here is my code:

from boto3 import client
import psycopg2 as ppg2
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import current_date, date_format, lit

EMR_CLIENT = client('emr')
conf = SparkConf().setAppName('Canvas Requests Logs')
sc = SparkContext(conf=conf)
sql_context = SQLContext(sc)
# for dependencies
# sc.addPyFile()

USER_AGENT_VALS = {
    'CanvasAPI': 'api',
    'candroid': 'mobile_app_android',
    'iCanvas': 'mobile_app_ios',
    'CanvasKit': 'mobile_app_ios',
    'Windows NT': 'desktop',
    'MacBook': 'desktop',
    'iPhone': 'mobile',
    'iPod Touch': 'mobile',
    'iPad': 'mobile',
    'iOS': 'mobile',
    'CrOS': 'desktop',
    'Android': 'mobile',
    'Linux': 'desktop',
    'Mac OS': 'desktop',
    'Macintosh': 'desktop'
}

if __name__ == '__main__':
    df = sql_context.read.parquet(
        r'/Users/mharris/PycharmProjects/etl3/pyspark/Datasets/'
        r'usage_data.gz.parquet')

    course_data = df.filter(df['context_type'] == 'Course')
    request_data = df.select(
        df['user_id'],
        df['context_id'].alias('course_id'),
        date_format(df['request_timestamp'], 'MM').alias('request_month'),
        df['user_agent']
    )

    sesh_id_data = df.groupBy('user_id').count()

    joined_data = request_data.join(
        sesh_id_data,
        on=request_data['user_id'] == sesh_id_data['user_id']
    ).drop(sesh_id_data['user_id'])

    all_fields = joined_data.withColumn(
        'etl_requests_usage', lit('DEV')
    ).withColumn(
        'etl_datetime_local', current_date()
    ).withColumn(
        'etl_transformation_name', lit('agg_canvas_logs_user_agent_types')
    ).withColumn(
        'etl_pdi_version', lit(r'Apache Spark')
    ).withColumn(
        'etl_pdi_build_version', lit(r'1.6.1')
    ).withColumn(
        'etl_pdi_hostname', lit(r'N/A')
    ).withColumn(
        'etl_pdi_ipaddress', lit(r'N/A')
    ).withColumn(
        'etl_checksum_md5', lit(r'N/A')
    )

As a PS, is there a better way to add the columns than the way I'm doing it?

You can even use the SQL expression directly if you want:

expr = """
    CASE
        WHEN user_agent LIKE \'%Android%\' THEN \'mobile\'
        WHEN user_agent LIKE \'%Linux%\' THEN \'desktop\'
        ELSE \'other_unknown\'
    END AS user_agent_type"""

df = sc.parallelize([
    (1, "Android"), (2, "Linux"), (3, "Foo")
]).toDF(["id", "user_agent"])

df.selectExpr("*", expr).show()
## +---+----------+---------------+
## | id|user_agent|user_agent_type|
## +---+----------+---------------+
## |  1|   Android|         mobile|
## |  2|     Linux|        desktop|
## |  3|       Foo|  other_unknown|
## +---+----------+---------------+
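Since all the branches share the same shape, you don't have to write that string by hand either. The following is a sketch that builds the CASE expression from the question's USER_AGENT_VALS mapping; an explicitly ordered list of pairs would be safer than a dict here, since CASE branches are evaluated top to bottom:

# Build the CASE expression from the question's mapping,
# wrapping each key in % wildcards for substring matching.
whens = "\n".join(
    "WHEN user_agent LIKE '%{0}%' THEN '{1}'".format(k, v)
    for k, v in USER_AGENT_VALS.items()
)
case_expr = "CASE\n{0}\nELSE 'other_unknown'\nEND AS user_agent_type".format(whens)

df.selectExpr("*", case_expr).show()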
Otherwise you can replace it with a combination of when, like and otherwise:

expr = """
    CASE
        WHEN user_agent LIKE \'%Android%\' THEN \'mobile\'
        WHEN user_agent LIKE \'%Linux%\' THEN \'desktop\'
        ELSE \'other_unknown\'
    END AS user_agent_type"""

df = sc.parallelize([
    (1, "Android"), (2, "Linux"), (3, "Foo")
]).toDF(["id", "user_agent"])

df.selectExpr("*", expr).show()
## +---+----------+---------------+
## | id|user_agent|user_agent_type|
## +---+----------+---------------+
## |  1|   Android|         mobile|
## |  2|     Linux|        desktop|
## |  3|       Foo|  other_unknown|
## +---+----------+---------------+
from pyspark.sql.functions import col, when
from functools import reduce

c = col("user_agent")
vs = [("Android", "mobile"), ("Linux", "desktop")]
expr = reduce(
    lambda acc, kv: when(c.like(kv[0]), kv[1]).otherwise(acc), 
    vs, 
    "other_unknown"
).alias("user_agent_type")

df.select("*", expr).show()

## +---+----------+---------------+
## | id|user_agent|user_agent_type|
## +---+----------+---------------+
## |  1|   Android|         mobile|
## |  2|     Linux|        desktop|
## |  3|       Foo|  other_unknown|
## +---+----------+---------------+
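Folding the question's full USER_AGENT_VALS through the same reduce is a small step further. A sketch, assuming USER_AGENT_VALS is in scope, with two caveats: like needs explicit % wildcards for substring matching, and the pairs are iterated in reverse so that the first entry is tested first, matching the top-to-bottom CASE order:

# Same reduce over the full mapping. like() does exact matching unless
# you add % wildcards, and reversed() preserves the CASE evaluation
# order, since the last pair folded in becomes the outermost
# (first-tested) when().
pairs = list(USER_AGENT_VALS.items())  # an explicit list of tuples would pin the order
full_expr = reduce(
    lambda acc, kv: when(c.like("%{0}%".format(kv[0])), kv[1]).otherwise(acc),
    reversed(pairs),
    "other_unknown"
).alias("user_agent_type")

df.select("*", full_expr).show()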
You can also add multiple columns in a single select:

exprs = [c.alias(a) for (a, c) in [
  ('etl_requests_usage', lit('DEV')), 
  ('etl_datetime_local', current_date())]]

df.select("*", *exprs)