pyspark DataFrame selectExpr does not work with multiple columns


We are trying Spark DataFrame's `selectExpr`. It works for a single column, but when I add multiple columns it throws an error.

The first call below works; the second one throws an error.

Code sample and error log:

 df1.selectExpr("coalesce(gtr_pd_am,0 )").show(2)
 df1.selectExpr("coalesce(gtr_pd_am,0),coalesce(prev_gtr_pd_am,0)").show()
>>> df1.selectExpr("coalesce(gtr_pd_am,0),coalesce(prev_gtr_pd_am,0)").show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/2.6.5.0-292/spark2/python/pyspark/sql/dataframe.py", line 1216, in selectExpr
    jdf = self._jdf.selectExpr(self._jseq(expr))
  File "/usr/hdp/2.6.5.0-292/spark2/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/hdp/2.6.5.0-292/spark2/python/pyspark/sql/utils.py", line 73, in deco
    raise ParseException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.ParseException: u"\nmismatched input ',' expecting <EOF>(line 1, pos 21)\n\n== SQL ==\ncoalesce(gtr_pd_am,0),coalesce(prev_gtr_pd_am,0)\n---------------------^^^\n" 
Check this:

df1.selectExpr("coalesce(gtr_pd_am,0)", "coalesce(prev_gtr_pd_am,0)").show()
You need to pass each column as a separate expression string: `selectExpr` accepts a variable number of SQL expressions, so a single string containing two comma-separated expressions cannot be parsed and raises the `ParseException` shown above.
