How do you select a column in Apache Spark when the reference is ambiguous?
Below is some sample code that illustrates what I am trying to do. I have a DataFrame with the columns companyId and companyid. I want to select companyId, but the reference is ambiguous. How do I unambiguously select the correct column?
>>> from pyspark.sql import Row
>>> data = [Row(companyId=1, companyid=2, company="Hello world industries")]
>>> df = sc.parallelize(data).toDF()
>>> df.createOrReplaceTempView('my_df')
>>> spark.sql("SELECT companyid FROM my_df")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/spark22/python/pyspark/sql/session.py", line 603, in sql
return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
File "/opt/spark22/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/opt/spark22/python/pyspark/sql/utils.py", line 69, in deco
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"Reference 'companyid' is ambiguous, could be: companyid#1L, companyid#2L.; line 1 pos 7"
The solution turned out to be quite simple. Before running the SELECT statement, I ran the following command:
spark.sql("set spark.sql.caseSensitive=true")
You're a lifesaver!
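For completeness, here is a minimal, self-contained sketch (not from the original post) of the same fix applied end to end; the session setup and app name are illustrative assumptions for a standalone script, whereas in a PySpark shell `spark` already exists:

>>> from pyspark.sql import Row, SparkSession
>>>
>>> # Illustrative session setup; the app name is a placeholder.
>>> spark = SparkSession.builder.appName("ambiguous-columns").getOrCreate()
>>>
>>> # Make the analyzer case sensitive so companyId and companyid are distinct names.
>>> spark.sql("set spark.sql.caseSensitive=true")
>>>
>>> data = [Row(companyId=1, companyid=2, company="Hello world industries")]
>>> df = spark.createDataFrame(data)
>>> df.createOrReplaceTempView('my_df')
>>>
>>> # With case sensitivity on, each query now resolves to exactly one column.
>>> spark.sql("SELECT companyId FROM my_df").show()  # returns 1
>>> spark.sql("SELECT companyid FROM my_df").show()  # returns 2

Note that this setting is applied per session, so it must be set before the query that hits the ambiguity; with it enabled, queries that previously relied on case-insensitive matching of column names will resolve names exactly as written.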