PySpark raises IllegalArgumentException


When launching the PySpark shell, I get an IllegalArgumentException:

        C:\spark\spark-2.2.1-bin-hadoop2.7\hadoop\bin>pyspark
        Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:25:58) [MSC v.1500 64 bit (AMD64)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
        Setting default log level to "WARN".
        To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
        WARNING: An illegal reflective access operation has occurred
        WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/C:/spark/spark-2.2.1-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
        WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
        WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
        WARNING: All illegal access operations will be denied in a future release
        17/12/21 15:48:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
        Traceback (most recent call last):
          File "C:\spark\spark-2.2.1-bin-hadoop2.7\python\pyspark\shell.py", line 45, in <module>
            spark = SparkSession.builder\
          File "C:\spark\spark-2.2.1-bin-hadoop2.7\python\pyspark\sql\session.py", line 183, in getOrCreate
            session._jsparkSession.sessionState().conf().setConfString(key, value)
          File "C:\spark\spark-2.2.1-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
          File "C:\spark\spark-2.2.1-bin-hadoop2.7\python\pyspark\sql\utils.py", line 79, in deco
            raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
        pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

Please suggest how to resolve these errors.

Check your permissions on /tmp/hive/: use sudo chmod -R 777 /tmp/hive/. Also, check whether you are submitting queries through a sqlContext; if so, switch to a sparkSession instead.
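Since the asker is on Windows, where sudo and chmod are not available, the usual equivalent of the permission fix above uses winutils.exe. This is a hedged sketch: it assumes winutils.exe is present in the Hadoop bin directory shown in the asker's prompt, which may not match every installation.

```shell
REM Sketch of the /tmp/hive permission fix on Windows, assuming
REM winutils.exe lives in the asker's hadoop\bin directory.
cd C:\spark\spark-2.2.1-bin-hadoop2.7\hadoop\bin

REM Grant full permissions on the Hive scratch directory that
REM HiveSessionStateBuilder needs at startup.
winutils.exe chmod -R 777 \tmp\hive
```

After changing the permissions, restart the pyspark shell so the session is rebuilt against the now-writable scratch directory.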

Given the exception, I am fairly sure the problem is that you have some Hive/Hadoop-related configuration somewhere that Spark is evidently picking up. Please share your Spark configuration, and please format your message.

Thanks for the response, Subash, but I am not sure where to run the command above. I have configured Spark on a Windows 10 machine; when I invoke pyspark it gives me this error, and many Spark operations also fail, e.g. count() raises a Py4J error.

You have to issue that command in the CLI. Stop all Spark processes and start again, and also check your environment variables.