Apache Spark PySpark error - py4j.Py4JException: Method limit([class java.lang.String]) does not exist
When running code that reads a Spark DataFrame from HDFS and then converts it to a pandas DataFrame:
spark_df = spark.read.parquet(*data_paths)
# other code in the process like filtering, groupby etc.
# ....
# write sparkdf to hadoop, get n rows if specified
if n:
    spark_df.limit(n).write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)
else:
    spark_df.write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)
I get the following error:
/home/sarah/anaconda3/envs/py27/lib/python2.7/site-packages/dspipeline/core/wf_spark.pyc in to_pd(spark_df, n, save_csv, csv_sep, csv_quote, quick)
215 # write sparkdf to hadoop, get n rows if specified
216 if n:
--> 217 spark_df.limit(n).write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)
218 else:
219 spark_df.write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)
/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/pyspark/sql/dataframe.py in limit(self, num)
472 []
473 """
--> 474 jdf = self._jdf.limit(num)
475 return DataFrame(jdf, self.sql_ctx)
476
/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
1131 answer = self.gateway_client.send_command(command)
1132 return_value = get_return_value(
-> 1133 answer, self.gateway_client, self.target_id, self.name)
1134
1135 for temp_arg in temp_args:
/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
321 raise Py4JError(
322 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
--> 323 format(target_id, ".", name, value))
324 else:
325 raise Py4JError(
Py4JError: An error occurred while calling o1086.limit. Trace:
py4j.Py4JException: Method limit([class java.lang.String]) does not exist
at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
at py4j.Gateway.invoke(Gateway.java:272)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
Since I can find the function limit(num) in the pyspark documentation, I assume the problem is that I am not using it correctly. Any help?

The exception here is quite clear:

Method limit([class java.lang.String]) does not exist

The n you are trying to pass to limit is not an int but a str, so you should go back to the point where n is defined and fix it there. DataFrames do have a .limit method: .limit(n) returns the first n rows of the DataFrame, but the argument n must be an integer.

For example, df.limit(10) works, while passing anything else, such as df.limit('10'), raises exactly this error:

py4j.Py4JException: Method limit([class java.lang.String]) does not exist
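A common way to end up with a string n is reading it from a command-line argument or a config file, both of which yield str values. Below is a minimal sketch, not the asker's actual code, that casts n to int before calling limit; names such as data_paths and tmpfoldername are placeholders chosen to match the question, and the paths are hypothetical.

import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("limit-cast-example").getOrCreate()

# Placeholder input paths -- adjust to your HDFS layout.
data_paths = ["hdfs:///tmp/example/data.parquet"]
spark_df = spark.read.parquet(*data_paths)

# sys.argv values are always strings, e.g. "10"; cast before using limit().
n = int(sys.argv[1]) if len(sys.argv) > 1 else None

tmpfoldername = "hdfs:///tmp/limit_example_out"
if n:
    spark_df.limit(n).write.csv(tmpfoldername, sep=",", quote='"')
else:
    spark_df.write.csv(tmpfoldername, sep=",", quote='"')

Casting with int(n) also fails fast with a ValueError if the value is not numeric, which is easier to diagnose than the Py4J error above.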