Python Spark: No module named *
Why do I get this error when I use
rdd = rdd.map(lambda i: Recommendation.TransfertRecurrentConfigKey(i))
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/worker.py", line 161, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/worker.py", line 54, in read_command
    command = serializer._read_with_length(file)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
ImportError: No module named Recommendation.Recommendation
When I use rdd = rdd.map(Recommendation.TransfertRecurrentConfigKey) instead, the code runs fine. Why?
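One plausible explanation (an inference, not something the thread confirms) is function serialization. A minimal non-Spark sketch with the stdlib pickle, which, unlike the serializer PySpark actually uses, cannot serialize lambdas at all:

```python
import pickle

# A named, importable function is pickled by *reference*: only its
# module path and name are stored, and the receiver re-imports it.
serialized = pickle.dumps(len)

# A lambda has no importable name, so the stdlib pickle rejects it.
try:
    pickle.dumps(lambda x: len(x))
    lambda_picklable = True
except Exception:  # typically pickle.PicklingError
    lambda_picklable = False
```

PySpark ships functions with cloudpickle, which can serialize a lambda by value; but globals the lambda references (here the Recommendation class) are still recorded by module path, so the Recommendation module must be importable on every worker, e.g. shipped via spark-submit --py-files.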
I am asking because I want to be able to pass the list of keys ["Sexe", "Age", "Profession", "Revenus"] as an argument.
Edit: I fixed the error, but I still don't understand why it works in the first case and not in the second.
(See the second answer.)
How did you solve your problem?
class Recommendation:

    @staticmethod
    def TransfertRecurrentRecommendation(dataFrame):
        rdd = dataFrame.rdd.map(Recommendation.TransfertRecurrentCleanData)
        rdd = rdd.filter(lambda x: x is not None)
        # rdd = rdd.map(lambda user: ((user['Sexe'], user['Age'], user['Profession'], user['Revenus']), user['TransfertRecurrent']))
        rdd = rdd.map(lambda i: Recommendation.TransfertRecurrentConfigKey(i))
        print rdd.collect()

    @staticmethod
    def TransfertRecurrentConfigKey(user):
        tmp = []
        for k in ["Sexe", "Age", "Profession", "Revenus"]:
            tmp.append(user[k])
        return tuple(tmp), user['TransfertRecurrent']
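The key-building logic above can be checked locally without a cluster; here the builtin map stands in for rdd.map, and the sample row is made up for illustration:

```python
class Recommendation:
    @staticmethod
    def TransfertRecurrentConfigKey(user):
        # Collect the grouping columns into a tuple key,
        # paired with the TransfertRecurrent value.
        tmp = []
        for k in ["Sexe", "Age", "Profession", "Revenus"]:
            tmp.append(user[k])
        return tuple(tmp), user['TransfertRecurrent']

rows = [{"Sexe": "F", "Age": 25, "Profession": "eng",
         "Revenus": 35000, "TransfertRecurrent": 0}]
print(list(map(Recommendation.TransfertRecurrentConfigKey, rows)))
# prints [(('F', 25, 'eng', 35000), 0)]
```

Passing the staticmethod directly, as in the last line, is the form that worked on the cluster as well.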