Python ImportError: No module named numpy on Windows


Hi, I'm new to PySpark (I only learned about it a week ago) and I'm looking for help with this error:

ImportError: No module named numpy

Could any kind soul figure out why my numpy cannot be found? I have already tried the following: uninstalling numpy and installing it again from the Anaconda prompt run as administrator, checking my environment variables for python_home, and restarting my Jupyter notebook kernel.
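
In case it helps anyone spot the problem, the kind of environment check I have in mind is roughly the following (just a sketch run from the notebook; the printed paths will of course differ per machine):

    import sys

    # Which interpreter and environment this Jupyter kernel runs from
    print("interpreter:", sys.executable)
    print("environment:", sys.prefix)

    # Is numpy importable from this interpreter?
    try:
        import numpy
        print("numpy found at:", numpy.__file__)
    except ImportError:
        print("numpy is NOT importable from this interpreter")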

    def parse_line(l):
        # Split a CSV line into its fields; report lines that fail to parse
        try:
            return l.split(",")
        except:
            print("error in processing {0}".format(l))

    # LabeledPoint lives in pyspark.mllib, whose __init__ imports numpy on the workers
    from pyspark.mllib.regression import LabeledPoint

    data = sc.textFile('YearPredictionMSD.txt').map(lambda x: parse_line(x)).toDF()
    data_label = data.rdd.map(lambda x: LabeledPoint(x[0], x[1:]))
    data_train = data_label.zipWithIndex().filter(lambda x: x[1] < 463715)
    data_test = data_label.zipWithIndex().filter(lambda x: x[1] >= 463715)



    ---------------------------------------------------------------------------
    Py4JJavaError                             Traceback (most recent call last)
    <ipython-input-4-ed224fb17ae0> in <module>
    ----> 1 data_train = data_label.zipWithIndex().filter(lambda x: x[1] < 463715)
          2 
          3 data_test = data_label.zipWithIndex().filter(lambda x: x[1] >= 463715)

    C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\rdd.py in zipWithIndex(self)
       2244         starts = [0]
       2245         if self.getNumPartitions() > 1:
    -> 2246             nums = self.mapPartitions(lambda it: [sum(1 for i in it)]).collect()
       2247             for i in range(len(nums) - 1):
       2248                 starts.append(starts[-1] + nums[i])

    C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\rdd.py in collect(self)
        887         """
        888         with SCCallSiteSync(self.context) as css:
    --> 889             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
        890         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
        891 

    C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\py4j-0.10.8.1-src.zip\py4j\java_gateway.py in __call__(self, *args)
       1284         answer = self.gateway_client.send_command(command)
       1285         return_value = get_return_value(
    -> 1286             answer, self.gateway_client, self.target_id, self.name)
       1287 
       1288         for temp_arg in temp_args:

    C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
         96     def deco(*a, **kw):
         97         try:
    ---> 98             return f(*a, **kw)
         99         except py4j.protocol.Py4JJavaError as e:
        100             converted = convert_exception(e.java_exception)

    C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\py4j-0.10.8.1-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
        326                 raise Py4JJavaError(
        327                     "An error occurred while calling {0}{1}{2}.\n".
    --> 328                     format(target_id, ".", name), value)
        329             else:
        330                 raise Py4JError(

    Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1.0 failed 1 times, most recent failure: Lost task 2.0 in stage 1.0 (TID 3, DESKTOP-MRGDUK2, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 579, in main
      File "C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 71, in read_command
      File "C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\serializers.py", line 172, in _read_with_length
        return self.loads(obj)
      File "C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\serializers.py", line 700, in loads
        return pickle.loads(obj, encoding=encoding)
      File "C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\mllib\__init__.py", line 28, in <module>
        import numpy
    ModuleNotFoundError: No module named 'numpy'

Run the command below in the terminal.

For Python 2:

    pip install numpy

For Python 3:

    pip3 install numpy

Make sure you run it in the same environment where numpy is installed, that is, with the same Anaconda Python. Please share more information about your development environment. You say you are using Conda: what is in the environment for this project, and what exact commands are you running?
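
For example, one way to confirm this from inside the notebook, and to point the PySpark workers at the same interpreter, is something along these lines (a minimal sketch, assuming the notebook kernel itself runs in the Anaconda environment where numpy is installed; the environment variables have to be set before the SparkContext is created):

    import os
    import sys

    # The interpreter the Jupyter kernel (Spark driver) is using
    print("driver interpreter:", sys.executable)

    # Make the PySpark workers use the same interpreter, so they see the
    # same site-packages and therefore the same numpy installation.
    # This only takes effect if it runs before the SparkContext is created.
    os.environ["PYSPARK_PYTHON"] = sys.executable
    os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

If the interpreter printed here is not the Anaconda one that has numpy, installing numpy into it (or repointing PYSPARK_PYTHON at the Anaconda interpreter) usually resolves this kind of worker-side ModuleNotFoundError.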