
Java CreateProcess error=5, Access is denied - PySpark

Tags: java, python, pyspark, anaconda

I am asking for your help with running the code below: when I try to run it, the following error appears, saying that access to the Python home path is denied.

I have already tried running the browser and cmd as administrator and executing it that way, and I also changed the directory permissions to give everyone full control, but the error does not go away.

import random
NUM_SAMPLES = 100000000
def inside(p):
    # sample a random point in the unit square and test whether it lands inside the unit circle
    x, y = random.random(), random.random()
    return x*x + y*y < 1
count = sc.parallelize(range(0, NUM_SAMPLES)).filter(inside).count()  # line that fails
pi = 4 * count / NUM_SAMPLES
print('Pi is roughly', pi)

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-7-2b827abd567e> in <module>
     13  x, y = random.random(), random.random()
     14  return x*x + y*y < 1
---> 15 count = sc.parallelize(range(0, NUM_SAMPLES)).filter(inside).count()
     16 pi = 4 * count / NUM_SAMPLES
     17 print('Pi is roughly', pi)

C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\rdd.py in count(self)
   1126         3
   1127         """
-> 1128         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
   1129 
   1130     def stats(self):

C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\rdd.py in sum(self)
   1117         6.0
   1118         """
-> 1119         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
   1120 
   1121     def count(self):

C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\rdd.py in fold(self, zeroValue, op)
    988         # zeroValue provided to each partition is unique from the one provided
    989         # to the final reduce call
--> 990         vals = self.mapPartitions(func).collect()
    991         return reduce(op, vals, zeroValue)
    992 

C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\rdd.py in collect(self)
    887         """
    888         with SCCallSiteSync(self.context) as css:
--> 889             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    890         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
    891 

C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\py4j-0.10.8.1-src.zip\py4j\java_gateway.py in __call__(self, *args)
   1284         answer = self.gateway_client.send_command(command)
   1285         return_value = get_return_value(
-> 1286             answer, self.gateway_client, self.target_id, self.name)
   1287 
   1288         for temp_arg in temp_args:

C:\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
     96     def deco(*a, **kw):
     97         try:
---> 98             return f(*a, **kw)
     99         except py4j.protocol.Py4JJavaError as e:
    100             converted = convert_exception(e.java_exception)

C:\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\py4j-0.10.8.1-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 5.0 failed 1 times, most recent failure: Lost task 2.0 in stage 5.0 (TID 22, DESKTOP-MRGDUK2, executor driver): java.io.IOException: Cannot run program "C:\Users\developer\Anaconda3\pkgs\python-3.7.6-h60c2a47_2": CreateProcess error=5, Access is denied
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:165)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:107)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:118)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:126)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: CreateProcess error=5, Access is denied
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:444)
    at java.lang.ProcessImpl.start(ProcessImpl.java:140)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 15 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:1989)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1977)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1976)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1976)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:956)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:956)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:956)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2206)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2155)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2144)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:758)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2116)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2137)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2156)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2181)
    at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1004)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:388)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:1003)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:168)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Cannot run program "C:\Users\developer\Anaconda3\pkgs\python-3.7.6-h60c2a47_2": CreateProcess error=5, Access is denied
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:165)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:107)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:118)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:126)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: java.io.IOException: CreateProcess error=5, Access is denied
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:444)
    at java.lang.ProcessImpl.start(ProcessImpl.java:140)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 15 more
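
The IOException names C:\Users\developer\Anaconda3\pkgs\python-3.7.6-h60c2a47_2, which is a folder in Anaconda's package cache rather than a python.exe, so the "Access is denied" most likely comes from Spark trying to execute a directory as its Python worker. Below is a minimal sketch, not part of the original post, of pointing PySpark at a concrete interpreter through the standard PYSPARK_PYTHON / PYSPARK_DRIVER_PYTHON environment variables; the interpreter path and the application name are assumptions and need to be adjusted to the actual install.

import os
from pyspark import SparkConf, SparkContext

# Assumed location of the Anaconda interpreter; replace with the real python.exe path.
python_exe = r"C:\Users\developer\Anaconda3\python.exe"

# Both variables must be set before the SparkContext is created:
# PYSPARK_PYTHON is the interpreter the executors launch for Python workers,
# PYSPARK_DRIVER_PYTHON is the interpreter used on the driver side.
os.environ["PYSPARK_PYTHON"] = python_exe
os.environ["PYSPARK_DRIVER_PYTHON"] = python_exe

conf = SparkConf().setMaster("local[*]").setAppName("pi-estimate")
sc = SparkContext(conf=conf)

The same interpreter can also be supplied through the spark.pyspark.python configuration property, or by setting the two environment variables system-wide before launching the notebook; in every case the value must be the python.exe file itself, not the python-3.7.6-h60c2a47_2 folder under pkgs.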