Scala Spark 3 streaming job fails with "Cannot run program "chmod""

Tags: scala, apache-spark, kubernetes

Spark 3.0 on Kubernetes reads data from Kafka and pushes it out through a third-party Segment IO REST API.
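
For context, a minimal sketch of what such a job typically looks like (the broker address, topic name, checkpoint path, and the postToSegment helper are placeholders, not the asker's actual code). Note the checkpointLocation option: the stack trace below fires while Spark writes batch metadata to the checkpoint directory.

import org.apache.spark.sql.{DataFrame, SparkSession}

object KafkaToSegment {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-segment")
      .getOrCreate()

    // Subscribe to the Kafka topic as a stream.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka:9092")
      .option("subscribe", "events")
      .load()

    // Stand-in for the third-party Segment IO REST call made by the real job.
    def postToSegment(payload: String): Unit = ()

    // An explicit function type avoids the Scala/Java overload ambiguity of foreachBatch.
    val pushBatch: (DataFrame, Long) => Unit = (batch, _) =>
      batch.selectExpr("CAST(value AS STRING) AS value")
        .collect() // fine for a sketch; a real job would use foreachPartition on the executors
        .foreach(row => postToSegment(row.getString(0)))

    val query = events.writeStream
      .foreachBatch(pushBatch)
      .option("checkpointLocation", "/tmp/checkpoints/kafka-to-segment")
      .start()

    query.awaitTermination()
  }
}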

When I run the Spark streaming job, it fails with the following error:

Caused by: java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:938)
at org.apache.hadoop.util.Shell.run(Shell.java:901)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:865)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:252)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:232)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:331)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:320)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:351)
at org.apache.hadoop.fs.FileSystem.primitiveCreate(FileSystem.java:1228)
at org.apache.hadoop.fs.DelegateToFileSystem.createInternal(DelegateToFileSystem.java:100)
at org.apache.hadoop.fs.ChecksumFs$ChecksumFSOutputSummer.<init>(ChecksumFs.java:353)
at org.apache.hadoop.fs.ChecksumFs.createInternal(ChecksumFs.java:400)
at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605)
at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:696)
at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:692)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.create(FileContext.java:698)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:310)
at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:316)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.writeBatchToFile(HDFSMetadataLog.scala:131)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.$anonfun$add$3(HDFSMetadataLog.scala:120)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.add(HDFSMetadataLog.scala:118)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:588)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:598)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:585)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:223)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
... 1 more
Caused by: java.io.IOException: error=11, Resource temporarily unavailable
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)

Check the PATH environment variable. (Maybe you overrode it to add some spark/kafka jars to the path?)
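
One way to verify this from inside the driver pod (a minimal sketch; it assumes chmod should be resolvable via the pod's PATH, which is how Hadoop's Shell utility invokes it):

import scala.sys.process._
import scala.util.Try

// Print the PATH the JVM actually sees inside the pod.
println(sys.env.getOrElse("PATH", "<PATH is not set>"))

// Try to resolve chmod the same way a shell would.
println(Try(Seq("which", "chmod").!!).getOrElse("chmod not found on PATH"))

That said, error=11 is EAGAIN, meaning the fork itself failed rather than chmod being missing, so it is also worth checking the pod's process/thread limits (ulimit -u, or the container's pids limit) in addition to the PATH.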