Running a Spark word count in Scala in IntelliJ


I've spent hours going through YouTube videos and tutorials trying to understand how to run a word count program for Spark in Scala and turn it into a jar file. I'm now completely confused.

I got Hello World running, and I've figured out how to add Apache.spark.spark-core to my libraries, but now I get

Error: Could not find or load main class WordCount
Also, I'm completely confused about why these two tutorials, which I assume teach the same thing, seem so different: Tutorial 1

The second one seems twice as long as the first and includes things the first never mentions. Should I rely on either of these to help me get a simple word count program up and running as a jar?

My code currently looks like this; I copied it from somewhere:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark._

object WordCount {
  def main(args: Array[String]) {

    val sc = new SparkContext( "local", "Word Count", "/usr/local/spark", Nil, Map(), Map())
    val input = sc.textFile("../Data/input.txt")
    val count = input.flatMap(line ⇒ line.split(" "))
      .map(word ⇒ (word, 1))
      .reduceByKey(_ + _)
    count.saveAsTextFile("outfile")
    System.out.println("OK");
  }
}
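As an aside, the multi-argument SparkContext constructor above is a legacy Spark 1.x signature. On Spark 2.x the usual pattern is to build a SparkConf first and pass it to the context. A minimal sketch of the same word count in that style (the input and output paths are placeholders, not taken from the question):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // "local[*]" runs Spark in-process, using all available cores.
    val conf = new SparkConf().setAppName("Word Count").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val input = sc.textFile("src/main/resources/input.txt")
    val counts = input
      .flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Note: saveAsTextFile writes a directory of part files, not a single file.
    counts.saveAsTextFile("outfile")

    sc.stop()
  }
}
```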

You can always have WordCount extend App, and that should work fine. I believe this has to do with how you're building your project.

Read more about the App trait.
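The App-trait alternative the answer refers to can be sketched as follows (a minimal illustration; the Spark logic from the question would go where the comment is):

```scala
// With the App trait, the object's body becomes the program's entry point,
// so no explicit main method is needed.
object WordCount extends App {
  // Spark setup and word-count logic would go here.
  println("WordCount started")
}
```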

In any case, make sure your directory layout looks like this:

/build.sbt
/src
/src/main
/src/main/scala
/src/main/scala/WordCount.scala


Check out the sample code I wrote, below:

package com.spark.app

import org.scalatra._
import org.apache.spark.{ SparkContext, SparkConf }

class MySparkAppServlet extends MySparkAppStack {

  get("/wc") {
    val inputFile = "/home/limitless/Documents/projects/test/my-spark-app/README.md"
    val outputFile = "/home/limitless/Documents/projects/test/my-spark-app/README.txt"
    val conf = new SparkConf().setAppName("wordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val input = sc.textFile(inputFile)
    val words = input.flatMap(line => line.split(" "))
    val counts = words.map(word => (word, 1)).reduceByKey { case (x, y) => x + y }
    counts.saveAsTextFile(outputFile)
  }

}
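One caveat with the servlet above: Spark allows only one active SparkContext per JVM by default, so constructing a new context inside every GET request will fail after the first call. A sketch of sharing a single context across requests instead (the holder object's name is illustrative, not from the original code):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical holder: the lazy val creates the SparkContext once,
// on first access, and every subsequent request reuses it.
object SparkHolder {
  lazy val sc: SparkContext = {
    val conf = new SparkConf().setAppName("wordCount").setMaster("local[*]")
    new SparkContext(conf)
  }
}
```

Inside the servlet, `val sc = SparkHolder.sc` would then replace the per-request `new SparkContext(conf)`.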


In IntelliJ IDEA, do File -> New -> Project -> Scala -> sbt -> choose the project's location and name -> Finish.

In build.sbt, write:

scalaVersion := "2.11.11"
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0"
Run sbt update from the command line in the main project folder, or press the refresh button in the sbt tool window inside IntelliJ IDEA.

Write the code in src/main/scala/WordCount.scala.

Put the input file at src/main/resources/input.txt.

Run the code: Ctrl+Shift+F10, or sbt run.

A new subfolder outfile, containing several files, should appear in the folder src/main/resources.

Console output:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/09/02 14:57:08 INFO SparkContext: Running Spark version 2.2.0
17/09/02 14:57:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/02 14:57:09 WARN Utils: Your hostname, dmitin-HP-Pavilion-Notebook resolves to a loopback address: 127.0.1.1; using 192.168.1.104 instead (on interface wlan0)
17/09/02 14:57:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/09/02 14:57:09 INFO SparkContext: Submitted application: Word Count
17/09/02 14:57:09 INFO SecurityManager: Changing view acls to: dmitin
17/09/02 14:57:09 INFO SecurityManager: Changing modify acls to: dmitin
17/09/02 14:57:09 INFO SecurityManager: Changing view acls groups to: 
17/09/02 14:57:09 INFO SecurityManager: Changing modify acls groups to: 
17/09/02 14:57:09 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(dmitin); groups with view permissions: Set(); users  with modify permissions: Set(dmitin); groups with modify permissions: Set()
17/09/02 14:57:10 INFO Utils: Successfully started service 'sparkDriver' on port 38186.
17/09/02 14:57:10 INFO SparkEnv: Registering MapOutputTracker
17/09/02 14:57:10 INFO SparkEnv: Registering BlockManagerMaster
17/09/02 14:57:10 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/09/02 14:57:10 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/09/02 14:57:10 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d90a4735-6a2b-42b2-85ea-55b0ed9b1dfd
17/09/02 14:57:10 INFO MemoryStore: MemoryStore started with capacity 1950.3 MB
17/09/02 14:57:10 INFO SparkEnv: Registering OutputCommitCoordinator
17/09/02 14:57:10 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/09/02 14:57:11 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.104:4040
17/09/02 14:57:11 INFO Executor: Starting executor ID driver on host localhost
17/09/02 14:57:11 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46432.
17/09/02 14:57:11 INFO NettyBlockTransferService: Server created on 192.168.1.104:46432
17/09/02 14:57:11 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/09/02 14:57:11 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.104, 46432, None)
17/09/02 14:57:11 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.104:46432 with 1950.3 MB RAM, BlockManagerId(driver, 192.168.1.104, 46432, None)
17/09/02 14:57:11 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.104, 46432, None)
17/09/02 14:57:11 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.104, 46432, None)
17/09/02 14:57:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 214.5 KB, free 1950.1 MB)
17/09/02 14:57:12 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.4 KB, free 1950.1 MB)
17/09/02 14:57:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.104:46432 (size: 20.4 KB, free: 1950.3 MB)
17/09/02 14:57:12 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:16
17/09/02 14:57:12 INFO FileInputFormat: Total input paths to process : 1
17/09/02 14:57:12 INFO SparkContext: Starting job: saveAsTextFile at WordCount.scala:20
17/09/02 14:57:12 INFO DAGScheduler: Registering RDD 3 (map at WordCount.scala:18)
17/09/02 14:57:12 INFO DAGScheduler: Got job 0 (saveAsTextFile at WordCount.scala:20) with 1 output partitions
17/09/02 14:57:12 INFO DAGScheduler: Final stage: ResultStage 1 (saveAsTextFile at WordCount.scala:20)
17/09/02 14:57:12 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
17/09/02 14:57:12 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
17/09/02 14:57:12 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:18), which has no missing parents
17/09/02 14:57:13 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.7 KB, free 1950.1 MB)
17/09/02 14:57:13 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.7 KB, free 1950.1 MB)
17/09/02 14:57:13 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.104:46432 (size: 2.7 KB, free: 1950.3 MB)
17/09/02 14:57:13 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/09/02 14:57:13 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:18) (first 15 tasks are for partitions Vector(0))
17/09/02 14:57:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/09/02 14:57:13 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 4873 bytes)
17/09/02 14:57:13 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
17/09/02 14:57:13 INFO HadoopRDD: Input split: file:/home/dmitin/Projects/sparkdemo/src/main/resources/input.txt:0+11
17/09/02 14:57:13 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1154 bytes result sent to driver
17/09/02 14:57:13 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 289 ms on localhost (executor driver) (1/1)
17/09/02 14:57:13 INFO DAGScheduler: ShuffleMapStage 0 (map at WordCount.scala:18) finished in 0,321 s
17/09/02 14:57:13 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
17/09/02 14:57:13 INFO DAGScheduler: looking for newly runnable stages
17/09/02 14:57:13 INFO DAGScheduler: running: Set()
17/09/02 14:57:13 INFO DAGScheduler: waiting: Set(ResultStage 1)
17/09/02 14:57:13 INFO DAGScheduler: failed: Set()
17/09/02 14:57:13 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at saveAsTextFile at WordCount.scala:20), which has no missing parents
17/09/02 14:57:13 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 65.3 KB, free 1950.0 MB)
17/09/02 14:57:13 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 23.3 KB, free 1950.0 MB)
17/09/02 14:57:13 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.1.104:46432 (size: 23.3 KB, free: 1950.3 MB)
17/09/02 14:57:13 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006
17/09/02 14:57:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at saveAsTextFile at WordCount.scala:20) (first 15 tasks are for partitions Vector(0))
17/09/02 14:57:13 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/09/02 14:57:13 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, ANY, 4621 bytes)
17/09/02 14:57:13 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/09/02 14:57:13 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/09/02 14:57:13 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 10 ms
17/09/02 14:57:13 INFO FileOutputCommitter: Saved output of task 'attempt_20170902145712_0001_m_000000_1' to file:/home/dmitin/Projects/sparkdemo/src/main/resources/outfile/_temporary/0/task_20170902145712_0001_m_000000
17/09/02 14:57:13 INFO SparkHadoopMapRedUtil: attempt_20170902145712_0001_m_000000_1: Committed
17/09/02 14:57:13 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1224 bytes result sent to driver
17/09/02 14:57:13 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 221 ms on localhost (executor driver) (1/1)
17/09/02 14:57:13 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
17/09/02 14:57:13 INFO DAGScheduler: ResultStage 1 (saveAsTextFile at WordCount.scala:20) finished in 0,223 s
17/09/02 14:57:13 INFO DAGScheduler: Job 0 finished: saveAsTextFile at WordCount.scala:20, took 1,222133 s
OK
17/09/02 14:57:13 INFO SparkContext: Invoking stop() from shutdown hook
17/09/02 14:57:13 INFO SparkUI: Stopped Spark web UI at http://192.168.1.104:4040
17/09/02 14:57:13 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/09/02 14:57:13 INFO MemoryStore: MemoryStore cleared
17/09/02 14:57:13 INFO BlockManager: BlockManager stopped
17/09/02 14:57:13 INFO BlockManagerMaster: BlockManagerMaster stopped
17/09/02 14:57:13 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/09/02 14:57:13 INFO SparkContext: Successfully stopped SparkContext
17/09/02 14:57:13 INFO ShutdownHookManager: Shutdown hook called
17/09/02 14:57:13 INFO ShutdownHookManager: Deleting directory /tmp/spark-663047b2-415a-45b5-bcad-20bd18270baa

Process finished with exit code 0



Comments:

Your first link is a PDF on your machine... we can't access that.

@cricket_007 Here it is: setting up spark 2.0 with intellij community edition.pd

Yes, I saw the extends-App approach mentioned above, but assumed most developers use the more Java-style approach? Also, I'm new to Scala and haven't done Java since my bygone university days. I believe that if you have the directory layout I mentioned, you don't even need extends App.

Extending App has nothing to do with the directory layout; it's a way of defining an executable class.

Thanks - this works great. When I hit run, though, I currently get an error: (1, 12) object apache is not a member of package org, on import org.apache.spark.{SparkConf, SparkContext}. I'm looking into it.

@user1761806 That looks like a dependency-resolution problem. Try sbt clean and then sbt update, or try re-importing the project in IntelliJ.

Actually, I've solved it: sbt update should be run in your main project folder, not from an arbitrary location as I did.

@user1761806 You're right; sorry I didn't mention that. It's hard to guess all the difficulties one may run into when posting something.

While this code may answer the question, providing additional context about why and/or how it answers the question improves its long-term value.
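The dependency-resolution fix discussed in the comments amounts to running sbt from the right directory (the path below is a placeholder for your own project root):

```shell
# sbt update must run from the project root (the folder containing build.sbt),
# not from an arbitrary directory.
cd /path/to/your-project
sbt clean update
```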