Scala SparkWordCount problem - java.lang.ClassNotFoundException

Tags: scala, apache-spark, word-count, spark-submit

This is not a duplicate question; I have already tried many approaches, but none have worked.

I am trying to write a word-count application that I can run with spark-submit.

I am using IntelliJ IDEA, spark-2.1.1, and scala-2.11.8.

My word-count code looks like this:

package com.netflix.utilities

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

class SparkWordCount {
  def main(args: Array[String]) {
   // create Spark context with Spark configuration
   println("starting")
   val sc = new SparkContext(new SparkConf().setAppName("Spark Count"))

   // get threshold
   val threshold = args(1).toInt

   // read in text file and split each document into words
   val tokenized = sc.textFile(args(0)).flatMap(_.split(" "))

   // count the occurrence of each word
   val wordCounts = tokenized.map((_, 1)).reduceByKey(_ + _)

   // filter out words with fewer than threshold occurrences
   val filtered = wordCounts.filter(_._2 >= threshold)

   // count characters
   val charCounts = filtered.flatMap(_._1.toCharArray).map((_, 1)).reduceByKey(_ + _)

   System.out.println(charCounts.collect().mkString(", "))
  }
}

My pom.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.netflix.utilities</groupId>
<artifactId>SparkWordCount</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>2.11.8</version>
    </dependency>

</dependencies>

<distributionManagement>
    <repository>
        <id>internal.repo</id>
        <name>Internal repo</name>
        <url>file:///Users/sankar.biswas/Noah/</url>
    </repository>
</distributionManagement>


<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.7.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
</build>


Now I have put my jar in the same folder as spark-submit.

This is the spark-submit command I am running:

./spark-submit --class com.netflix.utilities.SparkWordCount --master local --deploy-mode client SparkWordCount-1.0-SNAPSHOT.jar "file:////Users/sankar.biswas/Desktop/hello.txt"

But I keep getting this error:

java.lang.ClassNotFoundException: SparkWordCount


I don't see what I'm missing. Any suggestions would be appreciated.

Use object SparkWordCount instead of class SparkWordCount, change --class "SparkWordCount" to --class com.netflix.utilities.SparkWordCount, and run it.
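
For concreteness, this is the asker's code with that fix applied, a minimal sketch in which only the class keyword changes to object. spark-submit invokes a JVM-visible static main method, and Scala emits one only for an object, never for a class:

package com.netflix.utilities

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object SparkWordCount {
  def main(args: Array[String]) {
    // create Spark context with Spark configuration
    println("starting")
    val sc = new SparkContext(new SparkConf().setAppName("Spark Count"))

    // get threshold from the second argument, so pass it on the command line
    val threshold = args(1).toInt

    // read in text file and split each document into words
    val tokenized = sc.textFile(args(0)).flatMap(_.split(" "))

    // count the occurrence of each word
    val wordCounts = tokenized.map((_, 1)).reduceByKey(_ + _)

    // filter out words with fewer than threshold occurrences
    val filtered = wordCounts.filter(_._2 >= threshold)

    // count characters across the remaining words
    val charCounts = filtered.flatMap(_._1.toCharArray).map((_, 1)).reduceByKey(_ + _)

    System.out.println(charCounts.collect().mkString(", "))
  }
}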

Have you tried adding spark-mllib to the pom? Also, you have to include the package: a bare class name without the package will not work.

Nikhil, why would I need spark-mllib for this task? ./spark-submit --class com.netflix.utilities.SparkWordCount --master local --deploy-mode client SparkWordCount-1.0-SNAPSHOT.jar "file:////Users/sankar.biswas/Desktop/hello.txt" 2 ==> getting the same error: java.lang.ClassNotFoundException: com.netflix.utilities.SparkWordCount

Have you changed it to an object? Change class SparkWordCount to object SparkWordCount.
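
To close the loop on the comment thread: once SparkWordCount is declared as an object and the jar is rebuilt and copied back next to spark-submit (the rebuild step is an assumption here, since the question does not say how the jar is produced), the invocation from the comment above is the one to run. Note the trailing 2: the code reads the threshold from args(1), so a second argument is required.

./spark-submit --class com.netflix.utilities.SparkWordCount --master local --deploy-mode client SparkWordCount-1.0-SNAPSHOT.jar "file:////Users/sankar.biswas/Desktop/hello.txt" 2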