Spark: Java compile error


When I compile my Spark Java program with Maven, I get the following compilation error:

[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] /home/spark/java/src/main/java/SimpleApp.java:[9,36] cannot find symbol
  symbol:   variable read
  location: variable spark of type org.apache.spark.sql.SparkSession
Here is my Java code:

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;


public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "/home/spark/spark-2.2.0-bin-hadoop2.7/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read.textFile(logFile).cache();
//    Dataset<String> logData = SparkSession.builder().appName("Simple Application").getOrCreate().read.textFile(logFile).cache();
    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

    spark.stop();
  }
}
I wrote this program following the example on the official website. Where does this error come from?


Here is my pom.xml:

<project>
  <groupId>edu.berkeley</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>

  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>2.2.0</version>
    </dependency>
  </dependencies>
</project>

OK... I found the cause. The example on the official website is wrong:

spark.read.textFile(logFile).cache();   -->   spark.read().textFile(logFile).cache();

read should be read()
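The underlying reason is that in Scala a zero-argument method can be called without parentheses, so spark.read compiles there, while Java requires explicit parentheses on every method call; without them, the compiler looks for a field named read and reports "cannot find symbol: variable read". A minimal sketch of the same builder-style call chain, using hypothetical Session and Reader classes (not Spark's real API) so it runs without any Spark dependency:

```java
// Hypothetical mini-API mirroring the SparkSession.read() pattern.
class Reader {
    // Stands in for DataFrameReader.textFile(path)
    String textFile(String path) {
        return "dataset:" + path;
    }
}

class Session {
    // read is a zero-argument METHOD, not a field, just like
    // SparkSession.read() in the real Java API.
    Reader read() {
        return new Reader();
    }
}

public class ReadSyntaxDemo {
    public static void main(String[] args) {
        Session spark = new Session();
        // spark.read.textFile(...)   // would fail: "cannot find symbol: variable read"
        String ds = spark.read().textFile("README.md"); // correct Java syntax
        System.out.println(ds);
    }
}
```

The same applies to the commented-out one-liner in the question: every link in the chain (builder(), appName(...), getOrCreate(), read()) must keep its parentheses in Java.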

Did you add the correct dependency? Could you also include your pom.xml file in the question?
Here is my pom.xml