Scala Oracle JDBC


I am getting a "module not found" error in Scala. I am trying to get a JDBC connection to Oracle, join two tables, and then print the result.

My Scala file is:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext

object sparkJDBC {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple      
        Application").setMaster("local[2]").set("spark.executor.memory","1g")
    val sc = new SparkContext(conf)
    var sqlContext = new SQLContext(sc)
    val chrttype = sqlContext.load("jdbc", 
      Map("url" -> "jdbc:oracle:thin:gductv1/gductv1@//localhost:1521/XE",
      "dbtable" -> "chrt_typ"))
    val clntlvl1  = sqlContext.load("jdbc", 
      Map("url" -> "jdbc:oracle:thin:gductv1/gductv1@//localhost:1521/XE", 
      "dbtable" -> "clnt_lvl1"))
    val join2 =
      chrttype.join(clntlvl1, chrttype.col("chrt_typ_key") === clntlvl1("lvl1_key"))
    join2.foreach(println)
    join2.printSchema()
    }
}
My build.sbt file is:

   name := "sparkJDBC"
   version := "0.1"
   scalaVersion := "2.11.7"

   libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1" 
   libraryDependencies += "org.apache.tika" % "tika-core" % "1.11"
   libraryDependencies += "org.apache.tika" % "tika-parsers" % "1.11"
   libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.1" 
   libraryDependencies += "org.apache.spark" % "spark-sql" % "1.0.0"
The error output is:

[warn]  module not found: org.apache.spark#spark-sql;1.0.0
[warn] ==== local: tried
[warn]   C:\Users\.ivy2\local\org.apache.spark\spark-sql\1.0.0\ivys\ivy.xml
[warn] ==== public: tried
[warn]   https://repo1.maven.org/maven2/org/apache/spark/spark-sql/1.0.0/spark-sql-1.0.0.pom
[info] Resolving jline#jline;2.12.1 ...
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: org.apache.spark#spark-sql;1.0.0: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::

[error] (*:update) sbt.ResolveException: unresolved dependency: org.apache.spark#spark-sql;1.0.0: not found

Please help me figure out what is causing this.

spark-sql is a Scala library, just like spark-core, so you need %% between the group name and the artifact name in the same way. Use % only for Java libraries.
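As an illustration (the versions here are only examples, chosen to stay consistent with the spark-core line in your build.sbt), %% makes sbt append the Scala binary version to the artifact name, while % uses the artifact name verbatim:

   // Scala library: with scalaVersion 2.11.x, sbt resolves this as spark-sql_2.11
   libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.5.1"
   // Java library: no Scala suffix is added to the artifact name
   libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.1"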
See the sbt documentation on cross-building for why you need %%. To make sure you have the correct dependency, you can use a site like mvnrepository, which shows the exact SBT line to copy, for example:


   libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.0.0"

That was a good pointer: the current resolvers did not have the dependency I was asking for. I no longer get the "module not found" problem, but now sbt reports conflicting cross-version suffixes in {file:/C:/apps/spark-2.1.0/ScalaFiles/}scalafiles: [error] org.json4s:json4s-ast _2.11, _2.10 [error] com.twitter:chill _2.11, _2.10 [error] org.json4s:json4s-jackson _2.11, _2.10 [error] org.json4s:json4s-core _2.11, _2.10 [error] org.apache.spark:spark-core _2.11, _2.10. I removed the Scala version reference (scalaVersion := "2.11.7") from the .sbt file and then it no longer gave me any conflicting cross-versions. Now it is giving me some SQLContext errors, but it has gotten past the first hurdle. Thanks Thomas and Alexey for the quick responses.
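For reference, a minimal build.sbt sketch that applies the answer, assuming you stay on Spark 1.5.1 and Scala 2.11.7 to match the original spark-core line. Keeping spark-core and spark-sql at the same version and letting %% pick the Scala suffix for both should also avoid the conflicting cross-version suffixes reported in the comment above:

   name := "sparkJDBC"
   version := "0.1"
   scalaVersion := "2.11.7"

   libraryDependencies ++= Seq(
     "org.apache.spark"  %% "spark-core"    % "1.5.1",
     "org.apache.spark"  %% "spark-sql"     % "1.5.1",  // resolved as spark-sql_2.11
     "org.apache.hadoop"  % "hadoop-client" % "2.7.1",
     "org.apache.tika"    % "tika-core"     % "1.11",
     "org.apache.tika"    % "tika-parsers"  % "1.11"
   )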