Connection between R and Hive (on Spark)

Tags: r, apache-spark, jdbc, hive

I am trying to connect R and Hive (on Spark). It works fine on my desktop (Windows 10, R-3.4.2), but on the R server (Linux, R-3.4.4) I get this error:

library(rJava)
library(RJDBC)
driver <- JDBC("org.apache.hive.jdbc.HiveDriver", "~/Drivers/Spark/hive-jdbc-1.2.1-spark2-amzn-0.jar",identifier.quote="`")
url <- "jdbc:hive2://<MyIP>:10001"
conn <- dbConnect(driver, url) 
Error in .jcall(drv@jdrv,"Ljava/sql/Connection;", "connect", as.character(url)[1],  : java.lang.NoClassDefFoundError: org/apache/http/client/CookieStore
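The `NoClassDefFoundError` for `org/apache/http/client/CookieStore` means the JVM found the Hive driver itself but not the Apache HttpComponents (httpclient) jar the driver depends on for HTTP-mode transport. One workaround is to hand RJDBC every required jar up front, since the `classPath` argument of `JDBC()` accepts a character vector. A minimal sketch, assuming the httpclient/httpcore jars sit next to the Hive driver (the file names and version numbers below are placeholders, not values from the original question):

```r
library(rJava)
library(RJDBC)

# Placeholder jar names/versions: substitute whatever your distribution ships.
jars <- c(
  "~/Drivers/Spark/hive-jdbc-1.2.1-spark2-amzn-0.jar",
  "~/Drivers/Spark/httpclient-4.5.jar",  # provides org.apache.http.client.CookieStore
  "~/Drivers/Spark/httpcore-4.4.jar"
)

# classPath is forwarded to the JVM, so all three jars become visible
# before the driver class is instantiated.
driver <- JDBC("org.apache.hive.jdbc.HiveDriver",
               classPath = jars,
               identifier.quote = "`")
```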
I found the solution:

library(rJava)
library(RJDBC)

options(java.parameters = '-Xmx256m')

# Collect every jar in the Spark JDBC directory (httpclient included)
# and start the JVM with that full classpath.
hadoop_jar_dirs <- c('/home/ubuntu/spark-jdbc/')
clpath <- c()
for (d in hadoop_jar_dirs) {
  clpath <- c(clpath, list.files(d, pattern = 'jar', full.names = TRUE))
}
.jinit(classpath = clpath)
.jaddClassPath(clpath)

hive_jdbc_jar <- 'hive-jdbc-1.2.1-spark2-amzn-0.jar'
hive_driver <- 'org.apache.hive.jdbc.HiveDriver'
hive_url <- 'jdbc:hive2://<MyIP>:10001'
drv <- JDBC(hive_driver, hive_jdbc_jar)
conn <- dbConnect(drv, hive_url)
show_databases <- dbGetQuery(conn, "show databases")
show_databases
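This works because `.jinit()`/`.jaddClassPath()` put every jar in the directory on the JVM classpath, httpclient included, before the driver class is loaded. With the connection open you can run further queries and release the session as usual; a short sketch (the database name `default` is an assumption, not from the original answer):

```r
# Follow-up on the open connection from above.
tables <- dbGetQuery(conn, "show tables in default")
print(tables)
dbDisconnect(conn)  # release the JDBC session when done
```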