SparkR read csv error: returnStatus == 0 is not TRUE
r, csv, apache-spark, sparkr

I started my SparkR shell with:
>>./bin/sparkR --packages com.databricks:spark-csv_2.10:1.2.0
Now I am trying to read a csv in the SparkR shell:
d <- read.df(sqlContext,
"data/mllib/sample_tree_data.csv","com.databricks.spark.csv", header="true")
I faced the exact same problem. It worked after I restarted the R session.

Wrong path - "data/mllib/sample_tree_data.csv"? @zero323 sorry, that was a typo... I still get the same error. If so, please provide a full traceback. @zero323 I added the log. That is a step in the right direction, but not exactly what we need. Between the method invocation and the error "returnStatus == 0 is not TRUE" there should be a chunk of text. Please copy and paste that.
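A minimal sketch of the workaround described above: restart the R session, re-launch SparkR with the spark-csv package, and retry the read. The path and the `sqlContext` name are taken from the question and the shell banner; the `source` and `header` arguments follow the `read.df` signature of SparkR 1.4.x, where extra options such as `header` are passed through to the data source.

```r
# After restarting the R session, launch a fresh SparkR shell:
#   ./bin/sparkR --packages com.databricks:spark-csv_2.10:1.2.0

# Inside the new shell, sqlContext already exists.
# Retry the same read, naming the data source package explicitly:
d <- read.df(sqlContext,
             "data/mllib/sample_tree_data.csv",
             source = "com.databricks.spark.csv",
             header = "true")

# Quick sanity checks on the resulting DataFrame:
printSchema(d)
head(d)
```

If the error persists in a fresh session, the full console output between the `read.df` call and the `returnStatus == 0 is not TRUE` message (as requested in the comments) is what pinpoints the underlying JVM-side failure.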
R version 3.0.3 (2014-03-06) -- "Warm Puppy"
Copyright (C) 2014 The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
Launching java with spark-submit command /opt/spark/bin/spark-submit "--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell" /tmp/RtmpXZM96Y/backend_porte2670057297
Ivy Default Cache set to: /home/ravimanik/.ivy2/cache
The jars for the packages stored in: /home/ravimanik/.ivy2/jars
:: loading settings :: url = jar:file:/opt/alti-spark-1.4.1.hadoop24.hive13/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.1.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-csv_2.10 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found com.databricks#spark-csv_2.10;1.2.0 in central
found org.apache.commons#commons-csv;1.1 in central
found com.univocity#univocity-parsers;1.5.1 in central
:: resolution report :: resolve 297ms :: artifacts dl 30ms
:: modules in use:
com.databricks#spark-csv_2.10;1.2.0 from central in [default]
com.univocity#univocity-parsers;1.5.1 from central in [default]
org.apache.commons#commons-csv;1.1 from central in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 3 | 0 | 0 | 0 || 3 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 3 already retrieved (0kB/21ms)
Welcome to SparkR!
Spark context is available as sc, SQL context is available as sqlContext