Hadoop spark-shell error: falling back to uploading libraries under SPARK_HOME


I am trying to connect spark-shell on Amazon EMR (Hadoop), but I keep getting the error below and don't know what is missing from my configuration or how to fix it:

spark.yarn.jars
spark.yarn.archive

spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/08/12 07:47:26 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/08/12 07:47:28 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
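The `spark.yarn.jars` message is only a warning: when neither property is set, Spark zips up everything under `$SPARK_HOME/jars` and uploads it to the cluster on every application start. One way to silence it (a sketch; the HDFS path is an assumption for illustration, not an EMR default) is to stage the jars on HDFS once and point `spark.yarn.jars` at them in `spark-defaults.conf`:

```
# Stage the Spark jars on HDFS once (path is illustrative)
hdfs dfs -mkdir -p /user/spark/jars
hdfs dfs -put $SPARK_HOME/jars/*.jar /user/spark/jars/

# Then in $SPARK_HOME/conf/spark-defaults.conf:
spark.yarn.jars    hdfs:///user/spark/jars/*.jar
```

This only speeds up application startup; it is unrelated to the errors below.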
Thanks.

Error 1

I am trying to run a very simple SQL query, such as:

val sqlDF = spark.sql("SELECT col1 FROM tabl1 limit 10")
sqlDF.show()
WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
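This warning usually means YARN cannot allocate any executors to the job: either no NodeManagers are registered, or the cluster has no free memory/cores. A quick check (a sketch, assuming default EMR settings) is to start the shell while explicitly asking for small resources, so the request fits even on a nearly full cluster:

```
spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar \
  --num-executors 1 --executor-memory 1g --executor-cores 1
```

If the query still hangs, check the YARN ResourceManager UI for registered nodes and available capacity.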

Error 2

Then I tried to run a simple Scala script, and I get the following:

scala.reflect.internal.Symbols$CyclicReference: illegal cyclic reference involving object Interface
        at scala.reflect.internal.Symbols$Symbol$$anonfun$info$3.apply(Symbols.scala:1502)
        at scala.reflect.internal.Symbols$Symbol$$anonfun$info$3.apply(Symbols.scala:1500)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:34)


It looks like the Spark UI did not start. Try launching spark-shell and check whether the Spark UI at
localhost:4040
is up and running.

Those look like warnings, not errors. What problem are you actually facing?
import org.apache.hadoop.io.Text
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
import com.amazonaws.services.dynamodbv2.model.AttributeValue
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat
import org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.io.LongWritable
import java.util.HashMap


// Hadoop job configuration for the EMR DynamoDB connector
val ddbConf = new JobConf(sc.hadoopConfiguration)
ddbConf.set("dynamodb.output.tableName", "tableDynamoDB")
ddbConf.set("dynamodb.throughput.write.percent", "0.5")
ddbConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")
ddbConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")


val genreRatingsCount = sqlContext.sql("SELECT col1 FROM table1 LIMIT 1")

// saveAsHadoopDataset is defined on pair RDDs, so convert the DataFrame to an
// RDD of (Text, DynamoDBItemWritable) pairs before writing
val ddbInsertFormattedRDD = genreRatingsCount.rdd.map(a => {
  val ddbMap = new HashMap[String, AttributeValue]()

  val col1 = new AttributeValue()
  col1.setS(a.get(0).toString)
  ddbMap.put("col1", col1)

  val item = new DynamoDBItemWritable()
  item.setItem(ddbMap)

  (new Text(""), item)
})

ddbInsertFormattedRDD.saveAsHadoopDataset(ddbConf)