Dynamic allocation for Hive on Spark (Hadoop)


I configured the Spark engine in hive-site.xml using:
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.master</name>
  <value>yarn-cluster</value>
</property>
<property>
  <name>spark.dynamicAllocation.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.executor.cores</name>
  <value>4</value>
</property>
<property>
  <name>spark.dynamicAllocation.initialExecutors</name>
  <value>1</value>
</property>
<property>
  <name>spark.dynamicAllocation.minExecutors</name>
  <value>1</value>
</property>
<property>
  <name>spark.dynamicAllocation.maxExecutors</name>
  <value>8</value>
</property>
<property>
  <name>spark.shuffle.service.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>3g</value>
</property>
<property>
  <name>spark.driver.memory</name>
  <value>3g</value>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
</property>
<property>
  <name>spark.io.compression.codec</name>
  <value>lzf</value>
</property>
<property>
  <name>spark.yarn.jar</name>
  <value>hdfs://VCluster1/user/spark/share/lib/spark-assembly-1.3.1-hadoop2.7.1.jar</value>
</property>
<property>
  <name>spark.kryo.referenceTracking</name>
  <value>false</value>
</property>
<property>
  <name>spark.kryo.classesToRegister</name>
  <value>org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable,org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch</value>
</property>
In yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
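Beyond the min/max executor counts above, dynamic allocation's scale-up and scale-down behavior is governed by timeout properties. A sketch of what such tuning could look like in the same hive-site.xml style (the property names are real Spark settings, but the values shown are illustrative assumptions, not taken from the question; Spark 1.3 expects them as plain integer seconds):

```xml
<!-- Illustrative values only: how long an executor may sit idle before
     being released, and how long tasks may queue before requesting more
     executors. -->
<property>
  <name>spark.dynamicAllocation.executorIdleTimeout</name>
  <value>60</value>
</property>
<property>
  <name>spark.dynamicAllocation.schedulerBacklogTimeout</name>
  <value>1</value>
</property>
```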

When I run Hive on Spark jobs, dynamic allocation does not work. Spark automatically sets spark.executor.instances to whatever number I configured for spark.dynamicAllocation.initialExecutors, and that number never changes. Can anyone help me with this problem?
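For context on the symptom described above: when spark.executor.instances is explicitly set, it takes precedence over dynamic allocation, so one thing worth ruling out (a diagnostic sketch, not a confirmed fix) is that this property is being set somewhere in the configuration chain. It can be inspected from the Hive CLI:

```sql
-- Print the effective value Hive will hand to Spark for this session.
-- If it comes back set, dynamic allocation is effectively disabled
-- and the executor count stays pinned at that value.
set spark.executor.instances;
```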


Thanks.

What version of Hive are you using? @ArunakiranNulu I am using Hive 1.2.1 and Spark 1.3.1.