Unable to start the Hive metastore service or the Hive shell after configuring MySQL for Hive

Tags: mysql, hadoop, hive, ubuntu-14.04, metastore

I know this question has been asked before, but those answers did not help. Here is my hive-site.xml:

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server </description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to mysql server </description>
  </property>
  <property>
    <name>javax.jdo.PersistenceManagerFactoryClass</name>
    <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
    <description>class implementing the jdo persistence</description>
  </property>

  <property>
    <name>javax.jdo.option.DetachAllOnCommit</name>
    <value>true</value>
    <description>detaches all objects from session so that they can be used after transaction is committed</description>
  </property>

  <property>
    <name>javax.jdo.option.NonTransactionalRead</name>
    <value>true</value>
    <description>reads outside of transactions</description>
  </property>
  <property>
    <name>javax.jdo.option.Multithreaded</name>
    <value>true</value>
    <description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
  </property>

  <property>
    <name>datanucleus.validateTables</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateConstraints</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>metadata store type</description>
  </property>

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>

  <property>
    <name>datanucleus.autoStartMechanismMode</name>
    <value>checked</value>
    <description>throw exception if metadata tables are incorrect</description>
  </property>

  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation. </description>
  </property>

  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
  </property>

  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>SOFT</value>
    <description>SOFT=soft reference based cache, WEAK=weak reference based cache.</description>
  </property>

  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>Name of the identifier factory to use when generating table/column names etc. 'datanucleus1' is used for backward  compatibility with DataNucleus v1</description>
  </property>


  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>false</value>
    <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it is best effort: if the client sets it to true and the server sets it to false, the client setting will be ignored.</description>
  </property>

  <property>
    <name>hive.metastore.event.listeners</name>
    <value></value>
    <description>list of comma separated listeners for metastore events.</description>
  </property>

  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value></value>
    <description>list of comma separated keys occurring in table properties which will get inherited to newly created partitions. *   implies all the keys will get inherited.</description>
  </property>

  <property>
    <name>hive.metadata.export.location</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported   to the current user's home directory on HDFS.</description>
  </property>

  <property>
    <name>hive.metadata.move.exported.metadata.to.trash</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the   dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.</description>
  </property>

  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value></value>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>

  <property>
    <name>hive.metastore.disallow.incompatible.col.type.change</name>
    <value></value>
    <description>If true (default is false), ALTER TABLE operations which change the type of   a column (say STRING) to an incompatible type (say MAP&lt;STRING, STRING&gt;) are disallowed.    RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the  datatypes can be converted from string to any type. The map is also serialized as  a string, which can be read as a string as well. However, with any binary   serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions  when subsequently trying to access old partitions.   Primitive types like INT, STRING, BIGINT, etc are compatible with each other and are   not blocked.  

  See HIVE-4409 for more details.
    </description>
  </property>

  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value></value>
    <description>list of comma separated listeners for the end of metastore functions.</description>
  </property>

  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0</value>
    <description>Duration after which events expire from events table (in seconds)</description>
  </property>

  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0</value>
    <description>Frequency at which the timer task runs to purge expired events in the metastore (in seconds).</description>
  </property>

  <property>
    <name>hive.metastore.connect.retries</name>
    <value>5</value>
    <description>Number of retries while opening a connection to metastore</description>
  </property>

  <property>
    <name>hive.metastore.failure.retries</name>
    <value>3</value>
    <description>Number of retries upon failure of Thrift metastore calls</description>
  </property>

  <property>
    <name>hive.metastore.client.connect.retry.delay</name>
    <value>1</value>
    <description>Number of seconds for the client to wait between consecutive connection attempts</description>
  </property>

  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>20</value>
    <description>MetaStore Client socket timeout in seconds</description>
  </property>

  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store and retrieve raw metadata objects such as tables and databases.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also increase memory requirements on the client side.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of table partitions that metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
    <description>Hive metastore Thrift server</description>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
    <description>The default number of reduce tasks per job.  Typically set to a prime close to the number of available hosts.  Ignored when mapred.job.tracker is "local". Hadoop set this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out what should be the number of reducers.
    </description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>false</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>

  <property>
    <name>hive.cli.print.current.db</name>
    <value>false</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>

  <property>
    <name>hive.cli.prompt</name>
    <value>hive</value>
    <description>Command line prompt configuration value. Other hiveconf can be used in this configuration value. Variable substitution will only be invoked at the Hive
    CLI startup.</description>
  </property>

  <property>
    <name>hive.test.mode</name>
    <value>false</value>
    <description>Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.</description>
  </property>

  <property>
    <name>hive.test.mode.prefix</name>
    <value>test_</value>
    <description>if Hive is running in test mode, prefixes the output table by this string</description>
  </property>


  <property>
    <name>hive.test.mode.samplefreq</name>
    <value>32</value>
    <description>if Hive is running in test mode and table is not bucketed, sampling frequency</description>
  </property>

  <property>
    <name>hive.test.mode.nosamplelist</name>
    <value></value>
    <description>if Hive is running in test mode, don't sample the above comma separated list of tables</description>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value></value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>
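One thing worth noting in the file above: `hive.metastore.uris` is defined twice, once as `thrift://localhost:9083` and once empty near the end. Hadoop-style configuration loading generally lets the last definition win, so the empty value may be overriding the Thrift URI. A quick, generic check for property names that appear more than once can be sketched as follows (the sample file and its path are illustrative; point the `grep` at your real hive-site.xml):

```shell
# Sketch: list <name> entries that occur more than once in a hive-site.xml.
# Demonstrated on an inline sample file; replace the path with your own config.
cat > /tmp/sample-site.xml <<'EOF'
<property><name>hive.metastore.uris</name><value>thrift://localhost:9083</value></property>
<property><name>hive.metastore.uris</name><value></value></property>
<property><name>hive.metastore.warehouse.dir</name><value>/user/hive/warehouse</value></property>
EOF

# Extract every property name, then print only the duplicated ones.
grep -o '<name>[^<]*</name>' /tmp/sample-site.xml | sort | uniq -d
# prints: <name>hive.metastore.uris</name>
```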
I have spent a lot of time configuring MySQL for Hive, and every time I run into an error I cannot tell what went wrong, as you can see.
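For reference, errors at metastore startup are often caused by the MySQL side rather than hive-site.xml itself: the database or user may not exist, or the user may lack privileges. A sketch of the MySQL-side setup that the configuration above assumes (database, user, and password names are taken from the hive-site.xml values; adjust them to your installation) looks like:

```sql
-- Sketch of the MySQL-side setup implied by the hive-site.xml above.
-- Names (metastore_db, hiveuser, hivepassword) come from that config; adjust as needed.
CREATE DATABASE IF NOT EXISTS metastore_db;
CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON metastore_db.* TO 'hiveuser'@'localhost';
FLUSH PRIVILEGES;
```

If any of these are missing, the metastore typically fails with a JDBC connection or access-denied error in the Hive logs.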

The hive-site.xml configuration can be found at

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server </description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to mysql server </description>
  </property>
  <property>
    <name>javax.jdo.PersistenceManagerFactoryClass</name>
    <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
    <description>class implementing the jdo persistence</description>
  </property>

  <property>
    <name>javax.jdo.option.DetachAllOnCommit</name>
    <value>true</value>
    <description>detaches all objects from session so that they can be used after transaction is committed</description>
  </property>

  <property>
    <name>javax.jdo.option.NonTransactionalRead</name>
    <value>true</value>
    <description>reads outside of transactions</description>
  </property>
  <property>
    <name>javax.jdo.option.Multithreaded</name>
    <value>true</value>
    <description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
  </property>

  <property>
    <name>datanucleus.validateTables</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateConstraints</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>metadata store type</description>
  </property>

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>

  <property>
    <name>datanucleus.autoStartMechanismMode</name>
    <value>checked</value>
    <description>throw exception if metadata tables are incorrect</description>
  </property>

  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation. </description>
  </property>

  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
  </property>

  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>SOFT</value>
    <description>SOFT=soft reference based cache, WEAK=weak reference based cache.</description>
  </property>

  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>Name of the identifier factory to use when generating table/column names etc. 'datanucleus1' is used for backward  compatibility with DataNucleus v1</description>
  </property>


  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>false</value>
    <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further  note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored.</description>
  </property>

  <property>
    <name>hive.metastore.event.listeners</name>
    <value></value>
    <description>list of comma separated listeners for metastore events.</description>
  </property>

  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value></value>
    <description>list of comma separated keys occurring in table properties which will get inherited to newly created partitions. *   implies all the keys will get inherited.</description>
  </property>

  <property>
    <name>hive.metadata.export.location</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported   to the current user's home directory on HDFS.</description>
  </property>

  <property>
    <name>hive.metadata.move.exported.metadata.to.trash</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the   dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.</description>
  </property>

  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value></value>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>

  <property>
    <name>hive.metastore.disallow.incompatible.col.type.change</name>
    <value></value>
    <description>If true (default is false), ALTER TABLE operations which change the type of   a column (say STRING) to an incompatible type (say MAP&lt;STRING, STRING&gt;) are disallowed.    RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the  datatypes can be converted from string to any type. The map is also serialized as  a string, which can be read as a string as well. However, with any binary   serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions  when subsequently trying to access old partitions.   Primitive types like INT, STRING, BIGINT, etc are compatible with each other and are   not blocked.  

  See HIVE-4409 for more details.
    </description>
  </property>

  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value></value>
    <description>list of comma separated listeners for the end of metastore functions.</description>
  </property>

  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0</value>
    <description>Duration after which events expire from events table (in seconds)</description>
  </property>

  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0</value>
    <description>Frequency at which timer task runs to purge expired events in metastore(in seconds).</description>
  </property>

  <property>
    <name>hive.metastore.connect.retries</name>
    <value>5</value>
    <description>Number of retries while opening a connection to metastore</description>
  </property>

  <property>
    <name>hive.metastore.failure.retries</name>
    <value>3</value>
    <description>Number of retries upon failure of Thrift metastore calls</description>
  </property>

  <property>
    <name>hive.metastore.client.connect.retry.delay</name>
    <value>1</value>
    <description>Number of seconds for the client to wait between consecutive connection attempts</description>
  </property>

  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>20</value>
    <description>MetaStore Client socket timeout in seconds</description>
  </property>

  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store and retrieve raw metadata objects such as tables and databases.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. The higher the number, the fewer round trips to the Hive metastore server are needed, but memory requirements on the client side may also be higher.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of table partitions that metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
    <description>Hive metastore Thrift server</description>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
    <description>The default number of reduce tasks per job. Typically set to a prime close to the number of available hosts. Ignored when mapred.job.tracker is "local". Hadoop sets this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out the number of reducers.
    </description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>false</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>

  <property>
    <name>hive.cli.print.current.db</name>
    <value>false</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>

  <property>
    <name>hive.cli.prompt</name>
    <value>hive</value>
    <description>Command line prompt configuration value. Other hiveconf can be used in this configuration value. Variable substitution will only be invoked at the Hive
    CLI startup.</description>
  </property>

  <property>
    <name>hive.test.mode</name>
    <value>false</value>
    <description>Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.</description>
  </property>

  <property>
    <name>hive.test.mode.prefix</name>
    <value>test_</value>
    <description>if Hive is running in test mode, prefixes the output table by this string</description>
  </property>


  <property>
    <name>hive.test.mode.samplefreq</name>
    <value>32</value>
    <description>if Hive is running in test mode and table is not bucketed, sampling frequency</description>
  </property>

  <property>
    <name>hive.test.mode.nosamplelist</name>
    <value></value>
    <description>if Hive is running in test mode, don't sample the above comma separated list of tables</description>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value></value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>
That is the structure of the file.
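One thing worth noticing in the file above: hive.metastore.uris is declared twice, once as thrift://localhost:9083 and once with an empty value. In Hadoop-style configuration files the last occurrence typically wins, so the empty value would shadow the Thrift URI. A small sketch (not part of the original post; the function name is mine) to flag such duplicates:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def duplicate_properties(hive_site_xml):
    """Return names of <property> entries that appear more than once."""
    root = ET.fromstring(hive_site_xml)
    counts = Counter(prop.findtext("name") for prop in root.iter("property"))
    return sorted(name for name, n in counts.items() if n > 1)
```

Run over the hive-site.xml above, this would report hive.metastore.uris; removing one of the two entries (keeping the thrift://localhost:9083 one if a remote metastore is intended) is a reasonable first fix to try.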

This is what is causing the problem... I hope this helps solve it.
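When the metastore service or the Hive shell refuses to start, one quick diagnostic (my own sketch, not from the original post) is to check whether anything is actually listening on the Thrift port named in hive.metastore.uris, 9083 in the file above:

```python
import socket

def thrift_port_open(host="localhost", port=9083, timeout=2.0):
    """Return True if a TCP listener answers on the metastore Thrift port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False while hive.metastore.uris points at thrift://localhost:9083, the shell will hang or fail because the metastore service never came up; the next step is to check the metastore log for the underlying MySQL connection error.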

I followed these links.

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server </description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to mysql server </description>
  </property>
  <property>
    <name>javax.jdo.PersistenceManagerFactoryClass</name>
    <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
    <description>class implementing the jdo persistence</description>
  </property>

  <property>
    <name>javax.jdo.option.DetachAllOnCommit</name>
    <value>true</value>
    <description>detaches all objects from session so that they can be used after transaction is committed</description>
  </property>

  <property>
    <name>javax.jdo.option.NonTransactionalRead</name>
    <value>true</value>
    <description>reads outside of transactions</description>
  </property>
  <property>
    <name>javax.jdo.option.Multithreaded</name>
    <value>true</value>
    <description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
  </property>

  <property>
    <name>datanucleus.validateTables</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateConstraints</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>metadata store type</description>
  </property>

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>

  <property>
    <name>datanucleus.autoStartMechanismMode</name>
    <value>checked</value>
    <description>throw exception if metadata tables are incorrect</description>
  </property>

  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation. </description>
  </property>

  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
  </property>

  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>SOFT</value>
    <description>SOFT=soft reference based cache, WEAK=weak reference based cache.</description>
  </property>

  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>Name of the identifier factory to use when generating table/column names etc. 'datanucleus1' is used for backward  compatibility with DataNucleus v1</description>
  </property>


  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>false</value>
    <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it is best-effort: if the client sets it to true and the server sets it to false, the client setting will be ignored.</description>
  </property>

  <property>
    <name>hive.metastore.event.listeners</name>
    <value></value>
    <description>list of comma separated listeners for metastore events.</description>
  </property>

  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value></value>
    <description>list of comma separated keys occurring in table properties which will get inherited to newly created partitions. *   implies all the keys will get inherited.</description>
  </property>

  <property>
    <name>hive.metadata.export.location</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported   to the current user's home directory on HDFS.</description>
  </property>

  <property>
    <name>hive.metadata.move.exported.metadata.to.trash</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the   dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.</description>
  </property>

  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value></value>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>

  <property>
    <name>hive.metastore.disallow.incompatible.col.type.change</name>
    <value></value>
    <description>If true (default is false), ALTER TABLE operations which change the type of a column (say STRING) to an incompatible type (say MAP&lt;STRING, STRING&gt;) are disallowed. The RCFile default SerDe (ColumnarSerDe) serializes values in such a way that the datatypes can be converted from string to any type. The map is also serialized as a string, which can be read as a string as well. However, with any binary serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions when subsequently trying to access old partitions. Primitive types like INT, STRING, BIGINT, etc. are compatible with each other and are not blocked. See HIVE-4409 for more details.</description>
  </property>

  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value></value>
    <description>list of comma separated listeners for the end of metastore functions.</description>
  </property>

  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0</value>
    <description>Duration after which events expire from events table (in seconds)</description>
  </property>

  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0</value>
    <description>Frequency at which the timer task runs to purge expired events in the metastore (in seconds).</description>
  </property>

  <property>
    <name>hive.metastore.connect.retries</name>
    <value>5</value>
    <description>Number of retries while opening a connection to metastore</description>
  </property>

  <property>
    <name>hive.metastore.failure.retries</name>
    <value>3</value>
    <description>Number of retries upon failure of Thrift metastore calls</description>
  </property>

  <property>
    <name>hive.metastore.client.connect.retry.delay</name>
    <value>1</value>
    <description>Number of seconds for the client to wait between consecutive connection attempts</description>
  </property>

  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>20</value>
    <description>MetaStore Client socket timeout in seconds</description>
  </property>

  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store and retrieve raw metadata objects such as tables and databases.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also cause a higher memory requirement on the client side.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of table partitions that metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
    <description>Hive metastore Thrift server</description>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
    <description>The default number of reduce tasks per job. Typically set to a prime close to the number of available hosts. Ignored when mapred.job.tracker is "local". Hadoop sets this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out the number of reducers.</description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>false</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>

  <property>
    <name>hive.cli.print.current.db</name>
    <value>false</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>

  <property>
    <name>hive.cli.prompt</name>
    <value>hive</value>
    <description>Command line prompt configuration value. Other hiveconf can be used in this configuration value. Variable substitution will only be invoked at the Hive
    CLI startup.</description>
  </property>

  <property>
    <name>hive.test.mode</name>
    <value>false</value>
    <description>Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.</description>
  </property>

  <property>
    <name>hive.test.mode.prefix</name>
    <value>test_</value>
    <description>if Hive is running in test mode, prefixes the output table by this string</description>
  </property>


  <property>
    <name>hive.test.mode.samplefreq</name>
    <value>32</value>
    <description>if Hive is running in test mode and table is not bucketed, sampling frequency</description>
  </property>

  <property>
    <name>hive.test.mode.nosamplelist</name>
    <value></value>
    <description>if Hive is running in test mode, don't sample the above comma separated list of tables</description>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value></value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>
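Note that the file above defines `hive.metastore.uris` twice: once as `thrift://localhost:9083` and again near the end with an empty value. A minimal sketch of how to spot this, assuming Hadoop-style configuration loading where the last definition of a repeated property in a file wins (the XML string is a trimmed excerpt of the file above, not the full configuration):

```python
import xml.etree.ElementTree as ET

# Trimmed excerpt of the hive-site.xml above, keeping only the
# properties relevant to the duplicate-key issue.
HIVE_SITE = """<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value></value>
  </property>
</configuration>"""

def effective_props(xml_text):
    """Collect property values in document order; a repeated name
    overwrites the earlier value, mirroring last-definition-wins."""
    props = {}
    for prop in ET.fromstring(xml_text).iter("property"):
        name = prop.findtext("name")
        props[name] = prop.findtext("value") or ""
    return props

p = effective_props(HIVE_SITE)
# The later empty <value></value> silently overrides thrift://localhost:9083,
# which would leave clients without a metastore URI.
print(repr(p["hive.metastore.uris"]))
```

If that last-wins behavior applies here, removing the empty trailing `hive.metastore.uris` property (or giving it the Thrift URI) is worth trying before digging into MySQL connectivity.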

  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>metadata store type</description>
  </property>

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>

  <property>
    <name>datanucleus.autoStartMechanismMode</name>
    <value>checked</value>
    <description>throw exception if metadata tables are incorrect</description>
  </property>

  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation. </description>
  </property>

  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
  </property>

  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>SOFT</value>
    <description>SOFT=soft reference based cache, WEAK=weak reference based cache.</description>
  </property>

  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>Name of the identifier factory to use when generating table/column names etc. 'datanucleus1' is used for backward  compatibility with DataNucleus v1</description>
  </property>


  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>false</value>
    <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further  note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored.</description>
  </property>

  <property>
    <name>hive.metastore.event.listeners</name>
    <value></value>
    <description>list of comma separated listeners for metastore events.</description>
  </property>

  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value></value>
    <description>list of comma separated keys occurring in table properties which will get inherited to newly created partitions. *   implies all the keys will get inherited.</description>
  </property>

  <property>
    <name>hive.metadata.export.location</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported   to the current user's home directory on HDFS.</description>
  </property>

  <property>
    <name>hive.metadata.move.exported.metadata.to.trash</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the   dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.</description>
  </property>

  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value></value>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>

  <property>
    <name>hive.metastore.disallow.incompatible.col.type.change</name>
    <value></value>
    <description>If true (default is false), ALTER TABLE operations which change the type of   a column (say STRING) to an incompatible type (say MAP&lt;STRING, STRING&gt;) are disallowed.    RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the  datatypes can be converted from string to any type. The map is also serialized as  a string, which can be read as a string as well. However, with any binary   serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions  when subsequently trying to access old partitions.   Primitive types like INT, STRING, BIGINT, etc are compatible with each other and are   not blocked.  

  See HIVE-4409 for more details.
    </description>
  </property>

  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value></value>
    <description>list of comma separated listeners for the end of metastore functions.</description>
  </property>

  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0</value>
    <description>Duration after which events expire from events table (in seconds)</description>
  </property>

  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0</value>
    <description>Frequency at which timer task runs to purge expired events in metastore(in seconds).</description>
  </property>

  <property>
    <name>hive.metastore.connect.retries</name>
    <value>5</value>
    <description>Number of retries while opening a connection to metastore</description>
  </property>

  <property>
    <name>hive.metastore.failure.retries</name>
    <value>3</value>
    <description>Number of retries upon failure of Thrift metastore calls</description>
  </property>

  <property>
    <name>hive.metastore.client.connect.retry.delay</name>
    <value>1</value>
    <description>Number of seconds for the client to wait between consecutive connection attempts</description>
  </property>

  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>20</value>
    <description>MetaStore Client socket timeout in seconds</description>
  </property>

  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>Name of the class that implements org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store   and retrieval of raw metadata objects such as table, database</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>Maximum number of objects (tables/partitions) can be retrieved from metastore in one batch. The higher the number, the less the number of round trips is needed to the Hive metastore server, but it may also cause higher memory requirement at the   client side.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of table partitions that metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
    <description>Hive metastore Thrift server</description>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
    <description>The default number of reduce tasks per job.  Typically set to a prime close to the number of available hosts.  Ignored when mapred.job.tracker is "local". Hadoop set this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out what should be the number of reducers.
    </description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>false</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>

  <property>
    <name>hive.cli.print.current.db</name>
    <value>false</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>

  <property>
    <name>hive.cli.prompt</name>
    <value>hive</value>
    <description>Command line prompt configuration value. Other hiveconf can be used in this configuration value. Variable substitution will only be invoked at the Hive
    CLI startup.</description>
  </property>

  <property>
    <name>hive.test.mode</name>
    <value>false</value>
    <description>Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.</description>
  </property>

  <property>
    <name>hive.test.mode.prefix</name>
    <value>test_</value>
    <description>if Hive is running in test mode, prefixes the output table by this string</description>
  </property>


  <property>
    <name>hive.test.mode.samplefreq</name>
    <value>32</value>
    <description>if Hive is running in test mode and table is not bucketed, sampling frequency</description>
  </property>

  <property>
    <name>hive.test.mode.nosamplelist</name>
    <value></value>
    <description>if Hive is running in test mode, don't sample the above comma separated list of tables</description>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value></value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server </description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to mysql server </description>
  </property>
  <property>
    <name>javax.jdo.PersistenceManagerFactoryClass</name>
    <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
    <description>class implementing the jdo persistence</description>
  </property>

  <property>
    <name>javax.jdo.option.DetachAllOnCommit</name>
    <value>true</value>
    <description>detaches all objects from session so that they can be used after transaction is committed</description>
  </property>

  <property>
    <name>javax.jdo.option.NonTransactionalRead</name>
    <value>true</value>
    <description>reads outside of transactions</description>
  </property>
  <property>
    <name>javax.jdo.option.Multithreaded</name>
    <value>true</value>
    <description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
  </property>

  <property>
    <name>datanucleus.validateTables</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateConstraints</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>metadata store type</description>
  </property>

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>

  <property>
    <name>datanucleus.autoStartMechanismMode</name>
    <value>checked</value>
    <description>throw exception if metadata tables are incorrect</description>
  </property>

  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation. </description>
  </property>

  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
  </property>

  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>SOFT</value>
    <description>SOFT=soft reference based cache, WEAK=weak reference based cache.</description>
  </property>

  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>Name of the identifier factory to use when generating table/column names etc. 'datanucleus1' is used for backward  compatibility with DataNucleus v1</description>
  </property>


  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>false</value>
    <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further  note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored.</description>
  </property>

  <property>
    <name>hive.metastore.event.listeners</name>
    <value></value>
    <description>list of comma separated listeners for metastore events.</description>
  </property>

  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value></value>
    <description>list of comma separated keys occurring in table properties which will get inherited to newly created partitions. *   implies all the keys will get inherited.</description>
  </property>

  <property>
    <name>hive.metadata.export.location</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported   to the current user's home directory on HDFS.</description>
  </property>

  <property>
    <name>hive.metadata.move.exported.metadata.to.trash</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the   dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.</description>
  </property>

  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value></value>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>

  <property>
    <name>hive.metastore.disallow.incompatible.col.type.change</name>
    <value></value>
    <description>If true (default is false), ALTER TABLE operations which change the type of   a column (say STRING) to an incompatible type (say MAP&lt;STRING, STRING&gt;) are disallowed.    RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the  datatypes can be converted from string to any type. The map is also serialized as  a string, which can be read as a string as well. However, with any binary   serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions  when subsequently trying to access old partitions.   Primitive types like INT, STRING, BIGINT, etc are compatible with each other and are   not blocked.  

  See HIVE-4409 for more details.
    </description>
  </property>

  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value></value>
    <description>list of comma separated listeners for the end of metastore functions.</description>
  </property>

  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0</value>
    <description>Duration after which events expire from events table (in seconds)</description>
  </property>

  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0</value>
    <description>Frequency at which timer task runs to purge expired events in metastore(in seconds).</description>
  </property>

  <property>
    <name>hive.metastore.connect.retries</name>
    <value>5</value>
    <description>Number of retries while opening a connection to metastore</description>
  </property>

  <property>
    <name>hive.metastore.failure.retries</name>
    <value>3</value>
    <description>Number of retries upon failure of Thrift metastore calls</description>
  </property>

  <property>
    <name>hive.metastore.client.connect.retry.delay</name>
    <value>1</value>
    <description>Number of seconds for the client to wait between consecutive connection attempts</description>
  </property>

  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>20</value>
    <description>MetaStore Client socket timeout in seconds</description>
  </property>

  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>Name of the class that implements org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store   and retrieval of raw metadata objects such as table, database</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>Maximum number of objects (tables/partitions) can be retrieved from metastore in one batch. The higher the number, the less the number of round trips is needed to the Hive metastore server, but it may also cause higher memory requirement at the   client side.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of table partitions that metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
    <description>Hive metastore Thrift server</description>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
    <description>The default number of reduce tasks per job.  Typically set to a prime close to the number of available hosts.  Ignored when mapred.job.tracker is "local". Hadoop set this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out what should be the number of reducers.
    </description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>false</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>

  <property>
    <name>hive.cli.print.current.db</name>
    <value>false</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>

  <property>
    <name>hive.cli.prompt</name>
    <value>hive</value>
    <description>Command line prompt configuration value. Other hiveconf can be used in this configuration value. Variable substitution will only be invoked at the Hive
    CLI startup.</description>
  </property>

  <property>
    <name>hive.test.mode</name>
    <value>false</value>
    <description>Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.</description>
  </property>

  <property>
    <name>hive.test.mode.prefix</name>
    <value>test_</value>
    <description>if Hive is running in test mode, prefixes the output table by this string</description>
  </property>


  <property>
    <name>hive.test.mode.samplefreq</name>
    <value>32</value>
    <description>if Hive is running in test mode and table is not bucketed, sampling frequency</description>
  </property>

  <property>
    <name>hive.test.mode.nosamplelist</name>
    <value></value>
    <description>if Hive is running in test mode, don't sample the above comma separated list of tables</description>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value></value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>

The problem was in the hive-site.xml file; the configuration below fixed it for me.
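Besides the hive-site.xml settings below, the metastore schema has to exist in MySQL and the metastore service has to be running before the Hive shell will start. A sketch of the usual sequence, assuming HIVE_HOME is set, the MySQL Connector/J jar has been copied into $HIVE_HOME/lib, and the hiveuser/hivepassword account from the configuration exists (exact commands depend on the Hive version; schematool ships with Hive 0.14+):

```shell
# Create the MySQL user referenced by javax.jdo.option.ConnectionUserName
# (skip if it already exists):
mysql -u root -p -e "CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY 'hivepassword'; \
                     GRANT ALL PRIVILEGES ON metastore_db.* TO 'hiveuser'@'localhost'; \
                     FLUSH PRIVILEGES;"

# Initialize the metastore schema once; this also verifies the JDBC URL,
# driver class, user name, and password from hive-site.xml:
"$HIVE_HOME"/bin/schematool -dbType mysql -initSchema

# Start the metastore service (it listens on the thrift://localhost:9083
# address given in hive.metastore.uris), then start the Hive shell:
"$HIVE_HOME"/bin/hive --service metastore &
"$HIVE_HOME"/bin/hive
```

If the shell still fails to start, the console output of the metastore service usually shows the underlying JDBC error.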
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server </description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to mysql server </description>
  </property>
  <property>
    <name>javax.jdo.PersistenceManagerFactoryClass</name>
    <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
    <description>class implementing the jdo persistence</description>
  </property>

  <property>
    <name>javax.jdo.option.DetachAllOnCommit</name>
    <value>true</value>
    <description>detaches all objects from session so that they can be used after transaction is committed</description>
  </property>

  <property>
    <name>javax.jdo.option.NonTransactionalRead</name>
    <value>true</value>
    <description>reads outside of transactions</description>
  </property>
  <property>
    <name>javax.jdo.option.Multithreaded</name>
    <value>true</value>
    <description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
  </property>

  <property>
    <name>datanucleus.validateTables</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.validateConstraints</name>
    <value>false</value>
    <description>validates existing schema against code. turn this on if you want to verify existing schema </description>
  </property>

  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>metadata store type</description>
  </property>

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>

  <property>
    <name>datanucleus.autoStartMechanismMode</name>
    <value>checked</value>
    <description>throw exception if metadata tables are incorrect</description>
  </property>

  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation. </description>
  </property>

  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
  </property>

  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>SOFT</value>
    <description>SOFT=soft reference based cache, WEAK=weak reference based cache.</description>
  </property>

  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>Name of the identifier factory to use when generating table/column names etc. 'datanucleus1' is used for backward  compatibility with DataNucleus v1</description>
  </property>


  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>false</value>
    <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it is best effort: if the client sets it to true and the server sets it to false, the client setting will be ignored.</description>
  </property>

  <property>
    <name>hive.metastore.event.listeners</name>
    <value></value>
    <description>list of comma separated listeners for metastore events.</description>
  </property>

  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value></value>
    <description>list of comma separated keys occurring in table properties which will get inherited to newly created partitions. *   implies all the keys will get inherited.</description>
  </property>

  <property>
    <name>hive.metadata.export.location</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported   to the current user's home directory on HDFS.</description>
  </property>

  <property>
    <name>hive.metadata.move.exported.metadata.to.trash</name>
    <value></value>
    <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the   dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.</description>
  </property>

  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value></value>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>

  <property>
    <name>hive.metastore.disallow.incompatible.col.type.change</name>
    <value></value>
    <description>If true (default is false), ALTER TABLE operations which change the type of a column (say STRING) to an incompatible type (say MAP&lt;STRING, STRING&gt;) are disallowed. The RCFile default SerDe (ColumnarSerDe) serializes values in such a way that the datatypes can be converted from string to any type. The map is also serialized as a string, which can be read as a string as well. However, with any binary serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions when subsequently trying to access old partitions. Primitive types like INT, STRING, BIGINT, etc. are compatible with each other and are not blocked.

  See HIVE-4409 for more details.
    </description>
  </property>

  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value></value>
    <description>list of comma separated listeners for the end of metastore functions.</description>
  </property>

  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0</value>
    <description>Duration after which events expire from events table (in seconds)</description>
  </property>

  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0</value>
    <description>Frequency at which timer task runs to purge expired events in metastore(in seconds).</description>
  </property>

  <property>
    <name>hive.metastore.connect.retries</name>
    <value>5</value>
    <description>Number of retries while opening a connection to metastore</description>
  </property>

  <property>
    <name>hive.metastore.failure.retries</name>
    <value>3</value>
    <description>Number of retries upon failure of Thrift metastore calls</description>
  </property>

  <property>
    <name>hive.metastore.client.connect.retry.delay</name>
    <value>1</value>
    <description>Number of seconds for the client to wait between consecutive connection attempts</description>
  </property>

  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>20</value>
    <description>MetaStore Client socket timeout in seconds</description>
  </property>

  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store and retrieve raw metadata objects such as tables and databases.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. The higher the number, the fewer round trips to the Hive metastore server are needed, but it may also cause a higher memory requirement on the client side.</description>
  </property>

  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of table partitions that metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
    <description>Hive metastore Thrift server</description>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
    <description>The default number of reduce tasks per job. Typically set to a prime close to the number of available hosts. Ignored when mapred.job.tracker is "local". Hadoop sets this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out the number of reducers.
    </description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>false</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>

  <property>
    <name>hive.cli.print.current.db</name>
    <value>false</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>

  <property>
    <name>hive.cli.prompt</name>
    <value>hive</value>
    <description>Command-line prompt configuration value. Other hiveconf variables can be used in this value. Variable substitution is only invoked at Hive CLI startup.</description>
  </property>

  <property>
    <name>hive.test.mode</name>
    <value>false</value>
    <description>Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.</description>
  </property>

  <property>
    <name>hive.test.mode.prefix</name>
    <value>test_</value>
    <description>if Hive is running in test mode, prefixes the output table by this string</description>
  </property>


  <property>
    <name>hive.test.mode.samplefreq</name>
    <value>32</value>
    <description>if Hive is running in test mode and table is not bucketed, sampling frequency</description>
  </property>

  <property>
    <name>hive.test.mode.nosamplelist</name>
    <value></value>
    <description>if Hive is running in test mode, don't sample the above comma separated list of tables</description>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value></value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>
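One easy-to-miss issue in a hive-site.xml this long is a property defined twice: Hive keeps the last definition, so a later empty value silently overrides an earlier one (note that hive.metastore.uris appears above both with thrift://localhost:9083 and with an empty value). A minimal sketch of a duplicate-key check, assuming the file is well-formed XML (the inline SAMPLE string is a hypothetical stand-in; point the parser at your real hive-site.xml instead):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical stand-in for a hive-site.xml with a duplicated property.
# In practice, read the real file: ET.parse("/path/to/hive-site.xml").getroot()
SAMPLE = """<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value></value>
  </property>
</configuration>"""

def duplicate_properties(xml_text):
    """Return the names of <property> entries that occur more than once."""
    root = ET.fromstring(xml_text)
    names = [prop.findtext("name") for prop in root.findall("property")]
    return [name for name, count in Counter(names).items() if count > 1]

print(duplicate_properties(SAMPLE))  # → ['hive.metastore.uris']
```

Any name this prints is worth resolving by hand, since only the last occurrence takes effect.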