WSO2 BAM Hive NoSuchObjectException error

Tags: hive, wso2, wso2bam

I have configured BAM 2.4.0 as described in the documentation. I am using MySQL.

When I try to run the drop table script from the BAM management console (as described in the "Changing the statistics database" section), I get the error below. Any ideas?

[2014-05-08 11:01:19,948] ERROR {hive.ql.metadata.Hive} -  NoSuchObjectException(message:default.APIFaultSummaryData table not found)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1222)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1217)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1217)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:734)
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:901)
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:843)
        at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:3127)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:250)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:129)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:62)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1351)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1126)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:934)
        at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:201)
        at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:187)
        at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.executeHiveQuery(HiveExecutorServiceImpl.java:569)
        at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:282)
        at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:189)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

Updated error

[2014-05-08 13:58:00,004]  INFO {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} -  Running script executor task for script am_stats_analyzer_503. [Thu May 08 13:58:00 CEST 2014]
Hive history file=/u01/app/wso2bam-2.4.0/tmp/hive/wso2-querylogs/hive_job_log_root_201405081356_596525563.txt
OK
OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
[main] DEBUG org.apache.hadoop.hive.conf.HiveConf  - Using hive-site.xml found on CLASSPATH at /u01/app/wso2bam-2.4.0/repository/conf/advanced/hive-site.xml
log4j:WARN No appenders could be found for logger (org.apache.axiom.util.stax.dialect.StAXDialectDetector).
log4j:WARN Please initialize the log4j system properly.
Execution log at: /u01/app/wso2bam-2.4.0/repository/logs//wso2carbon.log
[2014-05-08 13:58:02,423]  WARN {org.apache.hadoop.mapred.JobClient} -  Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
[2014-05-08 13:58:03,779] ERROR {org.wso2.carbon.bam.notification.task.NotificationDispatchTask} -  Error executing notification dispatch task: Cannot borrow client for TCP,1.33.33.127:7612,TCP,1.33.33.127:7712
org.wso2.carbon.databridge.agent.thrift.exception.AgentException: Cannot borrow client for TCP,1.33.33.127:7612,TCP,1.33.33.127:7712
        at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:58)
        at org.wso2.carbon.databridge.agent.thrift.DataPublisher.start(DataPublisher.java:273)
        at org.wso2.carbon.databridge.agent.thrift.DataPublisher.<init>(DataPublisher.java:211)
        at org.wso2.carbon.bam.notification.task.NotificationDispatchTask.initPublisherKS(NotificationDispatchTask.java:103)
        at org.wso2.carbon.bam.notification.task.NotificationDispatchTask.execute(NotificationDispatchTask.java:188)
        at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.thrift.transport.TTransportException: Could not connect to 1.33.33.127 on port 7712
        at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:212)
        at org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:166)
        at org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:90)
        at org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:48)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
        at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:50)
        ... 11 more
Caused by: java.net.ConnectException: Connection timed out
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
        at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:407)
        at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
        at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:208)
        ... 16 more

Uploaded image


If this is the first time you are configuring BAM with API Manager, you probably have not run the Hive queries yet, so there are no tables to drop. Continue with your testing and check that the rest runs without errors other than the "no such table" one.
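If you want the script to pass cleanly on a fresh installation, the drop statements can be guarded with IF EXISTS. A minimal sketch, using table names mentioned in this thread as examples (the exact names come from the script in your documentation / am_stats_analyzer script, so adjust them to your setup):

-- Sketch only: IF EXISTS keeps a missing table from raising NoSuchObjectException
DROP TABLE IF EXISTS APIFaultSummaryData;
DROP TABLE IF EXISTS APIRestSummaryData;
DROP TABLE IF EXISTS APIVersionUsageSummaryData;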

I see that the tables (API_xxxx) do not exist in the database. I had configured AM with BAM statistics set to false; when I restarted AM, the tables were created in MySQL, but only five of them, and they do not match the script shown in the documentation (drop table APIRestData; drop table APIRestSummaryData; drop table APIVersionUsageSummaryData; ...).

If you have not installed the AM stats toolbox, you will not see the "am_stats_analyzer_XX" script under Analytics > Scripts > List Scripts, and until that script is installed you will not get those tables.

But when I start BAM I see this error repeatedly; I have added it to the question as an update.

Try whether you can telnet to 1.33.33.127:7612 (TCP) and 1.33.33.127:7712 (TCP). These are the ports opened on the BAM server; you can see the details for them in the BAM startup log.
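As a quick check of what the Hive metastore actually contains before dropping anything, something like the following can be run from the same BAM Hive script editor (the wildcard pattern is only an example):

-- List every table registered in the Hive metastore
SHOW TABLES;
-- Or narrow the listing to the API summary tables
SHOW TABLES 'API*';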