WSO2: Error installing the HTTPD logs analysis sample
I have been working with BAM for quite a few days now, and suddenly I cannot even run a simple sample (the HTTPD logs analysis sample) as demonstrated in the BAM 2.0.1 documentation. I have not changed anything. The steps I am taking are:
- Start the BAM server on Linux
- Read access.log from the $WSO2_BAM_HOME/samples/httpd-logs/resources directory
- Install the "HTTPD Logs Analysis" toolbox from the management console
Now, during installation, a Hive script error appears:
Error while executing Hive script. Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Can you tell me what is going wrong? The error in the backend is:
ERROR {org.apache.hadoop.hive.ql.exec.Task} - FAILED: Error in metadata: MetaException(message:Unable to connect to the server org.apache.hadoop.hive.cassandra.CassandraException: unable to connect to server)
org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Unable to connect to the server org.apache.hadoop.hive.cassandra.CassandraException: unable to connect to server)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:546)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3479)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:225)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1334)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1125)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:933)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:201)
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:187)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:325)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:225)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: MetaException(message:Unable to connect to the server org.apache.hadoop.hive.cassandra.CassandraException: unable to connect to server)
at org.apache.hadoop.hive.cassandra.CassandraManager.openConnection(CassandraManager.java:118)
at org.apache.hadoop.hive.cassandra.CassandraStorageHandler.preCreateTable(CassandraStorageHandler.java:168)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:397)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:540)
... 16 more
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
[2013-01-16 20:03:01,464] ERROR {org.apache.hadoop.hive.ql.Driver} - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
[2013-01-16 20:03:01,470] ERROR {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl} - Error while executing Hive script.
Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:189)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:325)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:225)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2013-01-16 20:03:01,473] ERROR {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} - Error while executing script : httpd_logs_script_507
org.wso2.carbon.analytics.hive.exception.HiveExecutionException: Error while executing Hive script.Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl.execute(HiveExecutorServiceImpl.java:110)
at org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask.execute(HiveScriptExecutorTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:56)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2013-01-16 20:03:09,139] INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - 'admin@carbon.super [-1234]' logged in at [2013-01-16 20:03:09,139+0530]
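The steps above can be sketched as follows. This is a hedged outline, assuming a default BAM install with the sample paths from the README; the key point (noted again in the answers below) is that a Carbon port offset shifts the embedded Cassandra port away from its default of 9160:

```shell
# Start the BAM server (illustrative; path per the sample README):
#   sh $WSO2_BAM_HOME/bin/wso2server.sh
#
# If the server was started with a port offset (-DportOffset=n, or the
# <Offset> element in repository/conf/carbon.xml), every port shifts by n,
# including the embedded Cassandra port the Hive script connects to:
OFFSET=0                            # replace with your actual offset
CASSANDRA_PORT=$((9160 + OFFSET))   # 9160 is the default Cassandra RPC port
echo "Cassandra should be listening on port $CASSANDRA_PORT"
```

With no offset the script should report port 9160; if you started BAM with an offset, the Hive script's connection settings must point at the shifted port instead.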
Could you first try the HTTPD logs sample shipped with BAM 2.0.1? You can follow the instructions in /samples/httpd-logs/README.txt. It works fine; those are the instructions given in that file.
The root cause cannot be identified from "Error while executing Hive script. Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask" alone; the complete exception trace is needed. Could you post the full error trace from the server logs?
From the trace, Hive seems unable to connect to Cassandra. If the BAM server has been started with any port offset, the Cassandra port also changes to 9160 + offset.
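The connection details Hive uses live in the storage-handler properties of the script's table definitions. A hedged sketch of what such a definition looks like (the table name, columns, and column-family mapping here are illustrative; the actual ones come from the installed httpd_logs script):

```sql
-- Illustrative fragment only. If BAM runs with a port offset of n,
-- "cassandra.port" must be changed to 9160 + n.
CREATE EXTERNAL TABLE IF NOT EXISTS HttpLogEvent (
    key STRING, payload_log STRING
) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
WITH SERDEPROPERTIES (
    "cassandra.host" = "127.0.0.1",
    "cassandra.port" = "9160",
    "cassandra.ks.name" = "EVENT_KS",
    "cassandra.ks.username" = "admin",
    "cassandra.ks.password" = "admin",
    "cassandra.cf.name" = "org_wso2_bam_httpd_log",
    "cassandra.columns.mapping" = ":key,payload_log"
);
```

If any of these properties (host, port, or credentials) do not match the running Cassandra instance, the DDLTask fails with exactly the "unable to connect to server" MetaException shown in the trace.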
Also, you may need to drop the Hive tables mentioned in the httpd_logs script for the changes to take effect. This is because you have already run the script, so the table definitions for those table names are already stored, and the script will not try to create them again, since the tables inside the script are created with CREATE EXTERNAL TABLE IF NOT EXISTS.
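Because of the IF NOT EXISTS clause, the stale definition (with the old connection settings) is silently reused. A minimal sketch of clearing it before re-running the script (the table name is illustrative; use the names from your installed httpd_logs script):

```sql
-- The sample script creates its tables roughly like this, so an existing
-- definition is never replaced:
--   CREATE EXTERNAL TABLE IF NOT EXISTS HttpLogEvent (...) ...
-- Drop the stored definition first, then re-run the script:
DROP TABLE IF EXISTS HttpLogEvent;
```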
There seems to be a problem connecting to the server. Did you change the username or password? (The default username and password are admin and admin.) Also, if you want to create an existing Hive table with a different schema, you have to drop the existing table before creating it, as Sinthuja said, e.g.:
drop table table1;
Maninda, I am following the same steps. After installing the toolbox, the Hive script gives this error: ERROR {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} - Error while executing script : httpd_logs_script_507
Can you show the error message printed in the backend console?