WSO2 AM 1.9.1 + BAM 2.5 on Linux issue


I have installed WSO2 AM 1.9.1 and WSO2 BAM 2.5 on the same Linux machine and configured AM and BAM as described in the documentation. When I start WSO2 BAM, the script am_stats_analyzer runs again and again without reporting any errors, yet the WSO2 AM side still shows that statistics are not configured.

The Java version is Oracle JDK 1.7.0_80, running as the root user. Below are the logs, which are printed over and over. Please help.

Logs
[2015-12-21 02:22:00,005]  INFO {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} -  Running script executor task for script **am_stats_analyzer**.
[Mon Dec 21 02:22:00 CST 2015] Hive history file=/home/wso2bam-2.5.0/tmp/hive/root-querylogs/hive_job_log_root_201512210222_2145444007.txt
OK
OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
log4j:WARN No appenders could be found for logger (org.apache.axiom.util.stax.dialect.StAXDialectDetector).
log4j:WARN Please initialize the log4j system properly.
Execution log at: /home/wso2bam-2.5.0/repository/logs//wso2carbon.log
[2015-12-21 02:22:07,801]  WARN {org.apache.hadoop.mapred.JobClient} -  Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2015-12-21 02:22:10,999 null map = 0%,  reduce = 0%
2015-12-21 02:22:14,001 null map = 100%,  reduce = 0%
2015-12-21 02:22:20,004 null map = 100%,  reduce = 100%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
OK
OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
log4j:WARN No appenders could be found for logger (org.apache.axiom.util.stax.dialect.StAXDialectDetector).
log4j:WARN Please initialize the log4j system properly.
Execution log at: /home/wso2bam-2.5.0/repository/logs//wso2carbon.log
[2015-12-21 02:22:24,419]  WARN {org.apache.hadoop.mapred.JobClient} -  Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2015-12-21 02:22:27,574 null map = 0%,  reduce = 0%
2015-12-21 02:22:30,576 null map = 100%,  reduce = 0%
2015-12-21 02:22:36,579 null map = 100%,  reduce = 100%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
OK
OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
log4j:WARN No appenders could be found for logger (org.apache.axiom.util.stax.dialect.StAXDialectDetector).
log4j:WARN Please initialize the log4j system properly.
Execution log at: /home/wso2bam-2.5.0/repository/logs//wso2carbon.log
[2015-12-21 02:22:40,883]  WARN {org.apache.hadoop.mapred.JobClient} -  Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2015-12-21 02:22:43,945 null map = 0%,  reduce = 0%
2015-12-21 02:22:46,947 null map = 100%,  reduce = 0%
2015-12-21 02:22:52,950 null map = 100%,  reduce = 100%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
OK
OK
OK
Total MapReduce jobs = 1

There can be several reasons why stats are not displayed in the APIM publisher portal. Here, the stats tables are not being populated by the Hive script, which is scheduled to run every two minutes (CRON expression 0 0/2 * * * in am_stats_analyzer), so no data is inserted into the corresponding tables. You therefore have to invoke the APIs (with a curl command or the Advanced REST Client); once the APIs receive requests, the stats values are inserted into the tables created under the TestStatsDB schema and the publisher starts showing statistics.
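For example, a minimal way to generate some traffic, assuming the default gateway HTTPS port (8243); the API context, version, resource and access token below are placeholders, so substitute the values for your own published API and subscription:

    # Placeholder values: replace host/port, API context/version/resource and the OAuth token
    curl -k -X GET "https://localhost:8243/myapi/1.0.0/resource" \
         -H "Authorization: Bearer <access-token>"

After a few such requests, wait for the next scheduled run of am_stats_analyzer (at most two minutes); the summary tables under the TestStatsDB schema should then start receiving rows, and the publisher statistics pages should begin to display data.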
