
Java: Unable to get events from Log4J into Flume

I am trying to get events from Log4J 1.x into HDFS through Flume, using the Log4J Flume appender. I created two appenders, FILE and flume. It works for the FILE appender, but with the flume appender the program just hangs in Eclipse. Flume itself works fine: I am able to send messages to the Avro source with the avro-client and see them in HDFS. It just does not integrate with Log4J 1.x.
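
For reference, that standalone check can be done with Flume's bundled avro-client; a sketch, where the events file path is a placeholder:

flume-ng avro-client --host localhost --port 41414 --filename /tmp/test-events.txt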

I do not see any exceptions, apart from the following in log.out:

Batch size string = null
Using Netty bootstrap options: {tcpNoDelay=true, connectTimeoutMillis=20000}
Connecting to localhost/127.0.0.1:41414
[id: 0x52a00770] OPEN
And from the Flume console:

2013-10-23 14:32:32,145 (pool-5-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 => /127.0.0.1:41414] OPEN
2013-10-23 14:32:32,148 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 => /127.0.0.1:41414] BOUND: /127.0.0.1:41414
2013-10-23 14:32:32,148 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 => /127.0.0.1:41414] CONNECTED: /127.0.0.1:46037
2013-10-23 14:32:43,086 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 :> /127.0.0.1:41414] DISCONNECTED
2013-10-23 14:32:43,096 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 :> /127.0.0.1:41414] UNBOUND
2013-10-23 14:32:43,096 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 :> /127.0.0.1:41414] CLOSED
2013-10-23 14:32:43,097 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed(NettyServer.java:209)] Connection to /127.0.0.1:46037 disconnected.
In case it helps, I did run the program in debug mode; when it hung, I suspended it and took a stack trace. I tried reading through the code, but I am not sure why the program hangs with the flume appender:

Daemon Thread [Avro NettyTransceiver I/O Worker 1] (Suspended)  
Logger(Category).callAppenders(LoggingEvent) line: 205  
Logger(Category).forcedLog(String, Priority, Object, Throwable) line: 391  
Logger(Category).log(String, Priority, Object, Throwable) line: 856  
Log4jLoggerAdapter.debug(String) line: 209  
NettyTransceiver$NettyClientAvroHandler.handleUpstream(ChannelHandlerContext, ChannelEvent) line: 491  
DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent) line: 564  
DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(ChannelEvent) line: 792  
NettyTransportCodec$NettyFrameDecoder(SimpleChannelUpstreamHandler).channelBound(ChannelHandlerContext, ChannelStateEvent) line: 166  
NettyTransportCodec$NettyFrameDecoder(SimpleChannelUpstreamHandler).handleUpstream(ChannelHandlerContext, ChannelEvent) line: 98  
DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent) line: 564  
DefaultChannelPipeline.sendUpstream(ChannelEvent) line: 559  
Channels.fireChannelBound(Channel, SocketAddress) line: 199  
NioWorker$RegisterTask.run() line: 191  
NioWorker(AbstractNioWorker).processRegisterTaskQueue() line: 329  
NioWorker(AbstractNioWorker).run() line: 235  
NioWorker.run() line: 38  
DeadLockProofWorker$1.run() line: 42  
ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145  
ThreadPoolExecutor$Worker.run() line: 615  
Thread.run() line: 744
Here is the Java program:

import java.io.IOException;
import java.sql.SQLException;
import org.apache.log4j.Logger;

public class log4jExample {
    // Root logger picks up both the FILE and flume appenders from log4j.properties
    static Logger log = Logger.getRootLogger();

    public static void main(String[] args) throws IOException, SQLException {
        log.debug("Hello this is a debug message");
    }
}
Here are the log4j properties:

# Define the root logger with appender file
log = /home/vm4learning/WorkSpace/BigData/Log4J-Example/log
log4j.rootLogger = DEBUG, FILE, flume

# Define the file appender
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=${log}/log.out
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.conversionPattern=%m%n

# Define the flume appender
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 41414
log4j.appender.flume.UnsafeMode = false
log4j.appender.flume.layout=org.apache.log4j.PatternLayout
log4j.appender.flume.layout.ConversionPattern=%m%n
Here are the dependencies in Eclipse:

flume-ng-log4jappender-1.4.0.jar
log4j-1.2.17.jar
flume-ng-sdk-1.4.0.jar
avro-1.7.3.jar
netty-3.4.0.Final.jar
avro-ipc-1.7.3.jar
slf4j-api-1.6.1.jar
slf4j-log4j12-1.6.1.jar
Here is the content of flume.conf:

# Tell agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = hdfs-sink1

# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

# Define an HDFS sink that writes all events it receives to HDFS
# and connect it to the other end of the same channel.
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://localhost:9000/flume/events/

agent1.sinks.hdfs-sink1.channel = ch1
agent1.sources.avro-source1.channels = ch1
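
A few optional settings (assumed additions, not part of the original question, but standard memory-channel and HDFS-sink properties) make the example easier to verify. In particular, hdfs.rollCount defaults to 10, so the sink only closes a file, and makes its events visible, after ten events; rolling after every event lets a single test message show up immediately:

# Assumed additions for easier testing (standard Flume properties)
agent1.channels.ch1.capacity = 1000
agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink1.hdfs.rollCount = 1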

How can this be resolved?

My guess is that you are trying to log Flume's own events through Flume. I have seen this issue with other appenders, but not with the Log4j 1 one.


I would look at modifying your log4j.properties to exclude Flume, Netty and Avro events, and see whether that fixes it.
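
A minimal sketch of that exclusion in Log4j 1.x terms. The stack trace above fits this theory: the appender's own Netty I/O worker logs through SLF4J/Log4j, which routes the event back into the flume appender. The package names below are assumptions based on the jars in use (Netty 3.x lives under org.jboss.netty); routing those loggers to the FILE appender only, with additivity off, keeps their events away from the flume appender:

# Keep the flume appender's own transport logging out of the flume appender
log4j.logger.org.apache.flume = INFO, FILE
log4j.additivity.org.apache.flume = false
log4j.logger.org.apache.avro = INFO, FILE
log4j.additivity.org.apache.avro = false
log4j.logger.org.jboss.netty = INFO, FILE
log4j.additivity.org.jboss.netty = false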

I had a similar problem using the Flume appender in Log4J: the program would hang whenever I tried to instantiate the Logger object. I remember the problem was that I did not have all the required libraries on the classpath; once I added them, it worked fine.

I suggest you get that example working first. Although its pom.xml builds a JAR containing all the dependencies, editing it to copy the dependency JAR files into a separate directory instead yields the following list:

avro-1.7.4.jar
avro-ipc-1.7.4.jar
commons-codec-1.3.jar
commons-collections-3.2.1.jar
commons-compress-1.4.1.jar
commons-lang-2.5.jar
commons-logging-1.1.1.jar
flume-ng-log4jappender-1.4.0-cdh4.5.0.jar
flume-ng-sdk-1.4.0-cdh4.5.0.jar
hamcrest-core-1.1.jar
httpclient-4.0.1.jar
httpcore-4.0.1.jar
jackson-core-asl-1.8.8.jar
jackson-mapper-asl-1.8.8.jar
jetty-6.1.26.jar
jetty-util-6.1.26.jar
junit-4.10.jar
libthrift-0.7.0.jar
log4j-1.2.16.jar
netty-3.5.0.Final.jar
paranamer-2.3.jar
slf4j-api-1.7.2.jar
slf4j-jdk14-1.7.2.jar
snappy-java-1.0.4.1.jar
velocity-1.7.jar
xz-1.0.jar

Some of these libraries (such as junit) are probably not really needed, but I suggest starting with all of them, seeing whether your example works, and only then trying to determine the minimal required set.

I ran into a similar problem, and the solution was:

  • Change the root logger in log4j.properties from DEBUG to INFO level
But I do not know what is happening inside Flume; I am trying to debug it.
If anyone knows, please let me know, thx~~~
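
In terms of the properties posted above, that change is just the following line; a sketch of the suggested fix, with the rest of the file unchanged. Raising the root level keeps log4j from feeding the appender's own DEBUG-level Netty/Avro events back into the flume appender:

log4j.rootLogger = INFO, FILE, flume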

From the comment thread:

  • As I mentioned in the OP, the file appender works fine; when I include the flume appender, the program hangs in Eclipse while getting the root logger. The log statement in the client is never reached. It may be that the get-root-logger code blocks while trying to log to a logger that does not exist yet.
  • Have you tried my suggestion?
  • I have tried it. It works when I use this log4j.properties:
    log4j.rootLogger=DEBUG,FILE
    When I add flume, it hangs. I added the extra jars; it made no difference. The log4j program still hangs in Eclipse. thnx
  • Changing the root logger from debug to info: log4j tries to send messages to flume on port 41414 (it errors when connecting to localhost/127.0.0.1:41414 while flume is not running) and still hangs in Eclipse. Nothing shows up in flume or HDFS. Let me know if you make any progress on the flume side. Also, why does the root logger have to be changed from debug to info?
  • At info level I tried log4j -> flume agent -> sink to HDFS, the local file system, and RabbitMQ; it works. Debug-level mode is not usable in production anyway, so this is not a big bug.
  • As I mentioned earlier, I am able to use Flume independently of Log4J. The main program with log4j hangs only with the flume appender.
  • Could you share the whole project on GitHub? This problem may be caused by jar dependencies. If you use only flume-ng-log4jappender-1.4.0-jar-with-dependencies.jar and no other jars, debug level works fine.
  • Works great, thanx. I could not get the fat jar, so I copied all the jars instead. Also, hdfs.rollCount defaults to 10, so you have to log .info ten times (see the sketch below).
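
A minimal sketch of that last point, with a hypothetical class name: at INFO level, ten events are enough to hit the HDFS sink's default rollCount of 10, so the file is rolled and the events become visible (or lower rollCount as in the flume.conf sketch earlier):

import org.apache.log4j.Logger;

// Hypothetical test driver: emits ten INFO events so the HDFS sink's
// default rollCount of 10 rolls the file and the events appear in HDFS.
public class Log4jFlumeSmokeTest {
    private static final Logger LOG = Logger.getRootLogger();

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            LOG.info("flume smoke test event " + i);
        }
    }
}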