Flume: org.apache.avro.ipc.NettyServer: Unexpected exception from downstream. java.nio.channels.ClosedChannelException

How can I fix this problem? When I configured the Flume server, it logged the following:

2014-10-20 22:24:01,480 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 => /ip:34001] OPEN
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 => /ip:34001] BOUND: /ip:34001
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /10.182.4.70:57063 => /ip:34001] CONNECTED: /ip:57063
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 :> /ip:34001] DISCONNECTED
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 :> /10.182.4.79:34001] UNBOUND
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /10.182.4.70:57063 :> /10.182.4.79:34001] CLOSED
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: Connection to /10.182.4.70:57063 disconnected.
2014-10-20 22:24:01,481 WARN org.apache.avro.ipc.NettyServer: Unexpected exception from downstream.
java.nio.channels.ClosedChannelException
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:673)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:400)
        at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:120)
        at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:59)
        at org.jboss.netty.channel.Channels.write(Channels.java:733)
        at org.jboss.netty.channel.Channels.write(Channels.java:694)
        at org.jboss.netty.handler.codec.compression.ZlibEncoder.finishEncode(ZlibEncoder.java:380)
        at org.jboss.netty.handler.codec.compression.ZlibEncoder.handleDownstream(ZlibEncoder.java:316)
        at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:55)
        at org.jboss.netty.channel.Channels.close(Channels.java:821)
flume.conf is shown below:

instance_35001.channels.channel1.checkpointDir=editlog/checkpoint
instance_35001.channels.channel1.dataDirs=editlog/data
instance_35001.channels.channel1.capacity=200000000
instance_35001.channels.channel1.transactionCapacity=1000000
instance_35001.channels.channel1.checkpointInterval=10000

instance_35001.sources=source1
instance_35001.sources.source1.type=avro
instance_35001.sources.source1.bind=0.0.0.0
instance_35001.sources.source1.port=34001
instance_35001.sources.source1.compression-type=deflate
instance_35001.sources.source1.channels=channel1

instance_35001.sources.source1.interceptors = inter1
instance_35001.sources.source1.interceptors.inter1.type = host
instance_35001.sources.source1.interceptors.inter1.hostHeader = servername

instance_35001.sinks=sink1

instance_35001.sinks.sink1.type=hdfs
instance_35001.sinks.sink1.hdfs.path=hdfs://address:5000/user/admin/%{appname}/%Y/%m/%d/
instance_35001.sinks.sink1.hdfs.filePrefix=%{appname}-%{hostname}-%{servername}.34001
instance_35001.sinks.sink1.hdfs.rollInterval=0
instance_35001.sinks.sink1.hdfs.rollCount=0
instance_35001.sinks.sink1.hdfs.rollSize=21521880492
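
For reference, since the source sets compression-type=deflate (which is why the stack trace runs through ZlibEncoder), the Avro sink on the sending agent must use the same compression-type. A minimal sketch of what that upstream sink configuration might look like, with the agent name and hostname as hypothetical placeholders:

upstream_agent.sinks.avrosink1.type=avro
upstream_agent.sinks.avrosink1.hostname=<flume-server-ip>
upstream_agent.sinks.avrosink1.port=34001
upstream_agent.sinks.avrosink1.compression-type=deflate
upstream_agent.sinks.avrosink1.channel=channel1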

The environment is CDH5 and the sink writes to HDFS. The logs look normal most of the time, but Flume is very slow. Please help. Thanks.

One thing I see here is that your roll size is much larger than your channel capacity. Everything sits in the channel until the file is rolled, so the channel fills up at some point and starts throwing errors:

instance_35001.channels.channel1.capacity=200000000

instance_35001.sinks.sink1.hdfs.rollSize=21521880492
Keep the roll size around the block size configured for HDFS. Also, the HDFS sink's default batch size is 100; change it to a larger value and see how it behaves.
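
For example, assuming the CDH5 default block size of 128 MB, the sink settings might look something like this (hdfs.batchSize is the HDFS sink property whose default is 100; the value 1000 here is only illustrative):

instance_35001.sinks.sink1.hdfs.rollSize=134217728
instance_35001.sinks.sink1.hdfs.batchSize=1000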

Capacity is measured in number of events, while rollSize is measured in actual bytes, so it is hard to relate the two directly. Still, you want the roll size close to the HDFS block size, which defaults to 128 MB.

rollSize = 21521880492 bytes ≈ 21.5 GB
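
Working that out against the 128 MB default block size: 134217728 bytes = 128 MB, and 21521880492 / 134217728 ≈ 160, so the configured roll size is roughly 160 times the default block size.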