Flume: Unexpected exception from downstream. java.io.IOException: Connection reset by peer


When I try to send multiple logs to a single port, I get the exception below. Does anyone know whether this is a problem with my configuration, or whether I need to raise it as a bug? I also tried configuring multiple ports, but I still get the same exception. Any help would be great, as I have been stuck on this for a week.

[WARN - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.exceptionCaught(NettyServer.java:201)] Unexpected exception from downstream.
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
at sun.nio.ch.IOUtil.read(IOUtil.java:193)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:66)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

Collector configuration
hdfs-agent.sources=avro-collect
hdfs-agent.sinks=hdfs-write
hdfs-agent.channels=fileChannel
hdfs-agent.sources.avro-collect.type=avro
hdfs-agent.sources.avro-collect.bind=
hdfs-agent.sources.avro-collect.port=41414
hdfs-agent.sources.avro-collect.channels=fileChannel
hdfs-agent.sinks.hdfs-write.type=hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path=hdfs://hadoop:54310/flume/%{host}/%Y%m%d/%{logFileType}
hdfs-agent.sinks.hdfs-write.hdfs.rollSize=209715200
hdfs-agent.sinks.hdfs-write.hdfs.rollCount=6000
hdfs-agent.sinks.hdfs-write.hdfs.fileType=DataStream
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat=Text
hdfs-agent.sinks.hdfs-write.hdfs.filePrefix=%{host}
hdfs-agent.sinks.hdfs-write.hdfs.maxOpenFiles=100000
hdfs-agent.sinks.hdfs-write.hdfs.batchSize=5000
hdfs-agent.sinks.hdfs-write.hdfs.rollInterval=75
hdfs-agent.sinks.hdfs-write.hdfs.callTimeout=5000000
hdfs-agent.sinks.hdfs-write.channel=fileChannel
hdfs-agent.channels.fileChannel.type=file
hdfs-agent.channels.fileChannel.dataDirs=/u01/Collector/flume_channel/dataDir13
hdfs-agent.channels.fileChannel.checkpointDir=/u01/Collector/flume_channel/checkpointDir13
hdfs-agent.channels.fileChannel.transactionCapacity=50000
hdfs-agent.channels.fileChannel.capacity=9000000
hdfs-agent.channels.fileChannel.write-timeout=250000
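One thing worth ruling out early: Flume silently ignores properties whose agent or component prefix does not match what was declared, so a sink declared as `hdfs write` (two tokens) while its properties use `hdfs-write` leaves the sink half-configured without any error. A rough sanity-check sketch (the `lint_flume_conf` helper below is hypothetical, not part of Flume):

```python
def lint_flume_conf(text):
    """Rough sanity check for a Flume properties file: flags agent-name
    prefixes that disagree, and source/sink/channel names that are
    declared but never configured (or vice versa). Not a full validator."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, val = (p.strip() for p in line.split("=", 1))
            pairs.append((key, val))
    problems = []
    agents = {k.split(".")[0] for k, _ in pairs}
    if len(agents) > 1:
        problems.append("multiple agent prefixes: %s" % sorted(agents))
    declared, configured = set(), set()
    for key, val in pairs:
        parts = key.split(".")
        if len(parts) == 2 and parts[1] in ("sources", "sinks", "channels"):
            # e.g. "hdfs-agent.sinks = hdfs-write" declares component names
            declared.update((parts[1], name) for name in val.split())
        elif len(parts) > 2 and parts[1] in ("sources", "sinks", "channels"):
            # e.g. "hdfs-agent.sinks.hdfs-write.type = hdfs" configures one
            configured.add((parts[1], parts[2]))
    for kind, name in sorted(configured - declared):
        problems.append("configured but never declared: %s.%s" % (kind, name))
    for kind, name in sorted(declared - configured):
        problems.append("declared but never configured: %s.%s" % (kind, name))
    return problems

# A declaration of "hdfs write" alongside properties for "hdfs-write":
for p in lint_flume_conf("a.sinks=hdfs write\na.sinks.hdfs-write.type=hdfs"):
    print(p)
```

Running this over both agent files before restarting them catches name-mismatch typos that Flume itself never reports.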

Sender configuration
app-agent.sources=tail tailapache
app-agent.channels=fileChannel
app-agent.sinks=avro-forward-sink avro-forward-sink-apache
app-agent.sources.tail.type=exec
app-agent.sources.tail.command=tail -f /server/default/log/server.log
app-agent.sources.tail.channels=fileChannel
app-agent.sources.tailapache.type=exec
app-agent.sources.tailapache.command=tail -f /logs/access_log
app-agent.sources.tailapache.channels=fileChannel
app-agent.sources.tail.interceptors=ts st stt
app-agent.sources.tail.interceptors.ts.type=org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tail.interceptors.st.type=static
app-agent.sources.tail.interceptors.st.key=logFileType
app-agent.sources.tail.interceptors.st.value=jboss
app-agent.sources.tail.interceptors.stt.type=static
app-agent.sources.tail.interceptors.stt.key=host
app-agent.sources.tail.interceptors.stt.value=Mart
app-agent.sources.tailapache.interceptors=ts1 i1 st1
app-agent.sources.tailapache.interceptors.ts1.type=org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tailapache.interceptors.i1.type=static
app-agent.sources.tailapache.interceptors.i1.key=logFileType
app-agent.sources.tailapache.interceptors.i1.value=apache
app-agent.sources.tailapache.interceptors.st1.type=static
app-agent.sources.tailapache.interceptors.st1.key=host
app-agent.sources.tailapache.interceptors.st1.value=Mart
app-agent.sinks.avro-forward-sink.type=avro
app-agent.sinks.avro-forward-sink.hostname=
app-agent.sinks.avro-forward-sink.port=41414
app-agent.sinks.avro-forward-sink.channel=fileChannel
app-agent.sinks.avro-forward-sink-apache.type=avro
app-agent.sinks.avro-forward-sink-apache.hostname=
app-agent.sinks.avro-forward-sink-apache.port=41414
app-agent.sinks.avro-forward-sink-apache.channel=fileChannel
app-agent.channels.fileChannel.type=file
app-agent.channels.fileChannel.dataDirs=/usr/local/lib/flume-ng/flume_channel/dataDir13
app-agent.channels.fileChannel.checkpointDir=/usr/local/lib/flume-ng/flume_channel/checkpointDir13
app-agent.channels.fileChannel.transactionCapacity=50000
app-agent.channels.fileChannel.capacity=9000000
app-agent.channels.fileChannel.write-timeout=250000
app-agent.channels.fileChannel.keep-alive=600
Starting from here:

BalusC's answer:

The other side has abruptly aborted the connection in the midst of a transaction. That can have several causes which are not controllable from the server side. For example, the end user decided to shut down the client or changed the server abruptly while still interacting with your server, or the client program crashed, or the end user's internet connection went down, or the end user's machine crashed, and so on.
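Mechanically, "Connection reset by peer" means the OS delivered a TCP RST while the reader was blocked in `read()`. A minimal sketch that reproduces it locally with plain sockets (nothing Flume-specific; `SO_LINGER` with a zero timeout forces the abortive close that a crashing or killed client effectively performs):

```python
import socket
import struct
import threading

ready = threading.Event()
port = []

def abrupt_server():
    # Accept one connection, then close it abortively: SO_LINGER with a
    # zero timeout makes close() send a TCP RST instead of a clean FIN.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))  # linger on, timeout 0 -> RST
    conn.close()
    srv.close()

t = threading.Thread(target=abrupt_server)
t.start()
ready.wait()

cli = socket.create_connection(("127.0.0.1", port[0]))
t.join()  # make sure the RST has been sent before we try to read
try:
    cli.recv(1024)
    outcome = "clean shutdown (FIN)"
except ConnectionResetError:
    outcome = "Connection reset by peer (RST)"
print(outcome)
```

This is why the collector's NettyServer only logs a WARN: from its side the reset is indistinguishable from any client that vanished mid-transaction, and by itself it does not indicate a broken collector configuration.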

hdfs-agent.sources = avro-collect
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels = fileChannel

hdfs-agent.sources.avro-collect.type = avro
hdfs-agent.sources.avro-collect.bind = <<System IP>>
hdfs-agent.sources.avro-collect.port = 41414
hdfs-agent.sources.avro-collect.channels = fileChannel

hdfs-agent.sinks.hdfs-write.type = hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://hadoop:54310/flume/%{host}/%Y%m%d/%{logFileType}
hdfs-agent.sinks.hdfs-write.hdfs.rollSize = 209715200
hdfs-agent.sinks.hdfs-write.hdfs.rollCount = 6000
hdfs-agent.sinks.hdfs-write.hdfs.fileType = DataStream
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat = Text
hdfs-agent.sinks.hdfs-write.hdfs.filePrefix = %{host}
hdfs-agent.sinks.hdfs-write.hdfs.maxOpenFiles = 100000
hdfs-agent.sinks.hdfs-write.hdfs.batchSize = 5000
hdfs-agent.sinks.hdfs-write.hdfs.rollInterval = 75
hdfs-agent.sinks.hdfs-write.hdfs.callTimeout = 5000000
hdfs-agent.sinks.hdfs-write.channel = fileChannel

hdfs-agent.channels.fileChannel.type = file
hdfs-agent.channels.fileChannel.dataDirs = /u01/Collector/flume_channel/dataDir13
hdfs-agent.channels.fileChannel.checkpointDir = /u01/Collector/flume_channel/checkpointDir13
hdfs-agent.channels.fileChannel.transactionCapacity = 50000
hdfs-agent.channels.fileChannel.capacity = 9000000
hdfs-agent.channels.fileChannel.write-timeout = 250000

app-agent.sources = tail tailapache
app-agent.channels = fileChannel
app-agent.sinks = avro-forward-sink avro-forward-sink-apache

app-agent.sources.tail.type = exec
app-agent.sources.tail.command = tail -f /server/default/log/server.log
app-agent.sources.tail.channels = fileChannel

app-agent.sources.tailapache.type = exec
app-agent.sources.tailapache.command = tail -f /logs/access_log
app-agent.sources.tailapache.channels = fileChannel

app-agent.sources.tail.interceptors = ts st stt
app-agent.sources.tail.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tail.interceptors.st.type = static
app-agent.sources.tail.interceptors.st.key = logFileType
app-agent.sources.tail.interceptors.st.value = jboss
app-agent.sources.tail.interceptors.stt.type = static
app-agent.sources.tail.interceptors.stt.key = host
app-agent.sources.tail.interceptors.stt.value = Mart

app-agent.sources.tailapache.interceptors = ts1 i1 st1
app-agent.sources.tailapache.interceptors.ts1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tailapache.interceptors.i1.type = static
app-agent.sources.tailapache.interceptors.i1.key = logFileType
app-agent.sources.tailapache.interceptors.i1.value = apache
app-agent.sources.tailapache.interceptors.st1.type = static
app-agent.sources.tailapache.interceptors.st1.key = host
app-agent.sources.tailapache.interceptors.st1.value = Mart

app-agent.sinks.avro-forward-sink.type = avro
app-agent.sinks.avro-forward-sink.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink.port = 41414
app-agent.sinks.avro-forward-sink.channel = fileChannel

app-agent.sinks.avro-forward-sink-apache.type = avro
app-agent.sinks.avro-forward-sink-apache.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink-apache.port = 41414
app-agent.sinks.avro-forward-sink-apache.channel = fileChannel

app-agent.channels.fileChannel.type = file
app-agent.channels.fileChannel.dataDirs = /usr/local/lib/flume-ng/flume_channel/dataDir13
app-agent.channels.fileChannel.checkpointDir = /usr/local/lib/flume-ng/flume_channel/checkpointDir13
app-agent.channels.fileChannel.transactionCapacity = 50000
app-agent.channels.fileChannel.capacity = 9000000
app-agent.channels.fileChannel.write-timeout = 250000
app-agent.channels.fileChannel.keep-alive = 600
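Since the fix here was filling in the `bind`/`hostname` addresses, it's worth confirming from the sender host that the collector's avro source is actually reachable on 41414 before touching anything else. A minimal reachability check (the `port_open` helper is hypothetical; Flume's bundled `flume-ng avro-client` can then send a real test event):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the sender host, port_open("<collector IP>", 41414) should be True
# before the avro-forward-sink instances can deliver anything.
```

If this returns False, the problem is binding or networking (wrong `bind` address, firewall, agent not started), not the channel or sink tuning above.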