
Java log4j TCP appender and logstash source host


I am using the log4j SocketAppender to send logs from a Tomcat instance to another machine. On that remote server I have Logstash set up to transform the data and then forward it to my Elasticsearch cluster. For some reason I cannot get the original host of the log to show up in Elasticsearch. I get the host of the Logstash forwarder, but the host from the original log doesn't even persist through the pipeline.

My Logstash configuration:

input {
  log4j {
    mode => "server"
    port => "4560"
  }
  tcp {
    port => "4561"
    codec => "json"
    type => "access"
  }
}

filter {
  mutate {
    add_field => [ "source_ip", "@source_host" ]
  }
  if [type] == "access" {
    mutate { remove_field => "something-private" }
  }
  ruby {
    code => "event['@timestamp'] = event['@timestamp'].localtime('-05:00')"
  }
  ruby {
    code => "event['@pretty_timestamp'] = event['@timestamp'].strftime('%A, %B %e %Y at %l:%M:%S %p')"
  }
}

output {
  elasticsearch {
    host        => "elasticsearch.host"
    cluster     => "log-cluster"
    protocol    => "http"
  }
}
All of the logs come through, but I want their source host included as well, so I can tell which servers they came from.

A typical log event looks like this:

{
  "_index": "logstash-2015.01.28",
  "_type": "logs",
  "_id": "AUsyNvDyCso0hZPOUVfM",
  "_score": null,
  "_source": {
    "message": "Executing query: Removed",
    "@version": "1",
    "@timestamp": "2015-01-28T15:23:56.331-05:00",
    "host": "10.253.1.112:31441",
    "path": "org.blahblah",
    "priority": "INFO",
    "logger_name": "org.blahblah",
    "thread": "http-bio-8080-exec-13",
    "class": "?",
    "file": "?:?",
    "method": "?",
    "source_ip": "@source_host",
    "@pretty_timestamp": "Wednesday, January 28 2015 at  3:23:56 PM"
  },
  "sort": [
    1422476636331,
    1422476636331
  ]
}
Any idea how to adjust this so that the correct source host is included?
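One thing that stands out in the event above is `"source_ip": "@source_host"`: in a `mutate` filter, a bare string is copied literally, so to copy another field's value you need Logstash's `%{field}` sprintf syntax. A sketch of the corrected filter (assuming the originating host arrives in the `host` field):

```
filter {
  mutate {
    # %{host} expands to the value of the "host" field;
    # the bare string "@source_host" was being copied verbatim
    add_field => { "source_ip" => "%{host}" }
  }
}
```

Note this only copies what Logstash already sees: with a plain SocketAppender, `host` is the TCP peer address, so if events arrive via a forwarder it will still be the forwarder's address, not the original server's.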

log4j.rootLogger=INFO, default, SOCKET, FILE
log4j.appender.default=org.apache.log4j.ConsoleAppender
log4j.appender.default.layout=org.apache.log4j.PatternLayout
log4j.appender.default.layout.ConversionPattern=%d [%t] %-5p %c{1} - %m%n

log4j.appender.SOCKET=org.apache.log4j.net.SocketAppender
log4j.appender.SOCKET.Port=4560
log4j.appender.SOCKET.RemoteHost=${logHost}
log4j.appender.SOCKET.ReconnectionDelay=1000

log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=${log_home}neuron-logging.log
log4j.appender.FILE.MaxFileSize=100MB
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %p %t %c - %m%n

I added a couple of Catalina options: -DlogHost= and -Dlog_home=blah blah blah
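For reference, those system properties are typically appended to Tomcat's startup options, e.g. in `bin/setenv.sh`. The host name and path below are placeholders, not values from the original question:

```shell
# Hypothetical values: substitute your own Logstash host and log directory.
# ${logHost} and ${log_home} in log4j.properties resolve to these -D values.
# Note log_home needs a trailing slash, since the config concatenates it
# directly with the file name (${log_home}neuron-logging.log).
CATALINA_OPTS="$CATALINA_OPTS -DlogHost=logstash.example.com -Dlog_home=/var/log/tomcat/"
echo "$CATALINA_OPTS"
```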

Have you tried using GELF? logstash-gelf, for example, provides a flexible configuration, and you get the originating host name out of the box. Beyond that, you can also supply data from the MDC and set static fields.

Example configuration:

log4j.appender.gelf=biz.paluch.logging.gelf.log4j.GelfLogAppender
log4j.appender.gelf.Threshold=INFO
log4j.appender.gelf.Host=udp:localhost
log4j.appender.gelf.Port=12201
log4j.appender.gelf.ExtractStackTrace=true
log4j.appender.gelf.FilterStackTrace=true

# These are static fields
log4j.appender.gelf.AdditionalFields=fieldName1=fieldValue1,fieldName2=fieldValue2

# These are fields supplied via the MDC
log4j.appender.gelf.MdcFields=mdcField1,mdcField2
log4j.appender.gelf.DynamicMdcFields=mdc.*,(mdc|MDC)fields
log4j.appender.gelf.IncludeFullMdc=true
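The MDC (Mapped Diagnostic Context) mentioned above is essentially a per-thread map of key/value pairs that an appender can copy into every log event. The stdlib-only class below is a conceptual stand-in, not the real `org.apache.log4j.MDC` API, just to illustrate the mechanism that `MdcFields` relies on:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of an MDC: a per-thread map whose entries an
// appender (e.g. GelfLogAppender with MdcFields=requestId) can attach
// to each outgoing log event.
public class MdcSketch {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CTX.get().put(key, value);
    }

    public static String get(String key) {
        return CTX.get().get(key);
    }

    public static void main(String[] args) {
        // With the real log4j MDC you would call MDC.put(...) the same way,
        // typically at the start of a request, and clear it when done.
        put("requestId", "abc-123");
        System.out.println("requestId=" + get("requestId"));
    }
}
```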

Can you also post your log4j configuration? Or whatever mechanism you use to ship those logs to the logstash server.

@VineethMohan I added my log4j configuration, thanks!