Java logback: AsyncAppender takes more time than synchronous FileAppender
I found that logback asynchronous logging performs worse than synchronous logging. Details are below. What am I missing?

Test class:
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main extends Thread {

    public static final Logger defaultLogger = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) throws IOException {
        new Main().start();
        System.out.println("... Thread started\n");
        // This is to block till thread finishes writing
        System.in.read();
    }

    public void run() {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            defaultLogger.warn("Default logger:");
        }
        long end = System.currentTimeMillis();
        System.out.println("\n**** " + (end - start));
    }
}
logback.xml:

<configuration>
  <appender name="DEFAULT-FILE" class="ch.qos.logback.core.FileAppender">
    <append>true</append>
    <file>logger.log</file>
    <encoder charset="UTF-8">
      <pattern>[%date] [%thread] %msg %n</pattern>
    </encoder>
  </appender>

  <appender name="DEFAULT-ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <!-- Have tried to play around with queue size - no major effect -->
    <!-- <queueSize>512</queueSize> -->
    <discardingThreshold>0</discardingThreshold>
    <appender-ref ref="DEFAULT-FILE" />
  </appender>

  <root level="all">
    <!-- Switch between the two appenders -->
    <appender-ref ref="DEFAULT-FILE" />
    <!-- <appender-ref ref="DEFAULT-ASYNC" /> -->
  </root>
</configuration>
Observations:
Synchronous FileAppender: ~5000 ms
AsyncAppender: ~7000 ms
Versions:
slf4j: 1.7.19
logback: 1.1.6
One problem with your code is that you create 1,000,000 log entries in a very short time, while the AsyncAppender has a queue with a default maximum capacity of 256. So, as a first step, you have to increase the queue size to 1,000,000; otherwise the measured times are not valid.
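As a sketch, the change suggested above could look like this in the question's own config (the appender names are taken from the question; 1000000 matches the benchmark's loop count):

```xml
<appender name="DEFAULT-ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <!-- Large enough to hold every event produced by the benchmark loop,
       so the logging thread is never blocked on a full queue -->
  <queueSize>1000000</queueSize>
  <discardingThreshold>0</discardingThreshold>
  <appender-ref ref="DEFAULT-FILE" />
</appender>
```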
Have you tried a queue size of 1,000,000? Without it the benchmark forces the queue to its limit, which is probably the main cause of the wrong timings. Also, you should take several measurements within the same JVM instance and discard the first one, since it can be polluted by class and buffer initialization time. The queue size needs to be large enough; otherwise logging blocks whenever the queue is full and is being drained by the worker thread (logback does not resize the queue; logging simply becomes slower because of the blocking). This is mentioned in the [documentation](): by default, the event queue is configured with a maximum capacity of 256 events. If the queue is filled up, then application threads are blocked from logging new events until the worker thread has had a chance to dispatch one or more events.
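The blocking behavior described above can be illustrated with a plain `ArrayBlockingQueue`, which is analogous to the bounded queue inside AsyncAppender (this is a standalone sketch, not logback's actual implementation; the class and method names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueBlockingDemo {

    // Fills a bounded queue to capacity (AsyncAppender's default is 256),
    // then checks whether another event is accepted before and after the
    // "worker thread" drains one element.
    static boolean[] demo(int capacity) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(capacity);
        for (int i = 0; i < capacity; i++) {
            queue.offer("event-" + i); // queue has room here, so offer() succeeds
        }
        // offer() fails fast on a full queue; a logging thread feeding a full
        // AsyncAppender queue is instead blocked at this point, which is what
        // erases the async advantage in the benchmark.
        boolean acceptedWhileFull = queue.offer("extra");
        queue.poll(); // the worker thread dispatches one event
        boolean acceptedAfterDrain = queue.offer("extra");
        return new boolean[] { acceptedWhileFull, acceptedAfterDrain };
    }

    public static void main(String[] args) {
        boolean[] r = demo(256);
        System.out.println("accepted while full:  " + r[0]);
        System.out.println("accepted after drain: " + r[1]);
    }
}
```

With a queue large enough to absorb the whole burst, the producer never reaches the "full" state, which is the point of raising `queueSize` for this benchmark.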
@Robert If you can add this as an answer, I will accept it. Here are some better benchmarks:
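Independent of any external benchmark, the warm-up advice above can be sketched as a minimal harness that runs the workload several times in one JVM (the class and method names are illustrative; the `Runnable` stands in for any logging workload):

```java
public class WarmupHarness {

    // Runs the workload several times in the same JVM and reports each
    // elapsed time in milliseconds; the first run is typically polluted by
    // class loading and buffer initialization, so discard it when comparing.
    static long[] measure(Runnable workload, int runs) {
        long[] times = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            workload.run();
            times[i] = (System.nanoTime() - start) / 1_000_000;
        }
        return times;
    }

    public static void main(String[] args) {
        long[] times = measure(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100_000; i++) {
                sb.append("Default logger:"); // stand-in for defaultLogger.warn(...)
            }
        }, 5);
        for (int i = 0; i < times.length; i++) {
            System.out.println("run " + i + ": " + times[i] + " ms");
        }
    }
}
```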