How do I count the number of HTTP requests made by the AWS SDK for Java?


I want to write a test asserting that a class wrapping the AWS S3 client makes the correct number of requests to the client:

class Wrapper {
    //
    private void buildClient() {
        this.client = AmazonS3ClientBuilder.standard()
                .withCredentials(this.secret)
                .withRegion(this.region)
                .build();
    }

    public void doSomething() {
        while(checkSomething()) {
            client.doSomething();
            client.doSomething();
        }
    }
}
I would like to do something like this:

class WrapperTest {
    public void testDoSomething() {
        wrapper.doSomething();
        assertEquals(3, numberOfHttpRequests);
    }
}
For testing purposes I can always mock the client object, but I am also considering storing these statistics for performance profiling in production (so collecting the number of bytes could be as useful as collecting the number of HTTP requests itself).


So far, from reading the Javadocs, I have found: 1), 2) and 3). But I am not sure which of them is better suited to collecting the number of requests, or how to configure them programmatically.
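The links above did not survive, so I cannot tell which SDK facilities they pointed at; but one real programmatic hook in the v1 SDK is `com.amazonaws.handlers.RequestHandler2`, whose callbacks fire around each request a client dispatches. A minimal sketch (the `CountingRequestHandler` class and `buildCountingClient` method are made-up names; note that retried HTTP attempts are not necessarily counted individually):

```java
import java.util.concurrent.atomic.AtomicLong;

import com.amazonaws.Request;
import com.amazonaws.handlers.RequestHandler2;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Counts every request this client dispatches via the SDK's handler callbacks.
class CountingRequestHandler extends RequestHandler2 {
    final AtomicLong requests = new AtomicLong();

    @Override
    public void beforeRequest(Request<?> request) {
        requests.incrementAndGet();
    }

    // Wiring sketch: register the handler when building the client.
    static AmazonS3 buildCountingClient(CountingRequestHandler counter) {
        return AmazonS3ClientBuilder.standard()
                .withRequestHandlers(counter) // inherited from AwsClientBuilder
                .build();
    }
}
```

After exercising the client, `counter.requests.get()` gives the number of requests issued through it.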

For unit tests you definitely want to use scalamock or any other mocking API.
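If pulling in a mocking library is not an option, the same idea can be hand-rolled: hide the S3 client behind a small seam interface and count invocations in a test double. The `S3Ops` and `CountingS3Stub` names below are hypothetical, not part of the question's code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical seam: the wrapper would call this interface
// instead of holding an AmazonS3 field directly.
interface S3Ops {
    void doSomething();
}

// Hand-rolled test double that counts every call the wrapper makes.
class CountingS3Stub implements S3Ops {
    private final AtomicInteger calls = new AtomicInteger();

    @Override
    public void doSomething() {
        calls.incrementAndGet();
    }

    int callCount() {
        return calls.get();
    }
}
```

The test then injects the stub into the wrapper and asserts on `callCount()` instead of a real HTTP request count.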

Regarding metrics on the number of requests:

1) I suggest collecting the logs and shipping them to elasticsearch (very common these days), where you can aggregate on specific fields.

Use src/main/resources/log4j.properties as below:

log4j.rootLogger=INFO, file, consoleLogs

# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=myapp.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

log4j.appender.consoleLogs=org.apache.log4j.ConsoleAppender
log4j.appender.consoleLogs.layout=org.apache.log4j.PatternLayout
log4j.appender.consoleLogs.layout.ConversionPattern=%d [%t] %-5p %c -  %m%n
log4j.logger.com.amazonaws.request=DEBUG
The AWS SDK's log4j output then looks like this:

2017-06-01 11:56:58 DEBUG request:1137 - Sending Request: PUT https://samsa-repo.s3.amazonaws.com /sendme.log Headers: (User-Agent: aws-sdk-java/1.11.109 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.111-b14/1.8.0_111 scala/2.11.8, amz-sdk-invocation-id: 9179e5c2-fee3-4e6e-abb9-b50f882f1966, Content-Length: 9, x-amz-storage-class: REDUCED_REDUNDANCY, Content-MD5: /UERdk1lrFHXgNJHTSd3QA==, Content-Type: application/octet-stream, ) 

2017-06-01 11:56:58 DEBUG request:87 - Received successful response: 200, AWS Request ID: 3695D599CB1FD794
And for an error response:

2017-06-01 13:58:24 DEBUG request:1572 - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. (Service: Amazon S3; Status Code: 400; Error Code: BadDigest; Request ID: 684584BD135900F3), S3 Extended Request ID: Y1NowPaA/mhydTWaDBupS7o7CA/PkliiVKzmDrDQwENIOdrg049h8BZ+I6Pi1GC8TZqBq1AJGJg=
Since JSON-formatted logs are much easier to query, you can convert the log4j output to JSON with the following Maven dependency:

<dependency>
    <groupId>net.logstash.log4j</groupId>
    <artifactId>jsonevent-layout</artifactId>
    <version>1.7</version>
</dependency>
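Adding the dependency alone does not change the output; the appender's layout in the same log4j.properties must also point at the JSON layout class, roughly like this:

```properties
# Emit logstash-style JSON events instead of the pattern layout above
log4j.appender.file.layout=net.logstash.log4j.JSONEventLayoutV1
```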
The logs then look like this:

  {
    "@timestamp": "2017-06-01T21:05:37.204Z",
    "source_host": "M00974000.prayagupd.net",
    "file": "AmazonHttpClient.java",
    "method": "executeOneRequest",
    "level": "DEBUG",
    "line_number": "1137",
    "thread_name": "ScalaTest-run-running-PublishToSimpleStorageServiceSpecs",
    "@version": 1,
    "logger_name": "com.amazonaws.request",
    "message": "Sending Request: HEAD https://samsa-repo.s3.amazonaws.com / Headers: (User-Agent: aws-sdk-java/1.11.109 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.111-b14/1.8.0_111 scala/2.11.8, amz-sdk-invocation-id: 39fd8121-b40d-cb48-a6ea-65cf580f569f, Content-Type: application/octet-stream, ) ",
    "class": "com.amazonaws.http.AmazonHttpClient$RequestExecutor",
    "mdc": {}
  },
  {
    "@timestamp": "2017-06-01T21:05:38.337Z",
    "source_host": "M00974000.prayagupd.net",
    "file": "AwsResponseHandlerAdapter.java",
    "method": "handle",
    "level": "DEBUG",
    "line_number": "87",
    "thread_name": "ScalaTest-run-running-PublishToSimpleStorageServiceSpecs",
    "@version": 1,
    "logger_name": "com.amazonaws.request",
    "message": "Received successful response: 200, AWS Request ID: null",
    "class": "com.amazonaws.http.response.AwsResponseHandlerAdapter",
    "mdc": {}
  }
Once the logs have been shipped to elasticsearch with a forwarder such as filebeat, you can aggregate/search logs/requests by the message value "Sending Request".

If a filebeat forwarder / elasticsearch / kibana dashboard is overkill, you may want to aggregate the log file locally instead.
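A minimal sketch of that local aggregation, assuming the log format shown above: count the lines of myapp.log that contain "Sending Request" (the `RequestLogCounter` class name is made up for illustration).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

class RequestLogCounter {
    // Counts log lines that record an outgoing SDK request.
    static long countSentRequests(Path logFile) throws IOException {
        try (Stream<String> lines = Files.lines(logFile)) {
            return lines.filter(line -> line.contains("Sending Request")).count();
        }
    }
}
```

The same one-off count could of course also be done with grep on the log file.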