Amazon Web Services: What is AWSRequestMetricsFullSupport, and how do I turn it off?


I am trying to save some data from a Spark DataFrame to an S3 bucket, which is straightforward:

dataframe.saveAsParquetFile("s3://kirk/my_file.parquet")
The data is saved successfully, but the UI stays busy for a long time afterwards, and I get thousands of lines like these:

2015-09-04 20:48:19,591 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[5C3211750F4FF5AB], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[63.827], HttpRequestTime=[62.919], HttpClientReceiveResponseTime=[61.678], RequestSigningTime=[0.05], ResponseProcessingTime=[0.812], HttpClientSendRequestTime=[0.038],
2015-09-04 20:48:19,610 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[204], ServiceName=[Amazon S3], AWSRequestID=[709DA41540539FE0], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[18.064], HttpRequestTime=[17.959], HttpClientReceiveResponseTime=[16.703], RequestSigningTime=[0.06], ResponseProcessingTime=[0.003], HttpClientSendRequestTime=[0.046],
2015-09-04 20:48:19,664 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[204], ServiceName=[Amazon S3], AWSRequestID=[1B1EB812E7982C7A], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[54.36], HttpRequestTime=[54.26], HttpClientReceiveResponseTime=[53.006], RequestSigningTime=[0.057], ResponseProcessingTime=[0.002], HttpClientSendRequestTime=[0.034],
2015-09-04 20:48:19,675 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[404], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: AF6F960F3B2BF3AB), S3 Extended Request ID: CLs9xY8HAxbEAKEJC4LS1SgpqDcnHeaGocAbdsmYKwGttS64oVjFXJOe314vmb9q], ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], AWSRequestID=[AF6F960F3B2BF3AB], ServiceEndpoint=[https://kirk.s3.amazonaws.com], Exception=1, HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[10.111], HttpRequestTime=[10.009], HttpClientReceiveResponseTime=[8.758], RequestSigningTime=[0.043], HttpClientSendRequestTime=[0.044],
2015-09-04 20:48:19,685 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[404], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: F2198ACEB4B2CE72), S3 Extended Request ID: J9oWD8ncn6WgfUhHA1yqrBfzFC+N533oD/DK90eiSvQrpGH4OJUc3riG2R4oS1NU], ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], AWSRequestID=[F2198ACEB4B2CE72], ServiceEndpoint=[https://kirk.s3.amazonaws.com], Exception=1, HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[9.879], HttpRequestTime=[9.776], HttpClientReceiveResponseTime=[8.537], RequestSigningTime=[0.05], HttpClientSendRequestTime=[0.033],
I can understand that some users may be interested in logging the latency of their S3 operations, but is there a way to disable any and all of the monitoring and logging done by
AWSRequestMetricsFullSupport
?

When I check the Spark UI, it tells me the job completed relatively quickly, yet the console keeps flooding with these messages for a long time afterwards. The relevant source reads as follows:

/**
 * Start an event which will be timed. [...]
 *
 * This feature is enabled if the system property
 * "com.amazonaws.sdk.enableRuntimeProfiling" is set, or if a
 * {@link RequestMetricCollector} is in use either at the request,
 * web service client, or AWS SDK level.
 *
 * @param eventName
 *            - The name of the event to start
 *
 * @see AwsSdkMetrics
 */
As outlined in the referenced documentation, this can be disabled via a system property:

The default metric collection of the AWS SDK for Java is disabled by default. To enable it, simply specify the system property "com.amazonaws.sdk.enableDefaultMetrics" when starting up the JVM. When the system property is specified, a default metric collector will be started at the AWS SDK level. The default implementation uploads the captured request/response metrics to Amazon CloudWatch using AWS credentials obtained via the default credentials provider chain.
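For illustration, the property is a bare JVM flag; in a Spark job it would be passed through the driver's Java options. A sketch (the jar names are hypothetical; note that simply *not* setting the property is what keeps default metric collection disabled):

```shell
# Plain JVM: default metrics are collected only when this property is present.
java -Dcom.amazonaws.sdk.enableDefaultMetrics -jar my-app.jar

# Spark equivalent via driver options:
spark-submit --driver-java-options "-Dcom.amazonaws.sdk.enableDefaultMetrics" my-app.jar
```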

This appears to be hard-wired to be overridable by a
RequestMetricCollector
configured at the request, web service client, or AWS SDK level, which would presumably require respective configuration adjustments in the client/framework in use (Spark, in this case):

Clients who need to fully customize the metric collection can implement the SPI, and then replace the default AWS SDK implementation of the collector via ...
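As a hedged sketch of that override path (untested, and it assumes the v1 AWS SDK for Java's com.amazonaws.metrics package, where MetricCollector.NONE is a built-in no-op implementation):

```java
import com.amazonaws.metrics.AwsSdkMetrics;
import com.amazonaws.metrics.MetricCollector;

public class DisableSdkMetrics {
    public static void main(String[] args) {
        // Replace the SDK-level collector with the no-op implementation
        // before any AWS clients are constructed, so no request metrics
        // are gathered at the SDK level.
        AwsSdkMetrics.setMetricCollector(MetricCollector.NONE);
        // ... create S3 clients / run the Spark job as usual afterwards ...
    }
}
```

Note that this addresses metric *collection*; the INFO-level latency lines shown above are emitted by a logger and are more directly silenced through logging configuration, as the accepted answer below does.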

The documentation for these features seems a bit sparse so far; I'm aware of two related blog posts discussing them.


The best solution I found was to configure Java's logging by passing a
log4j
configuration file to the Spark context (i.e., one that squelches these messages):

--driver-java-options "-Dlog4j.configuration=/home/user/log4j.properties"

where
log4j.properties
is a
log4j
configuration file that suppresses INFO-level messages.
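For reference, a minimal log4j.properties along those lines might look like this. This is a sketch: the logger name com.amazonaws.latency matches the messages shown above, while the appender setup is a generic assumption.

```properties
# Keep warnings and errors on the console, drop anything chattier.
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# The latency spam above is logged under this category:
log4j.logger.com.amazonaws.latency=ERROR
```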

Silencing these logs on EMR release labels proved to be quite a challenge. A bug existed that was fixed in release emr-4.7.2. One working solution was to add this JSON as configuration:

[
  {
    "Classification": "hadoop-log4j",
    "Properties": {
      "log4j.logger.com.amazon.ws.emr.hadoop.fs": "ERROR",
      "log4j.logger.com.amazonaws.latency": "ERROR"
    },
    "Configurations": []
  }
]
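Such classification JSON is normally supplied when the cluster is created, e.g. with the AWS CLI (the file name here is hypothetical):

```shell
aws emr create-cluster \
    --release-label emr-4.7.2 \
    --configurations file://./silence-aws-latency.json \
    ... # remaining cluster options
```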
On release labels prior to emr-4.7.2, this JSON also discarded Spark's buggy default log4j options:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraJavaOptions": "-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=512M -XX:OnOutOfMemoryError='kill -9 %p'"
    },
    "Configurations": []
  }
]

For context: I'm saving a DataFrame with 1M rows and 500 columns. The save takes about 20 seconds, but the latency warnings keep appearing in my console for more than 20 minutes.

Thanks, Steffen. I found the same documentation regarding
AwsSdkMetrics
, and (as you posted) it suggests this should be off by default. I suspect that documentation is outdated. Turning this feature off does not seem trivial; I'll keep working through the blog posts you referenced.