Amazon S3: issues uploading/downloading files in akka-http / akka-streams


I'm trying to download/upload files to Amazon S3 using akka-streams and akka-http together with the alpakka library. I'm seeing two, possibly related, issues:

  • I can only download very small files, the biggest being 8kb.
  • I can't upload larger files. It fails with the message:

    Error during processing of request: 'Substream Source has not been
    materialized in 5000 milliseconds'. Completing with 500 Internal Server
    Error response. To change default exception handling behavior, provide
    a custom ExceptionHandler.
    akka.stream.impl.SubscriptionTimeoutException: Substream Source has not
    been materialized in 5000 milliseconds

Here are my routes:

pathEnd {
  post {
    fileUpload("attachment") {
      case (metadata, byteSource) =>
        val writeResult: Future[MultipartUploadResult] =
          byteSource.runWith(client.multipartUpload("bucketname", key))
        onSuccess(writeResult) { result =>
          complete(result.location.toString())
        }
    }
  }
} ~

     path("key" / Segment) {
            (sourceSystem, sourceTable, sourceId) =>
              get {
                val result: Future[ByteString] = 
         client.download("bucketname", key).runWith(Sink.head)
                onSuccess(result) {
                  complete(_)
                }
              }
          }
If I try to download a 100KB file, I end up with a truncated version of it, usually around 16-25KB in size. Thanks for any help.

Edit: regarding the download issue, I took Stefano's suggestion and got:

[error]  found   : akka.stream.scaladsl.Source[akka.util.ByteString,akka.NotUsed]
[error]  required: akka.http.scaladsl.marshalling.ToResponseMarshallable
This made it work:

complete(HttpEntity(ContentTypes.`application/octet-stream`, client.download("bucketname", key)))
1) Regarding the download issue: by calling

val result: Future[ByteString] =
  client.download("bucketname", key).runWith(Sink.head)
you are streaming all the data from S3 into memory and then serving the result. Note, too, that Sink.head completes with just the first ByteString chunk the source emits, which is why larger files come back truncated.

Akka HTTP has streaming support, which allows you to stream the bytes directly from the source without buffering them all in memory. More on this can be found in the Akka HTTP documentation. In practice, this means the complete directive can take a Source[ByteString, _], as in:

...
get {
  complete(client.download("bucketname", key))
}
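
As the edit above shows, not every Akka HTTP version has an implicit marshaller for a bare Source[ByteString, _]; wrapping it in a chunked HttpEntity then does the job. Below is a minimal self-contained sketch of the streaming download route; the "download" path literal is illustrative, and client is assumed to be the Alpakka 0.x S3Client from the question:

// Sketch: streaming download route. The Source is handed to the entity
// unmaterialized, so bytes flow from S3 to the client without buffering.
import akka.http.scaladsl.model.{ContentTypes, HttpEntity}
import akka.http.scaladsl.server.Directives._

val downloadRoute =
  path("download" / Segment) { key =>
    get {
      complete(HttpEntity(ContentTypes.`application/octet-stream`,
        client.download("bucketname", key)))
    }
  }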
2) Regarding the upload issue: you can try tweaking the Akka HTTP akka.http.server.parsing.max-content-length setting:

# Default maximum content length which should not be exceeded by incoming request entities.
# Can be changed at runtime (to a higher or lower value) via the `HttpEntity::withSizeLimit` method.
# Note that it is not necessarily a problem to set this to a high value as all stream operations
# are always properly backpressured.
# Nevertheless you might want to apply some limit in order to prevent a single client from consuming
# an excessive amount of server resources.
#
# Set to `infinite` to completely disable entity length checks. (Even then you can still apply one
# programmatically via `withSizeLimit`.)
max-content-length = 8m
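
If a global override is preferred over a per-route directive, the setting can be raised in application.conf instead (a sketch; the 512m value is an arbitrary example, not a recommendation):

# application.conf (sketch): raise the request entity limit globally.
akka.http.server.parsing.max-content-length = 512m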
The resulting code to test this would be roughly:

  withoutSizeLimit {
    fileUpload("attachment") {
      ...
    }
  }
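
Putting the pieces together, here is a minimal sketch of the question's upload route with the size limit lifted. The Alpakka 0.x API is assumed, since the question calls client.multipartUpload; import paths may differ in later Alpakka versions, and key is taken from the question as-is:

// Sketch: upload route without the default 8m entity size limit.
// Assumes Alpakka 0.x, where client.multipartUpload(bucket, key) returns
// a Sink[ByteString, Future[MultipartUploadResult]].
import akka.http.scaladsl.server.Directives._
import akka.stream.alpakka.s3.scaladsl.MultipartUploadResult
import scala.concurrent.Future

val uploadRoute =
  pathEnd {
    post {
      withoutSizeLimit { // lift max-content-length for this route only
        fileUpload("attachment") {
          case (metadata, byteSource) =>
            // Stream the request bytes straight into a multipart S3 upload.
            val writeResult: Future[MultipartUploadResult] =
              byteSource.runWith(client.multipartUpload("bucketname", key))
            onSuccess(writeResult) { result =>
              complete(result.location.toString)
            }
        }
      }
    }
  }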