Streaming data to S3 with Iteratees and Enumerators in Play Scala


I am building a Play Framework application in Scala in which I would like to stream an array of bytes to S3. I am using a library to do this, and the "Multipart file upload" section of its documentation is relevant here:

// Retrieve an upload ticket
val result:Future[BucketFileUploadTicket] =
  bucket initiateMultipartUpload BucketFile(fileName, mimeType)

// Upload the parts and save the tickets
val result:Future[BucketFilePartUploadTicket] =
  bucket uploadPart (uploadTicket, BucketFilePart(partNumber, content))

// Complete the upload using both the upload ticket and the part upload tickets
val result:Future[Unit] =
  bucket completeMultipartUpload (uploadTicket, partUploadTickets)
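
For reference, here is a minimal, untested sketch of how these three futures might be chained for a single-part upload. It assumes an implicit ExecutionContext and that bucket, fileName, mimeType and content are already in scope; the name singlePartUpload is only illustrative:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Initiate the upload, upload one part, then complete, in sequence
val singlePartUpload: Future[Unit] =
  for {
    uploadTicket     <- bucket initiateMultipartUpload BucketFile(fileName, mimeType)
    partUploadTicket <- bucket uploadPart (uploadTicket, BucketFilePart(1, content))
    _                <- bucket completeMultipartUpload (uploadTicket, Seq(partUploadTicket))
  } yield ()
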
I tried to do the same thing in my application, but using Iteratees and Enumerators.

The streams and asynchronicity make things a bit more complicated, but here is what I have so far (note that uploadTicket is defined earlier in the code):


Everything compiles and runs without error. In fact, "Success" gets printed, but no file ever shows up on S3.

There could be multiple problems with your code. It is a bit unreadable because of the map method calls, and your future composition might be wrong. Another problem could be that all chunks (except the last one) should be at least 5MB.
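
To illustrate the future composition point with a generic, hypothetical example (startSomething and finishSomething are made-up stand-ins, not code from the question): using map where flatMap is needed produces a nested future, and the inner future's outcome is easily lost:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def startSomething(): Future[String] = Future.successful("ticket")
def finishSomething(ticket: String): Future[Unit] = Future.successful(())

// map over a future that itself produces a future nests them;
// the inner future (and any failure in it) is easy to lose track of
val nested: Future[Future[Unit]] = startSomething() map finishSomething

// flatMap (or a for-comprehension) keeps a single layer
val flat: Future[Unit] = startSomething() flatMap finishSomething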

The code below is untested, but it shows a different approach. The iteratee style is one where you create small building blocks and compose them into a pipe of operations.
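
As a toy illustration of that style (unrelated to S3; the building blocks here are made up purely for the example):

import play.api.libs.iteratee._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Two small building blocks: parse strings to ints, keep only the even ones
val toInts   = Enumeratee.map[String](_.toInt)
val onlyEven = Enumeratee.filter[Int](_ % 2 == 0)

// Compose them into a pipe and run it against a source
val sum: Future[Int] =
  Enumerator("1", "2", "3", "4") through (toInts ><> onlyEven) run Iteratee.fold[Int, Int](0)(_ + _)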

To make the code compile, I added a trait and a few methods:

import play.api.libs.iteratee._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

trait BucketFilePartUploadTicket
val uploadPart: (Int, Array[Byte]) => Future[BucketFilePartUploadTicket] = ???
val completeUpload: Seq[BucketFilePartUploadTicket] => Future[Unit] = ???
val body: Enumerator[Array[Byte]] = ???
Here we create the parts:

// Create 5MB chunks
val chunked = {
  val take5MB = Traversable.takeUpTo[Array[Byte]](1024 * 1024 * 5)
  Enumeratee.grouped(take5MB transform Iteratee.consume())
}

// Add a counter, used as part number later on
val zipWithIndex = Enumeratee.scanLeft[Array[Byte]](0 -> Array.empty[Byte]) {
  case ((counter, _), bytes) => (counter + 1) -> bytes
}

// Map the (Int, Array[Byte]) tuple to a BucketFilePartUploadTicket
val uploadPartTickets = Enumeratee.mapM[(Int, Array[Byte])](uploadPart.tupled)

// Construct the pipe to connect to the enumerator
// the ><> operator is an alias for compose; it is more intuitive because of
// its arrow-like structure
val pipe = chunked ><> zipWithIndex ><> uploadPartTickets

// Create a consumer that ends by finishing the upload
val consumeAndComplete =
  Iteratee.getChunks[BucketFilePartUploadTicket] mapM completeUpload

// This is the result, a Future[Unit]
val result = body through pipe run consumeAndComplete
Note that I did not test any of this code and may have made some mistakes in my approach. It does, however, show a different way of dealing with the problem and will probably help you find a good solution.

Note that this approach waits for a part to finish uploading before it takes on the next part. If the connection from your server to Amazon is slower than the connection from the browser to your server, this mechanism will slow down the input.


You could take another approach where you do not wait for the Future of the part upload to complete. This would add another step where you use Future.sequence to convert the sequence of upload futures into a single future containing the sequence of results. The result would be a mechanism that sends a part to Amazon as soon as you have enough data.
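
A rough, untested sketch of that variant, reusing the chunked and zipWithIndex building blocks from above (the names startUploads, consumeAndCompleteEager and eagerResult are only illustrative):

// Start each part upload as soon as a chunk is available;
// Enumeratee.map does not wait for the resulting Future
val startUploads = Enumeratee.map[(Int, Array[Byte])](uploadPart.tupled)

// Collect the pending uploads, flatten them with Future.sequence
// and complete the upload with all part tickets
val consumeAndCompleteEager =
  Iteratee.getChunks[Future[BucketFilePartUploadTicket]] mapM { futures =>
    Future.sequence(futures) flatMap completeUpload
  }

val eagerResult = body through (chunked ><> zipWithIndex ><> startUploads) run consumeAndCompleteEager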

There are always multiple problems with my code. What else is new? It is true, though, that the chunks will not be 5MB in every case, so that is an issue. I will try your idea and see what I can do.