S3/MinIO with Java/Scala: Saving byte buffer chunks of files to object storage

So, let's say I have a Scala Vert.x Web REST API that receives file uploads via HTTP multipart requests. However, it does not receive the incoming file data as a single InputStream. Instead, each file arrives as a series of byte buffers, handed over through a few callback functions.

The callbacks essentially look like this:

  // the callback that receives byte buffers (chunks) of the file being uploaded
  //  it is called multiple times until the full file has been received
  upload.handler { buffer =>
    // send chunk to backend
  }

  // the callback that gets called after the full file has been uploaded
  //  (i.e. after all chunks have been received)
  upload.endHandler { _ =>
    // do something after the file has been uploaded
  }

  // callback called if an exception is raised while receiving the file
  upload.exceptionHandler { e =>
    // do something to handle the exception
  }
Now, I want to use these callbacks to save the file into a MinIO bucket (in case you're unfamiliar, MinIO is basically self-hosted S3, and its API is nearly identical to the S3 Java API).

Since I don't have a file handle, I need to use putObject() to put an InputStream into MinIO.

The inefficient solution I'm currently using with the MinIO Java API looks like this:

// this is all inside the context of handling a HTTP request
val out = new PipedOutputStream()
val in = new PipedInputStream()
var size = 0
in.connect(out)

upload.handler { buffer =>
    out.write(buffer.getBytes)
    size += buffer.length()
}

upload.endHandler { _ =>
    minioClient.putObject(
        PutObjectArgs.builder()
            .bucket("my-bucket")
            .object("my-filename")
            .stream(in, size, 50000000)
            .build())
}
Obviously, this isn't optimal. Since I'm using simple java.io streams here, the whole file ends up loaded into memory: putObject() is only called in the endHandler, after every chunk has already been written into the pipe, so the pipe has to buffer the entire file until then.
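
As an aside, here is a minimal sketch (an illustration, not the original code) of how the piped approach can avoid buffering the whole file: putObject() has to already be draining the pipe on a worker thread while the event loop writes chunks into it, and MinIO's stream() accepts an object size of -1 together with an explicit part size when the total length isn't known up front.

// Sketch only: start the consumer BEFORE the first chunk arrives, so the pipe
// never has to hold more than its small fixed-size buffer.
val out = new PipedOutputStream()
val in = new PipedInputStream(out, 8192)

new Thread(() => {
  minioClient.putObject(
    PutObjectArgs.builder()
      .bucket("my-bucket")
      .object("my-filename")
      .stream(in, -1, 10 * 1024 * 1024) // -1 = unknown size, so a part size is required
      .build())
}).start()

upload.handler { buffer =>
  out.write(buffer.getBytes) // caveat: blocks the event loop whenever the consumer lags
}
upload.endHandler { _ => out.close() } // EOF lets putObject() finish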

I don't want to save the file to disk on the server before putting it into object storage. I want to put it straight into my object store.

How can I accomplish this using the S3 API and the series of byte buffers handed to me through the upload.handler callback?

EDIT

I should add that I'm using MinIO because I can't use a commercially hosted cloud solution such as S3. However, as mentioned on MinIO's website, I can use Amazon's S3 Java SDK while using MinIO as my storage solution.
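
For reference, a hypothetical client setup (endpoint, region, and credentials are placeholders) that points the AWS S3 Java SDK v1, which the attempt below uses, at a local MinIO server:

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.client.builder.AwsClientBuilder
import com.amazonaws.services.s3.AmazonS3ClientBuilder

val s3Client = AmazonS3ClientBuilder.standard()
  .withEndpointConfiguration(
    new AwsClientBuilder.EndpointConfiguration("http://localhost:9000", "us-east-1"))
  .withCredentials(
    new AWSStaticCredentialsProvider(new BasicAWSCredentials("my-username", "my-password")))
  .withPathStyleAccessEnabled(true) // MinIO buckets are usually addressed path-style
  .build()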

I tried to follow the approach of uploading the object to S3 in chunks (multipart upload).

The solution I tried looks like this:

      context.request.uploadHandler { upload =>
        println(s"Filename: ${upload.filename()}")

        val partETags = new util.ArrayList[PartETag]
        val initRequest = new InitiateMultipartUploadRequest("docs", "my-filekey")
        val initResponse = s3Client.initiateMultipartUpload(initRequest)

        upload.handler { buffer =>
          println("uploading part", buffer.length())
          try {
            val request = new UploadPartRequest()
              .withBucketName("docs")
              .withKey("my-filekey")
              .withPartSize(buffer.length())
              .withUploadId(initResponse.getUploadId)
              .withInputStream(new ByteArrayInputStream(buffer.getBytes()))

            val uploadResult = s3Client.uploadPart(request)
            partETags.add(uploadResult.getPartETag)
          } catch {
            case e: Exception => println("Exception raised: ", e)
          }
        }

        // this gets called for EACH uploaded file sequentially
        upload.endHandler { _ =>
          // upload successful
          println("done uploading")
          try {
            val compRequest = new CompleteMultipartUploadRequest("docs", "my-filekey", initResponse.getUploadId, partETags)
            s3Client.completeMultipartUpload(compRequest)
          } catch {
            case e: Exception => println("Exception raised: ", e)
          }
          context.response.setStatusCode(200).end("Uploaded")
        }
        upload.exceptionHandler { e =>
          // handle the exception
          println("exception thrown", e)
        }
      }
    }
This works for small files (my small test file was 11 bytes), but not for large files.

For large files, the processing in upload.handler gets progressively slower as the upload goes on. Also, upload.endHandler never gets called, and the file somehow keeps uploading even after 100% of it has been sent.


However, as soon as I comment out the s3Client.uploadPart(request) part in upload.handler and the s3Client.completeMultipartUpload call in upload.endHandler (essentially throwing the file away instead of saving it to object storage), the upload proceeds normally and terminates correctly.

I figured out what I was doing wrong (when using the S3 client): I was not accumulating bytes in my upload.handler. I need to accumulate bytes until the buffer is large enough to upload as a part, rather than uploading a part every time a few bytes are received.
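
To illustrate that fix in terms of the AWS SDK attempt above, here is a rough sketch (an illustration, not the code I ended up with): accumulate incoming chunks in a Vert.x Buffer and only call uploadPart() once at least 5 MiB, S3's minimum size for all but the last part, has been collected.

val minPartSize = 5 * 1024 * 1024
var partBuffer = Buffer.buffer()
var partNumber = 1

// upload whatever has accumulated so far as the next part
def flushPart(): Unit = {
  val request = new UploadPartRequest()
    .withBucketName("docs")
    .withKey("my-filekey")
    .withUploadId(initResponse.getUploadId)
    .withPartNumber(partNumber) // parts must be numbered 1..10000
    .withPartSize(partBuffer.length())
    .withInputStream(new ByteArrayInputStream(partBuffer.getBytes))
  partETags.add(s3Client.uploadPart(request).getPartETag)
  partNumber += 1
  partBuffer = Buffer.buffer() // start accumulating the next part
}

upload.handler { buffer =>
  partBuffer.appendBuffer(buffer)
  if (partBuffer.length() >= minPartSize) flushPart()
}

upload.endHandler { _ =>
  if (partBuffer.length() > 0) flushPart() // the final part may be smaller than 5 MiB
  s3Client.completeMultipartUpload(
    new CompleteMultipartUploadRequest("docs", "my-filekey", initResponse.getUploadId, partETags))
}

Note that uploadPart() is still a blocking call made on the event loop here; in practice you would also want to move it onto a worker thread.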

Since neither Amazon's S3 client nor the MinIO client did what I wanted, I decided to dig into how putObject() is actually implemented and make my own. This is what I came up with.

This implementation is specific to Vert.x, but it can easily be generalized to work with the built-in java.io InputStreams by using a while loop and a pair of Piped- streams; a sketch of that loop follows below.

This implementation is also specific to MinIO, but it can easily be adapted to use an S3 client since, for the most part, the two APIs are the same.
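
As a rough illustration of that generalization, a method like the following could be added to the CustomMinioClient class shown below (a sketch under that assumption, since uploadPart() and friends are protected members of MinioClient):

// Sketch only: the same accumulate-and-upload logic, driven by a plain
// java.io.InputStream via a while loop instead of Vert.x callbacks.
def putPlainInputStream(bucket: String, region: String, objectName: String,
                        uploadId: String, in: java.io.InputStream,
                        parts: ListBuffer[Part]): Unit = {
  val minPartSize = 5 * 1024 * 1024
  val chunk = new Array[Byte](8192)
  val partBuffer = new java.io.ByteArrayOutputStream()
  var partNumber = 1

  // upload the accumulated bytes as the next part
  def flush(): Unit = {
    val bytes = partBuffer.toByteArray
    val response = uploadPart(bucket, region, objectName, bytes, bytes.length, uploadId, partNumber, null, null)
    parts.addOne(new Part(partNumber, response.etag))
    partNumber += 1
    partBuffer.reset()
  }

  var read = in.read(chunk)
  while (read != -1) {
    partBuffer.write(chunk, 0, read)
    if (partBuffer.size >= minPartSize) flush() // non-final parts must be >= 5 MiB
    read = in.read(chunk)
  }
  if (partBuffer.size > 0) flush() // the final part may be smaller
}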

In this example, Buffer is basically a container around a ByteArray, and I'm not doing anything special with it here. I replaced it with a byte array to make sure it would still work, and it did.

package server

import com.google.common.collect.HashMultimap
import io.minio.MinioClient
import io.minio.messages.Part
import io.vertx.core.buffer.Buffer
import io.vertx.core.streams.ReadStream

import scala.collection.mutable.ListBuffer

class CustomMinioClient(client: MinioClient) extends MinioClient(client) {
  def putReadStream(bucket: String = "my-bucket",
                    objectName: String,
                    region: String = "us-east-1",
                    data: ReadStream[Buffer],
                    objectSize: Long,
                    contentType: String = "application/octet-stream"
                   ) = {
    val headers: HashMultimap[String, String] = HashMultimap.create()
    headers.put("Content-Type", contentType)
    var uploadId: String = null

    try {
      val parts = new ListBuffer[Part]()
      val createResponse = createMultipartUpload(bucket, region, objectName, headers, null)
      uploadId = createResponse.result.uploadId()

      var partNumber = 1
      var uploadedSize = 0L // Long, to match objectSize and avoid overflow past 2 GiB

      // a buffer used to accumulate bytes from the incoming stream until we have enough to make an `uploadPart` request
      var partBuffer = Buffer.buffer()

      // S3's minimum part size is 5mb, excepting the last part
      // you should probably implement your own logic for determining how big
      // to make each part based off the total object size to avoid unnecessary calls to S3 to upload small parts.
      val minPartSize = 5 * 1024 * 1024

      data.handler { buffer =>

        partBuffer.appendBuffer(buffer)

        val isMinPartSize = partBuffer.length >= minPartSize
        val isLastPart = uploadedSize + partBuffer.length == objectSize

        if (isMinPartSize || isLastPart) {

          val partResponse = uploadPart(
            bucket,
            region,
            objectName,
            partBuffer.getBytes,
            partBuffer.length,
            uploadId,
            partNumber,
            null,
            null
          )

          parts.addOne(new Part(partNumber, partResponse.etag))
          uploadedSize += partBuffer.length
          partNumber += 1

          // empty the part buffer since we have already uploaded it
          partBuffer = Buffer.buffer()
        }
      }


      data.endHandler { _ =>
        completeMultipartUpload(bucket, region, objectName, uploadId, parts.toArray, null, null)
      }

      data.exceptionHandler { exception =>
        // should also probably abort the upload here
        println("Handler caught exception in custom putObject: " + exception)
      }
    } catch {
      // and abort it here as well...
      case e: Exception =>
        println("Exception thrown in custom `putObject`: " + e)
        abortMultipartUpload(
          bucket,
          region,
          objectName,
          uploadId,
          null,
          null
        )
    }
  }
}
All of this can be used quite easily.

First, set up the client:

  private val _minioClient = MinioClient.builder()
    .endpoint("http://localhost:9000")
    .credentials("my-username", "my-password")
    .build()

  private val myClient = new CustomMinioClient(_minioClient)
Then, wherever the upload request is received:

      context.request.uploadHandler { upload =>
        myClient.putReadStream(objectName = upload.filename(), data = upload, objectSize = myFileSize)
        context.response().setStatusCode(200).end("done")
      }
The only drawback of this implementation is that you need to know the file sizes in advance for the request.

However, this can easily be solved the way I did it, especially if you're using a web UI (a sketch follows after the list):

  • Before attempting to upload the files, send a request to the server containing a map of file names to file sizes.
  • That pre-request should generate a unique upload ID.
  • The server saves the filename -> file size pairs, indexed by the upload ID.
  • The server sends the upload ID back to the client.
  • The client sends the multipart upload request using the upload ID.
  • The server pulls out the list of files and their sizes and uses it to call .putReadStream().
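
A hypothetical sketch of that flow; the route paths, the pendingUploads map, and the JSON body shape (filename -> size pairs) are illustrative assumptions, not part of my actual code:

import java.util.UUID
import java.util.concurrent.ConcurrentHashMap
import scala.jdk.CollectionConverters._

// assumes an existing io.vertx.ext.web.Router named `router` and the
// CustomMinioClient instance `myClient` from the setup above
val pendingUploads = new ConcurrentHashMap[String, Map[String, Long]]()

// steps 1-4: the client registers filename -> size pairs and gets back an upload ID
router.post("/uploads/init").handler { ctx =>
  ctx.request().bodyHandler { body =>
    val json = body.toJsonObject
    val sizes = json.fieldNames().asScala.map(name => name -> json.getLong(name).longValue()).toMap
    val uploadId = UUID.randomUUID().toString
    pendingUploads.put(uploadId, sizes)
    ctx.response().end(uploadId)
  }
}

// steps 5-6: the client sends the multipart request using that ID; the server
// looks up each file's size and streams it straight through to MinIO
router.post("/uploads/:uploadId").handler { ctx =>
  val sizes = pendingUploads.get(ctx.pathParam("uploadId"))
  ctx.request().setExpectMultipart(true)
  ctx.request().uploadHandler { upload =>
    myClient.putReadStream(objectName = upload.filename(), data = upload,
      objectSize = sizes(upload.filename()))
  }
  ctx.request().endHandler { _ => ctx.response().setStatusCode(200).end("done") }
}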

Comments:

  • "I tried to put the file objects into MinIO using the AWS S3 Java API."
  • "Akka Streams works fine in a streaming fashion with any S3-compatible service (AWS, Ceph, Minio)."
  • "Is there an example of using it without Akka?"
  • "Yes, exactly. I didn't ask for a library recommendation. Why would I want to pull in Akka for this one problem if I get nothing from the rest of it?"
  • "Akka, FS2... Asking why not use a streaming library for streaming seems strange, at least to me..."