akka.http.scaladsl.model.ParsingException: Unexpected end of multipart entity when uploading a large file to S3 with Akka HTTP


I am trying to upload a large file (90 MB at the moment) to S3 using Akka HTTP with the Alpakka S3 connector. It works fine for small files (25 MB), but when I try to upload the large file (90 MB), I get the following error:

akka.http.scaladsl.model.ParsingException: Unexpected end of multipart entity
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:108)
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:103)
at akka.stream.impl.fusing.Collect$$anon$6.$anonfun$wrappedPf$1(Ops.scala:227)
at akka.stream.impl.fusing.SupervisedGraphStageLogic.withSupervision(Ops.scala:186)
at akka.stream.impl.fusing.Collect$$anon$6.onPush(Ops.scala:229)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:523)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:510)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$shortCircuitBatch(ActorGraphInterpreter.scala:739)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:765)
at akka.actor.Actor.aroundReceive(Actor.scala:539)
at akka.actor.Actor.aroundReceive$(Actor.scala:537)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:614)
at akka.actor.ActorCell.invoke(ActorCell.scala:583)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Although I got a success message at the end, the file was not uploaded completely. Only 45-50 MB of it made it to S3.

I am using the following code (S3Utility.scala):

import akka.actor.ActorSystem
import akka.http.scaladsl.server.directives.FileInfo
import akka.stream.Materializer
import akka.stream.alpakka.s3.MultipartUploadResult
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Sink
import akka.util.ByteString
import scala.concurrent.Future
class S3Utility(implicit as: ActorSystem, m: Materializer) {
  private val bucketName = "test"

  def sink(fileInfo: FileInfo): Sink[ByteString, Future[MultipartUploadResult]] = {
    val fileName = fileInfo.fileName
    S3.multipartUpload(bucketName, fileName)
  }
}
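
Note that S3.multipartUpload reads its credentials and region from the alpakka.s3 section of the configuration. A minimal sketch with static values, assuming an Alpakka 1.x-style config layout (key names vary across versions; all values here are placeholders):

alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "my-access-key"       # placeholder
      secret-access-key = "my-secret-key"   # placeholder
    }
    region {
      provider = static
      default-region = "us-east-1"          # placeholder
    }
  }
}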
Route:

def uploadLargeFile: Route =
  post {
    path("import" / "file") {
      extractMaterializer { implicit materializer =>
        withoutSizeLimit {
          fileUpload("file") {
            case (metadata, byteSource) =>
              logger.info(s"Request received to import large file: ${metadata.fileName}")
              val uploadFuture = byteSource.runWith(s3Utility.sink(metadata))
              onComplete(uploadFuture) {
                case Success(result) =>
                  logger.info(s"Successfully uploaded file")
                  complete(StatusCodes.OK)
                case Failure(ex) =>
                  logger.error("Error in uploading file", ex)
                  complete(StatusCodes.FailedDependency, ex.getMessage)
              }
          }
        }
      }
    }
  }
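
For context, a minimal sketch of how this route could be bound to a server. The host, port, and system name are assumptions; bindAndHandle matches the classic akka-http 10.1.x API visible in the stack trace above:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer

implicit val system: ActorSystem = ActorSystem("upload-server") // assumed name
implicit val materializer: ActorMaterializer = ActorMaterializer()

// Bind the route; "0.0.0.0" and 8080 are placeholder values.
Http().bindAndHandle(uploadLargeFile, "0.0.0.0", 8080)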
Any help would be appreciated. Thanks.

Strategy 1

Could you break the file into smaller chunks and retry? Here is sample code:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// The signing region is a required second argument; "us-east-1" is a placeholder.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("some-kind-of-endpoint", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("user", "pass")))
        .disableChunkedEncoding()
        .withPathStyleAccessEnabled(true)
        .build();

// Create a list of PartETag objects. You get one of these for each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize the multipart upload.
InitiateMultipartUploadRequest initRequest =
        new InitiateMultipartUploadRequest("bucket", "key");
InitiateMultipartUploadResult initResponse =
        s3Client.initiateMultipartUpload(initRequest);

File file = new File("filepath");
long contentLength = file.length();
long partSize = 5242880; // 5 MB, the minimum S3 part size for all but the last part.

try {
    // Step 2: Upload the parts.
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++) {
        // The last part can be less than 5 MB. Adjust the part size.
        partSize = Math.min(partSize, (contentLength - filePosition));

        // Create a request to upload a part.
        UploadPartRequest uploadRequest = new UploadPartRequest()
                .withBucketName("bucket").withKey("key")
                .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                .withFileOffset(filePosition)
                .withFile(file)
                .withPartSize(partSize);

        // Upload the part and add the response's ETag to our list.
        partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());

        filePosition += partSize;
    }

    // Step 3: Complete the multipart upload.
    CompleteMultipartUploadRequest compRequest =
            new CompleteMultipartUploadRequest("bucket", "key",
                    initResponse.getUploadId(), partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    // On any failure, abort so S3 does not keep the incomplete parts around.
    s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
            "bucket", "key", initResponse.getUploadId()));
}
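
If, as in the question, you only have a stream rather than a java.io.File, one way to apply this strategy (the trade-off is discussed in the comments below) is to first persist the incoming stream to a temporary file and then run the chunked upload against it. A minimal Scala sketch, assuming byteSource is the Source[ByteString, Any] provided by fileUpload:

import java.nio.file.{Files, Path}
import akka.stream.{IOResult, Materializer}
import akka.stream.scaladsl.{FileIO, Source}
import akka.util.ByteString
import scala.concurrent.Future

// Write the incoming stream to a temporary file; once the returned Future
// completes, the file can be handed to the chunked uploader shown above.
def persistToTempFile(byteSource: Source[ByteString, Any])
                     (implicit m: Materializer): Future[(Path, IOResult)] = {
  import m.executionContext
  val tempFile = Files.createTempFile("upload-", ".tmp")
  byteSource.runWith(FileIO.toPath(tempFile)).map(res => (tempFile, res))
}

The asker wanted to avoid the temporary file entirely, which is what motivated Strategy 2 below.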

Strategy 2

Increase the idle timeout of the Akka HTTP server:

akka.http.server.idle-timeout=infinite

This increases the period for which the server tolerates an idle connection. By default its value is 60 seconds; if the upload does not finish within that period, the server closes the connection and throws the "Unexpected end of multipart entity" error.

Comments:

"@himanshuiiian (I am a contributor) I cannot get hold of the file up front. In Akka HTTP I receive a stream, so to implement your solution I would need to write the incoming stream to a file first."
"Yes, that is one of the solutions, but I don't want to create a temporary file; I just want to use the stream directly."
"Got it, @Rishi. I have updated my answer. Please check."
"Yes, I arrived at the same solution. It is fixed and working now. Thanks @himanshuitian for looking into this."
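
For reference, a minimal sketch of the Strategy 2 setting in an application.conf; the request-timeout line is an additional assumption, only needed if producing the response itself outlasts the default timeout:

akka.http.server {
  # Keep slow multipart uploads alive instead of closing the connection
  # after the default 60 s of inactivity.
  idle-timeout = infinite

  # Assumption: also relax the request timeout for very long-running uploads.
  request-timeout = infinite
}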