Java: process dies when trying to put an InputStream to Amazon S3

Tags: java, amazon-s3

This is how I write the InputStream:

public OutputStream getOutputStream(@Nonnull final String uniqueId) throws PersistenceException {
        final PipedOutputStream outputStream = new PipedOutputStream();
        final PipedInputStream inputStream;
        try {
            inputStream = new PipedInputStream(outputStream);
            new Thread(
                    new Runnable() {
                        @Override
                        public void run() {
                            PutObjectRequest putObjectRequest = new PutObjectRequest("haritdev.sunrun", "sample.file.key", inputStream, new ObjectMetadata());
                            PutObjectResult result = amazonS3Client.putObject(putObjectRequest);
                            LOGGER.info("result - " + result.toString());
                            try {
                                inputStream.close();
                            } catch (IOException e) {
                                // ignored: nothing useful to do if closing the pipe fails
                            }
                        }
                    }
            ).start();
        } catch (AmazonS3Exception e) {
            throw new PersistenceException("could not generate output stream for " + uniqueId, e);
        } catch (IOException e) {
            throw new PersistenceException("could not generate input stream for S3 for " + uniqueId, e);
        }
        try {
            return new GZIPOutputStream(outputStream);
        } catch (IOException e) {
            LOGGER.error(e.getMessage(), e);
            throw new PersistenceException("Failed to get output stream for " + uniqueId + ": " + e.getMessage(), e);
        }
    }
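
For reference, a caller would presumably use the returned stream roughly like this (a minimal sketch; the persistence variable, the id, and the payload are hypothetical):

// Hypothetical caller: write through the returned GZIPOutputStream, then close it
// so the GZIP trailer is flushed and the piped reader (the S3 upload thread)
// sees end-of-stream.
try (OutputStream out = persistence.getOutputStream("some-unique-id")) {
    out.write("example payload".getBytes(StandardCharsets.UTF_8));
}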
This is where I see my process die:


I tried the same approach, and it failed for me as well.

I write all the data to the output stream first, and only start the upload to S3 after copying the data from the output stream into an input stream:

...
// Data written to outputStream here
...
byte[] byteArray = outputStream.toByteArray();
amazonS3Client.uploadPart(new UploadPartRequest()
  .withBucketName(bucket)
  .withKey(key)
  .withInputStream(new ByteArrayInputStream(byteArray))
  .withPartSize(byteArray.length)
  .withUploadId(uploadId)
  .withPartNumber(partNumber));

Having to write the whole block of data and copy it into memory before the upload to S3 can even begin somewhat defeats the purpose of writing to a stream, but it is the only way I got this to work.
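
For context, uploadPart is only one step of a multipart upload; below is a hedged sketch of the surrounding AWS SDK for Java v1 calls this snippet assumes (bucket, key, uploadId, partNumber, and byteArray come from the snippet above, everything else is illustrative):

// Sketch of the full multipart-upload lifecycle around uploadPart.
InitiateMultipartUploadResult init = amazonS3Client.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(bucket, key));
String uploadId = init.getUploadId();

List<PartETag> partETags = new ArrayList<>();
// for each buffered part: upload it and collect the returned ETag
UploadPartResult partResult = amazonS3Client.uploadPart(new UploadPartRequest()
        .withBucketName(bucket)
        .withKey(key)
        .withInputStream(new ByteArrayInputStream(byteArray))
        .withPartSize(byteArray.length)
        .withUploadId(uploadId)
        .withPartNumber(partNumber));
partETags.add(partResult.getPartETag());

// completing the upload stitches the parts into the final object
amazonS3Client.completeMultipartUpload(
        new CompleteMultipartUploadRequest(bucket, key, uploadId, partETags));

Note that S3 requires every part except the last to be at least 5 MB, which is one reason each part ends up fully buffered before the uploadPart call.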

Here is what I tried, and it worked:

try (PipedOutputStream pipedOutputStream = new PipedOutputStream();
     // connect the input stream to the output stream, otherwise the
     // reading side throws "Pipe not connected"
     PipedInputStream pipedInputStream = new PipedInputStream(pipedOutputStream)) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                // write some data to pipedOutputStream, then close it so
                // the reading side sees end-of-stream
            } catch (IOException e) {
                // handle exception
            }
        }
    }).start();
    PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET, FILE_NAME, pipedInputStream, new ObjectMetadata());
    s3Client.putObject(putObjectRequest);
}

This code works with S3, but the SDK logs a warning: the content length was not set, so the stream will be buffered in memory, which can result in an OutOfMemoryError. I don't believe there is any cheap way to set the content length on the ObjectMetadata just to get rid of this message; one can only hope the AWS SDK does not read the entire stream into memory just to determine the content length.
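
If the size is known up front (for instance when the data has been buffered into a byte array anyway), the warning can be silenced by setting the length explicitly; a minimal sketch using ObjectMetadata.setContentLength, with a hypothetical payload:

// With the content length set, the SDK can stream the upload instead of
// buffering the stream just to determine its length.
byte[] bytes = "example payload".getBytes(StandardCharsets.UTF_8);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(bytes.length);
s3Client.putObject(new PutObjectRequest(BUCKET, FILE_NAME,
        new ByteArrayInputStream(bytes), metadata));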

Comment: Can you include the stack trace? @DavidLevesque I have added it. I don't see anything other than this; please let me know how I can get what you are looking for.