.net How do I upload files larger than (about) 5 MB to Amazon S3 with the official SDK?


I'm creating a backup tool using the latest version (1.0.14.1) of the official Amazon S3 SDK. So far everything works fine as long as the file I'm uploading is smaller than 5 MB, but when any file is larger than 5 MB the upload fails with the following exception:

System.Net.WebException: The request was aborted: The request was canceled.
---> System.IO.IOException: Cannot close stream until all bytes are written.
   at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
--- End of inner exception stack trace ---
   at Amazon.S3.AmazonS3Client.ProcessRequestError(String actionName, HttpWebRequest request, WebException we, HttpWebResponse errorResponse, String requestAddr, WebHeaderCollection& respHdrs, Type t)
   at Amazon.S3.AmazonS3Client.Invoke[T](S3Request userRequest)
   at Amazon.S3.AmazonS3Client.PutObject(PutObjectRequest request)
   at backuptolkit.S3Module.UploadFile(String sourceFileName, String destinationFileName) in W:\code\autobackuptol\backuptolkit\S3Module.cs:line 88
   at backuptolkit.S3Module.UploadFiles(String sourceDirectory) in W:\code\autobackuptol\backuptolkit\S3Module.cs:line 108

Note: 5 MB is roughly the failure boundary; it can be slightly lower or slightly higher.

I'm assuming the connection is timing out and the stream is being automatically closed before the file upload completes.

I tried to find a way to set a long timeout (but I couldn't find the option in either AmazonS3 or AmazonS3Config).

Any ideas on how to increase the timeout (like an application-wide setting I could use)? Or is it unrelated to a timeout issue?


Code:
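The question's original code block was not preserved. Based on the stack trace, the failing call is a plain single-request PutObject; a minimal sketch of what such an upload might look like (hypothetical reconstruction, with made-up field names, using the 1.x-era API shown elsewhere on this page) would be:

```csharp
// Hypothetical reconstruction -- not the asker's actual code.
// A plain single-request PutObject like this is what times out on files over ~5 MB.
using Amazon.S3;
using Amazon.S3.Model;

public class S3Module
{
    private readonly AmazonS3 _client; // e.g. from Amazon.AWSClientFactory.CreateAmazonS3Client(...)
    private readonly string _bucket;   // target bucket name

    public S3Module(AmazonS3 client, string bucket)
    {
        _client = client;
        _bucket = bucket;
    }

    public void UploadFile(string sourceFileName, string destinationFileName)
    {
        var request = new PutObjectRequest
        {
            BucketName = _bucket,
            FilePath   = sourceFileName,
            Key        = destinationFileName
            // Note: no Timeout is set here, so the SDK's default applies.
        };
        _client.PutObject(request);
    }
}
```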


Updated answer:

I recently updated one of my projects that uses the Amazon AWS .NET SDK (to version 1.4.1.0), and that version has two improvements that didn't exist when I wrote my original answer here:

  • You can now set Timeout to -1 to make the put operation have an unlimited time limit.
  • There is now an extra property on PutObjectRequest called ReadWriteTimeout which can be set (in milliseconds) as a timeout at the stream read/write level, rather than at the whole put-operation level.

My code now looks like this:

    var putObjectRequest = new PutObjectRequest {
    
        BucketName            = Bucket,
        FilePath              = sourceFileName,
        Key                   = destinationFileName,
        MD5Digest             = md5Base64,
        GenerateMD5Digest     = true,
        Timeout               = -1,
        ReadWriteTimeout      = 300000     // 5 minutes in milliseconds
    };
    
Original answer:


I managed to figure out the answer…

Before posting this question I had explored AmazonS3 and AmazonS3Config, but not PutObjectRequest.

Inside PutObjectRequest there is a Timeout property (set in milliseconds). I have successfully used this to upload larger files (note: setting it to 0 doesn't remove the timeout; you need to specify a positive number of milliseconds… I've had it set to 1 hour).

This works fine:

    var putObjectRequest = new PutObjectRequest {
    
        BucketName            = Bucket,
        FilePath              = sourceFileName,
        Key                   = destinationFileName,
        MD5Digest             = md5Base64,
        GenerateMD5Digest     = true,
        Timeout               = 3600000
    };
    

I ran into a similar problem and started using the TransferUtility class to perform multipart uploads.

At the moment this code is working, though I did have problems when the timeout was set too low.

    var request = new TransferUtilityUploadRequest()
        .WithBucketName(BucketName)
        .WithFilePath(sourceFile.FullName)
        .WithKey(key)
        .WithTimeout(100 * 60 * 60 * 1000)
        .WithPartSize(10 * 1024 * 1024)
        .WithSubscriber((src, e) =>
        {
            Console.CursorLeft = 0;
            Console.Write("{0}: {1} of {2}    ", sourceFile.Name, e.TransferredBytes, e.TotalBytes);
        });

    utility.Upload(request); // 'utility' is a TransferUtility instance created elsewhere
    

As I type this there's a 4 GB upload in progress, and it has already gotten further than it ever has before.

Nick Randell has got the right idea on this. Following on from his post, here's another example with some alternative event handling, and a method to get the completed percentage for the uploaded file:

        private static string WritingLargeFile(AmazonS3 client, int mediaId, string bucketName, string amazonKey, string fileName, string fileDesc, string fullPath)
        {
            try
            {
    
                Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtilityUploadRequest");
                var request = new TransferUtilityUploadRequest()
                    .WithBucketName(bucketName)
                    .WithKey(amazonKey)
                    .WithMetadata("fileName", fileName)
                    .WithMetadata("fileDesc", fileDesc)
                    .WithCannedACL(S3CannedACL.PublicRead)
                    .WithFilePath(fullPath)
                    .WithTimeout(100 * 60 * 60 * 1000) //100 min timeout
                    .WithPartSize(5 * 1024 * 1024); // Upload in 5MB pieces 
    
                request.UploadProgressEvent += new EventHandler<UploadProgressArgs>(uploadRequest_UploadPartProgressEvent);
    
                Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtility");
                TransferUtility fileTransferUtility = new TransferUtility(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"]);
    
                Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Start Upload");
                fileTransferUtility.Upload(request);
    
                return amazonKey;
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Log.Add(LogTypes.Debug, mediaId, "Please check the provided AWS Credentials.");
                }
                else
                {
                    Log.Add(LogTypes.Debug, mediaId, String.Format("An error occurred with the message '{0}' when writing an object", amazonS3Exception.Message));
                }
                return String.Empty; //Failed
            }
        }
    
        private static Dictionary<string, int> uploadTracker = new Dictionary<string, int>();
        static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
        {
            TransferUtilityUploadRequest req = sender as TransferUtilityUploadRequest;          
            if (req != null)
            {
                string fileName = req.FilePath.Split('\\').Last();
                if (!uploadTracker.ContainsKey(fileName))
                    uploadTracker.Add(fileName, e.PercentDone);
    
                //When percentage done changes add logentry:
                if (uploadTracker[fileName] != e.PercentDone)
                {
                    uploadTracker[fileName] = e.PercentDone;
                    Log.Add(LogTypes.Debug, 0, String.Format("WritingLargeFile progress: {1} of {2} ({3}%) for file '{0}'", fileName, e.TransferredBytes, e.TotalBytes, e.PercentDone));
                }
            }
    
        }
    
        public static int GetAmazonUploadPercentDone(string fileName)
        {
            if (!uploadTracker.ContainsKey(fileName))
                return 0;
    
            return uploadTracker[fileName];
        }
    
    
    
    // preparing our file and directory names
    string fileToBackup = @"d:\mybackupFile.zip"; // test file
    string myBucketName = "mys3bucketname"; // your s3 bucket name goes here
    string s3DirectoryName = "justdemodirectory";
    string s3FileName = @"mybackupFile uploaded in 12-9-2014.zip";
    AmazonUploader myUploader = new AmazonUploader();
    myUploader.sendMyFileToS3(fileToBackup, myBucketName, s3DirectoryName, s3FileName);
    
    // Step 1 : 
    AmazonS3Config s3Config = new AmazonS3Config();
    s3Config.RegionEndpoint = GetRegionEndPoint();
    
    // Step 2 :
    using(var client = new AmazonS3Client(My_AWSAccessKey, My_AWSSecretKey, s3Config) )
    {
        // Step 3 :
        PutObjectRequest request = new PutObjectRequest();
        request.Key = My_key;
        request.InputStream = My_fileStream;
        request.BucketName = My_BucketName;
    
        // Step 4 : Finally place object to S3
        client.PutObject(request);
    }
    
    // Step 1 : Create "Transfer Utility" (replacement of old "Transfer Manager")
    TransferUtility fileTransferUtility =
         new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));
    
    // Step 2 : Create Request object
    TransferUtilityUploadRequest uploadRequest =
        new TransferUtilityUploadRequest
        {
            BucketName = My_BucketName,
            FilePath = My_filePath, 
            Key = My_keyName
        };
    
    // Step 3 : Event Handler that will be automatically called on each transferred byte 
    uploadRequest.UploadProgressEvent +=
        new EventHandler<UploadProgressArgs>
            (uploadRequest_UploadPartProgressEvent);
    
    static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
    {    
        Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
    }
    
    // Step 4 : Hit upload and send data to S3
    fileTransferUtility.Upload(uploadRequest);
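
The four steps above can be combined into one self-contained helper. This is a sketch under the same assumptions as the fragments (the region, bucket, path, and key are caller-supplied placeholders, not values from the original answer):

```csharp
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

public static class S3UploadExample
{
    // Combines steps 1-4 above into a single method.
    public static void UploadWithProgress(string bucketName, string filePath, string keyName)
    {
        // Step 1: create the TransferUtility; it switches to multipart
        // upload automatically for large files.
        var fileTransferUtility =
            new TransferUtility(new AmazonS3Client(RegionEndpoint.USEast1));

        // Step 2: build the request object.
        var uploadRequest = new TransferUtilityUploadRequest
        {
            BucketName = bucketName,
            FilePath   = filePath,
            Key        = keyName
        };

        // Step 3: progress handler, invoked as bytes are transferred.
        uploadRequest.UploadProgressEvent +=
            (sender, e) => Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);

        // Step 4: start the upload and block until it finishes.
        fileTransferUtility.Upload(uploadRequest);
    }
}
```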