Azure: download a large file from Blob Storage and split it into 100 MB chunks


I have a 2 GB file in Blob Storage and I am building a console application that downloads it to the desktop. The requirement is to split it into 100 MB chunks and append a number to each file name. I don't need to recombine the files afterwards; all I need are the chunks.

I currently have the code from

but I don't know how to stop the download once a file reaches 100 MB and start a new file.

Any help would be appreciated.

Update: here is my code

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            var blobClient = account.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference(containerName);
            var file = uri;
            var blob = container.GetBlockBlobReference(file);
            //First fetch the size of the blob. We use this to create an empty file with size = blob's size
            blob.FetchAttributes();
            var blobSize = blob.Properties.Length;
            long blockSize = (1 * 1024 * 1024);//1 MB chunk;
            blockSize = Math.Min(blobSize, blockSize);
            //Create an empty file of blob size
            using (FileStream fs = new FileStream(file, FileMode.Create))//Create empty file.
            {
                fs.SetLength(blobSize);//Set its size
            }
            var blobRequestOptions = new BlobRequestOptions
            {
                RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3),
                MaximumExecutionTime = TimeSpan.FromMinutes(60),
                ServerTimeout = TimeSpan.FromMinutes(60)
            };
            long startPosition = 0;
            long currentPointer = 0;
            long bytesRemaining = blobSize;
            do
            {
                var bytesToFetch = Math.Min(blockSize, bytesRemaining);
                using (MemoryStream ms = new MemoryStream())
                {
                    //Download range (by default 1 MB)
                    blob.DownloadRangeToStream(ms, currentPointer, bytesToFetch, null, blobRequestOptions);
                    ms.Position = 0;
                    var contents = ms.ToArray();
                    using (var fs = new FileStream(file, FileMode.Open))//Open the output file
                    {
                        fs.Position = currentPointer;//Seek to the current write position
                        fs.Write(contents, 0, contents.Length);//Write this range's bytes
                    }
                    }
                    startPosition += blockSize;
                    currentPointer += contents.Length;//Update pointer
                    bytesRemaining -= contents.Length;//Update bytes to fetch

                    Console.WriteLine(fileName + dateTimeStamp + ".csv " + (startPosition / 1024 / 1024) + "/" + (blob.Properties.Length / 1024 / 1024) + " MB downloaded...");
                }
            }
            while (bytesRemaining > 0);
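One way to meet the 100 MB-per-file requirement is to roll over to a new, numbered output file inside the download loop instead of writing everything into one pre-sized file. Below is a sketch based on the code above; the 100 MB constant and the `output_{n}.csv` naming pattern are assumptions, and `blob` and `blobRequestOptions` are assumed to be set up exactly as in the snippet:

```csharp
// Sketch: download the blob in 1 MB ranges, rolling to a new
// numbered output file every 100 MB.
const long chunkFileSize = 100L * 1024 * 1024; // 100 MB per output file (assumed)
const long rangeSize = 1L * 1024 * 1024;       // 1 MB per DownloadRangeToStream call

blob.FetchAttributes();
long blobSize = blob.Properties.Length;
long position = 0;
int fileIndex = 0;

while (position < blobSize)
{
    // Each output file holds at most chunkFileSize bytes.
    string partName = $"output_{fileIndex}.csv"; // hypothetical naming scheme
    using (var fs = new FileStream(partName, FileMode.Create))
    {
        long writtenToThisFile = 0;
        while (writtenToThisFile < chunkFileSize && position < blobSize)
        {
            long bytesToFetch = Math.Min(rangeSize,
                Math.Min(chunkFileSize - writtenToThisFile, blobSize - position));
            // Stream the range straight into the current part file.
            blob.DownloadRangeToStream(fs, position, bytesToFetch, null, blobRequestOptions);
            position += bytesToFetch;
            writtenToThisFile += bytesToFetch;
        }
    }
    fileIndex++;
}
```

Note that this splits on raw byte boundaries, so a CSV record can be cut in half at a file boundary; splitting on record boundaries requires parsing the CSV, as the answer below does.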

From my understanding, you can break the blob file into pieces of the expected size (100 MB) and then make use of each downloaded file chunk. Here is my code snippet for reference:

using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
{
    int fieldCount = csv.FieldCount;
    string[] headers = csv.GetFieldHeaders();
    while (csv.ReadNextRecord())
    {
        for (int i = 0; i < fieldCount; i++)
            Console.Write(string.Format("{0} = {1};",
                          headers[i],
                          csv[i] == null ? "MISSING" : csv[i]));
        //TODO: 
        //1.Read the current record, check the total bytes you have read;
        //2.Create a new csv file if the current total bytes up to 100MB, then save the current record to the current CSV file.
    }
}
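A sketch of how the TODO above could be completed with LumenWorks: accumulate the byte count per output file and start a new CSV file (with the header repeated) once the limit is reached. The 100 MB constant and the `data_{n}.csv` naming are assumptions, and the simple comma-join does no quoting or escaping; the CsvHelper version below handles record writing properly:

```csharp
using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
{
    int fieldCount = csv.FieldCount;
    string headerLine = string.Join(",", csv.GetFieldHeaders());
    const long maxBytes = 100L * 1024 * 1024; // assumed 100 MB per output file

    StreamWriter writer = null;
    long bytesWritten = long.MaxValue; // forces a new file on the first record
    int fileIndex = 0;

    while (csv.ReadNextRecord())
    {
        // Rebuild the record as one line (note: no quoting/escaping here).
        var fields = new string[fieldCount];
        for (int i = 0; i < fieldCount; i++)
            fields[i] = csv[i] ?? "";
        string line = string.Join(",", fields);

        // 1. Check the total bytes written; 2. roll to a new file if needed.
        if (bytesWritten + line.Length + 2 > maxBytes)
        {
            writer?.Dispose();
            writer = new StreamWriter($"data_{fileIndex++}.csv"); // hypothetical naming
            writer.WriteLine(headerLine);
            bytesWritten = headerLine.Length + 2;
        }
        writer.WriteLine(line);
        bytesWritten += line.Length + 2; // +2 for \r\n
    }
    writer?.Dispose();
}
```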
string[] headers = new string[0];
using (var sr = new StreamReader(@"C:\Users\v-brucch\Desktop\BlobHourMetrics.csv")) //83.9KB
{
    using (CsvHelper.CsvReader csvReader = new CsvHelper.CsvReader(sr,
        new CsvHelper.Configuration.CsvConfiguration()
        {
            Delimiter = ",",
            Encoding = Encoding.UTF8
        }))
    {
        //check header
        if (csvReader.ReadHeader())
        {
            headers = csvReader.FieldHeaders;
        }

        TextWriter writer = null;
        CsvWriter csvWriter = null;
        long readBytesCount = 0;
        long chunkSize = 30 * 1024; //divide CSV file into each CSV file with byte size up to 30KB

        while (csvReader.Read())
        {
            var curRecord = csvReader.CurrentRecord;
            var curRecordByteCount = curRecord.Sum(r => Encoding.UTF8.GetByteCount(r)) + headers.Count() + 1;
            readBytesCount += curRecordByteCount;

            //check bytes you have read
            if (writer == null || readBytesCount > chunkSize)
            {
                readBytesCount = curRecordByteCount + headers.Sum(h => Encoding.UTF8.GetByteCount(h)) + headers.Count() + 1;
                if (writer != null)
                {
                    writer.Flush();
                    writer.Close();
                }
                string fileName = $"BlobHourMetrics_{Guid.NewGuid()}.csv";
                writer = new StreamWriter(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName), true);
                csvWriter = new CsvWriter(writer);
                csvWriter.Configuration.Encoding = Encoding.UTF8;
                //output header field
                foreach (var header in headers)
                {
                    csvWriter.WriteField(header);
                }
                csvWriter.NextRecord();
            }
            //output record field
            foreach (var field in curRecord)
            {
                csvWriter.WriteField(field);
            }
            csvWriter.NextRecord();
        }
        if (writer != null)
        {
            writer.Flush();
            writer.Close();
        }
    }
}
Parallel blob download

private static void ParallelDownloadBlob(Stream outPutStream, CloudBlockBlob blob, long startRange, long endRange)
{
    blob.FetchAttributes();
    int bufferLength = 1 * 1024 * 1024;//1 MB chunk for download
    long blobRemainingLength = endRange - startRange;
    Queue<KeyValuePair<long, long>> queues = new Queue<KeyValuePair<long, long>>();
    long offset = startRange;
    while (blobRemainingLength > 0)
    {
        long chunkLength = (long)Math.Min(bufferLength, blobRemainingLength);
        queues.Enqueue(new KeyValuePair<long, long>(offset, chunkLength));
        offset += chunkLength;
        blobRemainingLength -= chunkLength;
    }
    Parallel.ForEach(queues,
        new ParallelOptions()
        {
            MaxDegreeOfParallelism = 5
        }, (queue) =>
        {
            using (var ms = new MemoryStream())
            {
                blob.DownloadRangeToStream(ms, queue.Key, queue.Value);
                lock (outPutStream)
                {
                    outPutStream.Position = queue.Key - startRange;
                    var bytes = ms.ToArray();
                    outPutStream.Write(bytes, 0, bytes.Length);
                }
            }
        });
}
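For completeness, the helper above could be combined with the splitting requirement by calling it once per 100 MB range and writing each range to its own numbered file. A sketch; the `part_{n}.csv` naming and the 100 MB constant are assumptions:

```csharp
// Sketch: split a blob into numbered 100 MB files using ParallelDownloadBlob.
blob.FetchAttributes();
long total = blob.Properties.Length;
const long partSize = 100L * 1024 * 1024; // assumed 100 MB target per file
int partNumber = 0;

for (long start = 0; start < total; start += partSize)
{
    long end = Math.Min(start + partSize, total);
    using (var fs = new FileStream($"part_{partNumber}.csv", FileMode.Create))
    {
        fs.SetLength(end - start); // pre-size so the parallel writers can seek
        ParallelDownloadBlob(fs, blob, start, end);
    }
    partNumber++;
}
```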

Result

Comments:

- Tried this. It "breaks" the record on the last line of a file and continues it in the next file, which shouldn't happen. For example, on the last row it writes data into the first three of seven columns, then continues with the remaining four columns on the first row of the next file.
- As you mentioned, splitting the blob file at a fixed size will necessarily break records. Could you share the structure of a blob record?
- The table has seven columns. It would be better if you could provide a complete snippet using LumenWorks. :)
- Based on your requirement, I have updated my answer; please refer to it.
- The parameter passed to the StreamReader is already a file on your local drive. What I need is to download from Blob Storage in chunks without breaking records. I hope that's clear now.
- I ended up writing a PowerShell script instead, because this code tends to "hang" when the file size exceeds 1 GB. Thanks anyway.