ASP.NET Core: how to run an API for multiple concurrent requests in .NET Core
I have a scenario where I upload a file from the UI. In .NET Core I created an endpoint that pushes the file to Azure Blob Storage. I then wrote an Azure Function that triggers automatically and calls another API, which processes the file further, runs some SQL queries, generates data from it, and pushes the generated file to a different container in blob storage. The problem I'm facing: uploading a single file works fine, but if I upload one file and then upload another, the second upload discards the first file's processing; in my output container I only get one exported file, the one from the second upload. I want as many users as possible to be able to upload files, with every process running to completion and none being dropped. If you can suggest any approach that would help, I'd appreciate it; I can share my code if needed.

Tags: asp.net-core, .net-core, file-upload, asp.net-core-webapi, azure-blob-storage

Upload file endpoint:
[HttpPost]
[Route("uploadFileToBlob")]
public async Task<IActionResult> UploadFileToBlobStorage(IFormFile file)
{
    var userId = HttpContext.User.Identity.Name;
    var accessToken = Request.Headers[HeaderNames.Authorization];
    _manageClaim.UploadExcelToBlob(file, accessToken, userId);
    return Ok(new { Message = "Your file is uploaded and processing..." });
}
This API endpoint is hit automatically whenever I upload a file from the UI:
[HttpPost]
[Route("getBatchClaimData")]
public async Task<IActionResult> ExportToCsv(List<CsvPropModel> model)
{
    await _manageClaim.GetClaimData(model);
    return Ok();
}
Here is the repository method behind the endpoint above:
public async Task<bool> GetClaimData(List<CsvPropModel> csvProps)
{
    var dataRange = new List<ClaimsResponse>();
    ClaimsResponse claims = new ClaimsResponse();
    foreach (var item in csvProps)
    {
        InvoiceResult invoice = _manage.GetInvoiceDetailsByInvoiceId(item.InvoiceId, item.NickName);
        EligiblePaymentStatusParameters eligibleParams = Converter.ConvertInvoiceToEligibleParams(invoice);
        claims = await GetClaimDetails(eligibleParams, invoice);
        claims.InvKey = invoice.InvKey;
        claims.NickName = invoice.NickName;
        claims.InvoiceNumber = invoice.InvNbrDisplay;
        dataRange.Add(claims);
    }
    var csvRecord = ClaimDataHelper.ConvertToCsvReCord(dataRange);
    var fileName = @$"{DateTime.Now.ToShortDateString()}_ExportedBatchClaim_{DateTime.Now.ToShortTimeString()}.csv";
    MemoryStream ms = new MemoryStream();
    using (var memoryStream = new MemoryStream())
    {
        using (var writer = new StreamWriter(memoryStream))
        {
            using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
            {
                csv.WriteRecords(csvRecord);
                CloudStorageAccount storageAccount = CloudStorageAccount.Parse(blobConnectionString);
                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                // Retrieve a reference to a previously created container.
                CloudBlobContainer container = blobClient.GetContainerReference("outfolder");
                // Get a reference to the output blob.
                CloudBlockBlob blockBlobReference = container.GetBlockBlobReference(fileName);
                memoryStream.Position = 0;
                memoryStream.CopyTo(ms);
                ms.Position = 0;
                await blockBlobReference.UploadFromStreamAsync(ms);
                ms.Flush();
                var importedData = _blobRepository.GetLastImportedData();
                BatchClaimStatus batchClaim = new BatchClaimStatus()
                {
                    ErrorMessage = null,
                    InputFileName = importedData.InputFileName,
                    IsExported = true,
                    OutputFileName = fileName,
                    UpdatedDate = DateTime.Now,
                    UploadedDate = importedData.UploadedDate,
                    UserId = importedData.UserId,
                    UserName = importedData.UserName
                };
                _blobRepository.UpdateExportedFileToDb(batchClaim, importedData.Id);
            }
        }
    }
    return true;
}
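Two details in this repository method are worth flagging for the concurrent-upload scenario described in the question: the output blob name only has minute resolution (`ToShortTimeString()`), so two exports finishing in the same minute target the same blob name and the second overwrites the first; and `GetLastImportedData()` fetches whichever import row was written most recently, so a second upload's run can update the wrong record. A minimal sketch of a collision-free file name (the surrounding code is the question's own; the naming scheme here is just one option):

```csharp
// A timestamp with only minute resolution collides when two exports finish
// in the same minute; a second-resolution timestamp plus a GUID (or the
// import record's id, if it is threaded through) keeps each run's output
// distinct.
var fileName = $"{DateTime.Now:yyyy-MM-dd_HH-mm-ss}_ExportedBatchClaim_{Guid.NewGuid():N}.csv";
```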
I don't see anything in the code above that by itself would cause this problem, although you should always try to make I/O-bound operations, such as _blobRepository.UpdateExportedFileToDb(batchClaim, importedData.Id) or _manageClaim.UploadExcelToBlob(file, accessToken, userId), asynchronous to improve concurrency.
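As a sketch of what that would look like at the call sites, assuming Task-returning variants of those repository methods exist or are added (the `...Async` names below are hypothetical, not from the question's code):

```csharp
// Awaiting I/O-bound work frees the request thread while the database and
// blob calls are in flight, so overlapping requests don't queue up on
// synchronous I/O.
await _blobRepository.UpdateExportedFileToDbAsync(batchClaim, importedData.Id);
await _manageClaim.UploadExcelToBlobAsync(file, accessToken, userId);
```

Note also that the upload endpoint currently calls UploadExcelToBlob without awaiting it, so any failure in that fire-and-forget call is silently lost.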
The actual problem seems related to the Azure Functions blob trigger. Blob triggers can sometimes run into issues such as going idle, missing triggers, or delayed queue processing, particularly on the Functions Consumption plan.
For high-throughput scenarios, and for speed and reliability, an Event Grid-based trigger is recommended for the Function if you are using Storage V2; see the documentation for more details.

Comments: Based on your flow, you should include sample code in the post, because something we can't see is happening in your code. I'd rather see how you create a process, and which line causes your process to be dropped. — @JohnathanLe I edited my question and added the code. Whenever I hit the "uploadFileToBlob" endpoint, the code behind the "getBatchClaimData" endpoint is called automatically, but when I upload twice in a row it drops the first run and only processes the second. Can you help me here? It would be very helpful. — I looked through the code and didn't find anything wrong, and I don't really understand your business logic here. Did you add logging to confirm that the first process is actually invoked in the second API? That would help rule out a problem on the UI side (possibly JavaScript). — OK, I see what you mean. So the problem may be occurring in your GetClaimData method.
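For reference, an Event Grid-based trigger looks roughly like this. This is a sketch, not the asker's code: the function name is illustrative, and it assumes the in-process model with the Microsoft.Azure.WebJobs.Extensions.EventGrid package, with an Event Grid subscription on the storage account's blob-created events:

```csharp
// An Event Grid trigger fires push-style, once per blob-created event,
// avoiding the polling-based blob trigger's missed or delayed executions
// on the Consumption plan.
[FunctionName("OnBlobUploaded")]
public static async Task Run(
    [EventGridTrigger] EventGridEvent blobCreatedEvent,
    ILogger log)
{
    log.LogInformation("Blob event received: {subject}", blobCreatedEvent.Subject);
    // Call the processing API here, one invocation per uploaded blob.
}
```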