Save URL parameters to a CSV file using Python and an Azure Function
I want to send some data with an HTTP POST, like this:
https://httptrigger-testfunction.azurewebsites.net/api/HttpTrigger1?id=test&serial_id=1254&device_tra=302&received_time=2021-03-01
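For reference, once the stray spaces around the = signs are removed, that query string parses into plain key/value pairs; this is a quick stdlib check, not part of the function itself:

```python
from urllib.parse import urlsplit, parse_qs

url = ("https://httptrigger-testfunction.azurewebsites.net/api/HttpTrigger1"
       "?id=test&serial_id=1254&device_tra=302&received_time=2021-03-01")

# parse_qs returns each value as a list; flatten to single values
params = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
print(params)
# {'id': 'test', 'serial_id': '1254', 'device_tra': '302', 'received_time': '2021-03-01'}
```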
Starting from Microsoft's sample, I wrote an Azure Function that reads "name" from the HTTP POST.
Now I want to read the data above and save it to a CSV file in Blob Storage.
Which module should I use?
Sample code:
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
You can use the Azure Data Lake Storage client library. This is for ADLS Gen2; if you are using Gen1 or older, find the matching package.
See the section that shows how to write string data to a file. For example, in the code below, you would put the CSV content into the variable data:
from azure.storage.filedatalake import DataLakeFileClient

data = b"abc"
file = DataLakeFileClient.from_connection_string("my_connection_string",
                                                 file_system_name="myfilesystem",
                                                 file_path="myfile")
file.append_data(data, offset=0, length=len(data))
file.flush_data(len(data))
A bit more: you will want something that writes the CSV to an in-memory string (probably) rather than to a local file, and then write that string to ADLS.
In case it is not obvious from the code and samples, you can access the parameters with req.params; nothing other than the body has a getter.
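The in-memory approach can be sketched with the stdlib csv module and io.StringIO; the helper name here is illustrative, not from the answer above:

```python
import csv
import io

def build_csv_bytes(row: dict) -> bytes:
    """Build a one-row CSV (header + data) entirely in memory."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(row.keys())    # header row
    writer.writerow(row.values())  # data row
    return buf.getvalue().encode("utf-8")

data = build_csv_bytes({
    "id": "test",
    "serial_id": "1254",
    "device_tra": "302",
    "received_time": "2021-03-01",
})
# `data` can then be handed to append_data()/flush_data()
# (or to BlobClient.upload_blob) without touching the local filesystem.
```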
Please refer to my code:
import logging
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
import azure.functions as func
import os, uuid
import tempfile

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    connect_str = "<your-connection-string>"
    container_name = "<your-container-name>"
    id = req.params.get('id')
    if not id:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            id = req_body.get('id')
    serial_id = req.params.get('serial_id')
    if not serial_id:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            serial_id = req_body.get('serial_id')
    device_tra = req.params.get('device_tra')
    if not device_tra:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            device_tra = req_body.get('device_tra')
    received_time = req.params.get('received_time')
    if not received_time:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            received_time = req_body.get('received_time')
    # Create the BlobServiceClient object which will be used to create a container client
    blob_service_client = BlobServiceClient.from_connection_string(connect_str)
    # Get a client for the (existing) container
    container_client = blob_service_client.get_container_client(container_name)
    # Use a local temp directory to hold the blob data
    local_path = tempfile.gettempdir()
    # Create a file in the local data directory to upload
    local_file_name = str(uuid.uuid4()) + ".csv"
    upload_file_path = os.path.join(local_path, local_file_name)
    logging.info(upload_file_path)
    # Write the CSV row to the file
    file = open(upload_file_path, 'w')
    csv_content = id + "," + serial_id + "," + device_tra + "," + received_time
    logging.info(csv_content)
    file.write(csv_content)
    file.close()
    # Create a blob client using the local file name as the name for the blob
    blob_client = blob_service_client.get_blob_client(container=container_name, blob=local_file_name)
    print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)
    # Upload the created file
    with open(upload_file_path, "rb") as data:
        blob_client.upload_blob(data)
    if id:
        return func.HttpResponse(f"Hello, {id}. This HTTP triggered function executed successfully.")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
Or you can use the following code:
import logging
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
import azure.functions as func
import os, uuid
import tempfile
import csv

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    connect_str = "<your-connection-string>"
    container_name = "<your-container-name>"
    id = req.params.get('id')
    if not id:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            id = req_body.get('id')
    serial_id = req.params.get('serial_id')
    if not serial_id:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            serial_id = req_body.get('serial_id')
    device_tra = req.params.get('device_tra')
    if not device_tra:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            device_tra = req_body.get('device_tra')
    received_time = req.params.get('received_time')
    if not received_time:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            received_time = req_body.get('received_time')
    # Create the BlobServiceClient object which will be used to create a container client
    blob_service_client = BlobServiceClient.from_connection_string(connect_str)
    # Get a client for the (existing) container
    container_client = blob_service_client.get_container_client(container_name)
    # Use a local temp directory to hold the blob data
    local_path = tempfile.gettempdir()
    # Create a file in the local data directory to upload
    local_file_name = str(uuid.uuid4()) + ".csv"
    upload_file_path = os.path.join(local_path, local_file_name)
    logging.info(upload_file_path)
    with open(upload_file_path, 'w', newline='') as csvfile:
        filewriter = csv.writer(csvfile, delimiter=',',
                                quotechar='|', quoting=csv.QUOTE_MINIMAL)
        filewriter.writerow(['id', 'serial_id', 'device_tra', 'received_time'])
        filewriter.writerow([id, serial_id, device_tra, received_time])
    # Create a blob client using the local file name as the name for the blob
    blob_client = blob_service_client.get_blob_client(container=container_name, blob=local_file_name)
    print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)
    # Upload the created file
    with open(upload_file_path, "rb") as data:
        blob_client.upload_blob(data)
    if id:
        return func.HttpResponse(f"Hello, {id}. This HTTP triggered function executed successfully.")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
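As a side note, the four copy-pasted parameter lookups in the function above can be collapsed into one helper. This is a sketch with plain dicts standing in for req.params and the parsed JSON body:

```python
def get_param(params, body, name):
    """Look in the query string first, then fall back to the JSON body."""
    value = params.get(name)
    if value is None and body is not None:
        value = body.get(name)
    return value

fields = ["id", "serial_id", "device_tra", "received_time"]
params = {"id": "test", "serial_id": "1254"}                  # stand-in for req.params
body = {"device_tra": "302", "received_time": "2021-03-01"}   # stand-in for req.get_json()
values = [get_param(params, body, f) for f in fields]
# values now holds each field, regardless of where it was supplied
```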