
Unable to create a pipeline in Microsoft Azure using Python (2.0), getting the following error


Here is my main function; the error I get is shown below.

# Imports needed by the code below (not shown in the original snippet);
# print_item and print_activity_run_details are helper functions defined elsewhere.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import *
from datetime import datetime, timedelta
import time

def main():

    # Azure subscription ID
    subscription_id = ''

    # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
    rg_name = ''

    # The data factory name. It must be globally unique.
    df_name = ''        

    # Specify your Active Directory client ID, client secret, and tenant ID
    credentials = ServicePrincipalCredentials(client_id='', secret='', tenant='')
    resource_client = ResourceManagementClient(credentials, subscription_id)
    adf_client = DataFactoryManagementClient(credentials, subscription_id)

    rg_params = {'location':'eastus'}
    df_params = {'location':'eastus'}

    # create the resource group
    # comment out if the resource group already exists
    #resource_client.resource_groups.create_or_update(rg_name, rg_params)

    # Create a data factory
    #df_resource = Factory(location='eastus')
    #df = adf_client.factories.create_or_update(rg_name, df_name, df_resource)
    #print_item(df)
    #while df.provisioning_state != 'Succeeded':
    #    df = adf_client.factories.get(rg_name, df_name)
    #    time.sleep(1)

    # Create an Azure Storage linked service
    ls_name = ''

    # Specify the name and key of your Azure Storage account
    storage_string = SecureString('DefaultEndpointsProtocol=https;AccountName=;AccountKey=;EndpointSuffix=core.windows.net')
    ls_azure_storage = AzureStorageLinkedService(connection_string=storage_string)
    ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
    print_item(ls)

    # Create an Azure blob dataset (input)
    ds_name = ''
    ds_ls = LinkedServiceReference(ls_name)
    blob_path= ''
    blob_filename = ''
    ds_azure_blob= AzureBlobDataset(ds_ls, folder_path=blob_path, file_name = blob_filename)
    ds = adf_client.datasets.create_or_update(rg_name, df_name, ds_name, ds_azure_blob)
    print_item(ds)

    # Create an Azure blob dataset (output)
    dsOut_name = ''
    output_blobpath = ''
    dsOut_azure_blob = AzureBlobDataset(ds_ls, folder_path=output_blobpath)
    dsOut = adf_client.datasets.create_or_update(rg_name, df_name, dsOut_name, dsOut_azure_blob)
    print_item(dsOut)

    # Create a copy activity
    act_name =  ''
    blob_source = BlobSource()
    blob_sink = BlobSink()
    dsin_ref = DatasetReference(ds_name)
    dsOut_ref = DatasetReference(dsOut_name)
    copy_activity = CopyActivity(act_name,inputs=[dsin_ref], outputs=[dsOut_ref], source=blob_source, sink=blob_sink)

    # Create a pipeline with the copy activity
    p_name =  ''
    params_for_pipeline = {}
    p_obj = PipelineResource(activities=[copy_activity], parameters=params_for_pipeline)
    p = adf_client.pipelines.create_or_update(rg_name, df_name, p_name, p_obj)
    print_item(p)

    # Create a pipeline run
    run_response = adf_client.pipelines.create_run(rg_name, df_name, p_name,
        {
        }
    )

    # Monitor the pipeline run
    time.sleep(30)
    pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run_response.run_id)
    print("\n\tPipeline run status: {}".format(pipeline_run.status))
    activity_runs_paged = list(adf_client.activity_runs.list_by_pipeline_run(rg_name, df_name, pipeline_run.run_id, datetime.now() - timedelta(1),  datetime.now() + timedelta(1)))
    print_activity_run_details(activity_runs_paged[0])

I get the following error:

ErrorResponseException                    Traceback (most recent call last)
in <module>()
----> 1 main()

in main()
     37
     38     ls_azure_storage = AzureStorageLinkedService(connection_string=storage_string)
---> 39     ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
     40     print_item(ls)
     41

/usr/local/lib/python2.7/dist-packages/azure/mgmt/datafactory/operations/linked_services_operations.pyc in create_or_update(self, resource_group_name, factory_name, linked_service_name, properties, if_match, custom_headers, raw, **operation_config)
    170
    171         if response.status_code not in [200]:
--> 172             raise models.ErrorResponseException(self._deserialize, response)
    173
    174         deserialized = None

ErrorResponseException: Operation returned an invalid status code 'Forbidden'


Forbidden usually means that you don't have permission. Can you check whether you have write permission on the Data Factory?
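
To see what the service actually rejected, it can help to catch the exception around the failing call and print the raw response body, which usually names the denied action and the identity involved. A minimal sketch (the try/except wrapper is illustrative, not part of the original quickstart code):

from azure.mgmt.datafactory.models import ErrorResponseException

try:
    ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
except ErrorResponseException as e:
    # e.response is the raw HTTP response from ARM; its body usually states
    # which management action was denied and for which principal
    print(e.response.status_code)
    print(e.response.text)
    raise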

Are you using ADF v1 or v2?
Have you tried creating the pipeline in the UI? That is an easy way to verify that you have the right permissions.
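
Along the same lines, the permissions can also be checked from code by listing the role assignments the service principal holds on the resource group. A minimal sketch, assuming the azure-mgmt-authorization package is available and reusing credentials, subscription_id and rg_name from the question; sp_object_id is a placeholder for the service principal's object id:

from azure.mgmt.authorization import AuthorizationManagementClient

auth_client = AuthorizationManagementClient(credentials, subscription_id)

# Object id of the service principal (not the client/application id).
sp_object_id = ''

assignments = auth_client.role_assignments.list_for_resource_group(
    rg_name, filter="principalId eq '{}'".format(sp_object_id))

for assignment in assignments:
    # Depending on the SDK version the fields are either flattened onto the
    # assignment or nested under assignment.properties.
    props = getattr(assignment, 'properties', None) or assignment
    print(props.role_definition_id, props.scope)

If the loop prints nothing, the service principal has no role on the resource group, which would explain the Forbidden response.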

You don't seem to have permission. Do you have write permission on that subscription?
Welcome to the site. Could you add the error to your question as text rather than as an image? Just copy the error message from your notebook, indent it by four spaces, and add it to the question with the "Edit" button. That makes your question easier to find, both for people who can help you and for people who run into the same problem.
Hi Fang Liu, I have added the error. Please help me resolve it, it is urgent.
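
For reference, if the service principal turns out to have no role, the usual fix is to grant it the built-in Data Factory Contributor (or Contributor) role on the resource group, either in the portal under Access control (IAM) or from code. A rough sketch, assuming a recent azure-mgmt-authorization where RoleAssignmentCreateParameters takes role_definition_id and principal_id directly (older versions nest them under a properties object):

import uuid
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

auth_client = AuthorizationManagementClient(credentials, subscription_id)
scope = '/subscriptions/{}/resourceGroups/{}'.format(subscription_id, rg_name)

# Look up the built-in "Data Factory Contributor" role definition at this scope.
role_def = list(auth_client.role_definitions.list(
    scope, filter="roleName eq 'Data Factory Contributor'"))[0]

# Assign the role to the service principal (object id, not application id).
auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment needs a fresh GUID as its name
    RoleAssignmentCreateParameters(
        role_definition_id=role_def.id,
        principal_id=''))  # service principal object id goes here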