Python 3.x: Why can't I set the CSV delimiter to a pipe in a Cloud Function?

Tags: python-3.x, csv, google-bigquery, google-cloud-functions

I wrote a function that reads a CSV file from Cloud Storage and loads it into BigQuery. The function is simple and straightforward, but the CSV file is pipe-delimited, and even though I set the job configuration's fieldDelimiter to '|', it still splits on commas. Here is the code:

from google.cloud import bigquery

def FlexToBigQuery(data, context):
    bucketname = data['bucket']
    filename = data['name']
    timeCreated = data['timeCreated']

    client = bigquery.Client()
    dataset_id = 'nature_bi'
    dataset_ref = client.dataset(dataset_id)
    job_config = bigquery.LoadJobConfig()
    job_config.schema = [
        bigquery.SchemaField('Anstallningsnummer', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Datum', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Kod', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Kostnadsstalle', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Tidkod', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('OB_tidkod', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Dagsschema', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Schemalagd_arbetstid', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Summa_narvaro', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Summa_franvaro', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Datum_for_klarmarkering', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Datum_for_attestering', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Frislappsdatum', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Export_klockslag', 'STRING', mode='NULLABLE'),
        bigquery.SchemaField('Vecka', 'STRING', mode='NULLABLE')
    ]

    job_config.skip_leading_rows = 0
    job_config.fieldDelimiter = '|',
    job_config.allow_jagged_rows = True
    job_config.write_disposition = 'WRITE_TRUNCATE',
    # log the receipt of the file
    job_config.source_format = bigquery.SourceFormat.CSV
    uri = 'gs://%s/%s' % (bucketname, filename)
    print('Received file "%s" at %s.' % (
        uri,
        timeCreated
    ))


    "1121|51.2|130|1|2019-08-05 09:06|2019-08-05 11:27|ARB|2019-07-01 null null null null null null null null null"

GCP Support here! I noticed that you are using job_config.fieldDelimiter instead of job_config.field_delimiter as described in the documentation, which may be your problem.


I suggest you try that, and if the problem persists, add any resulting errors to your original post. It would also be helpful if you could share the guide you are following, if any.
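For reference, the job configuration with that fix applied might look like the sketch below (assuming the google-cloud-bigquery client library; note the snake_case property name and that the trailing commas from the original snippet, which silently turn the values into tuples, are removed):

```python
from google.cloud import bigquery

job_config = bigquery.LoadJobConfig()
job_config.skip_leading_rows = 0
job_config.field_delimiter = '|'   # snake_case property, not fieldDelimiter
job_config.allow_jagged_rows = True
job_config.write_disposition = 'WRITE_TRUNCATE'   # no trailing comma
job_config.source_format = bigquery.SourceFormat.CSV
```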

Thank you! I had the same problem. But the documentation says the API property is fieldDelimiter. That's where the confusion comes from.
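The naming difference can be seen directly in the client library: the Python property is snake_case, while the REST API payload it builds uses the camelCase name that the API reference documents (a sketch assuming google-cloud-bigquery is installed and that LoadJobConfig.to_api_repr() is available):

```python
from google.cloud import bigquery

config = bigquery.LoadJobConfig()
config.field_delimiter = '|'  # Python client property: snake_case

# The REST API representation the client sends uses the camelCase
# key ('fieldDelimiter'), which is the name the API docs show.
print(config.to_api_repr())
```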