Merge lines and split the content of one .csv file into multiple files with PowerShell


As described in the linked question, for the given data I would like to get a second kind of output:

header1; header2; header3; header4; header5; header6; header7; header8; header9; header10; header11; header12; header13;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654321; EUR
CD; 456789; 22.24; Text; SW;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654322; EUR
CD; 354345; 85.45; Text; SW;
CD; 123556; 94.63; Text; SW;
CD; 354564; 12.34; Text; SW;
CD; 135344; 32.23; Text; SW;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654323; EUR
CD; 354564; 12.34; Text; SW;
CD; 852143; 34.97; Text; SW;
This time, the AB line should always be prepended to each CD line. I know this is redundant, but it turns every line into a complete set of data. The expected result would be:
BC987654321.csv

header1; header2; header3; header4; header5; header6; header7; header8; header9; header10; header11; header12; header13;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654321; EUR; 12345; CD; 456789; 22.24; Text; SW;
BC987654322.csv

header1; header2; header3; header4; header5; header6; header7; header8; header9; header10; header11; header12; header13;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654322; EUR; 12345; CD; 354345; 85.45; Text; SW;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654322; EUR; 12345; CD; 123556; 94.63; Text; SW;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654322; EUR; 12345; CD; 354564; 12.34; Text; SW;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654322; EUR; 12345; CD; 135344; 32.23; Text; SW;
BC987654323.csv

header1; header2; header3; header4; header5; header6; header7; header8; header9; header10; header11; header12; header13;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654323; EUR; 12345; CD; 354564; 12.34; Text; SW;
AB; 12345; AB123456789; 10.03.2021; GT; BC987654323; EUR; 12345; CD; 852143; 34.97; Text; SW;

Thanks in advance!

For this we need to get a bit more creative and use a temporary hashtable.

Something like this:

$path = 'D:\Test'
$fileIn = Join-Path -Path $path -ChildPath 'input.csv'
$fileOut = $null   # will get a value in the loop
$splitValue = 'AB' # the header1 value that decides to start a new file
$csv = Import-Csv -Path $fileIn -Delimiter ';'
# get an array of the column headers
$allHeaders = $csv[0].PsObject.Properties.Name

# create an ordered hashtable that will hold the fields of the current 'AB' line
$hash = [ordered]@{}
foreach ($item in $csv) {
    if ($item.header1 -eq $splitValue) { 
        # start a new row (build a new hash)
        $hash.Clear()
        $item.PsObject.Properties | Where-Object { $_.Value } | ForEach-Object { $hash[$_.Name] = $_.Value } 
        # get the filename from header6
        $fileOut = Join-Path -Path $path -ChildPath ('{0}.csv' -f $item.header6)
        # if a file with that name already exists, delete it
        if (Test-Path -Path $fileOut -PathType Leaf) { Remove-Item -Path $fileOut }
    }
    elseif ($hash.Count) {
        # copy the hash which holds the beginning of the line to a temporary row hash (the 'AB' line)
        $rowHash = [ordered]@{}
        foreach ($name in $hash.Keys) { $rowHash[$name] = $hash[$name] }
        $headerIndex = $hash.Count
        # append the new fields from this line to the row hash
        $item.PsObject.Properties | Where-Object { $_.Value } | ForEach-Object {
            # for safety: test if we do not index out of the $allHeaders array
            $header = if ($headerIndex -lt $allHeaders.Count) { $allHeaders[$headerIndex] } else { "header$($headerIndex + 1)" }
            $rowHash[$header] = $_.Value 
            $headerIndex++  # increment the counter
        }
        # append trailing headers with empty value
        while ($headerIndex -lt $allHeaders.Count) { 
            $rowHash[$allHeaders[$headerIndex++]] = $null
        }
        # cast the finalized rowhash into a [PsCustomObject]
        $newRow = [PsCustomObject]$rowHash
        # write the completed row in the csv file
        # if the file already exists, we append, otherwise we create a new file
        $append = Test-Path -Path $fileOut -PathType Leaf
        $newRow | Export-Csv -Path $fileOut -Delimiter ';' -NoTypeInformation -Append:$append
    }
    else {
        Write-Warning "Could not find a starting row (header1 = '$splitValue') for the file"
    }
}
Output:

BC987654321.csv

"header1";"header2";"header3";"header4";"header5";"header6";"header7";"header8";"header9";"header10";"header11";"header12";"header13"
"AB";"12345";"AB123456789";"10.03.2021";"GT";"BC987654321";"EUR";"CD";"456789";"22.24";"Text";"SW";
BC987654322.csv

"header1";"header2";"header3";"header4";"header5";"header6";"header7";"header8";"header9";"header10";"header11";"header12";"header13"
"AB";"12345";"AB123456789";"10.03.2021";"GT";"BC987654322";"EUR";"CD";"354345";"85.45";"Text";"SW";
"AB";"12345";"AB123456789";"10.03.2021";"GT";"BC987654322";"EUR";"CD";"123556";"94.63";"Text";"SW";
"AB";"12345";"AB123456789";"10.03.2021";"GT";"BC987654322";"EUR";"CD";"354564";"12.34";"Text";"SW";
"AB";"12345";"AB123456789";"10.03.2021";"GT";"BC987654322";"EUR";"CD";"135344";"32.23";"Text";"SW";
BC987654323.csv

"header1";"header2";"header3";"header4";"header5";"header6";"header7";"header8";"header9";"header10";"header11";"header12";"header13"
"AB";"12345";"AB123456789";"10.03.2021";"GT";"BC987654323";"EUR";"CD";"354564";"12.34";"Text";"SW";
"AB";"12345";"AB123456789";"10.03.2021";"GT";"BC987654323";"EUR";"CD";"852143";"34.97";"Text";"SW";

EDIT

The above works for the example data given in the question, but it relies heavily on the fact that none of the important fields is ever empty.

As you commented, the real csv does have empty fields, and because of that the code shifts data into the wrong columns.

With the real data, this should do a better job:

$path       = 'D:\Test'
$fileIn     = Join-Path -Path $path -ChildPath 'input.csv'
$fileOut    = $null   # will get a value in the loop
$splitValue = 'IH'    # the value in the first column ($idColumn) that decides to start a new file. (in example data 'AB')
$csv        = Import-Csv -Path $fileIn -Delimiter ';'

# get an array of all the column headers
$allHeaders = $csv[0].PsObject.Properties.Name   # a string array of all header names
# get the name of the first column; its value decides when a new record starts
$idColumn   = $allHeaders[0]                     # --> 'Record Identifier'  (in example data 'header1')

$mergeIndex = [array]::IndexOf($allHeaders, "Identifier")  # this is Case-Sensitive !
# if you want to do this case-insensitive, you need to do something like
# $mergeIndex = [array]::IndexOf((($allHeaders -join ';').ToLowerInvariant() -split ';'), "identifier")

# create an ordered hash that will contain the values up to column no. $mergeIndex
$hash = [ordered]@{}
foreach ($item in $csv) {
    if ($item.$idColumn -eq $splitValue) { 
        # start a new row (build a new hash)
        $hash.Clear()
        for ($i = 0; $i -lt $mergeIndex; $i++) {
            $hash[$allHeaders[$i]] = $item.$($allHeaders[$i])  # we need $(..) because of the spaces in the header names
        }

        # get the filename from the 6th header $item.$($allHeaders[5]) --> 'VAT Number'
        $fileOut = Join-Path -Path $path -ChildPath ('{0}.csv' -f $item.'VAT Number')
        # if a file with that name already exists, delete it
        if (Test-Path -Path $fileOut -PathType Leaf) { Remove-Item -Path $fileOut }
    }
    elseif ($hash.Count) {
        # create a new ordered hashtable to build the entire line with
        $rowHash = [ordered]@{}
        # copy the hash which holds the beginning of the line to a temporary row hash (the 'IH' line)
        # an ordered hashtable does not have a .Clone() method unfortunately..
        foreach ($name in $hash.Keys) { $rowHash[$name] = $hash[$name] }

        # append the fields from this item to the row hash starting at the $mergeIndex column
        $j = 0
        for ($i = $mergeIndex; $i -lt $allHeaders.Count; $i++) {
            $rowHash[$allHeaders[$i]] = $item.PsObject.Properties.Value[$j++]
        }

        # cast the finalized rowhash into a [PsCustomObject] and add to the file
        [PsCustomObject]$rowHash | Export-Csv -Path $fileOut -Delimiter ';' -NoTypeInformation -Append
    }
    else {
        Write-Warning "Could not find a starting row ('$idColumn' = '$splitValue') for the file"
    }
}
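As the comment in the script notes, [array]::IndexOf matches case-sensitively. If a case-insensitive lookup is wanted, a plain loop with PowerShell's explicitly case-insensitive -ieq operator is a readable alternative. A minimal sketch (the sample header names here are made up for illustration):

```powershell
$allHeaders = 'Record Identifier', 'identifier', 'VAT Number'   # sample headers

# find the index of the 'Identifier' header, ignoring case; -1 when not found
$mergeIndex = -1
for ($i = 0; $i -lt $allHeaders.Count; $i++) {
    if ($allHeaders[$i] -ieq 'Identifier') { $mergeIndex = $i; break }
}
$mergeIndex   # --> 1 for the sample headers above
```

This avoids the join/split trick mentioned in the commented-out line and keeps the intent obvious.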

Note: I am not showing the output here, because the real csv might expose sensitive data in this post.

Unfortunately it skips the columns 'Send-to Address 2' and 'Send-to Province/County', which hold nothing in that example. Also, from the 'Identifier' column to the right the content is shifted two columns to the left: 'Identifier' is written into 'Send-to Phone', 'EAN' into 'Kundennummer', and so on.

@Nerevar.de I see what you mean. Where-Object { $_.Value } counts how many fields the first line has, not the number of headers, so it does not work when fields in between are empty. I will post another approach tomorrow, is that OK?

Of course, mate. Since I have zero experience with PowerShell and its syntax, I really appreciate your help! I am not trying to rush you in any way; I look forward to reading from you tomorrow.

@Nerevar.de Please have a look at the edited code. This time I tested it with your real data. Because that file may contain sensitive data, I urge you to delete the comment you posted with the link to the csv file.

Thank you very much @Theo. The code adjusted to my real example data works perfectly. I even managed to integrate it into another foreach loop, so the script now processes every .csv in a specific folder. Thanks a lot for your help! And do not worry about the temporary data I uploaded for you: all fields were redacted before uploading, so there was no real data at all, just exactly the same format.