Python: converting a CSV file to a 'flat file' for pandas
My CSV file contains no header. Every line is guaranteed to have only 2 columns (time and data name); the remaining columns vary in number depending on the data. I have successfully imported "normal" CSV files with a consistent number of columns into pandas, and it works very well, but I haven't seen anything in the documentation that handles my current situation. Here is a snippet of the CSV file in question:
1573081480.942000, /eeg, 843.3333, 854.61536, 851.79486, 849.3773, 863.0769
1573081480.942000, /eeg, 844.1392, 857.4359, 849.3773, 861.8681, 890.07324
1573081480.943000, /eeg, 853.8095, 853.8095, 850.989, 866.30035, 854.61536
1573081480.944000, /eeg, 855.42126, 855.0183, 846.1539, 852.1978, 846.9597
1573081480.947000, /eeg, 844.1392, 853.8095, 846.55676, 842.52747, 873.5531
1573081480.947000, /eeg, 848.97437, 853.00366, 851.79486, 853.00366, 856.2271
1573081480.948000, /eeg, 859.0476, 852.6007, 850.18317, 863.8828, 826.0073
1573081480.950000, /eeg, 859.0476, 851.79486, 853.00366, 866.30035, 819.5604
1573081480.950000, /eeg, 851.79486, 852.1978, 846.9597, 854.61536, 859.45056
1573081480.951000, /eeg, 856.63007, 853.00366, 846.55676, 840.9158, 854.21246
1573081480.960000, /elements/alpha_absolute, 0.48463312
1573081480.960000, /elements/beta_absolute, 0.061746284
1573081480.961000, /elements/gamma_absolute, 0.7263172
1573081480.961000, /elements/theta_absolute, 0.7263172
1573081480.961000, /elements/delta_absolute, 0.7263172
The result I need looks like this:
time, eeg_0, eeg_1, eeg_2, eeg_3, delta, theta, alpha, beta, gamma
1573081480.942000, 844.1392, 857.4359, 849.3773, 861.8681,,,,,
1573081480.947000, 844.1392, 853.8095, 846.55676, 842.52747, 873.5531,,,,,
1573081480.947000, 848.97437, 853.00366, 851.79486, 853.00366, 856.2271,,,,,
1573081480.948000, 859.0476, 852.6007, 850.18317, 863.8828, 826.0073,,,,,
1573081480.960000,,,,,,,0.48463312,,
1573081480.960000,,,,,,,,0.061746284,
1573081480.961000,,,,,0.7263172,,,,
1573081480.961000,,,,,0.52961296,,,
1573081480.962000,,,,,,,,-0.26484978
As you can see, the number of values varies depending on which data is stored.
I would like the import to be as simple and efficient as it is for a "normal" CSV file.
This is exactly what I want to avoid; it is verbose, tedious, and inefficient:
import csv
import pandas as pd

d = {
    'time': [0.],
    'eeg0': [0.], 'eeg1': [0.], 'eeg2': [0.], 'eeg3': [0.], 'eeg4': [0.],
    'delta_absolute': [0.], 'theta_absolute': [0], 'alpha_absolute': [0], 'beta_absolute': [0], 'gamma_absolute': [0],
    'acc0': [0], 'acc1': [0], 'acc2': [0], 'gyro0': [0], 'gyro1': [0], 'gyro2': [0],
    'concentration': [0], 'mellow': [0]
}
df_new_data = pd.DataFrame(data=d)

csvfile = open(fname)
csv_reader = csv.reader(csvfile, delimiter=',')
csv_data = list(csv_reader)
row_count = len(csv_data)

for row in csv_data:
    if row[1] == ' /muse/acc':
        df_new_data = df_new_data.append({'acc0': row[2], 'acc1': row[3], 'acc2': row[4]}, ignore_index=True)
    if row[1] == ' /muse/gyro':
        df_new_data = df_new_data.append({'gyro0': row[2], 'gyro1': row[3], 'gyro2': row[4]}, ignore_index=True)
EDIT:
I found that if the first row of the CSV file contains fewer fields, reading any subsequent (longer) row fails. The CSV sample above works, but this one does not:
573081480.960000, /elements/alpha_absolute, 0.48463312
1573081480.960000, /elements/beta_absolute, 0.061746284
1573081480.961000, /elements/gamma_absolute, 0.7263172
1573081480.961000, /elements/theta_absolute, 0.7263172
1573081480.961000, /elements/delta_absolute, 0.7263172
1573081480.942000, /eeg, 843.3333, 854.61536, 851.79486, 849.3773, 863.0769
1573081480.942000, /eeg, 844.1392, 857.4359, 849.3773, 861.8681, 890.07324
1573081480.943000, /eeg, 853.8095, 853.8095, 850.989, 866.30035, 854.61536
1573081480.944000, /eeg, 855.42126, 855.0183, 846.1539, 852.1978, 846.9597
1573081480.947000, /eeg, 844.1392, 853.8095, 846.55676, 842.52747, 873.5531
1573081480.947000, /eeg, 848.97437, 853.00366, 851.79486, 853.00366, 856.2271
1573081480.948000, /eeg, 859.0476, 852.6007, 850.18317, 863.8828, 826.0073
1573081480.950000, /eeg, 859.0476, 851.79486, 853.00366, 866.30035, 819.5604
1573081480.950000, /eeg, 851.79486, 852.1978, 846.9597, 854.61536, 859.45056
1573081480.951000, /eeg, 856.63007, 853.00366, 846.55676, 840.9158, 854.21246
Pandas produces the following error:
pandas.errors.ParserError: Error tokenizing data. C error: Expected 3 fields in line 6, saw 7
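A common workaround for this tokenizing error is to tell the parser the maximum column count up front (e.g. pd.read_csv(fname, header=None, names=range(7))), so short rows are padded with NaN instead of fixing the width from the first line. The same padding can be sketched with the stdlib alone; the width of 7 below is an assumption based on the /eeg rows:

```python
import csv
import io

# Sample with a short row first -- this is what trips pandas' C parser.
raw = """1573081480.960000, /elements/alpha_absolute, 0.48463312
1573081480.942000, /eeg, 843.3333, 854.61536, 851.79486, 849.3773, 863.0769
"""

MAX_FIELDS = 7  # assumed maximum: time, name, and up to 5 values

rows = []
for row in csv.reader(io.StringIO(raw), skipinitialspace=True):
    # Pad every row to the same width so a uniform frame can be built later.
    rows.append(row + [None] * (MAX_FIELDS - len(row)))

print(rows[0])
# -> ['1573081480.960000', '/elements/alpha_absolute', '0.48463312', None, None, None, None]
```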
Thanks in advance.

You can normalize the CSV and create an error-free CSV with Miller:
mlr --csv --implicit-csv-header unsparsify \
then rename 1,one,2,two \
then reshape -r "[0-9]" -o item,value \
then filter -x -S '$value==""' \
then put '$item=fmtnum(($item-2),"%03d");$item=$two."_".$item' \
then cut -x -f two then sort -f item -n one \
then reshape -s item,value \
then unsparsify input.csv >output.csv
You will end up with a CSV like this, which you will be able to import:
one /eeg_001 /eeg_002 /eeg_003 /eeg_004 /eeg_005 /elements/alpha_absolute_001 /elements/beta_absolute_001 /elements/delta_absolute_001 /elements/gamma_absolute_001 /elements/theta_absolute_001
1573081480.942000 844.1392 857.4359 849.3773 861.8681 890.07324 - - - - -
1573081480.943000 853.8095 853.8095 850.989 866.30035 854.61536 - - - - -
1573081480.944000 855.42126 855.0183 846.1539 852.1978 846.9597 - - - - -
1573081480.947000 848.97437 853.00366 851.79486 853.00366 856.2271 - - - - -
1573081480.948000 859.0476 852.6007 850.18317 863.8828 826.0073 - - - - -
1573081480.950000 851.79486 852.1978 846.9597 854.61536 859.45056 - - - - -
1573081480.951000 856.63007 853.00366 846.55676 840.9158 854.21246 - - - - -
1573081480.960000 - - - - - 0.48463312 0.061746284 - - -
1573081480.961000 - - - - - - - 0.7263172 0.7263172 0.7263172
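When importing the normalized file into pandas, the "-" placeholders can be mapped to NaN with pd.read_csv('output.csv', na_values='-'). A minimal stdlib sketch of the same idea, using an assumed CSV slice of the table above:

```python
import csv
import io

# A slice of the normalized output (assumed CSV form of the table above).
normalized = """one,/eeg_001,/elements/alpha_absolute_001
1573081480.942000,844.1392,-
1573081480.960000,-,0.48463312
"""

records = []
for rec in csv.DictReader(io.StringIO(normalized)):
    # Treat Miller's '-' placeholder as a missing value, like na_values='-'.
    records.append({k: (None if v == '-' else v) for k, v in rec.items()})

print(records[0])
```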
It's not entirely clear exactly what you want. It's good that you provided a sample output, but it would have been much easier if it were the actual expected output for your sample input.

As I understand it, the simplest approach is to loop over each data type, find out how many columns it uses, create that many frames, and finally concatenate them. Like this:
import pandas

# Using pandas (count commas to find the widest row; fields = commas + 1):
max_number_of_columns = pandas.read_csv('test.txt', sep='|', header=None)[0].str.count(',').max() + 1
# or just hardcoded:
max_number_of_columns = 10

base = pandas.read_csv('test.txt', header=None, names=list(range(max_number_of_columns)))
base.columns = ['time', 'datatype'] + list(base.columns[2:])

results = [base.iloc[:, :2]]
for datatype in base['datatype'].unique():
    group = base[base['datatype'] == datatype].iloc[:, 2:].dropna(how='all', axis=1)
    group.columns = [f"{datatype}_{x}" for x in range(len(group.columns))]
    results.append(group)

final = pandas.concat(results, axis=1)
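The per-type reshaping this answer describes can also be sketched without pandas, which makes the column construction explicit (the datatype-indexed column names below follow the same f-string pattern as the loop above):

```python
import csv
import io

raw = """1573081480.942000, /eeg, 843.3333, 854.61536
1573081480.960000, /elements/alpha_absolute, 0.48463312
"""

records = []
for row in csv.reader(io.StringIO(raw), skipinitialspace=True):
    time, datatype, *values = row
    # One flat record per line: time plus <datatype>_<index> value columns.
    rec = {'time': time}
    for i, v in enumerate(values):
        rec[f'{datatype}_{i}'] = float(v)
    records.append(rec)

print(records[0])
# -> {'time': '1573081480.942000', '/eeg_0': 843.3333, '/eeg_1': 854.61536}
```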
EDIT: fixed the case where the first row contains fewer columns than subsequent rows.
What do you want the resulting data structure to look like? Some kind of dataframe? With how many columns? Anything else? I could imagine reading the file line by line, splitting on ',', assigning the first two elements to a list and keeping the rest as a nested list. In other words, a nested list structure could handle it. But where do you want to go from there? What do you plan to do with the data?

I want to import it into a dataframe. The data is used for signal/spectrum analysis. So far I have only had to deal with flat CSV files, and this change of format made me wonder about the best approach. The files keep getting larger, but pandas is so fast, and I really like working with it. Surely others have run into this situation, which is why I thought pandas or numpy might already have a solution.

Read it with
df = pd.read_csv('myFile.csv', header=None)
Later you can rename the columns with df.columns = ['time', 'name', 'data1', 'data2', … 'data_max'].
That's a very useful tool! However, I'm not sure I was clear. I've added an example of what the result should look like to the question.

Could you edit your question to show the desired output?

Thank you, I did just that! I'm new to stackoverflow and still learning how to format questions/answers.

Hi, yes, I tried that, but unfortunately it's very slow. That's why I mean the pandas approach: it is faster and better suited to large datasets.