Python: populating a DataFrame from tuples of varying sizes

I have one day of data. I clustered it and then computed, for each hour, the ratio (weight) of each cluster (not all clusters are present in every hour). This is the DataFrame time_df.

I group by hour and use np.bincount to compute the weight of each cluster:

group_by_hour = time_df.groupby(time_df.Date.dt.hour)
# the cluster labels are shifted by 1 so that np.bincount, which only
# accepts non-negative integers, can count them
cluster_ids_hour = group_by_hour.cluster.\
    apply(lambda arr: list(range(0, (arr + 1).max() + 1)))
cluster_ratio_hour = group_by_hour.cluster.\
    apply(lambda arr: 1.0 * np.bincount(arr + 1) / len(arr))
This yields, for each hour, a cluster array of a different size together with its weights. I then tried to construct a DataFrame:

pd.DataFrame(temp, columns=['hour', 'clusters', 'weights'])
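
(temp is not defined in the question; presumably it was built from the two grouped series, for example something like this hypothetical reconstruction:)

# hypothetical: zip hour, cluster ids and weights into one list of tuples
temp = list(zip(cluster_ids_hour.index,
                cluster_ids_hour.values,
                cluster_ratio_hour.values))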

But I get the following:

   hour   clusters                                           weights
0    14        [0]                                            [1.0]
1    15     [0, 1]                 [0.488888888889, 0.511111111111]
2    16  [0, 1, 2]  [0.302325581395, 0.162790697674, 0.53488372093]
3    17  [0, 1, 2]                                  [0.0, 0.0, 1.0]
4    18  [0, 1, 2]                                  [0.0, 0.0, 1.0]
5    19  [0, 1, 2]                                  [0.0, 0.0, 1.0]
6    20  [0, 1, 2]                                  [0.0, 0.0, 1.0]
7    21  [0, 1, 2]                                  [0.0, 0.0, 1.0]
8    22  [0, 1, 2]                                  [0.0, 0.0, 1.0]
9    23  [0, 1, 2]                                  [0.0, 0.0, 1.0]
How can I make the clusters the index and the hours the columns, like this:

    0    1    2    3    4    ...
0    0.2    0.6    0.4    0.0    0.6
1    0.0    0.4    0.1    0.0    0.4
2    0.8    0.0    0.5    1.0    0.0
I think you can use:

import pandas as pd
import numpy as np

time_df = pd.DataFrame({'cluster': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 1, 7: 2}, 
                        'Date': {0: pd.Timestamp('2014-02-28 12:24:59.535000'),
                                 1: pd.Timestamp('2014-02-28 12:26:35.019000'), 
                                 2: pd.Timestamp('2014-02-28 12:27:37.213000'), 
                                 3: pd.Timestamp('2014-02-28 12:28:35.246000'), 
                                 4: pd.Timestamp('2014-02-28 12:29:37.283000'), 
                                 5: pd.Timestamp('2014-02-28 13:27:37.213000'), 
                                 6: pd.Timestamp('2014-02-28 14:28:35.246000'), 
                                 7: pd.Timestamp('2014-02-28 14:29:37.283000')}})

print (time_df)
                     Date  cluster
0 2014-02-28 12:24:59.535        1
1 2014-02-28 12:26:35.019        1
2 2014-02-28 12:27:37.213        1
3 2014-02-28 12:28:35.246        2
4 2014-02-28 12:29:37.283        2
5 2014-02-28 13:27:37.213        1
6 2014-02-28 14:28:35.246        1
7 2014-02-28 14:29:37.283        2

One thing I wonder about: this approach gives the cluster weights for a single day. I am going to run it for several days and then merge all the results. On some days I only have some of the hours (e.g. 12, 13, 14) while other days include all of them. How can I concatenate the resulting DataFrames when they have different numbers of columns? – Sorry, I am not sure I understand you. Do you need a function?
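
For the merging question in the comment, a minimal sketch: assuming each day yields a clusters-by-hours frame like the final result below, pd.concat aligns on the cluster index, and each day's block simply keeps only the hours that day has. The two day frames here are hypothetical:

import pandas as pd

# hypothetical per-day results: clusters as index, hours as columns
day1 = pd.DataFrame([[0.6, 1.0, 0.5], [0.4, 0.0, 0.5]],
                    index=[1, 2], columns=[12, 13, 14])
day2 = pd.DataFrame([[0.3, 0.7], [0.7, 0.3]],
                    index=[1, 2], columns=[12, 15])

# axis=1 with keys gives a (day, hour) column MultiIndex;
# hours missing on a given day are simply absent from that day's block
merged = pd.concat([day1, day2], axis=1, keys=['day1', 'day2'])
print(merged)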
The full solution, continuing with the same sample time_df as above:

group_by_hour = time_df.groupby(time_df.Date.dt.hour)
cluster_ids_hour = group_by_hour.cluster.\
    apply(lambda arr: list(range(0,(arr+1).max()+1)))
cluster_ratio_hour = group_by_hour.cluster.\
    apply(lambda arr: 1.0*np.bincount(arr+1)/len(arr))

print (cluster_ids_hour)
Date
12    [0, 1, 2, 3]
13       [0, 1, 2]
14    [0, 1, 2, 3]
Name: cluster, dtype: object

print (cluster_ratio_hour)
Date
12    [0.0, 0.0, 0.6, 0.4]
13         [0.0, 0.0, 1.0]
14    [0.0, 0.0, 0.5, 0.5]
Name: cluster, dtype: object

#create DataFrames from both list columns and concatenate them
df1 = pd.DataFrame(cluster_ids_hour.values.tolist(), index=cluster_ids_hour.index)
#print (df1)

df2 = pd.DataFrame(cluster_ratio_hour.values.tolist(), index=cluster_ratio_hour.index)
#print (df2)
df = pd.concat([df1, df2], axis=1, keys=('clusters','weights'))
print (df)
     clusters            weights               
            0  1  2    3       0    1    2    3
Date                                           
12          0  1  2  3.0     0.0  0.0  0.6  0.4
13          0  1  2  NaN     0.0  0.0  1.0  NaN
14          0  1  2  3.0     0.0  0.0  0.5  0.5
#reshape, cast clusters column to integer    
df = df.stack().reset_index(level=1, drop=True).reset_index()
df['clusters'] = df['clusters'].astype(int)
#pivot, fill NaN with 0
df = df.pivot(index='clusters', columns='Date', values='weights').fillna(0)

df.index.name = None
df.columns.name = None
print (df)
    12   13   14
0  0.0  0.0  0.0
1  0.0  0.0  0.0
2  0.6  1.0  0.5
3  0.4  0.0  0.5
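
As a side note, on a newer pandas the list-to-long reshape above can be written more compactly; a sketch assuming pandas >= 1.3, where DataFrame.explode accepts several columns at once:

# explode both list columns in parallel, then pivot as before
long_df = (pd.DataFrame({'clusters': cluster_ids_hour,
                         'weights': cluster_ratio_hour})
             .explode(['clusters', 'weights'])
             .reset_index())
long_df['clusters'] = long_df['clusters'].astype(int)

df = (long_df.pivot(index='clusters', columns='Date', values='weights')
             .fillna(0))
print(df)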
Another answer counts the cluster sizes per hour, unstacks, and normalizes each column, again using the same sample time_df:

time_df_group = time_df.groupby([time_df.Date.dt.hour, time_df.cluster]).size()
cluster_hour_df = time_df_group.unstack(level=0)
# normalize each hour column so the counts become per-hour weights
cluster_hour_df = cluster_hour_df.apply(lambda col: col / col.sum(), axis=0)
print(cluster_hour_df)


Date      12   13   14
cluster
1        0.6  1.0  0.5
2        0.4  NaN  0.5
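
The same table can also be produced in one call with pd.crosstab in recent pandas versions; missing hour/cluster combinations then come out as 0.0 instead of NaN:

# count clusters per hour and normalize each hour column in one step
out = pd.crosstab(time_df.cluster, time_df.Date.dt.hour, normalize='columns')
print(out)
# Date      12   13   14
# cluster
# 1        0.6  1.0  0.5
# 2        0.4  0.0  0.5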