
Python complex dataset splitting - StratifiedGroupShuffleSplit


I have a dataset of ~2 million observations which I need to split into training, validation and test sets in a 60:20:20 ratio. A simplified excerpt of my dataset looks like this:

+---------+------------+-----------+-----------+
| note_id | subject_id | category  |   note    |
+---------+------------+-----------+-----------+
|       1 |          1 | ECG       | blah ...  |
|       2 |          1 | Discharge | blah ...  |
|       3 |          1 | Nursing   | blah ...  |
|       4 |          2 | Nursing   | blah ...  |
|       5 |          2 | Nursing   | blah ...  |
|       6 |          3 | ECG       | blah ...  |
+---------+------------+-----------+-----------+
There are multiple categories, and they are not evenly balanced, so I need to ensure that the training, validation and test sets all have the same category proportions as the original dataset. This part is fine: I can just use StratifiedShuffleSplit from the sklearn library.
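
For that part alone, a minimal sketch (assuming the excerpt above is loaded as a pandas DataFrame named df_main) might look like this:

from sklearn.model_selection import StratifiedShuffleSplit

# 60% train, 40% held out for val+test, stratified on category only
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, rest_idx = next(sss.split(df_main, df_main['category']))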

However, I also need to ensure that the observations from each subject are not split across the training, validation and test datasets. All the observations from a given subject need to be in the same bucket, to ensure that my trained model has never seen that subject before when it comes to validation/testing. E.g. every observation of subject 1 should be in the training set.

I can't think of a way to ensure a stratified split by category, prevent contamination (for want of a better word) of the datasets by subject_id, ensure a 60:20:20 split, and make sure the dataset is somehow shuffled. Any help would be appreciated!

Thanks!

Edit:


I have now learned that grouping by a column, and keeping groups together across the dataset splits, can also be accomplished with sklearn via the GroupShuffleSplit function. So essentially what I need is a combined stratified and grouped shuffle split, i.e. StratifiedGroupShuffleSplit, which does not exist. Github issue:
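
For comparison, a minimal GroupShuffleSplit sketch (again assuming df_main) that keeps each subject's rows together, though without controlling category proportions:

from sklearn.model_selection import GroupShuffleSplit

gss = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, rest_idx = next(gss.split(df_main, groups=df_main['subject_id']))
# every subject_id lands entirely on one side, but categories may drift

(For what it's worth, later scikit-learn releases, 1.0+, eventually added StratifiedGroupKFold, though only as a k-fold variant, not a shuffle split.)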

I think in this case you have to build your own function to split the data. Here is an implementation of mine:

import numpy as np
import pandas as pd


def split(df, based_on='subject_id', cv=5):
    # split the unique group values into cv chunks, then pull each chunk's
    # rows, so that all rows of a group stay in the same fold
    splits = []
    based_on_uniq = df[based_on].unique()
    based_on_uniq = np.array_split(based_on_uniq, cv)
    for fold in based_on_uniq:
        splits.append(df[df[based_on].isin(fold)])
    return splits


if __name__ == '__main__':
    df = pd.DataFrame([{'note_id': 1, 'subject_id': 1, 'category': 'test1', 'note': 'test1'},
                       {'note_id': 2, 'subject_id': 1, 'category': 'test2', 'note': 'test2'},
                       {'note_id': 3, 'subject_id': 2, 'category': 'test3', 'note': 'test3'},
                       {'note_id': 4, 'subject_id': 2, 'category': 'test4', 'note': 'test4'},
                       {'note_id': 5, 'subject_id': 3, 'category': 'test5', 'note': 'test5'},
                       {'note_id': 6, 'subject_id': 3, 'category': 'test6', 'note': 'test6'},
                       {'note_id': 7, 'subject_id': 4, 'category': 'test7', 'note': 'test7'},
                       {'note_id': 8, 'subject_id': 4, 'category': 'test8', 'note': 'test8'},
                       {'note_id': 9, 'subject_id': 5, 'category': 'test9', 'note': 'test9'},
                       {'note_id': 10, 'subject_id': 5, 'category': 'test10', 'note': 'test10'},
                       ])
    print(split(df))
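
Note that this splits the unique subject_id values into cv contiguous chunks, so all of a subject's rows land in exactly one fold; category balance is not considered, which is the gap the approach below tries to close.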
Essentially I need StratifiedGroupShuffleSplit, which does not exist. This is because the behaviour of such a function is unclear, and accomplishing this to generate a dataset which is both grouped and stratified is not always possible, especially with a severely imbalanced dataset like mine. In my case, I want the grouping to be done strictly, to ensure there is no overlap of groups whatsoever, while the stratification and the 60:20:20 dataset ratio split are done approximately, i.e. as well as possible.

As Ghanem mentioned, I had no choice but to build a function to split the dataset myself, which I have done below:

import numpy as np
import pandas as pd

def StratifiedGroupShuffleSplit(df_main):

    df_main = df_main.reindex(np.random.permutation(df_main.index)) # shuffle dataset

    # create empty train, val and test datasets
    df_train = pd.DataFrame()
    df_val = pd.DataFrame()
    df_test = pd.DataFrame()

    hparam_mse_wgt = 0.1 # must be between 0 and 1
    assert(0 <= hparam_mse_wgt <= 1)
    train_proportion = 0.6 # must be between 0 and 1
    assert(0 <= train_proportion <= 1)
    val_test_proportion = (1-train_proportion)/2

    subject_grouped_df_main = df_main.groupby(['subject_id'], sort=False, as_index=False)
    category_grouped_df_main = df_main.groupby('category').count()[['subject_id']]/len(df_main)*100

    def calc_mse_loss(df):
        # mean squared difference between this split's per-category percentages
        # and the per-category percentages of the full dataset
        grouped_df = df.groupby('category').count()[['subject_id']]/len(df)*100
        df_temp = category_grouped_df_main.join(grouped_df, on='category', how='left', lsuffix='_main')
        df_temp.fillna(0, inplace=True)
        df_temp['diff'] = (df_temp['subject_id_main'] - df_temp['subject_id'])**2
        mse_loss = np.mean(df_temp['diff'])
        return mse_loss

    i = 0
    for _, group in subject_grouped_df_main:

        # seed each of the three splits with one subject group
        if (i < 3):
            if (i == 0):
                df_train = pd.concat([df_train, group], ignore_index=True)
                i += 1
                continue
            elif (i == 1):
                df_val = pd.concat([df_val, group], ignore_index=True)
                i += 1
                continue
            else:
                df_test = pd.concat([df_test, group], ignore_index=True)
                i += 1
                continue

        # how much adding this group would improve each split's stratification
        mse_loss_diff_train = calc_mse_loss(df_train) - calc_mse_loss(pd.concat([df_train, group], ignore_index=True))
        mse_loss_diff_val = calc_mse_loss(df_val) - calc_mse_loss(pd.concat([df_val, group], ignore_index=True))
        mse_loss_diff_test = calc_mse_loss(df_test) - calc_mse_loss(pd.concat([df_test, group], ignore_index=True))

        total_records = len(df_train) + len(df_val) + len(df_test)

        # signed shortfall of each split from its target proportion
        len_diff_train = (train_proportion - (len(df_train)/total_records))
        len_diff_val = (val_test_proportion - (len(df_val)/total_records))
        len_diff_test = (val_test_proportion - (len(df_test)/total_records))

        # square the shortfall while keeping its sign
        len_loss_diff_train = len_diff_train * abs(len_diff_train)
        len_loss_diff_val = len_diff_val * abs(len_diff_val)
        len_loss_diff_test = len_diff_test * abs(len_diff_test)

        # weighted combination of the stratification and size terms
        loss_train = (hparam_mse_wgt * mse_loss_diff_train) + ((1-hparam_mse_wgt) * len_loss_diff_train)
        loss_val = (hparam_mse_wgt * mse_loss_diff_val) + ((1-hparam_mse_wgt) * len_loss_diff_val)
        loss_test = (hparam_mse_wgt * mse_loss_diff_test) + ((1-hparam_mse_wgt) * len_loss_diff_test)

        # assign the group to whichever split benefits most
        if (max(loss_train, loss_val, loss_test) == loss_train):
            df_train = pd.concat([df_train, group], ignore_index=True)
        elif (max(loss_train, loss_val, loss_test) == loss_val):
            df_val = pd.concat([df_val, group], ignore_index=True)
        else:
            df_test = pd.concat([df_test, group], ignore_index=True)

        print("Group " + str(i) + ". loss_train: " + str(loss_train) + " | loss_val: " + str(loss_val) + " | loss_test: " + str(loss_test))
        i += 1

    return df_train, df_val, df_test

df_train, df_val, df_test = StratifiedGroupShuffleSplit(df_main)
I created an arbitrary loss function based on two things:

1. The average squared difference between each category's percentage representation in a split and its percentage in the overall dataset.
2. The squared difference between each split's proportional length and what it should be according to the provided ratio (60:20:20).

These two inputs to the loss function are weighted by the static hyperparameter hparam_mse_wgt. For my particular dataset a value of 0.1 worked well, but I encourage you to play with it if you use this function. Setting it to 0 will prioritise only maintaining the split ratio and ignore the stratification; setting it to 1 will do the opposite.
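
Restated as a tiny helper (a sketch only; it just mirrors the loss variables computed inside the function above):

def combined_loss(mse_gain, len_diff, w=0.1):
    # w = hparam_mse_wgt; mse_gain = reduction in stratification MSE from
    # adding the group; len_diff = target proportion minus current proportion
    # (kept signed, so over-full splits score negatively)
    return w * mse_gain + (1 - w) * len_diff * abs(len_diff)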

Using this loss function, I then iterate through each subject group and append it to whichever of the training, validation or test datasets has the highest loss function value.


It's not particularly elaborate, but it does the job for me. It won't necessarily work for every dataset, but the larger the dataset, the better the chance. Hopefully someone else will find it useful.
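
A quick sanity check after running it (a sketch, reusing the frames from above) is to compare category percentages and confirm that no subject crosses splits:

# category distribution per split vs. the full dataset
for name, d in [('all', df_main), ('train', df_train), ('val', df_val), ('test', df_test)]:
    print(name, d['category'].value_counts(normalize=True).round(3).to_dict())

# no subject_id may appear in more than one split
s_train, s_val, s_test = (set(d['subject_id']) for d in (df_train, df_val, df_test))
assert not (s_train & s_val) and not (s_train & s_test) and not (s_val & s_test)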

It's been more than a year, but I found myself in a similar situation where I have labels and groups, and due to the nature of the groups one group of data points can be either only in test or only in train. I've written a small algorithm using pandas and sklearn, and I hope this helps.

from sklearn.model_selection import GroupShuffleSplit

# df is assumed to have a 'label' column (stratification target) and a
# 'groups' column (group id); valid_size is the desired test fraction
valid_size = 0.2

groups = df.groupby('label')
all_train = []
all_test = []
for group_id, group in groups:
    # if a group is already taken in test or train it must stay there
    group = group[~group['groups'].isin(all_train + all_test)]
    # if the group is empty, all of its rows were already assigned
    if group.shape[0] == 0:
        continue
    train_inds, test_inds = next(GroupShuffleSplit(
        test_size=valid_size, n_splits=2, random_state=7).split(group, groups=group['groups']))

    all_train += group.iloc[train_inds]['groups'].tolist()
    all_test += group.iloc[test_inds]['groups'].tolist()

train = df[df['groups'].isin(all_train)]
test = df[df['groups'].isin(all_test)]

# verify there is no group overlap between train and test
form_train = set(train['groups'].tolist())
form_test = set(test['groups'].tolist())
inter = form_train.intersection(form_test)

print(df.groupby('label').count())
print(train.groupby('label').count())
print(test.groupby('label').count())
print(inter) # this should be empty
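
One caveat, with a hypothetical guard that is not part of the answer above: GroupShuffleSplit needs at least two distinct groups to split, so a label whose remaining rows all belong to a single group id will raise a ValueError. A guard placed right after the empty check inside the loop could look like:

if group['groups'].nunique() < 2:
    all_train += group['groups'].unique().tolist()  # arbitrarily keep it in train
    continue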

I just had to solve the same problem. In my document-processing use case, I want words from the same page to stick together, while the document categories should be stratified evenly across the train and test sets. For my problem it holds that, for all instances of one group, we have the same stratification category, i.e. all words of a page belong to the same category. Therefore, I found it easiest to perform the stratified split directly on the groups and then use the split groups to select the instances. If this assumption does not hold, this solution is not applicable.

from typing import Tuple

import pandas as pd
from sklearn.model_selection import train_test_split


def stratified_group_train_test_split(
    samples: pd.DataFrame, group: str, stratify_by: str, test_size: float
) -> Tuple[pd.DataFrame, pd.DataFrame]:
    # one row per group; the group's category is its stratification label
    groups = samples[group].drop_duplicates()
    stratify = samples.drop_duplicates(group)[stratify_by].to_numpy()
    # stratified split over the groups, then select each group's instances
    groups_train, groups_test = train_test_split(groups, stratify=stratify, test_size=test_size)

    samples_train = samples.loc[lambda d: d[group].isin(groups_train)]
    samples_test = samples.loc[lambda d: d[group].isin(groups_test)]

    return samples_train, samples_test
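
A hypothetical call (df_words, page_id and category are assumed names for the word-level data described above):

# hypothetical usage on word-level data, one category per page
samples_train, samples_test = stratified_group_train_test_split(
    samples=df_words, group='page_id', stratify_by='category', test_size=0.2
)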
I think you may be right. Thanks for providing the starter code that returns the split points. I guess the question is how much of this can be simplified by using the functions the library already provides. I'll see what I can come up with.