Machine learning: dimensionality reduction with an autoencoder


Here is my version of an autoencoder, written with PyTorch:

import warnings
warnings.filterwarnings('ignore')
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.preprocessing import MultiLabelBinarizer, scale
from ast import literal_eval
import seaborn as sns
sns.set_style("darkgrid")
import torch

%matplotlib inline

f = []
f.append(np.random.uniform(0,10,(1 , 10)).flatten())
f.append(np.random.uniform(10,20,(1 , 10)).flatten())
f.append(np.random.uniform(20,30,(1 , 10)).flatten())
x_data = torch.FloatTensor(np.array(f))
x_data

dimensions_input = 10
hidden_layer_nodes = 5
output_dimension = 10

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(dimensions_input,hidden_layer_nodes)
        self.sigmoid = torch.nn.Sigmoid()
        self.linear2 = torch.nn.Linear(hidden_layer_nodes,output_dimension)

    def forward(self, x):
        l_out1 = self.linear(x)
        l_out2 = self.sigmoid(l_out1)
        y_pred = self.linear2(l_out2)
        return y_pred

model = Model()

criterion = torch.nn.MSELoss(reduction='sum')  # size_average=False is deprecated
optim = torch.optim.SGD(model.parameters(), lr = 0.00001)

def train_model():
    y_data = x_data.clone()
    for i in range(150000):
        y_pred = model(x_data)
        loss = criterion(y_pred, y_data)

        if i % 5000 == 0:
            print(loss)
        optim.zero_grad()

        loss.backward()
        optim.step()
Using
x_data.clone()
as the target, I train the network to learn a feature representation of the input data.

I am trying to produce hidden-layer weights that match the dimensions of the rows of the input data, so that each vector in
x_data
has a corresponding encoding. But the hidden layer yields a vector of size 5. How can I change this network so that it produces a matrix representing the reduced dimensions of the input data?
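For reference, the per-row codes can be read off the encoder half of a trained model by running only the first linear layer followed by the sigmoid. This is a hypothetical sketch (the layer sizes mirror the question; the `encoder` module and data here are illustrative, not from the original post):

```python
import numpy as np
import torch

torch.manual_seed(0)

dimensions_input = 10   # size of each input row
hidden_layer_nodes = 5  # size of each learned code

# Encoder half only: first linear layer + sigmoid, as in the question's Model.
encoder = torch.nn.Sequential(
    torch.nn.Linear(dimensions_input, hidden_layer_nodes),
    torch.nn.Sigmoid(),
)

# Three example rows, shaped like the question's x_data.
x_data = torch.FloatTensor(np.random.uniform(0, 30, (3, 10)))

with torch.no_grad():
    codes = encoder(x_data)  # one 5-dimensional encoding per input row

print(codes.shape)
```

Because the encoder is applied row-wise, passing the whole `x_data` matrix produces a matrix of codes: one 5-dimensional row per 10-dimensional input row.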

You can find a minimal example of an autoencoder using PyTorch. It uses the
nn.Sequential
interface, but it is basically the same. Your code does not run; I had to modify your
train_model
function to make it run:
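A sketch of what such an nn.Sequential autoencoder and corrected training loop might look like (an illustration under the question's layer sizes, not the answerer's original code):

```python
import numpy as np
import torch

torch.manual_seed(0)

# Encoder (10 -> 5) and decoder (5 -> 10) in one Sequential model.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 5),
    torch.nn.Sigmoid(),
    torch.nn.Linear(5, 10),
)

criterion = torch.nn.MSELoss(reduction='sum')
optim = torch.optim.SGD(model.parameters(), lr=1e-5)

# Same style of toy data as in the question.
x_data = torch.FloatTensor(np.array([
    np.random.uniform(0, 10, 10),
    np.random.uniform(10, 20, 10),
    np.random.uniform(20, 30, 10),
]))
y_data = x_data.clone()  # autoencoder target = input

initial_loss = criterion(model(x_data), y_data).item()

for i in range(2000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    optim.zero_grad()
    loss.backward()
    optim.step()

final_loss = loss.item()
print(initial_loss, final_loss)
```

The reconstruction loss should fall over training; the learned codes for all rows are then `model[:2](x_data)`, i.e. the output of the first two layers.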