Python: I am working on an algorithm to recover data, and the situation does not allow using neural networks


Can I make use of the characteristics of the original data and of the encoder to recover the original data more accurately? The environment this program runs in does not allow complex models such as neural networks. The following code shows the problem I am facing:

import random

import torch

def fakeDataGenerator(chanNum=31):
    # This function generates the data I want to recover, and it shows the
    # characteristics of the data I am working on: it is continuous and
    # piecewise linear, with the slope flipping sign at randomly placed peaks.
    peaks = random.sample(range(chanNum), random.choice(range(3, 10)))
    peaks.append(chanNum)
    peaks.sort()
    out = [random.choice(range(-5, 5))]
    delta = 1
    while len(out) < chanNum:
        if len(out) < peaks[0]:
            out.append(out[-1] + delta)
        elif len(out) == peaks[0]:
            delta *= -1  # reverse direction at each peak
            peaks.pop(0)
    return out

originalData = torch.tensor(fakeDataGenerator(31)).reshape(1, 31).float()

encoder = torch.rand((31, 9)).float()  # the encoder is what messes the data up
code = torch.matmul(originalData, encoder)  # the code: the data after being messed up by the encoder

decoder = torch.pinverse(encoder)  # we can make use of the encoder matrix to decode the data;
# for example, here I apply the pseudoinverse to recover it, but...

decoded = torch.matmul(code, decoder)
print(decoded - originalData)  # the result is no good
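Since the encoder maps 31 channels down to 9, the pseudoinverse can only return the minimum-norm preimage, and the 22 lost dimensions are filled with zeros rather than with the structure the data actually has. One way to use that structure (continuous, piecewise linear) without a neural network is regularized least squares: solve for the vector that reproduces the code while penalizing large second differences, which are zero on linear segments. This is a minimal sketch, not the question's own method; `smooth_decode`, `second_diff_matrix`, and the penalty weight `lam` are my own names and assumptions, and `lam` would need tuning for the actual noise level:

```python
import torch

def second_diff_matrix(n, dtype=torch.float32):
    # (n-2) x n operator whose rows compute x[i] - 2*x[i+1] + x[i+2];
    # it is zero exactly on linear (affine) segments of x.
    D = torch.zeros(n - 2, n, dtype=dtype)
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return D

def smooth_decode(code, encoder, lam=1.0):
    # Solve argmin_x ||x @ encoder - code||^2 + lam * ||D @ x.T||^2.
    # Setting the gradient to zero gives the normal equations
    # (E E^T + lam * D^T D) x^T = E code^T, a plain 31x31 linear solve.
    n = encoder.shape[0]
    D = second_diff_matrix(n, dtype=encoder.dtype)
    A = encoder @ encoder.T + lam * (D.T @ D)  # (n, n)
    b = encoder @ code.T                       # (n, 1)
    return torch.linalg.solve(A, b).T
```

The smoothness term makes the otherwise rank-deficient system invertible, so the decoder picks, among all vectors consistent with the code, the one with the smallest curvature instead of the smallest norm. Other convex priors (e.g. an L1 penalty on second differences, which matches piecewise-linear signals even better) would need an iterative solver rather than this closed form.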