Keras custom Lambda layer: how to normalize/scale the output


I am trying to scale the output of a Lambda layer. The code is below. My X_train is 100*15*24 and my Y_train is 100*1 (the network consists of an LSTM layer followed by Dense layers).


How can I correctly normalize the output of lambda_layer? Any ideas or suggestions would be appreciated.

I don't think Scikit-learn transformers work inside a Lambda layer. If you just want a normalized output with respect to the incoming data, you can do the following:

from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model
import tensorflow as tf


timesteps = 3
num_feat = 12
input_shape=(timesteps, num_feat)
data_input = Input(shape=input_shape, name="input_layer")
lstm1 = LSTM(10, name="lstm_layer")(data_input)
dense1 = Dense(4, activation="relu", name="dense1")(lstm1)
dense2 = Dense(1, activation="custom_activation_1", name="dense2")(dense1)
dense3 = Dense(1, activation="custom_activation_2", name="dense3")(dense1)
# dense2 and dense3 use custom activation functions whose range is the whole
# real line (which is why the output needs to be normalized)


## custom lambda layer/ loss function ##
def custom_layer(new_input):

    add_input = new_input[0]+new_input[1]
    
    # min-max normalize along the batch axis; note the denominator is
    # reduce_max - reduce_min (the original post had reduce_max - reduce_max,
    # which makes the denominator zero)
    normalized = (add_input - tf.reduce_min(add_input, axis=0, keepdims=True)) / \
                 (tf.reduce_max(add_input, axis=0, keepdims=True) - tf.reduce_min(add_input, axis=0, keepdims=True))
    
    return normalized

lambda_layer = Lambda(custom_layer, name="lambda_layer")([dense2, dense3])

model = Model(inputs=data_input, outputs=lambda_layer) 
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

Thank you so much @thushv89, it worked! May I ask a follow-up question: is the reason for using tf.reduce_max that "add_input" is a 2-D array, so I can't just use min(), which only applies to 1-D input? In other words, whenever we want to apply min/max to an array with more than one dimension, we should use tf.reduce_min/max — is that correct? (By the way, I think there is a typo: the denominator of "normalized" should be tf.reduce_max - tf.reduce_min.) @Doi_Ann, you can use reduce_min/max on a 1-D vector, on an n-D tensor over multiple axes, or on an n-D tensor over a single axis — it is quite versatile. The reason I use axis=0 is that this is what the Scikit-learn MinMaxScaler does.
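To illustrate the point in the comment: an axis-wise min/max (NumPy's analogue of tf.reduce_min/tf.reduce_max) works on arrays of any rank, and axis=0 takes the minimum/maximum down each column — the per-feature behavior of Scikit-learn's MinMaxScaler. A small NumPy sketch (the array values are assumptions for illustration):

```python
import numpy as np

x = np.array([[1.0, 8.0],
              [3.0, 2.0],
              [5.0, 4.0]])  # shape (3, 2): 3 samples, 2 features

print(x.min())        # -> 1.0        (global minimum, works for any rank)
print(x.min(axis=0))  # -> [1. 2.]    (per-column min, like tf.reduce_min(x, axis=0))
print(x.max(axis=0))  # -> [5. 8.]    (per-column max, like tf.reduce_max(x, axis=0))
```

So the reduce ops are not restricted to 1-D input; the axis argument simply chooses which dimension(s) to reduce over.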