
Python: Can I log the training loss through a hook with a LinearRegressor?


I am very new to TensorFlow, and I am using TF 1.8 for a "simple" linear regression. The output of the exercise is the set of linear weights that best fit the data, not a prediction model. I therefore want to track, and log, the minimum loss over the current training run, together with the corresponding weight values.

I am trying this with a `LinearRegressor`:

tf.logging.set_verbosity(tf.logging.INFO)

model = tf.estimator.LinearRegressor(
    feature_columns = make_feature_cols(),
    model_dir = TRAINING_OUTDIR
)

# --------------------------------------------v
logger = tf.train.LoggingTensorHook({"loss": ???}, every_n_iter=10)
trainHooks = [logger]

model.train(
    input_fn = make_train_input_fn(df, num_epochs = nEpochs),
    hooks = trainHooks
)
The model does not seem to expose a variable that contains the loss.

Can I use a `LoggingTensorHook` for this? If so, how do I specify the loss tensor?

I also tried implementing my own hook. The examples suggest registering the loss in `before_run` by returning it in `SessionRunArgs`, but I ran into the same problem.


Thanks!

I agree with @jdehesa that `loss` is not directly available without writing a custom `model_fn`. However, with a `LoggingTensorHook` you can fetch the feature estimates at every step and compute any loss or other training metric yourself. I suggest using a `formatter` to process the tensor values the hook makes available. In the example below I use a `LoggingTensorHook` with a custom `formatter` to print the feature estimates and the current MSE loss:

import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)

"""prepare inputs - generate sample data"""
num_features = 5
features = ['f' + str(i) for i in range(num_features)]
X = np.random.randint(-1000, 1000, (10000, num_features))
a = np.random.randint(2, 30, size=num_features) / 10
b = np.random.randint(-99, 99) / 10
y = np.matmul(X, a) + b
noise = np.random.randn(*X.shape)
X = X + (noise * 1)
X.shape, y.shape, a, b
>> ((10000, 5), (10000,), array([2.1, 2. , 1.7, 0.5, 0.9]), 1.8)

""" create model """
feature_cols = [tf.feature_column.numeric_column(k) for k in features]  
X_dict = {features[i]:X[:,i] for i in range (num_features) }

TRAINING_OUTDIR = '.'
model = tf.estimator.LinearRegressor(
    model_dir = TRAINING_OUTDIR,
    feature_columns = feature_cols)

input_fn = tf.estimator.inputs.numpy_input_fn(
    X_dict, y, batch_size=512, num_epochs=50, shuffle=True,
    queue_capacity=1000, num_threads=1)

input_fn_predict = tf.estimator.inputs.numpy_input_fn(
    X_dict, batch_size=X.shape[0], shuffle=False)

"""create hook and formatter"""
feature_var_names = [f"linear/linear_model/{f}/weights" for f in features]
hook_vars_list = ['global_step', 'linear/linear_model/bias_weights'] + feature_var_names

def hooks_formatter(tensor_dict):
    step = tensor_dict['global_step']
    # current weight and bias estimates, taken from the hook's tensor values
    a_hat = [tensor_dict[feat][0][0] for feat in feature_var_names]
    b_hat = tensor_dict['linear/linear_model/bias_weights'][0]
    # recompute predictions and the MSE on the full data set with numpy
    y_pred = np.dot(X, np.array(a_hat).T) + b_hat
    mse_loss = np.mean((y - y_pred)**2)   # MSE
    line = f"step:{step}; MSE_loss: {mse_loss:.4f}; bias:{b_hat:.3f};"
    for f, w in zip(features, a_hat):
        line += f" {f}:{w:.3f};"
    return line
hook1 = tf.train.LoggingTensorHook(hook_vars_list, every_n_iter=10, formatter=hooks_formatter)

"""train"""
model.train(input_fn=input_fn, steps=100, hooks=[hook1])
>>>
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into ./model.ckpt.
INFO:tensorflow:step:1; MSE_loss: 3183865.8670; bias:0.200; f0:0.200; f1:0.200; f2:0.200; f3:0.200; f4:0.200;
INFO:tensorflow:loss = 1924836100.0, step = 1
INFO:tensorflow:step:11; MSE_loss: 1023556.4537; bias:0.359; f0:0.936; f1:0.944; f2:0.903; f3:0.521; f4:0.802;
INFO:tensorflow:step:21; MSE_loss: 468665.2052; bias:0.269; f0:1.294; f1:1.276; f2:1.202; f3:0.437; f4:0.857;
INFO:tensorflow:step:31; MSE_loss: 232310.3535; bias:0.292; f0:1.513; f1:1.491; f2:1.379; f3:0.528; f4:0.893;
INFO:tensorflow:step:41; MSE_loss: 118843.3051; bias:0.278; f0:1.671; f1:1.633; f2:1.491; f3:0.472; f4:0.898;
INFO:tensorflow:step:51; MSE_loss: 62416.4437; bias:0.272; f0:1.782; f1:1.735; f2:1.563; f3:0.505; f4:0.903;
INFO:tensorflow:step:61; MSE_loss: 32799.2320; bias:0.277; f0:1.865; f1:1.808; f2:1.611; f3:0.487; f4:0.899;
INFO:tensorflow:step:71; MSE_loss: 17619.6118; bias:0.270; f0:1.924; f1:1.861; f2:1.641; f3:0.510; f4:0.904;
INFO:tensorflow:step:81; MSE_loss: 9423.0092; bias:0.283; f0:1.970; f1:1.899; f2:1.661; f3:0.494; f4:0.900;
INFO:tensorflow:step:91; MSE_loss: 5062.2780; bias:0.285; f0:2.003; f1:1.927; f2:1.675; f3:0.503; f4:0.901;
INFO:tensorflow:Saving checkpoints for 100 into ./model.ckpt.
INFO:tensorflow:Loss for final step: 1693422.1.
<tensorflow.python.estimator.canned.linear.LinearRegressor at 0x7f90a590f240>

As far as I can tell, this does not seem to be directly possible. Looking at the source, the tensor that computes the loss does not appear to be given any specific name, and it cannot be referenced directly from the estimator. You may (I am not sure) be able to access the metric tensors, but even then those produce the summaries for TensorBoard, not something you can log. It might be possible if you write your own `model_fn`, but that somewhat defeats the point of the canned estimators.
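If you do go the custom `model_fn` route, the trick is simply to give the loss tensor a name yourself; the standard `LoggingTensorHook` can then fetch it by that name. A minimal sketch (TF 1.x; the layer, loss, and optimizer choices here are illustrative, not the canned `LinearRegressor` internals):

```python
import tensorflow as tf

def my_model_fn(features, labels, mode, params):
    # A custom model_fn can name its loss tensor, which the canned
    # estimators do not do; a LoggingTensorHook can then fetch "loss_t".
    net = tf.feature_column.input_layer(features, params['feature_columns'])
    predictions = tf.layers.dense(net, 1, activation=None)
    loss = tf.losses.mean_squared_error(labels, tf.squeeze(predictions, 1))
    loss = tf.identity(loss, name='loss_t')   # <- the named tensor
    optimizer = tf.train.GradientDescentOptimizer(1e-4)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# The hook can now reference the loss by name:
# hook = tf.train.LoggingTensorHook({'loss': 'loss_t'}, every_n_iter=10)
```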