Python 3.x: Why do I get `NotImplementedError()` when building a custom optimizer in TensorFlow?
I am working on image classification and trying to implement a custom optimizer in TensorFlow (based on a paper published in an ELSEVIER journal). I tried to modify the code as follows. I have some other functions, but they are all related to preprocessing, model architecture, and so on. My optimizer code is:
import os
os.environ['TF_KERAS'] = '1'
from tensorflow import keras
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import cv2
import imutils
import matplotlib.pyplot as plt
from os import listdir
from sklearn.metrics import confusion_matrix,classification_report
import logging, warnings
import numpy as np
from tensorflow.python.training import optimizer
from tensorflow.python.ops import math_ops, state_ops, control_flow_ops, variable_scope
from tensorflow.python.framework import ops
class BPVAM(optimizer.Optimizer):
    """Back-propagation algorithm with variable adaptive momentum.

    Variables are updated in two steps:
        1) v(t + 1) = alpha * v(t) - lr * g(t)
        2) w(t + 1) = w(t) + v(t + 1)
    where
        - v(t + 1): delta for the update at step t + 1
        - w(t + 1): weights at step t + 1 (after the update)
        - g(t): gradients at step t
        - lr: learning rate
        - alpha: momentum parameter

    In this algorithm alpha is not fixed; it is variable and parametrized by:
        alpha(t) = lambda / (1 - beta ^ t)
    """

    def __init__(
            self,
            lr: float = 0.001,
            lam: float = 0.02,
            beta: float = 0.998,
            use_locking: bool = False,
            name: str = 'BPVAM'
    ):
        """
        Args:
            lr: learning rate
            lam: momentum parameter (lambda)
            beta: momentum parameter
            use_locking: whether to use locks for update operations
            name: optional name for the operations created by the optimizer
        """
        super(BPVAM, self).__init__(use_locking, name)
        self._lr = lr
        self._lambda = lam
        self._beta = beta
        self._lr_tensor = None
        self._lambda_tensor = None
        self._beta_tensor = None

    def _create_slots(self, var_list):
        # One velocity slot and one (per-variable) beta accumulator slot.
        for v in var_list:
            self._zeros_slot(v, 'v', self._name)
            self._get_or_make_slot(v,
                                   ops.convert_to_tensor(self._beta),
                                   'beta',
                                   self._name)

    def _prepare(self):
        self._lr_tensor = ops.convert_to_tensor(self._lr, name='lr')
        self._lambda_tensor = ops.convert_to_tensor(self._lambda, name='lambda')

    def _apply_dense(self, grad, var):
        lr_t = math_ops.cast(self._lr_tensor, var.dtype.base_dtype)
        lambda_t = math_ops.cast(self._lambda_tensor, var.dtype.base_dtype)
        v = self.get_slot(var, 'v')
        betas = self.get_slot(var, 'beta')
        beta_t = state_ops.assign(betas, betas * betas)
        alpha = lambda_t / (1 - beta_t)
        v_t = state_ops.assign(v, alpha * v - lr_t * grad)
        var_update = state_ops.assign_add(var, v_t, use_locking=self._use_locking)
        return control_flow_ops.group(*[beta_t, v_t, var_update])
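The two-step update rule from the docstring can be sketched in plain NumPy (an illustration with assumed example values, separate from the TensorFlow implementation above):

```python
import numpy as np

def bpvam_step(w, v, g, t, lr=0.001, lam=0.02, beta=0.998):
    """One BPVAM update: alpha(t) = lam / (1 - beta**t), then
    v <- alpha * v - lr * g and w <- w + v."""
    alpha = lam / (1.0 - beta ** t)  # variable adaptive momentum
    v_new = alpha * v - lr * g       # step 1: update the velocity
    w_new = w + v_new                # step 2: apply the velocity to the weights
    return w_new, v_new

# Example values (assumed, purely for illustration)
w, v = np.array([1.0]), np.array([0.0])
g = np.array([0.5])
w, v = bpvam_step(w, v, g, t=1)
```

Note that the TensorFlow class above implements the beta power by squaring a per-variable slot each step, while this sketch follows the `beta ** t` form stated in the docstring.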
After creating the optimizer and running it with
myopt = BPVAM()
model.compile(optimizer= myopt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
I got this error message:
Traceback (most recent call last):
  File "/Users/classification.py", line 264, in <module>
    model.fit(x=X_train, y=y_train, batch_size=32, epochs=50, validation_data=(X_val, y_val))
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 780, in fit
    steps_name='steps_per_epoch')
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 157, in model_iteration
    f = _make_execution_function(model, mode)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 532, in _make_execution_function
    return model._make_execution_function(mode)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 2276, in _make_execution_function
    self._make_train_function()
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 2219, in _make_train_function
    params=self._collected_trainable_weights, loss=self.total_loss)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/optimizers.py", line 753, in get_updates
    grads, global_step=self.iterations)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/training/optimizer.py", line 614, in apply_gradients
    update_ops.append(processor.update_op(self, grad))
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/training/optimizer.py", line 171, in update_op
    update_op = optimizer._resource_apply_dense(g, self._v)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/training/optimizer.py", line 954, in _resource_apply_dense
    raise NotImplementedError()
NotImplementedError
I don't understand where the problem is. I am using Tensorflow 1.14.0 with Python 3.7. I created a virtual environment and tried other Tensorflow and Python versions, but it still doesn't work.

To use a class that inherits from tensorflow.python.training.optimizer.Optimizer, you have to implement at least the following methods:

_apply_dense
_resource_apply_dense
_apply_sparse

Since you are trying to implement a custom momentum method, you may want to subclass MomentumOptimizer directly.

Still on Tensorflow 1.14.0, do I need to implement these methods?