Python: topological sort fails when using the Keras TimeDistributed layer

Tags: python, tensorflow, keras, deep-learning

I am trying to use the Keras TimeDistributed layer to dot the last step along the lookback dimension of a 4D tensor (samples, time steps, lookback, features) against the earlier lookback periods. The model builds and runs fine, but when I call model.fit() it emits warnings that the graph couldn't be sorted in topological order.

From what I have read, this can mess up model training. So what can I do to prevent it from happening?
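For concreteness, here is a small NumPy sketch (a plain-NumPy paraphrase, not my actual Keras code) of the computation I am after on a single (lookback, features) slice:

    import numpy as np

    lookback, num_fea = 5, 11
    x = np.arange(lookback * num_fea, dtype=float).reshape(lookback, num_fea)

    # dot the last lookback step hT against each earlier step ht
    ht, hT = x[:-1, :], x[-1:, :]   # shapes (lookback-1, num_fea) and (1, num_fea)
    dot = (ht * hT).sum(axis=-1)    # shape (lookback-1,): one score per earlier period
    print(dot.shape)                # (4,)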

Environment:

  • Tensorflow GPU 1.15.0
  • CUDA V10.0.130
  • python 3.6.5
  • Keras 2.3.1
  • Keras-Applications 1.0.8
  • Keras-Preprocessing 1.1.0

Warning log:

    2020-03-05 08:36:17.558396: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-03-05 08:36:17.558777: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:533] layout failed: Invalid argument: The graph couldn't be sorted in topological order.
    2020-03-05 08:36:17.559302: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:533] model_pruner failed: Invalid argument: MutableGraphView::MutableGraphView error: node 'loss/time_distributed_1_loss/mean_squared_error/weighted_loss/concat' has self cycle fanin 'loss/time_distributed_1_loss/mean_squared_error/weighted_loss/concat'.
    2020-03-05 08:36:17.560121: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:533] remapper failed: Invalid argument: MutableGraphView::MutableGraphView error: node 'loss/time_distributed_1_loss/mean_squared_error/weighted_loss/concat' has self cycle fanin 'loss/time_distributed_1_loss/mean_squared_error/weighted_loss/concat'.
    2020-03-05 08:36:17.560575: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:533] arithmetic_optimizer failed: Invalid argument: The graph couldn't be sorted in topological order.
    2020-03-05 08:36:17.560853: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-03-05 08:36:17.561141: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
    

    You could consider using a TensorFlow 2.x version.

    I have migrated/upgraded your code and verified that it runs on Google Colab. You can look for more information on how to migrate your code to TensorFlow 2.x.
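    If a full rewrite is not feasible right away, TF 2.x also exposes the 1.x API through a compatibility module; a minimal sketch (my addition, not verified against your full code base):

    # run existing 1.x-style code on a TF 2.x installation
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()
    # ...the remaining 1.x code can then stay largely unchanged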

    Please refer to the code below.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, TimeDistributed

    # Dot layer: dots the last lookback step against each earlier step
    class Dot(tf.keras.layers.Layer):
        def __init__(self, **kwargs):
            super(Dot, self).__init__(**kwargs)

        def call(self, x):
            # per timestep, x has shape (batch, num_lookback, num_fea)
            ht, hT = x[:, :-1, :], x[:, -1:, :]
            ml = tf.multiply(ht, hT)  # broadcast hT across the earlier steps

            # I believe the problem comes from reduce_sum
            dot = tf.reduce_sum(ml, axis=-1)
            return dot

        def compute_output_shape(self, input_shape):
            return (None, input_shape[1] - 1)

    num_fea = 11
    num_lookback = 5
    time_step = 3
    sample = 2

    # create model
    inputs = Input(shape=(time_step, num_lookback, num_fea))
    dot = Dot()
    output = TimeDistributed(dot)(inputs)

    M = Model(inputs=[inputs], outputs=[output])
    M.compile(optimizer='adam', loss='mse')

    # create test data
    data = np.arange(num_lookback * num_fea).reshape((num_lookback, num_fea))
    data = np.broadcast_to(data, shape=(sample, time_step, num_lookback, num_fea))
    y = np.ones(shape=(sample, time_step, num_lookback - 1))

    # fit the model; under TF 2.x the topological-sort errors no longer appear
    M.fit(x=data, y=y, batch_size=2, epochs=10)
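    As a quick sanity check (my addition, not part of the original answer), the TimeDistributed output should have shape (sample, time_step, num_lookback - 1):

    # hypothetical verification snippet
    pred = M.predict(data)
    print(pred.shape)  # expected: (2, 3, 4)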
    

    Thank you for the code, but this is only part of my code. If I change the TF version, I would have to migrate all 1000 lines to TF2 and debug them again.

    Hi @RonakritW, you can refer to the explanation below about that warning.

    I have seen that post, but I do not understand where the cycle is in this implementation.

    The problem may be this line: ml = tf.multiply(ht, hT). You can also check the link.

    I saw that too, but it did not help. I have spent a week looking for a solution and read almost every available answer, and none of the suggestions worked. Second, multiplying ht and hT does not create a cycle; if it does create one in the backend, please show me code or something to prove it.
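    If staying on TF 1.15 is a hard requirement, one workaround to try (my suggestion, not something proposed in this thread) is to disable the Grappler optimizer passes named in the error log via the session's RewriterConfig:

    # Hedged sketch for TF 1.15 with standalone Keras 2.3.1: turn off the
    # Grappler passes that emit the topological-sort errors. This only
    # silences the failing optimizations; verify that training behaves
    # the same on your model.
    import tensorflow as tf
    from tensorflow.core.protobuf import rewriter_config_pb2
    from keras import backend as K

    config = tf.ConfigProto()
    rewrite = config.graph_options.rewrite_options
    rewrite.layout_optimizer = rewriter_config_pb2.RewriterConfig.OFF
    rewrite.arithmetic_optimization = rewriter_config_pb2.RewriterConfig.OFF
    rewrite.dependency_optimization = rewriter_config_pb2.RewriterConfig.OFF
    rewrite.remapping = rewriter_config_pb2.RewriterConfig.OFF
    rewrite.disable_model_pruning = True
    K.set_session(tf.Session(config=config))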