Python InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape

Tags: python, tensorflow, deep-learning, pycharm

I wrote the following code in PyCharm; it is a fully connected layer (FCL) in TensorFlow. An InvalidArgumentError occurs for the placeholder. I set dtype, shape, and name on the placeholder, but I still get the error.

I want to generate a new signal (1, 222) through the FCL model:
input signal (1, 222) => output signal (1, 222).

  • maxPredict: finds the index with the highest value in the output signal.
  • calculated Y: gets the frequency-array value corresponding to maxPredict.
  • loss: uses the difference between true Y and calculated Y as the loss.
  • loss = tf.abs(trueY - calculatedY)
Code (where the error occurs)
x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')

Error

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
     [[{{node inputX}} = Placeholder[dtype=DT_FLOAT, shape=[1,222], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

During handling of the above exception, another exception occurred:
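For context: this error is raised whenever an op that depends on a placeholder is evaluated without feeding that placeholder. A minimal sketch of the failure mode, with illustrative names:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 222], name='inputX')
doubled = x * 2.0

with tf.Session() as sess:
    # sess.run(doubled)  # would raise: You must feed a value for placeholder tensor 'inputX'
    signal = np.random.rand(1, 222).astype(np.float32)
    print(sess.run(doubled, feed_dict={x: signal}).shape)  # (1, 222)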

New error cases. I changed the code:
x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')

Error case 1
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - tf.Variable(newY))

ValueError: initial_value must have a shape specified: Tensor("mul:0", shape=(?,), dtype=float32)
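This ValueError happens because a variable's initial value must have a fully known static shape, while newY has shape (?,) (its first dimension is the dynamic batch size). A minimal reproduction of just the shape problem; validate_shape=False silences the error, but as discussed below, wrapping a prediction in tf.Variable is the wrong fix anyway:

import tensorflow as tf

dynamic = tf.placeholder(tf.float32, shape=[None])  # shape (?,), like newY
# v = tf.Variable(dynamic)                      # ValueError: initial_value must have a shape specified
v = tf.Variable(dynamic, validate_shape=False)  # builds, but v is a copy, not the prediction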

Error case 2
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - newY)

Traceback (most recent call last):
  File "D:/PycharmProject/DetectionSignal/TEST_FCL_StackOverflow.py", line 127, in <module>
    trainStep = opt.minimize(loss)
  File "C:\Users\Heewony\Anaconda3\envs\TSFW_pycharm\lib\site-packages\tensorflow\python\training\optimizer.py", line 407, in minimize
    ([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [tf.Variable 'Variable:0' shape=(222, 1024) dtype=float32_ref, tf.Variable 'Variable_1:0' shape=(1024,) dtype=float32_ref, ..., tf.Variable 'Variable_5:0' shape=(222,) dtype=float32_ref] and loss Tensor("Abs:0", dtype=float32)

Development environment
  • OS platform and distribution: Windows 10 x64
  • TensorFlow installed from: Anaconda
  • TensorFlow version: 1.12.0
  • Python version: 3.6.7
  • Mobile device: N/A
  • Exact commands to reproduce: N/A
  • GPU model and memory: NVIDIA GeForce GTX 1080 Ti
  • CUDA/cuDNN: 9.0/7.4
Model & Function / Graph / Session (code sections omitted)
The type or shape of the variable batchSignal seems to be wrong. It must be a numpy array of exactly the shape [1, 222]. If you want to use a batch of examples of size n × 222, the placeholder x should have the shape [None, 222] and the placeholder y the shape [None].
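A quick check of what those placeholder shapes accept, with dummy data standing in for batchSignal:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')
y = tf.placeholder(tf.float32, shape=[None], name='trueY')

batchSignal = np.random.rand(8, 222).astype(np.float32)  # n = 8 examples
trueY = np.random.rand(8).astype(np.float32)

with tf.Session() as sess:
    # a batch of 8 and a single example of shape [1, 222] are both accepted
    print(sess.run(tf.shape(x), feed_dict={x: batchSignal}))      # [8 222]
    print(sess.run(tf.shape(x), feed_dict={x: batchSignal[:1]}))  # [1 222]
    print(sess.run(tf.shape(y), feed_dict={y: trueY}))            # [8]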


By the way, consider using tf.layers.dense instead of explicitly initializing variables and implementing the layers yourself.
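For instance, a minimal sketch of the same three-layer network using tf.layers.dense (the function name Model_FCL_dense is illustrative):

import tensorflow as tf

def Model_FCL_dense(inputX):
    # dense() creates and initializes the weight/bias variables for you
    fch1 = tf.layers.dense(inputX, 1024, activation=tf.nn.relu, name='fc1')
    fch2 = tf.layers.dense(fch1, 1024, activation=tf.nn.relu, name='fc2')
    logits = tf.layers.dense(fch2, 222, name='fc3')
    return tf.nn.softmax(logits), logits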

There are a few things to change.

Error case 0: You don't need to reshape the flow between layers. You can put None at the first dimension to pass a dynamic batch size.

Error case 1: You can use newY directly as the output of your NN. tf.Variable should only be used to define weights or biases.

Error case 2: TensorFlow seems to have no usable gradient path through tf.abs() or tf.gather() here. For a regression problem, mean squared error is usually sufficient (see the demonstration below).
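The "No gradients provided" failure can be reproduced in isolation. Strictly speaking, it is the integer-valued tf.argmax that has no gradient, so everything downstream of it, including the tf.gather lookup, is disconnected from the trainable variables:

import tensorflow as tf

w = tf.Variable(tf.random_normal([4, 3]))
scores = tf.matmul(tf.ones([1, 4]), w)
idx = tf.argmax(scores, 1)                          # integer output: gradient path ends here
picked = tf.gather(tf.constant([1.0, 2.0, 3.0]), idx)

print(tf.gradients(picked, w))                      # [None] -> "No gradients provided for any variable"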

Here is how I would rewrite your code. I don't have your MATLAB part, so I couldn't debug your Python/MATLAB interface:

Model:

def Model_FCL(inputX):
    # Fully Connected Layer 1
    fcW1 = tf.get_variable('w1', shape=[222, 1024], initializer=tf.truncated_normal_initializer())
    fcb1 = tf.get_variable('b1', shape=[1024], initializer=tf.truncated_normal_initializer())
    # fcb1 = tf.get_variable('b1', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    fch1 = tf.nn.relu(tf.matmul(inputX, fcW1) + fcb1, name='relu1')

    # Fully Connected Layer 2
    fcW2 = tf.get_variable('w2', shape=[1024, 1024], initializer=tf.truncated_normal_initializer())
    fcb2 = tf.get_variable('b2', shape=[1024], initializer=tf.truncated_normal_initializer())
    # fcb2 = tf.get_variable('b2', shape=[1024], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    fch2 = tf.nn.relu(tf.matmul(fch1, fcW2) + fcb2, name='relu2')

    # Output Layer
    fcW3 = tf.get_variable('w3', shape=[1024, 222], initializer=tf.truncated_normal_initializer())
    fcb3 = tf.get_variable('b3', shape=[222], initializer=tf.truncated_normal_initializer())
    # fcb3 = tf.get_variable('b3', shape=[222], trainable=False, initializer=tf.constant_initializer(valueThatYouWant))  # if you want to keep your bias constant
    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)

    predictY = tf.nn.softmax(logits)  # I'm not sure it will learn if you do softmax and then abs/MSE
    return predictY, logits
Graph:

with myGraph.as_default():
    # placeholders to receive the input and output data
    # put None (dynamic batch size), not -1, at the first dimension so that you can change your batch size
    x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')  # Signal size = [1, 222]
    y = tf.placeholder(tf.float32, shape=[None], name='trueY')  # Float value size = [1]

    ...

    predictY, logits = Model_FCL(x)  # Predicted signal, size = [1, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # Find max index of the predicted signal

    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60   # Find the value that corresponds to the freq-array index

    loss = tf.losses.mean_squared_error(labels=y, predictions=newY)  # maybe use MSE for a regression problem
    # loss = tf.abs(y - newY)  # absolute difference (true Y - predicted Y); no usable gradient path here
    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)
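The Session part of the question isn't reproduced above, so the following is only a sketch of how training might be run, assuming batchSignal is an n × 222 numpy array and trueY a length-n array:

with myGraph.as_default():
    initOp = tf.global_variables_initializer()

with tf.Session(graph=myGraph) as sess:
    sess.run(initOp)
    for step in range(1000):
        _, lossValue = sess.run([trainStep, loss],
                                feed_dict={x: batchSignal, y: trueY})
        if step % 100 == 0:
            print(step, lossValue)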

Follow-up (asker): I changed it to
x = tf.placeholder(tf.float32, shape=[None, 222], name='inputX')  # Signal size = [1, 222]
but other errors occurred. With
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - tf.Variable(newY))
I get ValueError: initial_value must have a shape specified: Tensor("mul:0", shape=(?,), dtype=float32). If I change the loss to
loss = tf.abs(y - newY)
it fails at trainStep = opt.minimize(loss) with ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients.

Reply (answerer): That is expected. loss = tf.abs(y - tf.Variable(newY)) creates a variable initialized from newY (i.e., with values copied from newY); from then on, that variable is independent of newY. Variable(newY) is not your prediction but a separate variable that happens to start with the same values, so the error cannot propagate back into the model. Error case 2 (loss = tf.abs(y - newY)) is what then produces the ValueError: No gradients provided for any variable.
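The disconnection described here is easy to demonstrate: a variable initialized from a tensor is a new leaf of the graph, so gradients back to the original model stop there (a toy example, unrelated to the model above):

import tensorflow as tf

a = tf.Variable(3.0)
pred = a * 2.0              # stands in for the model's prediction
copy = tf.Variable(pred)    # initialized from pred's value, then independent of it

print(tf.gradients(pred, a))   # [<tf.Tensor ...>]  gradient flows
print(tf.gradients(copy, a))   # [None]             the link to a is cut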