Python: TensorboardX input problem with add_scalar()


When I use tensorboardX to plot the loss, it shows:

AssertionError                            Traceback (most recent call last)
<ipython-input-76-73419a51fcc9> in <module>
----> 1 writer.add_scalar('resnet34_loss', loss)

F:\Program Files\Python\lib\site-packages\tensorboardX\writer.py in add_scalar(self, tag, scalar_value, global_step, walltime)
    403             scalar_value = workspace.FetchBlob(scalar_value)
    404         self._get_file_writer().add_summary(
--> 405             scalar(tag, scalar_value), global_step, walltime)
    406 
    407     def add_scalars(self, main_tag, tag_scalar_dict, global_step=None, walltime=None):

F:\Program Files\Python\lib\site-packages\tensorboardX\summary.py in scalar(name, scalar, collections)
    145     name = _clean_tag(name)
    146     scalar = make_np(scalar)
--> 147     assert(scalar.squeeze().ndim == 0), 'scalar should be 0D'
    148     scalar = float(scalar)
    149     return Summary(value=[Summary.Value(tag=name, simple_value=scalar)])

AssertionError: scalar should be 0D

I have already converted the loss from float to np.array, and I have read the tensorboardX documentation, which says that the add_scalar() function must be given scalar data. I did that, but it still raises this error. Thanks for your help.
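
For context, these are the kinds of values I understand "scalar data" to mean (a minimal sketch, not my actual training code; the value 0.7431 is made up):

from tensorboardX import SummaryWriter
import numpy as np

writer = SummaryWriter('runs/test')
writer.add_scalar('resnet34_loss', 0.7431)            # a plain Python float
writer.add_scalar('resnet34_loss', np.array(0.7431))  # a 0-d numpy array
writer.close()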

I had the same problem. Here is a minimal sample that reproduces your error:

import os.path as osp
import numpy as np
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(osp.join('runs', 'hello'))
loss = np.random.randn(10)   # a 1-d array with 10 elements, not a scalar
writer.add_scalar(tag='Checking range', scalar_value=loss)
writer.close()
which returns:

Traceback (most recent call last):

  File "untitled0.py", line 26, in <module>
    writer.add_scalar(tag='Checking range', scalar_value=loss)

  File "/home/melike/anaconda2/envs/pooling/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py", line 346, in add_scalar
    scalar(tag, scalar_value), global_step, walltime)

  File "/home/melike/anaconda2/envs/pooling/lib/python3.6/site-packages/torch/utils/tensorboard/summary.py", line 248, in scalar
    assert(scalar.squeeze().ndim == 0), 'scalar should be 0D'

AssertionError: scalar should be 0D
And checking loss.squeeze().ndim outputs 1. So we have found the cause of the error: add_scalar expects a 0-d scalar after the squeeze operation, while we gave it a 1-d array.
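
To make the 0-d versus 1-d distinction concrete, here is a quick check of my own (not part of the original code) of what squeeze() leaves behind:

import numpy as np

np.random.randn(10).squeeze().ndim   # 1 -> trips the 'scalar should be 0D' assertion
np.random.randn(1).squeeze().ndim    # 0 -> a length-1 array squeezes down to 0-d and passes
np.array(0.5).squeeze().ndim         # 0 -> a plain 0-d array passes as well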
The PyTorch documentation page has an add_scalar example, so let's convert the code to that version:

writer = SummaryWriter(osp.join('runs', 'hello'))
loss = np.random.randn(10)
for i, val in enumerate(loss):
    writer.add_scalar(tag='Checking range', scalar_value=val, global_step=i)
writer.close()
And this is the output: TensorBoard now shows the 10 values as a scalar plot, one point per step.
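
Applied back to the question, the same fix works: log one 0-d value per step instead of an array. A minimal sketch, assuming loss is (or can be reduced to) a 0-d PyTorch tensor; the dummy torch.rand(()) below only stands in for the real training loss:

import torch
from torch.utils.tensorboard import SummaryWriter   # tensorboardX's SummaryWriter has the same add_scalar API

writer = SummaryWriter('runs/resnet34')
for epoch in range(10):
    loss = torch.rand(())                            # stand-in for the loss computed by the training step
    # .item() turns the 0-d tensor into a plain Python float, which passes the 0-d check
    writer.add_scalar('resnet34_loss', loss.item(), global_step=epoch)
writer.close()

You can then inspect the curve with: tensorboard --logdir runs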