
Python: Resize a PyTorch tensor to a smaller size while keeping gradients


I am trying to shrink a tensor from (3, 3) down to (1, 1), but I want to keep the original tensor:

import torch

a = torch.rand(3, 3)
a_copy = a.clone()
a_copy.resize_(1, 1)
I need requires_grad=True on my initial tensor, but PyTorch forbids me from resizing the copy:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.clone()
a_copy.resize_(1, 1)
This throws an error:

Traceback (most recent call last):
  File "pytorch_test.py", line 7, in <module>
    a_copy.resize_(1, 1)
RuntimeError: cannot resize variables that require grad
Resizing a copy made from .data or .detach() instead produces this error:

Traceback (most recent call last):
  File "pytorch_test.py", line 14, in <module>
    a_copy.resize_(1, 1)
RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block.
For example, change:
    x.data.set_(y)
to:
    with torch.no_grad():
        x.set_(y)
But it still gives me an error about grad:

Traceback (most recent call last):
  File "pytorch_test.py", line 21, in <module>
    a_copy.resize_(1, 1)
RuntimeError: cannot resize variables that require grad
I have looked at a similar question, but the tensor in that example retains all of the original values. I have also looked into the different ways of copying a tensor.


I am using PyTorch version 1.4.0.

I think you should first detach and then clone:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.detach().clone()
a_copy.resize_(1, 1)
Note: a.detach() returns a new tensor detached from the current graph (it does not detach a itself from the graph). But because it shares its storage with a, you should also clone it; that way, nothing you do to a_copy will affect a. However, I am not sure why a.detach().clone() works but a.clone().detach() gives the error.
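The storage-sharing point above can be checked directly; here is a minimal sketch (variable names d and c are illustrative):

```python
import torch

a = torch.rand(3, 3, requires_grad=True)

# a.detach() returns an alias that shares storage with a
d = a.detach()
print(a.data_ptr() == d.data_ptr())  # True: same underlying memory

# a write through the detached alias is visible in a
d[0, 0] = 42.0
print(a[0, 0].item())  # 42.0

# a.detach().clone() owns its own memory, so a is unaffected
c = a.detach().clone()
print(a.data_ptr() == c.data_ptr())  # False
c[0, 0] = 7.0
print(a[0, 0].item())  # still 42.0
```

This is exactly why detach alone is not enough: without the clone, resizing or editing the copy would scribble over the original tensor's data.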

Edit

The following also works, and may be the better solution: do the clone and the resize inside a with torch.no_grad(): block.

Here are two helper functions to check storage sharing and contiguity:

import torch

def samestorage(x, y):
    if x.storage().data_ptr() == y.storage().data_ptr():
        print("same storage")
    else:
        print("different storage")

def contiguous(y):
    if y.is_contiguous():
        print("contiguous")
    else:
        print("non contiguous")

# narrow => same storage, contiguous tensors
x = torch.randn(3, 3, requires_grad=True)
y = x.narrow(0, 1, 2)  # dim, start, length
print(x)
print(y)
contiguous(y)
samestorage(x, y)
Output:

tensor([[ 1.1383, -1.2937,  0.8451],
        [ 0.0151,  0.8608,  1.4623],
        [ 0.8490, -0.0870, -0.0254]], requires_grad=True)
tensor([[ 0.0151,  0.8608,  1.4623],
        [ 0.8490, -0.0870, -0.0254]], grad_fn=<SliceBackward>)
contiguous
same storage
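Since the goal in the question is a smaller tensor that still participates in autograd, it is worth noting that narrow (or plain slicing) is differentiable, unlike resize_. A sketch assuming the (1, 1) target from the question:

```python
import torch

x = torch.randn(3, 3, requires_grad=True)

# narrow twice (dim, start, length) to get a (1, 1) view; x[:1, :1] is equivalent
y = x.narrow(0, 0, 1).narrow(1, 0, 1)
print(y.shape)          # torch.Size([1, 1])
print(y.requires_grad)  # True

# gradients flow back through the view to the original tensor
y.sum().backward()
print(x.grad)  # 1.0 at position [0, 0], zeros elsewhere
```

The trade-off is that a view selects existing values rather than reallocating, so this only fits if picking a sub-block of the original is acceptable.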

Have you tried detaching first and then cloning: a_copy = a.detach().clone()? @andresk
No… I just tried it and it works. @MoonCheesez I have edited the answer. You can also use with torch.no_grad():
a = torch.rand(3, 3, requires_grad=True)

with torch.no_grad():
    a_copy = a.clone()
    a_copy.resize_(1, 1)
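A quick check of what the no_grad version produces (a sketch; the prints are only for illustration):

```python
import torch

a = torch.rand(3, 3, requires_grad=True)

with torch.no_grad():
    # under no_grad, the clone does not require grad, so resize_ is allowed
    a_copy = a.clone()
    a_copy.resize_(1, 1)

print(a_copy.shape)          # torch.Size([1, 1])
print(a_copy.requires_grad)  # False: the copy is cut off from autograd
print(a.shape)               # torch.Size([3, 3]): the original is untouched
print(a.requires_grad)       # True
```

So both solutions yield a resized copy that no longer tracks gradients, while the original keeps requires_grad=True.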