Python: multiplying two higher-dimensional tensors, (2,5,3) * (2,5) → (2,5,3)


I want to multiply a (2,5,3) tensor by a (2,5) tensor to get a (2,5,3) result, i.e. scale each row vector by its own scalar.

(An example of the desired computation is shown as pseudocode at the end of this post.)

How can I implement this concisely with the PyTorch API, rather than with nested for loops?

Thanks in advance.

You can do this by correctly aligning the dimensions of the two tensors:

import torch
from torch.nn import Embedding

emb = Embedding(6, 3)
inp = torch.tensor([[1, 2, 3, 4, 5],
                    [2, 3, 1, 4, 5]])
input_emb = emb(inp)  # shape (2, 5, 3)

# Append a trailing dim to inp -> (2, 5, 1), then broadcast against (2, 5, 3)
inp[..., None] * input_emb

tensor([[[-0.3069, -0.7727, -0.3772],
         [-2.8308,  1.3438, -1.1167],
         [ 0.6366,  0.6509, -3.2282],
         [-4.3004,  3.2342, -0.6556],
         [-3.0045, -0.0191, -7.4436]],

        [[-2.8308,  1.3438, -1.1167],
         [ 0.6366,  0.6509, -3.2282],
         [-0.3069, -0.7727, -0.3772],
         [-4.3004,  3.2342, -0.6556],
         [-3.0045, -0.0191, -7.4436]]], grad_fn=<MulBackward0>)
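For reference, the same row-wise scaling can be written in a few equivalent ways. This is a sketch using hypothetical fixed tensors (not the `Embedding` output above) so the shapes are easy to follow:

```python
import torch

# Hypothetical fixed tensors standing in for input_emb (2, 5, 3) and inp (2, 5).
x = torch.arange(30, dtype=torch.float32).reshape(2, 5, 3)
s = torch.arange(10, dtype=torch.float32).reshape(2, 5)

# Three equivalent ways to scale each row vector of x by its scalar in s:
a = s[..., None] * x                    # None appends a trailing dim -> (2, 5, 1)
b = s.unsqueeze(-1) * x                 # explicit unsqueeze, same broadcast
c = torch.einsum('bld,bl->bld', x, s)   # einsum spells out the index alignment

print(a.shape)  # torch.Size([2, 5, 3])
```

`unsqueeze(-1)` and `[..., None]` are interchangeable; `einsum` is more verbose but makes the index alignment explicit.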

Thank you for your answer. If you don't mind, could you give me some more examples or references on aligning tensor dimensions? — There is an example in this post, or in the documentation here. Otherwise, this (longer) post should be useful to @BowenPeng.
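On the question of aligning dimensions raised in the comments: broadcasting compares shapes from the trailing dimension, and size-1 dimensions stretch to match. A minimal sketch:

```python
import torch

# Shapes are aligned from the trailing dimension:
#   (2, 5, 1)   <- inp[..., None]
#   (2, 5, 3)   <- input_emb
# The size-1 dimension stretches to 3, giving a (2, 5, 3) result.
s = torch.ones(2, 5)
x = torch.ones(2, 5, 3)
print(torch.broadcast_shapes(s[..., None].shape, x.shape))  # torch.Size([2, 5, 3])
```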
// Written this way for readability; the notation is not mathematically rigorous.

// multiply each row vector by a scalar
[[
         [-1.9114, -0.1580,  1.2186] * 1
         [ 0.4627,  0.9119, -1.1691] * 2
         [ 0.6452, -0.6944,  1.9659] * 3
         [-0.5048,  0.6411, -1.3568] * 4
         [-0.2328, -0.9498,  0.7216] * 5
] 
[
         [ 0.4627,  0.9119, -1.1691] * 2
         [ 0.6452, -0.6944,  1.9659] * 3
         [-1.9114, -0.1580,  1.2186] * 1
         [-0.5048,  0.6411, -1.3568] * 4
         [-0.2328, -0.9498,  0.7216] * 5
]]
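For comparison, the pseudocode above corresponds to the nested-for-loop version the question wants to avoid. A hypothetical reference implementation (`scale_rows_loop` is an illustrative name, not from the original post):

```python
import torch

def scale_rows_loop(x, s):
    """Scale each row vector x[i, j, :] by the scalar s[i, j] (loop version)."""
    out = torch.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = x[i, j] * s[i, j]
    return out

x = torch.randn(2, 5, 3)
s = torch.randn(2, 5)
# The broadcast one-liner matches the explicit loops.
assert torch.allclose(scale_rows_loop(x, s), s[..., None] * x)
```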