Normal or uniform initialization of float tensors in PyTorch only results in zeros


Both of the following result in zeros:

import torch

# Attempt 1: allocate directly on the GPU and fill in place
pytorchGPUDirectCreateWEmpty = torch.empty(size=(20000000, 128), dtype=torch.float, device='cuda', requires_grad=False, pin_memory=False).uniform_(-1, 1)
pytorchGPUDirectCreateWEmpty

# Attempt 2: initialize the weights of an nn.Embedding layer
torch.set_default_tensor_type('torch.cuda.FloatTensor')
u_embeddings = torch.nn.Embedding(20000000, 128, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None)
u_embeddings.weight.data.uniform_(-1, 1)
u_embeddings.weight.data
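One way to check whether the values are actually zero, rather than an artifact of the printed summary (which only shows corner elements of a large tensor), is to count nonzero entries. A minimal sketch, using a deliberately smaller tensor than in the question and falling back to CPU when no GPU is available:

```python
import torch

# Use CUDA when available; fall back to CPU so this runs anywhere.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Much smaller than the 20,000,000 x 128 tensor in the question,
# purely to illustrate the check itself.
w = torch.empty(1000, 128, dtype=torch.float, device=device).uniform_(-1, 1)

nonzero = torch.count_nonzero(w).item()
print(f"nonzero elements: {nonzero} / {w.numel()}")
print(f"min={w.min().item():.4f}, max={w.max().item():.4f}")
```

If this check reports a healthy nonzero count on a small tensor but not on the full-size one, the problem is specific to the tensor's size rather than to `uniform_` itself.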

If I initialize with double instead of float, the initialization works fine. I could convert to float afterwards, but my memory is limited, and I cannot afford to first allocate a double tensor and then convert it.
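Note that the tensor in question has 20,000,000 × 128 ≈ 2.56 × 10⁹ elements, which exceeds 2³¹ − 1; if the failure is tied to that size, one workaround that avoids the extra memory of a double tensor is to fill the float tensor in row chunks, so each in-place `uniform_` call touches far fewer elements. A sketch under that assumption (scaled down so it runs on any machine; the chunk size is an arbitrary choice):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Scaled down from 20,000,000 rows for illustration.
rows, cols = 20000, 128
w = torch.empty(rows, cols, dtype=torch.float, device=device)

chunk = 4096  # keep chunk * cols well below 2**31 elements per call
for start in range(0, rows, chunk):
    # Slicing returns a view sharing storage with w, so the
    # in-place fill updates w itself.
    w[start:start + chunk].uniform_(-1, 1)

print(f"nonzero: {torch.count_nonzero(w).item()} / {w.numel()}")
```

Since each slice is a view, no extra memory beyond the float tensor itself is allocated.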

Why doesn't the initialization work for float tensors?