Should I use `torch.device` or pass a string as the argument to set the `device` parameter?


My data iterator currently runs on the CPU, because the `device=0` argument is deprecated. But I need it to run on the GPU along with the rest of the model, etc.

Here is my code:

pad_idx = TGT.vocab.stoi["<blank>"]
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
model = model.to(device)
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion = criterion.to(device)
BATCH_SIZE = 12000
train_iter = MyIterator(train, device, batch_size=BATCH_SIZE,
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, device, batch_size=BATCH_SIZE,
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=False)
#model_par = nn.DataParallel(model, device_ids=devices)
I tried passing `'cuda'` as the argument instead of `device=0`, but I received the following error:

<ipython-input-50-da3b1f7ed907> in <module>()
    10     train_iter = MyIterator(train, 'cuda', batch_size=BATCH_SIZE,
    11                             repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
---> 12                             batch_size_fn=batch_size_fn, train=True)
    13     valid_iter = MyIterator(val, 'cuda', batch_size=BATCH_SIZE,
    14                             repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),

TypeError: __init__() got multiple values for argument 'batch_size'

I also tried passing `device` in as the argument, defining it as

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

but received the same error as above.


Any suggestions would be greatly appreciated.

My current PyTorch version is 1.0.1, and on both it and the previous version 0.4, a plain string and a `torch.device` work:

import torch
x = torch.tensor(1)
print(x.to('cuda:0'))                # no problem
print(x.to(torch.device('cuda:0')))  # also no problem
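
For completeness (my own addition, not part of the original answer), the usual pattern is to pick the device once, with a CPU fallback, and reuse it everywhere; both the `torch.device` object and its string form are accepted:

import torch

# Standard PyTorch idiom: fall back to the CPU when no GPU is available.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

x = torch.tensor(1)
print(x.to(device))       # torch.device object
print(x.to(str(device)))  # equivalent string form, e.g. 'cuda:0' or 'cpu'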
pad_idx = TGT.vocab.stoi["<blank>"]
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
model = model.to(device)
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion = criterion.to(device)
BATCH_SIZE = 12000
train_iter = MyIterator(train, batch_size=BATCH_SIZE, device=torch.device('cuda'),
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, batch_size=BATCH_SIZE, device=torch.device('cuda'),
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=False)

After a lot of trial and error, I managed to get it working by setting `device` to `device=torch.device('cuda')` instead of `device=0`.
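
As a side note (my own addition, not from the original thread): the `TypeError` is what you would expect if `MyIterator` subclasses torchtext's `Iterator`, whose parameter order starts with `(dataset, batch_size, ...)`. A positionally passed device then lands in the `batch_size` slot and collides with the `batch_size=...` keyword, which is why `device` has to be passed by keyword. A minimal sketch with a hypothetical stub that mirrors that assumed parameter order:

# Hypothetical stub mirroring the assumed torchtext signature
# Iterator(dataset, batch_size, sort_key=None, device=None, ...).
class FakeIterator:
    def __init__(self, dataset, batch_size, sort_key=None, device=None):
        self.device = device

try:
    # 'cuda' fills the batch_size slot, then batch_size=12000 collides with it.
    FakeIterator([], 'cuda', batch_size=12000)
except TypeError as e:
    print(e)  # __init__() got multiple values for argument 'batch_size'

# Passing device by keyword avoids the collision entirely.
it = FakeIterator([], batch_size=12000, device='cuda')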

@codeslord This was a long time ago, but I'm pretty sure I was playing around with the Annotated Transformer, which is the one I know. I asked because I was very interested in your own implementation. Thanks anyway.