RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Problem description
When reloading a saved model and resuming training, the following error appeared:
Traceback (most recent call last):
File "D:\Ghost_Demo\train.py", line 200, in <module>
train_loss, train_acc = train(model_ft, DEVICE, train_loader, optimizer, epoch,model_ema)
File "D:\Ghost_Demo\train.py", line 40, in train
scaler.scale(loss).backward()
File "D:\Users\wh109\anaconda3\lib\site-packages\torch\_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "D:\Users\wh109\anaconda3\lib\site-packages\torch\autograd\__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Cause
This error means the tensor passed to backward() has no grad_fn: nothing in the computation graph leading to the loss requires a gradient. When constructing a Variable (or tensor) by hand, you have to pass requires_grad=True if you want gradients to flow through it; the default is False, meaning no gradient is computed for that tensor.
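The failure mode can be reproduced in a few lines. This is a minimal sketch, independent of the training script above: calling backward() on a result whose inputs never required gradients raises exactly this RuntimeError.

```python
import torch

# No input requires a gradient (requires_grad defaults to False),
# so no autograd graph is recorded and `loss` has no grad_fn.
x = torch.randn(3)
loss = (x * 2).sum()

try:
    loss.backward()
except RuntimeError as e:
    # "element 0 of tensors does not require grad and does not have a grad_fn"
    print(e)
```

Setting `x.requires_grad_(True)` before building `loss` makes the backward pass succeed, which is the essence of the fix below.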
Solution
Change
data, target = data.to(device, non_blocking=True), target.to(device, non_blocking=True)
to:
from torch.autograd import Variable
data, target = Variable(data, requires_grad=True).to(device, non_blocking=True), target.to(device,non_blocking=True)
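As a side note, torch.autograd.Variable has been deprecated since PyTorch 0.4, and the same effect is achieved with the in-place requires_grad_() method on a plain tensor. Below is a minimal sketch of the fix in that modern form; `data`, `target`, and `device` are placeholders standing in for the real batch and training device from the script above.

```python
import torch
import torch.nn.functional as F

device = torch.device("cpu")          # placeholder for the real DEVICE
data = torch.randn(4, 10)             # placeholder batch
target = torch.randn(4, 10)

# Equivalent of Variable(data, requires_grad=True).to(device, ...):
data = data.to(device, non_blocking=True).requires_grad_(True)
target = target.to(device, non_blocking=True)

# The graph now tracks `data`, so backward() has something to differentiate.
loss = F.mse_loss(data * 2, target)
loss.backward()
print(data.grad is not None)  # True
```

Note that in a normal training loop the gradient should come from the model's parameters rather than the input batch; if this error appears after reloading a checkpoint, it is also worth checking that the parameters still have requires_grad=True and that the forward pass is not running under torch.no_grad().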
[Copyright notice] This article is original content by a Huawei Cloud community user. When reposting, you must credit the source (Huawei Cloud community), the article link, and the author; otherwise the author and the community reserve the right to pursue liability. If you find suspected plagiarism in this community, please report it by email with supporting evidence; once verified, the community will immediately remove the infringing content. Report email: cloudbbs@huaweicloud.com