grad can be implicitly created only for scalar outputs
This error is raised when using a custom loss function. The message comes from autograd: tensor.backward() can create the initial gradient implicitly only when the tensor is a scalar (0-dimensional), so calling it on a non-scalar loss fails with this error.
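As a minimal standalone illustration of what triggers the message (my own sketch, not from the original post): calling backward() on any non-scalar tensor fails the same way, and reducing to a scalar (or passing an explicit gradient) resolves it.

import torch

x = torch.randn(3, requires_grad=True)
y = x * 2                          # y is a vector, not a scalar
# y.backward()                     # RuntimeError: grad can be implicitly created only for scalar outputs
y.sum().backward()                 # reduce to a scalar first, then backward() works
# alternative: y.backward(gradient=torch.ones_like(y))

The custom loss below hits this case: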
import torch
import torch.nn as nn
import numpy as np


class CrossEntropyLoss(nn.Module):
    def __init__(self):
        super(CrossEntropyLoss, self).__init__()

    def forward(self, output, label):
        # negative log-likelihood of the labeled entries, summed over the batch
        first = [-output[i][label[i]] for i in range(label.size()[0])]
        first_ = 0
        for i in range(len(first)):
            first_ += first[i]

        # log-sum-exp of the logits over the class dimension
        second = torch.exp(output)
        second = torch.sum(second, dim=1)
        second = torch.log(second + 1e-5)
        second = torch.sum(second)

        loss = 1 / label.size()[0] * (first_ + second)
        return loss


if __name__ == '__main__':
    output = torch.randn(3, 3, 5, requires_grad=True)
    label = torch.empty((3, 5), dtype=torch.long).random_(3)
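The post is truncated at this point. Presumably the loss is then computed and backward() is called, which is what raises the error: with a 3-D output, output[i][label[i]] is a (5, 5) matrix rather than a scalar, so the returned loss has shape (5, 5). A sketch of the assumed continuation (my completion, not from the original post):

    # assumed continuation of the __main__ block above (the original is cut off here)
    criterion = CrossEntropyLoss()
    loss = criterion(output, label)
    print(loss.shape)    # torch.Size([5, 5]) -- not a scalar
    loss.backward()      # RuntimeError: grad can be implicitly created only for scalar outputs

Two ways to resolve it: reduce the loss to a scalar before calling backward(), or supply an explicit initial gradient of the same shape as the loss:

    loss = criterion(output, label)
    loss.mean().backward()                            # option 1: reduce to a scalar first
    # loss.backward(gradient=torch.ones_like(loss))   # option 2: explicit initial gradient

For a cross-entropy loss, though, the intended shapes are probably output (batch, num_classes) and label (batch,); with a 2-D output and a 1-D label, first_ and loss in the code above are already scalars and backward() works unchanged.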
Source: blog.csdn.net, author: AI视觉网奇. Copyright belongs to the original author; contact the author for reprint permission.
Original link: blog.csdn.net/jacke121/article/details/118997850