RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 384, 4, 4]], which is output 0 of HardtanhBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
 
This seems to be caused by inplace=True, e.g.:

nn.ReLU6(True)

Removing the True (i.e. using the default inplace=False) fixes it.
Source: blog.csdn.net, author: 网奇. Copyright belongs to the original author; please contact the author before reposting.
Original link: blog.csdn.net/jacke121/article/details/111057344