[Image Classification] Hands-on ResNet: Reproducing ResNet in PyTorch

By AI浩, posted 2021/12/23 01:41:59

Contents

Abstract

Implementing the residual blocks

ResNet18, ResNet34

ResNet50, ResNet101, ResNet152


Abstract

ResNet (Residual Neural Network) was proposed by Kaiming He and three colleagues at Microsoft Research. By stacking residual units they successfully trained a 152-layer network that won ILSVRC 2015 with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet, a striking result.

The model's key innovation is residual learning: shortcut connections are added to the network so that the original input is passed directly to later layers, as shown below:


Traditional convolutional or fully connected networks lose or corrupt some information as it propagates through the layers, and they also suffer from vanishing or exploding gradients, which makes very deep networks untrainable. ResNet alleviates this by routing the input directly to the output: with the input information preserved on the shortcut path, the network only has to learn the difference between input and output, which simplifies the learning target and makes optimization easier. The figure below compares VGGNet and ResNet. ResNet's defining feature is the many bypass paths that connect the input directly to later layers, a structure known as shortcut or skip connections.
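The idea can be sketched in a few lines of PyTorch (a toy illustration with a made-up name, `ToyResidual`, not one of the blocks implemented below): the stacked layers learn only the residual F(x), and the input is added back on the shortcut path before the final activation.

```python
import torch
import torch.nn as nn

class ToyResidual(nn.Module):
    """Minimal residual unit: output = relu(F(x) + x), F a small conv stack."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return torch.relu(self.f(x) + x)  # shortcut: add the input back

x = torch.randn(1, 16, 8, 8)
y = ToyResidual(16)(x)
print(y.shape)  # shape is preserved: torch.Size([1, 16, 8, 8])
```

Because the shapes of F(x) and x match here, no extra machinery is needed; the blocks below handle the cases where the shapes differ.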

ResNet uses two kinds of residual blocks: one chains two 3×3 convolutions in series; the other chains a 1×1, a 3×3, and another 1×1 convolution. Both are shown below:

ResNet comes in several depths, the common ones being 18, 34, 50, 101, and 152 layers; all are built by stacking the residual blocks above. The figure below shows the different ResNet variants.
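As a quick sanity check on these depths, each variant's layer count can be recomputed from its block configuration: one stem convolution plus one fully connected layer, plus 2 weighted convolutions per basic block (ResNet18/34) or 3 per bottleneck block (ResNet50/101/152). A back-of-the-envelope check in plain Python:

```python
# (blocks per stage, convs per block): 2 for basic blocks, 3 for bottlenecks
configs = {
    "resnet18":  ([2, 2, 2, 2], 2),
    "resnet34":  ([3, 4, 6, 3], 2),
    "resnet50":  ([3, 4, 6, 3], 3),
    "resnet101": ([3, 4, 23, 3], 3),
    "resnet152": ([3, 8, 36, 3], 3),
}

for name, (blocks, convs_per_block) in configs.items():
    # stem conv (1) + conv layers inside all blocks + final fc (1)
    depth = 1 + convs_per_block * sum(blocks) + 1
    print(name, depth)  # each name's number matches the computed depth
```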

Implementing the residual blocks

The first residual block


  
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Sub-module: a residual block made of two 3x3 convolutions."""

    def __init__(self, inchannel, outchannel, stride=1, shortcut=None):
        super(ResidualBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 3, stride, 1, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, 3, 1, 1, bias=False),
            nn.BatchNorm2d(outchannel))
        self.right = shortcut

    def forward(self, x):
        out = self.left(x)
        residual = x if self.right is None else self.right(x)
        out += residual
        return F.relu(out)
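A quick usage check for this block (the class is repeated below so the snippet runs standalone; the 56×56 feature-map size is just an example): when input and output shapes match, the identity shortcut suffices, but when the channels double at stride 2, a strided 1×1 projection must be passed in via the `shortcut` argument so the addition is well-defined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Same block as above, repeated so this snippet runs on its own."""
    def __init__(self, inchannel, outchannel, stride=1, shortcut=None):
        super().__init__()
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 3, stride, 1, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, 3, 1, 1, bias=False),
            nn.BatchNorm2d(outchannel))
        self.right = shortcut

    def forward(self, x):
        out = self.left(x)
        residual = x if self.right is None else self.right(x)
        return F.relu(out + residual)

x = torch.randn(2, 64, 56, 56)

# Identity shortcut: shapes match, no projection needed.
same = ResidualBlock(64, 64)
print(same(x).shape)   # torch.Size([2, 64, 56, 56])

# Projection shortcut: channels double and the map is halved, so the
# shortcut needs a strided 1x1 conv to match shapes before the addition.
proj = nn.Sequential(nn.Conv2d(64, 128, 1, 2, bias=False), nn.BatchNorm2d(128))
down = ResidualBlock(64, 128, stride=2, shortcut=proj)
print(down(x).shape)   # torch.Size([2, 128, 28, 28])
```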

The second residual block

 


  
import torch.nn as nn

class Bottleneck(nn.Module):
    """Bottleneck residual block: 1x1, 3x3, 1x1 convolutions."""

    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=4):
        super(Bottleneck, self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )
        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places * self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places * self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x
        out = self.bottleneck(x)
        if self.downsampling:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
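A corresponding sanity check for the bottleneck (the class is condensed below so the snippet runs on its own): with `expansion = 4`, a block fed 64 channels emits `places * 4 = 256` channels, so the first block of every stage needs `downsampling=True` even when `stride` is 1; later blocks in the stage can use the identity shortcut.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Same bottleneck as above, condensed for a standalone check."""
    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=4):
        super().__init__()
        self.downsampling = downsampling
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_places, places, 1, bias=False),
            nn.BatchNorm2d(places), nn.ReLU(inplace=True),
            nn.Conv2d(places, places, 3, stride, 1, bias=False),
            nn.BatchNorm2d(places), nn.ReLU(inplace=True),
            nn.Conv2d(places, places * expansion, 1, bias=False),
            nn.BatchNorm2d(places * expansion))
        if downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_places, places * expansion, 1, stride, bias=False),
                nn.BatchNorm2d(places * expansion))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.downsample(x) if self.downsampling else x
        return self.relu(self.bottleneck(x) + residual)

x = torch.randn(1, 64, 56, 56)
first = Bottleneck(64, 64, stride=1, downsampling=True)  # 64 -> 256 channels
print(first(x).shape)         # torch.Size([1, 256, 56, 56])

later = Bottleneck(256, 64)                              # identity shortcut
print(later(first(x)).shape)  # torch.Size([1, 256, 56, 56])
```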

ResNet18, ResNet34


  
import torch
from torch import nn
from torch.nn import functional as F
from torchsummary import summary


class ResidualBlock(nn.Module):
    """Sub-module: a residual block made of two 3x3 convolutions."""

    def __init__(self, inchannel, outchannel, stride=1, shortcut=None):
        super(ResidualBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 3, stride, 1, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, 3, 1, 1, bias=False),
            nn.BatchNorm2d(outchannel)
        )
        self.right = shortcut

    def forward(self, x):
        out = self.left(x)
        residual = x if self.right is None else self.right(x)
        out += residual
        return F.relu(out)


class ResNet(nn.Module):
    """Main module: ResNet34.

    ResNet34 consists of several layers, each containing multiple residual
    blocks. The residual block is a sub-module; _make_layer builds a layer.
    """

    def __init__(self, blocks, num_classes=1000):
        super(ResNet, self).__init__()
        self.model_name = 'resnet34'
        # Stem: initial image transformation
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, 7, 2, 3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2, 1))
        # Repeated layers with 3, 4, 6, 3 residual blocks respectively
        self.layer1 = self._make_layer(64, 64, blocks[0])
        self.layer2 = self._make_layer(64, 128, blocks[1], stride=2)
        self.layer3 = self._make_layer(128, 256, blocks[2], stride=2)
        self.layer4 = self._make_layer(256, 512, blocks[3], stride=2)
        # Fully connected layer for classification
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, inchannel, outchannel, block_num, stride=1):
        """Build a layer containing multiple residual blocks."""
        shortcut = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 1, stride, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU()
        )
        layers = []
        layers.append(ResidualBlock(inchannel, outchannel, stride, shortcut))
        for i in range(1, block_num):
            layers.append(ResidualBlock(outchannel, outchannel))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.pre(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = F.avg_pool2d(x, 7)
        x = x.view(x.size(0), -1)
        return self.fc(x)


def ResNet18():
    return ResNet([2, 2, 2, 2])


def ResNet34():
    return ResNet([3, 4, 6, 3])


if __name__ == '__main__':
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = ResNet34()
    model.to(device)
    summary(model, (3, 224, 224))

ResNet50, ResNet101, ResNet152


  
import torch
import torch.nn as nn
import torchvision

print("PyTorch Version: ", torch.__version__)
print("Torchvision Version: ", torchvision.__version__)

__all__ = ['ResNet50', 'ResNet101', 'ResNet152']


def Conv1(in_planes, places, stride=2):
    """Stem: 7x7 convolution, batch norm, ReLU, then 3x3 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_channels=in_planes, out_channels=places, kernel_size=7, stride=stride, padding=3, bias=False),
        nn.BatchNorm2d(places),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    )


class Bottleneck(nn.Module):
    """Bottleneck residual block: 1x1, 3x3, 1x1 convolutions."""

    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=4):
        super(Bottleneck, self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )
        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places * self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places * self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x
        out = self.bottleneck(x)
        if self.downsampling:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self, blocks, num_classes=1000, expansion=4):
        super(ResNet, self).__init__()
        self.expansion = expansion
        self.conv1 = Conv1(in_planes=3, places=64)
        self.layer1 = self.make_layer(in_places=64, places=64, block=blocks[0], stride=1)
        self.layer2 = self.make_layer(in_places=256, places=128, block=blocks[1], stride=2)
        self.layer3 = self.make_layer(in_places=512, places=256, block=blocks[2], stride=2)
        self.layer4 = self.make_layer(in_places=1024, places=512, block=blocks[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(2048, num_classes)
        # Weight initialization
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def make_layer(self, in_places, places, block, stride):
        layers = []
        # The first block of each stage projects the shortcut to the new shape
        layers.append(Bottleneck(in_places, places, stride, downsampling=True))
        for i in range(1, block):
            layers.append(Bottleneck(places * self.expansion, places))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x


def ResNet50():
    return ResNet([3, 4, 6, 3])


def ResNet101():
    return ResNet([3, 4, 23, 3])


def ResNet152():
    return ResNet([3, 8, 36, 3])


if __name__ == '__main__':
    # model = torchvision.models.resnet50()
    model = ResNet50()
    print(model)
    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
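One design note on the code above: `nn.AvgPool2d(7, stride=1)` hard-codes the assumption of 224×224 inputs, which leave a 7×7 feature map after layer4. For other input sizes, `nn.AdaptiveAvgPool2d((1, 1))` always collapses the spatial dimensions regardless of input resolution. A minimal sketch of the difference (the 10×10 map stands in for layer4's output on a larger input):

```python
import torch
import torch.nn as nn

feat = torch.randn(1, 2048, 10, 10)      # e.g. layer4 output for a larger input

fixed = nn.AvgPool2d(7, stride=1)        # as in the code above
adaptive = nn.AdaptiveAvgPool2d((1, 1))  # always collapses to 1x1

print(fixed(feat).shape)     # torch.Size([1, 2048, 4, 4]) -> breaks the 2048-dim fc
print(adaptive(feat).shape)  # torch.Size([1, 2048, 1, 1]) -> flattens to 2048
```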

 

Source: wanghao.blog.csdn.net, author AI浩. Copyright belongs to the original author; please contact the author before reposting.

Original link: wanghao.blog.csdn.net/article/details/117383956

[Copyright notice] This article was reposted by a Huawei Cloud community user. If you find content in this community suspected of plagiarism, report it with supporting evidence to cloudbbs@huaweicloud.com; confirmed infringing content will be removed immediately.