[Image Classification] Hands-On: Image Classification with EfficientNetV2 (PyTorch)

AI浩, published 2021/12/23 00:26:45

Table of Contents

Abstract

Creating the Project

Importing the Required Libraries

Setting Global Parameters

Image Preprocessing

Loading the Data

Setting Up the Model

Training and Validation

Inference

Complete Code:


Abstract

Over the past few days I studied EfficientNetV2, translated the paper, and reproduced its code.

Paper translation: [Image Classification] EfficientNetV2: Faster, Smaller, Stronger — Paper Translation (AI浩, CSDN blog)

Code reproduction: [Image Classification] Reproducing EfficientNetV2 with plain, easy-to-follow code — a great entry point (PyTorch) (AI浩, CSDN blog)

If you want to learn more about EfficientNetV2 itself, see the articles above; this post focuses on how to use EfficientNetV2 for image classification. The loss function is CrossEntropyLoss, and by changing the final fully connected layer you can easily switch between binary and multi-class classification. The dataset is the classic Dogs vs. Cats dataset, used here as a binary classification task.
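For example, switching from this binary setup to an N-class problem only means changing the output dimension of that final layer (a minimal sketch, assuming model.classifier is a single nn.Linear, as it is in the reproduced code used later in this post):

import torch.nn as nn

num_classes = 4                                       # e.g. a 4-class task instead of cat/dog
num_ftrs = model.classifier.in_features               # input features of the original head
model.classifier = nn.Linear(num_ftrs, num_classes)   # new head with N outputs
# nn.CrossEntropyLoss() works unchanged for 2 or more classes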

Dataset download: https://pan.baidu.com/s/1kqhVPOqV5vklYYIFVAzAAA (extraction code: 3ch6)

Creating the Project

Create a new image-classification project. In the project root, create a folder named model to hold the EfficientNetV2 model code. Inside it, create EfficientNetV2.py and copy in the code from the reproduction article linked above, then add an empty __init__.py to the model folder. The model directory is structured as follows:
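In plain text, the model directory looks like this:

model/
├── __init__.py        # empty file so that model is importable as a package
└── EfficientNetV2.py  # the reproduced EfficientNetV2 code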

 

Create train.py in the project root and write the training code there.

Importing the Required Libraries

 


  
import torch.optim as optim
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.autograd import Variable
from model.EfficientNetV2 import efficientnetv2_s

Setting Global Parameters


Set the batch size, learning rate, and number of epochs, and use CUDA if it is available; otherwise fall back to the CPU.


  
# global parameters
modellr = 1e-4
BATCH_SIZE = 64
EPOCHS = 20
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

Image Preprocessing

When preprocessing images, define the training-set transform and the validation-set transform separately. Besides resizing and normalization, the training transform can also include augmentation such as rotation or random erasing, while the validation transform should not include any augmentation. Do not apply augmentation blindly, though: unreasonable augmentation can easily hurt performance and may even keep the loss from converging. (An augmentation example is sketched after the code block below.)


  
# data preprocessing
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
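If you do want augmentation on the training set, one possible version of the training transform is sketched below (my own illustration, not part of the original code); note that RandomErasing operates on tensors, so it has to come after ToTensor():

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),        # flip left/right with probability 0.5
    transforms.RandomRotation(15),            # rotate by up to +/-15 degrees
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
    transforms.RandomErasing()                # randomly erase a rectangle in the tensor
])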

Loading the Data

Read the data with PyTorch's standard ImageFolder mechanism. The data directory is organized as follows:
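The class-folder names below are inferred from the paths used in the code ('data/train', 'data/val') and the cat/dog classes; ImageFolder treats each subfolder as one class:

data/
├── train/
│   ├── cat/   # ~10,000 cat images
│   └── dog/   # ~10,000 dog images
└── val/
    ├── cat/
    └── dog/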

For the training set I took 10,000 cat images and 10,000 dog images from the Dogs vs. Cats dataset; the remaining images go into the validation set.


  
# load the data
dataset_train = datasets.ImageFolder('data/train', transform)
print(dataset_train.imgs)
# class-to-index mapping of the folders
print(dataset_train.class_to_idx)
dataset_test = datasets.ImageFolder('data/val', transform_test)
# class-to-index mapping of the folders
print(dataset_test.class_to_idx)

# wrap the datasets in DataLoaders
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=False)

Setting Up the Model


Use CrossEntropyLoss as the loss function and efficientnetv2_s as the model. Since PyTorch pretrained weights were not available for it, we train from scratch. Replace the final fully connected layer with a 2-class output, move the model to DEVICE, and use Adam as the optimizer.


  
# instantiate the model and move it to the GPU
criterion = nn.CrossEntropyLoss()
model = efficientnetv2_s()
num_ftrs = model.classifier.in_features
model.classifier = nn.Linear(num_ftrs, 2)
model.to(DEVICE)
# simple, reliable Adam optimizer with a low learning rate
optimizer = optim.Adam(model.parameters(), lr=modellr)

def adjust_learning_rate(optimizer, epoch):
    """Sets the learning rate to the initial LR decayed by 10 every 50 epochs"""
    modellrnew = modellr * (0.1 ** (epoch // 50))
    print("lr:", modellrnew)
    for param_group in optimizer.param_groups:
        param_group['lr'] = modellrnew
Training and Validation


  
# training loop
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    sum_loss = 0
    total_num = len(train_loader.dataset)
    print(total_num, len(train_loader))
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data).to(device), Variable(target).to(device)
        output = model(data)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print_loss = loss.data.item()
        sum_loss += print_loss
        if (batch_idx + 1) % 50 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, (batch_idx + 1) * len(data), len(train_loader.dataset),
                100. * (batch_idx + 1) / len(train_loader), loss.item()))
    ave_loss = sum_loss / len(train_loader)
    print('epoch:{},loss:{}'.format(epoch, ave_loss))

# validation loop
def val(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    total_num = len(test_loader.dataset)
    print(total_num, len(test_loader))
    with torch.no_grad():
        for data, target in test_loader:
            data, target = Variable(data).to(device), Variable(target).to(device)
            output = model(data)
            loss = criterion(output, target)
            _, pred = torch.max(output.data, 1)
            correct += torch.sum(pred == target)
            print_loss = loss.data.item()
            test_loss += print_loss
        correct = correct.data.item()
        acc = correct / total_num
        avgloss = test_loss / len(test_loader)
        print('\nVal set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            avgloss, correct, len(test_loader.dataset), 100 * acc))

# run training
for epoch in range(1, EPOCHS + 1):
    adjust_learning_rate(optimizer, epoch)
    train(model, DEVICE, train_loader, optimizer, epoch)
    val(model, DEVICE, test_loader)
torch.save(model, 'model.pth')
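torch.save(model, 'model.pth') pickles the whole model object, which is convenient but ties the checkpoint to the exact class definition. An alternative I would suggest (not what the article does) is to save only the weights and rebuild the model before loading:

# save only the weights
torch.save(model.state_dict(), 'model_weights.pth')

# to load: rebuild the architecture first, then restore the weights
model = efficientnetv2_s()
model.classifier = nn.Linear(model.classifier.in_features, 2)
model.load_state_dict(torch.load('model_weights.pth', map_location=DEVICE))
model.to(DEVICE)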

Once the code above is in place you can start training: click Run and training begins.

Inference


The test images are stored in a flat folder, data/test/, with the images placed directly inside (no class subfolders).

Step 1: define the class names. The order must match the class order used during training; do not change it. During training, cat was class 0 and dog was class 1, so classes is defined as ('cat', 'dog').

Step 2: define the transforms. They should be identical to the validation transforms; do not apply augmentation here.

Step 3: load the model and move it to DEVICE.

Step 4: read each image and predict its class. Note that the images are read with PIL's Image rather than cv2, because the torchvision transforms used here expect PIL images.


  
import torch
import torch.utils.data.distributed
import torchvision.transforms as transforms
from torch.autograd import Variable
import os
from PIL import Image

classes = ('cat', 'dog')
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load("model.pth")
model.eval()
model.to(DEVICE)
path = 'data/test/'
testList = os.listdir(path)
for file in testList:
    img = Image.open(path + file)
    img = transform_test(img)
    img.unsqueeze_(0)
    img = Variable(img).to(DEVICE)
    out = model(img)
    # predict the class
    _, pred = torch.max(out.data, 1)
    print('Image Name:{},predict:{}'.format(file, classes[pred.data.item()]))
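If your pipeline really does need cv2, the usual workaround is to convert the BGR array to RGB and wrap it in a PIL image before applying the transforms; a minimal sketch (cv2 is my addition and is not used in the original article):

import cv2
from PIL import Image

img_bgr = cv2.imread(path + file)                   # OpenCV reads images as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)  # convert to RGB
img = Image.fromarray(img_rgb)                      # now the torchvision transforms accept it
img = transform_test(img)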


 
Running the script prints the file name and predicted class for each test image.

The data can also be read more cleverly with datasets.ImageFolder; below we use datasets.ImageFolder to predict the test images. To do so, change the test data path: add one more directory level above the test folder (the code below uses data/datatest) so that ImageFolder can treat the inner folder as a class folder.

Then change the way the images are read. The code is as follows:


  
import torch
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.autograd import Variable

classes = ('cat', 'dog')
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load("model.pth")
model.eval()
model.to(DEVICE)

dataset_test = datasets.ImageFolder('data/datatest', transform_test)
print(len(dataset_test))

for index in range(len(dataset_test)):
    # each item is (image_tensor, label); the label comes from the folder name
    img, label = dataset_test[index]
    img.unsqueeze_(0)
    data = Variable(img).to(DEVICE)
    output = model(data)
    _, pred = torch.max(output.data, 1)
    print('Image Name:{},predict:{}'.format(dataset_test.imgs[index][0], classes[pred.data.item()]))
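For a large test set, predicting one sample at a time is slow. Since shuffling is off, the same ImageFolder can also be wrapped in a DataLoader and predicted in batches while keeping the file-name lookup; a sketch of that variant (my addition, not the original code):

import torch
from torch.utils.data import DataLoader

BATCH = 32
test_loader = DataLoader(dataset_test, batch_size=BATCH, shuffle=False)

with torch.no_grad():
    for batch_idx, (imgs, _) in enumerate(test_loader):
        outputs = model(imgs.to(DEVICE))
        _, preds = torch.max(outputs, 1)
        for i, p in enumerate(preds):
            # order matches dataset_test.imgs because shuffle=False
            img_path = dataset_test.imgs[batch_idx * BATCH + i][0]
            print('Image Name:{},predict:{}'.format(img_path, classes[p.item()]))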


Complete Code:

train.py


  
import torch.optim as optim
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.autograd import Variable
from model.EfficientNetV2 import efficientnetv2_s

# global parameters
modellr = 1e-4
BATCH_SIZE = 32
EPOCHS = 50
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# data preprocessing
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

# load the data
dataset_train = datasets.ImageFolder('data/train', transform)
print(dataset_train.imgs)
# class-to-index mapping of the folders
print(dataset_train.class_to_idx)
dataset_test = datasets.ImageFolder('data/val', transform_test)
# class-to-index mapping of the folders
print(dataset_test.class_to_idx)
# wrap the datasets in DataLoaders
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=False)

# instantiate the model and move it to the GPU
criterion = nn.CrossEntropyLoss()
model = efficientnetv2_s()
num_ftrs = model.classifier.in_features
model.classifier = nn.Linear(num_ftrs, 2)
model.to(DEVICE)
# simple, reliable Adam optimizer with a low learning rate
optimizer = optim.Adam(model.parameters(), lr=modellr)

def adjust_learning_rate(optimizer, epoch):
    """Sets the learning rate to the initial LR decayed by 10 every 50 epochs"""
    modellrnew = modellr * (0.1 ** (epoch // 50))
    print("lr:", modellrnew)
    for param_group in optimizer.param_groups:
        param_group['lr'] = modellrnew

# training loop
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    sum_loss = 0
    total_num = len(train_loader.dataset)
    print(total_num, len(train_loader))
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data).to(device), Variable(target).to(device)
        output = model(data)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print_loss = loss.data.item()
        sum_loss += print_loss
        if (batch_idx + 1) % 50 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, (batch_idx + 1) * len(data), len(train_loader.dataset),
                100. * (batch_idx + 1) / len(train_loader), loss.item()))
    ave_loss = sum_loss / len(train_loader)
    print('epoch:{},loss:{}'.format(epoch, ave_loss))

# validation loop
def val(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    total_num = len(test_loader.dataset)
    print(total_num, len(test_loader))
    with torch.no_grad():
        for data, target in test_loader:
            data, target = Variable(data).to(device), Variable(target).to(device)
            output = model(data)
            loss = criterion(output, target)
            _, pred = torch.max(output.data, 1)
            correct += torch.sum(pred == target)
            print_loss = loss.data.item()
            test_loss += print_loss
        correct = correct.data.item()
        acc = correct / total_num
        avgloss = test_loss / len(test_loader)
        print('\nVal set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            avgloss, correct, len(test_loader.dataset), 100 * acc))

# run training
for epoch in range(1, EPOCHS + 1):
    adjust_learning_rate(optimizer, epoch)
    train(model, DEVICE, train_loader, optimizer, epoch)
    val(model, DEVICE, test_loader)
torch.save(model, 'model.pth')

test1.py


  
import torch
import torch.utils.data.distributed
import torchvision.transforms as transforms
from torch.autograd import Variable
import os
from PIL import Image

classes = ('cat', 'dog')
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load("model.pth")
model.eval()
model.to(DEVICE)
path = 'data/test/'
testList = os.listdir(path)
for file in testList:
    img = Image.open(path + file)
    img = transform_test(img)
    img.unsqueeze_(0)
    img = Variable(img).to(DEVICE)
    out = model(img)
    # predict the class
    _, pred = torch.max(out.data, 1)
    print('Image Name:{},predict:{}'.format(file, classes[pred.data.item()]))

test2.py


  
import torch
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.autograd import Variable

classes = ('cat', 'dog')
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load("model.pth")
model.eval()
model.to(DEVICE)

dataset_test = datasets.ImageFolder('data/datatest', transform_test)
print(len(dataset_test))

for index in range(len(dataset_test)):
    # each item is (image_tensor, label); the label comes from the folder name
    img, label = dataset_test[index]
    img.unsqueeze_(0)
    data = Variable(img).to(DEVICE)
    output = model(data)
    _, pred = torch.max(output.data, 1)
    print('Image Name:{},predict:{}'.format(dataset_test.imgs[index][0], classes[pred.data.item()]))

 

Source: wanghao.blog.csdn.net. Author: AI浩. Copyright belongs to the original author; please contact the author for permission to repost.

Original link: wanghao.blog.csdn.net/article/details/117535310
