TSD (Object Detection / PyTorch)

Published by HWCloudAI on 2022/12/20 17:13:36


The paper "Revisiting the Sibling Head in Object Detector" proposes a detection algorithm based on task-aware spatial disentanglement (TSD). It effectively alleviates the potential conflict between the classification and regression tasks in general object detection, can be flexibly plugged into most detectors, and improves mAP by 3~5% for any backbone on COCO and OpenImage.

This case reproduces the TSD paper. The model is implemented according to the architecture proposed in "Revisiting the Sibling Head in Object Detector": it loads weights pre-trained on COCO and performs transfer learning on the user's dataset. We provide training code and a trainable model for fine-tuning on real-world scenarios. The model produced by training can be deployed directly as an online service on the ModelArts platform.

Detailed algorithm introduction: https://marketplace.huaweicloud.com/markets/aihub/modelhub/detail/?id=03c7877b-d95a-404f-a4d7-51f056e2ddf4
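
The core observation behind TSD is that classification and localization prefer differently placed features, so the shared sibling head is split so that each task gets its own spatially adapted RoI features and its own parameters. The PyTorch snippet below is only a minimal conceptual sketch of that "two disentangled branches" idea; the channel sizes, layer choices, and the DisentangledSiblingHead name are illustrative, and the real TSD additionally learns proposal-level spatial transformations and a progressive constraint loss.

import torch
import torch.nn as nn

class DisentangledSiblingHead(nn.Module):
    """Illustrative head: separate inputs and parameters for classification vs. regression."""
    def __init__(self, in_channels=256, roi_size=7, num_classes=80):
        super().__init__()
        feat_dim = in_channels * roi_size * roi_size
        # Each task gets its own fully connected trunk instead of sharing one.
        self.cls_fc = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU(inplace=True))
        self.reg_fc = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU(inplace=True))
        self.cls_score = nn.Linear(1024, num_classes + 1)  # +1 for background
        self.bbox_pred = nn.Linear(1024, 4)

    def forward(self, cls_roi_feat, reg_roi_feat):
        # cls_roi_feat / reg_roi_feat: RoI features pooled from two task-specific,
        # spatially disentangled proposals, each of shape (N, C, roi_size, roi_size).
        cls_x = self.cls_fc(cls_roi_feat.flatten(1))
        reg_x = self.reg_fc(reg_roi_feat.flatten(1))
        return self.cls_score(cls_x), self.bbox_pred(reg_x)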

Notes:

1. Framework used in this case: PyTorch 1.4.0

2. Hardware used in this case: GPU: 1*NVIDIA-V100NV32 (32GB) | CPU: 8 cores, 64GB

3. How to run the code: click the triangular Run button in the menu bar at the top of this page, or press Ctrl+Enter, to run the code in each cell.

4. For detailed usage of JupyterLab: refer to the ModelArts JupyterLab User Guide.

5. If you run into problems: refer to the ModelArts JupyterLab FAQ.

1. Code and Data Download

Run the code below to download the data and code.

import os
import moxing as mox
# Download the data and code
mox.file.copy_parallel('s3://obs-aigallery-zc/algorithm/TSD','./TSD')
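
After the copy finishes, you can optionally confirm that the files arrived locally (assuming the default target directory './TSD' used above):

# Optional sanity check: list the downloaded contents
print(os.listdir('./TSD'))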

2. Model Training

2.1 Load and Install Dependencies

Set the parameters and install the dependencies required to run the code.

import subprocess
import time
import argparse
import moxing as mox
import os
root_path = './TSD/'
os.chdir(root_path)
os.system('pip uninstall -y pycocotools')  # -y avoids the interactive confirmation prompt
os.system('pip install mmpycocotools')
from TSD.tools.train import main
import importlib


def check_mkdir(path):
    if not os.path.exists(path):
        os.makedirs(path)


def parse_args():
    parser = argparse.ArgumentParser(description='RUN TSD detection of cascade res101 multi test')
    parser.add_argument('--data_url', default='./coco_data', type=str,
                        help='the training and validation data path')  # When creating a training job on ModelArts, you must specify a data location on OBS; at training start, the data there is copied to the input mapping path
    parser.add_argument('--train_url', default='./output', type=str,
                        help='the path to save training outputs')  # When creating a training job on ModelArts, you must specify a training output location on OBS; at training end, the output mapping path is copied back to that location
    parser.add_argument('--extra_config', default="", type=str,
                        help='the path for extra training config, not supported yet')  # pass a config file path directly
    parser.add_argument('--gpu_num', default=1, type=int, help="Your GPU NUM, default: 1V100.")  # GPU NUM
    parser.add_argument('--imgs_per_gpu', default=2, type=int,
                        help="images per gpu. 2 images/gpu for V100")  # IMGS PER GPU
    parser.add_argument('--lr', default=0.02, type=float, help="learning rate")  # Learning rate
    parser.add_argument('--epoch', default=35, type=int, help="Training epoch num")  # to train for longer, set a value larger than 35
    #     parser.add_argument('--lr_step', default=[24,33], nargs='+',type=int, help='epoch steps to reduce lr')
    parser.add_argument('--lr_step', default='24,33', type=str, help='epoch steps to reduce lr')
    parser.add_argument('--deterministic', default=1, type=int, help="if deterministic")
    parser.add_argument('--validate', default=0, type=int, help="If enable validate per epoch during training")
    parser.add_argument('--multi_train', default=1, type=int, help='use multi-scale data augmentation for training ')
    parser.add_argument('--multi_test', default=0, type=int,
                        help="use multi-scale data augmentation for testing or validation")
    parser.add_argument('--test_ann_file', default="annotations/image_info_test-dev2017.json", type=str,
                        help="annotation file for test_dir (Check test info data in coco website)")
    parser.add_argument('--test_img_prefix', default="test2017", type=str, help="data dir for test")
    parser.add_argument('--format_only', default=0, type=int,
                        help="evaluate with bbox by default, but when testing without gt, please enable this format_only button")
    parser.add_argument('--load_weight', default="./pre-trained_weights/resnet101-5d3b4d8f.pth", type=str, help="weight path for pretrained model or evaluation")
    parser.add_argument('--eval', default=0, type=int, help="For enabling eval state")
    args, unknown = parser.parse_known_args()
    return args


args = parse_args()

local_time = time.strftime("%b%d_%H%M%S", time.localtime())

abs_data_url = os.path.abspath(args.data_url)
abs_train_url = os.path.abspath(args.train_url)
# root_path = os.path.abspath(__file__)
# root_path = os.path.split(root_path)[0]
root_path = '/home/ma-user/work/TSD'
abs_tsd_path = root_path + '/TSD'

print('----------------------------------')
print("data_url:", args.data_url)
print("train_url", args.train_url)
print("abs_data_url", abs_data_url)
print("abs_train_url", abs_train_url)
print("root_path", root_path)
print('----------------------------------')


abs_weight_path = os.path.abspath(args.load_weight)
tmp_config_file = root_path + '/gen_tmp_config.py'

cmd_initenv = """
pip install numpy==1.17.4
pip install %s/dependents/Cython-0.29.21-cp36-cp36m-manylinux1_x86_64.whl
pip install %s/dependents/mmcv_full-1.2.0+torch1.4.0+cu101-cp36-cp36m-manylinux1_x86_64.whl
pip install %s/dependents/mmdet-1.1.0+144fc96-cp36-cp36m-linux_x86_64.whl
pip install %s/dependents/pycocotools-2.0.2-cp36-cp36m-linux_x86_64.whl
""" % (root_path, root_path, root_path, root_path)

subprocess.run(cmd_initenv, shell=True)
print('pip install done--------------------------------\n\n')
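
Once the wheels above are installed, a quick import check helps confirm that the environment picked up the expected versions (a small optional check; the exact version strings depend on the wheels shipped with this case):

import mmcv
import mmdet
print('mmcv version:', mmcv.__version__)    # expected 1.2.0 from the wheel above
print('mmdet version:', mmdet.__version__)  # expected 1.1.0 from the wheel above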

2.2 Configure Parameters

import sys

sys.path.insert(1, root_path)

from mmcv_config import Config

# Processing Config.........
if args.extra_config == "":
    basic_config = "%s/TSD_configs/BEST_cascade_rcnn_r101_fpn_TSD_3x_val.py" % root_path 
else:
    basic_config = "%s/args.extra_config" % abs_data_url
train_config = "%s/tsd_train_config.py" % root_path

config_dict = {}

# import tsd_train_config
config_dict = Config.fromfile(train_config)
abs_load_weight = os.path.abspath(abs_weight_path)
config_dict['model']['pretrained'] = abs_load_weight
config_dict['total_epochs'] = args.epoch
config_dict['optimizer']['lr'] = args.lr
config_dict['lr_config']['step'] = [int(i) for i in args.lr_step.split(',')]
config_dict['data']['imgs_per_gpu'] = args.imgs_per_gpu

train_pipeline = config_dict['train_pipeline']
if args.multi_train == 0:
    train_pipeline[2] = dict(type='Resize', img_scale=(1333, 800), keep_ratio=True)
else:
    train_pipeline[2] = {'type': 'Resize', 'img_scale': [(1600, 400), (1600, 1400)], 'multiscale_mode': 'range',
                         'keep_ratio': True}
config_dict['train_pipeline'] = train_pipeline
config_dict['data']['train']['pipeline'] = train_pipeline

data_root = '%s/' % abs_data_url

config_dict['checkpoint_config']['interval'] = args.epoch
config_dict['work_dir'] = abs_train_url
config_dict.dump(tmp_config_file)

check_mkdir("%s/model" % (abs_train_url))

print("ls current data_dir is: %s_: \n" % abs_data_url, os.listdir(abs_data_url), '____________________________\n')
ls current data_dir is: /home/ma-user/work/TSD/coco_data_: 

 ['train.json', 'train'] ____________________________
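
Before launching training, the generated temporary config can be reloaded to verify that the overrides above took effect (an optional check using the same Config helper already imported):

check_cfg = Config.fromfile(tmp_config_file)
print('total_epochs:', check_cfg['total_epochs'])
print('lr:', check_cfg['optimizer']['lr'])
print('lr steps:', check_cfg['lr_config']['step'])
print('imgs_per_gpu:', check_cfg['data']['imgs_per_gpu'])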

2.3 Start Training

import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--config', default ='/home/ma-user/work/TSD/gen_tmp_config.py', help='train config file path')
parser.add_argument('--work_dir', default ='/home/ma-user/work/TSD/output', help='the dir to save logs and models')
parser.add_argument('--resume_from', help='the checkpoint file to resume from')
parser.add_argument(
    '--validate',
    default ='False',
    help='whether to evaluate the checkpoint during training')
group_gpus = parser.add_mutually_exclusive_group()
group_gpus.add_argument(
    '--gpus',
    default =1,
    type=int,
    help='number of gpus to use '
    '(only applicable to non-distributed training)')
group_gpus.add_argument(
    '--gpu-ids',
    type=int,
    nargs='+',
    help='ids of gpus to use '
    '(only applicable to non-distributed training)')
parser.add_argument('--seed', type=int, default=None, help='random seed')
parser.add_argument(
    '--deterministic',
    action='store_true',
    help='whether to set deterministic options for CUDNN backend.')
parser.add_argument(
    '--launcher',
    choices=['none', 'pytorch', 'slurm', 'mpi'],
    default='none',
    help='job launcher')
parser.add_argument('--local_rank', type=int, default=0)
parser.add_argument(
    '--autoscale-lr',
    action='store_true',
    help='automatically scale lr with the number of gpus')

args1, unknown = parser.parse_known_args()
if 'LOCAL_RANK' not in os.environ:
    os.environ['LOCAL_RANK'] = str(args1.local_rank)
main(args1)

if args.eval == 0:
    if args.epoch != 0:
        cp_model_cmd = """cd %s
        cp -r trained_model/model %s/
        rm %s/model/*.pth
        cp -r %s/epoch_%d.pth %s/model
        """ % (root_path, abs_train_url, abs_train_url, abs_train_url, args.epoch, abs_train_url)
    else:
        cp_model_cmd = """cd %s
        cp -r trained_model/model %s/
        """ % (root_path, abs_train_url)

    # copy the trained model into the output directory in both cases
    print(cp_model_cmd)
    subprocess.run(cp_model_cmd, shell=True)
    
    print("Everything done")

3. Model Testing

3.1 Test Function

# -*- coding: utf-8 -*-
import subprocess
import os

import torch
import numpy as np
from PIL import Image
from io import BytesIO
from collections import OrderedDict
import torch.backends.cudnn as cudnn
import cv2
import sys
# sys.path.insert(0, os.path.dirname(__file__))
from mmdet.apis import inference_detector, init_detector, show_result

root_path = './'
cudnn.benchmark = True
    
def inference(model, img, thred=0.5): 
    result = inference_detector(model, img)
    bboxes = np.vstack(result)
    labels = [
        np.full(bbox.shape[0], i, dtype=np.int32)
        for i, bbox in enumerate(result)
    ]
    labels = np.concatenate(labels)
    CLASSES = model.CLASSES
    detection_classes, detection_boxes, detection_scores = [], [], []
    #print(labels)
    for i in range(len(bboxes)):
        if bboxes[i][-1] > thred:
            detection_classes.append(CLASSES[int(labels[i]) ])
            box = [float(b) for b in bboxes[i]][:4]
            # reorder from [x1, y1, x2, y2] to [y1, x1, y2, x2]
            new_box = [box[1], box[0], box[3], box[2]]
            detection_boxes.append(new_box)
            detection_scores.append(float(bboxes[i][-1]))
    #show_result(
    #        img, result, model.CLASSES, score_thr=thred, show=False, wait_time=1, out_file='/home/work/res_demo.jpg')
    return detection_classes, detection_boxes, detection_scores 


class ModelClass():
    def __init__(self, model_path):

        self.model_path = model_path
        
        
        self.device = torch.device( "cuda")
        config = "%s/TSD_configs/cascade_rcnn_r101_fpn_TSD_multi_test3x.py"%root_path
        self.model = init_detector(config, model_path, device=self.device)
        print('load model success')

    def predict(self, file_name):
        image = Image.open(file_name).convert('RGB')
        img = np.array(image)
        img = img[:, :, ::-1]
        img = np.float32(img)
        thred = os.environ.get('thred')
        try:
            thred = float(thred)
        except:
            thred = 0.5
        detection_classes, detection_boxes, detection_scores = inference(self.model, img, thred)
        
        image = cv2.cvtColor(np.asarray(image),cv2.COLOR_RGB2BGR)
        for i,box in enumerate(detection_boxes):
            if detection_scores[i] > 0.9 :
                scores = str(round(float(detection_scores[i]), 2))
                image = cv2.rectangle(image,(int(box[1]),int(box[0])),(int(box[3]),int(box[2])),(0,255,0),2)
                image = cv2.putText(image,detection_classes[i]+':'+scores,(int(box[1]),int(box[0])),cv2.FONT_HERSHEY_SIMPLEX,0.7,(0,0,255),2)

        return image

3.2 Run the Test

if __name__ == '__main__':
    import matplotlib.pyplot as plt

    img_path ='./coco_data/train/000000040036.jpg'  # change to your test image path
    model_path = './trained_model/model/epoch_35_mAP_49.4_tsd1.pth'  # change to your model path

    my_model = ModelClass(model_path)
    result = my_model.predict(img_path)
    result = Image.fromarray(cv2.cvtColor(result,cv2.COLOR_BGR2RGB))
    plt.figure(figsize=(10,10))  # set the figure size
    plt.imshow(result)
    plt.show()
load model success
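
To keep the visualization instead of only displaying it, the rendered image can also be written to disk (optional; the output file name below is just an example):

# 'result' is the RGB PIL image created above; convert back to BGR for cv2.imwrite
cv2.imwrite('./prediction_vis.jpg', cv2.cvtColor(np.asarray(result), cv2.COLOR_RGB2BGR))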
