[CANN Training Camp, Season 3] Notes on Setting Up MMDeploy (Atlas 200DK + CANN 5.1.RC2)
1. Foreword
Zhang Xiaobai's earlier guides to getting CANN 5.1.RC2 running on this Atlas 200DK are collected here:
CANN 5.1.RC2
(1) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 1/2 https://bbs.huaweicloud.com/blogs/371575
(2) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 3 https://bbs.huaweicloud.com/blogs/371576
(3) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 4 https://bbs.huaweicloud.com/blogs/371577
(4) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 5 https://bbs.huaweicloud.com/blogs/371578
(5) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 6 https://bbs.huaweicloud.com/blogs/371579
(6) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 7 https://bbs.huaweicloud.com/blogs/371580
(7) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 8 https://bbs.huaweicloud.com/blogs/371581
(8) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 9 https://bbs.huaweicloud.com/blogs/371583
(9) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 10 https://bbs.huaweicloud.com/blogs/371584
(10) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 11 https://bbs.huaweicloud.com/blogs/371600
(11) Atlas 200DK + CANN 5.1.RC2 + MindStudio 5.0.RC2 + MindX SDK 3.0 Hands-on Guide 12 https://bbs.huaweicloud.com/blogs/371775
But Ascend evolves quickly: by the end of 2022, CANN had already been upgraded to version 6.
Even so, Zhang Xiaobai still wanted to try MMDeploy on top of the existing CANN 5.1.RC2 installation.
2. Confirming the Atlas 200DK Software Environment
First, confirm that this 200DK is running CANN 5.1.RC2.
Check the software environment:
gcc 7.5
python 3.9
no conda environment
CMake 3.10.2, which is too old.
3. Installing CMake 3.24
First upgrade CMake. Download CMake 3.24.3:
wget https://github.com/Kitware/CMake/releases/download/v3.24.3/cmake-3.24.3.tar.gz
tar -xvf cmake-3.24.3.tar.gz
cd cmake-3.24.3
./configure
make -j$(nproc)
sudo make install
Check the CMake version:
cmake --version
Put /usr/local/bin ahead of /usr/bin on the PATH by adding the following line to ~/.bashrc:
export PATH=/usr/local/bin:$PATH
Then run source ~/.bashrc to make it take effect.
CMake has now been upgraded to 3.24.
4. Installing bz2
Install the bz2 development package (the next step rebuilds Python, whose bz2 module needs this library):
sudo apt-get install libbz2-dev
5. Rebuilding and Reinstalling Python 3.9.7
Switch to the Python 3.9 directory:
cd /usr/local/python3.9.7/bin
Only pip3 exists here, not pip, so create a symlink: sudo ln -s pip3 pip
Under /home/HwHiAiUser, enter the Python 3.9.7 source directory:
cd ~/Python-3.9.7
make -j$(nproc)
sudo make install
6. Installing PyTorch
pip install torch==1.8.1 torchvision==0.9.1 --extra-index-url https://download.pytorch.org/whl/cpu
7. Installing mim
pip install openmim -i https://pypi.tuna.tsinghua.edu.cn/simple
8. Installing click
pip install click==7.1.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
9. Installing mmcv
mim install mmcv-full -i https://pypi.tuna.tsinghua.edu.cn/simple
Wait patiently for the installation to finish.
10. Cloning the MMDeploy Repository
Since GitHub is hard to reach, clone the Gitee mirror of the GitHub repository instead (with the caveat that the Gitee copy may lag behind the latest GitHub code):
git clone --recursive https://gitee.com/chen-hong-chhhh/mmdeploy
11. Installing the MMDeploy Model Converter
cd mmdeploy
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -v -e .
12. Building and Installing the MMDeploy SDK
source ~/Ascend/ascend-toolkit/set_env.sh
cd ~/mmdeploy
mkdir -p build && cd build
cmake .. -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON -DMMDEPLOY_TARGET_BACKENDS=acl
make -j$(nproc)
make install
13. Verifying That the MMDeploy Model Converter Is Deployed Correctly
cd ~/mmdeploy
python tools/check_env.py
14. Verifying That the MMDeploy SDK Is Deployed Correctly
export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH
python -c "import mmdeploy_python"
15. Installing the OpenMMLab Algorithm Library mmcls
pip install mmcls -i https://pypi.tuna.tsinghua.edu.cn/simple
16. Downloading the ResNet18 PyTorch Model
cd ~/mmdeploy
mim download mmcls --config resnet18_8xb32_in1k --dest .
The following files can be seen:
17. Converting the ResNet18 Model
First create a resnet18.sh file:
python tools/deploy.py configs/mmcls/classification_ascend_static-224x224.py resnet18_8xb32_in1k.py resnet18_8xb32_in1k_20210831-fbbb1da6.pth tests/data/tiger.jpeg --work-dir mmdeploy_models/mmcls/resnet18/cann --device cpu --dump-info
Then run it: sh ./resnet18.sh
You can see that torch2onnx runs first, and then atc is called to convert the model:
atc --model=mmdeploy_models/mmcls/resnet18/cann/end2end.onnx --framework=5 --output=mmdeploy_models/mmcls/resnet18/cann/end2end --soc_version=Ascend310 --input_format=NCHW --input_shape=input:1,3,224,224
framework=5 means the ONNX format.
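To make each atc argument's role explicit, here is a small pure-Python sketch that reassembles the command shown above. The helper name build_atc_cmd is hypothetical and not part of atc or MMDeploy; it only illustrates how the pieces of the invocation fit together.

```python
def build_atc_cmd(onnx_model, output_prefix, input_shape, soc="Ascend310"):
    """Assemble an atc command line that converts an ONNX model to an OM model."""
    shape_str = ",".join(str(d) for d in input_shape)  # NCHW, e.g. "1,3,224,224"
    return (
        f"atc --model={onnx_model} "
        "--framework=5 "              # 5 = ONNX input format
        f"--output={output_prefix} "  # atc appends the .om suffix itself
        f"--soc_version={soc} "       # Ascend 310 is the chip on the 200DK
        "--input_format=NCHW "
        f"--input_shape=input:{shape_str}"
    )

cmd = build_atc_cmd(
    "mmdeploy_models/mmcls/resnet18/cann/end2end.onnx",
    "mmdeploy_models/mmcls/resnet18/cann/end2end",
    (1, 3, 224, 224))
print(cmd)
```

This reproduces exactly the command that tools/deploy.py printed for the ResNet18 conversion.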
Model conversion succeeded.
Take a look inside the model directory:
cd ~/mmdeploy/mmdeploy_models/mmcls/resnet18/cann
Also open the two JSON files to view the model's meta information:
detail.json
deploy.json
18. Installing the OpenMMLab Algorithm Library mmdet
pip install mmdet -i https://pypi.tuna.tsinghua.edu.cn/simple
19. Downloading the Faster R-CNN PyTorch Model
mim download mmdet --config faster_rcnn_r50_fpn_1x_coco --dest .
20. Converting the Faster R-CNN Model
First create a faster_rcnn.sh file:
python tools/deploy.py configs/mmdet/detection/detection_ascend_static-800x1344.py faster_rcnn_r50_fpn_1x_coco.py faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth demo/resources/det.jpg --work-dir mmdeploy_models/mmdet/faster_rcnn/cann --device cpu --dump-info
Then run it: sh ./faster_rcnn.sh
At this point you may hit an error complaining that det.jpg cannot be found.
Download det.jpg from the upstream GitHub repository and upload it to the demo/resources directory under ~/mmdeploy on the 200DK:
Run it again: sh ./faster_rcnn.sh
As before, wait patiently for the atc conversion to finish:
atc --model=mmdeploy_models/mmdet/faster_rcnn/cann/end2end.onnx --framework=5 --output=mmdeploy_models/mmdet/faster_rcnn/cann/end2end --soc_version=Ascend310 --input_format=NCHW --input_shape=input:1,3,800,1344
Conversion complete.
Check the results:
cd ~/mmdeploy/mmdeploy_models/mmdet/faster_rcnn/cann
You can see that the ONNX model, the OM offline model, and the JSON files have all been generated.
deploy.json:
{
    "version": "0.10.0",
    "task": "Detector",
    "models": [
        {
            "name": "fasterrcnn",
            "net": "end2end.om",
            "weights": "",
            "backend": "ascend",
            "precision": "FP32",
            "batch_size": 1,
            "dynamic_shape": false
        }
    ],
    "customs": []
}
detail.json:
{
    "version": "0.10.0",
    "codebase": {
        "task": "ObjectDetection",
        "codebase": "mmdet",
        "version": "2.26.0",
        "pth": "faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth",
        "config": "faster_rcnn_r50_fpn_1x_coco.py"
    },
    "codebase_config": {
        "type": "mmdet",
        "task": "ObjectDetection",
        "model_type": "end2end",
        "post_processing": {
            "score_threshold": 0.05,
            "confidence_threshold": 0.005,
            "iou_threshold": 0.5,
            "max_output_boxes_per_class": 200,
            "pre_top_k": 5000,
            "keep_top_k": 100,
            "background_label_id": -1
        }
    },
    "onnx_config": {
        "type": "onnx",
        "export_params": true,
        "keep_initializers_as_inputs": false,
        "opset_version": 11,
        "save_file": "end2end.onnx",
        "input_names": [
            "input"
        ],
        "output_names": [
            "dets",
            "labels"
        ],
        "input_shape": [
            1344,
            800
        ],
        "optimize": true
    },
    "backend_config": {
        "type": "ascend",
        "model_inputs": [
            {
                "input_shapes": {
                    "input": [
                        1,
                        3,
                        800,
                        1344
                    ]
                }
            }
        ]
    },
    "calib_config": {}
}
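The post_processing block above (score_threshold, iou_threshold, keep_top_k) configures the non-maximum suppression applied to the detector's raw boxes. As a minimal illustration only, here is a pure-Python sketch of that logic; the real implementation lives inside the deployed model and the MMDeploy SDK, not in user code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(dets, score_threshold=0.05, iou_threshold=0.5, keep_top_k=100):
    """dets: list of (x1, y1, x2, y2, score); greedy NMS, returns kept boxes."""
    dets = sorted((d for d in dets if d[4] >= score_threshold),
                  key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        # keep a box only if it does not overlap a kept box too strongly
        if all(iou(d[:4], k[:4]) < iou_threshold for k in kept):
            kept.append(d)
    return kept[:keep_top_k]

dets = [(0, 0, 10, 10, 0.9),    # kept (highest score)
        (1, 1, 10, 10, 0.8),    # suppressed: IoU 0.81 with the first box
        (20, 20, 30, 30, 0.7),  # kept (disjoint)
        (5, 5, 6, 6, 0.01)]     # dropped: below score_threshold
print(nms(dets))  # keeps the 0.9 and 0.7 boxes
```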
pipeline.json:
{
    "pipeline": {
        "input": [
            "img"
        ],
        "output": [
            "post_output"
        ],
        "tasks": [
            {
                "type": "Task",
                "module": "Transform",
                "name": "Preprocess",
                "input": [
                    "img"
                ],
                "output": [
                    "prep_output"
                ],
                "transforms": [
                    {
                        "type": "LoadImageFromFile"
                    },
                    {
                        "type": "Resize",
                        "keep_ratio": false,
                        "size": [
                            800,
                            1344
                        ]
                    },
                    {
                        "type": "Normalize",
                        "mean": [
                            123.675,
                            116.28,
                            103.53
                        ],
                        "std": [
                            58.395,
                            57.12,
                            57.375
                        ],
                        "to_rgb": true
                    },
                    {
                        "type": "Pad",
                        "size_divisor": 1
                    },
                    {
                        "type": "DefaultFormatBundle"
                    },
                    {
                        "type": "Collect",
                        "keys": [
                            "img"
                        ],
                        "meta_keys": [
                            "valid_ratio",
                            "filename",
                            "img_norm_cfg",
                            "scale_factor",
                            "flip_direction",
                            "ori_filename",
                            "ori_shape",
                            "pad_shape",
                            "flip",
                            "img_shape"
                        ]
                    }
                ],
                "sha256": "83ba9eb66901a32e1fe5ebcff0a6375706597472e185e1d94aee2043a7399d3b",
                "fuse_transform": false
            },
            {
                "name": "fasterrcnn",
                "type": "Task",
                "module": "Net",
                "input": [
                    "prep_output"
                ],
                "output": [
                    "infer_output"
                ],
                "input_map": {
                    "img": "input"
                },
                "output_map": {}
            },
            {
                "type": "Task",
                "module": "mmdet",
                "name": "postprocess",
                "component": "ResizeBBox",
                "params": {
                    "rpn": {
                        "nms_pre": 1000,
                        "max_per_img": 1000,
                        "nms": {
                            "type": "nms",
                            "iou_threshold": 0.7
                        },
                        "min_bbox_size": 0
                    },
                    "rcnn": {
                        "score_thr": 0.05,
                        "nms": {
                            "type": "nms",
                            "iou_threshold": 0.5
                        },
                        "max_per_img": 100
                    },
                    "min_bbox_size": 0,
                    "score_thr": 0.05
                },
                "output": [
                    "post_output"
                ],
                "input": [
                    "prep_output",
                    "infer_output"
                ]
            }
        ]
    }
}
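The Preprocess task in the pipeline above first resizes the image to 800x1344 (keep_ratio is false, so the aspect ratio is not preserved) and then normalizes each pixel with the standard ImageNet mean/std, in RGB order since to_rgb is true. A tiny sketch of just the Normalize step, using the exact constants from the pipeline:

```python
# Per-channel normalization as done by the "Normalize" transform above:
# (value - mean) / std, applied to RGB values.
MEAN = [123.675, 116.28, 103.53]  # ImageNet channel means (RGB)
STD = [58.395, 57.12, 57.375]     # ImageNet channel standard deviations (RGB)

def normalize_pixel(rgb):
    """Normalize a single RGB pixel the way the Normalize transform does."""
    return [(v - m) / s for v, m, s in zip(rgb, MEAN, STD)]

# A pixel exactly at the channel means normalizes to all zeros:
print(normalize_pixel([123.675, 116.28, 103.53]))
```

The SDK applies this same arithmetic to every pixel of the resized image before feeding it to the OM model.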
21. Running Inference with ResNet18 and Faster R-CNN
Create the file resnet18_inference.py:
import cv2
from mmdeploy_python import Classifier

# create a classifier
classifier = Classifier(model_path="mmdeploy_models/mmcls/resnet18/cann", device_name='npu', device_id=0)
# read an image
img = cv2.imread("tests/data/tiger.jpeg")
# perform inference
result = classifier(img)
# show the result
for label_id, score in result:
    print(label_id, score)
python resnet18_inference.py
The text output is shown below:
[2022-12-04 03:51:32.101] [mmdeploy] [info] [acl_net.cpp:65] ACL initialized.
[2022-12-04 03:51:32.407] [mmdeploy] [info] [acl_net.cpp:314] n_inputs = 1, dynamic_tensor_index_ = -1
[2022-12-04 03:51:32.410] [mmdeploy] [info] [acl_net.cpp:330] input [1, 3, 224, 224]
[2022-12-04 03:51:32.411] [mmdeploy] [info] [acl_net.cpp:369] Softmax_49:0:output [1, 1000]
[2022-12-04 03:51:32.419] [mmdeploy] [info] [inference.cpp:50] ["img"] <- ["img"]
[2022-12-04 03:51:32.419] [mmdeploy] [info] [inference.cpp:61] ["post_output"] -> ["cls"]
292 0.92626953125
282 0.07257080078125
290 0.0008058547973632812
281 0.00024580955505371094
340 5.65648078918457e-05
[2022-12-04 03:51:32.522] [mmdeploy] [info] [acl_net.cpp:83] ACL finalized.
HwHiAiUser@davinci-mini:~/mmdeploy$
So the image is classified as class 292 with about 92% confidence (in ImageNet-1k, class 292 is "tiger", which matches the input image tiger.jpeg).
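The five output lines are simply the top-5 entries of the model's 1000-way softmax distribution (note that the printed scores sum to roughly 1, and the log shows the model output node is a Softmax). As an illustration, here is a toy pure-Python sketch of how such (class_id, probability) pairs are obtained from raw logits; the 4-class logits are made up for the example:

```python
import math

def topk(probs, k=5):
    """Return the k (class_id, probability) pairs with the highest probability."""
    return sorted(enumerate(probs), key=lambda p: p[1], reverse=True)[:k]

# Toy 4-class example: softmax over raw logits, then top-k selection.
logits = [2.0, 0.5, 0.1, -1.0]
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
for class_id, p in topk(probs, k=2):
    print(class_id, round(p, 4))
```

The Classifier in the SDK returns pairs already sorted this way, so the first line printed is the most likely class.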
Create the file faster_rcnn_inference.py:
import cv2
from mmdeploy_python import Detector

# create a detector
detector = Detector(
    model_path='mmdeploy_models/mmdet/faster_rcnn/cann',
    device_name='npu',
    device_id=0)
# read an image
img = cv2.imread('demo/resources/det.jpg')
# perform inference
bboxes, labels, _ = detector(img)
# visualize the result
for index, (bbox, label_id) in enumerate(zip(bboxes, labels)):
    [left, top, right, bottom], score = bbox[0:4].astype(int), bbox[4]
    if score < 0.3:
        continue
    cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0))
cv2.imwrite('faster_rcnn_output_detection.png', img)
Run the inference:
python faster_rcnn_inference.py
A result image, faster_rcnn_output_detection.png, is generated in the current directory.
Transfer it to Windows and open it:
The detection clearly succeeded.
22. Homework: Converting the ResNet50 and RetinaNet Models
Per the homework requirements, just change resnet18 to resnet50 and faster_rcnn to retinanet, then repeat the steps above.
In detail:
(1) Download the ResNet50 PyTorch model
mim download mmcls --config resnet50_8xb32_in1k --dest .
(2) Download the RetinaNet PyTorch model
mim download mmdet --config retinanet_r50_fpn_1x_coco --dest .
(3) Convert the ResNet50 model
resnet50.sh:
python tools/deploy.py configs/mmcls/classification_ascend_static-224x224.py resnet50_8xb32_in1k.py resnet50_8xb32_in1k_20210831-ea4938fc.pth tests/data/tiger.jpeg --work-dir mmdeploy_models/mmcls/resnet50/cann --device cpu --dump-info
Check the conversion results:
(4) Convert the RetinaNet model
retina.sh:
python tools/deploy.py configs/mmdet/detection/detection_ascend_static-800x1344.py retinanet_r50_fpn_1x_coco.py retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth demo/resources/det.jpg --work-dir mmdeploy_models/mmdet/retina/cann --device cpu --dump-info
The atc conversion command it invokes:
atc --model=mmdeploy_models/mmdet/retina/cann/end2end.onnx --framework=5 --output=mmdeploy_models/mmdet/retina/cann/end2end --soc_version=Ascend310 --input_format=NCHW --input_shape=input:1,3,800,1344
Conversion succeeded; check the resulting ONNX and OM models:
23. Homework: Running Inference with ResNet50 and RetinaNet
(1) ResNet50 inference
resnet50_inference.py:
import cv2
from mmdeploy_python import Classifier

# create a classifier
classifier = Classifier(model_path="mmdeploy_models/mmcls/resnet50/cann", device_name='npu', device_id=0)
# read an image
img = cv2.imread("tests/data/tiger.jpeg")
# perform inference
result = classifier(img)
# show the result
for label_id, score in result:
    print(label_id, score)
python resnet50_inference.py
The output is shown below:
[2022-12-04 04:41:42.925] [mmdeploy] [info] [acl_net.cpp:65] ACL initialized.
[2022-12-04 04:41:43.468] [mmdeploy] [info] [acl_net.cpp:314] n_inputs = 1, dynamic_tensor_index_ = -1
[2022-12-04 04:41:43.476] [mmdeploy] [info] [acl_net.cpp:330] input [1, 3, 224, 224]
[2022-12-04 04:41:43.478] [mmdeploy] [info] [acl_net.cpp:369] Softmax_122:0:output [1, 1000]
[2022-12-04 04:41:43.496] [mmdeploy] [info] [inference.cpp:50] ["img"] <- ["img"]
[2022-12-04 04:41:43.496] [mmdeploy] [info] [inference.cpp:61] ["post_output"] -> ["cls"]
292 0.91845703125
282 0.07904052734375
281 0.00037169456481933594
290 0.00032806396484375
243 0.0001347064971923828
[2022-12-04 04:41:43.573] [mmdeploy] [info] [acl_net.cpp:83] ACL finalized.
(2) RetinaNet inference
retina_inference.py:
import cv2
from mmdeploy_python import Detector

# create a detector
detector = Detector(
    model_path='mmdeploy_models/mmdet/retina/cann',
    device_name='npu',
    device_id=0)
# read an image
img = cv2.imread('demo/resources/det.jpg')
# perform inference
bboxes, labels, _ = detector(img)
# visualize the result
for index, (bbox, label_id) in enumerate(zip(bboxes, labels)):
    [left, top, right, bottom], score = bbox[0:4].astype(int), bbox[4]
    if score < 0.3:
        continue
    cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0))
cv2.imwrite('retinanet_output_detection.png', img)
python retina_inference.py
The result image retinanet_output_detection.png has been generated.
Transfer it to Windows and open it:
The results are similar to those of Faster R-CNN.
(The end. Thanks for reading!)