ModelBox Development Experience, Day 06 Case: Multi-Person Human Keypoint Detection
- This case uses the Lightweight OpenPose model to build a multi-person human keypoint detection application.
- Code: https://github.com/sunxiaobei/modelbox_gallery
- Code tag: v1.6 multi_person_pose_lightweight_openpose
- Note: this case has not run successfully yet; the application always hangs.
Development Preparation
- Development environment installation and deployment: already completed in earlier sessions.
- Model training: train the model on ModelArts.
- Model conversion: the model in the code repository has already been converted.
Application Development
Open VS Code and connect to the directory containing the ModelBox SDK (or to the remote development board) to start developing the multi-person keypoint detection application.
(1) Create the project
Use create.py to create the multi_person_pose_lightweight_openpose project; this generates an empty ModelBox sample project.
./create.py -t server -n multi_person_pose_lightweight_openpose
git add .
git commit -m 'multi_person_pose_lightweight_openpose'
git push
(2) Create the inference flowunit
- The core of an AI application is the model inference part. Create the inference flowunit with the following command; the module will be created in the model folder of the project directory:
./create.py -t infer -n pose_infer -p multi_person_pose_lightweight_openpose
git add .
git commit -m 'create pose_infer'
- Copy the model and configuration files from the model/pose_infer folder in the resource package into the model/pose_infer directory of the multi-person keypoint detection project.
- Here lightweight_openpose_288x512_rknpu2.rknn is the converted RKNN model, and pose_infer.toml is the ModelBox flowunit configuration file for this model, with the following content:
# Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.
[base]
name = "pose_infer"
device = "rknpu"
version = "1.0.0"
description = "lightweight_openpose_288x512_rknpu2"
entry = "./lightweight_openpose_288x512_rknpu2.rknn" # model file path, use relative path
type = "inference"
virtual_type = "rknpu2" # inference engine type: rockchip now support rknpu, rknpu2(if exist)
group_type = "Inference" # flowunit group attribution, do not change
is_input_contiguous = "false" # rk do not support memory combine, fix, do not change
[input]
[input.input1]
name = "input"
type = "uint8"
device = "rknpu"
[output]
[output.output1]
name = "out_heatmaps"
type = "float"
[output.output2]
name = "out_pafs"
type = "float"
- As shown above, the model has two output nodes: the feature maps for keypoint information (heatmaps) and for limb information (PAFs, part affinity fields). All keypoints must be decoded from these maps and then grouped into the limbs of each person.
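Heatmap decoding usually starts with non-maximum suppression: each keypoint candidate is a local maximum of its heatmap channel above a confidence threshold. A minimal NumPy sketch of that first step (the function name and threshold are illustrative, not the actual pose_utils_light.py API):

```python
import numpy as np

def find_peaks(heatmap, threshold=0.1):
    """Return (row, col, score) for each local maximum above threshold.

    A pixel counts as a peak if it exceeds all four 4-connected neighbours.
    """
    h = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    center = h[1:-1, 1:-1]
    peaks = (
        (center > threshold)
        & (center > h[:-2, 1:-1])   # above
        & (center > h[2:, 1:-1])    # below
        & (center > h[1:-1, :-2])   # left
        & (center > h[1:-1, 2:])    # right
    )
    ys, xs = np.nonzero(peaks)
    return [(int(y), int(x), float(center[y, x])) for y, x in zip(ys, xs)]

# Toy heatmap with two blobs
hm = np.zeros((6, 8))
hm[1, 2] = 0.9
hm[4, 6] = 0.8
print(find_peaks(hm))  # [(1, 2, 0.9), (4, 6, 0.8)]
```

In the real model, this runs once per keypoint channel, and the peaks from all channels become the candidates for the grouping stage.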
(3) Create the postprocessing flowunit
- The postprocessing flowunit decodes keypoints and limbs from the model inference results. Create it with the following command; it will be created in the etc/flowunit folder of the project directory:
./create.py -t python -n pose_post_light -p multi_person_pose_lightweight_openpose
git add .
git commit -m 'create pose_post_light'
- Copy the code and configuration files from the etc/flowunit/pose_post_light folder in the common resource package into the directory of the same name in this project. The core decoding logic is in pose_utils_light.py; consult the OpenPose model details when reading the code.
(4) Create the drawing flowunit
Once the keypoints are obtained, they can be drawn on the original image for display. Create the drawing flowunit with the following command:
./create.py -t python -n draw_pose_light -p multi_person_pose_lightweight_openpose
git add .
git commit -m 'create draw_pose_light'
Copy the code and configuration files from the etc/flowunit/draw_pose_light folder in the common resource package into the directory of the same name in this project.
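One detail the drawing step must handle: the keypoints are decoded at the model input resolution (512x288 per the graph below), while drawing happens on the full-resolution decoded frame, so the coordinates must be scaled back. A dependency-free sketch of that idea, marking keypoints with filled squares on a NumPy image (the actual flowunit draws with OpenCV; this is only an illustration):

```python
import numpy as np

# Model input size taken from the flow graph (image_resize: 512x288)
NET_W, NET_H = 512, 288

def draw_keypoints(frame, keypoints, radius=2):
    """Mark each keypoint on `frame` (H x W x 3 uint8) with a red square.

    `keypoints` are (x, y) in model-input coordinates and are scaled
    back to the frame resolution before drawing.
    """
    fh, fw = frame.shape[:2]
    sx, sy = fw / NET_W, fh / NET_H
    for x, y in keypoints:
        cx, cy = int(round(x * sx)), int(round(y * sy))
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, fh)
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, fw)
        frame[y0:y1, x0:x1] = (0, 0, 255)  # BGR red, matching OpenCV convention
    return frame

frame = np.zeros((576, 1024, 3), dtype=np.uint8)  # 2x the model input size
draw_keypoints(frame, [(256, 144)])               # center of the net input
print(frame[288, 512])  # [  0   0 255]
```

With OpenCV available, the squares would be replaced by cv2.circle and cv2.line calls to draw the skeleton edges between matched keypoints.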
(5) Modify the flow graph
With the model inference and supporting flowunits ready, we can connect them into a flow graph for testing. By default the project generates multi_person_pose_lightweight_openpose.toml in the graph directory. Referring to graph/multi_person_pose_lightweight_openpose.toml in the resource package, modify it to:
# Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.
[driver]
dir = ["${HILENS_APP_ROOT}/etc/flowunit",
"${HILENS_APP_ROOT}/etc/flowunit/cpp",
"${HILENS_APP_ROOT}/model",
"${HILENS_MB_SDK_PATH}/flowunit"]
skip-default = true
[profile]
profile=false
trace=false
dir=""
[graph]
format = "graphviz"
graphconf = """digraph multi_person_pose_lightweight_openpose {
node [shape=Mrecord];
queue_size = 4
batch_size = 1
input1[type=input,flowunit=input,device=cpu,deviceid=0]
data_source_parser[type=flowunit, flowunit=data_source_parser, device=cpu, deviceid=0]
video_demuxer[type=flowunit, flowunit=video_demuxer, device=cpu, deviceid=0]
video_decoder[type=flowunit, flowunit=video_decoder, device=rknpu, deviceid=0, pix_fmt=bgr]
image_resize[type=flowunit, flowunit=resize, device=rknpu, deviceid=0, image_width=512, image_height=288]
pose_detection[type=flowunit, flowunit=pose_infer, device=rknpu, deviceid=0]
pose_post_light[type=flowunit, flowunit=pose_post_light, device=cpu, deviceid=0]
draw_pose_light[type=flowunit, flowunit=draw_pose_light, device=cpu, deviceid=0]
video_out[type=flowunit, flowunit=video_out, device=rknpu, deviceid=0]
input1:input -> data_source_parser:in_data
data_source_parser:out_video_url -> video_demuxer:in_video_url
video_demuxer:out_video_packet -> video_decoder:in_video_packet
video_decoder:out_video_frame -> image_resize:in_image
image_resize:out_image -> pose_detection:input
pose_detection:out_heatmaps -> pose_post_light:in_heatmaps
pose_detection:out_pafs -> pose_post_light:in_pafs
video_decoder:out_video_frame -> draw_pose_light:in_image
pose_post_light:out_pose -> draw_pose_light:in_pose
draw_pose_light:out_image -> video_out:in_video_frame
}"""
[flow]
desc = "multi_person_pose_lightweight_openpose run in modelbox-rk-aarch64"
git add .
git commit -m 'modify graph'
For a given video stream, this flow graph runs video decoding, image resizing, Lightweight OpenPose inference, keypoint postprocessing, and drawing, then saves the result.
Next, referring to mock_task.toml in the common resource package, modify the input and output sections of the project's task configuration file bin/mock_task.toml to:
# Task input; the mock currently supports a single RTSP stream or a local URL
# For an RTSP camera: type = "rtsp", with the RTSP address in url
# Otherwise use "url", e.g. a local file path or an HTTP server address (for a camera, url = "0")
[input]
type = "url"
url = "../data/multi_person_pose.mp4"
# Task output; currently supports "webhook" and local output "local"
# (for local: url = "0" outputs to the screen, an RTSP address outputs to RTSP;
#  local can also output to a file, in which case a relative path is relative to this mock_task.toml file)
[output]
type = "local"
url = "../hilens_data_dir/multi_person_pose_result.mp4"
(6) Run the application
In the project directory, execute build_project.sh to build the project:
cd workspace/multi_person_pose_lightweight_openpose
./build_project.sh
git add .
git commit -m 'build'
Execute bin/main.sh to run the application (if it reports errors, switch to the root account and run again; this application requires OpenCV and NumPy to be installed via pip beforehand). After the run finishes, a multi_person_pose_result.mp4 file is generated in the hilens_data_dir directory and can be downloaded to a PC for viewing.
bin/main.sh
git add .
git commit -m 'run multi_person_pose_lightweight_openpose'
git push
git tag -a v1.6 -m 'multi_person_pose_lightweight_openpose'
git push origin --tags
No such file or directory
# Check whether something is misconfigured
- The application hung and the machine had to be rebooted
- It hangs every time
git add .
git commit -m 'run failed'
- After rebooting, try again:
cd workspace/multi_person_pose_lightweight_openpose
bin/main.sh
git add .
git commit -m 'run multi_person_pose_lightweight_openpose'
git push
git tag -a v1.6.1 -m 'multi_person_pose_lightweight_openpose'
git push origin --tags
- It hung again.
Summary
- The earlier cases could be recovered with some cleanup and a reboot, but this case failed to run after many attempts. Hopefully there will be a chance to try it again later.