InceptionV3 Image Classification [Exploring Huawei Cloud]

Posted by 阳光大猫 on 2025/03/14 22:06:26
[Abstract] This article shows how to use ModelArts and ModelBox to train and develop an InceptionV3 animal image classification AI application; with just a model file and some simple configuration we can create an HTTP service. Along the way, it covers the basic structure of the InceptionV3 network, the data processing and model training approach, and the logic of the corresponding inference application.

InceptionV3 Image Classification

I. Model Training and Conversion

Inception V3 is an improved version of GoogLeNet built from Inception modules and a global average pooling layer. One of the most important improvements in v3 is factorization: a 7x7 convolution is decomposed into two one-dimensional convolutions (1x7 and 7x1), and a 3x3 convolution likewise into (1x3 and 3x1). This both speeds up computation (the saved compute can be used to deepen the network) and splits one conv layer into two, further increasing the network's depth and non-linearity.
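
As a rough illustration of why this factorization helps, the short calculation below (not part of the original tutorial; the channel width C is chosen purely for illustration) compares the weight count of a single 7x7 convolution with that of the 1x7 + 7x1 pair:

# Weight count of a full 7x7 convolution vs. its 1x7 + 7x1 factorization,
# assuming the same number of input and output channels C.
C = 192  # hypothetical channel width, for illustration only

full_7x7 = 7 * 7 * C * C               # one 7x7 convolution
factorized = (1 * 7 + 7 * 1) * C * C   # a 1x7 conv followed by a 7x1 conv

print(f"7x7 conv:  {full_7x7:,} weights")        # 1,806,336
print(f"1x7 + 7x1: {factorized:,} weights")      # 516,096
print(f"reduction: {1 - factorized / full_7x7:.0%}")  # 71%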

The model training and conversion tutorial has been published in AI Gallery; it includes the training data, training code, and a model conversion script.

After training in a ModelArts Notebook environment, the model is converted to the format required by the target platform: the ONNX format can be used on Windows devices, while RK-series devices require conversion to the RKNN format.
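
The exact export code depends on the framework used in the AI Gallery notebook, which this article does not spell out; the snippet below is only a sketch assuming a PyTorch model, with the input/output tensor names chosen to match the port names used by the inference flowunit later on:

# Sketch only: export a trained InceptionV3 model to ONNX (assumes PyTorch).
# Replace the placeholder network with the model actually trained in the notebook.
import torch
from torchvision.models import inception_v3

model = inception_v3(weights=None, num_classes=90, aux_logits=False)  # placeholder for the trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # NCHW, matching the 224x224 resize used in the graph below
torch.onnx.export(model, dummy_input, "InceptionV3.onnx",
                  input_names=["Input"], output_names=["Output"],
                  opset_version=11)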

II. ModelBox Application Development

1. Create the Project

In the ModelBox SDK directory, use create.bat to create the InceptionV3 project:

PS D:\modelbox-win10-x64-1.5.3> .\create.bat -t server -n InceptionV3
...
success: create InceptionV3 in D:\modelbox-win10-x64-1.5.3\workspace

Among the create.bat parameters, -t specifies the type of instance to create, such as server (a ModelBox project), python (a Python flowunit), c++ (a C++ flowunit), or infer (an inference flowunit); -n specifies the name of the instance; -s, when given, creates the project from the named template instead of an empty one, as in the example below.
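
For example, a project can be created from an existing template rather than an empty one (the template name here is only a placeholder):

PS D:\modelbox-win10-x64-1.5.3> .\create.bat -t server -n MyApp -s <template_name>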

2. Create the Inference Flowunit

In the ModelBox SDK directory, use create.bat to create the inceptionv3_infer inference flowunit:

PS D:\modelbox-win10-x64-1.5.3> .\create.bat -t infer -n inceptionv3_infer -p InceptionV3
...
success: create infer inceptionv3_infer in D:\modelbox-win10-x64-1.5.3\workspace\InceptionV3/model/inceptionv3_infer

When using create.bat here, -t infer means an inference flowunit is created; -n xxx_infer sets the flowunit name to xxx_infer; -p means the created flowunit belongs to the InceptionV3 application.

Download the converted InceptionV3.onnx model into the InceptionV3\model directory and edit the inference flowunit's configuration file inceptionv3_infer.toml:

# Copyright (C) 2020 Huawei Technologies Co., Ltd. All rights reserved.

[base]
name = "inceptionv3_infer"
device = "cpu"
version = "1.0.0"
description = "your description"
entry = "./InceptionV3.onnx"  # model file path, use relative path
type = "inference" 
virtual_type = "onnx" # inference engine type: win10 now only support onnx
group_type = "Inference"  # flowunit group attribution, do not change

# Input ports description
[input]
[input.input1]  # input port number, Format is input.input[N]
name = "Input"  # input port name
type = "float"  # input port data type ,e.g. float or uint8
device = "cpu"  # input buffer type: cpu, win10 now copy input from cpu

# Output ports description
[output]
[output.output1] # output port number, Format is output.output[N]
name = "Output"  # output port name
type = "float"   # output port data type ,e.g. float or uint8

3. Create the Post-processing Flowunit

In the ModelBox SDK directory, use create.bat to create the inceptionv3_post post-processing flowunit:

PS D:\modelbox-win10-x64-1.5.3> .\create.bat -t python -n inceptionv3_post -p InceptionV3  
...
success: create python inceptionv3_post in D:\modelbox-win10-x64-1.5.3\workspace\InceptionV3/etc/flowunit/inceptionv3_post

When using create.bat here, -t python means a general-purpose (Python) flowunit is created; -n xxx_post sets the flowunit name to xxx_post; -p means the created flowunit belongs to the InceptionV3 application.

a. Modify the Configuration File

Our model has one input and one output and covers 90 animal classes in total:

# Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.

# Basic config
[base]
name = "inceptionv3_post" # The FlowUnit name
device = "cpu" # The flowunit runs on cpu
version = "1.0.0" # The version of the flowunit
type = "python" # Fixed value, do not change
description = "description" # The description of the flowunit
entry = "inceptionv3_post@inceptionv3_postFlowUnit" # Python flowunit entry function
group_type = "Generic"  # flowunit group attribution, change as Input/Output/Image/Generic ...

# Flowunit Type
stream = false # Whether the flowunit is a stream flowunit
condition = false # Whether the flowunit is a condition flowunit
collapse = false # Whether the flowunit is a collapse flowunit
collapse_all = false # Whether the flowunit will collapse all the data
expand = false # Whether the flowunit is an expand flowunit

# The default Flowunit config
[config]
num_classes = 90

# Input ports description
[input]
[input.input1] # Input port number, the format is input.input[N]
name = "in_feat" # Input port name
type = "float" # Input port type

# Output ports description
[output]
[output.output1] # Output port number, the format is output.output[N]
name = "out_data" # Output port name
type = "string" # Output port type

b. Modify the Logic Code

The post-processing flowunit reads the inference output, takes the argmax as the predicted class, and returns the class index and score as a JSON string:

# Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import _flowunit as modelbox
import numpy as np
import json

class inceptionv3_postFlowUnit(modelbox.FlowUnit):
    # Derived from modelbox.FlowUnit
    def __init__(self):
        super().__init__()

    def open(self, config):
        # Open the flowunit to obtain configuration information
        self.params = {}
        self.params['num_classes'] = config.get_int('num_classes')

        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def process(self, data_context):
        # Process the data
        in_feat = data_context.input("in_feat")
        out_data = data_context.output("out_data")

        # Post-processing: take the argmax over the feature vector and
        # return the class index and score as a JSON string.
        for buffer_feat in in_feat:
            feat_data = np.array(buffer_feat.as_object(), copy=False)
            clsse = np.argmax(feat_data).astype(np.int32).item()
            score = feat_data[clsse].astype(np.float32).item()
            result = {"clsse": clsse, "score":score}
            result_str = json.dumps(result)
            out_buffer = modelbox.Buffer(self.get_bind_device(), result_str)
            out_data.push_back(out_buffer)

        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def close(self):
        # Close the flowunit
        return modelbox.Status()

    def data_pre(self, data_context):
        # Before streaming data starts
        return modelbox.Status()

    def data_post(self, data_context):
        # After streaming data ends
        return modelbox.Status()

    def data_group_pre(self, data_context):
        # Before all streaming data starts
        return modelbox.Status()

    def data_group_post(self, data_context):
        # After all streaming data ends
        return modelbox.Status()
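
Note that process() simply takes the argmax of the feature vector and reports the value at that index as the score; the example responses later in this article show scores between 0 and 1, which suggests the exported model already ends with a softmax. If your model instead outputs raw logits, a softmax can be applied first, for example with this numpy-only sketch:

# Sketch: convert raw logits to probabilities before argmax, in case the
# exported model does not already include a softmax layer.
def softmax(logits):
    exp = np.exp(logits - np.max(logits))  # subtract the max for numerical stability
    return exp / np.sum(exp)

# feat_data = softmax(feat_data)  # would go right before np.argmax(feat_data) in process()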

4. Modify the Application Flow Graph

The flow graphs are stored in the graph directory of the InceptionV3 project; the default flow graph InceptionV3.toml shares its name with the project:

# Copyright (C) 2020 Huawei Technologies Co., Ltd. All rights reserved.

[driver]
dir = ["${HILENS_APP_ROOT}/etc/flowunit",
"${HILENS_APP_ROOT}/etc/flowunit/cpp",
"${HILENS_APP_ROOT}/model",
"${HILENS_MB_SDK_PATH}/flowunit"]
skip-default = true
[profile]
profile=false
trace=false
dir="${HILENS_DATA_DIR}/mb_profile"
[graph]
format = "graphviz"
graphconf = """digraph InceptionV3 {
    node [shape=Mrecord]
    queue_size = 4
    batch_size = 1
    input1[type=input,flowunit=input,device=cpu,deviceid=0]
    httpserver_sync_receive[type=flowunit, flowunit=httpserver_sync_receive_v2, device=cpu, deviceid=0, time_out_ms=5000, endpoint="http://0.0.0.0:1234/v1/InceptionV3", max_requests=100]
    image_decoder[type=flowunit, flowunit=image_decoder, device=cpu, key="image_base64", queue_size=4]
    image_resize[type=flowunit, flowunit=resize, device=cpu, deviceid=0, image_width=224, image_height=224]
    normalize[type=flowunit, flowunit=normalize, device=cpu, deviceid=0, standard_deviation_inverse="0.003921568627450,0.003921568627450,0.003921568627450"]
    inceptionv3_infer[type=flowunit, flowunit=inceptionv3_infer, device=cpu, deviceid=0, batch_size=1]
    inceptionv3_post[type=flowunit, flowunit=inceptionv3_post, device=cpu, deviceid=0]
    httpserver_sync_reply[type=flowunit, flowunit=httpserver_sync_reply_v2, device=cpu, deviceid=0]
    
    input1:input -> httpserver_sync_receive:in_url
    httpserver_sync_receive:out_request_info -> image_decoder:in_encoded_image
    image_decoder:out_image -> image_resize:in_image
    image_resize:out_image -> normalize:in_data
    normalize:out_data -> inceptionv3_infer:Input
    inceptionv3_infer:Output -> inceptionv3_post:in_feat
    inceptionv3_post:out_data -> httpserver_sync_reply:in_reply_info
}"""
[flow]
desc = "InceptionV3 run in modelbox-win10-x64"

Run .\create.bat -t editor from the command line to open the ModelBox visual graph editor, where you can modify and view the project's flow graph in real time:

PS D:\modelbox-win10-x64-1.5.3> .\create.bat -t editor

5. Run the Application

In the InceptionV3 project directory, execute .\bin\main.bat to run the application:

PS D:\modelbox-win10-x64-1.5.3> cd D:\modelbox-win10-x64-1.5.3\workspace\InceptionV3
PS D:\modelbox-win10-x64-1.5.3\workspace\InceptionV3> .\bin\main.bat

Create a test_http.py test script in the data directory of the InceptionV3 project:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.

import os
import cv2
import json
import base64
import http.client


class HttpConfig:
    '''Parameter configuration for the HTTP call'''
    def __init__(self, host_ip, port, url, img_base64_str):
        self.hostIP = host_ip
        self.Port = port

        self.httpMethod = "POST"
        self.requstURL = url
        self.headerdata = {
            "Content-Type": "application/json"
        }
        self.test_data = {
            "image_base64": img_base64_str
        }
        self.body = json.dumps(self.test_data)


def read_image(img_path):
    '''Read the image and convert it to a base64-encoded string'''
    img_data = cv2.imread(img_path)
    img_data = cv2.cvtColor(img_data, cv2.COLOR_BGR2RGB)
    img_str = cv2.imencode('.jpg', img_data)[1].tobytes()
    img_bin = base64.b64encode(img_str)
    img_base64_str = str(img_bin, encoding='utf8')
    return img_data, img_base64_str


def decode_result_str(result_str):
    try:
        result = json.loads(result_str)
    except Exception as ex:
        print(str(ex))
        return []
    else:
        return result


labels = ['antelope', 'badger', 'bat', 'bear', 'bee', 'beetle', 'bison',
          'boar', 'butterfly', 'cat', 'caterpillar', 'chimpanzee',
          'cockroach', 'cow', 'coyote', 'crab', 'crow', 'deer', 'dog',
          'dolphin', 'donkey', 'dragonfly', 'duck', 'eagle', 'elephant',
          'flamingo', 'fly', 'fox', 'goat', 'goldfish', 'goose', 'gorilla',
          'grasshopper', 'hamster', 'hare', 'hedgehog', 'hippopotamus',
          'hornbill', 'horse', 'hummingbird', 'hyena', 'jellyfish',
          'kangaroo', 'koala', 'ladybugs', 'leopard', 'lion', 'lizard',
          'lobster', 'mosquito', 'moth', 'mouse', 'octopus', 'okapi',
          'orangutan', 'otter', 'owl', 'ox', 'oyster', 'panda', 'parrot',
          'pelecaniformes', 'penguin', 'pig', 'pigeon', 'porcupine',
          'possum', 'raccoon', 'rat', 'reindeer', 'rhinoceros', 'sandpiper',
          'seahorse', 'seal', 'shark', 'sheep', 'snake', 'sparrow', 'squid',
          'squirrel', 'starfish', 'swan', 'tiger', 'turkey', 'turtle',
          'whale', 'wolf', 'wombat', 'woodpecker', 'zebra']


def test_image(img_path, ip, port, url):
    '''Test a single image'''
    img_data, img_base64_str = read_image(img_path)
    http_config = HttpConfig(ip, port, url, img_base64_str)

    conn = http.client.HTTPConnection(host=http_config.hostIP, port=http_config.Port)
    conn.request(method=http_config.httpMethod, url=http_config.requstURL,
                 body=http_config.body, headers=http_config.headerdata)

    response = conn.getresponse().read().decode()
    print('response: ', response)

    result = decode_result_str(response)
    if not result:
        return
    clsse, score = result["clsse"], result["score"]
    result_str = f"{labels[clsse]}:{round(score, 2)}"
    cv2.putText(img_data, result_str, (0, 100), cv2.FONT_HERSHEY_TRIPLEX, 4, (0, 255, 0), 2)
    cv2.imwrite('./result-' + os.path.basename(img_path), img_data[..., ::-1])  # flip RGB back to BGR for saving


if __name__ == "__main__":
    port = 1234
    ip = "127.0.0.1"
    url = "/v1/InceptionV3"

    img_folder = './test_imgs'
    file_list = os.listdir(img_folder)
    for img_file in file_list:
        print("\n================ {} ================".format(img_file))
        img_path = os.path.join(img_folder, img_file)
        test_image(img_path, ip, port, url)

Create a test_imgs folder in the data directory of the InceptionV3 project to hold the test images.

In another terminal, go to the data folder of the InceptionV3 project directory and run the test_http.py script to send HTTP test requests:

PS D:\modelbox-win10-x64-1.5.3> cd D:\modelbox-win10-x64-1.5.3\workspace\InceptionV3\data
PS D:\modelbox-win10-x64-1.5.3\workspace\InceptionV3\data> D:\modelbox-win10-x64-1.5.3\python-embed\python.exe .\test_http.py

================ 61cf5127ce.jpg ================
response:  {"clsse": 63, "score": 0.9996486902236938}

================ 7e2a453559.jpg ================
response:  {"clsse": 81, "score": 0.999880313873291}

The inference results for the test images can then be viewed in the data directory of the InceptionV3 project.

III. Summary

This article showed how to use ModelArts and ModelBox to train and develop an InceptionV3 animal image classification AI application; with just the model file and some simple configuration we created an HTTP service. Along the way, we covered the basic structure of the InceptionV3 network, the data processing and model training approach, and the logic of the corresponding inference application.
