Building a MindSpore TBE Operator with MindStudio (Part 4): Operator Testing (ST Test on the Orange Pi AI Pro -- Failed)

By 塞恩斯, posted 2025/02/15 17:28:47
[Abstract] Building a MindSpore TBE Operator with MindStudio (Part 4): Operator Testing (ST Test on the Orange Pi AI Pro -- Failed)

In the previous two attempts, using VMware and a ModelArts 910B chip respectively, the ST test could not be completed; for details see:
Building a MindSpore TBE Operator with MindStudio (Part 3): Operator Testing (ST Test)
Building a MindSpore TBE Operator with MindStudio (Part 4): Operator Testing (ST Test on Ascend 910B/ModelArts) -- Failed Attempt
Here I regroup and try to run the test on the Orange Pi.

1. Environment Preparation

1.1 Configuring the CANN Environment

Perform this installation as a regular user; installing as root can cause numerous permission errors later at execution time.

wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-toolkit_8.0.0_linux-aarch64.run
chmod +x Ascend-cann-toolkit_8.0.0_linux-aarch64.run
./Ascend-cann-toolkit_8.0.0_linux-aarch64.run --install

1.2 Setting Up the MindSpore Environment on the AI Pro

For reference: Setting up a MindSpore environment on the AI pro

Then install the supplementary kernel (operator binary) package:

wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-kernels-310b_8.0.0_linux-aarch64.run
chmod +x Ascend-cann-kernels-310b_8.0.0_linux-aarch64.run
./Ascend-cann-kernels-310b_8.0.0_linux-aarch64.run --install

Once the installation finishes with a success message, continue setting up the MindSpore environment:

conda create -n mindspore python=3.9 -y
conda activate mindspore
pip install --upgrade pip setuptools
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.5.0/MindSpore/unified/aarch64/mindspore-2.5.0-cp39-cp39-linux_aarch64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install ml-dtypes cloudpickle decorator attrs sympy 

pip may report that some package dependency constraints are unresolved; install any missing packages as they come up. You can then verify with the Python interpreter: if `import mindspore` raises no errors, the dependencies are essentially in place.

(mindspore) HwHiAiUser@orangepiaipro:~/MyAscend$ python
Python 3.9.21 (main, Dec 11 2024, 16:27:47) 
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mindspore
[INFO] RUNTIME(224331,python):2025-02-13-13:41:11.437.848 [task_fail_callback_manager.cc:52] 224331 TaskFailCallBackManager: Constructor.
[INFO] HCCL(224331,python):2025-02-13-13:41:11.513.351 [adapter_rts.cc:2646][224331][adapter_rts.cc][CallBackInitRts] g_deviceType [6] g_deviceLogicId [-1] g_devicePhyId [-1]
[EVENT] PROFILING(224331,python):2025-02-13-13:41:11.947.471 [msprof_callback_impl.cpp:336] >>> (tid:224331) Started to register profiling ctrl callback.
[EVENT] PROFILING(224331,python):2025-02-13-13:41:11.948.105 [msprof_callback_impl.cpp:343] >>> (tid:224331) Started to register profiling hash id callback.
[INFO] PROFILING(224331,python):2025-02-13-13:41:11.948.297 [prof_atls_plugin.cpp:117] (tid:224331) RegisterProfileCallback, callback type is 7
[EVENT] PROFILING(224331,python):2025-02-13-13:41:11.948.446 [msprof_callback_impl.cpp:350] >>> (tid:224331) Started to register profiling enable host freq callback.
[INFO] PROFILING(224331,python):2025-02-13-13:41:11.948.572 [prof_atls_plugin.cpp:117] (tid:224331) RegisterProfileCallback, callback type is 8
[INFO] RUNTIME(224331,python):2025-02-13-13:41:12.436.374 [runtime.cc:5471] 224331 GetVisibleDevices: ASCEND_RT_VISIBLE_DEVICES param was not set
[INFO] PROFILING(224331,python):2025-02-13-13:41:12.437.265 [prof_atls_plugin.cpp:210] (tid:224331) Module[7] register callback of ctrl handle.
/home/HwHiAiUser/.conda/envs/mindspore/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/HwHiAiUser/.conda/envs/mindspore/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
/home/HwHiAiUser/.conda/envs/mindspore/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/HwHiAiUser/.conda/envs/mindspore/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
>>> 
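The import check above can also be scripted so that all of the listed dependencies are verified in one pass. This is a minimal sketch; the helper name `missing_packages` is mine, not part of any toolkit:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Module names corresponding to the pip packages installed above.
deps = ["mindspore", "ml_dtypes", "cloudpickle", "decorator", "attrs", "sympy"]
print(missing_packages(deps))  # an empty list means all dependencies resolve
```

`find_spec` only probes the import machinery, so this check is fast and does not trigger the device-side log output shown in the interpreter session above.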

Install MindStudio locally on the AI Pro:

wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/MindStudio/MindStudio%207.0.0/MindStudio_7.0.0_linux_aarch64.tar.gz
tar -xzvf MindStudio_7.0.0_linux_aarch64.tar.gz

After extraction, start MindStudio and add the AI Pro connection information in the configuration.

The configuration takes about five minutes to complete. From there, the operator-creation steps can be repeated as in the earlier article: Building a MindSpore TBE Operator with MindStudio (Part 1): Operator Creation.

1.3 Configuring the Remote Python Interpreter

Open Settings in the upper-right corner and configure the SDK under Project Structure.

Configure the Python interpreter at the corresponding path here. Although the interpreter's packages are not resolved in the UI, this does not affect execution.

2. Code Preparation

Load the prepared code into the impl file; if any function name in the calls is wrong, adjust it to match your implementation.

from __future__ import absolute_import
from tbe import tvm
import tbe.dsl as tbe
from tbe.common.register import register_op_compute
from tbe.common.utils import shape_refine
from tbe.common.utils import shape_util
from tbe.common.utils import para_check
from functools import reduce
from mindspore.ops.op_info_register import op_info_register, TBERegOp, DataType

SHAPE_SIZE_LIMIT = 2147483648
@register_op_compute("add_custom")
def add_custom_compute(x, y):
    """
    The compute function of the AddCustom implementation.
    """
    # convert the shapes to lists
    shape_x = shape_util.shape_to_list(x.shape)
    shape_y = shape_util.shape_to_list(y.shape)

    # shape_max takes the larger of each dimension of shape_x and shape_y
    shape_x, shape_y, shape_max = shape_util.broadcast_shapes(shape_x, shape_y,
                                                              param_name_input1="input_x",
                                                              param_name_input2="input_y")
    shape_size = reduce(lambda x, y: x * y, shape_max[:])
    if shape_size > SHAPE_SIZE_LIMIT:
        raise RuntimeError("the shape is too large to calculate")

    # broadcast input_x and input_y to shape_max
    input_x = tbe.broadcast(x, shape_max)
    input_y = tbe.broadcast(y, shape_max)

    # compute input_x + input_y
    res = tbe.vadd(input_x, input_y)

    return res
# Define the kernel info of AddCustom.
add_custom_op_info = TBERegOp("AddCustom") \
    .fusion_type("OPAQUE") \
    .partial_flag(True) \
    .async_flag(False) \
    .binfile_name("add_custom.so") \
    .compute_cost(10) \
    .kernel_name("add_custom_impl") \
    .input(0, "x", False, "required", "all")\
    .input(1, "y", False, "required", "all")\
    .output(0, "z", False, "required", "all")\
    .dtype_format(DataType.F16_Default, DataType.F16_Default, DataType.F16_Default)\
    .get_op_info()
# Binding kernel info with the kernel implementation.
@op_info_register(add_custom_op_info)
def add_custom_impl(x, y, z, kernel_name="add_custom_impl"):
    """
    The entry function of the AddCustom implementation.
    """
    """
    The entry function of the Addcustom implementation.
    """
    # 获取算子输入tensor的shape与dtype
    shape_x = x.get("shape")
    shape_y = y.get("shape")

    # validate the input dtype
    check_tuple = ("float16",)
    input_data_type = x.get("dtype").lower()
    para_check.check_dtype(input_data_type, check_tuple, param_name="input_x")

    # shape_max takes the larger of each dimension of shape_x and shape_y
    shape_x, shape_y, shape_max = shape_util.broadcast_shapes(shape_x, shape_y,
                                                              param_name_input1="x",
                                                              param_name_input2="y")
    # If the shapes have length 1, use them as is; otherwise drop a trailing
    # dimension of size 1. In memory, a trailing dimension of 1 has the same
    # data layout as no trailing dimension at all (e.g. 2*3 == 2*3*1), so
    # discarding it improves later scheduling efficiency.
    if shape_x[-1] == 1 and shape_y[-1] == 1 and shape_max[-1] == 1:
        shape_x = shape_x if len(shape_x) == 1 else shape_x[:-1]
        shape_y = shape_y if len(shape_y) == 1 else shape_y[:-1]
        shape_max = shape_max if len(shape_max) == 1 else shape_max[:-1]

    # use TVM's placeholder interface to create placeholder tensor objects for the inputs
    data_x = tvm.placeholder(shape_x, name="data_1", dtype=input_data_type)
    data_y = tvm.placeholder(shape_y, name="data_2", dtype=input_data_type)

    with tvm.target.cce():
        # computation
        res = add_custom_compute(data_x, data_y)
        # auto-scheduling
        sch = tbe.auto_schedule(res)
    # build configuration
    config = {"print_ir": False,
              "name": kernel_name,
              "tensor_list": [data_x, data_y, res]}

    tbe.build(sch, config)

# Invoke the operator to test computational correctness.
if __name__ == '__main__':
    input_output_dict = {"shape": (5, 6, 7), "format": "ND", "ori_shape": (5, 6, 7), "ori_format": "ND", "dtype": "float16"}
    add_custom_impl(input_output_dict, input_output_dict, input_output_dict, kernel_name="add")
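The shape handling inside `add_custom_impl` is easy to check without a CANN installation. The sketch below re-implements the three shape steps in plain Python (broadcast to `shape_max`, size-limit check, trailing-1 trim); the helper names are mine and only mirror, not replace, the `shape_util` APIs:

```python
from functools import reduce

SHAPE_SIZE_LIMIT = 2147483648

def broadcast_shapes(shape_x, shape_y):
    """NumPy-style broadcast: align from the right; dims must match or be 1."""
    nx, ny = list(shape_x), list(shape_y)
    rank = max(len(nx), len(ny))
    nx = [1] * (rank - len(nx)) + nx
    ny = [1] * (rank - len(ny)) + ny
    shape_max = []
    for a, b in zip(nx, ny):
        if a != b and 1 not in (a, b):
            raise RuntimeError(f"cannot broadcast {a} against {b}")
        shape_max.append(max(a, b))
    if reduce(lambda p, q: p * q, shape_max) > SHAPE_SIZE_LIMIT:
        raise RuntimeError("the shape is too large to calculate")
    return nx, ny, shape_max

def trim_trailing_one(shape_x, shape_y, shape_max):
    """Drop a shared trailing dim of 1 (2*3*1 lays out like 2*3), keeping rank >= 1."""
    if shape_x[-1] == 1 and shape_y[-1] == 1 and shape_max[-1] == 1:
        shape_x = shape_x if len(shape_x) == 1 else shape_x[:-1]
        shape_y = shape_y if len(shape_y) == 1 else shape_y[:-1]
        shape_max = shape_max if len(shape_max) == 1 else shape_max[:-1]
    return shape_x, shape_y, shape_max

print(broadcast_shapes((5, 6, 7), (5, 6, 7)))   # → ([5, 6, 7], [5, 6, 7], [5, 6, 7])
print(trim_trailing_one([2, 3, 1], [2, 3, 1], [2, 3, 1]))  # → ([2, 3], [2, 3], [2, 3])
```

Running this against a few shapes is a cheap way to confirm the `(5, 6, 7)` self-test case in `__main__` goes down the "no trimming" path.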

3. ST Test


I ran into trouble here: the JSON dialog for creating the ST test case never popped up remotely, so I had to use the MindStudio instance installed locally on the AI Pro. If the dialog opens normally for you, just continue working remotely.

3.1 Pinning Down the Problem

Go to the MindStudio installation path and start MindStudio:

cd bin
bash MindStudio.sh

At first I was baffled: why was the correct MindSpore not recognized when I had clearly installed it? Then I recalled the earlier setup step: the CANN kernel package has to be installed from within the mindspore environment. After uninstalling the kernel package, switching into the mindspore virtual Python environment, and reinstalling it there, the problem was resolved.

3.2 Running the ST Operator Test

  1. With the environment ready, you can now create a new ST case.

  2. Configure the toolchain.

  3. Add a new configuration and install the required supporting software; here the system turned out to be missing gdb.

  4. Install gdb:

    sudo apt install gdb -y
    


  5. Then select the corresponding system interpreter and execute the run tool.

  6. Start the test.
    If a dependency is missing, install it as needed:

pip install pytest
  7. The test fails with an adapter error.
    At this point, the path to developing CANN operators on the AI Pro is blocked; the only remaining option is probably to find a 910 server with the necessary permissions to continue this work.
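Even though the ST harness failed here, the operator's numerics can still be sanity-checked against a CPU golden function of the kind ST cases compare with. This is a sketch: `golden_add` is my name for it, and the `(5, 6, 7)` float16 shape simply matches the `__main__` self-test call in the operator code above:

```python
import numpy as np

def golden_add(x, y):
    # Expected AddCustom output: elementwise sum in float16, the only
    # dtype/format combination registered in add_custom_op_info.
    return (x.astype(np.float16) + y.astype(np.float16)).astype(np.float16)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 6, 7)).astype(np.float16)
y = rng.standard_normal((5, 6, 7)).astype(np.float16)
z = golden_add(x, y)
print(z.shape, z.dtype)  # (5, 6, 7) float16
```

If the ST test can later be run on a machine with working device support, comparing the device output against this golden array (e.g. with `np.allclose` and a float16-appropriate tolerance) is exactly what the generated test case does internally.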
[Disclaimer] This content comes from a Huawei Cloud developer community blogger and does not represent the views or positions of Huawei Cloud or the Huawei Cloud developer community. Reposts must credit the source (Huawei Cloud community) along with the article link and author; otherwise the author and the community reserve the right to pursue liability. To report suspected plagiarism in this community, send an email with supporting evidence to cloudbbs@huaweicloud.com; confirmed infringing content will be removed immediately.