Running MindSpore on an Android Phone

GeekHee, posted 2021/02/07 10:22:42
[Abstract] Install and run MindSpore on an Android phone, train LeNet, and export a MindIR model.

(This post is pure tinkering with no practical value. Please don't imitate it.)


1. Installing MindSpore

Since Android is itself based on the Linux kernel, all you really need is a shell environment such as Termux.

However, Termux ships Python 3.9, which cannot install MindSpore (MindSpore's Python version requirement is really strict), so I used Aid-Learning instead, which comes with a Python 3.7 environment.
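MindSpore releases of that era shipped cp37 wheels, so pip rejects any interpreter other than CPython 3.7 (the official docs pinned 3.7.5). A tiny illustrative check of that constraint; the helper name here is made up, and the real gate is the wheel's cp37 compatibility tag, not Python code:

```python
import sys

def mindspore_python_ok(version_info=sys.version_info):
    """Roughly mirror the cp37 wheel tag: only CPython 3.7.x can install it."""
    return tuple(version_info[:2]) == (3, 7)

print(mindspore_python_ok((3, 9, 0)))   # Termux's Python 3.9 -> False
print(mindspore_python_ok((3, 7, 5)))   # Aid-Learning's Python 3.7 -> True
```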

Simply copy the install command for the arm64 CPU build from the official website.

[Screenshot: Screenshot_2021-02-07-07-58-30.png]

Nearly every package has to be compiled from source, so installation takes roughly 90-120 minutes (pity my potato SoC).

[Screenshot: Screenshot_2021-02-07-09-25-51.png]

Verify... ignoring the warnings, the installation succeeded.

2. Training LeNet

Clone the MindSpore repository and pull down the MNIST dataset.

Start training.

[Screenshot: Screenshot_2021-02-07-08-06-54.png]

About an hour later, check the log:

root@localhost:/home/.../lenet# cat log.txt
WARNING: 'ControlDepend' is deprecated from version 1.1 and will be removed in a future version, use 'Depend' instead.
============== Starting Training ==============
epoch: 1 step: 1, loss is 2.30258
epoch: 1 step: 2, loss is 2.3024673
epoch: 1 step: 3, loss is 2.302711
epoch: 1 step: 4, loss is 2.3024042
epoch: 1 step: 5, loss is 2.301996
epoch: 1 step: 6, loss is 2.3015108
epoch: 1 step: 7, loss is 2.303113
epoch: 1 step: 8, loss is 2.3020747
epoch: 1 step: 9, loss is 2.3019657
epoch: 1 step: 10, loss is 2.3010747
..........................................
epoch: 10 step: 1866, loss is 0.00077541405
epoch: 10 step: 1867, loss is 6.524474e-05
epoch: 10 step: 1868, loss is 0.00012913258
epoch: 10 step: 1869, loss is 0.0006541713
epoch: 10 step: 1870, loss is 0.07394992
epoch: 10 step: 1871, loss is 0.006976784
epoch: 10 step: 1872, loss is 0.13376503
epoch: 10 step: 1873, loss is 0.0028693418
epoch: 10 step: 1874, loss is 0.0051945336
epoch: 10 step: 1875, loss is 0.0093629705
epoch time: 186157.063 ms, per step time: 99.284 ms
1949.9983580112457

Not bad: 1950 s in total, i.e. 32 min 30 s.

Half an hour. Slower than running on an x86 CPU by... just a *teeny* (astronomical) bit, but at least 50% faster than the hour I had expected (tongue firmly in cheek).
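The log's numbers are easy to cross-check with plain arithmetic (no MindSpore needed): 1875 steps per epoch at ~99 ms each gives the reported epoch time, and the printed total works out to about 32.5 minutes.

```python
# Numbers taken from the training log above.
per_step_ms = 99.284          # reported per-step time, last epoch
steps_per_epoch = 1875        # MNIST train: 60000 images / batch size 32
total_s = 1949.9983580112457  # total wall time printed at the end

# One epoch at ~99 ms/step:
epoch_s = per_step_ms * steps_per_epoch / 1000
print(round(epoch_s, 1))      # ~186.2 s, matching "epoch time: 186157.063 ms"

# Total time in minutes and seconds:
m, s = divmod(int(total_s), 60)
print(f"{m} min {s} s")       # 32 min 29 s, i.e. roughly 32.5 minutes
```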

3. Testing and Exporting the MindIR Model

[Screenshot: Screenshot_2021-02-07-08-20-53.png]

Test accuracy... 0.98, about what you'd expect.

Since I've come this far, let's repeat the exercise of exporting a MindIR model.

I borrowed a snippet of code:

import numpy as np
import mindspore.nn as nn
from mindspore.common.initializer import Normal
from mindspore import Tensor, export, load_checkpoint, load_param_into_net

class LeNet5(nn.Cell):
    """Classic LeNet-5: two conv/pool stages followed by three dense layers."""
    def __init__(self, num_class=10, num_channel=1, include_top=True):
        super(LeNet5, self).__init__()
        self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
        self.relu = nn.ReLU()
        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
        self.include_top = include_top
        if self.include_top:
            self.flatten = nn.Flatten()
            self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
            self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
            self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))

    def construct(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        x = self.max_pool2d(x)
        x = self.conv2(x)
        x = self.relu(x)
        x = self.max_pool2d(x)
        if not self.include_top:
            return x
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x

lenet = LeNet5()
# Load the trained checkpoint into the network
param_dict = load_checkpoint("./model/.checkpoint_lenet-10_1875.ckpt")
load_param_into_net(lenet, param_dict)
# Dummy input with the training batch shape (N, C, H, W)
input_data = np.random.uniform(0.0, 1.0, size=[32, 1, 32, 32]).astype(np.float32)
# Export to MindIR
export(lenet, Tensor(input_data), file_name='lenet.mindir', file_format='MINDIR')

print('Done')
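As a side note, the `16 * 5 * 5` input size of `fc1` follows from the 32x32 input: each 'valid' 5x5 convolution trims 4 pixels off the side, and each 2x2 max-pool halves it. A quick check of that arithmetic in plain Python:

```python
def lenet_feature_side(side=32):
    """Spatial side length after LeNet-5's two conv(5x5, valid) + maxpool(2x2) stages."""
    side = (side - 4) // 2   # conv1: 32 -> 28, pool: 28 -> 14
    side = (side - 4) // 2   # conv2: 14 -> 10, pool: 10 -> 5
    return side

side = lenet_feature_side()
print(16 * side * side)  # 400 == 16 * 5 * 5, the fc1 input size above
```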

[Screenshot: Screenshot_2021-02-07-08-41-10.png]

The MindIR model was exported successfully.

4. Further Tinkering

With all this done, the phone could have gone back to gathering dust.

But...

[Screenshot: Screenshot_2021-02-07-08-45-39.png]

[Screenshot: Screenshot_2021-02-07-08-52-23.png]

As expected, both the Lite and the cache attempts failed.

Finally...

The phone used in this post runs a Snapdragon 630 SoC, a legendary... piece of junk, which is why installation and training took so long.

So...

Huawei, how about sending me a Kirin 9000 to run this on...

[Copyright notice] This article is original content by a Huawei Cloud community user. When reprinting, you must credit the source (Huawei Cloud community) along with the article link, author, and other basic information; otherwise the author and the community reserve the right to pursue liability. If you find suspected plagiarism in this community, please report it by email with supporting evidence; confirmed infringing content will be removed immediately. Report email: cloudbbs@huaweicloud.com