His reply:
(II) ModelArts Custom Deployment Environment

1. Prepare the code

1.1 Create a folder named CPU-TF1.13.2-Flask1.1.2.

1.2 In the folder, create server.py:

    from flask import Flask, request
    import tensorflow as tf

    app = Flask(__name__)

    # Health-check endpoint that ModelArts probes to see whether the service is up.
    @app.route('/health')
    def health_check():
        return {"health": "true"}

    # Accepts a multipart/form-data upload and returns the file size plus
    # whether TensorFlow can see a GPU.
    @app.route('/gpu-check', methods=['POST'])
    def gpu_check():
        upload_file = request.files.get('image')
        if upload_file:
            size = len(upload_file.read())
            return {'size': size, 'gpu': tf.test.is_gpu_available()}
        return {'error': 'no file was uploaded under the "image" field'}, 400

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=8080)

1.3 In the folder, create web.sh:

    python3 server.py

1.4 In the folder, create the Dockerfile:

    FROM swr.cn-north-4.myhuaweicloud.com/modelarts-job-dev-image/custom-cpu-base:1.3

    # Switch pip from the Huawei-internal mirror to the Tsinghua mirror.
    RUN sed -i "s@http://repo.myhuaweicloud.com/repository/pypi/simple@https://pypi.tuna.tsinghua.edu.cn/simple@g" /root/.pip/pip.conf

    # Install Miniconda (Python 3.7) into /root/miniconda3.
    RUN mkdir -p /tmp/install && cd /tmp/install && \
        curl -o miniconda.sh -k https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.3-Linux-x86_64.sh && \
        bash miniconda.sh -b

    RUN /root/miniconda3/bin/pip install tensorflow==1.13.2 && \
        /root/miniconda3/bin/pip install boto3==1.7.29 && \
        rm -rf /tmp/install
    RUN /root/miniconda3/bin/pip install flask==1.1.2

    COPY ./ ./
    ENV PATH=/root/miniconda3/bin/:$PATH
    EXPOSE 8080
    ENTRYPOINT ["/bin/bash", "web.sh"]

2. Prepare the image environment, just as in step 3 of (I) ModelArts Custom Training Environment. (A local smoke test you can run before pushing is sketched at the end of this reply.)

2.1 In CPU-TF1.13.2-Flask1.1.2, run:

    docker build -t swr.cn-east-3.myhuaweicloud.com/temp/test-model-arts:CPU-TF1.13.2-Flask1.1.2 .

2.2 In CPU-TF1.13.2-Flask1.1.2, run:

    docker push swr.cn-east-3.myhuaweicloud.com/temp/test-model-arts:CPU-TF1.13.2-Flask1.1.2

If the push fails because you are not logged in, log in to SWR as in step 3.7 of (I).

3. Import the image (model)

3.1 Import the model in ModelArts model management.

3.2 Set the parameters:

Model name: environment-test
Meta model source: select from container image
Container image location: swr.cn-east-3.myhuaweicloud.com/temp/test-model-arts:CPU-TF1.13.2-Flask1.1.2
Deployment type: real-time service
Configuration file: edit online

Enter the following configuration in the editor (see https://support.huaweicloud.com/engineers-modelarts/modelarts_23_0092.html for the full configuration reference):

    {
        "model_type": "Image",
        "model_algorithm": "eson_test",
        "swr_location": "swr.cn-east-3.myhuaweicloud.com/temp/test-model-arts:CPU-TF1.13.2-Flask1.1.2",
        "health": {
            "url": "/health",
            "protocol": "http"
        },
        "apis": [
            {
                "protocol": "http",
                "url": "/gpu-check",
                "method": "post",
                "request": {
                    "Content-type": "multipart/form-data",
                    "data": {
                        "type": "object",
                        "properties": {
                            "image": {"type": "file"}
                        }
                    }
                },
                "response": {
                    "Content-type": "multipart/form-data",
                    "data": {
                        "type": "object",
                        "properties": {
                            "size": {"type": "integer"},
                            "gpu": {"type": "boolean"}
                        }
                    }
                }
            }
        ]
    }

Click Save, then click Create Now.

3.3 Wait for the upload to succeed.

4. Deploy

4.1 Choose Deploy.

4.2 Configure the parameters:

Name: test-1 (any name will do)
Resource pool: public resource pool
Model: environment-test 0.0.1
Compute specification: CPU: 2 vCPUs, 8 GiB
Compute nodes: 1

4.3 Click Next.

5. Verify

5.1 Wait for the deployment to succeed.

5.2 Click Predict and upload a file.

5.3 Choose a file under 8 MB; once it uploads successfully, click Predict. The response shows the uploaded file's size and the GPU check result (false on this CPU-only image). (A programmatic version of this check is sketched at the end of this reply.)

6. Summary

This use case verifies that TensorFlow is usable inside the custom image and that the end-to-end deployment flow works, by checking GPU availability and reporting the size of an uploaded file.
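Before pushing the image (step 2.2), it is worth a quick local smoke test. The following is a minimal sketch, assuming the container was started locally with docker run -p 8080:8080 swr.cn-east-3.myhuaweicloud.com/temp/test-model-arts:CPU-TF1.13.2-Flask1.1.2 and that the requests package is installed on the host; sample.jpg is a hypothetical test file.

    # Minimal local smoke test; assumes the image is running locally via
    # "docker run -p 8080:8080 <image>" and that requests is installed.
    import requests

    BASE = 'http://127.0.0.1:8080'  # local container address (assumption)

    # The health probe should answer with {"health": "true"} and HTTP 200.
    health = requests.get(BASE + '/health')
    print(health.status_code, health.json())

    # Upload a file under the 'image' form field, matching
    # request.files.get('image') in server.py.
    with open('sample.jpg', 'rb') as f:  # hypothetical test file
        resp = requests.post(BASE + '/gpu-check', files={'image': f})

    # Expect {'size': <bytes read>, 'gpu': False} on this CPU-only image.
    print(resp.status_code, resp.json())

Passing files={'image': f} makes requests send multipart/form-data, which is exactly what the /gpu-check API declares in the configuration file above.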
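Once the service is deployed (step 5), the same check can be driven from code instead of the console's Predict button. This is only a sketch: the service URL must be copied from the deployed service's usage page, and authentication is assumed to go through an IAM token in the X-Auth-Token header (ModelArts also supports AK/SK signing); both values below are placeholders, not values from this walkthrough.

    # Sketch of calling the deployed real-time service directly; the URL and
    # token are placeholders that must be replaced with your own values.
    import requests

    SERVICE_URL = 'https://<apig-endpoint>/<service-path>/gpu-check'  # placeholder
    TOKEN = '<your-iam-token>'                                        # placeholder

    with open('sample.jpg', 'rb') as f:  # any file under 8 MB
        resp = requests.post(
            SERVICE_URL,
            headers={'X-Auth-Token': TOKEN},  # token-based auth (assumption)
            files={'image': f},
        )
    print(resp.status_code, resp.text)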