How to convert a PyTorch model to a TensorFlow model and pb format via ONNX
The Open Neural Network Exchange (ONNX) format is a standard for representing deep learning models that allows models to be moved between different frameworks. Frameworks that currently offer official support for loading ONNX models and running inference include Caffe2, PyTorch, MXNet, ML.NET, TensorRT and Microsoft CNTK, and TensorFlow also supports ONNX unofficially.
The official PyTorch documentation describes this in some detail; the export is implemented mainly through the torch.onnx module.
import torch
from torch.autograd import Variable

# Load the trained model from file (Net is the model class used during training)
trained_model = Net()
trained_model.load_state_dict(torch.load('output/mnist.pth'))

# Export the trained model to ONNX
dummy_input = Variable(torch.randn(1, 1, 28, 28))  # one black and white 28 x 28 picture will be the input to the model
torch.onnx.export(trained_model, dummy_input, "output/mnist.onnx")
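The Net class above is whatever network was trained on MNIST; it is not shown here. Purely as an illustration, a small CNN that accepts the 1 x 1 x 28 x 28 dummy input could look like the following sketch (an assumption, not the model from this post):

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))  # 1x28x28 -> 10x12x12
        x = F.relu(F.max_pool2d(self.conv2(x), 2))  # 10x12x12 -> 20x4x4
        x = x.view(-1, 320)                         # flatten
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)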
If an input dimension can vary at run time (for example the sequence length fed to an LSTM), declare it with the dynamic_axes argument of torch.onnx.export:

torch.onnx.export(trained_model, (input, (h0, c0)), 'lstm.onnx',
                  input_names=['input', 'h0', 'c0'],
                  output_names=['output', 'hn', 'cn'],
                  dynamic_axes={'input': {0: 'sequence'}, 'output': {0: 'sequence'}})
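The snippet above does not show how input, h0 and c0 are built. As a minimal sketch, assuming trained_model is a single-layer nn.LSTM (the sizes below are made up for illustration), the dummy tensors could be created like this:

import torch
import torch.nn as nn

seq_len, batch, feat, hidden = 10, 1, 32, 64   # assumed sizes
trained_model = nn.LSTM(input_size=feat, hidden_size=hidden, num_layers=1)

input = torch.randn(seq_len, batch, feat)      # dim 0 is the variable 'sequence' axis
h0 = torch.randn(1, batch, hidden)             # initial hidden state
c0 = torch.randn(1, batch, hidden)             # initial cell state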
The ONNX project provides a fairly detailed tutorial for this step. The conversion is done mainly through the prepare function in onnx_tf.backend, which lets you run the ONNX model under TensorFlow and then export it as a pb file.
import onnx
from onnx_tf.backend import prepare

# Load the ONNX file
model = onnx.load('output/mnist.onnx')

# Import the ONNX model to Tensorflow
tf_rep = prepare(model)

# Input nodes to the model
print('inputs:', tf_rep.inputs)
# Output nodes from the model
print('outputs:', tf_rep.outputs)
# All nodes in the model
print('tensor_dict:')
print(tf_rep.tensor_dict)

# Run the TensorFlow representation
import numpy as np
from IPython.display import display
from PIL import Image

# Test images
print('Image 1:')
img = Image.open('assets/two.png').resize((28, 28)).convert('L')
display(img)
output = tf_rep.run(np.asarray(img, dtype=np.float32)[np.newaxis, np.newaxis, :, :])
print('The digit is classified as ', np.argmax(output))

print('Image 2:')
img = Image.open('assets/three.png').resize((28, 28)).convert('L')
display(img)
output = tf_rep.run(np.asarray(img, dtype=np.float32)[np.newaxis, np.newaxis, :, :])
print('The digit is classified as ', np.argmax(output))

# Save the TensorFlow model as a pb file
tf_rep.export_graph('output/mnist.pb')
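Depending on the onnx-tf version, export_graph may write a single frozen GraphDef pb file or a SavedModel directory. Assuming the frozen-graph case and TensorFlow 1.x, a minimal sketch for loading the file back looks like this (the tensor names to feed and fetch depend on the converter; tf_rep.inputs and tf_rep.outputs above list them):

import tensorflow as tf

# Read the serialized GraphDef from disk
with tf.gfile.GFile('output/mnist.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and list the operations
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    for op in graph.get_operations():
        print(op.name)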
AdaptiveAvgPool2d needs to be converted to a standard AvgPool2d. The following two formulas convert adaptive pooling into standard max/average pooling so that the model can be used in other deep learning frameworks.
stride = floor(input_size / output_size)
kernel_size = input_size - (output_size - 1) * stride
For example, in the AlphaPose model, replace self.avg_pool = nn.AdaptiveAvgPool2d(1) with:
stride2 = math.floor(input_size[2] / output_size[2])
stride3 = math.floor(input_size[3] / output_size[3])
kernel_size2 = input_size[2] - (output_size[2] - 1) * stride2
kernel_size3 = input_size[3] - (output_size[3] - 1) * stride3
self.avg_pool = nn.AvgPool2d(kernel_size=(kernel_size2, kernel_size3),
                             stride=(stride2, stride3), ceil_mode=False)
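As a quick sanity check, the substitution can be compared against the adaptive layer on a fixed feature-map size; a minimal sketch, assuming a 7 x 7 input and output size 1 (values chosen only for illustration):

import math
import torch
import torch.nn as nn

x = torch.randn(1, 512, 7, 7)                   # assumed fixed feature map
stride = math.floor(x.shape[2] / 1)             # = 7
kernel_size = x.shape[2] - (1 - 1) * stride     # = 7

adaptive = nn.AdaptiveAvgPool2d(1)
fixed = nn.AvgPool2d(kernel_size=kernel_size, stride=stride, ceil_mode=False)
print(torch.allclose(adaptive(x), fixed(x)))    # expected: True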
If the model was trained with a transform, do not forget to apply the same preprocessing to the input data. For example, if the input is normalized and standardized, scale the image to [0, 1] and then standardize it to [-1, 1]:
amin, amax = img_array.min(), img_array.max()   # minimum and maximum pixel values
img_array = (img_array - amin) / (amax - amin)  # normalize to [0, 1]
img_array = (img_array - 0.5) / 0.5             # standardize to [-1, 1]
https://pytorch.org/docs/stable/onnx.html
https://github.com/onnx/tutorials/tree/master/tutorials
https://blog.csdn.net/weixin_43902449/article/details/90515528