/// Copyright (c) 2022-2023 Arm Ltd and Contributors. All rights reserved.
///
/// SPDX-License-Identifier: MIT
///

namespace armnn
{
/**
@page parsers Parsers

@tableofcontents
Execute models from different machine learning platforms efficiently with our parsers. Simply choose a parser according
to the model you want to run, e.g. if you have a model in ONNX format (<model_name>.onnx), use our ONNX parser.

If you would like to run a TensorFlow Lite (TfLite) model, you probably also want to take a look at our @ref delegate.

All parsers are written in C++, but it is also possible to use them in Python. For more information on our Python
bindings, take a look at the @ref md_python_pyarmnn_README section.

<br/><br/>




@section S5_onnx_parser Arm NN ONNX Parser

`armnnOnnxParser` is a library for loading neural networks defined in ONNX protobuf files into the Arm NN runtime.

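To give a rough idea of how the parser is used from C++, the sketch below parses a model, optimizes the resulting
network and loads it into the runtime. The model path ("example.onnx"), the tensor names ("input", "output") and the
backend preferences are placeholders that depend on your model and platform, and error handling is omitted.

@code{.cpp}
#include <armnn/ArmNN.hpp>
#include <armnnOnnxParser/IOnnxParser.hpp>

int main()
{
    // Parse the ONNX protobuf file into an armnn::INetwork ("example.onnx" is a placeholder path).
    armnnOnnxParser::IOnnxParserPtr parser  = armnnOnnxParser::IOnnxParser::Create();
    armnn::INetworkPtr              network = parser->CreateNetworkFromBinaryFile("example.onnx");

    // Query binding information (layer id and tensor info) by tensor name; the names depend on the model.
    auto inputBindingInfo  = parser->GetNetworkInputBindingInfo("input");
    auto outputBindingInfo = parser->GetNetworkOutputBindingInfo("output");

    // Optimize the network for the preferred backends and load it into the runtime.
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());
    armnn::IOptimizedNetworkPtr optimizedNetwork =
        armnn::Optimize(*network, {armnn::Compute::CpuAcc, armnn::Compute::CpuRef}, runtime->GetDeviceSpec());

    armnn::NetworkId networkId = 0;
    runtime->LoadNetwork(networkId, std::move(optimizedNetwork));

    // Inference is then run with runtime->EnqueueWorkload(networkId, inputTensors, outputTensors).
    return 0;
}
@endcode
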
## ONNX operators that the Arm NN SDK supports

This reference guide provides a list of ONNX operators the Arm NN SDK currently supports.

The Arm NN SDK ONNX parser currently only supports fp32 operators.

### Fully supported

- Add
  - See the ONNX [Add documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Add) for more information.

- AveragePool
  - See the ONNX [AveragePool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#AveragePool) for more information.

- Concat
  - See the ONNX [Concat documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Concat) for more information.

- Constant
  - See the ONNX [Constant documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Constant) for more information.

- Clip
  - See the ONNX [Clip documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Clip) for more information.

- Flatten
  - See the ONNX [Flatten documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Flatten) for more information.

- Gather
  - See the ONNX [Gather documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Gather) for more information.

- GlobalAveragePool
  - See the ONNX [GlobalAveragePool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#GlobalAveragePool) for more information.

- LeakyRelu
  - See the ONNX [LeakyRelu documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#LeakyRelu) for more information.

- MaxPool
  - See the ONNX [MaxPool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#MaxPool) for more information.

- Relu
  - See the ONNX [Relu documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Relu) for more information.

- Reshape
  - See the ONNX [Reshape documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Reshape) for more information.

- Shape
  - See the ONNX [Shape documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Shape) for more information.

- Sigmoid
  - See the ONNX [Sigmoid documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Sigmoid) for more information.

- Tanh
  - See the ONNX [Tanh documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Tanh) for more information.

- Unsqueeze
  - See the ONNX [Unsqueeze documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Unsqueeze) for more information.

### Partially supported

- Conv
  - The parser only supports 2D convolutions with group = 1 or group equal to the number of input channels (depthwise convolution). See the ONNX [Conv documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv) for more information.
- BatchNormalization
  - The parser does not support training mode. See the ONNX [BatchNormalization documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#BatchNormalization) for more information.
- Gemm
  - The parser only supports constant bias or non-constant bias where bias dimension = 1. See the ONNX [Gemm documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Gemm) for more information.
- MatMul
  - The parser only supports constant weights in a fully connected layer. See the ONNX [MatMul documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#MatMul) for more information.

## Tested networks

Arm tested these operators with the following ONNX fp32 neural networks:
- Mobilenet_v2. See the ONNX [MobileNet documentation](https://github.com/onnx/models/tree/master/vision/classification/mobilenet) for more information.
- Simple MNIST. This is no longer directly documented by ONNX. The model and test data may be downloaded [from the ONNX model zoo](https://onnxzoo.blob.core.windows.net/models/opset_8/mnist/mnist.tar.gz).

More machine learning operators will be supported in future releases.
<br/><br/><br/><br/>




@section S6_tf_lite_parser Arm NN Tf Lite Parser

`armnnTfLiteParser` is a library for loading neural networks defined by TensorFlow Lite FlatBuffers files
into the Arm NN runtime.

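As a rough sketch, the TfLite parser is driven in much the same way as the ONNX parser; the main difference is that
binding information is queried per subgraph. The model path ("model.tflite") below is a placeholder and error handling
is omitted; optimizing and loading the resulting network then follows the same steps shown for the ONNX parser above.

@code{.cpp}
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

#include <cstddef>
#include <string>
#include <vector>

int main()
{
    // Parse the TfLite FlatBuffers file into an armnn::INetwork ("model.tflite" is a placeholder path).
    armnnTfLiteParser::ITfLiteParserPtr parser  = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr                  network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // A TfLite model may contain several subgraphs; most single-model files only use subgraph 0.
    std::size_t subgraphId = 0;
    std::vector<std::string> inputNames  = parser->GetSubgraphInputTensorNames(subgraphId);
    std::vector<std::string> outputNames = parser->GetSubgraphOutputTensorNames(subgraphId);

    // Binding information (layer id and tensor info) for the first input and output tensor of the subgraph.
    auto inputBindingInfo  = parser->GetNetworkInputBindingInfo(subgraphId, inputNames[0]);
    auto outputBindingInfo = parser->GetNetworkOutputBindingInfo(subgraphId, outputNames[0]);

    // The network is then optimized and loaded into the runtime exactly as shown for the ONNX parser above.
    return 0;
}
@endcode
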
## TensorFlow Lite operators that the Arm NN SDK supports

This reference guide provides a list of TensorFlow Lite operators the Arm NN SDK currently supports.

### Fully supported
The Arm NN SDK TensorFlow Lite parser currently supports the following operators:

- ABS
- ADD
- ARG_MAX
- ARG_MIN
- AVERAGE_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- BATCH_TO_SPACE
- CONCATENATION, Supported Fused Activation: RELU, RELU6, TANH, NONE
- CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- CONV_3D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- DEPTH_TO_SPACE
- DEPTHWISE_CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- DEQUANTIZE
- DIV
- ELU
- EQUAL
- EXP
- EXPAND_DIMS
- FLOOR_DIV
- FULLY_CONNECTED, Supported Fused Activation: RELU, RELU6, TANH, NONE
- GATHER
- GATHER_ND
- GREATER
- GREATER_EQUAL
- HARD_SWISH
- LEAKY_RELU
- LESS
- LESS_EQUAL
- LOG
- LOGICAL_NOT
- LOGISTIC
- LOG_SOFTMAX
- L2_NORMALIZATION
- MAX_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- MAXIMUM
- MEAN
- MINIMUM
- MIRROR_PAD
- MUL
- NEG
- NOT_EQUAL
- PACK
- PAD
- PADV2
- PRELU
- QUANTIZE
- RELU
- RELU6
- REDUCE_MAX
- REDUCE_MIN
- REDUCE_PROD
- RESHAPE
- RESIZE_BILINEAR
- RESIZE_NEAREST_NEIGHBOR
- RSQRT
- SHAPE
- SIN
- SLICE
- SOFTMAX
- SPACE_TO_BATCH
- SPACE_TO_DEPTH
- SPLIT
- SPLIT_V
- SQUEEZE
- SQRT
- STRIDED_SLICE
- SUB
- SUM
- TANH
- TRANSPOSE
- TRANSPOSE_CONV
- UNPACK

### Custom Operator
- TFLite_Detection_PostProcess

## Tested networks
Arm tested these operators with the following TensorFlow Lite neural networks:
- [Quantized MobileNet](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz)
- [Quantized SSD MobileNet](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18.tar.gz)
- DeepSpeech v1 converted from [TensorFlow model](https://github.com/mozilla/DeepSpeech/releases/tag/v0.4.1)
- DeepSpeaker
- [DeepLab v3+](https://www.tensorflow.org/lite/models/segmentation/overview)
- FSRCNN
- EfficientNet-lite
- RDN converted from [TensorFlow model](https://github.com/hengchuan/RDN-TensorFlow)
- Quantized RDN (CpuRef)
- [Quantized Inception v3](http://download.tensorflow.org/models/tflite_11_05_08/inception_v3_quant.tgz)
- [Quantized Inception v4](http://download.tensorflow.org/models/inception_v4_299_quant_20181026.tgz) (CpuRef)
- Quantized ResNet v2 50 (CpuRef)
- Quantized Yolo v3 (CpuRef)

More machine learning operators will be supported in future releases.

**/
}
