# Qualcomm AI Engine Direct Backend

Disclaimer: At present, we do not offer any backward compatibility guarantees
for any APIs. We are currently in a development phase, and as such,
we reserve the right to modify interfaces and implementations.

This backend is implemented on top of the
[Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk).
Please follow the [tutorial](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md) to set up the environment, then build and run ExecuTorch models with this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).

A website version of the tutorial is available [here](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html).

## Delegate Options

Please check `generate_qnn_executorch_compiler_spec()` in
[utils.py](./utils/utils.py) for the supported SoCs and inference types.

### Supported Chipsets
- Snapdragon 8 Gen 1
- Snapdragon 8 Gen 1+
- Snapdragon 8 Gen 2
- Snapdragon 8 Gen 3

### Adding More Supported Chipsets
Currently, users cannot add chipset models themselves because the chipset IDs are not accessible to community users. If you have specific chipset models you wish to add, please contact one of the authors listed in the `Code Reviews` section at the bottom of this page.

### Supported Inference Types
- Quantized
- FP16

## Directory Structure

```
backends/qualcomm
├── aot # Code for generating the QNN context binary (AoT Part).
|   ├── wrappers # Wrappers around QNN data structures for ease of use.
|   └── python # Python interface for using QNN libraries.
├── builders # Code for lowering each operator (AoT Part).
├── partition # QNN Partitioner (AoT Part).
├── _passes # Various private passes that help lower models to the QNN backend (AoT Part).
├── python # Pybind artifacts for accessing QNN APIs, structures, etc. (AoT Part).
├── quantizer # QNN Quantizer.
├── runtime # The QNN runtime, responsible for compiling a model on x86_64 (AoT)
|   |       # and for executing compiled models on a device.
|   └── backends # Backends supported by QNN.
|       └── htpbackend
|           ├── aarch64 # Configuration required to run on device (Device Part).
|           └── x86_64 # Configuration required to compile a graph on the host (AoT Part).
├── scripts # Miscellaneous supporting scripts, not related to core functionality.
├── serialization # Files related to serializing QNN compiler options and SoC information.
├── tests # Unit tests and model tests.
└── utils # Miscellaneous utilities.

examples/qualcomm
├── executor_runner # A general runner capable of running most basic models.
├── oss_scripts # Scripts for OSS (Open Source Software) models and customized runners for some specific models.
├── qaihub_scripts # Scripts for QAIHub models and the corresponding customized runners.
└── scripts # Scripts for models provided by ExecuTorch.
```

## Examples

Please see this [README.md](../../examples/qualcomm/README.md).

Further, an example build script is provided as [build.sh](scripts/build.sh).

## Issues
If you encounter a problem, please include reproduction information in your report so that maintainers can investigate. Please also follow the [policy](../../CONTRIBUTING.md#issues) when filing issues.
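
For example, a minimal reproduction script might look like the sketch below. It follows the AoT lowering flow from the tutorial; the `Add` module, tensor shapes, and SoC choice are illustrative, and the helper signatures come from `backends/qualcomm/utils/utils.py` at the time of writing, so please double-check them against your checkout:
```python
# Hypothetical minimal reproduction: lower a tiny module to the QNN backend
# and print the partitioned graph. Module, shapes, and SoC are illustrative.
import torch

from executorch.backends.qualcomm.partition.qnn_partitioner import QnnPartitioner
from executorch.backends.qualcomm.serialization.qnn_compile_spec_schema import QcomChipset
from executorch.backends.qualcomm.utils.utils import (
    capture_program,
    generate_htp_compiler_spec,
    generate_qnn_executorch_compiler_spec,
)
from executorch.exir.backend.backend_api import to_backend


class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y


module = Add().eval()
sample_input = (torch.randn(1, 8), torch.randn(1, 8))

# FP16 inference on HTP; pick the soc_model that matches your device.
compiler_specs = generate_qnn_executorch_compiler_spec(
    soc_model=QcomChipset.SM8650,  # Snapdragon 8 Gen 3
    backend_options=generate_htp_compiler_spec(use_fp16=True),
)

edge_program = capture_program(module, sample_input)
delegated_program = to_backend(edge_program.exported_program, QnnPartitioner(compiler_specs))
# Shows which nodes were delegated to QNN and which ones fell back.
print(delegated_program.graph_module.code)
```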

## Pull Requests
PRs are always welcome to help improve the codebase. Before submitting changes, please do the following:

- **Check the Coding Style**:<br/>
  Make sure your code follows the [style guides](../../CONTRIBUTING.md#coding-style) and passes the [lint checks](../../CONTRIBUTING.md#lintrunner).

- **Add Unit Tests**:<br/>
  The following is an example of adding a test case after [creating a new operator builder](builders/README.md). Navigate to the `backends/qualcomm/tests` folder and put a minimal example module in `model.py`, e.g.:
  ```python
  class IndexPut(torch.nn.Module):
      ...


  # please insert implementations in alphabetical order
  class LayerNorm(torch.nn.Module):
      def __init__(self):
          super().__init__()
          self.layer_norm = torch.nn.LayerNorm([768], eps=1e-6)

      def forward(self, x):
          return self.layer_norm(x)


  class LeakyReLUDefault(torch.nn.Module):
      ...
  ```
  Also extend the `TestQNNFloatingPointOperator` and `TestQNNQuantizedOperator` classes in `test_qnn_delegate.py` (a sketch of the floating-point counterpart is given at the end of this page), e.g.:
  ```python
  class TestQNNQuantizedOperator(TestQNN):
      def test_qnn_backend_interpolate_nearest_2d(self):
          ...

      # please insert implementations in alphabetical order
      def test_qnn_backend_layer_norm(self):
          module = LayerNorm()  # noqa: F405
          sample_input = (torch.randn(196, 768),)
          module = self.get_qdq_module(module, sample_input)
          self.lower_module_and_test_output(module, sample_input)

      def test_qnn_backend_leaky_relu(self):
          ...
  ```

- **Verify Unit Test Results**:<br/>
  ```bash
  cd $PATH_TO_EXECUTORCH
  # example usage of running a unit test
  python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedOperator.test_qnn_backend_layer_norm -s $DEVICE_SERIAL -m SM8650 -b build-android/ -a $PATH_TO_TEST_ARTIFACTS
  ```
  The test graph is expected to contain a single delegated node, with only placeholder and output nodes remaining. Check the execution report for more information.

- **Code Reviews**:<br/>
  Please ping the authors below to review Qualcomm AI Engine Direct related PRs; possible candidates are:
  - [chiwwang](https://github.com/chiwwang)
  - [shewu-quic](https://github.com/shewu-quic)
  - [chunit-quic](https://github.com/chunit-quic)
  - [winskuo-quic](https://github.com/winskuo-quic)
  - [chuntl](https://github.com/chuntl)
  - [haowhsu-quic](https://github.com/haowhsu-quic)

Thanks again for your contribution!
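
As referenced in the `Add Unit Tests` step above, a sketch of the floating-point counterpart for the `LayerNorm` test might look as follows. It assumes the same `LayerNorm` module in `model.py`; the floating-point flow mirrors the quantized one, minus the `get_qdq_module()` quantization step:
```python
class TestQNNFloatingPointOperator(TestQNN):
    # please insert implementations in alphabetical order
    def test_qnn_backend_layer_norm(self):
        module = LayerNorm()  # noqa: F405
        sample_input = (torch.randn(196, 768),)
        # No get_qdq_module() here: the module is lowered as FP16 directly.
        self.lower_module_and_test_output(module, sample_input)
```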