## Summary
In this example, we showcase how to export a model ([phi-3-mini](https://github.com/pytorch/executorch/tree/main/examples/models/phi-3-mini)) with LoRA layers appended to ExecuTorch. The model is exported to ExecuTorch for both inference and training.

To see how the model exported for training fits into a full finetuning loop, see our example on [LLM PTE Finetuning](https://github.com/pytorch/executorch/tree/main/examples/llm_pte_finetuning).
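For context, a LoRA adapter leaves the base weight `W` frozen and trains only two small low-rank matrices `A` and `B`, so the layer computes `y = Wx + (alpha/r)·B·A·x` instead of `y = Wx`. Below is a minimal, dependency-free sketch of that update with toy sizes — illustrative only, not the code that `export_model.py` runs:

```
# Toy sketch of the LoRA forward pass (plain Python, illustrative only).
# A LoRA layer computes y = W @ x + (alpha / r) * B @ (A @ x), where the
# base weight W stays frozen and only the low-rank factors A (r x d_in)
# and B (d_out x r) are trained.

def matvec(M, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    base = matvec(W, x)               # frozen base projection W @ x
    delta = matvec(B, matvec(A, x))   # low-rank update B @ (A @ x)
    scale = alpha / r                 # standard LoRA scaling factor
    return [b + scale * d for b, d in zip(base, delta)]

# Example: 2x2 identity base weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]            # shape (r=1, d_in=2)
B = [[1.0], [1.0]]          # shape (d_out=2, r=1)
print(lora_forward(W, A, B, [1.0, 2.0]))  # base output plus the low-rank update
```

Because only `A` and `B` receive gradients, the training export only needs to mark those small tensors as trainable, which is what makes LoRA finetuning cheap on-device.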

## Instructions
### Step 1: [Optional] Install ExecuTorch dependencies
Run `./install_requirements.sh` from the ExecuTorch root directory.

### Step 2: Install Requirements
- `./examples/models/phi-3-mini-lora/install_requirements.sh`

### Step 3: Export and run the model
1. Export the inference and training models to ExecuTorch.
```
python export_model.py
```

2. Run the inference model using an example runtime. For more detailed steps, check out [Build & Run](https://pytorch.org/executorch/stable/getting-started-setup.html#build-run).
```
# Clean and configure the CMake build system. Compiled programs will
# appear in the executorch/cmake-out directory created here.
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)

# Build the executor_runner target.
cmake --build cmake-out --target executor_runner -j9

# Run the model for inference.
./cmake-out/executor_runner --model_path phi3_mini_lora.pte