README.md
## Summary
In this example, we showcase how to export a model ([phi-3-mini](https://github.com/pytorch/executorch/tree/main/examples/models/phi-3-mini)) appended with LoRA layers to ExecuTorch. The model is exported to ExecuTorch for both inference and training.

To see how you can use the model exported for training in a fully involved finetuning loop, please see our example on [LLM PTE Finetuning](https://github.com/pytorch/executorch/tree/main/examples/llm_pte_finetuning).

## Instructions
### Step 1: [Optional] Install ExecuTorch dependencies
Run `./install_requirements.sh` in the ExecuTorch root directory.

### Step 2: Install requirements
- `./examples/models/phi-3-mini-lora/install_requirements.sh`

### Step 3: Export and run the model
1. Export the inference and training models to ExecuTorch.
```
python export_model.py
```

2. Run the inference model using an example runtime. For more detailed steps on this, check out [Build & Run](https://pytorch.org/executorch/stable/getting-started-setup.html#build-run).
```
# Clean and configure the CMake build system. Compiled programs will appear in
# the executorch/cmake-out directory we create here.
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)

# Build the executor_runner target.
cmake --build cmake-out --target executor_runner -j9

# Run the model for inference.
./cmake-out/executor_runner --model_path phi3_mini_lora.pte
```
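For background on what "appended with LoRA layers" means: LoRA keeps the base weight frozen and adds a trainable low-rank update, so only the small adapter matrices are updated during finetuning. The sketch below illustrates just that math in plain numpy; it is an independent illustration of the technique, not part of `export_model.py`, and the dimensions and names are made up for the example.

```
import numpy as np

# LoRA replaces a frozen weight W (d_out x d_in) with W + (alpha / r) * B @ A,
# where A (r x d_in) and B (d_out x r) are small trainable matrices of rank r.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 6, 2, 4.0

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus the low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B initialized to zero, the adapter is a no-op before training starts,
# so the LoRA-augmented model initially matches the base model exactly.
assert np.allclose(lora_forward(x), W @ x)

# The weight update contributed by the adapter has rank at most r,
# which is why only (d_in + d_out) * r extra parameters need training.
delta = (alpha / r) * (B @ A)
assert np.linalg.matrix_rank(delta) <= r
```

Because the adapter starts as a no-op and stays low-rank, the exported training program only needs gradients for `A` and `B`, which is what makes on-device finetuning of a model this size tractable.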