| Name | Date | Size | #Lines | LOC |
|---|---|---|---|---|
| test/ | 25-Apr-2025 | - | 549 | 421 |
| third-party/FFHT/ | 25-Apr-2025 | - | 50,443 | 50,228 |
| README.md | 25-Apr-2025 | 751 | 17 | 14 |
| TARGETS | 25-Apr-2025 | 93 | 6 | 3 |
| fast_hadamard_transform.cpp | 25-Apr-2025 | 3.5 KiB | 102 | 60 |
| fast_hadamard_transform.h | 25-Apr-2025 | 4.5 KiB | 145 | 84 |
| fast_hadamard_transform_special.h | 25-Apr-2025 | 27.2 KiB | 242 | 229 |
| special_hadamard_code_gen.py | 25-Apr-2025 | 7.6 KiB | 280 | 204 |
| targets.bzl | 25-Apr-2025 | 752 | 23 | 20 |
README.md

# SpinQuant

This is an implementation of the [Fast Hadamard
Transform](https://en.wikipedia.org/wiki/Fast_Walsh–Hadamard_transform)
as used in [SpinQuant](https://arxiv.org/abs/2405.16406) (for the R3
and R4 matrices), [QuaRot](https://arxiv.org/abs/2404.00456), and
[QuIP#](https://arxiv.org/pdf/2402.04396). We follow those papers'
method (as implemented in
https://github.com/Dao-AILab/fast-hadamard-transform/) for extending
the transform to non-power-of-two input sizes. A CUDA implementation
is out of scope here because
https://github.com/Dao-AILab/fast-hadamard-transform/ already provides
one.
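For context, the power-of-two case can be sketched as the classic in-place fast Walsh–Hadamard transform, which replaces the O(n²) matrix product with O(n log n) butterfly passes. This is an illustrative sketch only, not the API exposed by `fast_hadamard_transform.h`; the `fwht` name and signature are hypothetical. The non-power-of-two extension mentioned above composes a transform like this on power-of-two blocks with a fixed small Hadamard matrix across blocks.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// In-place fast Walsh-Hadamard transform (natural/Hadamard ordering)
// for power-of-two sizes. Each pass doubles the butterfly span h.
// Illustrative sketch; not this repo's actual interface.
void fwht(std::vector<float>& data) {
  const std::size_t n = data.size();
  assert(n > 0 && (n & (n - 1)) == 0 && "size must be a power of two");
  for (std::size_t h = 1; h < n; h *= 2) {
    for (std::size_t i = 0; i < n; i += 2 * h) {
      for (std::size_t j = i; j < i + h; ++j) {
        const float x = data[j];
        const float y = data[j + h];
        data[j] = x + y;      // butterfly: sum
        data[j + h] = x - y;  // butterfly: difference
      }
    }
  }
}
```

Note that this leaves the output unnormalized; the rotation-matrix uses in the papers above additionally scale by 1/sqrt(n) to make the transform orthogonal.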

The intended long-term home for this code is pytorch/ao; it lives in
ExecuTorch temporarily until we work out a C++ dependency from
ExecuTorch on torchao.