# flake8: noqa: F401
r"""Quantized Modules.

This file is in the process of migration to `torch/ao/nn/quantized`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry or new functionality, please add it to the
appropriate file under `torch/ao/nn/quantized/modules`, and add a
corresponding import statement here (see the commented example at the
end of this file).
"""

from torch.ao.nn.quantized.modules.utils import (
    _hide_packed_params_repr,
    _ntuple_from_first,
    _pair_from_first,
    _quantize_weight,
    WeightedQuantizedModule,
)
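
# A minimal sketch of the re-export pattern described in the docstring above.
# The names `foo` and `Foo` are hypothetical and used purely for illustration:
# a new module implemented under `torch/ao/nn/quantized/modules/foo.py` would
# only be re-exported from this compatibility file, never defined here.
#
#     from torch.ao.nn.quantized.modules.foo import Foo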