1# flake8: noqa: F401 2r""" 3Utils shared by different modes of quantization (eager/graph) 4 5This file is in the process of migration to `torch/ao/quantization`, and 6is kept here for compatibility while the migration process is ongoing. 7If you are adding a new entry/functionality, please, add it to the 8`torch/ao/quantization/utils.py`, while adding an import statement 9here. 10""" 11 12from torch.ao.quantization.utils import ( 13 activation_dtype, 14 activation_is_int8_quantized, 15 activation_is_statically_quantized, 16 calculate_qmin_qmax, 17 check_min_max_valid, 18 get_combined_dict, 19 get_qconfig_dtypes, 20 get_qparam_dict, 21 get_quant_type, 22 get_swapped_custom_module_class, 23 getattr_from_fqn, 24 is_per_channel, 25 is_per_tensor, 26 weight_dtype, 27 weight_is_quantized, 28 weight_is_statically_quantized, 29) 30