
Searched full:linear_unpack_fp16 (Results 1 – 8 of 8) sorted by relevance

/aosp_15_r20/external/pytorch/benchmarks/operator_benchmark/pt/
  linear_unpack_fp16_test.py
    8   # Configs for PT linear_unpack_fp16 operator
    37  self.set_module_name("linear_unpack_fp16")
    40  return torch.ops.quantized.linear_unpack_fp16(input_one)
/aosp_15_r20/external/pytorch/aten/src/ATen/native/quantized/
  qlinear_unpack.cpp
    38  "quantized::linear_unpack_fp16 is currently " in run()
    69  …m.impl(TORCH_SELECTIVE_NAME("quantized::linear_unpack_fp16.legacy"), TORCH_FN(QLinearUnpackWeightF… in TORCH_LIBRARY_IMPL()
    75  …m.impl(TORCH_SELECTIVE_NAME("quantized::linear_unpack_fp16"), TORCH_FN(QLinearUnpackWeightFp16::ru… in TORCH_LIBRARY_IMPL()
  library.cpp
    194  …m.def(TORCH_SELECTIVE_SCHEMA("quantized::linear_unpack_fp16(__torch__.torch.classes.quantized.Line… in TORCH_LIBRARY()
    196  …m.def(TORCH_SELECTIVE_SCHEMA("quantized::linear_unpack_fp16.legacy(Tensor W_prepack) -> (Tensor W_… in TORCH_LIBRARY()
/aosp_15_r20/external/pytorch/torch/ao/nn/quantized/modules/
  linear.py
    49  return torch.ops.quantized.linear_unpack_fp16(self._packed_params)
/aosp_15_r20/external/pytorch/test/mobile/model_test/
  model_ops.yaml
    440  quantized::linear_unpack_fp16: 46
  coverage.yaml
    1093  quantized::linear_unpack_fp16: 4
/aosp_15_r20/external/pytorch/torch/csrc/jit/passes/quantization/
  quantization_patterns.h
    1127  %w_unpacked : Tensor, %b : Tensor? = quantized::linear_unpack_fp16(%packed_params) in dynamic_quant_fusion_pattern_and_replacements()
    1168  %w_unpacked : Tensor, %b_unpacked : Tensor? = quantized::linear_unpack_fp16(%packed_params) in linear_prepack_unpack_patterns()
/aosp_15_r20/external/pytorch/test/quantization/core/
  test_quantized_op.py
    3313  w_unpacked_fp16 = torch.ops.quantized.linear_unpack_fp16(w_packed_fp16)
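The matches above all revolve around the same round trip: a float weight/bias pair is packed into an opaque `LinearPackedParamsBase` by `quantized::linear_prepack_fp16` and later recovered by `quantized::linear_unpack_fp16` (as in `linear.py` and `test_quantized_op.py`). A minimal sketch of that usage, assuming a PyTorch build with FBGEMM enabled (the `qlinear_unpack.cpp` match shows the op raises an error otherwise); the shapes here are illustrative, not from the source:

```python
# Sketch of the fp16 prepack/unpack round trip, assuming FBGEMM is
# available (standard x86 PyTorch wheels include it).
import torch

w = torch.randn(4, 3)  # weight for a Linear(3, 4)
b = torch.randn(4)     # optional bias; None is also accepted

# Pack: converts the weight to fp16 and wraps it in the opaque
# __torch__.torch.classes.quantized.LinearPackedParamsBase object
# referenced in the library.cpp schema above.
packed = torch.ops.quantized.linear_prepack_fp16(w, b)

# Unpack: recovers the (weight, bias) pair, mirroring the call in
# torch/ao/nn/quantized/modules/linear.py above.
w_unpacked, b_unpacked = torch.ops.quantized.linear_unpack_fp16(packed)

# The round trip preserves shape; weight values go through an fp16
# conversion, so they match the original only to fp16 precision.
print(w_unpacked.shape)  # torch.Size([4, 3])
```

This is the mechanism the JIT fusion patterns in `quantization_patterns.h` rely on: because pack followed by unpack is (up to fp16 rounding) an identity on the weights, the pass can fold the pair away.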