Searched full:linear_unpack_fp16 (Results 1 – 8 of 8) sorted by relevance
8    # Configs for PT linear_unpack_fp16 operator
37   self.set_module_name("linear_unpack_fp16")
40   return torch.ops.quantized.linear_unpack_fp16(input_one)
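The first result is an operator micro-benchmark config. A minimal sketch in the same spirit, assuming an FBGEMM-enabled PyTorch CPU build; the shapes and the use of torch.utils.benchmark are illustrative, not taken from the hit above:

    import torch
    from torch.utils import benchmark

    # Prepack an fp32 weight/bias pair into the fp16 packed-params object,
    # then time the unpack op on it (assumes FBGEMM support on CPU).
    packed = torch.ops.quantized.linear_prepack_fp16(
        torch.randn(64, 128), torch.randn(64))
    timer = benchmark.Timer(
        stmt="torch.ops.quantized.linear_unpack_fp16(packed)",
        globals={"packed": packed},
    )
    print(timer.timeit(100))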
38 "quantized::linear_unpack_fp16 is currently " in run()69 …m.impl(TORCH_SELECTIVE_NAME("quantized::linear_unpack_fp16.legacy"), TORCH_FN(QLinearUnpackWeightF… in TORCH_LIBRARY_IMPL()75 …m.impl(TORCH_SELECTIVE_NAME("quantized::linear_unpack_fp16"), TORCH_FN(QLinearUnpackWeightFp16::ru… in TORCH_LIBRARY_IMPL()
194  …m.def(TORCH_SELECTIVE_SCHEMA("quantized::linear_unpack_fp16(__torch__.torch.classes.quantized.Line…  in TORCH_LIBRARY()
196  …m.def(TORCH_SELECTIVE_SCHEMA("quantized::linear_unpack_fp16.legacy(Tensor W_prepack) -> (Tensor W_…  in TORCH_LIBRARY()
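Results 2 and 3 are the C++ kernel registration and schema definitions: the non-legacy overload takes the packed-params class instance produced by linear_prepack_fp16 and the .legacy overload operates on a plain tensor, with the truncated string at line 38 apparently part of an error raised on backends where the op is unsupported. A minimal round-trip sketch of the non-legacy overload, assuming an FBGEMM-enabled build and a (weight, optional bias) tuple return consistent with the schemas above:

    import torch

    w = torch.randn(4, 8)   # out_features x in_features, illustrative shapes
    b = torch.randn(4)
    packed = torch.ops.quantized.linear_prepack_fp16(w, b)
    w_out, b_out = torch.ops.quantized.linear_unpack_fp16(packed)
    # The weight round-trips through fp16 storage, so it matches the
    # original only up to fp16 rounding; the bias stays fp32.
    assert torch.allclose(w_out, w.to(torch.float16).to(torch.float32))
    assert torch.allclose(b_out, b)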
49 return torch.ops.quantized.linear_unpack_fp16(self._packed_params)
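Result 4 is the module-side path: the packed-params holder calls the op to recover its weight and bias. A sketch of reaching it through the public dynamic quantized Linear module, assuming an FBGEMM-enabled build; _weight_bias() is an internal accessor, named here as it appears in the hit:

    import torch
    import torch.ao.nn.quantized.dynamic as nnqd

    # A dynamic quantized Linear with fp16 packed params; its internal
    # _weight_bias() accessor ends up in the unpack call shown above.
    m = nnqd.Linear(in_features=8, out_features=4, dtype=torch.float16)
    w, b = m._weight_bias()
    print(w.dtype, w.shape)   # fp32 tensor recovered from fp16 storage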
440 quantized::linear_unpack_fp16: 46
1093 quantized::linear_unpack_fp16: 4
1127  %w_unpacked : Tensor, %b : Tensor? = quantized::linear_unpack_fp16(%packed_params)  in dynamic_quant_fusion_pattern_and_replacements()
1168  %w_unpacked : Tensor, %b_unpacked : Tensor? = quantized::linear_unpack_fp16(%packed_params)  in linear_prepack_unpack_patterns()
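The two JIT hits are pattern strings that graph-mode quantization uses to match a prepack/unpack pair and fold it away. For context, a sketch of producing a model where such patterns arise, using the eager-mode dynamic fp16 quantization API (the JIT passes apply the equivalent rewrite on scripted graphs); assumes an FBGEMM-enabled build:

    import torch
    from torch.ao.quantization import quantize_dynamic

    model = torch.nn.Sequential(torch.nn.Linear(8, 4))
    # Replace nn.Linear with its dynamic fp16 counterpart, whose weight
    # is stored prepacked; printing shows the dynamic quantized wrapper.
    qmodel = quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.float16)
    print(qmodel[0])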
3313 w_unpacked_fp16 = torch.ops.quantized.linear_unpack_fp16(w_packed_fp16)
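The final result is the op-level test exercising the same round trip. One more detail suggested by the Tensor? in the patterns above (a sketch under the same build assumptions): the bias is optional, so prepacking without one should unpack to None.

    import torch

    packed = torch.ops.quantized.linear_prepack_fp16(torch.randn(4, 8), None)
    w_out, b_out = torch.ops.quantized.linear_unpack_fp16(packed)
    assert b_out is None   # no bias supplied, so none comes back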