
Searched full:using_ints (Results 1 – 22 of 22) sorted by relevance

/aosp_15_r20/external/pytorch/test/distributed/_composable/
test_replicate_with_compiler.py
  351: fc.check("aten.flatten.using_ints(").check("cpp_fused_").check(
  363: fc.check("aten.flatten.using_ints(").check("cpp_fused_").check(
/aosp_15_r20/external/pytorch/aten/src/ATen/native/metal/ops/
MetalReshape.mm
  107: m.impl(TORCH_SELECTIVE_NAME("aten::flatten.using_ints"), TORCH_FN(flatten_using_ints));
/aosp_15_r20/external/executorch/backends/arm/quantizer/
arm_quantizer_utils.py
  152: torch.ops.aten.flatten.using_ints,
/aosp_15_r20/external/pytorch/test/mobile/model_test/
coverage.yaml
  233: - aten::flatten.using_ints
  815: aten::flatten.using_ints: 45
model_ops.yaml
  149: aten::flatten.using_ints: 74
/aosp_15_r20/external/pytorch/torch/ao/quantization/pt2e/
port_metadata_pass.py
  109: torch.ops.aten.flatten.using_ints,
/aosp_15_r20/external/pytorch/torch/ao/quantization/quantizer/
x86_inductor_quantizer.py
  89: torch.ops.aten.flatten.using_ints,
  218: torch.ops.aten.flatten.using_ints,
xnnpack_quantizer_utils.py
  1016: torch.ops.aten.flatten.using_ints,
/aosp_15_r20/external/pytorch/aten/src/ATen/functorch/
BatchRulesDecompositions.cpp
  129: OP_DECOMPOSE2(flatten, using_ints); in TORCH_LIBRARY_IMPL()
/aosp_15_r20/external/pytorch/torch/csrc/jit/mobile/model_tracer/
TracerRunner.cpp
  40: "aten::flatten.using_ints",
/aosp_15_r20/external/pytorch/torch/_inductor/fx_passes/
ddp_fusion.py
  197: call_function(graph, aten.flatten.using_ints, (input_node,))
/aosp_15_r20/external/executorch/backends/qualcomm/quantizer/
annotators.py
  663: @register_annotator([torch.ops.aten.flatten.using_ints])
/aosp_15_r20/external/pytorch/aten/src/ATen/core/
NamedRegistrations.cpp
  190: m.impl("flatten.using_ints", CppFunction::makeFallthrough()); in TORCH_LIBRARY_IMPL()
/aosp_15_r20/external/pytorch/torch/jit/
_shape_functions.py
  1356: "aten::flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1) -> Tensor(a)",
/aosp_15_r20/external/pytorch/torch/csrc/jit/runtime/static/
native_ops.cpp
  403: … "aten::flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1) -> Tensor(a)"))) { in __anon75e5f0512702()
passes.cpp
  406: … "static_runtime::flatten_copy.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> Tensor", in TORCH_LIBRARY_FRAGMENT()
ops.cpp
  1708: …"static_runtime::flatten_copy.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> Tensor")… in __anon11f46a8b4802()
/aosp_15_r20/external/pytorch/torch/csrc/jit/tensorexpr/
lowerings.cpp
  1823: {"aten::flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1) -> (Tensor(a))"}, in nnc_lowerings_lazy_registration()
/aosp_15_r20/external/pytorch/test/quantization/pt2e/
test_x86inductor_quantizer.py
  955: torch.ops.aten.flatten.using_ints,
/aosp_15_r20/external/pytorch/torch/csrc/jit/runtime/
serialized_shape_function_registry.cpp
  3299: …{"aten::flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1) -> Tensor(a)", "flatte… in GetShapeFunctionMappings()
/aosp_15_r20/external/pytorch/torch/testing/_internal/
common_methods_invocations.py
  14747: # got: Batching rule not implemented for aten::flatten.using_ints
  14768: # got: Batching rule not implemented for aten::flatten.using_ints
  14792: # got: Batching rule not implemented for aten::flatten.using_ints
  15642: # got: Batching rule not implemented for aten::flatten.using_ints
  15668: # got: Batching rule not implemented for aten::flatten.using_ints
  15707: # got: Batching rule not implemented for aten::flatten.using_ints
/aosp_15_r20/external/pytorch/aten/src/ATen/native/
native_functions.yaml
  2645: - func: flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1) -> Tensor(a)
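The schema above defines the `using_ints` overload of `aten::flatten`: dimensions `start_dim` through `end_dim` (inclusive, negatives counted from the end) collapse into a single dimension. A minimal sketch of those shape semantics in plain Python follows; `flatten_shape` is a hypothetical helper for illustration, not part of PyTorch.

```python
from math import prod

def flatten_shape(shape, start_dim=0, end_dim=-1):
    # Hypothetical helper: compute the output shape implied by
    # aten::flatten.using_ints(Tensor(a) self, int start_dim=0, int end_dim=-1).
    n = len(shape)
    # Normalize negative dims; a 0-d (scalar) input flattens to shape (1,).
    start = start_dim % n if n else 0
    end = end_dim % n if n else 0
    collapsed = prod(shape[start:end + 1])  # product of the flattened span
    return shape[:start] + (collapsed,) + shape[end + 1:]

print(flatten_shape((2, 3, 4)))        # (24,)  -- default flattens everything
print(flatten_shape((2, 3, 4), 1, 2))  # (2, 12) -- only dims 1..2 collapse
```

This mirrors why the shape-function registrations above (`_shape_functions.py`, `serialized_shape_function_registry.cpp`) exist: the output shape is fully determined by the input shape and the two int arguments.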