Searched refs:use_local_synchronization (Results 1 – 4 of 4) sorted by relevance

/aosp_15_r20/external/pytorch/torch/distributed/
distributed_c10d.py 4496 use_local_synchronization=False, argument
4571 use_local_synchronization=use_local_synchronization,
4582 use_local_synchronization=False, argument
4611 if use_local_synchronization:
4646 group_name = _process_group_name(ranks, use_hashed_name=use_local_synchronization)
4684 barrier_store = pg_store if use_local_synchronization else default_store
4685 world_size = len(ranks) if use_local_synchronization else get_world_size()
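
The two matched lines at 4684 and 4685 suggest how the option changes the group-creation barrier: with use_local_synchronization the barrier runs over the new group's own store and only its member ranks, otherwise over the default store and the full world size. A minimal sketch of that selection logic (illustrative only, not PyTorch's actual code; the helper name is hypothetical):

    def _pick_barrier_scope(pg_store, default_store, ranks, global_world_size,
                            use_local_synchronization):
        # Hypothetical helper mirroring the two search hits above: choose which
        # store and how many ranks the creation barrier should cover.
        barrier_store = pg_store if use_local_synchronization else default_store
        world_size = len(ranks) if use_local_synchronization else global_world_size
        return barrier_store, world_size
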
/aosp_15_r20/external/pytorch/test/distributed/
test_c10d_object_collectives.py 143 my_pg = dist.new_group(ranks, use_local_synchronization=True)
test_c10d_common.py 1408 dist.new_group(ranks=ranks_out, use_local_synchronization=True)
1411 new_pg = dist.new_group(ranks=ranks_in, use_local_synchronization=True)
1437 new_pg = dist.new_group(ranks=ranks_in, use_local_synchronization=True)
1476 dist.new_group(ranks=ranks_in, use_local_synchronization=True)
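
The test hits above all call dist.new_group with use_local_synchronization=True. A minimal usage sketch, assuming the default process group has already been initialized on every rank (the ranks list here is illustrative, not taken from the tests):

    import torch.distributed as dist

    ranks = [0, 1]
    if dist.get_rank() in ranks:
        # With use_local_synchronization=True the creation barrier is local to
        # the new group, so ranks outside `ranks` can skip this call.
        sub_pg = dist.new_group(ranks=ranks, use_local_synchronization=True)
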
/aosp_15_r20/external/pytorch/test/distributed/_shard/sharded_tensor/
test_sharded_tensor.py 3131 use_local_synchronization=True,
3161 use_local_synchronization=True,