Searched refs: _float_to_bfloat16_cpu (Results 1 – 2 of 2) sorted by relevance
44    at::Tensor _float_to_bfloat16_cpu(const at::Tensor& input) {        in _float_to_bfloat16_cpu() function
89    m.impl("_FloatToBfloat16Quantized", _float_to_bfloat16_cpu);        in TORCH_LIBRARY_IMPL()
13    at::Tensor _float_to_bfloat16_cpu(const at::Tensor& input);
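
For context, the hits show the CPU kernel's declaration/definition and its registration as the backend for the _FloatToBfloat16Quantized operator. The following is a minimal, hypothetical sketch of how such a kernel and its TORCH_LIBRARY_IMPL registration could fit together. It assumes the registration block targets the "quantized" namespace (not visible in the hits) and that the conversion is a plain element-wise float32 -> bfloat16 dtype cast; the repository's actual kernel may instead use a vectorized conversion path or emit the raw 16-bit patterns in a different output dtype.

    #include <ATen/ATen.h>
    #include <torch/library.h>

    // Hypothetical sketch, not the repository's actual kernel: convert a
    // float32 tensor to bfloat16 on CPU via an element-wise dtype cast.
    at::Tensor _float_to_bfloat16_cpu(const at::Tensor& input) {
      TORCH_CHECK(input.scalar_type() == at::kFloat,
                  "_float_to_bfloat16_cpu expects a float32 tensor");
      // Tensor::to() performs the element-wise float32 -> bfloat16 conversion.
      return input.to(at::kBFloat16);
    }

    // Route the operator's CPU dispatch to the kernel above. The "quantized"
    // namespace is an assumption; the hit only shows the m.impl(...) call.
    // The operator schema must already be declared elsewhere via TORCH_LIBRARY.
    TORCH_LIBRARY_IMPL(quantized, CPU, m) {
      m.impl("_FloatToBfloat16Quantized", _float_to_bfloat16_cpu);
    }

Splitting the declaration (the header hit at line 13) from the definition and registration keeps the kernel callable from other translation units, while TORCH_LIBRARY_IMPL binds it only to the CPU dispatch key for that operator.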