Searched refs: _bfloat16_to_float_cpu (Results 1 – 2 of 2) sorted by relevance
63  at::Tensor _bfloat16_to_float_cpu(const at::Tensor& input) {  in _bfloat16_to_float_cpu() function
88  m.impl("_Bfloat16QuantizedToFloat", _bfloat16_to_float_cpu);  in TORCH_LIBRARY_IMPL()
14 at::Tensor _bfloat16_to_float_cpu(const at::Tensor& input);
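The hits above suggest a common pattern: a header declares `_bfloat16_to_float_cpu`, a .cpp file defines it, and a `TORCH_LIBRARY_IMPL` block binds it to the `_Bfloat16QuantizedToFloat` operator for the CPU dispatch key. The following is a minimal sketch of that pattern, not the actual source; the operator namespace (`quantized`), the `TORCH_CHECK` guard, and the conversion body are assumptions filled in for illustration.

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Declaration (corresponds to the header hit at line 14).
at::Tensor _bfloat16_to_float_cpu(const at::Tensor& input);

// Definition (corresponds to the hit at line 63): widen BFloat16 to float32.
// The body shown here is an assumed implementation, not the repo's code.
at::Tensor _bfloat16_to_float_cpu(const at::Tensor& input) {
  TORCH_CHECK(input.scalar_type() == at::kBFloat16,
              "_bfloat16_to_float_cpu expects a BFloat16 tensor");
  return input.to(at::kFloat);
}

// Registration (corresponds to the hit at line 88): bind the CPU kernel to
// the _Bfloat16QuantizedToFloat operator. The "quantized" namespace is an
// assumption; the real registration may live in a different namespace.
TORCH_LIBRARY_IMPL(quantized, CPU, m) {
  m.impl("_Bfloat16QuantizedToFloat", _bfloat16_to_float_cpu);
}
```

With a registration of this shape, a call dispatched to `_Bfloat16QuantizedToFloat` on a CPU tensor would resolve to `_bfloat16_to_float_cpu` via the dispatcher, which is consistent with the two references the search reports.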