Lines Matching full:mantissa
34 * S - sign bit, E - bits of the biased exponent, M - bits of the mantissa, 0 - zero bits. in fp16_ieee_to_fp32_bits()
47 * Extract mantissa and biased exponent of the input number into the bits 0-30 of the 32-bit word: in fp16_ieee_to_fp32_bits()
56 …* Renorm shift is the number of bits to shift mantissa left to make the half-precision number norm… in fp16_ieee_to_fp32_bits()
59 …* denormalized nonsign by renorm_shift, the unit bit of mantissa will shift into exponent, turning… in fp16_ieee_to_fp32_bits()
60 * biased exponent into 1, and making mantissa normalized (i.e. without leading 1). in fp16_ieee_to_fp32_bits()
88 …t nonsign right by 3 so the exponent (5 bits originally) becomes an 8-bit field and 10-bit mantissa in fp16_ieee_to_fp32_bits()
89 * shifts into the 10 high bits of the 23-bit mantissa of IEEE single-precision number. in fp16_ieee_to_fp32_bits()
95 …* 6. Binary ANDNOT with zero_mask to turn the mantissa and exponent into zero if the input was zer… in fp16_ieee_to_fp32_bits()
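Taken together, the steps the `fp16_ieee_to_fp32_bits()` comments describe (extract the non-sign bits, renormalize denormals so the leading mantissa bit becomes implicit, rebias the exponent, widen the 10-bit mantissa into the top of the 23-bit field) can be sketched as below. This is an illustrative branchy reimplementation under assumed names, not the library's branch-free code:

```c
#include <stdint.h>

/* Illustrative sketch of the conversion the comments above describe;
 * uses branches for clarity where the original is branch-free.
 * Handles zero, denormals, normals, infinity and NaN. */
static uint32_t half_to_float_bits(uint16_t h) {
    const uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t exponent = (h >> 10) & 0x1F;   /* 5-bit biased exponent */
    uint32_t mantissa = h & 0x03FF;         /* 10-bit mantissa */

    if (exponent == 0x1F) {
        /* Infinity or NaN: exponent becomes all-ones in the 8-bit field. */
        return sign | UINT32_C(0x7F800000) | (mantissa << 13);
    }
    if (exponent != 0) {
        /* Normalized: rebias exponent from 15 to 127, i.e. add 112. */
        exponent += 112;
    } else if (mantissa != 0) {
        /* Denormalized: shift the mantissa left until its leading 1
         * reaches the implicit-1 position (bit 10), decrementing the
         * exponent once per shift from the biased exponent of 2**(-14). */
        exponent = 113;
        while ((mantissa & 0x0400) == 0) {
            mantissa <<= 1;
            exponent--;
        }
        mantissa &= 0x03FF;                 /* drop the now-implicit 1 */
    } else {
        return sign;                        /* signed zero */
    }
    /* The 10 mantissa bits occupy the high bits of the 23-bit field. */
    return sign | (exponent << 23) | (mantissa << 13);
}
```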
116 * S - sign bit, E - bits of the biased exponent, M - bits of the mantissa, 0 - zero bits. in fp16_ieee_to_fp32_value()
129 * Extract mantissa and biased exponent of the input number into the high bits of the 32-bit word: in fp16_ieee_to_fp32_value()
139 * Shift exponent and mantissa into bits 23-28 and bits 13-22 so they become the exponent and mantissa in fp16_ieee_to_fp32_value()
142 * S|Exponent | Mantissa in fp16_ieee_to_fp32_value()
176 * In a denormalized number the biased exponent is zero, and the mantissa has non-zero bits. in fp16_ieee_to_fp32_value()
177 * First, we shift mantissa into bits 0-9 of the 32-bit word. in fp16_ieee_to_fp32_value()
179 * zeros | mantissa in fp16_ieee_to_fp32_value()
186 * FP16 = mantissa * 2**(-24). in fp16_ieee_to_fp32_value()
187 …* The trick is to construct a normalized single-precision number with the same mantissa as the hal… in fp16_ieee_to_fp32_value()
188 * and with an exponent which would scale the corresponding mantissa bits to 2**(-24). in fp16_ieee_to_fp32_value()
190 * FP32 = (1 + mantissa * 2**(-23)) * 2**(exponent - 127) in fp16_ieee_to_fp32_value()
191 …* Therefore, when the biased exponent is 126, a unit change in the mantissa of the input denormali… in fp16_ieee_to_fp32_value()
208 * - Combine the result of conversion of exponent and mantissa with the sign of the input number. in fp16_ieee_to_fp32_value()
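The denormal trick the `fp16_ieee_to_fp32_value()` comments outline (a biased exponent of 126, so a unit change in the input mantissa is worth exactly 2**(-24)) can be sketched as follows; `fp32_from_bits` and `fp16_denormal_to_fp32` are illustrative names, not the library's API:

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret 32 bits as a float; memcpy sidesteps strict-aliasing issues. */
static float fp32_from_bits(uint32_t w) {
    float f;
    memcpy(&f, &w, sizeof f);
    return f;
}

/* A half-precision denormal has the value mantissa * 2**(-24). ORing the
 * 10 mantissa bits into a float whose biased exponent is 126 (the bit
 * pattern of 0.5f) yields 0.5 + mantissa * 2**(-24); subtracting 0.5
 * leaves exactly the target value, with the FPU renormalizing for us. */
static float fp16_denormal_to_fp32(uint16_t h) {
    const uint32_t mantissa = h & 0x03FF;
    const uint32_t magic = UINT32_C(126) << 23;  /* 0.5f as bits */
    return fp32_from_bits(magic | mantissa) - 0.5f;
}
```

Every intermediate value here is exactly representable in single precision, so the subtraction introduces no rounding error.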
263 * S - sign bit, E - bits of the biased exponent, M - bits of the mantissa, 0 - zero bits. in fp16_alt_to_fp32_bits()
276 * Extract mantissa and biased exponent of the input number into the bits 0-30 of the 32-bit word: in fp16_alt_to_fp32_bits()
285 …* Renorm shift is the number of bits to shift mantissa left to make the half-precision number norm… in fp16_alt_to_fp32_bits()
288 …* denormalized nonsign by renorm_shift, the unit bit of mantissa will shift into exponent, turning… in fp16_alt_to_fp32_bits()
289 * biased exponent into 1, and making mantissa normalized (i.e. without leading 1). in fp16_alt_to_fp32_bits()
309 …t nonsign right by 3 so the exponent (5 bits originally) becomes an 8-bit field and 10-bit mantissa in fp16_alt_to_fp32_bits()
310 * shifts into the 10 high bits of the 23-bit mantissa of IEEE single-precision number. in fp16_alt_to_fp32_bits()
315 …* 5. Binary ANDNOT with zero_mask to turn the mantissa and exponent into zero if the input was zer… in fp16_alt_to_fp32_bits()
336 * S - sign bit, E - bits of the biased exponent, M - bits of the mantissa, 0 - zero bits. in fp16_alt_to_fp32_value()
349 * Extract mantissa and biased exponent of the input number into the high bits of the 32-bit word: in fp16_alt_to_fp32_value()
359 * Shift exponent and mantissa into bits 23-28 and bits 13-22 so they become the exponent and mantissa in fp16_alt_to_fp32_value()
362 * S|Exponent | Mantissa in fp16_alt_to_fp32_value()
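The shift into the single-precision exponent and mantissa positions described above can be sketched for the normalized path like this. It is a simplified variant under assumed names (the sign bit is handled separately, as the comments note): after the shift the exponent still carries the half-precision bias of 15, and one floating-point multiply by 2**112 = 2**(127-15) fixes it up:

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret 32 bits as a float without violating strict aliasing. */
static float fp32_from_bits(uint32_t w) {
    float f;
    memcpy(&f, &w, sizeof f);
    return f;
}

/* Sketch of the normalized path: shifting the 15 non-sign bits left by 13
 * drops the 5-bit exponent into the single-precision exponent field and
 * the 10 mantissa bits into bits 13-22, still with the half-precision
 * bias of 15. Multiplying by 2**112 rebias-corrects in FP arithmetic. */
static float half_normal_to_fp32(uint16_t h) {
    const uint32_t bits = (uint32_t)(h & 0x7FFF) << 13;
    return fp32_from_bits(bits) * 0x1p112f;
}
```

The multiply is exact because it only adjusts the exponent, never the mantissa.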
383 * In a denormalized number the biased exponent is zero, and the mantissa has non-zero bits. in fp16_alt_to_fp32_value()
384 * First, we shift mantissa into bits 0-9 of the 32-bit word. in fp16_alt_to_fp32_value()
386 * zeros | mantissa in fp16_alt_to_fp32_value()
393 * FP16 = mantissa * 2**(-24). in fp16_alt_to_fp32_value()
394 …* The trick is to construct a normalized single-precision number with the same mantissa as the hal… in fp16_alt_to_fp32_value()
395 * and with an exponent which would scale the corresponding mantissa bits to 2**(-24). in fp16_alt_to_fp32_value()
397 * FP32 = (1 + mantissa * 2**(-23)) * 2**(exponent - 127) in fp16_alt_to_fp32_value()
398 …* Therefore, when the biased exponent is 126, a unit change in the mantissa of the input denormali… in fp16_alt_to_fp32_value()
415 * - Combine the result of conversion of exponent and mantissa with the sign of the input number. in fp16_alt_to_fp32_value()