Normal number (computing)

In computing, a normal number is a non-zero number in a floating-point representation that lies within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
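
One way to see this definition concretely (a minimal sketch, assuming a CPython build whose float is IEEE 754 binary64) is to look at float.hex(), which prints the significand explicitly: normal values show a significand with no leading zero, while subnormal values show one with a leading zero.

    # Minimal sketch: float.hex() exposes the significand of a binary64 value.
    # A normal number's significand starts "0x1." (no leading zero), while a
    # subnormal's starts "0x0." (leading zero).
    print((1.0).hex())        # 0x1.0000000000000p+0     -> normal
    print((2**-1022).hex())   # 0x1.0000000000000p-1022  -> smallest positive normal
    print((2**-1074).hex())   # 0x0.0000000000001p-1022  -> subnormal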

The magnitude of the smallest normal number in a format is given by

    b^E_min

where b is the base (radix) of the format (commonly 2 or 10, for the binary and decimal number systems) and E_min depends on the size and layout of the format.
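
As a quick check of this formula (a sketch assuming the platform's float is IEEE 754 binary64, so b = 2 and E_min = −1022), the value b^E_min matches the smallest positive normal number reported by the Python runtime:

    import math
    import sys

    # Smallest positive normal binary64 value: b**E_min with b = 2, E_min = -1022.
    smallest_normal = math.ldexp(1.0, -1022)        # 1.0 * 2**-1022

    print(smallest_normal)                          # 2.2250738585072014e-308
    print(sys.float_info.min)                       # the runtime's smallest positive normal double
    print(smallest_normal == sys.float_info.min)    # True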

Similarly, the magnitude of the largest normal number in a format is given by

    (1 − b^−p) × b^(E_max + 1)

where p is the precision of the format in digits and E_max is related to E_min as

    E_max = 1 − E_min
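
Continuing the same binary64 sketch (p = 53, E_max = 1023, again assuming IEEE 754 doubles), the formula reproduces the largest finite double reported by the runtime:

    import math
    import sys

    # Largest finite binary64 value: (1 - b**-p) * b**(E_max + 1)
    # with b = 2, p = 53, E_max = 1023.
    largest_normal = math.ldexp(1.0 - 2.0**-53, 1024)

    print(largest_normal)                           # 1.7976931348623157e+308
    print(sys.float_info.max)                       # the runtime's largest finite double
    print(largest_normal == sys.float_info.max)     # True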

In the IEEE 754 binary and decimal formats, b, p, E_min, and E_max have the following values:[1]

Smallest and Largest Normal Numbers for Common Numerical Formats
Format       b    p    E_min    E_max    Smallest normal number   Largest normal number
binary16     2    11   −14      15       2^−14                    (1 − 2^−11) × 2^16
binary32     2    24   −126     127      2^−126                   (1 − 2^−24) × 2^128
binary64     2    53   −1022    1023     2^−1022                  (1 − 2^−53) × 2^1024
binary128    2    113  −16382   16383    2^−16382                 (1 − 2^−113) × 2^16384
decimal32    10   7    −95      96       10^−95                   (1 − 10^−7) × 10^97
decimal64    10   16   −383     384      10^−383                  (1 − 10^−16) × 10^385
decimal128   10   34   −6143    6144     10^−6143                 (1 − 10^−34) × 10^6145
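
As one way to spot-check a row of the table (here binary32; this sketch assumes NumPy is installed and uses its finfo only to read the platform's float32 limits):

    import numpy as np

    # binary32 row: b = 2, p = 24, E_min = -126, E_max = 127.
    smallest_normal = 2.0 ** -126                      # b**E_min
    largest_normal = (1.0 - 2.0 ** -24) * 2.0 ** 128   # (1 - b**-p) * b**(E_max + 1)

    print(float(np.finfo(np.float32).tiny) == smallest_normal)   # True
    print(float(np.finfo(np.float32).max) == largest_normal)     # True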

For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96.
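
A rough way to experiment with this range (a sketch using Python's decimal module with a context merely configured to mimic the decimal32 parameters above, not the decimal32 storage format itself):

    from decimal import Context, Decimal

    # Context mimicking decimal32 parameters: p = 7, E_min = -95, E_max = 96.
    d32 = Context(prec=7, Emin=-95, Emax=96)

    print(Decimal("1E-95").is_normal(d32))           # True  -> smallest positive normal
    print(Decimal("9.999999E+96").is_normal(d32))    # True  -> largest normal
    print(Decimal("1E-96").is_subnormal(d32))        # True  -> below the normal range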

Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).

Zero is considered neither normal nor subnormal.
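
On a typical binary64 platform with gradual underflow and round-to-nearest (a brief sketch, not specific to any one implementation), halving the smallest normal number yields a subnormal rather than zero, while halving the smallest subnormal finally underflows to zero:

    import sys

    smallest_normal = sys.float_info.min     # 2**-1022 for binary64

    # Halving the smallest normal does not flush to zero: the result is a
    # subnormal (denormal) number, representable only with leading zeros in
    # the significand.
    subnormal = smallest_normal / 2
    print(subnormal)                          # 1.1125369292536007e-308
    print(subnormal > 0.0)                    # True: nonzero, but below the normal range

    # The smallest positive subnormal is 2**-1074; halving it rounds to zero.
    print((2**-1074) / 2)                     # 0.0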

References

  1. IEEE Standard for Floating-Point Arithmetic, 2008-08-29, doi:10.1109/IEEESTD.2008.4610935, ISBN 978-0-7381-5752-8. Retrieved 2015-04-26.