
Floating Point Arithmetic Limitations in Python

To clarify before moving forward: the floating point arithmetic issue is not particular to Python. Almost all languages (C, C++, Java, etc.) will often not display the exact decimal number you expect.

First we will discuss what the floating point arithmetic limitations are. Then we will see why this happens and what we can do to compensate for it.

I hope you all have some idea of representations in different bases, such as base 10 and base 2. Let's get started.

To understand the problem, let's look at the expressions below.
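The expressions themselves don't survive in the text, so here is a sketch of the kind of comparisons in question. The first follows from the discussion below; the second (0.1 + 0.1 + 0.1 + 0.1 == 0.4) is an assumed example of a similar sum that, surprisingly, does compare equal:

```python
# Adding 0.1 three times does not compare equal to 0.3
print(0.1 + 0.1 + 0.1 == 0.3)  # → False
print(0.1 + 0.1 + 0.1)         # → 0.30000000000000004

# Yet a similar sum can compare equal (the rounding happens to cancel out)
print(0.1 + 0.1 + 0.1 + 0.1 == 0.4)  # → True
```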

Let's take the first expression, the sum of three 0.1 values failing to compare equal to 0.3, and understand why this happens.

We know that computers operate in binary (base 2 fractions). For example, the decimal fraction 0.125 is represented as 0.001 in binary, since 0.125 = 1/8 = 2^-3.

There is no problem with integers, as we can represent them exactly in binary. Unfortunately, though, most decimal fractions can't be represented exactly as binary fractions.
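We can see this directly with `float.hex()`, which prints the exact binary (in hexadecimal notation) form of a float:

```python
# 0.125 is exactly a power of two, so its binary form is exact
print((0.125).hex())  # → '0x1.0000000000000p-3'  (exactly 2**-3)

# 0.1 has no exact binary form; we get a rounded 53-bit approximation
print((0.1).hex())    # → '0x1.999999999999ap-4'
```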

Whenever we divide by a number that has a prime factor which doesn't divide the base, we get a non-terminating representation.

Thus for base 10, any x = p/q (in lowest terms) where q has prime factors other than 2 or 5 will have a non-terminating representation.

The simplest example is representing 1/3 in decimal (0.3, 0.33, ..., 0.33333...). No matter how many digits (or 3's) we use, we still can't represent it exactly.
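The rule above can be checked mechanically. The sketch below (a hypothetical helper, not from the original post) tests whether p/q terminates in a given base by reducing the fraction and stripping from the denominator every prime factor it shares with the base:

```python
from math import gcd

def terminates(p, q, base):
    """Return True if p/q has a terminating representation in `base`.

    A fraction terminates iff, in lowest terms, every prime factor of
    the denominator also divides the base.
    """
    q //= gcd(p, q)        # reduce to lowest terms
    g = gcd(q, base)
    while g > 1:           # strip factors shared with the base
        while q % g == 0:
            q //= g
        g = gcd(q, base)
    return q == 1

print(terminates(1, 3, 10))   # 1/3 repeats in base 10 → False
print(terminates(1, 8, 10))   # 1/8 = 0.125 terminates in base 10 → True
print(terminates(1, 8, 2))    # 1/8 = 0.001 terminates in base 2 → True
print(terminates(1, 10, 2))   # 1/10 repeats forever in base 2 → False
```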

Similarly, 0.1 is the simplest and most commonly used example of an exact decimal number which can't be represented exactly in binary. In base 2, 1/10 is an infinitely repeating fraction in which the block 0011 repeats forever, as shown below

0.0001100110011001100110011001100110011001100110011…

So Python, like most platforms, converts 0.1 to the closest representable fraction using 53 bits of precision (IEEE-754 double precision).
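We can inspect the exact value Python actually stores for 0.1 using the standard-library `decimal` and `fractions` modules:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(float) shows the exact decimal expansion of the stored binary value
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625

# Fraction(float) shows the same stored value as an exact ratio of integers
print(Fraction(0.1))
# → 3602879701896397/36028797018963968
```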

Thus, when we write the decimal 0.1, it is not stored in the computer exactly but as a close approximation. Since 0.1 is not exactly 1/10, summing three values of 0.1 may not yield exactly 0.3 either: 0.1 + 0.1 + 0.1 evaluates to 0.30000000000000004, which does not compare equal to 0.3.

But this seems to contradict the second expression mentioned above, where a similar sum does compare equal to the expected result.

This too is a consequence of IEEE-754 double precision. Every result must fit into 53 bits of precision, so bits have to be dropped off the end of each intermediate result, which causes rounding at every step. Sometimes the accumulated error is visible in the final result, and sometimes the intermediate rounding happens to land exactly on the closest double to the decimal value we expect, so the comparison succeeds.

For use cases which require exact decimal representation, you can use the decimal or fractions modules; for comparisons, rounding (e.g. round() or math.isclose()) can also help.
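A short sketch of these workarounds applied to the failing comparison from earlier:

```python
from decimal import Decimal
from fractions import Fraction
import math

# decimal: exact base-10 arithmetic (construct from strings, not floats)
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # → True

# fractions: exact rational arithmetic
print(Fraction(1, 10) * 3 == Fraction(3, 10))  # → True

# plain floats: round before comparing, or use an approximate comparison
print(round(0.1 + 0.1 + 0.1, 10) == round(0.3, 10))  # → True
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))            # → True
```

Note that Decimal("0.1") (from a string) is exactly one tenth, while Decimal(0.1) (from a float) inherits the binary approximation; the string form is what you want for exact decimal work.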

Hope you enjoyed reading.

If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or to improve. Goodbye until next time.