Floating-Point Representation Error

00:00 In this lesson, you’ll learn about the representation error, which affects the precision of floating-point numbers in virtually every programming language, including Python.

00:11 Before exploring floating-point numbers, it helps to take a step back and think about fixed-point numbers, which provide one of the most straightforward ways of representing real numbers in computers.

00:22 The idea behind fixed-point numbers is to allocate a specific number of digits for the whole part and the fractional part. Therefore, the placement of the decimal point as well as the total number of digits are known up front and never change.
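To make the trade-off concrete, here’s a minimal fixed-point sketch in Python, storing every value as an integer count of hundredths so that exactly two fractional digits are reserved. The helper names and the two-digit scale are illustrative choices, not something from the lesson:

SCALE = 100  # 10**2: exactly two fractional digits, fixed up front

def to_fixed(text):
    # Parse a decimal string into a scaled integer, e.g. "19.99" -> 1999.
    whole, _, frac = text.partition(".")
    frac = (frac + "00")[:2]  # pad or truncate to exactly two digits
    return int(whole) * SCALE + int(frac)

def fixed_to_str(number):
    # Render a scaled integer back as a decimal string, e.g. 2159 -> "21.59".
    return f"{number // SCALE}.{number % SCALE:02d}"

price = to_fixed("19.99")
tax = to_fixed("1.60")
print(fixed_to_str(price + tax))  # 21.59

The arithmetic stays exact, but the number of digits on each side of the point is locked in, which leads straight to the problem described next.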

00:36 The biggest problem with such a representation is its storage inefficiency, which limits the range of values you can express. Notice how much space in these examples goes to waste, filled with nothing but zeros. Floating-point numbers, on the other hand, can adapt to the size of the number they represent.

00:55 In floating-point representation, the decimal point can float left or right to accommodate numbers of various sizes. That way, you can use the available space more effectively without wasting memory. Really big numbers, such as the speed of light, don’t usually need a lot of fractional precision, so it’s probably okay to devote more digits to the whole part while trading off the fractional part. Conversely, smaller numbers, like the size of an atom, benefit from more fractional digits, which you get by floating the decimal point to the left.
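To put rough numbers on it, Python’s float is a 64-bit IEEE 754 double, so both extremes fit in a value of the same fixed size. The two constants below are approximate physical figures chosen purely for illustration:

import sys

speed_of_light = 299_792_458.0  # about 3 x 10**8 m/s
atom_radius = 5.3e-11           # about the Bohr radius in meters

# Wildly different magnitudes, identical memory footprint:
print(sys.getsizeof(speed_of_light) == sys.getsizeof(atom_radius))  # True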

01:30 As long as you’re okay with trading off precision for a number’s magnitude and the other way around, you’ll be able to store an enormous range of numbers in a relatively small amount of memory. The bigger the floating-point number, the lower its precision.

01:45 At the same time, smaller numbers tend to have a greater precision.
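You can observe this trade-off directly with math.ulp(), available since Python 3.9, which returns the gap between a float and the next representable value:

>>> import math
>>> math.ulp(1.0)
2.220446049250313e-16
>>> math.ulp(1e6)
1.1641532182693481e-10
>>> math.ulp(1e16)
2.0

The gaps between neighboring floats widen as the magnitude grows. Above 10**16, the spacing exceeds one, so the float type can no longer even tell some whole numbers apart.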

01:55 Now you know that floating-point numbers owe their name to the way they’re represented in a computer’s memory. They’re essentially scientific notation in the binary system with a few additional frills, which make them exceptionally efficient in terms of memory use. Unfortunately, the price for such a representation is losing the ability to express some numbers exactly, which can lead to very surprising results, like the one you can see on the current slide. Have a look at a few examples in IDLE.
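For instance, you can peek at this binary scientific notation with the float.hex() method, which shows the significand and a power-of-two exponent:

>>> (0.5).hex()
'0x1.0000000000000p-1'
>>> (0.1).hex()
'0x1.999999999999ap-4'

Here, 0.5 is exactly 1.0 times 2 to the power of -1, while 0.1 is stored as a rounded binary significand, which is the root of the surprise mentioned above.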

02:18 When you type 0.1 + 0.2 in Python, you get a result that’s pretty close to what you’d expect, but not exactly right. The reason for such a strange outcome is that the two numbers you tried adding up don’t have an exact representation in the floating-point data type, which is only an approximation of the real numbers. By default, Python only shows the most significant digits of a number. To reveal the remaining ones, you can format your number, requesting, say, a hundred decimal digits. However, the float data type only retains about fifty-five meaningful digits.
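Here’s how that looks in an IDLE session:

>>> 0.1 + 0.2
0.30000000000000004
>>> f"{0.1 + 0.2:.100f}"
'0.3000000000000000444089209850062616169452667236328125000000000000000000000000000000000000000000000000'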

02:58 If you request more digits, then Python will pad the number with zeros on the right because there’s no more information to show. Now, where do all of these seemingly random digits come from?

03:10 Without getting too technical, you must remember that computers keep all numbers in the binary system using just ones and zeros rather than the decimal system with ten digits from zero to nine.

03:22 When you convert one-tenth or two-tenths to the binary system, you’ll end up with a recurring binary string that repeats infinitely. It’s a familiar problem that you’ll face in the decimal system too when you try to represent one-third, for example.
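In Python, you can reveal the rounded binary value that actually gets stored in place of one-tenth by using the fractions module:

>>> from fractions import Fraction
>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)

The denominator is 2**55, so the stored number is a binary fraction that merely approximates 1/10.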

03:38 Representing one-third in decimal gives 0.3 recurring. Because computers don’t have an infinite amount of memory, at some point they have to chop off the binary string and round the result to the nearest representable value. On the other hand, as long as you stay within the realm of the binary system, you’ll be able to represent a number exactly using the floating-point data type. For example, 0.25 is a power of two (2**-2), so it has a finite binary expansion.

04:06 Same with one half.

04:10 When you add them together, you’ll get an exact result without any rounding errors.
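A quick IDLE session confirms this. Since 0.25 is 2**-2 and 0.5 is 2**-1, all the values involved have finite binary expansions:

>>> 0.25 + 0.5
0.75
>>> 0.25 + 0.5 == 0.75
True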

04:17 To reiterate, a floating-point number is usually just an approximation of the real number, kind of like the lossy compression in JPEG images.

04:29 Don’t worry if this is making your head spin. In most engineering applications, the floating-point representation error remains within acceptable bounds, and the results are satisfactory. However, in scientific or financial calculations, the rounding error can become noticeable. In those cases, you should look into other numeric types, like the decimal type mentioned earlier in this course and shown briefly below. Okay, with that out of the way, it’s time to take a closer look at the math functions and number methods in Python.
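As a quick taste, the decimal module works in base ten, making the earlier surprise disappear. Note that you construct Decimal values from strings rather than floats, since a float argument would smuggle the binary error back in:

>>> from decimal import Decimal
>>> Decimal("0.1") + Decimal("0.2")
Decimal('0.3')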
