
Python Integers

00:00 In the previous lesson, I showed you all the different ways to be negative—numerically, that is. In this lesson, I’ll run down how Python stores integers so that in the next lesson, you can be prepared for manipulating their bits.

00:14 Python has two different ways of storing integers and seamlessly switches between them. The first is fixed precision. This is based on the underlying C language’s signed long.

00:25 That’s an eight-byte integer on most platforms.

00:29 And the second is called arbitrary precision, which allows you to store numbers past the maximum specified by eight bytes. In fact, arbitrary precision has no maximum value at all.
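You can see that lack of a maximum for yourself. In this sketch, `int.bit_length()` reports how many bits a number needs, showing a value far past anything eight bytes could hold:

```python
# Arbitrary precision in action: Python computes integers far past
# the eight-byte limit without any special syntax.
big = 2**10000
print(big.bit_length())  # 10001 bits, far more than 64
print(big > 2**63)       # True
```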

00:43 On a 64-bit platform, the eight-byte representation of an integer can store from -2 to the 63 all the way up to one less than positive 2 to the 63. That’s an upper limit of 9,223,372,036,854,775,807. To store negative values, Python’s fixed-precision integer uses two’s complement storage.
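On CPython, that boundary shows up in `sys.maxsize`. Strictly speaking, `sys.maxsize` is the maximum of a C `Py_ssize_t` rather than of an int, but on a 64-bit build it works out to the same 2 to the 63 minus 1:

```python
import sys

# sys.maxsize reflects the platform word size: 2**63 - 1 on a
# 64-bit build, 2**31 - 1 on a 32-bit build.
print(sys.maxsize)
print(sys.maxsize == 2**63 - 1)  # True on a 64-bit platform
```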

01:06 As you may recall from previous lessons, flipping bits on a two’s complement number can have strange effects.

01:13 What if a nineteen-digit number is too small for you? Well, Python’s got your back. The arbitrary-precision integer has no limit and is based on a series of numbers stored in base 2 to the 30. Yup, you read that right. Not base 30. Base 2 to the 30.

01:31 I’m sure the Mayans and their paltry base-360 are jealous. As an example, this uncomfortably large number is stored as 973 plus 4,353 times 2 to the 30 plus 50,934 times 2 to the 60.
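You can pull those base-2-to-the-30 digits out yourself with repeated `divmod`. This is just a sketch, and `base_2_30_digits` is a made-up helper name; CPython exposes the real digit size through `sys.int_info.bits_per_digit`, which is usually 30 on a 64-bit build:

```python
import sys

print(sys.int_info.bits_per_digit)  # usually 30 on a 64-bit build

def base_2_30_digits(n):
    """Decompose a non-negative int into base-2**30 digits, least significant first."""
    digits = []
    while n:
        n, digit = divmod(n, 2**30)
        digits.append(digit)
    return digits or [0]

n = 973 + 4_353 * 2**30 + 50_934 * 2**60
print(base_2_30_digits(n))  # [973, 4353, 50934]
```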

01:51 Python conveniently switches back and forth between fixed- and arbitrary-precision based on the size of the number being stored. In fact, Python does something even trickier than that.

02:02 Everything in Python is an object, integers included, so there’s more overhead in storing them than just the eight bytes. Common integers are cached at startup and treated as singletons.

02:13 If you’re bored someday and want to see some internal magic, use the id() function to look at integer values between -5 and 256. The id() function shows you the unique identifier of an object.

02:27 With values between -5 and 256, new instances of the same number result in the same ID. Do that with a number bigger than 256, and you’ll get multiple instances, no singleton. The compiler will even do further optimizations if the same number shows up in the same line.
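Here’s a quick way to see the cache in action. The values are built with `int("...")` at runtime so the compiler’s constant folding doesn’t muddy the result, and keep in mind that this singleton behavior is a CPython implementation detail, not a language guarantee:

```python
# Values from -5 to 256 come from CPython's small-int cache, so two
# independently constructed 100s are the very same object.
a = int("100")
b = int("100")
print(a is b, id(a) == id(b))  # True True

# Anything past 256 gets a fresh object each time.
c = int("257")
d = int("257")
print(c is d)  # False
```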

02:50 Lots of things baked in here to help you get some performance with the common numeric cases.

02:56 Arbitrary-precision numbers are sometimes referred to as big nums. Doing math on big nums is expensive. Unlike with a fixed-precision integer or a float, the CPU isn’t built to do this kind of math directly. Of course, if you’re playing with numbers this big, the precision you get out of a big num might be worth the performance trade-off. What about negative numbers? Well, arbitrary-precision numbers don’t use two’s complement storage for negatives.

03:23 I’ll get back to that in just a second.

03:27 Doing bitwise operations on arbitrary-precision numbers is a multi-step process with even more steps if the value happens to be negative. First, it has to be converted to a series of bits in two’s complement form.

03:41 Then the bitwise operator is applied. Then the result is converted back into an arbitrary-precision integer. This back-and-forth ensures that bitwise operations do the same thing regardless of the size of the integer. Consistency is good, but of course, that is consistency with two’s complement, which I believe I have mentioned is a bit weird with bitwise operations.
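That convert, operate, convert-back dance can be sketched by hand for a single operator. This is only an illustration with an assumed 8-bit width and a made-up helper name; Python’s real implementation works digit by digit on the internal representation:

```python
def and_via_twos_complement(a, b, bits=8):
    """Bitwise AND via two's complement: mask to bit patterns, operate, convert back."""
    mask = (1 << bits) - 1
    pattern = (a & mask) & (b & mask)   # operate on the raw bit patterns
    if pattern & (1 << (bits - 1)):     # sign bit set: convert back to negative
        pattern -= 1 << bits
    return pattern

print(and_via_twos_complement(-6, 13))   # 8, same as -6 & 13
print(and_via_twos_complement(-6, -13))  # -14, same as -6 & -13
```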

04:06 Just a quick recap before getting into the actual bitwise ops. Python doesn’t have an unsigned integer. It uses two’s complement representation for negative numbers, which can have surprising effects when interacting with bitwise operations.

04:21 But for most use cases, this doesn’t really matter. Typically, if you’re doing bitwise math, you’re doing it on positive numbers that are small enough to be using the fixed-precision representation.

04:33 Now that you’ve got a handle on how integers are stored in Python, let’s do some bitwise ops.
