
Time Complexity Overview

00:00 In this lesson, I’m going to give a short overview of measures of time complexity—that is, different ways to measure how long an algorithm takes to run.

00:09 And in the process, I’ll give a bit of an overview of space complexity as well, which is how much space an algorithm takes in your computer’s memory. If you’re already familiar with runtime analysis—Big O notation and the like—then you can probably safely skip this video and move on to the discussion of bubble sort. So, generally, in computer science, people are really focused on getting things to run fast.

00:35 So, algorithms: you want them to be as fast as they possibly can be, because that’s what allows us to do things like send data over networks and enables the internet and all of that sort of stuff. Now, there are two general ways to measure how long an algorithm takes.

00:50 You can measure it directly, in minutes, seconds, milliseconds, and so on, or you can measure it in what I’m going to call relative terms, though not in a strict sense of the word, which just means in terms of the number of operations completed.
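To make that concrete, here’s a minimal sketch of both kinds of measurement in Python. This is my own illustration, not code from the lesson: the sum_list() function is a hypothetical example, the direct measurement uses time.perf_counter(), and the relative measurement is just a count of operations.

```python
import time

def sum_list(numbers):
    """Add up a list of numbers, one addition per element."""
    total = 0
    for number in numbers:  # one addition operation per element
        total += number
    return total

data = list(range(1_000_000))

# Direct measurement: wall-clock time in seconds.
# This result depends on how fast your processor is.
start = time.perf_counter()
sum_list(data)
elapsed = time.perf_counter() - start
print(f"Direct: {elapsed:.4f} seconds")

# Relative measurement: count operations instead of seconds.
# sum_list() does one addition per element, so ~len(data)
# operations, no matter what hardware it runs on.
print(f"Relative: {len(data)} addition operations")
```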

01:04 Now, it’s important to measure relatively in a lot of cases, especially in the world of theoretical computer science, because the processors that computers run on have been getting faster and faster thanks to hardware improvements over the last 50 to 60 years.

01:19 So an algorithm that might have taken minutes to run in 1975 or something like that might now take milliseconds, even though the algorithm itself is still performing the same number of operations, simply because the processor can do many more operations per second now than it could back then. So normally when you talk about algorithmic runtime, you speak relatively. And a way to do that is to use what’s called Big O notation.

01:46 Big O notation represents the worst-case runtime of an algorithm as a function of the size of its input. The size of the input is n, and you define some function of n that gives you a general idea of the worst-case number of operations the algorithm performs. The idea is: if everything goes wrong, how many operations would the algorithm do? So, how many addition operations, how many comparisons or memory accesses, or anything like that?

02:16 You’re just saying “How many things do we need to do to figure out the answer to this problem?”

02:23 Commonly in computer science, algorithms can be divided into different classes based on their worst-case runtime. These are some common runtimes of algorithms in functional notation, right?

02:35 So a really good runtime, for example, is one that takes a constant number of operations. So think, for example, if you want to just multiply a number by 20—that will always take one multiplication operation.

That’s what we call a constant time algorithm: a function that just multiplies a number by 20 will always take the same number of operations, exactly one multiplication, regardless of the input. Now, on the other hand, if you want to multiply every number in a list by 20, you’ll always have to do the same number of operations as there are items in the list, which is n. That’s what we call an order of n, or O(n), algorithm.
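Here’s a quick sketch of those two cases in Python. These functions are my own illustrations, not code from the lesson:

```python
def times_twenty(number):
    # Constant time, O(1): always exactly one multiplication,
    # no matter how big the number is.
    return number * 20

def times_twenty_all(numbers):
    # Linear time, O(n): one multiplication per list element.
    return [number * 20 for number in numbers]

print(times_twenty(7))               # 140
print(times_twenty_all([1, 2, 3]))   # [20, 40, 60]
```

Doubling the length of the list doubles the work done by the second function, but the first one never changes.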

03:13 Now, some other common algorithms you might be familiar with: binary search, for example, is what we call a log(n) algorithm. Most sorting algorithms, or most fast sorting algorithms, I should say, are n log(n), so n times the log base 2 of n.
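For reference, here’s a minimal binary search sketch of my own, not code shown in the lesson. Each comparison throws away half of the remaining search range, which is where the log base 2 of n comes from:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Assumes sorted_items is already sorted in ascending order.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1

print(binary_search([2, 5, 8, 13, 21], 13))  # 3
```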

03:28 And then you have other sorting algorithms that run in n squared time. Beyond those, you start to run into some really slow algorithms for really difficult problems, where you might have a constant raised to the power of n, or even an n factorial number of operations. Now, in practice, most algorithms that are this slow don’t actually get used, simply because something like n factorial grows very, very quickly: if you have, say, a hundred elements in your list and your algorithm runs in factorial time, then it’s going to take an extremely long time to complete. Of course, certain algorithms in the same class might be faster or slower than one another in practice because the constant factor might be slightly different.
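To get a feel for how quickly those slow classes blow up, here’s a tiny back-of-the-envelope comparison, again my own sketch rather than the lesson’s code:

```python
import math

# Approximate worst-case operation counts for three runtime classes.
for n in (10, 20, 100):
    print(
        f"n={n:>3}:  n^2 = {n**2:.1e},  "
        f"2^n = {2**n:.1e},  n! = {math.factorial(n):.1e}"
    )
```

Even at n = 20, the factorial count is already in the quintillions, which is why factorial-time algorithms are avoided in practice.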

04:07 But this is an easy way to classify runtimes, because each category gives a general description of how fast the runtime will grow as a function of the size of the input.

04:19 And when we speak in computer science terms, it’s normally important to think about the growth of the runtime as inputs get large, because computer scientists are often working with huge amounts of data, right? Big data: all of the giant tech companies are processing huge amounts of data, and so they want algorithms whose runtime grows slowly on large data. So, that was a lightning-speed overview of time complexity.

04:46 You can also have space complexity, right? It follows the same general rules: how much memory do you use as a function of the size of the input? That won’t strictly be the focus of our discussion of sorting algorithms, but it is useful to know.

05:00 Space complexity generally follows the same kind of patterns as time complexity. You normally hear the two spoken about in the same way.
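As one final sketch, here’s my own illustration of the difference, not code from the lesson: reversing a list in place uses O(1) extra space, while building a reversed copy uses O(n) extra space, even though both run in O(n) time.

```python
def reverse_in_place(items):
    # O(1) extra space: swap elements pairwise inside the
    # existing list, allocating no new storage.
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1
    return items

def reversed_copy(items):
    # O(n) extra space: allocate a whole new list.
    return items[::-1]

print(reverse_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]
print(reversed_copy([1, 2, 3, 4]))     # [4, 3, 2, 1]
```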
