Caching in Python With lru_cache (Overview)

There are many ways to achieve fast and responsive applications. Caching is one approach that, when used correctly, can make your application significantly faster while decreasing the load on computing resources.

Python’s functools module comes with the @lru_cache decorator, which gives you the ability to cache the results of your functions using the Least Recently Used (LRU) strategy. This is a simple yet powerful technique that you can use to leverage the power of caching in your code.
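To get a feel for what that looks like, here’s a minimal sketch that applies @lru_cache to the classic recursive Fibonacci function, a common way to demonstrate memoization:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result; pass an integer to cap the cache size
def fibonacci(n):
    """Naive recursive Fibonacci, fast only because repeated calls hit the cache."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(35))  # 9227465, returned almost instantly thanks to the cache
```

Without the decorator, this implementation recomputes the same subproblems exponentially many times; with it, each fibonacci(n) is computed exactly once.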

In this video course, you’ll learn:

  • What caching strategies are available and how to implement them using Python decorators
  • What the LRU strategy is and how it works
  • How to improve performance by caching with the @lru_cache decorator
  • How to expand the functionality of the @lru_cache decorator and make it expire after a specific time

By the end of this video course, you’ll have a deeper understanding of how caching works and how to take advantage of it in Python.

00:00 Welcome to Caching in Python with LRU. My name is Christopher, and I will be your guide. This course is primarily about the @lru_cache decorator in the functools module, but you’ll need a bit of background to better understand what it is doing.

00:16 In this course, you will learn about using caches to improve your code’s speed, choosing from different caching styles, writing cached functions with the Least Recently Used policy, and augmenting a decorator to add functionality.

00:32 The code I’ll be showing you was tested using Python 3.10. The @lru_cache decorator has been around since Python 3.2. So if you’re not running the latest, you’re probably still okay.

00:43 That being said, 3.10 has some great improvements to its syntax error messages, which make finding problems that much easier. You really should upgrade if you can.

00:54 A cache is a common optimization in the computing field. It is used to gain speed in your programs at the cost of using up memory. Expensive calculations or data that takes a long time to fetch are put into the cache, and subsequent requests for the same content can be answered more quickly.
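In its simplest form, a cache is just a lookup table sitting in front of the slow work. As an illustration, here’s a hand-rolled sketch with a hypothetical expensive_square() function, where a sleep stands in for a slow calculation or fetch:

```python
import time

cache = {}  # maps each argument to its previously computed result

def expensive_square(n):
    """Hypothetical slow computation: the sleep stands in for real work."""
    if n not in cache:   # cache miss: do the slow work exactly once
        time.sleep(1)    # simulate an expensive calculation or data fetch
        cache[n] = n * n
    return cache[n]      # cache hit: return the stored result immediately

expensive_square(4)  # first call takes about a second
expensive_square(4)  # second call returns instantly from the cache
```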

01:12 A lot of software follows what’s called the principle of locality. That means if you’ve used a piece of data before, you’re likely to use it again, or at least more likely than you are to use something you haven’t touched yet.

01:23 Python provides you with a self-contained caching mechanism as a decorator. With a few small restrictions, you basically just need to wrap your function and you’ll be taking advantage of a cache. In this course, I’ll walk you through the trade-offs and show you both how to use this decorator and how it works.
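One of those restrictions is worth knowing up front: the arguments of the wrapped function are used as dictionary keys, so they must be hashable. Here’s a small sketch, using a made-up slow_double() function, of wrapping a function and inspecting the cache with the decorator’s built-in cache_info() method:

```python
from functools import lru_cache

@lru_cache(maxsize=128)  # keep at most 128 results, evicting the least recently used
def slow_double(n):      # hypothetical stand-in for an expensive function
    return n * 2

slow_double(21)          # miss: computed and stored
slow_double(21)          # hit: served straight from the cache
print(slow_double.cache_info())
# CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)

# slow_double([1, 2])    # would raise TypeError: unhashable type: 'list'
```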

01:42 There are several different ways of writing a cache. In the next lesson, I’ll show you some of the common implementations.
