Making your code run faster is a useful skill whether you're learning Python or using it every day. Whether you're building a small school project or writing a script to automate tasks, better performance always helps. One simple way to speed things up is memoization: saving the answers to function calls so that the next time the same input comes up, Python can return the saved answer right away instead of doing the work again.
Thankfully, Python gives us a handy tool for this called functools.lru_cache. This tool makes it really easy to add memoization to your functions. In this article, we’ll explain what memoization is, how lru_cache works, and when it’s best to use it. We’ll also go through examples and explain exactly what’s happening in each case so you can apply it to your own code.
What is Memoization?
Memoization is a way to make functions more efficient by remembering the results of calculations. If a function gets called again with the same inputs, Python can skip the calculation and just use the result it saved earlier. This can make your program run much faster, especially when you’re doing lots of repeated work.
It’s really helpful for functions that use loops or recursion, like ones that calculate Fibonacci numbers, check prime numbers, or solve math problems by breaking them into smaller parts. Without memoization, these kinds of functions might repeat the same calculations many times.
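To make the idea concrete, here is a minimal hand-rolled memoization sketch using a plain dictionary. The function name and the "expensive" work are just for illustration:

```python
# A hand-rolled memo: results are stored in a dict keyed by the input.
cache = {}

def slow_double(n):
    if n in cache:       # have we seen this input before?
        return cache[n]  # reuse the stored answer
    result = n * 2       # pretend this is an expensive calculation
    cache[n] = result    # remember it for next time
    return result

print(slow_double(21))  # computed: 42
print(slow_double(21))  # reused from the cache: 42
```

This is exactly the pattern lru_cache automates for you, with eviction and bookkeeping handled automatically.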
What is functools.lru_cache?
Python includes a special decorator called lru_cache in the functools module. Decorators are ways to change how functions work without rewriting them. When you add @lru_cache above a function, Python will start remembering the results of that function automatically.
“LRU” stands for “Least Recently Used.” When the cache fills up, Python discards the entries that haven’t been used for the longest time to make room for new ones. You can control how many results Python should keep by setting the maxsize argument.
Example: Fibonacci Numbers
Let’s look at the classic example of the Fibonacci sequence. It’s a perfect case where memoization can save a lot of time.
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # Output: 55
This function calculates the 10th number in the Fibonacci sequence. Without memoization, the function would call itself many times and repeat the same calculations. But with @lru_cache, Python remembers the results of earlier calls like fibonacci(1) and fibonacci(2), so it can reuse them. This makes the function run much faster.
If you tried to calculate fibonacci(30) or even fibonacci(100), you’d really see the difference in speed.
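One way to see the speedup for yourself is to time a cached and an uncached version side by side. This is a rough sketch; the exact numbers will vary by machine, but the gap is dramatic because the plain version makes over a million recursive calls for n=30 while the cached one makes about thirty:

```python
import time
from functools import lru_cache

def fib_plain(n):
    if n < 2:
        return n
    return fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

start = time.perf_counter()
fib_plain(30)
plain_time = time.perf_counter() - start

start = time.perf_counter()
fib_cached(30)
cached_time = time.perf_counter() - start

print(f"plain:  {plain_time:.4f}s")
print(f"cached: {cached_time:.4f}s")
```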
Setting a Cache Limit
The maxsize option lets you control how many saved results to keep in memory. If you set maxsize=None, Python saves everything with no limit (in Python 3.9 and later, functools.cache is a shorthand for exactly this). But if your function gets called with many different inputs, an unbounded cache could use a lot of memory. To avoid that, you can limit the cache size.
Here’s a quick example:
from functools import lru_cache

@lru_cache(maxsize=3)
def square(n):
    print(f"Calculating square of {n}")
    return n * n

square(2)
square(3)
square(4)
square(2)  # This one uses the saved answer
square(5)  # Adds a new result and evicts the least recently used one
square(3)  # No longer saved, so it recalculates
- The first time we call the function with 2, 3, and 4, Python saves the results.
- When we call square(2) again, it uses the saved result and skips printing.
- When we call square(5), the cache is full, so Python evicts the least recently used result — square(3), since square(2) was just used again — to make space.
- If we call square(3) again, it has to do the calculation over since its result was removed.
This example shows how lru_cache chooses what to save and what to forget when the cache is full.
Managing the Cache
Besides just remembering results, lru_cache also gives you tools to check and clear the cache. You can:
- Use cache_info() to see how many times results were reused.
- Use cache_clear() to delete all saved results and start fresh.
from functools import lru_cache

@lru_cache(maxsize=2)
def multiply(x, y):
    return x * y
multiply(2, 3)
multiply(2, 3)
print(multiply.cache_info()) # Shows 1 hit and 1 miss
multiply.cache_clear()
print(multiply.cache_info()) # Everything reset to zero
- On the first call, Python saves the result of multiply(2, 3).
- On the second call, it uses the saved result.
- cache_info() shows one cache hit (used saved result) and one miss (had to calculate).
- After calling cache_clear(), all saved results are erased.
These tools are helpful when you want to measure performance or reset things during testing.
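For a closer look at the statistics, cache_info() returns a named tuple with four fields — hits, misses, maxsize, and currsize — which you can read individually:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def multiply(x, y):
    return x * y

multiply(2, 3)  # miss: computed and saved
multiply(2, 3)  # hit: served from the cache

info = multiply.cache_info()
print(info.hits, info.misses, info.maxsize, info.currsize)  # 1 1 2 1
```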
When Should You Use lru_cache?
Here are good times to use lru_cache:
- When your function gets called often with the same inputs.
- When your function takes a long time to compute.
- When your function always gives the same output for the same input (this is called a pure function).
Some examples include:
- Mathematical functions like factorial, Fibonacci, or GCD.
- Expensive data-processing functions.
- Searching algorithms or pathfinding functions in games.
Using lru_cache makes sense when repeating calculations wastes time and resources.
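As a sketch of the "expensive pure function" case, here is a deliberately slow prime check wrapped in lru_cache. The function itself is just for illustration; the point is that its answer depends only on its input, so caching it is safe:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def is_prime(n):
    # A pure function: the answer depends only on n, so caching is safe.
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(97))  # True, computed by trial division
print(is_prime(97))  # True, served from the cache
```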
Things to Be Careful About
Even though lru_cache is powerful, it’s important to use it wisely:
- Don’t set the cache size too large, or you could run out of memory.
- Only use it with functions whose arguments are hashable, immutable values like numbers, strings, or tuples.
- If your function uses lists or dictionaries as arguments, it won’t work with lru_cache because those are not hashable.
- For very small or fast functions, using a cache might actually slow things down.
So make sure to test your function both with and without caching before you decide to keep it.
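One common workaround for the unhashable-arguments limitation is to convert a list to a tuple before calling the cached function. A minimal sketch, with an illustrative function name:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(numbers):
    # numbers must be hashable, so callers pass a tuple instead of a list.
    return sum(numbers)

data = [1, 2, 3]
print(total(tuple(data)))  # 6 — tuples are hashable, lists are not
```

Calling total(data) directly with the list would raise a TypeError, because lru_cache needs to use the arguments as a dictionary key.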
Python’s functools.lru_cache is a simple and useful tool that can help speed up your programs by remembering function results. It’s perfect for situations where your code repeats the same work. Whether you’re solving math problems, processing data, or building a game, memoization with lru_cache can save time and effort. Just remember to use it with the right kind of functions and test your program to make sure it’s actually improving speed. With this tool, you can take your Python skills to the next level and write more efficient, powerful code.
Memoization and Caching with functools.lru_cache was originally published in ScriptSerpent on Medium.