Blog / Locks in asynchronous applications in Rust
So, picture this: you’re writing an app, perhaps using an async runtime like tokio.
All is well and you’re adding feature after feature. One day, however,
you need shared mutable state between your tasks,
perhaps to store some changing common value.
Now, unless you’re some kind of wizard, you probably need a primitive
that gives you mutable access to some data through a shared reference. Around these parts
we like to call this a lock. Now, this sounds scary, but I promise you it isn’t that bad,
well, unless you factor in
async that is. You see, over in synchronous land figuring out what
lock to use is quite easy. In most cases, you just pick the
Mutex<T> primitive from your
favorite library, be that
std or perhaps
parking_lot if you’re really cool.
Over in async land, however, picking an appropriate lock implementation can be anywhere
from confusing to downright daunting, depending on what you’ve read previously.
The tokio documentation doesn’t quite do the topic justice and often just
confuses people even more, or leads them to suboptimal decisions.
Fear not, however: by looking at a couple of concepts and distinguishing between two kinds of critical sections, we can drastically simplify this problem and equip you with the tools you need to figure out which kind of lock you need where.
The critical section
Let us go over some basic terminology.
You’ve probably encountered the word critical section before.
The term itself can seem a bit scary, but it simply denotes the section of the program between
the lock getting acquired and it subsequently getting released. To illustrate, here’s
a sample snippet that writes to
stdout within the critical section.
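A minimal sketch of what such a snippet might look like, using std’s Mutex (the identifiers here are illustrative, not from a real codebase):

```rust
use std::sync::Mutex;

/// Writes the current value to stdout from within the critical section.
fn print_message(message: &Mutex<String>) {
    let guard = message.lock().unwrap(); // critical section begins: lock acquired
    println!("{}", guard);               // work performed while holding the lock
}                                        // critical section ends: guard dropped, lock released
```

The critical section spans from the `lock()` call to the point where `guard` goes out of scope at the end of the function.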
Each time you use a lock, you need to carefully consider the critical section. Ideally, we want to keep it as short as possible, doing only precisely what needs to be done within it and nothing else. Anything that doesn’t need to be inside the critical section should be moved outside it. This helps performance by allowing other threads/tasks to take the lock sooner, enabling higher throughput and better latencies.
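To illustrate the principle, here’s a small sketch (the names are mine) where the expensive work happens before the lock is taken, leaving only a single cheap write inside the critical section:

```rust
use std::sync::Mutex;

/// Expensive formatting is done before taking the lock,
/// so the critical section contains only the write itself.
fn append_log_line(log: &Mutex<String>, event: &str) {
    let line = format!("event: {event}\n"); // work done outside the lock
    let mut guard = log.lock().unwrap();    // critical section begins
    guard.push_str(&line);                  // only the write happens here
}                                           // critical section ends
```

Had the `format!` call been inside the critical section, every other thread wanting the lock would have waited on it for no reason.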
Data and logic
Now that we have our critical section, we need to figure out what type it is. We can categorize all critical sections into two types: data criticals and logic criticals.
Data critical sections are primarily used when your lock contains trivial data that is updated with new data that isn’t derived from the data currently held in the lock. A data critical section should consist of reading and writing to the data inside the lock but not performing any real computation. Below is an example of a typical data critical section where the current temperature is updated.
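One way that example might look, sketched with std’s Mutex (the temperature names are assumptions on my part):

```rust
use std::sync::Mutex;

/// A data critical section: the new reading is not derived from the
/// value already in the lock, so we only write and immediately release.
fn update_temperature(current: &Mutex<f64>, new_reading: f64) {
    let mut guard = current.lock().unwrap(); // critical section begins
    *guard = new_reading;                    // a plain store, no computation
}                                            // critical section ends
```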
The other kind of critical section is called a logic critical. These protect not only data, but also logic from executing concurrently. You’ll see and use these most often when the data you intend to store in the lock is calculated from data already inside the lock. In these cases, you usually want to perform the computation inside the critical section to prevent the source data from changing before you store the result of your computation. Below is an example of a typical logic critical section where temperature is fetched from a database.
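A sketch of what this might look like; the database call is simulated with a sleep, and the smoothing formula is an assumption of mine:

```rust
use std::sync::Mutex;
use std::thread;
use std::time::Duration;

/// Stand-in for a real database query; simulated here with a sleep.
fn fetch_temperature_from_db() -> f64 {
    thread::sleep(Duration::from_millis(10));
    21.5
}

/// A logic critical section: the stored value is derived from the value
/// already inside the lock, so the read-compute-write sequence must run
/// while the lock is held to keep the source data from changing under us.
fn refresh_temperature(current: &Mutex<f64>) {
    let mut guard = current.lock().unwrap();   // critical section begins
    let fetched = fetch_temperature_from_db(); // slow I/O inside the lock
    *guard = (*guard + fetched) / 2.0;         // new value derived from the old
}                                              // critical section ends
```

Note how the lock is held across the slow fetch: releasing it earlier would allow another task to change the stored value between our read and our write.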
Picking your lock primitive
So, we’ve identified what kind of critical sections are relevant for the data we’re storing and the task at hand, but how does that help us?
The clue lies in the length and predictability of the critical sections, and in how
scheduling is done by the OS and by tokio.
If we look at typical data critical sections, they’re usually rather short and execute very quickly; we’re only copying some data, after all. They’re also very predictable: we know that copying data is always fast and that the execution time doesn’t depend on some unpredictable factor such as external APIs or disk I/O.
Due to this observation, we know that such a critical section very likely always makes optimal use of the processor and won’t block anyone else for long. In these cases, a synchronous lock such as
parking_lot is preferable. It has lower performance overhead,
and we are relatively unaffected by its downside of not allowing the
tokio scheduler to run other tasks on the thread while ours is blocked waiting for the lock.
Compared to data critical sections, logic criticals have vastly different characteristics. You’ll usually see them perform I/O or make OS syscalls. These operations have unpredictable execution times, and overall the lock is held for much longer than in a data critical section.
This means there are usually moments when our task is idle, waiting for some action to complete, such as a database query coming back.
This means you should choose an asynchronous lock such as the one in
tokio::sync, since it allows tokio to better schedule tasks to take advantage
of the idle waiting times during the critical section.
I originally got the idea to write an article like this after being asked about it many times by friends in the Rust community. Over the years I’ve probably explained this concept at least a dozen times to different people.
I’ve also seen many people use the wrong lock for the task at hand in libraries that I depend on, which itches me to no end.
Hopefully you’ve learned how to use locks from this article. More importantly, you hopefully now have a good framework for making informed decisions wherever sync and async interact, so you can solve the similar problems you will no doubt face in the future.