In this article, I'll explain how we can add multi-threading to our code with a single keyword, with no risk of data races!
Note that this is a theoretical feature, and not implemented yet; this article is just to give a hint at the direction Vale is going.
If you've never used it, believe me when I say that structured concurrency is a lot of fun.
With structured concurrency, you can launch multiple threads at once, to run your calculations in parallel, for a big speedup. And often, it can be done with a tiny adjustment.
Imagine we had this simple program that calculated x^3 for some values of x:
If pow were much more expensive, we might want to run those calculations in parallel.
It can be pretty tedious to add threading to C code. We'd have to wrap our results[i] = pow(i, exponent); line in an entire new function, and use pthread_create, like so:
That's a lot of code to just run results[i] = pow(i, exponent); in parallel!
With structured concurrency, we can do that with just one line, using OpenMP. Let's add a #pragma omp parallel for to our original program:
This is structured concurrency. It runs our iterations in parallel, and makes sure the parallel iterations are complete before continuing on.
Nathaniel Smith makes a great case that we should use structured concurrency rather than directly using thread APIs such as pthread_create, go, asyncio.create_task, etc. Structured concurrency isn't a silver bullet, of course, but it's definitely a big step forward.
My mind was blown the first time I used it with OpenMP. I had no idea that parallelism could be so easy!
And years later, I used CUDA to make a raytracer, which was similarly mind-boggling. CUDA even appears to have lambda support now, which basically gives us structured concurrency on the GPU. Incredible!
We only use 5 ints and the pow function as a simple example. In practice, we use threads for much larger data sets.
In fact, this toy example will probably be a lot slower, because threads have their own performance overhead, and because of false sharing. Threads tend to pay off for much larger data sets.
OpenMP is a really amazing tool for C structured concurrency because it's seamless: we parallelized the loop without refactoring any of the surrounding code.
In other words, seamless concurrency is the ability to read existing data concurrently without refactoring existing code.
This seamlessness is important because it saves us time, and we can more easily experiment with adding concurrency to more places in our program.
Some other implementations of concurrency aren't seamless, but they have other benefits. For example, fearless concurrency is immune to data race bugs. Read on to find out what that means, and how we can prevent them!
Concurrency is "fearless" if data races are impossible. So what's a data race, and why do we want to avoid them?
A data race is when two or more threads access the same data at the same time, at least one of those accesses is a write, and the accesses aren't synchronized.
Let's add a "progress" counter to our above snippet, and see a data race bug in action:
Look at that! There are two "Completed 4 iterations!" lines printed out. Why is that?
It's because the line numIterationsCompleted = numIterationsCompleted + 1; has three steps: load the current value of numIterationsCompleted, add 1 to it, and store the result back into numIterationsCompleted.
If two threads are running in parallel, we might see this happen: the 4th thread loads 3, then the 5th thread loads 3, then the 4th thread stores 4, and finally the 5th thread also stores 4.
The problem is that they didn't see each other's add operations; they're both adding 1 to the old value 3, to get 4.
If instead the 4th thread's store happened before the 5th thread's load, we'd have gotten the correct answer 5.
This is a data race bug. Data races are very difficult to detect, because they depend on the scheduling of the threads, which is effectively random.
Luckily, we've figured out how to avoid these problems, with some "concurrency ground rules": a thread may modify data only if no other thread can see it, and data that's visible to multiple threads must be immutable.
We usually need proper fear, respect, and discipline to stick to these rules. But when a language enforces these rules, we have fearless concurrency.
Plenty of languages offer fearless concurrency, but we'll look at Pony and Rust here.
Pony doesn't let us access data from the outside scope, like in our above C examples. Instead, we "send" data between "actors", which are similar to threads.
We can send data if either of these is true: nobody else has a reference to it (an iso reference), or it's deeply immutable (a val reference). In other words, if we have a val or iso reference to an object, we can send it to another actor.
Pony has fearless concurrency because in this system, data races are impossible; we can never have a modifiable chunk of data that's visible to multiple threads.
Key takeaways from Pony: data that's isolated can be safely sent to another thread, and data that's deeply immutable can be safely shared between threads.
We'll use these techniques below!
See Renato Athaydes' Fearless concurrency: how Clojure, Rust, Pony, Erlang and Dart let you achieve that.
A "permission" is something that travels along with a pointer at compile time, which affects how you may use that pointer. For example, C++'s const, or the mut in Rust's &mut.
This is also true of Clojure. It's sometimes true with Rust; only some immutable borrows can be shared with multiple threads.
Rust has fearless concurrency that feels a lot like Pony's: we can send an object to another thread if we have exclusive access to it, and multiple threads can share an object as long as none of them can modify it.
And because Rust has async lambdas, it can also sometimes do structured concurrency!
But alas, Rust doesn't have seamless structured concurrency, because it often can't access variables defined outside the task's scope.
There are some outstanding issues which prevent this from being generally usable, like sometimes requiring 'static lifetimes, blocking the running executor, or running unsafe code. Some very smart folks are working on these though, see tokio/1879 and tokio/2596.
Specifically, tasks can only capture things in the parent scope if they have Sync, see this example. Making data Sync to achieve concurrency is often not possible without potentially extensive rearchitecting of the containing program, so we can't quite say it's seamless. It's still pretty cool though!
We can make fearless and seamless structured concurrency with two rules: #1, a thread can read anything created outside its parallel block; #2, a thread can't modify anything created outside its parallel block.
Our C program almost follows these rules, except it violates rule #2; remember how we modified the results array inside the parallel block, with results[i] = pow(i, exponent);.
Instead, we can have each iteration produce its result as its block's output, and collect those outputs into a new array after the loop. There! The loop doesn't modify anything created outside the block.
We can now add a theoretical parallel keyword.
The parallel keyword will: run the loop's iterations on multiple threads, let them read anything created outside the block, and stop them from modifying anything created outside the block.
Since no threads modify the data, the data is truly temporarily immutable.
Since the data is immutable, the threads can safely share it, as we saw in Pony.
We are now thread-safe!
And just because we can, let's modify something created inside the block:
You can see how the compiler can enforce that we only modify things that came from inside the block.
If we call a function, it needs to know what it can modify and what it can't. For that, we use 'r. Note how in this snippet, blur's first parameter's type is 'r &int.
That 'r means we're pointing into a read-only region. More on that below!
Vale is a language we're making to explore techniques for easy, fast, and safe code.
We actually discovered this "seamless, fearless, structured concurrency" while playing around with Vale's region borrow checker, which blends mutable aliasing with Rust's borrow checker.
The lack of a ; here means this line produces the block's result, similar to Scala and Rust.
To reiterate, this feature is theoretical, and we're still adding it to Vale. Stay tuned!
The parallel block has a default executor, but it can be overridden with parallel(num_threads) or parallel(an_executor).
set a = a + 4; is like a = a + 4; in C.
This is similar to Rust's lifetimes, more on that in the afterword.
As stated above, to enable fearless structured concurrency, we need to express that a reference points into a read-only region. We can think of a region as a scope.
When we have a reference to an object in a read-only region, we can't modify the object, and any member or element we load from it comes back as another read-only reference.
We're using 'r to specify that a reference is in a read-only region. For example, an 'r &int is a non-owning reference to an integer inside a read-only region.
We can use these for our concurrency, with the following rule: every reference a parallel iteration captures from outside its block must point into a read-only region.
Elaborating on the 'r a little more:
In Vale, regions can be mutable or immutable, but we "see" them as read-write or read-only. We can see both immutable and mutable regions as read-only. (This is similar to * and const * in C++.)
The user doesn't need to know about this distinction; if they ignore regions altogether the program's behavior will still make sense and be predictable.
A const * is similar, but we might not get a const * when we load a member/element from it.
An immutable borrow (&) is similar, except those can't always be shared with multiple threads. Only objects with Sync can be shared with multiple threads.
The afterword goes more into the difference between Vale's regions and Rust's lifetimes.
There are a couple drawbacks: threads can't modify any of the data they share (we'll address that in Part 2), and a function that accepts a read-only region may be generated twice (see the final note below).
As we saw, we can combine structured concurrency (as in OpenMP), fearless concurrency (as in Pony and Rust), and read-only regions into a new and interesting fearless, seamless, structured concurrency feature.
And in Part 2, we'll enable fearlessly sharing mutable data too, using channels, isolated subgraphs, "region-shifting", and mutexes. Stay tuned!
Thanks for reading! If you want to discuss more, come by the r/Vale subreddit or the Vale discord, and if you enjoy these articles or want to support our work on Vale, you're welcome to sponsor us on GitHub!
This means no "immutability escape hatches" such as RefCell; immutable must really mean immutable.
Though, this might be good practice anyway.
The 'r seems to hint that Vale's regions and Rust's lifetimes are similar! Let's take another look:
There are some similarities: both travel along with a reference at compile time, and both let the compiler restrict how that reference may be used.
The biggest difference is that Vale lifts mutability to the region level:
The compiler enforces that nobody changes objects in immutable regions by:
Another big difference is that Vale has no aliasing restrictions.
Because of these design decisions, Vale's regions are largely "opt-in":
We hope that these things will give Vale a gradual learning curve, and make life easy for newcomers.
This is a pretty high-level overview of the differences, feel free to swing by our discord and we'd be happy to explain it!
Though, we can have structs in one temporary region pointing into structs in another region, like 'r MyStruct<'b>.
A function that accepts a read-only region will actually be generated twice: once with, and once without, the assumption that the read-only region is immutable; the former version can take advantage of the immutability optimizations.