Since I started learning Rust 2 years ago, my usage of this programming language has grown quite a bit. For example, this year I had the opportunity to contribute to an open-source Rust project in the context of my job: Tock, an operating system for embedded systems.

In this post I want to react to the call for blog posts on Rust for 2020. I’ll focus on 4 topics where Rust could improve in the year(s) to come.


Stable const generics

In my opinion, generics are one of the strongest selling points of Rust compared to other modern programming languages. This is for several reasons.

First, from a programmer’s perspective, Rust generics make it possible to define flexible and parametrizable interfaces, a.k.a. traits. By “flexible and parametrizable”, I refer to concepts like associated types, which allow the traits themselves to be generic. Advanced concepts like iterators, I/O reading and writing, arithmetic operators, or dereferencing can therefore be naturally defined as traits. Thanks to that, generic code is pervasive in Rust’s standard library.

// Iterator trait in a nutshell.
trait Iterator {
    // The iterator's item is an associated type.
    type Item;
    // Functions of the trait can depend on an associated type.
    fn next(&mut self) -> Option<Self::Item>;
    // ...
}

This kind of interface also exists in languages like Java, but in most programming languages generic code relies on dynamic polymorphism: the concrete implementation of a generic function is resolved at runtime, which has a performance cost. To cope with this overhead (notably due to dynamic typing), many languages resort to JIT optimizations at runtime, which is quite complex and also a source of vulnerabilities.

On the contrary, where Rust shines is that generic code uses static polymorphism by default (unless you opt into &dyn references). Much like C++ templates, the machine code generated from Rust generics is specialized for each concrete type. While this specialization can increase binary size and lengthen compilation times, it also lets the compiler apply optimizations such as inlining and dead-code elimination for each concrete type. But contrary to C++, Rust generics can express precise interfaces, thanks to traits and associated types.
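To illustrate this specialization (my own example, not from the standard library), consider a small generic function bounded by a trait:

```rust
use std::fmt::Display;

// A generic function bounded by a trait. The compiler emits one
// specialized copy per concrete type the function is used with (a
// process called monomorphization), with no runtime dispatch.
fn describe<T: Display>(value: T) -> String {
    format!("value = {}", value)
}
```

Calling `describe(42)` and `describe("hello")` produces two independent, fully specialized pieces of machine code, each of which the compiler can inline and optimize separately.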

To extend this flexibility further, const generics allow generic code to depend not only on types, but also on constant values. This is especially useful for interfaces based on arrays of a fixed size. A concrete example is a function adding two arrays element-wise.

use std::ops::AddAssign;

fn add_arrays<T: AddAssign + Copy, const N: usize>(a: &mut [T; N], b: &[T; N]) {
    for i in 0..N {
        a[i] += b[i];
    }
}

This function only makes sense for two arrays of the same size, but without const generics the a and b parameters would have to be slices of unknown size (&[T]), so for correctness the function would have to check that a.len() == b.len(), and return an error (or panic) if they differ. With const generics, one can instead express this constraint at compile time, and therefore write simpler code that the compiler can also optimize better.
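For comparison, here is a sketch of what the slice-based version looks like without const generics (my own illustration):

```rust
use std::ops::AddAssign;

// Without const generics, the sizes are unknown at compile time,
// so the function must check them at runtime and report a mismatch.
fn add_slices<T: AddAssign + Copy>(a: &mut [T], b: &[T]) -> Result<(), &'static str> {
    if a.len() != b.len() {
        return Err("slices must have the same length");
    }
    for i in 0..a.len() {
        a[i] += b[i];
    }
    Ok(())
}
```

Every caller now has to handle an error case that the const-generic version rules out statically, and the compiler must keep the bounds checks that the length equality would otherwise let it elide.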

In 2019, a foundational effort was made to support const generics in simple cases (see in particular this summary), and the standard library started using const generics (only as an internal implementation detail for now). With these foundations, it turns out that the above add_arrays function is already covered¹ by the latest Rust nightly! I also successfully wrote my first commit applying const generics.

However, complex cases are not all supported yet, so filling the gaps and hopefully stabilizing const generics would be an exciting development to see in Rust in 2020!

Rust infrastructure: reducing bandwidth

The Rust ecosystem has grown quite a lot: at the time of writing, there are now 32,565 libraries on crates.io. Currently, all the metadata needed to resolve dependencies across these crates is stored in a git repository, which contains one file per crate.

When Cargo is first initialized, this whole repository is cloned with all of its history (you can find it in $HOME/.cargo/registry/index/github.com-xxxx/). Although git can somewhat compress the data, this index is now quite large: a git clone takes more than 150 MB at the time of writing. I think this growing bandwidth usage will become more and more problematic, as it slows down overall compilation for people with slow connections. It can also put pressure on people with a limited download budget, for example those tethering Internet from a mobile phone. Lastly, all of this data has to be stored on disk.

Overall, I think that this growing bandwidth usage makes Rust less accessible. So what can be done about it?

One approach would be to do a shallow clone of the git repository, only downloading the latest commit.

Another approach would be to create a custom, optimized data representation of the crate index, without using git at all. Instead of one JSON file per crate, the crates.io service could generate a single file snapshotting the current state of the index, stored in a more compact manner. For example, one could (1) use a binary format instead of JSON and (2) represent dependencies as integer indices instead of text.
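To give a rough idea, such a compact index entry could look like the following sketch (all names and field choices here are hypothetical illustrations, not the actual crates.io format):

```rust
// A hypothetical compact index record. Names are resolved through a
// shared string table, and dependencies reference other crates by
// integer index rather than by repeating their textual name.
struct CrateVersion {
    name_id: u32,             // index into a shared string table
    version: (u16, u16, u16), // semver triple stored as integers
    deps: Vec<Dependency>,
}

struct Dependency {
    crate_index: u32,         // integer index instead of a crate name
    min_version: (u16, u16, u16),
}
```

Serialized with a binary encoding, each dependency would cost on the order of ten bytes, instead of a full JSON object repeating the dependency’s name and version requirement as strings.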

Another example where bandwidth is problematic is contributing to the Rust compiler itself. When I gave it a try, I realized that bootstrapping it requires recursively cloning all git submodules, which is currently done as full clones. Some of the submodule repositories have quite a large history, so this requires significant network bandwidth and a non-trivial amount of disk space. There doesn’t seem to be an easy solution due to the way GitHub is configured², but supporting a more lightweight way of bootstrapping the compiler would also make it more accessible for people to contribute.

Scoping unsafe code?

A useful feature of Rust is the unsafe escape hatch, which allows deviating from the regular memory safety guarantees to manipulate raw memory. This makes it possible to efficiently implement containers (hash tables, vectors, etc.) in the standard library: behind a safe user-facing interface, the implementation internally uses some blocks of unsafe code.

Another example where direct memory access is useful is writing kernel drivers, which at the lowest level need to read and write specific memory addresses to interact with the hardware. It’s therefore not surprising that projects like Tock – an embedded OS written in Rust – use a lot of unsafe code.
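As a rough sketch of what such low-level access looks like (this is an illustration, not Tock’s actual register API):

```rust
// Set a bit in a memory-mapped hardware register. Volatile accesses
// prevent the compiler from reordering or eliding the hardware
// reads and writes; dereferencing the raw pointer requires unsafe.
fn set_bit(reg: *mut u32, bit: u32) {
    unsafe {
        let current = core::ptr::read_volatile(reg);
        core::ptr::write_volatile(reg, current | (1 << bit));
    }
}
```

In a real driver, `reg` would be a fixed address from the chip’s datasheet; a safe abstraction would wrap this function so that only valid register addresses can ever be passed in.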

However, there is a syntactic element that I find problematic with unsafe code. If one wants to use unsafe operations, they wrap them in an unsafe { ... } block, which allows unsafe operations only inside that block. Conversely, if one wants to declare a specific API as unsafe, they can declare a function as unsafe fn; such a function may then only be called within an unsafe block.

The problem is that these two notions are conflated: when a function is marked as unsafe, the whole body of the function is implicitly an unsafe block! This may not seem surprising, but if the function contains dozens of lines of code, it becomes easy to accidentally use unintended unsafe features in the implementation. In most cases, only a few lines of the function actually need unsafe code (but we still want to mark the API as unsafe, for a variety of reasons).
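A small example of my own illustrates the conflation:

```rust
/// Reads the byte at `index` without bounds checking.
///
/// # Safety
/// The caller must guarantee that `index < data.len()`.
unsafe fn get_unchecked_byte(data: &[u8], index: usize) -> u8 {
    // Only this call actually needs unsafe...
    let byte = *data.get_unchecked(index);
    // ...but because the whole body of an `unsafe fn` is implicitly
    // unsafe, an accidental raw-pointer dereference added here later
    // would compile without any warning from the borrow checker.
    byte
}
```

If the body were treated as safe by default, the `get_unchecked` call would have to be wrapped in its own explicit unsafe block, making the dangerous line stand out.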

I’m not sure whether decoupling the unsafety of interfaces from the unsafety of their implementation makes sense. It’s also clear that introducing such decoupling would be a breaking change (so maybe for another Rust edition). But it would be interesting to explore.

Towards support for constant-time code?

This one is likely not realistic for 2020, but I still want to mention it as I think that it would be useful in the longer term.

Due to the low level at which Rust operates and the memory safety guarantees it provides, multiple projects have started using Rust to implement cryptographic code. Some examples include the ring general-purpose cryptography library, dalek-cryptography (some elliptic curves, zero-knowledge proofs), and my experimental implementations of Haraka and Gravity-SPHINCS. Rust has multiple selling points for cryptographic code, such as memory safety and native performance, but also generics, which allow expressing algebraic properties as traits, for example (and const generics will also help – see my commit).
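As an illustration of what “algebraic properties as traits” can look like (my own sketch, not the API of any of the libraries mentioned above):

```rust
// A trait capturing a group structure: an identity element, an
// associative operation, and inverses.
trait Group: Sized {
    fn identity() -> Self;
    fn op(&self, other: &Self) -> Self;
    fn inverse(&self) -> Self;
}

// Generic square-and-multiply exponentiation, written once for any
// type implementing the trait (elliptic-curve points, residues, ...).
fn pow<G: Group>(base: &G, mut exp: u64) -> G {
    let mut result = G::identity();
    let mut acc = base.op(&G::identity());
    while exp > 0 {
        if exp & 1 == 1 {
            result = result.op(&acc);
        }
        acc = acc.op(&acc);
        exp >>= 1;
    }
    result
}
```

Generic code like this is monomorphized for each concrete group, so the abstraction costs nothing at runtime.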

However, memory safety and zero-cost abstractions are not everything, and one topic that has come up in applied cryptography is how side channels can be used to extract cryptographic secrets. Writing constant-time code is therefore essential to protect against such attacks (although constant-timeness is in general not sufficient).

A typical example is comparing two arrays for equality.

fn cmp_variable_time(a: &[u8; 32], b: &[u8; 32]) -> bool {
    for i in 0..32 {
        if a[i] != b[i] {
            return false;
        }
    }
    true
}

Here, the number of iterations run by the loop is equal to the index of the first byte that differs between a and b. If an attacker can repeatedly call such a function as an oracle to compare their challenge to a secret, they can brute-force the secret byte after byte, in at most 32 × 256 = 8192 steps (much less than the 2^256 steps required without the side channel).

To avoid such attacks, the code should take constant time irrespective of the values of a and b. However, this is a notoriously hard problem: programming languages like C or Rust are not well equipped to write constant-time code. This is due to compiler optimizations, which are essential in general, but also often make code variable-time.

To cope with that, developers have often used techniques such as rewriting the logic with more complex arithmetic, hoping that the compiler won’t figure out how to optimize it.

fn cmp_best_effort(a: &[u8; 32], b: &[u8; 32]) -> bool {
    // Let's hope the compiler doesn't optimize that...
    let mut x = 0u8;
    for i in 0..32 {
        x |= a[i] ^ b[i];
    }
    x == 0
}

However, trying to fool the compiler isn’t a robust strategy: as compilers evolve and include more and more optimizations, code that compiles to constant time today may not in the future. For example, a recent blog post by Daan Sprenkels, LLVM provides no side-channel resistance, describes how a common pattern used by cryptographic engineers breaks when compiled with a certain LLVM optimization pass (which is enabled by default).

Another technique is to introduce “memory barriers” or volatile reads, as a best effort to prevent the compiler from applying optimizations. This is what the subtle crate does, for example (source code). But this also feels like a hack, and in principle it doesn’t prevent optimizations outside of the barriers.
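The barrier technique looks roughly like this (a sketch similar in spirit to subtle’s approach, not its actual source code):

```rust
// A volatile read acts as an "optimization barrier": it hides the
// value's provenance from the optimizer, discouraging it from
// rewriting the accumulating comparison into an early-exit loop.
fn barrier_u8(x: u8) -> u8 {
    unsafe { core::ptr::read_volatile(&x) }
}

fn cmp_with_barrier(a: &[u8; 32], b: &[u8; 32]) -> bool {
    let mut x = 0u8;
    for i in 0..32 {
        x |= a[i] ^ b[i];
    }
    barrier_u8(x) == 0
}
```

The barrier only constrains what happens at that one point, though: nothing stops the optimizer from transforming the loop before the barrier, which is exactly why this remains a best-effort mitigation rather than a guarantee.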

In the longer term, being able to mark some code with a “must stay constant time” annotation, and propagating this annotation throughout the compiler down to optimizations would be really nice! It could be some #[constant_time] annotation around functions (or more generally blocks of code), or – as Daan Sprenkels describes – some secret keyword to mark secret values that must not influence timing of the program.

However, this comes with its own set of challenges, starting with the fact that LLVM doesn’t have such annotations yet.


  1. Provided that the unstable #![feature(const_generics)] feature is enabled, which is “incomplete and may cause the compiler to crash”. 

  2. GitHub’s configuration prevents shallow cloning at a specific commit, which essentially prevents shallow cloning submodules, since the parent git repository points to specific commits of submodules. 

