Last time, we introduced the idea of async methods, and talked about how they would be implemented: as a kind of anonymous associated type on the trait that declares the method, which corresponds to a different, anonymous future type for each implementation of that method. Starting this week we’re going to look at some of the implications of that. The first one we’re going to look at is object safety.

What is object safety?

“Object safety” is the set of restrictions that a trait must meet in order to be allowed to be turned into a trait object, Rust’s solution for dynamic dispatch. These restrictions aren’t arbitrary, but they also aren’t really cohesive: they’re basically the set of things that can be done with static dispatch at compile time but can’t be done with dynamic dispatch at runtime. This means that it’s often hard for users to remember what the object safety rules are, and it can feel like almost no trait can be object safe.

Fortunately, only one of the object safety rules is important to talk about: the rules around associated types. Let’s start with Iterator as an example:

trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
}

When using the trait in generics, you can optionally specify what the Item type is, but you can also leave it unspecified. That is, you can write a function that takes a T: Iterator without specifying what kind of item it yields at all. However, for trait objects, you must specify the item: Box<dyn Iterator> is not allowed; you have to write Box<dyn Iterator<Item = i32>> (or whatever item type you want).
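To make the contrast concrete, here is a small sketch (the function names are hypothetical): a generic function can leave the item type unspecified, while the trait object form has to name it:

```rust
// Static dispatch: `Item` can stay unspecified, because the compiler
// monomorphizes a separate copy of this function for each iterator type.
fn count_items<T: Iterator>(iter: T) -> usize {
    iter.count()
}

// Dynamic dispatch: the trait object type must name `Item`, so the
// compiler knows the layout of the value that `next` returns.
fn sum_boxed(iter: Box<dyn Iterator<Item = i32>>) -> i32 {
    iter.sum()
}

fn main() {
    println!("{}", count_items("a b c".split(' ')));                // 3
    println!("{}", sum_boxed(Box::new(vec![1, 2, 3].into_iter()))); // 6
}
```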

The reason for this rule is that when you call next on your trait object, we need to know, at compile time, the layout of the item type so that we can allocate stack space for the Option<Self::Item> that next returns. This presents a problem for async functions!

Async functions and object safety

It’s pretty useful to be able to create a trait object from a trait with an async method in it, so it would be bad for these traits not to be object safe. Last time, we talked about how an async function is like having an extra associated type on the trait:

trait Foo {
    async fn foo(&self) -> i32;
}

// equivalent to:

trait Foo {
    type _Future<'a>: Future<Output = i32> + 'a;
    fn foo<'a>(&'a self) -> Self::_Future<'a>;
}

But there’s another fact about this “secret associated type” that is important: every implementation provides a different type for this associated type, generated specifically for that impl. That means there’s no way to write Box<dyn Foo<_Future = {some type}>>, since each implementation returns a different type!
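A hand-written analogue shows why (all the names here are hypothetical, and a visible associated type stands in for the compiler-generated one): each impl picks a different future type, so there is no single type a trait object could name:

```rust
use std::future::{self, Future, Ready};
use std::pin::Pin;

trait Fetch {
    type Fut: Future<Output = i32>;
    fn fetch(&self) -> Self::Fut;
}

struct Constant;
struct Boxed;

impl Fetch for Constant {
    // One impl uses std's `Ready` future...
    type Fut = Ready<i32>;
    fn fetch(&self) -> Self::Fut {
        future::ready(1)
    }
}

impl Fetch for Boxed {
    // ...another uses a completely different future type. A
    // `Box<dyn Fetch<Fut = ...>>` could only ever describe one of them.
    type Fut = Pin<Box<dyn Future<Output = i32>>>;
    fn fetch(&self) -> Self::Fut {
        Box::pin(async { 2 })
    }
}
```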

This looks pretty bad on the surface: it looks like async fns would not be object safe.

A solution: more dynamic dispatch

Since every async fn returns a different future type, there’s only one way to dynamically dispatch an async fn: dynamically dispatch the future type! That is, the returned future from any async method called on a trait object would be Box<dyn Future>. The compiler, when generating the vtables for this trait, would generate the necessary shim as well to heap allocate the returned future.
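You can approximate the shim the compiler would generate by writing the boxed signature by hand today. This is a sketch with hypothetical names, not actual compiler output; the real feature would let implementations keep the plain `async fn` syntax:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

trait Greet {
    // The object-safe form: the returned future is erased behind a
    // heap-allocated trait object, whatever its concrete type is.
    fn greet<'a>(&'a self) -> Pin<Box<dyn Future<Output = i32> + 'a>>;
}

struct Doubler(i32);

impl Greet for Doubler {
    fn greet<'a>(&'a self) -> Pin<Box<dyn Future<Output = i32> + 'a>> {
        // The async block has an anonymous future type; Box::pin erases it.
        Box::pin(async move { self.0 * 2 })
    }
}

// A minimal no-op waker, just enough to poll an already-ready future.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RawWaker::new(std::ptr::null(), &VTABLE),
        |_| {},
        |_| {},
        |_| {},
    );
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // Dynamic dispatch through the vtable, then poll the boxed future.
    let obj: Box<dyn Greet> = Box::new(Doubler(21));
    let mut fut = obj.greet();
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    if let Poll::Ready(n) = fut.as_mut().poll(&mut cx) {
        println!("{}", n);
    }
}
```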

futures-await, today, is intended to support async/await in traits using roughly this mechanism. However, because its macro does not have language support, it requires you to annotate async methods as #[async(boxed)], and it always heap allocates them, not only in the dynamic dispatch case. So the proposal here is strictly an upgrade over what futures-await offers today.

You might be concerned: isn’t this implicit allocation? The truth is that it is no more implicit than any of the many APIs provided by the standard library that heap allocate, such as Vec, HashMap, Mutex, Rc and so on. It would be a known fact that calling an async fn on a trait object means a heap allocation, just like it’s a known fact of using all of those APIs.

Importantly, just as with types like Vec, there isn’t a viable alternative: the only way to make an async fn object safe is to heap allocate the future. Some users who follow RFCs might wonder whether the RFC for stack allocated trait objects would solve the problem without heap allocation, but the answer is no: it does not support returning a trait object, which is what would be necessary to avoid the heap allocation. And since the future type is opaque, if we ever could support stack allocation in this case, it would be backwards compatible to introduce that optimization.

Overall, this solution is appealing because it gives users more flexibility while also being consistent with the zero cost abstraction principle: it only costs you if you use it, and if you hand-rolled it, you couldn’t do better.

In summary, heap allocating the futures returned by async methods called on trait objects seems like a pretty good solution to the object safety problem. It means that async methods will be object safe just like any other method, and users will be able to use async/await with trait objects.

Next time, we’ll look in more detail at another implication of async methods: the problem of the bounds on the anonymous return type.