failure

This is the documentation for the failure crate, which provides a system for creating and managing errors in Rust. Additional API documentation is available on docs.rs. The example below shows the crate in use:


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
extern crate serde;
extern crate toml;

#[macro_use] extern crate failure_derive;
#[macro_use] extern crate serde_derive;

use std::collections::HashMap;
use std::path::PathBuf;
use std::str::FromStr;

use failure::Error;

// This is a new error type that you've created. It represents the ways a
// toolchain could be invalid.
//
// The custom derive for Fail derives an impl of both Fail and Display.
// We don't do any other magic like creating new types.
#[derive(Debug, Fail)]
enum ToolchainError {
    #[fail(display = "invalid toolchain name: {}", name)]
    InvalidToolchainName {
        name: String,
    },
    #[fail(display = "unknown toolchain version: {}", version)]
    UnknownToolchainVersion {
        version: String,
    }
}

pub struct ToolchainId {
    // ... etc
}

impl FromStr for ToolchainId {
    type Err = ToolchainError;

    fn from_str(s: &str) -> Result<ToolchainId, ToolchainError> {
        // ... etc
    }
}

pub type Toolchains = HashMap<ToolchainId, PathBuf>;

// This opens a toml file containing associations between ToolchainIds and
// Paths (the roots of those toolchains).
//
// This could encounter an io::Error, a toml parsing error, or a ToolchainError;
// all of them will be converted into the special Error type by `?`.
pub fn read_toolchains(path: PathBuf) -> Result<Toolchains, Error>
{
    use std::fs::File;
    use std::io::Read;

    let mut string = String::new();
    File::open(path)?.read_to_string(&mut string)?;

    let toml: HashMap<String, PathBuf> = toml::from_str(&string)?;

    let toolchains = toml.into_iter().map(|(key, path)| {
        let toolchain_id = key.parse()?;
        Ok((toolchain_id, path))
    }).collect::<Result<Toolchains, ToolchainError>>()?;

    Ok(toolchains)
}
#}

How to use failure

This section of the documentation is about how the APIs exposed in failure can be used. It is organized around the major APIs of failure.

The Fail trait

The Fail trait is a replacement for std::error::Error. It has been designed to support a number of operations:

  • Because it is bound by both Debug and Display, any failure can be printed in two ways.
  • It has both a backtrace and a cause method, allowing users to get information about how the error occurred.
  • It supports wrapping failures in additional contextual information.
  • Because it is bound by Send and Sync, failures can be moved and shared between threads easily.
  • Because it is bound by 'static, the abstract Fail trait object can be downcast into concrete types.

Every new error type in your code should implement Fail, so it can be integrated into the entire system built around this trait. You can manually implement Fail yourself, or you can use the derive for Fail defined in a separate crate and documented here.
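
For instance, a minimal hand-written impl might look like this (MyFailure is a placeholder name used only for illustration). All of Fail's methods have default bodies, so an empty impl block is enough:


# #![allow(unused_variables)]
#fn main() {
extern crate failure;

use std::fmt;

use failure::Fail;

#[derive(Debug)]
struct MyFailure;

impl fmt::Display for MyFailure {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "an error occurred")
    }
}

// The default methods apply: cause() and backtrace() both return None.
impl Fail for MyFailure {}
#}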

Implementors of this trait are called 'failures'.

Cause

Often, an error type contains (or could contain) another underlying error type which represents the "cause" of this error - for example, if your custom error contains an io::Error, that is the cause of your error.

The cause method on the Fail trait allows all errors to expose their underlying cause - if they have one - in a consistent way. Users can loop over the chain of causes, for example, getting the entire series of causes for an error:


# #![allow(unused_variables)]
#fn main() {
// Assume err is a type that implements Fail;
let mut fail: &Fail = err;

while let Some(cause) = fail.cause() {
    println!("{}", cause);

    // Make `fail` the reference to the cause of the previous fail, making the
    // loop "dig deeper" into the cause chain.
    fail = cause;
}
#}

Because &Fail supports downcasting, you can also inspect causes in more detail if you are expecting a certain failure:


# #![allow(unused_variables)]
#fn main() {
while let Some(cause) = fail.cause() {

    if let Some(err) = cause.downcast_ref::<io::Error>() {
        // treat io::Error specially
    } else {
        // fallback case
    }

    fail = cause;
}
#}

Backtraces

Errors can also generate a backtrace when they are constructed, helping you determine the place the error was generated and every function that called into that. Like causes, this is entirely optional - the authors of each failure have to decide if generating a backtrace is appropriate in their use case.

The backtrace method allows all errors to expose their backtrace if they have one. This enables a consistent method for getting the backtrace from an error:


# #![allow(unused_variables)]
#fn main() {
// We don't even know the type of the cause, but we can still get its
// backtrace.
if let Some(bt) = err.cause().and_then(|cause| cause.backtrace()) {
    println!("{}", bt)
}
#}

The Backtrace type exposed by failure is different from the Backtrace exposed by the backtrace crate, in that it has several optimizations:

  • It has a no_std compatible form which will never be generated (because backtraces require heap allocation), and should be entirely compiled out.
  • It will not be generated unless the RUST_BACKTRACE environment variable has been set at runtime (see the sketch after this list).
  • Symbol resolution is delayed until the backtrace is actually printed, because this is the most expensive part of generating a backtrace.
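
For a quick look at this behavior, you can capture a Backtrace directly (a small sketch; the output depends on how the program is run):


# #![allow(unused_variables)]
#fn main() {
extern crate failure;

use failure::Backtrace;

// If RUST_BACKTRACE is not set when this runs, no frames are collected and
// printing the backtrace shows nothing useful; with RUST_BACKTRACE=1 the
// captured frames are resolved and printed.
let bt = Backtrace::new();
println!("{}", bt);
#}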

Context

Often, the libraries you are using will present error messages that don't provide very helpful information about what exactly has gone wrong. For example, if an io::Error says that an entity was "Not Found," that doesn't communicate much about what specific file was missing - if it even was a file (as opposed to a directory for example).

You can inject additional context to be carried with this error value, providing semantic information about the nature of the error appropriate to the level of abstraction that the code you are writing operates at. The context method on Fail takes any displayable value (such as a string) to act as context for this error.
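
For instance, applied directly to a failure (parse_port is a hypothetical function; the ParseIntError remains available as the cause of the returned Context):


# #![allow(unused_variables)]
#fn main() {
extern crate failure;

use failure::{Error, Fail};

// The context method consumes the original failure and wraps it with a
// message describing what was being attempted.
fn parse_port(s: &str) -> Result<u16, Error> {
    s.parse::<u16>()
        .map_err(|err| err.context(format!("invalid port: {}", s)))
        .map_err(Error::from)
}
#}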

Using the ResultExt trait, you can also get context as a convenient method on Result directly. For example, suppose that your code attempted to read from a Cargo.toml. You can wrap the io::Errors that occur with additional context about what operation has failed:


# #![allow(unused_variables)]
#fn main() {
use failure::ResultExt;

let mut file = File::open(cargo_toml_path).context("Missing Cargo.toml")?;
file.read_to_end(&mut buffer).context("Could not read Cargo.toml")?;
#}

The Context object also has a constructor that does not take an underlying error, allowing you to create ad hoc Context errors alongside those created by applying the context method to an underlying error.
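
For instance, a sketch of throwing such an ad hoc Context (find_user is a hypothetical function):


# #![allow(unused_variables)]
#fn main() {
extern crate failure;

use failure::{Context, Error};

// There is no lower-level error to wrap here; Context::new builds the failure
// from the message alone, and it converts into Error like any other failure.
fn find_user(name: &str) -> Result<u64, Error> {
    Err(Context::new(format!("no user named {}", name)).into())
}
#}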

Backwards compatibility

We've taken several steps to make transitioning from std::error to failure as painless as possible.

First, there is a blanket implementation of Fail for all types that implement std::error::Error, as long as they are Send, Sync, and 'static. If you are dealing with a library that hasn't shifted to Fail, it is automatically compatible with failure already.

Second, Fail contains a method called compat, which produces a type that implements std::error::Error. If you have a type that implements Fail, but not the older Error trait, you can call compat to get a type that does implement that trait (for example, if you need to return a Box<Error>).
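
For instance, a sketch of satisfying an older API that wants a Box<std::error::Error> (WidgetError and legacy_api are hypothetical names):


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

use failure::Fail;

#[derive(Fail, Debug)]
#[fail(display = "the widget could not be found")]
struct WidgetError;

// compat() wraps the failure in a type that implements std::error::Error,
// which `?` can then box into the legacy return type.
fn legacy_api() -> Result<(), Box<std::error::Error>> {
    let result: Result<(), WidgetError> = Err(WidgetError);
    result.map_err(|e| e.compat())?;
    Ok(())
}
#}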

The biggest hole in our backwards compatibility story is that you cannot implement std::error::Error and also override the backtrace and cause methods on Fail. We intend to enable this with specialization when it becomes stable.

Deriving Fail

Though you can implement Fail yourself, we also provide a derive macro to generate the impl for you. This macro is provided through the failure_derive crate.

In its smallest form, deriving Fail looks like this:


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

use std::fmt;

#[derive(Fail, Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "An error occurred.")
    }
}
#}

All failures need to implement Display, so we have added an impl of Display. However, implementing Display is much more boilerplate than implementing Fail - this is why we support deriving Display for you.

Deriving Display

You can derive an implementation of Display with a special attribute:


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

#[derive(Fail, Debug)]
#[fail(display = "An error occurred.")]
struct MyError;
#}

This attribute will cause the Fail derive to also generate an impl of Display, so that you don't have to implement one yourself.

String interpolation

String literals are not enough for error messages in many cases. Often, you want to include parts of the error value interpolated into the message. You can do this with failure using the same string interpolation syntax as Rust's formatting and printing macros:


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

#[derive(Fail, Debug)]
#[fail(display = "An error occurred with error code {}. ({})", code, message)]
struct MyError {
    code: i32,
    message: String,
}
#}

Note that unlike code that would appear in a method, this does not use something like self.code or self.message; it just uses the field names directly. This is because of a limitation in Rust's current attribute syntax. As a result, you can only interpolate fields through the derivation; you cannot perform method calls or use other arbitrary expressions.

Tuple structs

With regular structs, you can use the name of the field in string interpolation. When deriving Fail for a tuple struct, you might expect to use the numeric index to refer to fields 0, 1, et cetera. However, a compiler limitation prevents this from parsing today.

For the time being, tuple field accesses in the display attribute need to be prefixed with an underscore:


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

#[derive(Fail, Debug)]
#[fail(display = "An error occurred with error code {}.", _0)]
struct MyError(i32);


#[derive(Fail, Debug)]
#[fail(display = "An error occurred with error code {} ({}).", _0, _1)]
struct MyOtherError(i32, String);
#}

Enums

Implementing Display is also supported for enums by applying the attribute to each variant of the enum, rather than to the enum as a whole. The Display impl will match over the enum to generate the correct error message. For example:


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

use std::io;

#[derive(Fail, Debug)]
enum MyError {
    #[fail(display = "{} is not a valid version.", _0)]
    InvalidVersion(u32),
    #[fail(display = "IO error: {}", error)]
    IoError { error: io::Error },
    #[fail(display = "An unknown error has occurred.")]
    UnknownError,
}
#}

Overriding backtrace

The backtrace method will be automatically overridden if the type contains a field with the type Backtrace. This works for both structs and enums.


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

use failure::Backtrace;

/// MyError::backtrace will return a reference to the backtrace field
#[derive(Fail, Debug)]
#[fail(display = "An error occurred.")]
struct MyError {
    backtrace: Backtrace,
}

/// MyEnumError::backtrace will return a reference to the backtrace only if it
/// is Variant2, otherwise it will return None.
#[derive(Fail, Debug)]
enum MyEnumError {
    #[fail(display = "An error occurred.")]
    Variant1,
    #[fail(display = "A different error occurred.")]
    Variant2(Backtrace),
}
#}

This happens automatically; no other annotations are necessary. It only works if the type is named Backtrace, and not if you have created an alias for the Backtrace type.

Overriding cause

In contrast to backtrace, the cause cannot be determined by type name alone because it could be any type which implements Fail. For this reason, if your error has an underlying cause field, you need to annotate that field with the #[cause] attribute.

This can be used in fields of enums as well as structs.


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

use std::io;

/// MyError::cause will return a reference to the io_error field
#[derive(Fail, Debug)]
#[fail(display = "An error occurred.")]
struct MyError {
    #[cause] io_error: io::Error,
}

/// MyEnumError::cause will return a reference only if it is Variant2,
/// otherwise it will return None.
#[derive(Fail, Debug)]
enum MyEnumError {
    #[fail(display = "An error occurred.")]
    Variant1,
    #[fail(display = "A different error occurred.")]
    Variant2(#[cause] io::Error),
}
#}

The Error type

In addition to the trait Fail, failure provides a type called Error. Any type that implements Fail can be cast into Error using From and Into, which allows users to throw errors of different types with the ? operator, as long as the function returns an Error.

For example:


# #![allow(unused_variables)]
#fn main() {
// Something you can deserialize
#[derive(Deserialize)]
struct Object {
    ...
}

impl Object {
    // This throws both IO Errors and JSON Errors, but they both get converted
    // into the Error type.
    fn from_file(path: &Path) -> Result<Object, Error> {
        let mut string = String::new();
        File::open(path)?.read_to_string(&mut string)?;
        let object = json::from_str(&string)?;
        Ok(object)
    }
}
#}

Causes and Backtraces

The Error type has all of the methods from the Fail trait, with a few notable differences. Most importantly, the cause and backtrace methods on Error do not return Options - an Error is guaranteed to have a cause and a backtrace.


# #![allow(unused_variables)]
#fn main() {
// Both methods are guaranteed to return an &Fail and an &Backtrace
println!("{}, {}", error.cause(), error.backtrace())
#}

An Error's cause is always the failure that was cast into this Error. That failure may have further underlying causes. This means that, unlike with Fail, the cause of an Error will have the same Display representation as the Error itself.

As to the error's guaranteed backtrace: when the conversion into the Error type happens, if the underlying failure does not provide a backtrace, a new backtrace is constructed pointing to that conversion point (rather than the origin of the error). If the underlying failure does provide a backtrace, no new backtrace is constructed.

Downcasting

The Error type also supports downcasting into any concrete Fail type. It can be downcast by reference or by value - when downcasting by value, the return type is Result<T, Error>, allowing you to get the error back out of it.


# #![allow(unused_variables)]
#fn main() {
match error.downcast::<io::Error>() {
    Ok(io_error)    => { ... }
    Err(error)      => { ... }
}
#}
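
The by-reference form returns an Option instead, which is convenient when you do not want to give up ownership of the error (error here is assumed to be the same failure::Error value as above):


# #![allow(unused_variables)]
#fn main() {
use std::io;

// downcast_ref does not consume the Error; it returns Option<&io::Error>.
if let Some(io_error) = error.downcast_ref::<io::Error>() {
    println!("underlying I/O error: {}", io_error);
}
#}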

Implementation details

Error is essentially a trait object, but with some fanciness to store the backtrace it may generate if the underlying failure did not have one. In particular, we use a custom dynamically sized type to store the backtrace information inline with the trait object data.


# #![allow(unused_variables)]
#fn main() {
struct Error {
    // Inner<Fail> is a dynamically sized type
    inner: Box<Inner<Fail>>,
}

struct Inner<F: Fail> {
    backtrace: Backtrace,
    failure: F,
}
#}

By storing the backtrace in the heap this way, we avoid increasing the size of the Error type beyond that of two non-nullable pointers. This keeps the size of the Result type from getting too large, avoiding having a negative impact on the "happy path" of returning Ok. For example, a Result<(), Error> should be represented as a pair of nullable pointers, with the null case representing Ok. Similar optimizations can be applied to values up to at least a pointer in size.
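
If you want to check this on your own target, a quick sketch (the exact numbers depend on the platform):


# #![allow(unused_variables)]
#fn main() {
extern crate failure;

use std::mem::size_of;

use failure::Error;

// Error is one boxed, dynamically sized value; the niche left by its non-null
// pointer is what lets Result<(), Error> occupy the same amount of space.
println!("Error:             {} bytes", size_of::<Error>());
println!("Result<(), Error>: {} bytes", size_of::<Result<(), Error>>());
#}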

To emphasize: Error is intended for use cases where the error case is considered relatively uncommon. This optimization keeps the cost that the possibility of an error imposes on the Ok branch lower than it would otherwise be. In cases where errors are going to be returned extremely frequently, returning this Error type is probably not appropriate, but you should benchmark in those cases.

(As a rule of thumb: if you're not sure if you can afford to have a trait object, you probably can afford it. Heap allocations are not nearly as cheap as stack allocations, but they're cheap enough that you can almost always afford them.)

Patterns & Guidance

failure is not a "one size fits all" approach to error management. There are multiple patterns that emerge from the API this library provides, and users need to determine which pattern makes sense for them. This section documents some patterns and how users might use them.

In brief, these are the patterns documented here:

  • Strings as errors: Using strings as your error type. Good for prototyping.
  • A Custom Fail type: Defining a custom type to be your error type. Good for APIs where you control all or most of the possible failures.
  • Using the Error type: Using the Error type to pull together multiple failures of different types. Good for applications and APIs that know the error won't be inspected much.
  • An Error and ErrorKind pair: Using both a custom error type and an ErrorKind enum to create a very robust error type. Good for public APIs in large crates.

(Though each of these items identifies a use case that the pattern is well suited for, in truth each of them can be applied in various contexts. It's up to you to decide what makes the most sense for your particular use case.)

Strings as errors

This pattern is a way to create new errors without doing much set up. It is definitely the sloppiest way to throw errors. It can be great to use this during prototyping, but maybe not in the final product.

String types do not implement Fail, which is why there are two adapters to create failures from a string:

  • failure::err_msg - a function that takes a displayable type and creates a failure from it. This can take a String or a string literal.
  • format_err! - a macro with string interpolation, similar to format! or println!.

# #![allow(unused_variables)]
#fn main() {
fn check_range(x: usize, range: Range<usize>) -> Result<usize, Error> {
    if x < range.start {
        return Err(format_err!("{} is below {}", x, range.start));
    }
    if x >= range.end {
        return Err(format_err!("{} is above {}", x, range.end));
    }
    Ok(x)
}
#}
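
When the message is fixed and needs no interpolation, err_msg does the same job (load_config is a hypothetical function):


# #![allow(unused_variables)]
#fn main() {
extern crate failure;

use failure::{err_msg, Error};

// err_msg accepts any displayable value, such as a string literal or a String.
fn load_config() -> Result<(), Error> {
    Err(err_msg("config file is missing"))
}
#}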

If you're going to use strings as errors, we recommend using Error as your error type, rather than ErrorMessage; this way, if some of your strings are String and some are &'static str, you don't need to worry about merging them into a single string type.

When might you use this pattern?

This pattern is the easiest to set up and get going with, so it can be great when prototyping or spiking out an early design. It can also be great when you know that an error variant is extremely uncommon, and that there is really no way to handle it other than to log the error and move on.

Caveats on this pattern

If you are writing a library you plan to publish to crates.io, this is probably not a good way to handle errors, because it doesn't give your clients very much control. For public, open source libraries, we'd recommend using custom failures in the cases where you would use a string as an error.

This pattern can also be very brittle. If you ever want to branch over which error was returned, you would have to match on the exact contents of the string. If you ever change the string contents, that will silently break that match.

For these reasons, we strongly recommend against using this pattern except for prototyping and when you know the error is just going to get logged or reported to the users.

A Custom Fail type

This pattern is a way to define a new kind of failure. Defining a new kind of failure can be an effective way of representing an error for which you control all of the possible failure cases. It has several advantages:

  1. You can enumerate exactly all of the possible failures that can occur in this context.
  2. You have total control over the representation of the failure type.
  3. Callers can destructure your error without any sort of downcasting.

To implement this pattern, you should define your own type that implements Fail. You can use the custom derive provided in failure_derive to make this easier. For example:


# #![allow(unused_variables)]
#fn main() {
#[derive(Fail, Debug)]
#[fail(display = "Input was invalid UTF-8")]
pub struct Utf8Error;
#}

This type can become as large and complicated as is appropriate to your use case. It can be an enum with a different variant for each possible error, and it can carry data with more precise information about the error. For example:


# #![allow(unused_variables)]
#fn main() {
#[derive(Fail, Debug)]
#[fail(display = "Input was invalid UTF-8 at index {}", index)]
pub struct Utf8Error {
    index: usize,
}
#}

When might you use this pattern?

If you need to raise an error that doesn't come from one of your dependencies, this is a great pattern to use.

You can also use this pattern in conjunction with using Error or defining an Error and ErrorKind pair. Those functions which are "pure logic" and have a very constrained set of errors (such as parsing simple formats) might each return a different custom Fail type, and then the function which merges them all together, does IO, and so on, would return a more complex type like Error or your custom Error/ErrorKind.
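
A rough sketch of that division of labor (ParseError, parse_entry, and read_entries are hypothetical names):


# #![allow(unused_variables)]
#fn main() {
extern crate failure;
#[macro_use] extern crate failure_derive;

use std::fs::File;
use std::io::Read;
use std::path::Path;

use failure::Error;

// A "pure logic" step with a small, fully enumerated failure type.
#[derive(Fail, Debug)]
#[fail(display = "entry was not in `key=value` form: {}", _0)]
struct ParseError(String);

fn parse_entry(line: &str) -> Result<(String, String), ParseError> {
    let mut parts = line.splitn(2, '=');
    match (parts.next(), parts.next()) {
        (Some(key), Some(value)) => Ok((key.to_string(), value.to_string())),
        _ => Err(ParseError(line.to_string())),
    }
}

// The merging function also does IO, so it returns the catch-all Error; both
// io::Error and ParseError are converted into it by `?`.
fn read_entries(path: &Path) -> Result<Vec<(String, String)>, Error> {
    let mut string = String::new();
    File::open(path)?.read_to_string(&mut string)?;

    let mut entries = Vec::new();
    for line in string.lines() {
        entries.push(parse_entry(line)?);
    }
    Ok(entries)
}
#}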

Caveats on this pattern

When you have a dependency which returns a different error type, often you will be inclined to add it as a variant on your own error type. When you do that, you should tag the underlying error as the #[cause] of your error:


# #![allow(unused_variables)]
#fn main() {
#[derive(Fail, Debug)]
pub enum MyError {
    #[fail(display = "Input was invalid UTF-8 at index {}", _0)]
    Utf8Error(usize),
    #[fail(display = "{}", _0)]
    Io(#[cause] io::Error),
}
#}

Up to a limit, this design can work. However, it has some problems:

  • It can be hard to be forward compatible with new dependencies that raise their own kinds of errors in the future.
  • It defines a 1-1 relationship between a variant of the error and an underlying error.

Depending on your use case, as your function grows in complexity, it can be better to transition to using Error or defining an Error & ErrorKind pair.

Use the Error type

This pattern is a way to manage errors when you have multiple kinds of failure that could occur during a single function. It has several distinct advantages:

  1. You can start using it without defining any of your own failure types.
  2. All types that implement Fail can be thrown into the Error type using the ? operator.
  3. As you start adding new dependencies with their own failure types, you can start throwing them without making a breaking change.

To use this pattern, all you need to do is return Result<_, Error> from your functions:


# #![allow(unused_variables)]
#fn main() {
use std::io;
use std::io::BufRead;

use failure::Error;
use failure::err_msg;

fn my_function() -> Result<(), Error> {
    let stdin = io::stdin();

    for line in stdin.lock().lines() {
        let line = line?;

        if line.chars().all(|c| c.is_whitespace()) {
            break
        }

        if !line.starts_with("$") {
            return Err(err_msg("Input did not begin with `$`"));
        }

        println!("{}", &line[1..]);
    }

    Ok(())
}
#}

When might you use this pattern?

This pattern is very effective when you know you will usually not need to destructure the error this function returns. For example:

  • When prototyping.
  • When you know you are going to log this error, or display it to the user, either all of the time or nearly all of the time.
  • When it would be impractical for this API to report more custom context for the error (e.g. because it is a trait that doesn't want to add a new Error associated type).

Caveats on this pattern

There are two primary downsides to this pattern:

  • The Error type allocates. There are cases where this would be too expensive. In those cases you should use a custom failure.
  • You cannot recover more information about this error without downcasting. If your API needs to express more contextual information about the error, use the Error and ErrorKind pattern.

An Error and ErrorKind pair

This pattern is the most robust way to manage errors - and also the most high maintenance. It combines some of the advantages of using the Error type and of defining a custom failure, while avoiding some of the disadvantages each of those patterns has:

  1. Like Error, this is forward compatible with new underlying kinds of errors from your dependencies.
  2. Like custom failures, this pattern allows you to specify additional information about the error that your dependencies don't give you.
  3. Like Error, it can be easier to convert underlying errors from dependencies into this type than it is for custom failures.
  4. Like custom failures, users can gain some information about the error without downcasting.

The pattern is to create two new failure types: an Error and an ErrorKind, and to leverage the Context type provided by failure.


# #![allow(unused_variables)]
#fn main() {
#[derive(Debug)]
struct MyError {
    inner: Context<MyErrorKind>,
}

#[derive(Copy, Clone, Eq, PartialEq, Debug, Fail)]
enum MyErrorKind {
    // A plain enum with no data in any of its variants
    //
    // For example:
    #[fail(display = "A contextual error message.")]
    OneVariant,
    // ...
}
#}

Unfortunately, it is not easy to correctly derive Fail for MyError so that it delegates things to its inner Context. You should write those impls yourself:


# #![allow(unused_variables)]
#fn main() {
impl Fail for MyError {
    fn cause(&self) -> Option<&Fail> {
        self.inner.cause()
    }

    fn backtrace(&self) -> Option<&Backtrace> {
        self.inner.backtrace()
    }
}

impl Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        Display::fmt(&self.inner, f)
    }
}
#}

You should also provide some conversions and accessors, to go between a Context, your ErrorKind, and your Error:


# #![allow(unused_variables)]
#fn main() {
impl MyError {
    pub fn kind(&self) -> MyErrorKind {
        *self.inner.get_context()
    }
}

impl From<MyErrorKind> for MyError {
    fn from(kind: MyErrorKind) -> MyError {
        MyError { inner: Context::new(kind) }
    }
}

impl From<Context<MyErrorKind>> for MyError {
    fn from(inner: Context<MyErrorKind>) -> MyError {
        MyError { inner: inner }
    }
}
#}

With this code set up, you can use the context method from failure to apply your ErrorKind to errors in underlying libraries:


# #![allow(unused_variables)]
#fn main() {
perform_some_io().context(MyErrorKind::NetworkFailure)?;
#}

You can also directly throw ErrorKind without an underlying error when appropriate:


# #![allow(unused_variables)]
#fn main() {
Err(MyErrorKind::DomainSpecificError)?
#}

What should your ErrorKind contain?

Your error kind probably should not carry data - and if it does, it should only carry stateless data types that provide additional information about what the ErrorKind means. This way, your ErrorKind can be Eq, making it easy to use as a way of comparing errors.
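
For example, a caller might branch on the kind directly (this builds on the MyError and MyErrorKind sketches above; NetworkFailure is a hypothetical variant, as in the earlier snippet):


# #![allow(unused_variables)]
#fn main() {
fn report(err: &MyError) {
    // Because MyErrorKind is Copy and Eq, the kind can be compared or matched
    // on directly, without any downcasting.
    match err.kind() {
        MyErrorKind::NetworkFailure => eprintln!("transient network error: {}", err),
        _ => eprintln!("error: {}", err),
    }
}
#}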

Your ErrorKind is a way of providing information about what errors mean appropriate to the level of abstraction that your library operates at. As some examples:

  • If your library expects to read from the user's Cargo.toml, you might have an InvalidCargoToml variant, to capture what io::Error and toml::Error mean in the context of your library.
  • If your library does both file system activity and network activity, you might have Filesystem and Network variants, to distinguish which of those systems a particular io::Error came from.

Exactly what semantic information is appropriate depends entirely on what this bit of code is intended to do.

When might you use this pattern?

The most likely use cases for this pattern are mid-layer libraries which perform a function that requires many dependencies, and which are intended to be used in production. Libraries with few dependencies do not need to manage many underlying error types and can probably suffice with a simpler custom failure. Applications that know they are almost always just going to log these errors can get away with using the Error type rather than managing extra context information.

That said, when you need to provide the most expressive information about an error possible, this can be a good approach.

Caveats on this pattern

This pattern is the most involved pattern documented in this book. It involves a lot of boilerplate to set up (which may be automated away eventually), and it requires you to apply a contextual message to every underlying error that is thrown inside your code. It can be a lot of work to maintain this pattern.

Additionally, like the Error type, the Context type may use an allocation and dynamic dispatch internally. If you know this is too expensive for your use case, you should not use this pattern.