This is just a post about something that grinds my gears a bit more than it reasonably should: I think the habit of filing CVEs for Rust (and Rust ecosystem libraries) is silly at best and harmful at worst. I think it muddies the waters about what a vulnerability is, and paints an overly negative picture of Rust’s security situation that can only lead people to make inaccurate evaluations when contrasting it with other languages like C/C++.

Every CVE for Rust or a Rust library that I know of comes down to this: using some safe API, without writing any unsafe code themselves, a programmer could introduce a memory safety issue into their code. For example, the CVE concerning str::repeat: for several versions of Rust, the str::repeat function did not properly consider integer overflow when multiplying the length of the string to be repeated by the count of repetitions. If that multiplication overflowed, the buffer allocated would not be large enough to hold the full repeated string, resulting in a wild write into other memory (or worse).
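
To make the failure mode concrete, here's a minimal sketch of the arithmetic involved - a simplification for illustration, not the actual standard library code:

```rust
// A simplified illustration of the flawed capacity calculation, not the
// actual std implementation. An unchecked multiply can wrap around to a
// tiny value on any target.
fn main() {
    let len: usize = 2;                // bytes in the string being repeated
    let n: usize = usize::MAX / 2 + 1; // repetition count chosen by the caller
    // What an unchecked multiplication computes: it wraps to 0, so the
    // buffer allocated would be far too small for the bytes about to be
    // written into it.
    assert_eq!(len.wrapping_mul(n), 0);
    // What the fixed code computes: the overflow is detected and surfaced
    // instead of silently producing an undersized capacity.
    assert_eq!(len.checked_mul(n), None);
}
```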

But it’s important to consider the conditions necessary to achieve this: a memory bug can only exist in the program if the programmer calling str::repeat has passed a string and an integer which could, during program execution, take values that, when multiplied together, overflow the pointer-sized integer (usize) on that platform. This is the crucial point: the programmer must implement their code in a certain way for any exploit to be achieved; in the language these CVEs use, the “attacker” is the programmer.
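
In other words, the bug only becomes reachable through caller code like this (a hypothetical function of my own invention, not from any real codebase):

```rust
// Hypothetical caller-side code: note there is no `unsafe` anywhere. On an
// affected (pre-fix) toolchain, this could under-allocate if `count` could
// ever be large enough that `s.len() * count` overflowed usize; on a fixed
// toolchain, that case panics instead of corrupting memory.
fn pad(s: &str, count: usize) -> String {
    s.repeat(count)
}

fn main() {
    // With ordinary values, there is nothing wrong with this code at all.
    assert_eq!(pad("ab", 3), "ababab");
}
```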

That is to say: it rather involves being on the other side of an airtight hatchway. Normally the “attacker” who “exploits” a “vulnerability” is a user of a program - a user who can use this vulnerability to exceed their intended privilege level. To be an author of a program is to be at the ultimate privilege level: you’re writing the code, you can just have it do whatever you want.

A more accurate description of these sorts of bugs would be to say that they enable a programmer to insert a vulnerability into their program by mistake. Or worse, a malicious attacker could (in theory) intentionally insert such a vulnerability into an open source program, and the maintainer of that program could accept the change without realizing that it introduces a vulnerability. This is bad, but it is not in itself a vulnerability in the traditional sense. It is categorically different.

Of course, one of the key value propositions of Rust is that - outside of unsafe code - it is not supposed to be possible to accidentally insert such a vulnerability into your own code. If you are writing strictly safe code, you are supposed to be guaranteed that such vulnerabilities do not occur. In this sense, Rust strives to move the privilege barrier for introducing these vulnerabilities so that it excludes even the programmers themselves, so long as they are not writing unsafe code.
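
As a sketch of what that barrier means in practice: safe Rust simply has no way to express the wild write. Assuming a current toolchain, an out-of-bounds access is a deterministic panic or a None, never silent memory corruption:

```rust
fn main() {
    let buf = vec![0u8; 4];
    let i = 10; // deliberately out of bounds
    // Indexing out of bounds in safe Rust is a deterministic panic, never
    // a silent read or write into neighboring memory:
    // let _ = buf[i]; // would panic: index out of bounds
    // Checked access turns the failure into a value instead of UB:
    assert_eq!(buf.get(i), None);
}
```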

Fixing these vulnerabilities immediately and warning Rust users about them is certainly necessary, and I’m very proud of the Rust project’s robust and responsive attitude toward these issues. And I also think a vital area of work moving forward is improving the tooling around unsafe code so that these bugs in unsafe code will occur even less frequently. But I am concerned that by making this category error and calling these “vulnerabilities,” we are sharply underselling Rust’s enormous success in reducing exploitable memory bugs.

The reality is that in a language like C or C++, CVEs of this sort would be practically infinite. The fundamental operators of the language enable users to write memory vulnerabilities into their code, without any warning, and the languages as designed are full of footguns which make this not only possible but probable. (Obviously, some proponents of these languages will disagree with me, but I think we have a lot of empirical evidence on this point.) The present situation amounts to CVEs being issued whenever Rust’s level of security is accidentally, in some small way, merely on par with the dominant systems programming languages, instead of the order-of-magnitude improvement it normally represents. We are keeping score using radically different metrics.
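
To illustrate the asymmetry: the same kind of out-of-bounds write that C permits with ordinary pointer arithmetic is only expressible in Rust behind an explicit unsafe opt-in. A contrived sketch, not code from any real program:

```rust
fn main() {
    // Allocate room for 4 bytes, but don't initialize them yet.
    let mut buf: Vec<u8> = Vec::with_capacity(4);
    let p = buf.as_mut_ptr();
    unsafe {
        // An out-of-bounds write is expressible, but only inside an explicit
        // `unsafe` block; the equivalent pointer arithmetic in C carries no
        // such marker and no warning.
        // p.add(100).write(0); // undefined behavior if uncommented
        p.write(7); // an in-bounds write, by contrast, is fine
        buf.set_len(1); // tell the Vec that one byte is now initialized
    }
    assert_eq!(buf, [7]);
}
```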

Along the same lines, while dealing with the soundness hole discovered in the Pin API, which underlies our async/await system, I couldn’t help but be a bit ticked off that it was trumpeted across the internet as “Pin is Unsound.” To be unsound and to have a soundness hole mean the same thing on some level, but the tone is very different. I’m fairly sure the issue would have received no attention outside of the Rust project and a handful of our most invested users if it hadn’t been for the implication that it could undermine the entire async/await ecosystem. Of course, it couldn’t: this soundness hole was not of the apocalyptic nature of bugs like the original scoped thread API, but rather was the sort of thing fixed without changing the API or breaking a single user’s code.

I’m really glad for all the work being done to discover these bugs, to bring them to wider awareness, and to build tooling that will prevent them in the future. But I really wish they were discussed in a way which didn’t suggest a false equivalence between them and live vulnerabilities: the kind that exist in deployed software right now and result in remote code execution.