Pointers Are Complicated III, or: Pointer-integer casts exposed
In my previous blog post on pointer provenance, I have shown that not thinking carefully about pointers can lead to a compiler that is internally inconsistent: programs that are intended to be well-behaved get miscompiled by a sequence of optimizations, each of which seems intuitively correct in isolation. We thus have to remove or at least restrict at least one of these optimizations. In this post I will continue that trend with another example, and then I will lay down my general thoughts on how this relates to the recent Strict Provenance proposal, what it could mean for Rust more generally, and compare with C’s PNVI-ae-udi. We will end on a very hopeful note about what this could all mean for Rust’s memory model. There’s a lot of information packed into this post, so better find a comfortable reading position. :)
In case you don’t know what I mean by “pointer provenance”, you can either read that previous blog post or the Strict Provenance documentation. The gist of it is that a pointer consists not only of the address that it points to in memory, but also of its provenance: an extra piece of “shadow state” that is carried along with each pointer and that tracks which memory the pointer has permission to access and when. This is required to make sense of restrictions like “use-after-free is Undefined Behavior, even if you checked that there is a new allocation at the same address as the old one”. Architectures like CHERI make this “shadow state” explicit (pointers are bigger than usual so that they can explicitly track which part of memory they are allowed to access), but even when compiling for AMD64 CPUs, compilers act “as if” pointers had such extra state – it is part of the specification, part of the Abstract Machine, even if it is not part of the target CPU.
Dead cast elimination considered harmful
The key ingredient that will help us understand the nuances of provenance is `restrict`, a C keyword to promise that a given pointer `x` does not alias any other pointer not derived from `x`. This is comparable to the promise that a `&mut T` in Rust is unique. However, just like last time, we want to consider the limits that `restrict` combined with integer-pointer casts put on an optimizing compiler – so the actual programming language that we have to be concerned with is the IR of that compiler. Nevertheless I will use the more familiar C syntax to write down this example; you should think of this as just being notation for the “obvious” equivalent function in LLVM IR, where `restrict` is expressed via `noalias`. Of course, if we learn that the IR has to put some limitations on what code may do, this also applies to the surface language – so we will be talking about all three (Rust, C, LLVM) quite a bit.
With all that out of the way, consider the following program:
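Here is a sketch of that program, reconstructed from the description that follows (the exact shape of the original listing may differ; `main` calls `uwu(&i[0], &i[1])` on an array `int i[2]` and prints the result):

```c
#include <stdint.h>

static int uwu(int *restrict x, int *restrict y) {
    *x = 0;                            // write 0 to *x
    int *y2 = y - 1;                   // the int right before *y
    uintptr_t y2addr = (uintptr_t)y2;  // cast y2 to an integer
    uintptr_t xaddr = (uintptr_t)x;    // cast x to an integer
    if (xaddr == y2addr) {
        int *ptr = (int *)xaddr;       // round-trip: integer back to pointer
        *ptr = 1;                      // write 1 to it
    }
    return *x;
}
```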
This function takes as arguments two `restrict` pointers, `x` and `y`. We first write `0` to `*x`. Then we compute `y2` as pointing to the `int` right before `*y`, and cast both `y2` and `x` to integers. If the addresses we get are the same, we cast `xaddr` back to a pointer and write `1` to it. Finally, we return the value stored in `*x`. The `main` function simply calls `uwu` with two pointers pointing to the first two elements of an array. Note, in particular, that this will make `xaddr` and `y2addr` always equal: `&i[1] - 1` denotes the same address as `&i[0]`.
Now, let us imagine we run a few seemingly obvious optimizations on `uwu`:

- Inside the `if`, we can replace `xaddr` by `y2addr`, since they are both equal integers.
- Since this is a `static` function and the only caller makes `y2addr` always equal to `xaddr`, we know that the conditional in the `if` will always evaluate to `true`. We thus remove the test. (Alternatively, the same transformation can happen by inlining `main` while preserving the alias information, which LLVM explicitly aims for.)
- Finally, we observe that `xaddr` is unused, so we can remove it entirely.

After these steps, `uwu` now looks as follows:
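Reconstructed from the transformations just described, the optimized function might look like this:

```c
#include <stdint.h>

// After replacing xaddr by y2addr, removing the always-true test,
// and removing the now-unused xaddr:
static int uwu(int *restrict x, int *restrict y) {
    *x = 0;
    int *y2 = y - 1;
    uintptr_t y2addr = (uintptr_t)y2;
    int *ptr = (int *)y2addr;
    *ptr = 1;
    return *x;
}
```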
This might still look harmless. However, we can do even more! Notice how this function now consists of a store of `0` to `*x`, then a bunch of code that does not involve `x` at all, and then a load from `*x`. Since `x` is a `restrict` pointer, this “code that does not involve `x`” cannot possibly mutate `*x`, as that would be a violation of the `restrict` promise. Hence we can optimize the `return *x` to `return 0`. This kind of optimization is the primary reason to have `restrict` annotations in the first place, so this should be uncontroversial. Formally speaking: only pointers “derived from” `x` may access `*x`, and while the details of defining “derived from” are nasty, it should be clear that doing a bunch of operations that literally don’t involve `x` at all cannot by any stretch of the imagination produce a result that is “derived from” `x`. (If they could, `restrict` would be basically worthless.) Now, the whole program looks like this:
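A sketch of the fully optimized function; `main` is unchanged and still calls `uwu(&i[0], &i[1])`, so the program now always prints `0`:

```c
#include <stdint.h>

static int uwu(int *restrict x, int *restrict y) {
    *x = 0;
    int *y2 = y - 1;
    uintptr_t y2addr = (uintptr_t)y2;
    int *ptr = (int *)y2addr;
    *ptr = 1;   // at runtime this still writes to the same address as *x
    return 0;   // `return *x` has been replaced by the stored value 0
}
```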
We started out with a program that always prints `1`, and ended up with a program that always prints `0`. This is bad news. Our optimizations changed program behavior. That must not happen! What went wrong?
Fundamentally, this is the same situation as in the previous blog post: this example demonstrates that either the original program already had Undefined Behavior, or (at least) one of the optimizations is wrong.
However, the only possibly suspicious part of the original program is a pointer-integer-pointer round-trip – and if casting integers to pointers is allowed at all, surely that must work. I will, for the rest of this post, assume that replacing `x` by `(int*)(uintptr_t)x` is always allowed.
So, which of the optimizations is the wrong one?
The blame game
Remember what I said earlier about `restrict` and how it matters which pointer `ptr` is “derived from”? If we follow this lead, it may seem like the bogus optimization is the one that replaced `xaddr` by `y2addr`. After this transformation, `ptr` is obviously “derived from” `y2` (and thus transitively from `y`) and not `x`, and so obviously `uwu` (as called from `main`) is wrong, since we are doing two memory accesses (at least one of which is a write) to the same location, using two pointers that are “derived from” different `restrict` pointers.
However, that optimization doesn’t even have anything to do with pointers. It just replaces one equal integer by another! How can that possibly be incorrect?
What this example shows is that the notion of one value being “derived from” another is not very meaningful when considering an optimizing compiler.2
It is possible to “fix” this problem and have a notion of “derived from” that works correctly even with pointer-integer round-trips.
However, this requires saying that not only pointers but also integers carry provenance, such that casting a pointer to an integer can preserve the provenance.
We solved one problem and created many new ones.
For one, we have to stop doing optimizations that replace one `==`-equal integer by another, unless we know they carry no provenance. (Alternatively, we could say that `==`-comparing such integers is Undefined Behavior. But clearly we want to allow people to `==`-compare integers they obtained from pointer-integer casts, so this is not an option.)
That seems like a bad deal, since the code that benefits from such optimizations doesn’t even do anything shady – it is the pointer-manipulating code that is causing trouble.
The list doesn’t end here though, and because of that, this option was discarded by the C standardization process during its provenance work, and they ended up picking a “PNVI” model – provenance not via integers.
I think Rust should follow suit.
But, if it’s not the replacement of `xaddr` by `y2addr` that is wrong, then which optimization is the wrong one? I will argue that the incorrect optimization is the last one, which removed the unused `xaddr`. More specifically, the bad step was removing the cast `(uintptr_t)x`, irrespective of whether the result of that cast is used or not. Had this cast been preserved, it would have been a marker for the compiler to know that “the `restrict` guarantee of `x` ends here”, and it would not have done the final optimization of making `uwu` always return `0`.
Casts have a side-effect
How can it not be correct to remove an operation if its result is unused? If we take a step back, then in general, the answer is simple – if calling `foo()` has some side-effect on the global state, like changing the value of a global variable, then of course we have to keep the call to `foo` around even if we ignore its return value. But in this case, the operation in question is `(uintptr_t)x`, which has no side-effect – right? This is exactly the key lesson that this example teaches us: casting a pointer to an integer has a side-effect, and that side-effect has to be preserved even if we don’t care about the result of the cast (in this case, the reason we don’t care is that we already know that `y2` will cast to the same address).
To explain what that side-effect is, we have to get deep into the pointer provenance mindset. `x` and `y` are both pointers, so they carry provenance that tracks which memory they have permission to access. `x` has permission to access `i[0]` (with `i` being the array declared in `main`), and `y` has permission to access `i[1]`. `y2` just inherits the permission from `y`. But which permission does `ptr` get? Since integers do not carry provenance, the details of this permission information are lost during a pointer-integer cast, and have to somehow be ‘restored’ at the integer-pointer cast.
And that is exactly the point where our problems begin.
In the original program, we argued that doing a pointer-integer-pointer round-trip is allowed (as is the intention of the C standard).
It follows that `ptr` must pick up the permission from `x` (or else the write to `*ptr` would be Undefined Behavior: `x` is a `restrict` pointer, so nothing else can access that memory).
However, in the final program, `x` plays literally no role in computing `ptr`. It would be a disaster to say that `ptr` could pick up the permission of `x` – just imagine all that `y`-manipulating code being moved into a different function. Do we have to assume that any function we call can just do a cast to “steal” `x`’s permission? That would entirely defeat the point of `restrict` and make `noalias` optimizations basically impossible.
But how can it be okay for `ptr` to pick up `x`’s permission in the original program, and not okay for it to pick up the same permission in the final program? The key difference is that in the original program, `x` has been cast to an integer. When you cast a pointer to an integer, you are basically declaring that its permission is “up for grabs”, and any future integer-pointer cast may end up endowing the resulting pointer with this permission. We say that the permission has been “exposed”. And that is the side-effect that `(uintptr_t)x` has: it exposes the permission of `x`.
Yes, this way of resolving the conflict does mean we will lose some optimizations. We have to lose some optimization, as the example shows. However, the crucial difference to the previous section is that only code which casts pointers to integers is affected. This means we can keep the performance cost localized to code that does ‘tricky things’ around pointers – that code needs the compiler to be a bit conservative, but all the other code can be optimized without regard for the subtleties of pointer-integer-pointer round-trips. (Specifically, both pointer-integer and integer-pointer casts have to be treated as impure operations, but for different reasons. Pointer-integer casts have a side-effect as we have seen. Integer-pointer casts are non-deterministic – they can produce different results even for identical inputs. I moved the discussion of this point into the appendix below.)
Strict provenance: pointer-integer casts without side-effects
This may sound like bad news for low-level coding tricks like pointer tagging (storing a flag in the lowest bit of a pointer).
Do we have to optimize this code less just because of corner cases like the above?
As it turns out, no we don’t – there are some situations where it is perfectly fine to do a pointer-integer cast without having the “exposure” side-effect.
Specifically, this is the case if we never intend to cast the integer back to a pointer!
That might seem like a niche case, but it turns out that most of the time, we can avoid ‘bare’ integer-pointer casts, and instead use an operation like `with_addr` that explicitly specifies which provenance to use for the newly created pointer.4
This is more than enough for low-level pointer shenanigans like pointer tagging, as Gankra demonstrated.
Rust’s Strict Provenance experiment aims to determine whether we can use operations like `with_addr` to replace basically all integer-pointer casts. As part of Strict Provenance, Rust now has a second way of casting pointers to integers, `ptr.addr()`, which does not “expose” the permission of the underlying pointer, and hence can be treated like a pure operation!5 We can do shenanigans on the integer representation of a pointer and have all these juicy optimizations, as long as we don’t expect bare integer-pointer casts to work. As a bonus, this also makes Rust work nicely on CHERI without a 128-bit wide `usize`, and it helps Miri, too.
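As a small illustration (my own sketch, not a listing from the post), pointer tagging can be written entirely against the `addr`/`with_addr` API, so no integer is ever cast back to a pointer:

```rust
/// Pack a one-bit flag into the low bit of a pointer whose pointee's
/// alignment guarantees that bit is zero (e.g. u32). `with_addr`
/// changes only the address, preserving the original provenance.
fn tag(p: *mut u32, flag: bool) -> *mut u32 {
    p.with_addr(p.addr() | flag as usize)
}

/// Split a tagged pointer back into the untagged pointer and the flag.
/// No integer-pointer cast is involved, so nothing gets "exposed".
fn untag(p: *mut u32) -> (*mut u32, bool) {
    (p.with_addr(p.addr() & !1), p.addr() & 1 != 0)
}
```

Because the compiler sees that the result of `untag` derives its provenance from the tagged pointer, the full aliasing information is retained.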
But that is not the focus of this blog post, Gankra has already written most of what there is to say here. For this blog post, we are happy with what we learned about casts between pointers and integers. We have found a way to resolve the conflict uncovered by the example, while keeping performance cost (due to lost optimizations) confined to just the code that is truly ambiguous, and even found alternative APIs that can be used to replace most (all?) uses of ambiguous integer-pointer casts. All is well that ends well? Unfortunately, no – we are not quite done yet with pointer provenance nightmares.
Let’s do some transmutation magic
Languages like C or Rust typically allow programmers to re-interpret the underlying representation of a value at a different type. In Rust, this is often called “transmutation”; in C, a common term for this is “type punning”. The easiest way to do this in Rust is via the `mem::transmute` function, but alternatively transmutation is possible via `union`s or by casting a `*mut T` raw pointer to a differently-typed `*mut U`. In C, the easiest way is to use a `memcpy` between variables of different types, but `union`-based type punning is also sometimes allowed, as is loading data of arbitrary type using a character-typed pointer. (Other kinds of pointer-based type punning are forbidden by C’s strict aliasing rules, but Rust has no such restriction.)
The next question we are going to treat in this blog post is: what happens when we transmute a pointer to an integer?
Basically, imagine the original example after we replace the two casts (computing `xaddr` and `y2addr`) with calls to a function like `transmute_union` or `transmute_memcpy` that turns a pointer into an integer without any cast. All the same optimizations still apply – right?
This requires a compiler that can “see through”
memcpy or union field accesses, but that does not seem too much to ask.
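For concreteness, the two transmutation helpers could look as follows (sketches matching how the text refers to them; the exact original definitions may differ):

```c
#include <stdint.h>
#include <string.h>

// Pointer-integer transmutation via a union: no cast anywhere in sight.
uintptr_t transmute_union(int *ptr) {
    union { int *ptr; uintptr_t addr; } u;
    u.ptr = ptr;
    return u.addr;   // read the pointer's representation at integer type
}

// Pointer-integer transmutation via memcpy of the representation bytes.
uintptr_t transmute_memcpy(int *ptr) {
    uintptr_t ret;
    memcpy(&ret, &ptr, sizeof(ret));
    return ret;
}
```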
But now we have the same contradiction as before!
Either the original program already has Undefined Behavior, or one of the optimizations is incorrect.
Previously, we resolved this conundrum by saying that removing the “dead” cast `(uintptr_t)x` whose result is unused was incorrect, because that cast had the side-effect of “exposing” the permission of `x` to be picked up by future integer-pointer casts.
We could apply the same solution again, but this time, we would have to say that a `union` access (at integer type) or a `memcpy` (to an integer) can have an “expose” side-effect and hence cannot be entirely removed even if its result is unused. And that sounds quite bad! A cast like `(uintptr_t)x` only happens in code that does tricky things with pointers, so urging the compiler to be careful and optimize a bit less seems like a good idea (and at least in Rust, `x.addr()` even provides a way to opt out of this side-effect). But `union` accesses and calls to `memcpy` are all over the place. Do we now have to treat all of them as having side-effects? In Rust, due to the lack of a strict aliasing restriction (or in C with `-fno-strict-aliasing`), things get even worse, since literally any load of an integer from a raw pointer might be doing a pointer-integer transmutation and thus have the “expose” side-effect!
To me, speaking from a Rust perspective, that sounds like a bad idea. Sure, we want to make it as easy as possible to write low-level code in Rust, and that code sometimes has to do unspeakable things with pointers. But we don’t want the entire ecosystem to carry the cost of that decision by making it harder to remove raw pointer loads everywhere! So what are the alternatives?
Well, I would argue that the alternative is to treat the original program (after translation to Rust) as having Undefined Behavior. There are, to my knowledge, generally two reasons why people might want to transmute a pointer to an integer:
- Chaining many `as` casts is annoying, so calling `mem::transmute` might be shorter.
- The code doesn’t actually care about the integer per se; it just needs some way to hold arbitrary data in a container of a given type.

The first kind of code should just use `as` casts, and we should do what we can (via lints, for example) to identify such code and get it to use casts instead.6 Maybe we can adjust the cast rules to remove the need for chaining, or add some helper methods that can be used instead.
The second kind of code should not use integers!
Putting arbitrary data into an integer type is already somewhat suspicious due to the trouble around padding (if we want to make use of those shiny new `noundef` annotations that LLVM offers, we have to disallow transmuting data with padding to integer types). The right type to use for holding arbitrary data is `MaybeUninit`, so e.g. `[MaybeUninit<u8>; 1024]` for up to 1KiB of arbitrary data. `MaybeUninit` can also hold pointers with their provenance without any trouble.
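As a sketch (my own example, with a hypothetical helper name), here is a pointer stored into such a buffer and read back, provenance intact:

```rust
use std::mem::MaybeUninit;

/// Write a pointer into a `MaybeUninit<u8>` buffer and read it back.
/// Unlike an integer buffer, this does not strip provenance (and has
/// no trouble with uninitialized bytes or padding).
fn roundtrip_via_buffer(p: *mut i32) -> *mut i32 {
    let mut buf = [MaybeUninit::<u8>::uninit(); std::mem::size_of::<*mut i32>()];
    unsafe {
        // The buffer only has alignment 1, so use unaligned accesses.
        buf.as_mut_ptr().cast::<*mut i32>().write_unaligned(p);
        buf.as_ptr().cast::<*mut i32>().read_unaligned()
    }
}
```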
Because of that, I think we should move towards discouraging, deprecating, or even entirely disallowing pointer-integer transmutation in Rust.
That means a cast is the only legal way to turn a pointer into an integer, and after the discussion above we got our casts covered.
A first careful step has recently been taken on this journey: the `mem::transmute` documentation now cautions against using this function to turn pointers into integers.
A new hope for Rust
All in all, while the situation may be very complicated, I am actually more hopeful than ever that we can have both – a precise memory model for Rust and all the optimizations we can hope for! The three core pillars of this approach are:

- making pointer-integer casts “expose” the pointer’s provenance,
- using `ptr.addr()` to learn a pointer’s address without exposing its provenance,
- and disallowing pointer-integer transmutation.
Together, they imply that we can optimize “nice” code (that follows Strict Provenance, and does not “expose” or use integer-pointer casts) perfectly, without any risk of breaking code that does use pointer-integer round-trips. In the easiest possible approach, the compiler can simply treat pointer-integer and integer-pointer casts as calls to some opaque external function. Even if the rest of the compiler literally entirely ignores the existence of pointer-integer round-trips, it will still support such code correctly!
However, it’s not just compilers and optimizers that benefit from this approach. One of my biggest quests is giving a precise model of the Rust aliasing rules, and that task has just gotten infinitely easier. I used to worry a lot about pointer-integer round-trips while developing Stacked Borrows. This is the entire reason why all of this “untagged pointer” mess exists.
Under this brave new world, I can entirely ignore pointer-integer round-trips when designing memory models for Rust. Once that design is done, support for pointer-integer round-trips can be added as follows:
- When a pointer is cast to an integer, its provenance (whatever information it is that the model attaches to pointers – in Stacked Borrows, this is called the pointer’s tag) is marked as “exposed”.
- When an integer is cast to a pointer, we guess the provenance that the new pointer should have from among all the provenances that have been previously marked as “exposed”. (And I mean all of them, not just the ones that have been exposed “at the same address” or anything like that. People will inevitably do imperfect round-trips where the integer is being offset before being cast back to a pointer, and we should support that. As far as I know, this doesn’t really cost us anything in terms of optimizations.)
This “guess” does not need to be described by an algorithm. Through the magic that is formally known as angelic non-determinism, we can just wave our hands and say “the guess will be maximally in the programmer’s favor”: if any possible choice of (previously exposed) provenance makes the program work, then that is the provenance the new pointer will get. Only if all choices lead to Undefined Behavior, do we consider the program to be ill-defined. This may sound like cheating, but it is actually a legit technique in formal specifications.
Also note how it’s really just the integer-pointer casts that are making things so complicated here. If it weren’t for them, we would not even need all that “exposure” machinery. Pointer-integer casts on their own are perfectly fine! This is also why `with_addr` is such a nice API from a memory model perspective.7
This approach does have the disadvantage that it becomes near impossible to write a tool like Miri that precisely matches the specification, since Miri cannot possibly implement this “guessing” accurately. However, Miri can still properly check code that uses Strict Provenance operations, so hopefully this is just yet another incentive (besides the more precise specification and better optimization potential) for programmers to move their code away from integer-pointer casts and towards Strict Provenance. And who knows, maybe there is a clever way that Miri can actually get reasonably close to checking this model? It doesn’t have to be perfect to be useful.
What I particularly like about this approach is that it makes pointer-integer round-trips a purely local concern.
With an approach like Stacked Borrows “untagged pointers”, every memory operation has to define how it handles such pointers.
Complexity increases globally, and even when reasoning about Strict Provenance code we have to keep in mind that some pointers in other parts of the program might be “untagged”.
In contrast, this “guessing maximally in your favor”-based approach is entirely local; code that does not syntactically contain exposing pointer-integer or integer-pointer casts can literally forget that such casts exist at all.
This is true both for programmers thinking about their
unsafe code, and for compiler authors thinking about optimizations.
Compositionality at its finest!
But what about C?
I have talked a lot about my vision for “solving” pointer provenance in Rust. What about other languages? As you might have heard, C is moving towards making PNVI-ae-udi an official recommendation for how to interpret the C memory model. With C having so much more legacy code to care about and many more stakeholders than Rust does, this is an impressive achievement! How does it compare to all I said above?
First of all, the “ae” part of the name refers to “address-exposed” – that’s exactly the same mechanism as what I described above!
In fact, I have taken the liberty to use their terminology.
So, on this front, I see Rust and C as moving into the same direction, which is great.
(Now we just need to get LLVM to also move in that direction.)
I should mention that PNVI-ae-udi does not account for the
restrict modifier of C, so in a sense it is solving an easier problem than the Rust memory model which has no choice but to contend with interesting questions around aliasing restrictions.
However, if/when a more precise model of C with
restrict emerges, I don’t think they will be moving away from the “address-exposed” model – to the contrary, as I just argued this model means we can specify
restrict without giving a thought to pointer-integer round-trips.
The “udi” part of the name means “user disambiguation”, and is basically the mechanism by which an integer-pointer cast in C “guesses” the provenance it has to pick up.
The details of this are complicated, but the end-to-end effect is basically exactly the same as in the “best possible guess” model I have described above!
Here, too, my vision for Rust aligns very well with the direction C is taking.
(The set of valid guesses in C is just a lot more restricted since they do not have `wrapping_offset`, and the model does not cover `restrict`.
That means they can actually feasibly give an algorithm for how to do the guessing.
They don’t have to invoke scary terms like “angelic non-determinism”, but the end result is the same – and to me, the fact that it is equivalent to angelic non-determinism is what justifies this as a reasonable semantics.
Presenting this as a concrete algorithm to pick a suitable provenance is then just a stylistic choice.)
Kudos go to Michael Sammler for opening my eyes to this interpretation of “user disambiguation”, and arguing that angelic non-determinism might not be such a crazy idea after all.
What is left is the question of how to handle pointer-integer transmutation, and this is where the roads are forking.
PNVI-ae-udi explicitly says loading from a union field at integer type exposes the provenance of the pointer being loaded, if any.
So, the example with `transmute_union` would be allowed, meaning the optimization of removing the “dead” load from the `union` would not (in general) be allowed. The same goes for `transmute_memcpy`, where the proposal says that when we access the contents of `ret` at type `uintptr_t`, that will again implicitly expose the provenance of the pointer.
I think there are several reasons why this choice makes sense for C, that do not apply to Rust:
- There is a lot of legacy code. A LOT.
- There is no alternative like `MaybeUninit` that could be used to hold data without losing provenance.
- Strict aliasing means that not all loads at integer type have to worry about provenance; only loads at character type are affected.
On the other hand, I am afraid that this choice might come with a significant cost in terms of lost optimizations. As the example above shows, the compiler has to be very careful when removing any operation that can expose a provenance, since there might be integer-pointer casts later that rely on this. (Of course, until this is actually implemented in GCC or LLVM, it will be hard to know the actual cost.) Because of all that, I think it is reasonable for Rust to make a different choice here.
This was a long post, but I hope you found it worth reading. :) To summarize, my concrete calls for action in Rust are:
- Code that uses pointer-integer transmutation should migrate to regular casts or `MaybeUninit` transmutation ASAP. I think we should declare pointer-integer transmutation Undefined Behavior and not accept such code as well-defined.
- Code that uses pointer-integer or integer-pointer casts might consider migrating to the Strict Provenance APIs. You can do this even on stable with this polyfill crate. However, such code is and remains well-defined. It just might not be optimized as well as one could hope, it might not compile on CHERI, and Miri will probably miss some bugs. If there are important use-cases not covered by Strict Provenance, we’d like to hear about them!
This is a large undertaking and will require a lot of work! However, at the end of this road is a language with a coherent, well-defined memory model and support for doing unspeakable things to pointers without incurring a (reasoning or optimization) cost on code that is perfectly nice to its pointers. Let us work towards this future together. :)
Integer-pointer casts are not pure, either
I promised an example of how integer-pointer casts are “impure”, in the sense that two casts with the same input integer can produce different pointers:
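A reconstruction of that example from the description below (I reuse the `restrict` setup of the first example so that `x` and `y` carry disjoint provenance; details may differ from the original listing):

```c
#include <stdint.h>

static int example(int *restrict x, int *restrict y) {
    uintptr_t xaddr = (uintptr_t)x;
    uintptr_t y2addr = (uintptr_t)(y - 1);
    int *xcopy = (int *)xaddr;        // used to access *x, i.e. i[0]
    int *ycopy = (int *)y2addr + 1;   // "(y-1)+1": used to access *y, i.e. i[1]
    *xcopy = 1;
    *ycopy = 2;
    return *xcopy + *ycopy;
}
```

Called as `example(&i[0], &i[1])`, the two integer-pointer casts receive the same integer, yet must produce pointers with different provenance.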
If we ignore the pointer-integer round-trips, this code uses `xcopy` to access `i[0]`, while using `ycopy` to access `i[1]`, so this should be uncontroversial. `ycopy` is computed via `(y-1)+1`, but hopefully nobody disagrees with that. Then we just add some pointer-integer round-trips. But now, consider that `(int*)xaddr` and `(int*)y2addr` take the same integer as input! If the compiler were to treat integer-pointer casts as a pure, deterministic operation, it could replace one of these casts by the other. However, that would mean `xcopy` and `ycopy` have the same provenance! And there exists no provenance in this program that has access to both `i[0]` and `i[1]`. So, either the cast has to synthesize a new provenance that has never been seen before, or doing common subexpression elimination on integer-pointer casts is wrong.
My personal stance is that we should not let the cast synthesize a new provenance. This would entirely lose the benefit I discussed above of making pointer-integer round-trips a local concern – if these round-trips produce new, never-before-seen kinds of provenance, then the entire rest of the memory model has to define how it deals with those provenances. We already have no choice but treat pointer-integer casts as an operation with side-effects; let’s just do the same with integer-pointer casts and remain sure that no matter what the aliasing rules are, they will work fine even in the presence of pointer-integer round-trips.
What about LLVM?
I discussed above how my vision for Rust relates to the direction C is moving towards. What does that mean for the design space of LLVM? Which changes would have to be made to fix (potential) miscompilations in LLVM and to make it compatible with these ideas for C and/or Rust? Here’s the list of open problems I am aware of:
- LLVM would have to stop removing `inttoptr(ptrtoint(_))`, and stop doing replacement of `==`-equal pointers by one another.
- As the first example shows, LLVM also needs to treat `ptrtoint` as a side-effecting operation that has to be kept around even when its result is unused. (Of course, as with everything I say here, there can be special cases where the old optimizations are still correct, but they need extra justification.)
- I think LLVM should also treat `inttoptr` as a side-effecting (and, in particular, non-deterministic) operation, as per the last example. However, this could possibly be avoided with a `noalias` model that specifically accounts for new kinds of provenance being synthesized by casts. (I am being vague here since I don’t know what that provenance needs to look like.)
So far, this all applies to LLVM as a Rust and C backend equally, so I don’t think there are any good alternatives.
On the plus side, adopting this strategy for `ptrtoint` means that the recent LLVM “Full Restrict Support” can also handle pointer-integer round-trips “for free”! Adding an operation like `copy_alloc_id` to LLVM is not strictly necessary, since it can be implemented with `ptrtoint` and `getelementptr`. However, optimizations don’t seem to always deal well with that pattern, so it might still be a good idea to add this as a primitive operation to LLVM.
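In C-like notation, the pattern being discussed might be sketched as (hypothetical signature, following the `with_addr`-style semantics described above):

```c
#include <stdint.h>

// copy_alloc_id(addr, p): a pointer with the address `addr` but the
// provenance (allocation ID) of `p`. Expressed via a pointer-integer
// cast on p plus byte offsetting; no integer-pointer cast is needed.
static int *copy_alloc_id(uintptr_t addr, int *p) {
    return (int *)((char *)p + (addr - (uintptr_t)p));
}
```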
Where things become more subtle is around pointer-integer transmutation.
If LLVM wants to keep doing replacement of `==`-equal integers (which I strongly assume to be the case), something needs to give: my first example, with casts replaced by transmutation, shows a miscompilation. If we focus on doing an `i64` load of a pointer value (e.g. as in the LLVM IR produced by `transmute_union`, or pointer-based transmutation in Rust), what are the options?
Here are the ones I have seen so far (but there might be more, of course):
- The load could be said to behave like `ptrtoint`. This means it strips provenance and, as a side-effect, it also exposes the pointer.
- The load could be said to just strip provenance without exposing the pointer.
- The load could be simply UB, or return `poison`.
- The load could produce an integer with provenance, and moreover any computation on such an integer (including `icmp`) is UB (or returns `poison`). This has some subtle consequences, but they might be mostly harmless. For example, `x` can no longer be replaced by `x+0`. We cannot assume that it is safe to compare arbitrary `i64` values and branch on the result, even if they are `noundef`. Or maybe `noundef` also excludes provenance? This is certainly the least obvious alternative.
Except for the first option, these all say that my example with transmutation instead of the pointer-integer casts is UB, which avoids the optimization problems that arise from accepting that example. That is fine for my vision for Rust, but a problem for C with PNVI-ae-udi. Only the first option is compatible with that, but that option also means entirely removing a load is non-trivial even if its result is unused! I hope we can avoid that cost for Rust.
Another interesting difference between these options is whether the resulting semantics are “monotone” with respect to provenance: is “increasing” the provenance of a value (i.e., letting it access more memory) a legal program transformation?
With the last two options, it is not, since adding provenance to a value that did not have it can introduce Undefined Behavior.
The first two options are “monotone” in this sense, which seems like a nice property.
(This is comparable to how the semantics are “monotone” with respect to `poison`: replacing a `poison` value by any fixed value is a legal program transformation. For `poison` this is crucially important; for provenance it seems more like a sanity check of the semantics.)
In all of these cases except the last one, LLVM would probably need something like a byte type so that a load of arbitrary data (including a pointer with provenance) can be done without losing the provenance attached to the data.
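Rust's closest existing analogue to such a byte type is `MaybeUninit<u8>`; as a hedged sketch of the idea, copying data at that type does not impose integer semantics on the bytes:

```rust
use core::mem::MaybeUninit;

// Copying data as `MaybeUninit<u8>` does not assert that the bytes are
// initialized plain integers, so (conceptually) provenance stored in the
// bytes need not be lost -- unlike a copy at type `u8` or `i64`, which
// forces the data to be a "real" integer.
fn copy_bytes(src: &[MaybeUninit<u8>], dst: &mut [MaybeUninit<u8>]) {
    dst.copy_from_slice(src);
}
```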
A similar question arises for doing a pointer-typed load of a bare integer (integer-pointer transmutation):
- The load could have the effects of an `inttoptr`. This is less clearly bad than a `ptrtoint`, but it is still tricky since (at least without extra work) `inttoptr` is non-deterministic and depends on the global set of exposed provenances (so it cannot be easily reordered up across potentially exposing operations). I also have another example showing that if both pointer-integer transmutation and integer-pointer transmutation work like the corresponding casts (i.e., if the first of my options is picked both for loads of pointers at integer type and for loads of integers at pointer type), then more optimizations fail: removing a store that follows a load and just writes back the same value that has just been loaded is no longer correct. Yet, I think this is what PNVI-ae-udi mandates. Again, I hope Rust can opt out of this.
- The load could create a pointer with “invalid” provenance. That means transmuting a pointer to an integer and back produces a pointer that cannot be used to access memory, but it avoids all the analysis difficulties that come with an `inttoptr`. This is what I think would be best for Rust.
- The load could produce `poison`, but I see no good reason for doing that.
Since LLVM generally errs on the side of delaying UB as long as possible when that does not conflict with optimizations, the second option for both questions feels most “on-brand” to me personally – but in the end, these are some hard choices that the LLVM community will have to make. I can help evaluate these trade-offs by giving structure to the design space and pointing out the inevitable consequences of certain decisions, but I can only lend a hand here – while I think and care a lot about LLVM semantics, I haven’t done any direct work on LLVM myself. I am also not enough of an expert to judge which optimizations are important or what the performance impact of the various options would be, so I hope we can get people with that kind of background involved in the discussion as well. For the sake of the entire ecosystem, I mostly hope that LLVM will make some choice, so that we can eventually leave the limbo state we are currently in.
The exact semantics of `restrict` are subtle and I am not aware of a formal definition. (Sadly, the one in the C standard does not really work, as you can see when you try to apply it to my example.) My understanding is as follows: `restrict` promises that this pointer, and all pointers derived from it, will not be used to perform memory accesses that conflict with any access done by pointers outside of that set. A “conflict” arises when two memory accesses overlap and at least one of them is a write. This promise is scoped to the duration of the function call when `restrict` appears in an argument type; I have no good idea for what the scope of the promise is in other situations. ↩
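For intuition, Rust's `&mut` makes a promise of the same shape as the one described here, scoped to the function call (this is an analogy, not a definition of `restrict`):

```rust
// For the duration of this call, `x` and `y` must not conflict with any
// other access to their pointees -- in particular they cannot alias each
// other, just like two `restrict`-qualified arguments in C. The compiler
// may therefore assume `*x` is unchanged by the write to `*y`.
fn bump_both(x: &mut i32, y: &mut i32) -> i32 {
    *x += 1;
    *y += 1;
    *x + *y
}
```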
This is, in fact, a common problem – it is what makes the `consume` memory order for atomic accesses basically impossible to specify in a programming language! While instruction sets often have very explicit rules about which instructions are assumed to “depend” on which previous instructions, that notion is hard to rationalize in a language where the compiler can replace `a + (b-a)` by `b` – and thus remove dependencies from the program. ↩
As mentioned in a previous footnote, this is not actually how `restrict` works. The exact set of locations these pointers can access is determined dynamically, and the only constraint is that they cannot be used to access the same location (except if both are just doing a load). However, I carefully picked this example so that these subtleties should not change anything. ↩
`with_addr` has been unstably added to the Rust standard library very recently. Such an operation has been floating around in various discussions in the Rust community for quite a while, and it has even made it into an academic paper under the name of `copy_alloc_id`. Who knows, maybe one day it will find its way into the C standard as well. :) ↩
My lawyers advised me to say that all of this is provisional, and the specification for `addr` and all other Strict Provenance operations might change until their eventual stabilization. ↩
We could even, if we are really desperate, decide to special-case `mem::transmute::<*const T, usize>` (and likewise for `*mut T`) and declare that it does have the “expose” side-effect if the current crate is using some old edition. Sometimes, you have to do ugly things to move forwards. This would not apply to `union`- or raw-pointer-based transmutation. ↩
Even more specifically, it’s the integer-pointer casts that occur as part of a pointer-integer round-trip that are a problem. If you are just casting an integer constant to a pointer because on your platform that’s where some fixed memory region lies, and if that memory is entirely outside of the global, stack, and heap allocations that the Rust language itself is aware of, we can still be friends. ↩
Posted on Ralf's Ramblings on Apr 11, 2022 and licensed under CC BY-SA 4.0.
Comments? Drop me a mail or leave a note on reddit!