What are Rust's exact auto-dereferencing rules?

kFYatek · Feb 14, 2015

I'm learning/experimenting with Rust, and for all the elegance I find in this language, there is one peculiarity that baffles me and seems totally out of place.

Rust automatically dereferences pointers when making method calls. I made some tests to determine the exact behaviour:

struct X { val: i32 }
impl std::ops::Deref for X {
    type Target = i32;
    fn deref(&self) -> &i32 { &self.val }
}

trait M { fn m(self); }
impl M for i32   { fn m(self) { println!("i32::m()");  } }
impl M for X     { fn m(self) { println!("X::m()");    } }
impl M for &X    { fn m(self) { println!("&X::m()");   } }
impl M for &&X   { fn m(self) { println!("&&X::m()");  } }
impl M for &&&X  { fn m(self) { println!("&&&X::m()"); } }

trait RefM { fn refm(&self); }
impl RefM for i32  { fn refm(&self) { println!("i32::refm()");  } }
impl RefM for X    { fn refm(&self) { println!("X::refm()");    } }
impl RefM for &X   { fn refm(&self) { println!("&X::refm()");   } }
impl RefM for &&X  { fn refm(&self) { println!("&&X::refm()");  } }
impl RefM for &&&X { fn refm(&self) { println!("&&&X::refm()"); } }


struct Y { val: i32 }
impl std::ops::Deref for Y {
    type Target = i32;
    fn deref(&self) -> &i32 { &self.val }
}

struct Z { val: Y }
impl std::ops::Deref for Z {
    type Target = Y;
    fn deref(&self) -> &Y { &self.val }
}


#[derive(Clone, Copy)]
struct A;

impl M for    A { fn m(self) { println!("A::m()");    } }
impl M for &&&A { fn m(self) { println!("&&&A::m()"); } }

impl RefM for    A { fn refm(&self) { println!("A::refm()");    } }
impl RefM for &&&A { fn refm(&self) { println!("&&&A::refm()"); } }


fn main() {
    // I'll use @ to denote left side of the dot operator
    (*X{val:42}).m();        // i32::m()    , Self == @
    X{val:42}.m();           // X::m()      , Self == @
    (&X{val:42}).m();        // &X::m()     , Self == @
    (&&X{val:42}).m();       // &&X::m()    , Self == @
    (&&&X{val:42}).m();      // &&&X::m()   , Self == @
    (&&&&X{val:42}).m();     // &&&X::m()   , Self == *@
    (&&&&&X{val:42}).m();    // &&&X::m()   , Self == **@
    println!("-------------------------");

    (*X{val:42}).refm();     // i32::refm() , Self == @
    X{val:42}.refm();        // X::refm()   , Self == @
    (&X{val:42}).refm();     // X::refm()   , Self == *@
    (&&X{val:42}).refm();    // &X::refm()  , Self == *@
    (&&&X{val:42}).refm();   // &&X::refm() , Self == *@
    (&&&&X{val:42}).refm();  // &&&X::refm(), Self == *@
    (&&&&&X{val:42}).refm(); // &&&X::refm(), Self == **@
    println!("-------------------------");

    Y{val:42}.refm();        // i32::refm() , Self == *@
    Z{val:Y{val:42}}.refm(); // i32::refm() , Self == **@
    println!("-------------------------");

    A.m();                   // A::m()      , Self == @
    // without the Copy trait, (&A).m() would be a compilation error:
    // cannot move out of borrowed content
    (&A).m();                // A::m()      , Self == *@
    (&&A).m();               // &&&A::m()   , Self == &@
    (&&&A).m();              // &&&A::m()   , Self == @
    A.refm();                // A::refm()   , Self == @
    (&A).refm();             // A::refm()   , Self == *@
    (&&A).refm();            // A::refm()   , Self == **@
    (&&&A).refm();           // &&&A::refm(), Self == @
}

(Playground)

So, it seems that, more or less:

  • The compiler will insert as many dereference operators as necessary to invoke a method.
  • The compiler, when resolving methods declared using &self (call-by-reference):
    • First tries calling for a single dereference of self
    • Then tries calling for the exact type of self
    • Then tries inserting as many dereference operators as necessary for a match
  • Methods declared using self (call-by-value) for type T behave as if they were declared using &self (call-by-reference) for type &T and called on the reference to whatever is on the left side of the dot operator.
  • The above rules are first tried with raw built-in dereferencing, and if there's no match, the overload with the Deref trait is used.

What are the exact auto-dereferencing rules? Can anyone give any formal rationale for such a design decision?

Answer

huon · Feb 17, 2015

Your pseudo-code is pretty much correct. For this example, suppose we had a method call foo.bar() where foo: T. I'm going to use the fully qualified syntax (FQS) to be unambiguous about what type the method is being called with, e.g. A::bar(foo) or A::bar(&***foo). I'm just going to write a pile of random capital letters; each one is just some arbitrary type/trait, except that T is always the type of the original variable foo that the method is called on.
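
For illustration (a minimal, hypothetical sketch, not code from the question), the sugared call and its FQS spelling resolve to the same method:

struct T { val: i32 }

trait Bar { fn bar(&self); }
impl Bar for T { fn bar(&self) { println!("T::bar, val = {}", self.val); } }

fn main() {
    let foo = T { val: 1 };
    foo.bar();             // sugared: the compiler inserts the auto-ref
    Bar::bar(&foo);        // FQS: the receiver is written out explicitly
    <T as Bar>::bar(&foo); // fully explicit: both type and trait are named
}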

The core of the algorithm is:

  • For each "dereference step" U (that is, first U = T, then U = *T, and so on)
    1. if there's a method bar where the receiver type (the type of self in the method) matches U exactly, use it (a "by value method")
    2. otherwise, add one auto-ref (take & or &mut of the receiver), and, if some method's receiver matches &U, use it (an "autoref'd method")

Notably, everything considers the "receiver type" of the method, not the Self type of the trait, i.e. impl ... for Foo { fn method(&self) {} } thinks about &Foo when matching the method, and fn method2(&mut self) would think about &mut Foo when matching.
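
For instance (a small sketch with made-up trait names), both kinds of receiver can be observed directly:

struct Foo;

trait ByRef    { fn method(&self); }      // receiver type: &Foo
trait ByRefMut { fn method2(&mut self); } // receiver type: &mut Foo

impl ByRef    for Foo { fn method(&self)      { println!("matched as &Foo");     } }
impl ByRefMut for Foo { fn method2(&mut self) { println!("matched as &mut Foo"); } }

fn main() {
    let mut foo = Foo;
    foo.method();  // auto-ref inserts `&`:    ByRef::method(&foo)
    foo.method2(); // auto-ref inserts `&mut`: ByRefMut::method2(&mut foo)
}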

It is an error if there are ever multiple trait methods valid in the inner steps (that is, there can only be zero or one trait method valid in each of 1. or 2., but there can be one valid for each: the one from 1. is taken first), and inherent methods take precedence over trait ones. It's also an error if we get to the end of the loop without finding anything that matches. It is also an error to have recursive Deref implementations, which would make the loop infinite: they hit the "recursion limit".
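
For example, a recursive Deref like the following compiles on its own, but any method call that has to dereference through it hits the recursion limit (a minimal sketch; the type name is made up):

use std::ops::Deref;

struct Loopy;

// *Loopy is Loopy again, so the dereference loop never terminates
impl Deref for Loopy {
    type Target = Loopy;
    fn deref(&self) -> &Loopy { self }
}

fn main() {
    let _l = Loopy;
    // Uncommenting the next line fails with an error like
    // "reached the recursion limit while auto-dereferencing `Loopy`"
    // _l.no_such_method();
}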

These rules seem to do-what-I-mean in most circumstances, although having the ability to write the unambiguous FQS form is very useful in some edge cases, and for sensible error messages for macro-generated code.
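
One such edge case is two traits in scope providing a method with the same name (a sketch with made-up names); only the FQS form can disambiguate:

trait Quack { fn speak(&self); }
trait Woof  { fn speak(&self); }

struct Pet;
impl Quack for Pet { fn speak(&self) { println!("quack"); } }
impl Woof  for Pet { fn speak(&self) { println!("woof");  } }

fn main() {
    let p = Pet;
    // p.speak();             // error: multiple applicable items in scope
    Quack::speak(&p);         // FQS names the trait...
    <Pet as Woof>::speak(&p); // ...or both the type and the trait
}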

Only one auto-reference is added because

  • if there were no bound, things would get bad/slow, since every type can have an arbitrary number of references taken
  • taking one reference &foo retains a strong connection to foo (it is the address of foo itself), but taking more starts to lose it: &&foo is the address of some temporary variable on the stack that stores &foo (see the sketch below).
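
This difference is observable by printing addresses (a minimal runnable sketch):

fn main() {
    let foo = 42;
    let r: &i32 = &foo;    // the address of foo itself
    let rr: &&i32 = &&foo; // the address of a stack temporary holding &foo
    println!("r   = {:p}", r);   // foo's address
    println!("*rr = {:p}", *rr); // also foo's address...
    println!("rr  = {:p}", rr);  // ...but rr itself points at the temporary
}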

Examples

Suppose we have a call foo.refm(). If foo has type:

  • X, then we start with U = X, refm has receiver type &..., so step 1 doesn't match, taking an auto-ref gives us &X, and this does match (with Self = X), so the call is RefM::refm(&foo)
  • &X, starts with U = &X, which matches &self in the first step (with Self = X), and so the call is RefM::refm(foo)
  • &&&&&X, this doesn't match either step (the trait isn't implemented for &&&&X or &&&&&X), so we dereference once to get U = &&&&X, which matches step 1 (with Self = &&&X) and the call is RefM::refm(*foo)
  • Z, doesn't match either step, so it is dereferenced once, to get Y, which also doesn't match, so it's dereferenced again, to get i32, which doesn't match step 1, but does match after autorefing, so the call is RefM::refm(&**foo).
  • &&A, step 1 doesn't match and neither does step 2, since the trait is not implemented for &A (needed for 1) or &&A (needed for 2), so it is dereferenced to &A, which matches step 1, with Self = A
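
These resolutions can be double-checked by spelling out the FQS calls (a runnable sketch that restates just the definitions it needs from the question):

use std::ops::Deref;

#[allow(dead_code)]
struct X { val: i32 }
struct Y { val: i32 }
struct Z { val: Y }

impl Deref for Y { type Target = i32; fn deref(&self) -> &i32 { &self.val } }
impl Deref for Z { type Target = Y;   fn deref(&self) -> &Y   { &self.val } }

trait RefM { fn refm(&self); }
impl RefM for i32  { fn refm(&self) { println!("i32::refm()");  } }
impl RefM for &&&X { fn refm(&self) { println!("&&&X::refm()"); } }

fn main() {
    let foo = &&&&&X { val: 42 };
    foo.refm();       // &&&X::refm(), found after one deref
    RefM::refm(*foo); // the same call written out: Self == &&&X

    let z = Z { val: Y { val: 42 } };
    z.refm();         // i32::refm(), found after two derefs plus an auto-ref
    RefM::refm(&**z); // the same call written out: Self == i32
}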

Suppose we have foo.m(), and that A isn't Copy. If foo has type:

  • A, then U = A matches self directly so the call is M::m(foo) with Self = A
  • &A, then step 1 doesn't match, and neither does step 2 (neither &A nor &&A implements the trait), so it is dereferenced to A, which does match, but M::m(*foo) requires taking A by value and thus moving out of foo, hence the error.
  • &&A, step 1 doesn't match, but autorefing gives &&&A, which does match step 2, so the call is M::m(&foo) with Self = &&&A.
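
And a runnable sketch of this last group, with the non-Copy A assumed here:

struct A; // deliberately not Copy

trait M { fn m(self); }
impl M for A    { fn m(self) { println!("A::m()");    } }
impl M for &&&A { fn m(self) { println!("&&&A::m()"); } }

fn main() {
    let a = A;
    a.m();     // M::m(a): Self == A, `a` is moved

    let rra = &&A;
    rra.m();   // auto-ref to &&&A: M::m(&rra), Self == &&&A

    // let ra = &A;
    // ra.m(); // error[E0507]: cannot move out of `*ra`
}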

(This answer is based on the code, and is reasonably close to the (slightly outdated) README. Niko Matsakis, the main author of this part of the compiler/language, also glanced over this answer.)