How do I write a maintainable, fast, compile-time bit-mask in C++?

Alex Reinking · Feb 20, 2019 · Viewed 8.5k times

I have some code that is more or less like this:

#include <bitset>
#include <type_traits> // for std::remove_reference

enum Flags { A = 1, B = 2, C = 3, D = 5,
             E = 8, F = 13, G = 21, H,
             I, J, K, L, M, N, O };

void apply_known_mask(std::bitset<64> &bits) {
    const Flags important_bits[] = { B, D, E, H, K, M, L, O };
    std::remove_reference<decltype(bits)>::type mask{};
    for (const auto& bit : important_bits) {
        mask.set(bit);
    }

    bits &= mask;
}

Clang >= 3.6 does the smart thing and compiles this to a single and instruction (which then gets inlined everywhere else):

apply_known_mask(std::bitset<64ul>&):  # @apply_known_mask(std::bitset<64ul>&)
        and     qword ptr [rdi], 775946532
        ret

But every version of GCC I've tried compiles this to an enormous mess that includes error handling that should be statically DCE'd. In other code, it will even place the important_bits equivalent as data in line with the code!

.LC0:
        .string "bitset::set"
.LC1:
        .string "%s: __position (which is %zu) >= _Nb (which is %zu)"
apply_known_mask(std::bitset<64ul>&):
        sub     rsp, 40
        xor     esi, esi
        mov     ecx, 2
        movabs  rax, 21474836482
        mov     QWORD PTR [rsp], rax
        mov     r8d, 1
        movabs  rax, 94489280520
        mov     QWORD PTR [rsp+8], rax
        movabs  rax, 115964117017
        mov     QWORD PTR [rsp+16], rax
        movabs  rax, 124554051610
        mov     QWORD PTR [rsp+24], rax
        mov     rax, rsp
        jmp     .L2
.L3:
        mov     edx, DWORD PTR [rax]
        mov     rcx, rdx
        cmp     edx, 63
        ja      .L7
.L2:
        mov     rdx, r8
        add     rax, 4
        sal     rdx, cl
        lea     rcx, [rsp+32]
        or      rsi, rdx
        cmp     rax, rcx
        jne     .L3
        and     QWORD PTR [rdi], rsi
        add     rsp, 40
        ret
.L7:
        mov     ecx, 64
        mov     esi, OFFSET FLAT:.LC0
        mov     edi, OFFSET FLAT:.LC1
        xor     eax, eax
        call    std::__throw_out_of_range_fmt(char const*, ...)

How should I write this code so that both compilers can do the right thing? Failing that, how should I write this so that it remains clear, fast, and maintainable?

Answer

Yakk - Adam Nevraumont · Feb 20, 2019

The best version is the C++17 one:

template< unsigned char... indexes >
constexpr unsigned long long mask(){
  return ((1ull<<indexes)|...|0ull);
}

Then

void apply_known_mask(std::bitset<64> &bits) {
  constexpr auto m = mask<B,D,E,H,K,M,L,O>();
  bits &= m;
}
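
Since mask() is constexpr, you can also sanity-check the result at compile time. A minimal sketch, assuming the enum from the question; the expected constant 775946532 is the immediate operand visible in clang's and instruction above:

// Compile-time check that the fold-expression mask matches the constant
// clang emitted in the question's `and qword ptr [rdi], 775946532`.
static_assert(mask<B, D, E, H, K, M, L, O>() == 775946532ull,
              "mask should equal the constant from clang's output");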

Back in C++14, we can do this strange trick:

template< unsigned char... indexes >
constexpr unsigned long long mask(){
  auto r = 0ull;
  using discard_t = int[]; // data never used
  // value never used:
  discard_t discard = {0,(void(
    r |= (1ull << indexes) // side effect, used
  ),0)...};
  (void)discard; // block unused var warnings
  return r;
}

or, if we are stuck with C++11, we can solve it recursively:

constexpr unsigned long long mask(){
  return 0;
}
template<class...Tail>
constexpr unsigned long long mask(unsigned char b0, Tail...tail){
  return (1ull<<b0) | mask(tail...);
}
template< unsigned char... indexes >
constexpr unsigned long long mask(){
  return mask(indexes...);
}
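
To see how the recursion bottoms out, here is a hand-unrolled sketch for a hypothetical pack <2, 5, 8> (indexes chosen purely for illustration):

// mask<2, 5, 8>() forwards to mask(2, 5, 8), which peels one index per call:
//   mask(2, 5, 8) == (1ull << 2) | mask(5, 8)
//   mask(5, 8)    == (1ull << 5) | mask(8)
//   mask(8)       == (1ull << 8) | mask()   // non-template overload: 0
static_assert(mask<2, 5, 8>() == ((1ull << 2) | (1ull << 5) | (1ull << 8)),
              "recursive mask builds the expected value");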

Godbolt with all 3 -- you can switch the CPP_VERSION define and get identical assembly.

In practice I'd use the most modern version I could. C++14 beats C++11 because we don't have recursion, and hence don't have O(n^2) symbol length (which can explode compile time and compiler memory usage); C++17 beats C++14 because the compiler doesn't have to dead-code-eliminate that array, and the array trick is just plain ugly.

Of these, the C++14 version is the most confusing. Here we create an anonymous array of all 0s, constructing our result as a side effect along the way, then discard the array. The discarded array has a number of 0s in it equal to the size of our pack, plus 1 (which we add so we can handle empty packs).


Here is a detailed explanation of what the C++14 version is doing. This is a trick/hack, and the fact that you have to do this to expand parameter packs efficiently in C++14 is one of the reasons why fold expressions were added in C++17.

It is best understood from the inside out:

    r |= (1ull << indexes) // side effect, used

this just ORs 1ull << indexes into r for one particular index. indexes is a parameter pack, so we'll have to expand it.

The rest of the work is about providing a context in which the parameter pack indexes can be expanded.

One step out:

(void(
    r |= (1ull << indexes) // side effect, used
  ),0)

here we cast our expression to void, indicating we don't care about its return value (we just want the side effect of setting r -- in C++, expressions like a |= b also return the value they set a to).

Then we use the comma operator , followed by 0 to discard the void "value" and yield the value 0. So the whole thing is an expression whose value is 0, and which, as a side effect of producing that 0, sets a bit in r.
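
As an aside, if the (void(expr), 0) pattern looks alien in isolation, this tiny standalone sketch shows the same trick with a single illustrative index of 3 (names are made up for the sketch):

#include <cstdio>

int main() {
    unsigned long long r = 0;
    // Evaluate the |= for its side effect, throw its value away via void,
    // then the comma operator makes the whole expression evaluate to 0.
    int x = (void(r |= (1ull << 3)), 0);
    std::printf("x = %d, r = %llu\n", x, r);  // prints: x = 0, r = 8
}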

  int discard[] = {0,(void(
    r |= (1ull << indexes) // side effect, used
  ),0)...};

At this point, we expand the parameter pack indexes. So we get:

 {
    0,
    (expression that sets a bit and returns 0),
    (expression that sets a bit and returns 0),
    [...]
    (expression that sets a bit and returns 0),
  }

inside the {}. This use of , is not the comma operator, but rather the array element separator. There are sizeof...(indexes)+1 zeros, and evaluating them sets the bits in r as a side effect. We then use that braced list to initialize the array discard.
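
Concretely, for a hypothetical pack <2, 5, 8>, the expansion is equivalent to writing the array out by hand (a sketch with a made-up function name, not the answer's code):

constexpr unsigned long long mask_2_5_8() {
    auto r = 0ull;
    // Hand-expanded form of {0, (void(r |= (1ull << indexes)), 0)...}
    // for the pack <2, 5, 8>: sizeof...(indexes) + 1 == 4 zeros in total.
    int discard[] = {
        0,
        (void(r |= (1ull << 2)), 0),
        (void(r |= (1ull << 5)), 0),
        (void(r |= (1ull << 8)), 0),
    };
    (void)discard; // block unused var warnings
    return r;
}

static_assert(mask_2_5_8() == ((1ull << 2) | (1ull << 5) | (1ull << 8)),
              "hand-expanded array trick builds the expected mask");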

Next we cast discard to void -- most compilers will warn you if you create a variable and never read it, but none of them will complain if you cast it to void. It is a way of saying "yes, I know, I'm not using this", so it suppresses the warning.
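
A tiny illustration of that idiom (function and variable names are made up for the sketch):

void demo() {
    int unused = 42;  // alone, this typically triggers an "unused variable" warning
    (void)unused;     // the cast to void counts as a use and silences the warning
}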