A lot of times I see flag enum declarations that use hexadecimal values. For example:
[Flags]
public enum MyEnum
{
    None = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x10
}
When I declare an enum, I usually declare it like this:
[Flags]
public enum MyEnum
{
    None = 0,
    Flag1 = 1,
    Flag2 = 2,
    Flag3 = 4,
    Flag4 = 8,
    Flag5 = 16
}
Is there a reason or rationale as to why some people choose to write the values in hexadecimal rather than decimal? The way I see it, it's easier to get confused when using hex values and accidentally write Flag5 = 0x16 instead of Flag5 = 0x10.
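(To spell out what that typo would do: 0x16 is 22 in decimal, binary 10110, so instead of defining one fresh bit it sets the Flag2 and Flag3 bits on top of the intended 0x10 bit. A minimal sketch, assuming the hex-valued enum above:)

// Hypothetical typo: 0x16 == 22 == binary 10110, three bits set instead of one.
Console.WriteLine(Convert.ToString(0x16, 2));  // 10110  (overlaps Flag2 and Flag3)
Console.WriteLine(Convert.ToString(0x10, 2));  // 10000  (the intended single bit)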
Rationales may differ, but an advantage I see is that hexadecimal reminds you: "Okay, we're not dealing with numbers in the arbitrary human-invented world of base ten anymore. We're dealing with bits - the machine's world - and we're gonna play by its rules." Hexadecimal is rarely used unless you're dealing with relatively low-level topics where the memory layout of data matters. Using it hints that that's the situation we're in now.
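To make that concrete with an example of my own (not from the question): each hex digit corresponds to exactly four bits, so a combined flag value is easier to read back as a bit pattern in hex than in decimal:

// Flag1 | Flag2 | Flag5 has the bit pattern 1 0011.
var combined = MyEnum.Flag1 | MyEnum.Flag2 | MyEnum.Flag5;
Console.WriteLine($"0x{(int)combined:X}");  // 0x13 - each hex digit is one nibble
Console.WriteLine((int)combined);           // 19   - the bit layout is hidden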
Also, x << y with constant operands is a valid compile-time constant in C, and it is in C# as well, so it can be used directly in enum member initializers. Using bit shifts seems the clearest:
[Flags]
public enum MyEnum
{
    None = 0,
    Flag1 = 1 << 0,
    Flag2 = 1 << 1,
    Flag3 = 1 << 2,
    Flag4 = 1 << 3,
    Flag5 = 1 << 4
}
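Whichever notation you settle on, the call sites look the same; here is a small usage sketch against the enum above:

var options = MyEnum.Flag1 | MyEnum.Flag3;

// Test a flag with a bitwise AND (the literal 0 converts to any enum type)...
bool hasFlag3 = (options & MyEnum.Flag3) != 0;   // true
// ...or with Enum.HasFlag.
bool hasFlag2 = options.HasFlag(MyEnum.Flag2);   // false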