Is it more efficient to use a varchar field sized as a power of two vs. another number? I'm thinking no, because for SQL Server the default is 50.
However, I've heard (but never confirmed) that sizing fields as a power of 2 is more efficient because those sizes line up with whole bytes, and computers process data in bits and bytes.
So, does a field declared as varchar(32) or varchar(64) have any real benefit over varchar(50)?
No.
In some other contexts there are advantages to using structures with a power-of-two size, mostly because you can fit a nice (power-of-two) number of them inside another power-of-two-sized structure. But this doesn't apply to a DB field size.
The only power-of-two sizing related to VARCHARs concerns the exact storage type of the varchar (or TEXT/BLOB in some SQL dialects): if the maximum length is less than 256, a single byte can record the length; less than 65536 (64 KB) needs two bytes; less than 16777216 (16 MB) needs three; and less than 4294967296 (4 GB) needs four.
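As a concrete sketch of that rule, here is what it looks like in MySQL, which documents exactly this 1-vs-2-byte length prefix for VARCHAR; the table and column names below are invented for illustration, and a single-byte character set is assumed so that characters equal bytes:

```sql
-- A minimal sketch, assuming MySQL and the single-byte latin1 charset;
-- table and column names are hypothetical.
CREATE TABLE prefix_demo (
    a VARCHAR(50)  CHARACTER SET latin1,  -- max 50 bytes  -> 1-byte length prefix
    b VARCHAR(255) CHARACTER SET latin1,  -- max 255 bytes -> still a 1-byte prefix
    c VARCHAR(256) CHARACTER SET latin1   -- max 256 bytes -> 2-byte length prefix
);
-- Storage per value is (actual byte length + prefix size):
-- 'hello' costs 5 + 1 = 6 bytes in a and b, but 5 + 2 = 7 bytes in c.
```

Note that the boundary sits at 255/256 bytes, not at any power-of-two declared length: columns a and b cost the same per value despite their different maxima.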
Also, it can be argued that VARCHAR(50) is just as expensive as VARCHAR(255): either way, an n-character string needs n+1 bytes of storage.
Of course that's before thinking of Unicode...
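To make that Unicode caveat concrete (again a MySQL-specific sketch with hypothetical names): the length-prefix threshold is measured in bytes, not characters, so a multi-byte character set can push even a small power-of-two size over the line:

```sql
-- A sketch assuming MySQL with utf8mb4 (up to 4 bytes per character);
-- names are illustrative only.
CREATE TABLE unicode_demo (
    d VARCHAR(63) CHARACTER SET utf8mb4,  -- worst case 63 * 4 = 252 bytes -> 1-byte prefix
    e VARCHAR(64) CHARACTER SET utf8mb4   -- worst case 64 * 4 = 256 bytes -> 2-byte prefix
);
```

Ironically, it is the power-of-two VARCHAR(64) that pays the extra prefix byte here.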