Every time I set up a new SQL table or add a new varchar column to an existing table, I wonder the same thing: what is the best value for the length?
So, let's say you have a column called name of type varchar, and you have to choose the length. I cannot think of a name longer than 20 characters, but you never know. Instead of using 20, I always round up to the next 2^n number; in this case, I would choose 32 as the length. I do that because, from a computer scientist's point of view, a power of 2 looks more "even" to me than other numbers, and I'm simply assuming that the underlying architecture can handle those numbers slightly better than others.
On the other hand, MSSQL Server, for example, sets the default length to 50 when you create a varchar column. That makes me wonder: why 50? Is it just an arbitrary number, is it based on an average column length, or what?
It could also be - or probably is - that different SQL server implementations (like MySQL, MSSQL, Postgres, ...) have different optimal column length values.
No DBMS I know of has any "optimization" that will make a VARCHAR with a 2^n length perform better than one with a max length that is not a power of 2.
I think early SQL Server versions actually treated a VARCHAR with length 255 differently than one with a higher maximum length. I don't know if this is still the case.
For almost all DBMS, the actual storage required is determined only by the number of characters you put into the column, not by the max length you define. So from a storage point of view (and most probably a performance one as well), it makes no difference whether you declare a column as VARCHAR(100) or VARCHAR(500).
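One way to observe this in PostgreSQL, for instance, is with pg_column_size; the table and column names below are made up for illustration:

```sql
-- Two columns with different declared maximums holding the same value
CREATE TABLE demo (
    short_col varchar(100),
    long_col  varchar(500)
);

INSERT INTO demo VALUES ('Alice', 'Alice');

-- pg_column_size reports the bytes actually stored;
-- both columns report the same size despite the different max lengths
SELECT pg_column_size(short_col) AS short_bytes,
       pg_column_size(long_col)  AS long_bytes
FROM   demo;
```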
You should see the max length provided for a VARCHAR column as a kind of constraint (or business rule) rather than a technical/physical thing.
For PostgreSQL the best setup is to use text without a length restriction and a CHECK constraint that limits the number of characters to whatever your business requires. If that requirement changes, altering the check constraint is much faster than altering the table (because the table does not need to be rewritten).
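A minimal sketch of that setup (table, column, and constraint names are just examples, and the 100-character limit is an assumed business rule):

```sql
-- Unrestricted text column; the business rule lives in a check constraint
CREATE TABLE users (
    id   integer PRIMARY KEY,
    name text,
    CONSTRAINT users_name_len CHECK (char_length(name) <= 100)
);

-- If the requirement changes, only the constraint is replaced;
-- the table itself is not rewritten
ALTER TABLE users DROP CONSTRAINT users_name_len;
ALTER TABLE users ADD CONSTRAINT users_name_len CHECK (char_length(name) <= 200);
```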
The same can be applied for Oracle and others - in Oracle it would be VARCHAR(4000) instead of text, though.
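A comparable sketch in Oracle syntax (again with made-up names, and using VARCHAR2, Oracle's usual spelling):

```sql
-- Wide column; the real limit is expressed as a check constraint
CREATE TABLE users (
    id   NUMBER PRIMARY KEY,
    name VARCHAR2(4000),
    CONSTRAINT users_name_len CHECK (LENGTH(name) <= 100)
);

-- A later change to the limit only touches the constraint
ALTER TABLE users DROP CONSTRAINT users_name_len;
ALTER TABLE users ADD CONSTRAINT users_name_len CHECK (LENGTH(name) <= 200);
```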
I don't know if there is a physical storage difference between VARCHAR(max) and e.g. VARCHAR(500) in SQL Server. But apparently there is a performance impact when using varchar(max) as compared to varchar(8000).
See this link (posted by Erwin Brandstetter as a comment)
Edit 2013-09-22
Regarding bigown's comment:
In Postgres versions before 9.2 (which was not available when I wrote the initial answer) a change to the column definition did rewrite the whole table, see e.g. here. Since 9.2 this is no longer the case and a quick test confirmed that increasing the column size for a table with 1.2 million rows indeed only took 0.5 seconds.
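For reference, the kind of statement used in that test - a plain column-type change on an assumed table t with a varchar column name - looks like this in PostgreSQL:

```sql
-- On PostgreSQL 9.2+ increasing the length only updates the catalog;
-- the table is not rewritten
ALTER TABLE t ALTER COLUMN name TYPE varchar(500);
```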
For Oracle this seems to be true as well, judging by the time it takes to alter a big table's varchar column, but I could not find any reference for that.
For MySQL, the manual says "In most cases, ALTER TABLE makes a temporary copy of the original table", and my own tests confirm that: running an ALTER TABLE on a table with 1.2 million rows (the same as in my Postgres test) to increase the size of a column took 1.5 minutes. In MySQL, however, you cannot use the "workaround" of a check constraint to limit the number of characters in a column.
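The equivalent statement in MySQL - again on an assumed table t with a varchar column name - uses MODIFY and, in the versions tested, copied the table:

```sql
-- In the tested MySQL versions this rewrites (copies) the table
ALTER TABLE t MODIFY name varchar(500);
```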
For SQL Server I could not find a clear statement on this, but the execution time to increase the size of a varchar column (again on the 1.2 million row table from above) indicates that no rewrite takes place.
Edit 2017-01-24
It seems I was (at least partially) wrong about SQL Server. See this answer from Aaron Bertrand, which shows that the declared length of an nvarchar or varchar column makes a huge difference for performance.