Simple question: why is READ_COMMITTED_SNAPSHOT not on by default?
I'm guessing either backwards compatibility, performance, or both?
[Edit] Note that I'm interested in the effect relating to the READ_COMMITTED isolation level, and not the snapshot isolation level.
Why would this be a breaking change, given that it holds fewer locks and still doesn't read uncommitted rows?
Turning snapshot on by default would break the vast majority of applications.
It is unclear to me whether it would break the "vast majority" of applications, or whether it would break many applications in ways that are hard to identify and/or hard to work around. The SQL Server documentation states that READ COMMITTED and READ COMMITTED SNAPSHOT both satisfy the ANSI definition of READ COMMITTED (stated here: http://msdn.microsoft.com/en-us/library/ms189122.aspx). So, as long as your code does not rely on anything beyond the literal ANSI-required behavior, in theory you will be okay.
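For context on what flipping the default would involve: READ_COMMITTED_SNAPSHOT is a per-database option, so today it has to be enabled explicitly. A minimal sketch, assuming a hypothetical database named MyDb:

    ALTER DATABASE MyDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;  -- breaks open transactions so the option change can take effect

    -- Confirm the current setting
    SELECT name, is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = 'MyDb';

Once the option is on, plain READ COMMITTED readers get row versions instead of shared locks with no change to application code, which is exactly why changing the default would silently alter the behavior of existing code.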
A complication is that the ANSI specification doesn't capture everything people commonly understand terms like dirty read and fuzzy/non-repeatable read to mean in practice. And there are anomalies (permitted by the ANSI definitions) that can occur under READ COMMITTED SNAPSHOT but cannot occur under READ COMMITTED. For an example, see http://www.jimmcleod.net/blog/index.php/2009/08/27/the-potential-dangers-of-the-read-committed-snapshot-isolation-level/.
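To make that class of anomaly concrete, here is a hypothetical two-session sketch (the dbo.Accounts table and its columns are my own illustration, not taken from the linked post). It is a check-then-act pattern that implicitly relies on readers blocking behind writers:

    -- Assumed schema: dbo.Accounts(AccountId int PRIMARY KEY, Balance money)

    -- Session 1: debit the account, but hold the transaction open for a while.
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    -- ... not yet committed ...

    -- Session 2 (running at the default READ COMMITTED level):
    DECLARE @Balance money;
    SELECT @Balance = Balance FROM dbo.Accounts WHERE AccountId = 1;
    IF @Balance >= 100
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

Under locking READ COMMITTED, Session 2's SELECT blocks until Session 1 commits, so the balance check sees the first debit. Under READ COMMITTED SNAPSHOT, the SELECT returns the last committed (pre-debit) value immediately; the UPDATE still queues behind Session 1's lock, and once Session 1 commits it applies a second debit the check never accounted for, potentially overdrawing the account. Neither behavior violates the ANSI definition of READ COMMITTED, which is the point.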
For in-depth information on the differences between the isolation levels, start with http://www.cs.umb.edu/cs734/CritiqueANSI_Iso.pdf (READ_COMMITTED_SNAPSHOT was not around when that paper was written, but the other levels are covered by it).