Google's Go language has no exceptions as a design choice, and Linus Torvalds of Linux fame has called exceptions crap. Why?
Exceptions make it really easy to write code where a thrown exception breaks invariants and leaves objects in an inconsistent state. They essentially force you to remember that almost every statement you write can potentially throw, and to handle that correctly. Doing so can be tricky and counter-intuitive.
Consider something like this as a simple example:
    class Frobber
    {
        int m_NumberOfFrobs;
        FrobManager m_FrobManager;

    public:
        void Frob()
        {
            m_NumberOfFrobs++;                           // state change first...
            m_FrobManager.HandleFrob(new FrobObject());  // ...then calls that can throw
        }
    };
Assuming the FrobManager will delete the FrobObject, this looks OK, right? Or maybe not... Now imagine that either FrobManager::HandleFrob() or operator new throws an exception. In this example, the increment of m_NumberOfFrobs does not get rolled back. Thus, anyone using this instance of Frobber is left with a possibly corrupted object.
This example may seem contrived (ok, I had to stretch a bit to construct one :-)), but the takeaway is this: unless a programmer is constantly thinking about exceptions, and making sure every mutation of state gets rolled back whenever something throws, this is exactly the kind of trouble you get into.
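To make "rolling back" concrete, here is roughly what Frob() would need in order to stay consistent, doing it by hand with try/catch (a sketch; it assumes the Frobber class from above):

    void Frobber::Frob()
    {
        m_NumberOfFrobs++;
        try
        {
            m_FrobManager.HandleFrob(new FrobObject());
        }
        catch (...)
        {
            // Undo the half-done state change before re-throwing, so
            // callers never observe a bad frob count.
            // (If HandleFrob throws before taking ownership, the
            // FrobObject still leaks; the RAII sketch further down
            // addresses that separate problem.)
            m_NumberOfFrobs--;
            throw;
        }
    }

And that's the cheap case: one integer to restore. With several interdependent members, the catch block has to restore all of them, in the right order.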
Another way to see it: think of object state the way you think of mutexes. Inside a critical section, you rely on several statements to make sure that data structures are not corrupted and that other threads can't see your intermediate values. If any one of those statements just randomly doesn't run, you end up in a world of pain. Now take away the locks and the concurrency, and think about each method the same way. Each method is a transaction of mutations on object state, if you will: at the start of the call the object should be in a clean state, and at the end there should again be a clean state. In between, variable foo may be inconsistent with bar, but your code will eventually rectify that. What exceptions mean is that any one of your statements can interrupt you at any time. The onus is on you, in each individual method, to get it right and roll back when that happens, or to order your operations so that throws don't affect object state. If you get it wrong (and it's easy to make this kind of mistake), the caller ends up seeing your intermediate values.
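In the Frobber case, ordering the operations is actually enough: do all the work that can throw first, and commit the state change with operations that cannot throw. A sketch:

    void Frobber::Frob()
    {
        // Everything that can throw runs first, while the object's
        // state is still untouched. (The possible FrobObject leak is
        // a separate issue; see the RAII sketch below.)
        m_FrobManager.HandleFrob(new FrobObject());

        // The mutation is committed only after nothing else can fail;
        // incrementing an int doesn't throw.
        m_NumberOfFrobs++;
    }

This "do the risky work on the side, then commit with non-throwing operations" pattern is essentially what the copy-and-swap idiom generalizes, but it isn't always available when a method has to mutate several pieces of state that feed into each other.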
Techniques like RAII, which C++ programmers love to mention as the ultimate solution to this problem, go a long way toward protecting against this, but they are not a silver bullet. RAII will make sure you release resources on a throw, but it doesn't free you from having to think about corruption of object state and about callers seeing intermediate values.
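To be fair to the RAII crowd, the rollback itself can be automated with a scope guard, and a std::unique_ptr plugs the FrobObject leak from earlier. A sketch (ScopeGuard is hand-rolled here for illustration, not a standard facility, and I'm assuming HandleFrob can accept a smart pointer):

    #include <memory>
    #include <utility>

    // Runs the stored rollback on destruction unless dismissed.
    template <typename F>
    class ScopeGuard
    {
        F m_Rollback;
        bool m_Active = true;
    public:
        explicit ScopeGuard(F rollback) : m_Rollback(std::move(rollback)) {}
        ~ScopeGuard() { if (m_Active) m_Rollback(); }
        void Dismiss() { m_Active = false; }
        ScopeGuard(const ScopeGuard&) = delete;
        ScopeGuard& operator=(const ScopeGuard&) = delete;
    };

    void Frobber::Frob()
    {
        m_NumberOfFrobs++;
        ScopeGuard guard([this] { m_NumberOfFrobs--; });  // undo on throw

        // unique_ptr frees the FrobObject even if HandleFrob throws.
        m_FrobManager.HandleFrob(std::make_unique<FrobObject>());

        guard.Dismiss();  // success: keep the new state
    }

But notice what happened: I still had to remember to write the guard for that one integer. RAII runs the rollback for me; it doesn't figure out which state needs rolling back.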
So, for a lot of people, it's easier to say, by fiat of coding style: no exceptions. If you restrict the kind of code you write, it's harder to introduce these bugs; if you don't, it's fairly easy to make a mistake. Entire books have been written about exception-safe coding in C++, and lots of experts have gotten it wrong. If a feature is really that complex and has that many nuances, maybe that's a good sign that you're better off ignoring it. :-)