> I do know from the little programs I've had to write that exceptions are very hard to work with and always leave me wondering if a piece of code I haven't written myself will throw exceptions or not.

That's simple. Everything can throw, unless it is explicitly documented as nothrow. That may not be technically true, but it is a great guideline. To an experienced C++ programmer, writing exception-safe code should not be a complicated task, but simply the natural way of coding. Writing code with the basic safety guarantee ("don't corrupt the program") is easy and should not require any thought. Writing code with the strong safety guarantee ("all or nothing") is more demanding, but actually rarely needed.
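A minimal sketch of what the two guarantees mentioned above mean in practice (the functions and names are illustrative, not from any particular codebase):

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Basic guarantee: if append_line throws, `log` is still a valid,
// usable vector (nothing is corrupted), though the new line may or
// may not have been added.
void append_line(std::vector<std::string>& log, const std::string& line) {
    if (line.empty()) throw std::invalid_argument("empty line");
    log.push_back(line);  // push_back itself gives the strong guarantee
}

// Strong guarantee: build the new state on the side, then commit with
// a nothrow swap -- "all or nothing".
void replace_log(std::vector<std::string>& log,
                 const std::vector<std::string>& fresh) {
    std::vector<std::string> tmp(fresh);  // may throw; `log` untouched
    log.swap(tmp);                        // nothrow commit
}
```

Note how the strong guarantee costs a copy and a deliberate commit point, which is why it is worth paying for only where "all or nothing" actually matters.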
The exception prohibition is the single most discussed element of these style guidelines. But I'm not happy with several other points.
The suggestion to use two-stage initialization ("Don't do work in constructors") is a (disastrous, IMO) consequence of the exception prohibition. Two-stage initialization is an incredibly destabilizing trait. Worried that you don't know what code throws exceptions? Try worrying about which objects are in a meaningful state.
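To make the contrast concrete, here is a sketch of the same class in both styles (class names and checks are hypothetical):

```cpp
#include <stdexcept>
#include <string>

// Two-stage style: every caller must remember to call Init(), and
// every method must ask "have I been initialized yet?" -- the object
// exists in a half-alive state between construction and Init().
class ParserTwoStage {
public:
    ParserTwoStage() : ready_(false) {}
    bool Init(const std::string& text) {
        if (text.empty()) return false;  // failure reported by return code
        text_ = text;
        ready_ = true;
        return true;
    }
    bool ready() const { return ready_; }
private:
    std::string text_;
    bool ready_;
};

// RAII style: a constructed object is a valid object, full stop.
// Failure to establish the invariant is reported by throwing, so no
// "zombie" state can ever be observed.
class Parser {
public:
    explicit Parser(const std::string& text) : text_(text) {
        if (text_.empty()) throw std::invalid_argument("empty input");
    }
    const std::string& text() const { return text_; }
private:
    std::string text_;
};
```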
Under "Operator Overloading", they discourage overloading of the assignment operator. This is absurd. If you want objects with value semantics, you have to overload assignment. Even if you don't overload any other operator, you should always be prepared to overload assignment if required. If a style guideline discourages this, then it should very strongly encourage using reference semantics only, i.e. not writing value classes. Make them all noncopyable. Not implementing assignment is just setting yourself up for disaster.
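The two honest options can be sketched like this (the classes are illustrative):

```cpp
#include <string>

// Value class: assignment is part of its interface. Explicitly
// defaulting it documents that copying is intended and supported.
class Point {
public:
    Point(double x, double y) : x_(x), y_(y) {}
    Point(const Point&) = default;
    Point& operator=(const Point&) = default;
    double x() const { return x_; }
    double y() const { return y_; }
private:
    double x_, y_;
};

// Reference-semantics class: if you won't support assignment, say so
// loudly by deleting copy operations, instead of leaving a broken
// compiler-generated assignment lying around.
class Connection {
public:
    explicit Connection(const std::string& host) : host_(host) {}
    Connection(const Connection&) = delete;
    Connection& operator=(const Connection&) = delete;
    const std::string& host() const { return host_; }
private:
    std::string host_;
};
```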
But the exceptions are the really tricky part. I, for one, think the rule is wrong. Look at the cons for exceptions:
> When you add a throw statement to an existing function, you must examine all of its transitive callers.

This is nonsense. Either your function never promised not to throw exceptions, in which case the callers should already be safe no matter what you throw. Or you previously promised not to throw, in which case you're breaking the interface of your function: the exception guarantee of a function is part of its interface! (Unfortunately, except for the nothrow guarantee, it is not documentable as a language feature.) If you change the error return of a function from 0 to -1, that's exactly the same: not detectable by the compiler, and you have to examine every caller.

And even if you do introduce throwing into a previously nothrow function, you don't have to examine every caller transitively: you only need to keep going through callers that were themselves nothrow. But if you detect such a dependency, then you might really, really think again about introducing throwing to the function. Remember: don't document a function as nothrow if it is only the current implementation that happens not to throw. Nothrow means, "I cannot logically fail." It is an inherent property of a function.
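Since C++11, the nothrow guarantee actually is expressible in the language: `noexcept` is part of the function's type-level interface, and both callers and the standard library can query it. A sketch (the `Buffer` class is illustrative):

```cpp
#include <utility>
#include <vector>

class Buffer {
public:
    Buffer() = default;
    Buffer(const Buffer&) = default;
    Buffer& operator=(const Buffer&) = default;
    // "I cannot logically fail": swapping two vectors never throws,
    // so the guarantee is stated in the signature itself.
    void swap(Buffer& other) noexcept { data_.swap(other.data_); }
    // Moving is also logically failure-free.
    Buffer(Buffer&& other) noexcept : data_(std::move(other.data_)) {}
private:
    std::vector<int> data_;
};

// The promise is checkable at compile time: removing noexcept from
// swap would make this assertion fail to compile.
static_assert(noexcept(std::declval<Buffer&>().swap(std::declval<Buffer&>())),
              "swap is part of the nothrow interface");
```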
> More generally, exceptions make the control flow of programs difficult to evaluate by looking at code: functions may return in places you don't expect. This results in maintainability and debugging difficulties. You can minimize this cost via some rules on how and where exceptions can be used, but at the cost of more that a developer needs to know and understand.

This is half-true. The last part of the last sentence is true: there is a cost in the knowledge a developer needs in order to use exceptions comfortably. On the other hand, in my opinion there is no comfortable use of any other error-reporting mechanism, no matter the developer's knowledge.
Also, a function never returns in places I don't expect, because I expect every function to be able to return from anywhere, unless I'm specifically looking to write a function with the strong exception guarantee, in which case I think about it a bit more.
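When I do want the strong guarantee, copy-and-swap makes the "think about it a bit more" explicit: all throwing work happens on a copy, and the final swap cannot fail. A sketch (the `Roster` class is hypothetical):

```cpp
#include <cstddef>
#include <string>
#include <vector>

class Roster {
public:
    // Strong guarantee: either all names are added, or the roster is
    // unchanged -- even if an allocation throws partway through.
    void add_all(const std::vector<std::string>& names) {
        std::vector<std::string> next = names_;              // may throw
        next.insert(next.end(), names.begin(), names.end()); // may throw
        names_.swap(next);                                   // nothrow commit
    }
    std::size_t size() const { return names_.size(); }
private:
    std::vector<std::string> names_;
};
```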
> Exception safety requires both RAII and different coding practices.

They claim that these coding practices have downsides. I say that the advantages of RAII and these coding practices always outweigh the costs, whether you use exceptions or not. This argument follows the very typical pattern of exception cons: it ignores that, if you don't use exceptions, you still have to detect and report errors. Making guarantees about behavior in the face of errors doesn't get any easier if you use return values instead of exceptions.
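To illustrate that RAII pays off even in purely error-code-based code, here is a sketch (the `File` wrapper and `count_bytes` are illustrative):

```cpp
#include <cstdio>
#include <string>

// RAII wrapper for a FILE*: the destructor runs on *every* exit path,
// whether the function leaves via an early `return -1` or, in code
// that does use exceptions, via a throw.
class File {
public:
    File(const std::string& path, const char* mode)
        : f_(std::fopen(path.c_str(), mode)) {}
    ~File() { if (f_) std::fclose(f_); }
    File(const File&) = delete;
    File& operator=(const File&) = delete;
    bool ok() const { return f_ != nullptr; }
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

// Error-code style caller: no cleanup code needs to be duplicated
// before each early return, because the wrapper owns the resource.
int count_bytes(const std::string& path) {
    File in(path, "rb");
    if (!in.ok()) return -1;  // early return; destructor is a no-op here
    int n = 0;
    while (std::fgetc(in.get()) != EOF) ++n;
    return n;                 // destructor closes the file
}
```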
> Turning on exceptions adds data to each binary produced, increasing compile time (probably slightly) and possibly increasing address space pressure.

True, but I'm not aware of any measurement comparing this to the size of the code needed for other error-handling mechanisms. Are exception-handling tables larger than an `if` on every possibly failing function call?
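To make that comparison concrete, here is the same three-step pipeline written both ways (all step functions are hypothetical placeholders):

```cpp
#include <stdexcept>

// Error-code style: every fallible call costs an `if` at the call
// site, in source and usually in object code.
int step_a(int x, int* out) { if (x < 0) return 1; *out = x + 1; return 0; }
int step_b(int x, int* out) { if (x > 100) return 2; *out = x * 2; return 0; }
int step_c(int x, int* out) { *out = x - 1; return 0; }

int pipeline_codes(int x, int* out) {
    int a = 0, b = 0;
    if (int rc = step_a(x, &a)) return rc;
    if (int rc = step_b(a, &b)) return rc;
    return step_c(b, out);
}

// Exception style: the failure paths vanish from the source; the
// bookkeeping moves into the compiler's unwind tables instead.
int step_a_t(int x) { if (x < 0) throw std::invalid_argument("x<0"); return x + 1; }
int step_b_t(int x) { if (x > 100) throw std::out_of_range("x>100"); return x * 2; }
int step_c_t(int x) { return x - 1; }

int pipeline_throws(int x) { return step_c_t(step_b_t(step_a_t(x))); }
```

Whether the unwind tables for the second version outweigh the per-call branches of the first is exactly the measurement I have not seen made.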
The argument against exceptions comes down to this:

> for existing code, the introduction of exceptions has implications on all dependent code.

Everything else is just a vain attempt at justification, instead of stating outright, "We have legacy code. Deal with it."
And yeah, making legacy code exception-safe is one major pain. I don't even try.