The whole problem is one of distinguishing a "trivial" copy from a move, which is essentially a destructive copy. To use the example of a swap() function:

Code:
template<class T> void swap(T &a, T &b)
{
    T tmp(move(a));
    a = move(b);
    b = move(tmp);
}

Here move() essentially enables the move construction or assignment. The thing is, the first line effectively destroys the object a. The second line then reconstructs a with the contents of b, and destroys b. The last line then reconstructs b with the data that was in a.
Granted, there are efficiency gains ... if everything goes right. But one of the common maintenance problems in classes with user-supplied copy constructors or assignment operators has been ensuring those functions work correctly (e.g. correctly copy all members). And what does move construction introduce? A requirement to maintain a potentially larger set of such functions. Unless performance is highly critical - which it is in some applications, but not in many - that is, on the face of it, a maintenance burden.
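As a sketch of that burden, consider a hypothetical resource-owning class (the name and members are made up, not from the discussion above): the classic three user-supplied special functions become five once move support is added, and every one of them must be kept mutually consistent.

```cpp
#include <algorithm> // std::copy
#include <cstddef>   // std::size_t
#include <utility>   // std::swap, std::move

class Buffer {
    std::size_t n_;
    int *data_;
public:
    explicit Buffer(std::size_t n) : n_(n), data_(new int[n]()) {}
    ~Buffer() { delete[] data_; }

    // Copy operations: must correctly duplicate every member.
    Buffer(const Buffer &o) : n_(o.n_), data_(new int[o.n_]) {
        std::copy(o.data_, o.data_ + n_, data_);
    }
    Buffer &operator=(const Buffer &o) {
        Buffer tmp(o);                  // copy-and-swap: exception safe
        std::swap(n_, tmp.n_);
        std::swap(data_, tmp.data_);
        return *this;
    }

    // Move operations: two further functions to maintain, which must also
    // leave the moved-from object in a safely destructible state.
    Buffer(Buffer &&o) noexcept : n_(o.n_), data_(o.data_) {
        o.n_ = 0;
        o.data_ = nullptr;
    }
    Buffer &operator=(Buffer &&o) noexcept {
        std::swap(n_, o.n_);            // swapping keeps self-move harmless
        std::swap(data_, o.data_);
        return *this;
    }

    std::size_t size() const { return n_; }
};
```

Every bug class that applied to the copy functions (a forgotten member, a shallow copy) now has a move-flavoured counterpart, which is exactly the enlarged maintenance surface being complained about.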
All this is at odds with accepted best practice, which encourages getting code functionally right first, in the simplest and most maintainable form, and only then worrying about addressing any performance deficiencies - in other words, discouraging premature optimisation. Instead, what we have is an encouragement for programmers to perform premature optimisation. That will eventually yield a series of constraints and guidelines concerning whether a class should support move construction/assignment or not. We already have an example where constraints have to be imposed related to exceptions. A few more such constraints - particularly if they rely on lots of care and consideration by the programmer - and it will be easier for programmers (who, let's face it, are often careless and inclined to waste productive time on premature optimisation) to fall back on other techniques.
Yeah, I'm sure others will make different predictions to mine. The proof will be in the practice down-track, I guess. As a matter of fact, I hope I'll be proven wrong, but don't think I will be.
There are only two possible circumstances I can see in which changing exception specifications will cause confusion.
1) Addition of new constructs, which fail to compile at present anyway. This will only cause confusion if new code is ported to old compilers, but that's a common problem with all new language features. Deprecation will not stop programmers from running new code through old compilers.
2) Practical problems associated with turning run-time checks (e.g. calling abort() when an unexpected exception is thrown) into compile-time checks.
Would you care to characterise practical circumstances in which changing the current run-time enforcement of exception specifications to compile-time checks presents a real disadvantage?
Yes, it is true that some old code will fail to compile. I consider those circumstances a non-issue. First, the amount of such code is small (since exception specifications have been actively discouraged - to the extent that deprecating them is under consideration). Second, the cause of failure will, most likely, be related to existing undiagnosed faults in the code.