Then hopefully we'll be able to get limited evaluation at compile time some time in the future, once they define what "limited" means. Hopefully.
Making it easier for the compiler (or compiler vendors) is one reason. The other common reason is that fewer restrictions usually mean many more possible behaviours that have to be properly specified in the standard (which is a lot of work, as interactions with other language features have to be properly analysed and/or specified). It's easier in the first instance (particularly with a large language like C++) to specify something in a restricted manner, and only relax restrictions if there is a real-world need to do so.
Yes, this is very significant also. Remember, you can never back something out of a standard, only add to it, if you want to keep backward compatibility.
The only exception I know of is the auto keyword, whose old meaning will be removed completely. That's safe because absolutely nobody used that keyword, and even if someone did, it's completely redundant anyway.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
You're right, but it's worth noting there is a defined mechanism for backing things out of the standard: deprecation, which essentially flags a feature for removal from a future version of the standard. Of course, actually removing a previously deprecated feature will break backward compatibility, so it will be interesting to see whether any deprecated features in the language ever do disappear from a future version of the C or C++ standard and, if they do, how many vendors will actually remove them.