Hopefully by 2008 the ISO will have sanctioned C++0x into C++09. By 2009 (most probably even before ratification) some of the new features will start making their way into compilers. I've been reading about the proposed changes, and a couple of things strike me as odd. I'd like your thoughts, and some help understanding a few of the choices.

Concepts
The notion of concepts seems pretty simple to me: it's the type of a type. It looks like a push towards making the STL more OO, which is fine... I think (I never really understood why some coders insist on believing it's bad that the STL isn't OO). Anyway, it will be possible to do things like:

Code:
std::vector<int> foo;
/* ... vector populated here ... */
for_each( foo, add<int>(2) ); // add 2 to every element
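For that to compile, some add functor has to exist somewhere. Nothing by that name is in the standard library, so here is roughly the kind of thing I have in mind:

Code:
// Hypothetical functor -- my own sketch, not part of the standard library.
// Adds a fixed value to each element it is applied to.
template <typename T>
class add
{
public:
    explicit add(const T& v) : value(v) {}
    void operator()(T& x) const { x += value; } // add "value" to the element
private:
    T value;
};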
Simply put, most algorithms will be overloaded to accept the notion of a Concept. In the case of for_each, it's quite possible that the above example will be made possible through the overload

Code:
UnaryFunction for_each( Container c, UnaryFunction f );
That type, "Container", is the so called Concept. It seems it will be nothing more than an abstract class of which the container types will derive. Other concepts will probably be Iterator and possibly(?) more specialized versions of the Container concept, like SequencialContainer(?) and AssociativeContainer(?).

But...

Will this not make templating around STL objects harder to achieve? Inside the templated function or class very few assumptions can be made about the type. STL containers are invariably very different objects among themselves. So it seems to me almost useless to create a template with Concept parameters. In fact, it seems a recipe for disaster except in very specific situations that perhaps wouldn't warrant such a big change to the STL.
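To make that concrete: even something this trivial works for some containers and means nothing for others (a sketch of mine, with a hypothetical append_zero helper):

Code:
#include <set>
#include <vector>

// Works for vector, deque, list... (hypothetical helper of mine)
template <typename C>
void append_zero(C& c)
{
    c.push_back(0); // but std::set has no push_back, so what could a
                    // generic "Container" parameter promise here?
}

int main()
{
    std::vector<int> v;
    append_zero(v);    // fine

    std::set<int> s;
    // append_zero(s); // would not compile: set has no push_back
    s.insert(0);       // set spells the same idea completely differently
}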

Also, will this not add a lot of weight to STL performance? Every STL object now sitting inside an OO hierarchy with an abstract base class, especially when accessed through virtual functions, will be leaps and bounds slower than the current design. Won't it?
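What I mean is the difference between these two (again my own sketch of the fear, with a hypothetical IntContainer base; nothing from the drafts):

Code:
#include <cstddef>

// Today: the call is resolved at compile time and easily inlined.
template <typename C>
long sum_static(const C& c)
{
    long total = 0;
    for (typename C::const_iterator i = c.begin(); i != c.end(); ++i)
        total += *i;
    return total;
}

// My fear, if Container really were an abstract base class.
// (IntContainer is hypothetical, an int-only stand-in for the idea.)
struct IntContainer
{
    virtual ~IntContainer() {}
    virtual std::size_t size() const = 0;
    virtual int at(std::size_t i) const = 0;
};

long sum_dynamic(const IntContainer& c)
{
    long total = 0;
    for (std::size_t i = 0; i != c.size(); ++i) // a virtual call per test...
        total += c.at(i);                       // ...and another per element
    return total;
}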

Type Inference
This one boggles me to no end. The auto keyword is stripped of its previous meaning (the storage-class specifier) and is now used to declare an object whose type is inferred from the initializer. So...

Code:
int foo = 12;
auto bar = foo; // bar's type is obtained by inferring the type of foo. bar is an int.
From what I have read this is nothing but syntactic sugar. Sure, it will help in situations like:

Code:
for(std::vector< mjf::calculus::real<int> >::iterator iter = vec.begin(); iter != vec.end(); ++iter)

// where the alternative would be

for(auto iter = vec.begin(); iter != vec.end(); ++iter)
But...

What will distinguish const_iterator from iterator? Surely not the compiler! Will it?
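My best guess is that it simply falls out of overload resolution on begin() itself, since begin() is already overloaded on const. Something like this, but I'd love confirmation:

Code:
#include <vector>

void example()
{
    std::vector<int> vec;
    const std::vector<int>& cvec = vec;

    auto i  = vec.begin();  // vec is non-const: begin() returns iterator
    auto ci = cvec.begin(); // cvec is const: the const overload of begin()
                            // returns const_iterator
}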

And what about literals? What to say of auto x = 12.5? It seems logical to believe that the implicit conversion rules will dictate the type of x. But by those rules x could end up a long double. Highly excessive, don't you think?
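Or maybe it will simply take the type of the literal itself, which the suffix rules we already have pin down exactly. That would at least be predictable (my guess, going by the existing literal rules):

Code:
auto a = 12.5;   // 12.5 is a double literal, so presumably a is a double
auto b = 12.5f;  // float literal -> float
auto c = 12.5L;  // long double literal -> long double
auto d = 12;     // int literal -> int
auto e = 12u;    // unsigned int literal -> unsigned int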

It will also introduce yet another machine-dependent construct. But what makes this one worse is that the auto keyword will explicitly hide the type of the object... forever! Only through RTTI will it be possible to effectively debug the code. And I'm not even sure how readable RTTI output is for built-in types. So... what on earth! Am I missing something?
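For what it's worth, typeid does at least accept built-in types; the names it returns are just implementation-defined (a quick test, assuming GCC's mangled output):

Code:
#include <iostream>
#include <typeinfo>

int main()
{
    double bar = 12.5; // stand-in for what "auto bar = 12.5;" might give
    // typeid works on built-in types too; the name is implementation-defined
    // (GCC prints its mangled form, "d" for double)
    std::cout << typeid(bar).name() << '\n';
}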

But more importantly, and this is what boggles me the most: will this not break backwards compatibility with C?
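After all, auto is a perfectly legal storage-class specifier in C89 (and in C++98, for that matter):

Code:
void f()
{
    // Legal C89 and C++98: "auto" as a (redundant) storage-class
    // specifier, meaning exactly the same as plain "int x = 5;".
    auto int x = 5;
    (void)x;
}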