1. Actually, the standard is even weirder than that, specifying (in [basic.fundamental]p2): "There are five standard signed integer types : “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list."
The fun thing? The standard never defines what "providing storage" means, so it's somewhat open to interpretation. Personally, I'd go with the sizeof interpretation rather than the value-range interpretation.
I don't have the C standard to check what that says, though.

2. Originally Posted by grumpy
Not true.
....
The standard only requires that the set of values that can be represented by a char is a subset of the values that can be represented by a short, which in turn is a subset of the values that can be represented by an int .....
How are those two different ?
Can you give an example? (Sticking to integral types)

3. Originally Posted by CornedBee
The fun thing? The standard never defines what "providing storage" means, so it's somewhat open to interpretation. Personally, I'd go with the sizeof interpretation rather than the value-range interpretation.
I don't have the C standard to check what that says, though.
The C standard only talks about the ranges of values that can be represented. It does not make reference to sizeof() at all in discussing storage of a value in an integral (or any other) type.

Part of the ambiguity is that the word "storage" can mean "representing a value using a variable" as well as "allocating memory".

4. Originally Posted by grumpy
Not true. In practice it usually works out that sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long). However, that is not actually a requirement in the standard.

The standard only requires that the set of values that can be represented by a char is a subset of the values that can be represented by a short, which in turn is a subset of the values that can be represented by an int .....
Regardless of how numbers are represented, fundamental mathematics seems to imply that if the above statement is true, then the statement about sizeof() is also true. You can't have sizeof(short) < sizeof(char) if short can represent every value that char can represent.

5. It is common implementation choices that lead to the rule of thumb about sizeof(), not "fundamental mathematics" or laws of physics.

All that can be implied from the implementation limits is the minimum number of bits needed to represent a type (and that is assuming a binary computer - a ternary computer does not work with bits). There is no "fundamental mathematics" that implies a lower bound (on the size of a given type, in this case) is equal to the upper bound.

6. Originally Posted by grumpy
It is common implementation choices that lead to the rule of thumb about sizeof(), not "fundamental mathematics" or laws of physics.

All that can be implied from the implementation limits is the minimum number of bits needed to represent a type (and that is assuming a binary computer - a ternary computer does not work with bits). There is no "fundamental mathematics" that implies a lower bound (on the size of a given type, in this case) is equal to the upper bound.
Which is why the statement about sizeof() uses '<=' and not '<'. I do not follow you. How can sizeof(short) < sizeof(char) if short can represent everything char can represent?

7. Originally Posted by brewbuck
How can sizeof(short) < sizeof(char) if short can represent everything char can represent?
It can't, since sizeof(char) is defined to be 1, and sizeof of any other complete type is defined to be a positive value (so at least 1).

But that has nothing to do with any one-to-one rule that supporting a larger range mandates a bigger variable.

In response to a debate like this, a colleague implemented a bare bones C compiler that had sizeof(short) equal to 4 and sizeof(int) equal to 2, and enforced ranges of values on each type. He then challenged the team to find any clauses in the C standard that would make that compiler non-conformant. The team could not find such clauses.

8. In other words, you can't rely on the sizes of successive types being non-decreasing. What you can rely on is that each type's set of representable values contains the previous type's.
So, if U is the set of all representable numbers, then

U(char) ⊆ U(short) ⊆ U(int) ⊆ U(long) ⊆ U(long long)

Still, one has to wonder what sorts of expensive checks one has to do to enforce an upper bound on a type.

9. Originally Posted by grumpy
In response to a debate like this, a colleague implemented a bare bones C compiler that had sizeof(short) equal to 4 and sizeof(int) equal to 2, and enforced ranges of values on each type. He then challenged the team to find any clauses in the C standard that would make that compiler non-conformant. The team could not find such clauses.
Ok, I get it now.

In theory, if you changed all your ints to shorts in an attempt to save space you might actually use more space. In practice, I wouldn't lose sleep over the possibility.

10. Actually, while we're at it, what about an implementation of C that runs on an analog computer? I think that's where the "range of values" interpretation would really take on significance, since we'd be talking about a gradient of voltage levels rather than combinations of ones and zeroes.

11. The property of being able to have a discrete type is inherently digital; by having an integer type a computer is necessarily digital. You can have a programmable analog device, but you can't use C++ to program it. You'd need to use Verilog-A or invent something new.

12. Originally Posted by Elkvis
Actually, while we're at it, what about an implementation of C that runs on an analog computer? I think that's where the "range of values" interpretation would really take on significance, since we'd be talking about a gradient of voltage levels rather than combinations of ones and zeroes.
I'm having trouble imagining what sizeof would actually do on such an architecture. Or what a pointer would mean. I don't think C maps to analog.

13. Originally Posted by brewbuck
I'm having trouble imagining what sizeof would actually do on such an architecture. Or what a pointer would mean. I don't think C maps to analog.
On such an architecture, sizeof() could be 1 for all basic integral types (char, int, long, unsigned, etc) since the physical manifestation of a variable would be a single voltage line.

Implementing formatted I/O on such an architecture would be highly entertaining though.

14. Originally Posted by King Mir
The property of being able to have a discrete type is inherently digital; by having an integer type a computer is necessarily digital. You can have a programmable analog device, but you can't use c++ to program it. You'd need to use Verilog-A or invent something new.
The only reason you "can't" use C++ to program an analog device is that it has simply never been done. I don't think the language has anything in it (other than the bitwise operators, and they would be largely irrelevant) that would preclude its use with an analog device.

15. Originally Posted by Elkvis
The only reason you "can't" use C++ to program an analog device is that it has simply never been done. I don't think the language has anything in it (other than the bitwise operators, and they would be largely irrelevant) that would preclude its use with an analog device.
You can use a device that's part analog, but reading a value as an integer requires breaking it down into discrete voltage levels, a process that would make the device digital, at least in that part. So what you'd have is a digital device that uses modulo N instead of 1's and 0's for its most basic unit of information.