What is the meaning of unsigned char?
It's a character type that holds only unsigned, or non-negative, values. Since it doesn't spend a bit on the sign, there's more room for positive values: an unsigned char can hold 0 to 255 instead of -128 to 127 (the standard only guarantees -127 to 127 for signed char). [edit] Note that plain chars can be either signed or unsigned depending on the implementation. [/edit]
Assigning a negative value to an unsigned char will cause it to wrap around (the value is reduced modulo UCHAR_MAX + 1).
Is this a homework question?
[edit] Googling for "unsigned char" turns up tons of information. [/edit]
dwk
Seek and ye shall find. quaere et invenies.
"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell
Other boards: DaniWeb, TPS
Unofficial Wiki FAQ: cpwiki.sf.net
My website: http://dwks.theprogrammingsite.com/
Projects: codeform, xuni, atlantis, nort, etc.
The limits dwks gave you are just minimums. Any given implementation is free to give char whatever range it wants. Some embedded systems, for example, have 32-bit chars. Some mainframes have 9-bit chars.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
9-bit chars? Crazy...
"If you tell the truth, you don't have to remember anything"
-Mark Twain
Hmm, is that right? sizeof(char) is always 1, isn't it?
[edit] Perhaps sizeof(char) is always 1, but a char can hold more than 8 bits? . . . [/edit]
dwk
I believe char is guaranteed to be 1 byte, but some machines can have odd numbers of bits in a byte (given by CHAR_BIT in <limits.h>).
"Think not but that I know these things; or think
I know them not: not therefore am I short
Of knowing what I ought."
-John Milton, Paradise Regained (1671)
"Work hard and it might happen."
-XSquared
Weird. So machines with 9-bit chars might have 18-bit shorts? . . .
dwk
sizeof(char) is always 1, but the return value of sizeof is defined in terms of multiples of char. In other words, sizeof(T) == 4 means that T is as large as 4 chars, but it doesn't tell you how many bits there are in a T. That's what CHAR_BIT is for.
dwks: Yes. And 36-bit ints.
CornedBee
One thing that will begin showing up very often pretty soon, I presume, is 64-bit ints, with the 64-bit processors coming out (since an unsigned int usually matches the machine's word length).
Unlikely. The 64-bit processors are out, and have been for a while, and no popular compiler has made int 64 bits. The reason is simply memory efficiency. 32-bit ints suffice for most purposes, so 64-bit integers would just waste memory, memory bandwidth and probably cache space.
64-bit GCC has made long 64 bits, while MS C has left it at 32 bits, but both compilers have int still at 32 bits, and they won't change now.
CornedBee
When we have 128-bit processors or a completely new architecture, maybe. Not before. I'm willing to bet money on it. (But not over the internet.)
CornedBee
I was just assuming that because an unsigned int usually represents a word on a normal machine. On a 64-bit processor, a word would be 64 bits long . . . and therefore an unsigned long would be a word instead of an unsigned int, if ints stayed 32 bits, that is.
It could be either way. If you wrote your own compiler you could do what you wanted.
AFAIK the standard just states that an int must be capable of storing at least -(2^15 - 1) to 2^15 - 1, that is, -32767 to 32767.
dwk
It took a while for ints to become 32 bit when 32 bit processors first showed up.
No doubt some compilers will move to native 64 bit ints over time when they're hosted on 64 bit machines.
And we'll still be seeing people trying to use TurboC 2.01
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.