To what purpose? Note that floating point imprecision might foul up your result.
If you want the number of significant bits in a type, use std::numeric_limits.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Surely you couldn't possibly be suggesting that it might ever not work, because that's crazy talk, man!
Never mind the fact that an int is always the native register size of the processor anyway. Only the size of the "container" actually matters here. That's what we want to calculate and use, and that's exactly what the code posted does! It can never be wrong on any architecture!
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
I think Dave_Sinkula is talking about bits used by an actual value, rather than how many bits can be held by the type. The number of bits that can be held by int is sizeof(int)*CHAR_BIT, but the number of bits used by the value 12 is just 4.
That's so obviously wrong, it's sad you actually dare call the correct position crazy talk. (By the way, would you mind toning it down a bit?) Examples? x86_64 has a native register size of 64 bits, but int is 32 bits wide.
Wrong again. There are architectures where ints have extra bits that show up in sizeof(int) but cannot be used to hold values. They hold things like parity bits, or simply nothing. E.g. a 9-bit-byte machine emulating 8-bit bytes would have 9-bit native chars and probably 36-bit native ints, but only 8 and 32 bits respectively of those containers would actually hold a value in emulation mode.
Here's the correct way of finding out how many actual value bits an unsigned integral type offers:
Code:
std::numeric_limits<unsigned int>::digits
Signed integrals have one fewer digit than they have bits.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
I'm confused. In your scenario, would CHAR_BIT be 8 or 9? If 8, then isn't sizeof(int)*CHAR_BIT the number of bits available to hold values? And if 9, is there a way in C to get the same information that numeric_limits<T>::digits gives? My K&R2 shows a bunch of _MIN and _MAX macros for each type in <limits.h>, but nothing specifying a number of bits, except CHAR_BIT.
Changed to vectors, fixed my constructors, and now using numeric_limits:
Code:
class CBoolArray
{
    std::vector<unsigned int> array;
    unsigned int elementCount;
    unsigned int arraySize;
    int bitsPerInt;
public:
    CBoolArray(unsigned int elements)
        : elementCount(elements)
    {
        bitsPerInt = std::numeric_limits<unsigned int>::digits;
        arraySize = elements / bitsPerInt + ((elements % bitsPerInt) ? 1 : 0);
        if (!array.empty())
            array.clear();
        array.resize(arraySize, 0);
    }
    CBoolArray()
        : elementCount(0), arraySize(0) // don't leave members uninitialized
    {
        if (!array.empty())
            array.clear();
        bitsPerInt = std::numeric_limits<unsigned int>::digits;
    }
    void Resize(unsigned int elements)
    {
        if (elements == elementCount)
            return;
        int eModBits = elements % bitsPerInt;
        int eDivBits = elements / bitsPerInt;
        int prevArraySize = arraySize;
        arraySize = eDivBits + (eModBits ? 1 : 0);
        if (array.empty())
        {
            arraySize = eDivBits + (eModBits ? 1 : 0);
            array.resize(arraySize, 0);
            elementCount = elements;
        }
        else
        {
            if (elements < elementCount)
            {
                array.resize(arraySize, 0);
                if (eModBits) // clear any bits beyond the new element count
                    array[arraySize - 1] &= (1 << eModBits) - 1;
            }
            else
            {
                array.resize(arraySize, 0);
            }
            elementCount = elements;
        }
    }
    bool Set(unsigned int element, bool value)
    {
        if (element >= elementCount) // elements are zero-indexed
            return false;
        int eModBits = element % bitsPerInt;
        int eDivBits = element / bitsPerInt;
        if (value)
            array[eDivBits] |= (1 << eModBits);
        else
            array[eDivBits] &= ~(1 << eModBits);
        return true;
    }
    bool At(unsigned int element)
    {
        if (element >= elementCount)
            return false;
        return ((array[element / bitsPerInt] >> (element % bitsPerInt)) & 1);
    }
};
Don't quote me on that... ...seriously
My apologies, I've rechecked, and an int is only usually the native register size.
"bits" don't show up in sizeof unsigned int because the answer is obviously in bytes (by definition).
However, after doing some research, it turns out that types larger than char really can have padding bits. I was wrong, okay, sorry! Not that it will make the slightest difference for the code that 99.9999% of people write (myself included).
So what's the C solution anyway? numeric_limits won't help you there!
Actually I just noticed that in VS2005 Express, numeric_limits doesn't appear in any header files! I never knew it was so broken.
Here's an interesting link about all this:
http://www.programmersheaven.com/mb/...admessage.aspx
Last edited by iMalc; 07-06-2007 at 03:47 AM.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Another giant leap forward there, Brad0407.
Still obvious room for improvement though:
bitsPerInt is still a variable instead of a constant.
arraySize is redundant as a member variable now that you are using a vector.
arraySize is being calculated twice which is redundant. The calculation could be simplified too: (elements + bitsPerInt - 1) / bitsPerInt
'At' should be a const function.
You're improving it at a remarkably good rate though!
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
Hoping for another little jump:
Changes:
1. Removed arraySize
2. Replaced BitsPerInt with digits
3. used a typedef to allow for use with datatypes other than unsigned int
Code:
class CBoolArray
{
    typedef unsigned int T_ARRAY;
    std::vector<T_ARRAY> array;
    unsigned int elementCount;
public:
    CBoolArray(unsigned int elements)
        : elementCount(elements)
    {
        if (!array.empty())
            array.clear();
        int eModBits = elements % std::numeric_limits<T_ARRAY>::digits;
        int eDivBits = elements / std::numeric_limits<T_ARRAY>::digits;
        array.resize(eDivBits + (eModBits ? 1 : 0), 0);
    }
    CBoolArray()
        : elementCount(0) // don't leave the count uninitialized
    {
        if (!array.empty())
            array.clear();
    }
    void Resize(unsigned int elements)
    {
        if (elements == elementCount)
            return;
        int eModBits = elements % std::numeric_limits<T_ARRAY>::digits;
        int eDivBits = elements / std::numeric_limits<T_ARRAY>::digits;
        int prevArraySize = array.size();
        if (array.empty())
        {
            array.resize(eDivBits + (eModBits ? 1 : 0), 0);
            elementCount = elements;
        }
        else
        {
            if (elements < elementCount)
            {
                array.resize(eDivBits + (eModBits ? 1 : 0), 0);
                if (eModBits) // clear any bits beyond the new element count
                    array[array.size() - 1] &= (1 << eModBits) - 1;
            }
            else
            {
                array.resize(eDivBits + (eModBits ? 1 : 0), 0);
            }
            elementCount = elements;
        }
    }
    bool Set(unsigned int element, bool value)
    {
        if (element >= elementCount) // elements are zero-indexed
            return false;
        int eModBits = element % std::numeric_limits<T_ARRAY>::digits;
        int eDivBits = element / std::numeric_limits<T_ARRAY>::digits;
        if (value)
            array[eDivBits] |= (1 << eModBits);
        else
            array[eDivBits] &= ~(1 << eModBits);
        return true;
    }
    bool At(unsigned int element)
    {
        if (element >= elementCount)
            return false;
        int eModBits = element % std::numeric_limits<T_ARRAY>::digits;
        int eDivBits = element / std::numeric_limits<T_ARRAY>::digits;
        return ((array[eDivBits] >> eModBits) & 1);
    }
};
Don't quote me on that... ...seriously
Oh, and there's Boost.DynamicBitset. Perhaps you can get a few ideas from that, too.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law