1. Heaps of booleans

I've stumbled upon a bit of a problem.
I have a struct that contains lots of binary properties (more than 32), most of which will be false. I want to make a global const array of this struct. Problem: I don't care to write "false" hundreds of times and count entries to know which ones should in fact be "true".
If I had less than 32 of these suckers, I could just make an enum with the names of all the properties like so:
Code:
```
enum Variablenames
{
    Variable1,
    Variable2,
    Variable3,
    ...
    VariableNminus1,
    VariableN,
    N
};
```
And make the whole bunch of bools a single 32-bit unsigned integer, so I can just jot down a 0 if none of the properties are true, (1u<<Variable5 | 1u<<Variable9) if only the fifth and ninth variable should be true, and, for example, ~(1u<<Variable4) if all variables except the fourth should be true. However, there are more than 32 of them, so I can't use a single int, and even if I use an array of ints, 1u<<Variable33 would equate to 0, so I can't go over 32 variables.
Is there some simple solution to this problem or will I have to do it the hard way using multiple ints that contain 32 bools each? (which is a pain to read because I need to remember which property is located in which int)

2. There is a bitset class in the STL: http://www.cppreference.com/cppbitset/index.html

3. __int64?
Code:
```
#include <iostream>
using namespace std;

int main()
{
    unsigned __int64 flags = 1;

    cout << hex << flags << endl;
    flags <<= 32;
    cout << hex << flags << endl;

    return 0;
}
```

4. vector<bool> is specialized to be space efficient.

5. I think I understand what you're trying to do... The WinAPI uses lots of variables like this... where one bit represents the true / false status of something. The way it's done in the API, all 32 variables are pre-defined in the header. So, the programmer doesn't even have to worry about which bit he's working with! Still, you need to be fairly comfortable with bitwise operators.

In your case, you'll have to initialize the variables (constants, actually). You can use bit shifting (recommended), or initialize them manually. If you do it manually, use hex... it's a pain with decimal.

Code:
```
const int ConditionFlag1 = 1;  // bit 0
const int ConditionFlag2 = 2;  // bit 1
const int ConditionFlag3 = 4;  // bit 2
const int ConditionFlag4 = 8;  // bit 3
// ... etc.
```

Note that bits are counted starting at "bit 0". So in the above scheme, bit 0 represents Condition 1.

Code:
```
// Set Condition 1 and Condition 3 true, all others false
int Status = ConditionFlag1 | ConditionFlag3;   // Status = 5, bits 0, 2 are high

// Set Condition 2 true without affecting other flags
Status = Status | ConditionFlag2;   // Status = 7, bits 0, 1, 2 high

// Test Condition 2
if (Status & ConditionFlag2)
    cout << "Condition 2 is TRUE!";   // Bit 1 is high

// Clear Condition 2 without affecting others
Status = Status & (~ConditionFlag2);  // Force bit 1 low
```
Be careful. An int is not always 32 bits. In order to write good portable code, you have to make use of sizeof() or <limits>.

6. Thanks for the replies people, especially joni: that was exactly what I was looking for! (although it took me a while to figure out it's part of the std namespace...)

DougDbug: I had that figured out, but my problem was that I want to fit more than 32 booleans in a single variable, which doesn't work very well with 32-bit integers. In fact, you can't even write an explicit integer constant bigger than 2^32-1, meaning I HAVE to use bit shifting (on bitsets, even; shifting a 32-bit integer by 32 places just empties the whole thing).

anonytmouse: a bitset<64> seems to take up 8 bytes of memory. That's plenty efficient for me. Do vectors use compression?

7. A vector<bool> would probably take up about the same amount of memory, 1 bit per bool. It is specialized for bools just for that reason.

I think the bitset might make more sense though if you have a specific set of bits you are manipulating. A vector<bool> might be better if you were adding, inserting, and erasing boolean values from the vector.

8. bitset and vector<bool> have the same space requirements, with the vector having a bit of constant overhead.

The difference is that bitset is compile-time sized, while the vector can change size at runtime. This makes the vector more flexible, but the bitset is a bit faster.