We have one for hex (0x), but what is the standard representation for binary numbers in C?
The C99 standard makes no mention of it, and Google seems to corroborate that there is no standard representation. Why allow hex notation but not binary?
There isn't one (I don't think). Humans have no need to work in binary; only the computer does, so why make it more complex for the humans? You've already got decimal and hex, not to mention bitwise ops.
You could always make your own.
Don't forget C also supports octal
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
even octal too eh... now binary really feels left out
I think in some circumstances hex would be more complex, e.g. when setting up OR flags: it's easier to see that none collide by specifying them in binary.
C's Motto: who cares what it means? I just compile it!!
Some compilers that target embedded processors support the ability to write binary constants (IIRC, the Keil C compiler does).
To determine that you are using unique bits. Think about it: what's easier, detecting bit collisions in "3" or in "101"? Sure, you can just remember to "always use powers of 2", but it's a lot clearer when you can easily spot an exponent being used twice:

Code:
00001
00010
00100
01010 /* at first glance, you can easily spot the repeated 1 */
10000

versus:

Code:
1
2
4
10 /* at first glance, this is not as obvious */
16

Interesting... I guess that would explain why I've seen some notations (I think in the form of a "b" suffix or prefix) being used to specify a binary number. In fact, up until I actually tried using it, I had just thought it was part of the standard.
Alternatively, you can set it up like this:

Code:
#define Bit(n) (1 << (n))
#define Bit0 Bit(0)
#define Bit1 Bit(1)
....
#define Bit4 Bit(4)

You can of course also use it more directly:

Code:
if (val & (1 << 4)) ...

--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
This is for single bits... but I mean he wants to do something like:
Code:
selective = number & 10101010010101011010101001010101b;
"The Internet treats censorship as damage and routes around it." - John Gilmore
Because hex notation is equivalent to binary and far more convenient. Memorize this table:
0 = 0000
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001
A = 1010
B = 1011
C = 1100
D = 1101
E = 1110
F = 1111
Suppose you had a binary value 10110110. That's 1011 0110, which is 0xB6 from the above table.
Suppose you had a hex value A43C. That's 1010 0100 0011 1100 in binary again using the table.
The two bases are so conveniently related to each other this way that it's pointless to support both. Why type 16 binary digits when you could type 4 hex digits? And don't object to having to memorize it -- this is C, it's something you absolutely need to know.
and most important of all - the binary table:
0 = 0
1 = 1
that was easy. =)
seriously, though - binary representation deserves a place in the language. it's a real shame it was left out of the standard...
Code:
#include <cmath>
#include <complex>

bool euler_flip(bool value)
{
    return std::pow(
        std::complex<float>(std::exp(1.0)),
        std::complex<float>(0, 1) *
        std::complex<float>(std::atan(1.0) * (1 << (value + 2)))
    ).real() < 0;
}
>that was easy. =)
Uh, can you describe that a different way? I didn't get it.
>binary representation deserves a place in the language
Why? Hexadecimal is just as easy and far less verbose.
My best code is written with the delete key.
Code:
selective = number & 0xAA55AA55;

(At first I thought it was AAAAAAAA, but scanning more carefully I came up with that; I think it's correct.) I'd hate to see number being 12345678 hex in the debugger and then have to disassemble the source code to see what binary pattern the hex represents. Converting hex to binary and binary to hex will become second nature if you do this for a while; before it's second nature, just use the table posted.
And of course, the risk of making a mistake by adding one too many or too few digits is less when you have 8 than when you have 32. For small numbers, I would agree that binary may make some sense. But for 32-bit numbers it gets VERY unwieldy - not to mention 64-bit numbers. Even hex gets a bit messy at 16 digits.
--
Mats