Is it possible to make 64-bit variables in C or C++? If so, how?
The only datatype in C++ whose size in bytes you can be sure of is char (signed or unsigned), which is always exactly 1 byte.
If you are using MSVC++ you can use
__int64
for a 64-bit, 8-byte integer (that's 2 underscores).
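Here's a minimal sketch of how that might look, assuming MSVC++ (the __int64 type, the i64 literal suffix, and the %I64d printf format are all Microsoft extensions, not standard C):

Code:
#include <stdio.h>

int main(void)
{
    __int64 big = 5000000000i64;  /* i64 suffix: MSVC 64-bit literal */

    printf("sizeof(__int64) = %d bytes\n", (int)sizeof(big));  /* 8 on MSVC */
    printf("value = %I64d\n", big);  /* %I64d: MSVC format for __int64 */
    return 0;
}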
The __int64 is nice, but what if I want a 64 bit floating point variable?
Then you use double, silly.
Usually a double is 8 bytes (64 bits) and a float is 4 bytes (32 bits).
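You can check the sizes on your own compiler; a quick sketch (the sizes are implementation-defined, 4 and 8 are just the usual values on common platforms):

Code:
#include <stdio.h>

int main(void)
{
    /* implementation-defined; these typically print 4 and 8 */
    printf("float:  %d bytes\n", (int)sizeof(float));
    printf("double: %d bytes\n", (int)sizeof(double));
    return 0;
}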
>the only datatype in C++ that you can be sure of size-wise in bytes is char
Is this a language standard or a de facto compiler standard?
That's a language standard, but it's important to note that I said "in bytes" because a byte is not necessarily 8 bits.
Also note that I meant to say C/C++. It's guaranteed to be 1 byte for both C and C++ (forgot this was the C board, not the C++ board).
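If you're curious, <limits.h> defines CHAR_BIT, the number of bits in a byte on your implementation. A quick sketch (the standard guarantees only that CHAR_BIT >= 8 and that sizeof(char) == 1):

Code:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(char) = %d\n", (int)sizeof(char));  /* always 1, by definition */
    printf("CHAR_BIT     = %d\n", CHAR_BIT);  /* bits per byte; 8 on mainstream machines */
    return 0;
}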
I see! I was under the idiot assumption that a byte was 8 bits. But I'm not anymore - you prompted me to look it up (below). The 9 bit computers sound interesting. Thanks.
Byte
<unit> /bi:t/ (B) A component in the machine {data hierarchy} usually larger than a {bit} and smaller than a {word}; now most often eight bits and the smallest addressable unit of storage. A byte typically holds one {character}.
A byte may be 9 bits on 36-bit computers. Some older architectures used "byte" for quantities of 6 or 7 bits, and the PDP-10 and IBM 7030 supported "bytes" that were actually {bit-fields} of 1 to 36 (or 64) bits! These usages are now obsolete, and even 9-bit bytes have become rare in the general trend toward power-of-2 word sizes.
The term was coined by Werner Buchholz in 1956 during the early design phase for the {IBM} {Stretch} computer. It was a mutation of the word "bite" intended to avoid confusion with "bit". In 1962 he described it as "a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units". The move to an 8-bit byte happened in late 1956, and this size was later adopted and promulgated as a standard by the {System/360} {operating system} (announced April 1964).
James S. Jones <[email protected]> adds:
I am sure I read in some historical brochure by IBM some 15-20 years ago that BYTE was an acronym that stood for "Bit asYnchronous Transmission E__?__" which related to width of the bus between the Stretch CPU and its CRT-memory (prior to Core).
Terry Carr <[email protected]> says:
In the early days IBM taught that a series of bits transferred together (like so many yoked oxen) formed a Binary Yoked Transfer Element (BYTE).
[True origin? First 8-bit byte architecture?]
See also {nibble}, {octet}.
[{Jargon File}]
(1998-08-06)
Originally posted by Davros
I see! I was under the idiot assumption that a byte was 8 bits. But I'm not anymore - you prompted me to look it up (below). The 9 bit computers sound interesting. Thanks.

It's not an "idiot assumption." A lot of people think that a byte is always 8 bits. You were most likely falsely taught and there's nothin' you coulda really done about that. Blame it on your teachers!!!
Suicide pill!!! Suicide pill!!!
I have to admit that I, too, thought that 1 byte == 8 bits
MagosX.com
Give a man a fish and you feed him for a day.
Teach a man to fish and you feed him for a lifetime.
Originally posted by Magos
I have to admit that I, too, thought that 1 byte == 8 bits

Me too.