hello ppl...
what the F__k is the difference between a 16-bit API and a 32-bit one??
I know there is a difference (between Win NT and 98 or 2000)... but I really don't even know what tutorial or introduction I should look for on the net....
please help???
THE GIRAFFE MAN
"our heads in the skies, but our feet on the ground"....
http://dagan.150m.com
16Bit Variable in binary
0001 1001 1011 1100
32Bit Variable in binary
1000 1000 0001 1001 1100 1011 0000 1100
The difference is that a 32-bit var is twice the size of a 16-bit var...... Older processors could fit at most a 16-bit var into a single register, but newer processors can handle up to 32 bits, therefore twice as much info....
that was very enlightening, although does that mean that you could use a 16-bit var in a 32-bit environment???
is 32 better than 16???
I thought that XP and 2000 are NT based - which means they are 16-bit??? is that true???
please help...
>that was very enlightening, although does that mean that you could
>use a 16-bit var in a 32-bit environment???
Yes you can.
>is 32 better than 16???
That depends on what you're going to do.
Originally posted by GiraffeMan:
>I thought that XP and 2000 are NT based - which means they are 16-bit??? is that true???
>please help...

Nope, Windows 95 right through to Windows XP are designed for 32-bit processors........
Win 3.1 was 16 bit.
So?
Shiro: o.k. I can see that they are different, I am just not getting why??? I understand the part about the variables but one thing...
if let's say I have an int which will be 2 bytes in a 16-bit OS...
in a 32-bit one it will become 4 bytes.. so far so good.. but my question is what does that matter when I can just use a different variable type - like a long instead of an int...
there must be more differences to it than just that..
I was searching the net for tutorials about it.. but I can't seem to find anything; this is really important for me in order to understand how everything works (well not everything but mostly Windows...).
sorry to be such a pain.. but there's no place else I can go
16 bits or 32 bits depends highly on the CPU architecture. On a 16-bit architecture, all registers are 16 bits, including the addressing, and you can only address 64K bytes of data directly. Addressing more than 64K bytes is a big job - by segmentation, by the huge model, etc... - and these techniques degrade performance significantly. 32 bits allows you to operate on data up to 4G directly (theoretically).
The biggest benefit of 32-bit is the addressing capability. You know, today's data is far more than 64K bytes! 32-bit arithmetic is only one of the benefits of a 32-bit CPU architecture.
so wouldn't it be good if someone made a 64-bit API???
or might there be a problem with that??
Originally posted by GiraffeMan:
>is 32 better than 16???

2^16 = 65536
2^32 = 4294967296
2^64 = 18446744073709551616

As you see, you can store way larger numbers the more bits you have (which is good). One problem though is that it is slower to calculate the more bits you have... (which is bad)
MagosX.com
Give a man a fish and you feed him for a day.
Teach a man to fish and you feed him for a lifetime.
Originally posted by Magos:
>One problem though is that it is slower to calculate the more bits you have... (which is bad)

But if you are using the optimal variable size for the processor there shouldn't be any problem.... I have read examples that increased the variable size so that it is in line with the processor, and this would supposedly increase speed as there would be no masking........