Thread: OS: To go 64 bit or not to go?

  1. #1
    Chinese pâté foxman's Avatar
    Join Date
    Jul 2007
    Location
    Canada
    Posts
    404

    OS: To go 64 bit or not to go?

    Hi everyone,

I'm building a new PC and I was wondering whether I should choose a 64-bit operating system or not. What are the risks and benefits? Because I must say, right now, I don't see many benefits (except using the full 4 GB of memory I'll have), since I don't see many 64-bit applications out there (but I might be wrong - or blind). And what about drivers?

    The operating system I am looking at is Windows Vista Business 64-bit (which I can get for free and completely legally; installing Windows XP is not really an option since I don't have any licenses left). I'll also install Ubuntu 8.04 64-bit. Or I could install the 32-bit versions of both Vista and Ubuntu. I have numerous choices. What do you think? Not installing Vista is not an option, nor is not installing Linux.

    It seems a bit odd to me that there is so little 64-bit software even though 64-bit processors have been on the market for a couple of years now. I was too young to see the 16-bit/32-bit transition, but I'm curious: did it take long?

    For people who might be interested, here are the highlights of my new system:
    • Core 2 Duo E8400
    • GA-EP35-DS3R motherboard
    • 2x2GB of PC2-6400
    • 8800GT


    Thanks
    I hate real numbers.

  2. #2
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    In Linux I would say go for 64-bit - there's little or no drawback to it, particularly if it's a "starting from scratch" system.

    In Windows, there's probably little benefit to 64-bit.

    And note that even if you have 4GB in the machine, a 32-bit OS can still use all of it (and more), just not in ONE PROCESS. A process is normally limited to 2GB of user address space, which, if you want, can be extended to 3GB - but only processes that set the IMAGE_FILE_LARGE_ADDRESS_AWARE flag in the .EXE header will be able to use this.
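For reference, the flag Mats mentions corresponds to the /LARGEADDRESSAWARE switch in Microsoft's toolchain, which sets the IMAGE_FILE_LARGE_ADDRESS_AWARE bit in the PE header. Roughly (assuming MSVC's link.exe and editbin.exe):

```shell
# Link an EXE so it may use more than 2GB of user address space
link /LARGEADDRESSAWARE main.obj

# Or set the bit on an already-built binary
editbin /LARGEADDRESSAWARE app.exe
```

The bit only has an effect when the OS actually offers a larger address space (a 32-bit system booted with /3GB, or a 64-bit system running the app under WOW64).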

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #3
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    You should absolutely not go for a 64-bit Vista. Its driver support is abysmal.

    The 16-bit -> 32-bit transition was indeed long and gradual. 32-bit support was introduced with the 80386, but for compatibility reasons, DOS was never updated to actually use it. It remained 16-bit right until the very end.
    There were 32-bit extenders, like DOS4GW used in many games or Win32s that allowed writing 32-bit applications for Windows 3.x, but in general, these applied to a single application only.
    Windows NT was developed and ran the system in 32-bit mode, but it was never very widespread, at least on home computers. Probably a bit more on workstations.
    Windows 95, finally, ran mostly in 32-bit mode, although parts of its core were still based on DOS and thus 16-bit. Windows ME was the last of the Win9x family.
    Windows XP, the successor of Windows 2000 (and thus Windows NT) was the first operating system to almost completely displace Windows 98 and ME from home computers.
    The 80386 was released in 1985. Windows XP was released in 2001. That's 16 years of transition.

    The 64-bit transition will take even longer, though, I think. The 32-bit transition brought immense side-effect advantages: protected memory, virtual memory, preemptive multitasking, linear addressing, native large-integer calculations, and probably more.
    The 64-bit transition brings no such advantages. All it does is extend the address space even more, and few applications actually need that much memory. As such, there is less incentive to actually make the transition.
    We'll see. The clock started ticking with the release of the Intel Core 2 in 2006. I think it's safe to say that every CPU henceforth will support 64-bit mode.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  4. #4
    Internet Superhero
    Join Date
    Sep 2006
    Location
    Denmark
    Posts
    964
    Quote Originally Posted by CornedBee View Post
    We'll see. The clock started ticking with the release of the Intel Core 2 in 2006. I think it's safe to say that every CPU henceforth will support 64-bit mode.
    AMD had a 64-bit CPU on the market long before that - the Opteron and Athlon 64 launched back in 2003.

    And note that if you have 4GB in the machine, a 32-bit OS can still use all (and more), just not in ONE PROCESS
    Really? I always thought that a 32-bit OS could never address more than 4GB memory at one time?
    How I need a drink, alcoholic in nature, after the heavy lectures involving quantum mechanics.

  5. #5
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by Neo1 View Post
    AMD had a 64-bit CPU on the market a long time before that..
    Yes, and Intel had P4 processors capable of 64-bit for a while before the Core 2 too - but only certain models.

    Really? I always thought that a 32-bit OS could never address more than 4GB memory at one time?
    Yes and no. You can't have access to more than 4GB at any given moment in time. The PAE mode of 32-bit x86 processors allows up to 64GB of physical RAM (36-bit physical addresses), but any one process still sees only a 4GB virtual address space, mapped in 4KB pages. In practice, PAE with more than 4GB is used mainly on the server editions of Windows.

    And the process itself (in Windows' standard configuration) cannot use more than 2GB without trickery, because Windows uses the top bit (bit 31) of the address to separate kernel and user mode: user-mode addresses are always below 0x80000000, and kernel addresses are always at or above it. With the /3GB switch in boot.ini, this boundary moves to 0xC0000000 - so 1GB is used for kernel space, and 3GB for user-mode code. This doesn't work well with high-end graphics cards and the like, because there isn't enough room left to map a 256MB+ graphics card's frame buffer in kernel space. Note that these are VIRTUAL addresses; any of that virtual memory can be mapped anywhere in the 4GB (or, with PAE, 64GB) of physical memory.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  6. #6
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,195
    The 2/3/4 GB per-process limit only affects 32-bit applications; a 64-bit application running under a 64-bit OS has access to the full 64-bit address space (16 billion gigabytes in theory, though current OSes expose far less than that). For an application to be 64-bit, though, it has to be specifically compiled as an x64 application. Few applications need access to the additional RAM, and running in 64-bit mode incurs a small code-size overhead, since some of the opcodes are a byte or two larger - on the order of a few percent. So most developers just write for 32-bit. Currently the only applications I am aware of that need 64-bit are media editing software, engineering modelers, database servers and some HPC applications. I don't know of any game clients that need 64-bit, although some of the server engines might.

  7. #7
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Actually, the overhead of one extra byte for certain opcodes [1] is largely hidden by the changes in registers and ABI: the 64-bit calling conventions pass the first several arguments in registers (four on Windows, six on Linux/System V) instead of pushing most of them on the stack as 32-bit code does, and loading a register is almost always smaller (in code size) than pushing the data onto the stack and cleaning up the stack afterwards.

    Clever compilers will attempt to get things into the right register early on, so no overhead is needed for getting the data in the right place except where the compiler runs out of registers.

    The bigger issue, from a performance perspective, is that 64-bit pointers take up twice as much memory, and thus twice as much space in the cache. That slows things down when you have lots of pointers (linked lists and arrays of pointers for example).

    And as pointed out, most applications don't actually need 2 or 3 GB of memory anyways, so the push for using 64-bit in applications is not heavy.

    The limit for a single 32-bit app under a 64-bit OS is just under 4GB, because almost all the kernel-space stuff is now 64-bit, so it can be "hidden out of the way" above the 32-bit address space.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  8. #8
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,195
    I believe it is 256MB shy of 4GB; the 256MB is reserved for NULL pointer detection, IIRC.

  9. #9
    Registered User
    Join Date
    Feb 2008
    Location
    Rochester, NY
    Posts
    27
    I've had absolutely no issues with Vista Business 64-bit. No driver issues to my knowledge at least.

    I was having some fun with it when I initially set up the machine, rendering Maya animations to see the performance differences. The jump from a 2.14 GHz Core Duo to a Core 2 Quad was huge (an 8+ hour render time dropped to under 40 minutes), and I'd assume some of that came from the 64-bit support, not just the two extra cores. Since then I haven't really done much more than DirectX 10 programming and playing with various OSes in virtual machines, but I haven't had any issues with the system. So far I have yet to blue-screen the computer (knock on wood), and it's a few months old.

  10. #10
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by Neo1 View Post
    Really? I always thought that a 32-bit OS could never address more than 4GB memory at one time?
    It's not a question of whether the OS can address it, but whether the application can. It's perfectly possible to set up page tables for an application which let it access memory that isn't (currently) mapped in the kernel's own page table. The inherent limitation is the 32-bit size of a pointer. The CPU itself has ways of accessing more than 4 gigabytes of total memory; it just can't be done without altering the page tables. In essence you only ever see a 4 gigabyte "window", but this window can be moved around.

    Back in 16-bit DOS, applications were certainly able to access more than 64 kilobytes of memory; they just had to use tricks. The same goes these days - the numbers are just bigger.

  11. #11
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    Quote Originally Posted by Neo1 View Post
    AMD had a 64-bit CPU on the market a long time before that.
    Yes, but I started the count where there were no more non-64-bit-capable CPUs to come, because it's hard for the industry to support something that only a part of even the new products actually support.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  12. #12
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    I agree with the whole "don't use Vista" business. Vista is slow, unstable, and I would never trust an OS that cannot even copy files correctly.
    Vista is bloat, bloat, bloat and nothing more. It has some nice eye candy and (some) nice features, but not enough to make it a viable option over XP/Linux.
    Go for Linux instead.
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  13. #13
    Registered User
    Join Date
    Oct 2001
    Posts
    2,129
    Quote Originally Posted by abachler View Post
    I believe it is 256MB shy of 4GB, the 256MB that is reserved for NULL pointer detection IIRC.
    256MB for NULL pointer detection? What amazing algorithm is this?

  14. #14
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    One that also detects null pointer access of this kind:
    Code:
    #include <stdio.h>
    
    void foo(int *ar, int sz)
    {
      printf("%d\n", ar[100]);
    }
    
    int main(void)
    {
      foo(0, 150);
      return 0;
    }
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  15. #15
    Registered User
    Join Date
    Oct 2001
    Posts
    2,129
    How is that a null pointer access? And how does that take 256MB?

    Or maybe I'm not getting the joke.

