Thread: OS: To go 64 bit or not to go?

  1. #16
    Here comes C++! Break C!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Well, it's not like the OS eats 256 MB. The process just can't use those extra 256 MB.
    But I guess you knew that, huh?

    But the point is, I think, that it helps catch more "invalid pointer" errors.
    Quote Originally Posted by Adak
    io.h certainly IS included with some modern compilers. It is not part of the C standard, but it is nevertheless included in the very latest Pelles C versions.
    Quote Originally Posted by Salem
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  2. #17
    Hurry Slowly vart's Avatar
    Join Date
    Oct 2006
    Location
    Rishon LeZion, Israel
    Posts
    6,788
    Quote Originally Posted by robwhit
    How is that a null pointer access? And how does that take 256MB?

    Or maybe I'm not getting the joke.
    The first parameter of the function is a null pointer, so a[100] dereferences address 0 + 100*sizeof(int) - that is, 0x00000190 with a 4-byte int, or 0x00000320 with an 8-byte int.
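    A minimal sketch of what is being described (the function and its caller are hypothetical reconstructions, since the original code isn't quoted here):

    #include <stdio.h>

    /* The caller passes NULL as the array argument. */
    static int read_element(int *a)
    {
        return a[100]; /* reads address 0 + 100*sizeof(int) = 0x190 with a 4-byte int */
    }

    int main(void)
    {
        printf("%d\n", read_element(NULL)); /* faults: the low pages are never mapped */
        return 0;
    }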
    Last edited by vart; 04-28-2008 at 01:14 PM.
    All problems in computer science can be solved by another level of indirection,
    except for the problem of too many layers of indirection.
    – David J. Wheeler

  3. #18
    Registered User
    Join Date
    Oct 2001
    Posts
    2,129
    oh I see... null pointer base, not null pointer.

  4. #19
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    It may be that some memory around address zero is used, but the OS also needs a small thunking layer to implement the 32-to-64-bit conversion for system calls, which also eats a bit of memory.

    Conventional .exe files start around 0x400000, which is "only" 4MB into memory.

    By experimenting with VirtualAlloc on Win2K, I can allocate memory at 0x30000 (but no lower), so I expect 192KB is the "minimum" the OS refuses to give you, to prevent null-pointer accesses. Note that this 192KB doesn't actually occupy any memory - it's simply marked "not present" in the page-table entries.
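    The experiment looks roughly like this (a minimal sketch - the probing loop is my own illustration, and the lowest address granted varies with the Windows version):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        ULONG_PTR addr;

        GetSystemInfo(&si);

        /* Probe upward from 64KB in allocation-granularity steps;
           requests below the reserved low region simply fail. */
        for (addr = 0x10000; addr < 0x400000; addr += si.dwAllocationGranularity)
        {
            void *p = VirtualAlloc((LPVOID)addr, si.dwPageSize,
                                   MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (p != NULL)
            {
                printf("lowest address granted: %p\n", p); /* 0x30000 on my Win2K box */
                VirtualFree(p, 0, MEM_RELEASE);
                break;
            }
        }
        return 0;
    }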

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #20
    Chinese pâté foxman's Avatar
    Join Date
    Jul 2007
    Location
    Canada
    Posts
    404
    Well, thanks for these interesting replies. I'll install Vista 32-bit instead of the 64-bit version. I'll be playing it safe; as some of you mentioned, there's not much to gain (especially for my needs) and more to lose.

    As for Linux, I might still go 64-bit, but I'm no longer entirely convinced. Maybe I should look a bit further for more information. Or I might install two different versions of Linux, since I have plenty of space on this new drive, even if that might be a bit (a lot?) useless.

    I also want to have one "data" partition. This may look odd, but which file system would you advise for it, since I want my data to be accessible from both Windows and Linux? NTFS? Any suggestions?
    I hate real numbers.

  6. #21
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    I don't believe NTFS in Linux is robust, even if the latest versions allow writing as well as reading. I'd go for old-fashioned FAT for a shared partition.

    As for 32- vs. 64-bit Linux: I have been using both for a long time, and there are very few things that are "worse" in 64-bit, but a lot of things that are better. For example, .so files load faster, because almost no relocation is needed: PC-relative addressing is part of x86-64, which also makes the code more efficient as PIC (position-independent code), whereas the 32-bit version has to dedicate a register (EBX, I believe) as a base pointer for accessing data within the .so.
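    A quick way to see this (the file name and the build/inspect commands below are my assumptions about a typical gcc toolchain, not anything from this thread):

    /* libdemo.c - a made-up one-global shared library.
       Assumed commands:
         gcc -m64 -fPIC -shared libdemo.c -o libdemo64.so
         gcc -m32 -fPIC -shared libdemo.c -o libdemo32.so
         objdump -d libdemo64.so
         objdump -d libdemo32.so */
    int counter;

    int get_counter(void)
    {
        /* x86-64 PIC: the GOT slot for 'counter' is addressed relative to RIP,
           so no general-purpose register needs to be reserved as a base.
           i386 PIC: the code must first load the GOT address (classically into
           EBX) via a call/pop thunk, then indirect through it. */
        return counter;
    }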

    32-bit apps will work just fine on a 64-bit OS (the same applies on Windows), so there's no real need to worry about application compatibility.

    But please make your own decision, as you are the one living with the machine.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  7. #22
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    It's a null pointer access because memory is accessed through a null pointer, even if it is offset first.

    And 256MB is what the designers decided was a good compromise between catching accesses even into very large hypothetical arrays and not taking up too much of the virtual address space.

    There is no joke.
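    To put a number on that compromise (a back-of-the-envelope check, assuming a 4-byte int):

    #include <stdio.h>

    int main(void)
    {
        /* With the first 256MB of address space left unmapped, a null-based
           access p[i] is trapped for every index i where i * sizeof(int)
           stays below 256MB. */
        unsigned long guard = 256UL * 1024 * 1024;
        printf("largest trapped int index: %lu\n",
               (unsigned long)(guard / sizeof(int) - 1)); /* 67108863 */
        return 0;
    }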

    Edit: Totally missed the second page.

    Linux NTFS is still not really done. The in-kernel NTFS driver still refuses to write unless you specifically enable that support, and even then writing is very limited. The out-of-kernel NTFS driver provides more write capability, but there are still limits.
    Then there's NTFS-3g, an entirely separate project, available only as a FUSE module. It supports all write operations, though not on encrypted or compressed files. I haven't tested its stability myself, but I believe it's very good. Its main "problem" with acceptance is one of politics, I think.
    Last edited by CornedBee; 04-29-2008 at 02:49 AM.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  8. #23
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by CornedBee
    It's a null pointer access because memory is accessed through a null pointer, even if it is offset first.

    And 256MB is what the designers decided was a good compromise between catching accesses even into very large hypothetical arrays and not taking up too much of the virtual address space.

    There is no joke.
    Although it ISN'T 256MB at the beginning of a normal Windows executable - there are only 4MB before the code starts. And as I wrote in my last post, if you really want to squeeze every single bit (pun intended) out of the virtual address range, you can VirtualAlloc() down to 192KB above zero.

    On a 64-bit system, I expect the range around zero that is left "unused for the purpose of catching NULL accesses" to be bigger.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  9. #24
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by foxman
    I'll install Vista 32-bit instead of the 64-bit version. [...] As for Linux, I might still go 64-bit, but I'm no longer entirely convinced.
    You shouldn't be too worried about this issue, particularly on Linux. If the Vista 64-bit versions are indeed lacking drivers, then you should definitely go 32-bit. However, you should expect only new hardware to suffer from this, and within a few more months most, if not all, major hardware vendors will have made their drivers available. Meanwhile, as far as I know, a 64-bit operating system can run with 32-bit drivers.

    As for Linux (and the Mac, by the way), full 64-bit support is provided. It would probably be a mistake NOT to go 64-bit, as that is the direction the operating system is heading. Sure, there will be 32-bit versions for a long time to come. But since a) 64-bit Linux can run any 32-bit application, and b) 64-bit applications run faster than 32-bit ones (at least on Intel processors), there is very little doubt in my mind that 64-bit Linux is where almost everybody will be investing their time.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  10. #25
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by Mario F.
    b) 64-bit applications run faster than 32-bit ones (at least on Intel processors), there is very little doubt in my mind that 64-bit Linux is where almost everybody will be investing their time.
    They almost always do on AMD CPUs too - and I'm sure it's possible to find some that run slower on Intel CPUs as well - typically apps that use long linked lists with small payloads (e.g. a linked list of records of 6-7 integers/pointers).
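    For example (a made-up node of the shape described; the sizes in the comments assume a typical 32-bit ILP32 ABI versus 64-bit LP64/LLP64):

    #include <stdio.h>

    /* The payload stays the same size, but the link doubles from 4 to 8 bytes
       on a 64-bit build, so each node eats more cache and long list walks
       get slower. */
    struct node {
        int values[6];
        struct node *next; /* 4 bytes on a 32-bit build, 8 on a 64-bit one */
    };

    int main(void)
    {
        /* Typically 28 bytes on a 32-bit build vs. 32 bytes on x86-64. */
        printf("sizeof(struct node) = %lu\n", (unsigned long)sizeof(struct node));
        return 0;
    }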

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.
