Well, it's not like the OS eats 256 MB. The process just can't use those extra 256 MB.
But I guess you knew that, huh?
But the point, I think, is that it helps catch more "invalid pointer" errors.
Last edited by vart; 04-28-2008 at 01:14 PM.
All problems in computer science can be solved by another level of indirection,
except for the problem of too many layers of indirection.
– David J. Wheeler
oh I see... null pointer base, not null pointer.
It may be that some memory around address zero is used, but the OS also needs a small thunking layer to implement the 32-to-64-bit conversion for system calls, which also eats a bit of memory.
Conventional .exe files start around 0x400000, which is "only" 4MB into memory.
By experimenting with VirtualAlloc on Win2K, I can allocate memory at 0x30000 (but no lower), so I expect that 192KB is the "minimum" that the OS refuses to give you, to prevent null pointers. Note that these 192KB don't really occupy any memory - the pages are just marked "not present" in the page-table entries.
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
Well, thanks for these interesting replies. I'll install Vista 32-bit instead of the 64-bit version. I'll be playing it safe and, as some of you mentioned, there's not much to win (especially for my needs) and more to lose.
As for Linux, I might still go 64-bit, but I'm no longer entirely convinced. Maybe I should look a bit further for more information. Or I might install two different versions of Linux, since I have plenty of space on this new drive, even if that might be a bit (a lot?) useless.
I also want to have one "data" partition. This may look odd, but which file system would you advise for it, since I want my data to be accessible from both Windows and Linux? NTFS? Does anyone have suggestions?
I hate real numbers.
I don't believe NTFS in Linux is robust, even if the latest versions allow writing as well as reading. I'd go for old-fashioned FAT for a shared partition.
As to 32- or 64-bit Linux, I have been using both for a long time, and there are very few things that are "worse" in 64-bit, but a lot of things that are better. For example, .so's load faster, because there is almost no relocation needed: PC-relative addressing is part of x86-64, which also makes the code more efficient as PIC (position-independent code), whereas the 32-bit version needs to dedicate a register (I believe EBX) to hold a base pointer for accessing things within the .so.
32-bit apps will work just fine on 64-bit OS (same applies in Windows), so there's no direct need to worry about applications being compatible or not.
But please make your own decision, as you are the one living with the machine.
--
Mats
It's a null pointer access because memory is accessed through a null pointer, even if it is offset first.
And 256MB is what the designers decided was a good compromise between catching accesses even into very large imagined arrays and not taking up too much of the virtual address space.
There is no joke.
Edit: Totally missed the second page.
Linux NTFS is still not really done. The in-kernel NTFS driver still refuses to write unless you specifically enable that support, and even then the writing is very limited. The out-of-kernel NTFS driver provides more writing capabilities, but there are still limits.
Then there's NTFS-3g, an entirely separate project, only available as a FUSE module. It supports all write operations, except on encrypted or compressed volumes. I don't know much about its stability, but I believe it's very good. Its main "problem" with acceptance is one of politics, I think.
Last edited by CornedBee; 04-29-2008 at 02:49 AM.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Although it ISN'T 256MB at the beginning of a normal Windows executable, as there is only 4MB before the code starts. And as I wrote in the last post, if you really want to squeeze every single bit (pun intended) out of the virtual memory range, you can VirtualAlloc() down to 192KB from zero.
In a 64-bit system, I expect the range around zero that is "unused, for the purpose of catching NULL accesses" to be bigger.
--
Mats
You shouldn't be too worried about this issue, particularly on Linux. If the Vista 64-bit version is indeed lacking drivers, then you should definitely go 32-bit. However, it's to be expected that only new hardware suffers from this, and within a few more months most, if not all, major hardware vendors will have made their drivers available. Meanwhile, as far as I know, a 64-bit operating system can run 32-bit drivers.
As for Linux (and Macintosh, btw), full 64-bit support is provided. It would probably be a mistake NOT to go 64-bit, as this is the direction the operating system is taking. Sure, there will be 32-bit versions for a long time to come. But since a) 64-bit Linux can run any 32-bit application, and b) 64-bit applications run faster than 32-bit ones (at least on Intel processors), there is very little doubt in my mind that 64-bit Linux is what most everybody will be investing their time in.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.