But lol, NEED an array of such size? For what?!
What could possibly eat 4 GB of memory?
Perhaps you should elaborate.
Yeah, the 32nd bit is just a flag telling the OS whether the address is a 31-bit or a 24-bit address. The thing I'm wondering though is, shouldn't the 64-bit z/OS actually be 62-bit? Does it use some other method of knowing whether to interpret the address as 24, 31, or 64 bit?
No, IBM went to great lengths to make sure it was actually 64 bits. It's my understanding IBM caught a lot of sneers when all the PC guys implemented 32 bits. IBM marketing didn't want that to happen again - so 64 it was.
In 64-bit, the PSW (program status word) now has a new bit (previously unused) that indicates 64 bit mode. And, the instruction address portion of the PSW is now odd as well. And, there are new instructions:
TAM - test addressing mode. It sets a cond code for what amode you are in.
SAM24 - set 24 bit amode
SAM31 - set 31 bit amode
SAM64 - set 64 bit amode.
Todd
Yes, but it's not so simple. There are two considerations: it matters both how a program is linked and what addressing mode, or modes, it chooses to operate in.
If a program is linked for 24 bit mode, it can't be loaded (resident) in any storage higher than what can be addressed by 24 bits. If linked in 31 bit mode, it can be loaded in either 24 or 31 bit storage. At this point, programs cannot execute in storage that is not addressable by 31 bits, so being resident in 64-bit storage is a moot point. Not sure if that will change in the future or not. For now, it's not a problem.
If a program is linked in 24, 31 or 64 bit mode, it can still choose to switch addressing modes if it wants or needs to. However, if it chooses to run in 24-bit mode while residing in 31-bit storage, it had better not reference any storage higher than 24 bits. (Not recommended protocol.)
Todd
If you've got the physical memory, in the right machine, with the right OS.
http://en.wikipedia.org/wiki/Physical_Address_Extension
Yes, but for any given process you are still limited to a 32-bit addressable space, which commonly also has further limitations to allow simple comparisons to distinguish kernel from user mode addresses, e.g. 2+2GB or 3+1GB user+kernel address mapping. [It is not only a matter of comparing addresses: kernel-mode memory is shared across the entire system at all times, whilst user-mode memory is obviously "per process" and thus needs to be swapped for each process. To manage this, the kernel will want to lump all the kernel-space memory into one "bucket" so that the sharing can be done simply by using the same "bucket" for all the different processes - the kernel actually uses the exact same "bucket", not just a copy of it.]
There are ways to use more memory, but it involves some address mapping tricks, and whilst it's probably better than "not being able to do what you want to do", it's not going to allow you to just write code with huge arrays directly in the code [like the OP wants to do], you need some sort of interface that takes a 64-bit value and maps the relevant section of memory into the "visible 32 bits".
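Such a windowing interface might look roughly like the sketch below. To keep it self-contained, an ordinary file stands in for the OS's page-remapping machinery (AWE on Windows, PAE tricks elsewhere - the real calls are platform-specific), and all the names here are made up for illustration: the caller hands in a 64-bit offset, and the library maps the containing chunk into a small, 32-bit-addressable buffer.

```c
#define _POSIX_C_SOURCE 200809L  /* for fseeko() */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Illustrative "windowed memory" interface: a 64-bit offset is mapped
 * into a small visible buffer.  A FILE* is the backing store here; a
 * real implementation would remap physical pages instead. */

#define WINDOW_SIZE (64 * 1024)  /* bytes visible at a time */

typedef struct {
    FILE    *backing;            /* the huge data set lives here */
    uint64_t base;               /* 64-bit offset of the current window */
    unsigned char buf[WINDOW_SIZE];
} Window;

static void window_init(Window *w, FILE *backing)
{
    w->backing = backing;
    w->base = UINT64_MAX;        /* force a remap on first access */
}

/* Make 'offset' visible and return a pointer to that byte. */
static unsigned char *window_at(Window *w, uint64_t offset)
{
    uint64_t base = offset - (offset % WINDOW_SIZE);
    if (base != w->base) {                       /* window miss: remap */
        fseeko(w->backing, (off_t)base, SEEK_SET);
        memset(w->buf, 0, WINDOW_SIZE);
        size_t got = fread(w->buf, 1, WINDOW_SIZE, w->backing);
        (void)got;               /* short reads are fine: tail is zeroed */
        w->base = base;
    }
    return w->buf + (offset - base);
}
```

The point is only the shape of the API: you can't dereference a 64-bit offset directly from 32-bit code, so every access goes through a call that may remap the window first.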
By the way, on x86_64, in the current implementation, the address space is "limited" to 48 bits. This means that 2^48 is the limit for virtual memory [physical memory in early K8 processors is 40 bits, so "only" 1024GB]. Other 64-bit architectures may or may not be able to actually allow you to use a full 64-bit address space.
Also, since a 64-bit OS has a bigger address space, 32-bit apps may well get "more space" even in 32-bit mode, since the kernel addresses can now be placed clearly outside the first 32 bits of the address space, so only a small "shim" is needed to translate the 32-bit OS calls into their 64-bit equivalents - and that's some tens or hundreds of kilobytes, nowhere near the amount needed for the entire OS.
--
Mats
That's nice and all, but arrays need contiguous memory too.
So even if you can extend the amount of memory you can address, you won't be able to get too much memory at once in a 32-bit app, period.
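One common way around the contiguity requirement is to fake a big array out of separately allocated chunks, at the cost of one extra pointer hop per access. A hedged sketch (the names and chunk size are illustrative, not a real library):

```c
#include <stdlib.h>

/* A "big array" of ints built from separately malloc()ed chunks, so no
 * single contiguous block is ever required. */

#define CHUNK_ELEMS (1u << 20)   /* 1 Mi ints per chunk (4 MB each) */

typedef struct {
    size_t nelems;
    int  **chunks;
} BigArray;

static void big_array_free(BigArray *a)
{
    if (!a) return;
    size_t nchunks = (a->nelems + CHUNK_ELEMS - 1) / CHUNK_ELEMS;
    for (size_t i = 0; i < nchunks; i++)
        free(a->chunks[i]);
    free(a->chunks);
    free(a);
}

static BigArray *big_array_new(size_t nelems)
{
    size_t nchunks = (nelems + CHUNK_ELEMS - 1) / CHUNK_ELEMS;
    BigArray *a = malloc(sizeof *a);
    if (!a) return NULL;
    a->nelems = nelems;
    a->chunks = calloc(nchunks, sizeof *a->chunks);
    if (!a->chunks) { free(a); return NULL; }
    for (size_t i = 0; i < nchunks; i++) {
        a->chunks[i] = calloc(CHUNK_ELEMS, sizeof(int));
        if (!a->chunks[i]) { big_array_free(a); return NULL; }
    }
    return a;
}

/* Index with two small divisions instead of one big pointer offset. */
static int *big_array_at(BigArray *a, size_t idx)
{
    return &a->chunks[idx / CHUNK_ELEMS][idx % CHUNK_ELEMS];
}
```

It won't get you past the total address-space limit, of course - it just stops fragmentation (or a 2GB contiguous-block ceiling) from being the thing that kills you first.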
> you won't be able to get too much memory at once in a 32-bit app, period.
Never fear, dun dun dun --
Don't seriously use that :\
Code:
void *ptr;
size_t i = -1;   /* wraps to SIZE_MAX */
do {
    ptr = malloc(i);
    --i;
} while (ptr == NULL);
Yes, of course. But each individual allocation from heap memory can be in a separate block of memory. The stack, and its size, needs to be determined when it is allocated (as part of thread or process creation). It doesn't need to be committed at that time, so any unused area of stack doesn't necessarily occupy real memory, but the space must be reserved for the stack at start of day for that thread/process.
--
Mats
I was hoping the 'dun dun dun' was enough sarcasm. But seriously, did you actually try it matsp? :\
How long did it take?
Hehehe... the code is still looping for me, though I had to reduce it to start from 0x7FFDEFE0, otherwise I'd get an ASSERT inside malloc or an invalid heap allocation size from HeapAlloc.
Sure, but since I reduced the number of iterations by a factor of 4096, using THIS:
It still takes a few seconds - didn't time it...
Code:
void *ptr;
size_t i = -4095;
do {
    ptr = malloc(i);
    i -= 4096;
} while (ptr == NULL);
free(ptr);
printf("i=%zu\n", i);
I also fixed the memory leak and added a printf for the result, of course. As I did this in release mode (-O2 in gcc) it shouldn't cause any asserts to fire, etc.
[Don't ever post code that says "Don't do this" ;) ]
--
Mats
Bored and lonely -- I decided to try, without cheating ;)
Or 1.3GB roughly. Long time just to find that out :(
Code:
zac@neux:~/devel/C/sandbox> time ./sandbox
i = 1345761301

real    15m42.793s
user    7m12.687s
sys     8m21.467s
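For what it's worth, a binary search over the allocation size would find the same boundary in a few dozen probes instead of fifteen minutes. A sketch - with the caveat that overcommit can make huge mallocs "succeed" as mere address-space reservations, and fragmentation makes the succeed/fail boundary slightly fuzzy, so treat the result as an estimate:

```c
#include <stdlib.h>
#include <stdint.h>

/* Binary-search for the largest single malloc() that succeeds:
 * at most ~64 probes instead of millions of 4 KB decrements. */
static size_t largest_malloc(void)
{
    size_t lo = 0;          /* largest size known to succeed */
    size_t hi = SIZE_MAX;   /* a size known to fail */
    while (hi - lo > 1) {
        size_t mid = lo + (hi - lo) / 2;
        void *p = malloc(mid);
        if (p) {
            free(p);
            lo = mid;       /* mid worked; probe bigger */
        } else {
            hi = mid;       /* mid failed; probe smaller */
        }
    }
    return lo;
}
```

Each failed probe is cheap (malloc just says no), and each successful one is freed immediately, so the whole search runs in well under a second.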
* That gives me an idea: that'd make a pretty awesome library, libGimmeAllTheMemory -- useful for those newcomers who think they need all the memory they can get.[1]
[1] Please, I'm not serious :)