Hi guys,
I got 2 questions, hope you won't find it stupid....
1) How many bits does indirect addressing use? Is it 30 bits? Even though it is allocated 32 bits?
2) Is there a thing called secondary memory algorithms? If so, is there any example?
Originally Posted by franziss
How many bits does indirect addressing use? Is it 30 bits? Even though it is allocated 32 bits?
If you mean "how big is a pointer" then the answer is "it depends". For example, I believe 32-bit x86 code uses 32-bit pointers, but 64-bit IA64 code uses 64-bit pointers.
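To make the "it depends" point concrete, here is a minimal C sketch (the function name is mine, not from the thread) that reports the pointer width of whatever platform it is compiled on:

```c
#include <stdio.h>

/* Pointer width is a property of the platform/ABI, not of the C
   language itself: typically 32 bits on a 32-bit x86 build and
   64 bits on a 64-bit build. */
static void show_pointer_size(void)
{
    printf("pointer size: %u bits\n",
           (unsigned)(sizeof(void *) * 8));
}
```

Compiling the same source for a 32-bit versus a 64-bit target changes the output, which is why "it depends" is the honest answer.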
Originally Posted by franziss
Is there a thing called secondary memory algorithms? If so, is there any example?
Huh? Could you give an example of what you're looking for?
If you understand what you're doing, you're not learning anything.
Is there a thing called secondary memory algorithms? If so, is there any example?
Google says yes.
Er... I'm totally lost about secondary memory algorithms. This phrase was given to me by a professor, and I dare not keep mailing him questions; I'm afraid he might be irritated by me....
Thus, I posted it on this forum...
i might be making this up, but i think the processor uses 36 bits... 4 for the segment address and 32 for the offset address.
i seem to have GCC 3.3.4
But how do I start it?
I don't have a menu for it or anything.
This is on the C board, why again?
Quzah.
Hope is the first step on the road to disappointment.
Sounds like your professor might be talking about virtual memory and/or caching algorithms.
That sounds possible. Though I do wonder why someone in such an advanced class can't articulate it a bit better.
Originally Posted by franziss
Is there a thing called secondary memory algorithms? If so, is there any example?
Traditionally, "secondary memory" refers to external storage on mainframes (9-track tape drives, magnetic drums, etc.). This is contrasted with "primary memory", which is contained in the CPU. Programs that need more memory than the amount of CPU "core memory" must store parts of the data base externally.
A sort algorithm specifically fine-tuned for a large data base in secondary storage may very well be different from a sort algorithm optimized for a smaller data base that can be contained in core.
For example some sorting algorithms require repeated access to elements near the first and near the last, whereas others require more accesses, but the accesses are more-or-less sequential. If the external storage has a large access time for elements that are not near each other (like tape drives), you really would like to see successive accesses more-or-less sequential.
With the advent of operating systems with virtual memory, (where large parts of programs and data bases are rapidly shifted between core and external memory in a way invisible to the programmer) the importance of the difference in such algorithms has pretty much been lost on most rank-and-file programmers, except as a mild historical oddity.
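As an illustration of the sequential-access style described above, here is a minimal C sketch (my own, not from the thread) of the merge step at the heart of external merge sort: two sorted runs are read strictly front to back, which is exactly the access pattern that suits tape-like secondary storage:

```c
#include <stdio.h>

/* Merge two sorted runs of integers (one per line) into 'out',
   reading each input strictly sequentially -- the access pattern
   that made merge sort the classic choice when data lived on
   tape drives or other slow-seek secondary storage. */
static void merge_runs(FILE *a, FILE *b, FILE *out)
{
    int x, y;
    int have_x = (fscanf(a, "%d", &x) == 1);
    int have_y = (fscanf(b, "%d", &y) == 1);

    while (have_x && have_y) {
        if (x <= y) {
            fprintf(out, "%d\n", x);
            have_x = (fscanf(a, "%d", &x) == 1);
        } else {
            fprintf(out, "%d\n", y);
            have_y = (fscanf(b, "%d", &y) == 1);
        }
    }
    while (have_x) {
        fprintf(out, "%d\n", x);
        have_x = (fscanf(a, "%d", &x) == 1);
    }
    while (have_y) {
        fprintf(out, "%d\n", y);
        have_y = (fscanf(b, "%d", &y) == 1);
    }
}
```

A full external sort would first produce the sorted runs (each small enough to sort in core), then repeatedly merge them; the point here is only that the merge never seeks backwards in either input.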
Regards,
Dave
Last edited by Dave Evans; 12-17-2004 at 10:11 AM.