Paging is a PITA, but high bandwidth and low latency are hugely important in some cases, because memory quickly becomes the bottleneck (and it becomes the limiting factor a lot quicker than CPU time these days, once factors like I/O and networking are taken out of the picture). This is one of the reasons the Cell architecture abandoned caches in favor of a 256 KB local store per SPE. Controlling/predicting the effects of cache locality can be very hard, but the effects on performance are HUGE (think 50x speedups on a loop because it fits in your L1i cache), so they eschewed caches and instead make you move data across a high-speed bus yourself, giving you more control over the effects of memory latency/bandwidth.

More memory means less paging, though, so I prioritize more memory.
Not to say more memory is bad, but higher bandwidth and a big L1 cache can make all the difference for certain use cases. :]