Well first, you need to make a distinction between simply disabling extraneous services (the second thing I mentioned) and compiling support for unneeded hardware out of the kernel (the first thing I mentioned). Disabling unused services (especially networked services) is just plain sound advice on any platform. While you can reduce your memory footprint by simply disabling services on either platform, a smaller kernel is still a smaller kernel; as long as it does what you need, and does it faster using less memory, that's a win. Not everyone will see it this way, and to be honest I only go this extra mile in rare situations; it's nice just knowing that I can (and have in the past). For me, the "optimal computing experience" is defined by what I can do, not by what I am not allowed to do with my own stuff.
Yes, around 550 MB.
It seems I've been wrong.
I suppose there are factors about Windows I don't know, especially on low-memory systems.
Well, my systems usually have plenty of memory, and usage usually hovers around the GB mark at bootup. That's all I can say for my setup. Not a problem for me, though.
I see your point.
But the thing is that if you disable stuff in the kernel, say, then you will (or may) become limited in what you can do. So you disable multi-monitor support, for example. One day you get another monitor. And it doesn't work. What then? Oh, you could probably enable it again, but what if it's less obvious?
That's why I don't like disabling stuff, either in services, or if I could, the kernel. You never know what might happen.
I understand your perspective, which is very PC-oriented, and the garden-variety PC is powerful enough that recompiling your kernel and stripping out hardware support helps only minimally. Where this *does* bring a big bang for the buck is when you can build an OS for things like routers, replacement firmware for an iPod or other player, or devices like the NSLU2. There is never *ever* going to be a need for multi-monitor support (or whatever; I'm not picking on that for any particular reason) on something like that, and the CPU is pretty low-powered to begin with, not to mention not even remotely Intel-based, so turning it into something more useful than it is can see real benefit from stripping out the unneeded stuff. I turned mine into a low-power BitTorrent server. This is why I like having the source for my kernel and having it available on many hardware architectures. I am a tinkerer, an experimenter, someone who likes to see what can be made out of other things. This is not for everyone, I know, but for someone like me this capability is nothing short of heaven, and anything that gets in the way of it is simply a bug in the system.
But for a PC the situation on the ground is different: if some new hardware comes out that my kernel doesn't come with support for and the vendor doesn't support, I can usually build it into the kernel, recompile, and I'm off to the races. I like that. I don't always like the fact that I have to do that sometimes, but I do like being capable of doing it. I know many won't understand that or see the value in it, and that's fine.
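For anyone curious what "stripping out hardware support" looks like in practice, it mostly means turning off options before rebuilding. A hypothetical fragment of a trimmed `.config` might look like this (which options you can safely disable depends entirely on your machine and kernel version; in the generated file, disabled options appear as "is not set" comments):

```
# Hardware this particular box will never have:
# CONFIG_AGP is not set
# CONFIG_PCMCIA is not set
# CONFIG_ISDN is not set
# CONFIG_INFINIBAND is not set
# Keep what you actually use compiled in:
CONFIG_EXT4_FS=y
```

After adjusting the config (usually via `make menuconfig` rather than by hand), the usual `make && make modules_install && make install` produces the smaller kernel.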
Ah yes, I shouldn't have mentioned the system cache. I forgot it doesn't count towards the commit charge. SuperFetch, though, seems like it does, from my quick tests: I disabled SuperFetch and my commit charge dropped by around 400 MB, but the "system cache" entry in Task Manager didn't change at all.
It follows the same rules as the cache, though: if more memory is requested than is currently free, it will just throw the data away. (Although I think, and I'm basically just guessing here, that SuperFetch data has a higher priority than the plain system cache to stay resident.)
Yep, that's pretty cool. And useful too, it sounds.
Superfetch is a little special, I think. I do believe it likes to cache all sorts of stuff. I would have to read up more on it, though.
But no sort of cache should ever count towards total commit. Committed memory is all memory used by applications plus internally needed OS memory. This is memory that can never simply be thrown away; if it doesn't fit into RAM, it has to be paged out.
Cache is another thing. It's not necessary and is thrown away when the memory is needed. It's never paged out, and it occupies only otherwise-free memory.
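As a toy sketch of that distinction (my own simplified model, not Windows' actual memory manager): committed pages must survive memory pressure by being paged out, while cache pages are simply discarded.

```python
# Toy model of committed memory vs. cache (not Windows' real algorithm).
class Memory:
    def __init__(self, ram_mb):
        self.ram_mb = ram_mb
        self.committed = 0   # app + OS memory: may be paged out, never discarded
        self.paged_out = 0   # committed pages pushed to the pagefile
        self.cache = 0       # opportunistic: fills free RAM, dropped on demand

    def free(self):
        in_ram_committed = self.committed - self.paged_out
        return self.ram_mb - in_ram_committed - self.cache

    def commit(self, mb):
        """An allocation must be honored: evict cache first, then page out."""
        need = mb - self.free()
        if need > 0:
            dropped = min(self.cache, need)   # cache is just thrown away
            self.cache -= dropped
            self.paged_out += need - dropped  # the rest goes to the pagefile
        self.committed += mb

    def fill_cache(self):
        """Cache (e.g. SuperFetch) only ever occupies otherwise-free RAM."""
        self.cache += self.free()

mem = Memory(ram_mb=1024)
mem.commit(600)      # apps commit 600 MB; 424 MB left free
mem.fill_cache()     # cache grows to fill the free 424 MB
mem.commit(600)      # new demand: all cache is discarded, 176 MB paged out
```

The point of the sketch is just the asymmetry: the second `commit` never fails and never loses data, while the cache evaporates without ceremony.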
I do believe superfetch is caching some specific program information into memory, but I can't be sure.
My slightly more expensive (80 GB for $200) SSD can do 250 MB/s sequential read and ~80 MB/s write. The real difference lies in random reads, though, since that's the predominant access pattern on modern computers (lots of small files, or random access within big files). A hard drive drops to low single-digit MB/s for small (64 KB) random reads, while an SSD can still do ~100 MB/s since it doesn't really need to seek.
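A rough sketch of how you might measure that yourself (the file name and sizes here are arbitrary; note that OS caching will flatter both numbers unless you drop caches or use a file much larger than RAM):

```python
import os, random, time

PATH = "testfile.bin"          # arbitrary scratch file name
FILE_MB = 64                   # kept small for the sketch; use > RAM for honesty
BLOCK = 64 * 1024              # 64 KB reads, like the example above

# Create a scratch file to read back.
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_MB * 1024 * 1024))

def read_throughput(random_order):
    offsets = list(range(0, FILE_MB * 1024 * 1024, BLOCK))
    if random_order:
        random.shuffle(offsets)           # random access pattern
    start = time.perf_counter()
    total = 0
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(BLOCK))
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed    # MB/s

seq = read_throughput(random_order=False)
rnd = read_throughput(random_order=True)
print(f"sequential: {seq:.0f} MB/s, random 64K: {rnd:.0f} MB/s")
os.remove(PATH)
```

On a rotational drive the two numbers diverge sharply because every shuffled offset costs a seek; on an SSD they stay in the same ballpark.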
The difference is very noticeable. My laptop with a 1.4 GHz single-core CPU can boot Ubuntu in about 10 seconds (Win 7 in under 15), not counting the POST time. All applications open like they're already in cache. Amazing stuff. I also don't need suspend-to-RAM now because waking up from suspend-to-disk is very fast: at 250 MB/s, it takes only about 12 seconds to restore my 3 GB of RAM.
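That restore-time figure is just throughput arithmetic:

```python
ram_mb = 3 * 1024          # 3 GB of RAM to restore from the hibernation file
read_mb_per_s = 250        # the SSD's sequential read speed
restore_s = ram_mb / read_mb_per_s
print(f"restore takes about {restore_s:.1f} s")   # ≈ 12.3 s
```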
It saves about 2 W, too, which translates to roughly an extra 1.5 hours of battery life on my laptop.
Most efficient use of money for making a computer faster IMHO.
2W would give a lot of extra hours.
But the problem is the price. So little storage for such a large sum. That's a problem for laptops. And netbooks, naturally.
For desktop computers, it's less of a problem.
Anyway, to justify the price, the SSD needs at least a certain minimum read and write speed. And it needs to perform well in real-world tests as well, and I suppose that's where random reads/writes come in.
Fast random access on a drive is *the* factor in having fast reads/writes. If you also consider the mechanics of a rotational (non-SSD) drive, you see that not all bytes are created equal for reads: rotational drives typically have an outer track diameter of about 3.5" and an inner track diameter of about 1.5", a ratio of 2.33. Almost all drives use a variable number of sectors per track (as opposed to a fixed number), so the outer tracks can store more data, roughly 2.33x more (because sector length is fixed).
Not surprisingly, when you begin to deal with files in the huge range (think 100 GB), your real-world performance drops drastically as reads move towards the inner tracks (a problem if your drive is fragmented): you read nearly 2.33x faster on the outer tracks, simply due to physics, because in a single turn of the platter the head passes over more data on the longer tracks. SSDs make a lot of these problems go away. It would be awesome if we could use them at work, but unfortunately that's not the case.
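The 2.33x figure is just the circumference ratio. Assuming constant rotational speed and the same linear bit density on every track:

```python
outer_diam_in = 3.5    # outer track diameter (from the post above)
inner_diam_in = 1.5    # inner track diameter
# Circumference is proportional to diameter, so the data per track
# (and thus data passing under the head per revolution) scales the same way.
ratio = outer_diam_in / inner_diam_in
print(f"outer/inner throughput ratio ≈ {ratio:.2f}")   # ≈ 2.33
```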
operating systems: mac os 10.6, debian 5.0, windows 7
editor: back to emacs because it's more awesomer!!
version control: git
website: http://0xff.ath.cx/~as/