This is something I've always wondered about, but since I've never taken a hardware-level CS class, I've never been able to answer it.
How often do computers truly make errors? What I mean is: how often does it happen that, when you instruct your computer to write, say, "00000000" to a certain location in RAM or on disk, it actually writes "00010000"? As I've never run into a problem like this, the error rate must be astoundingly low, but since I can't see how it could be zero, I'm wondering what it actually is.
And, as a related question: how (briefly) are computers able to process such huge amounts of data without ever slipping up? Most people couldn't even read a list of 100 numbers aloud without making a mistake somewhere, yet computers scan through arrays of gigabytes flawlessly. How do they do it? What kind of error-checking mechanisms do they employ?
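To make that second question concrete, here is a rough sketch (in Python, purely as illustration) of the kind of mechanism I imagine: a parity bit stored alongside each byte and rechecked on every read. I have no idea whether real hardware works anything like this; the function names and the dict-as-memory model are entirely my own invention, and I gather real ECC memory uses something more sophisticated.

```python
def parity(byte: int) -> int:
    """Return 1 if the byte has an odd number of set bits, else 0."""
    return bin(byte).count("1") % 2

def write_with_parity(memory: dict, address: int, byte: int) -> None:
    # Store the byte together with its parity bit.
    memory[address] = (byte, parity(byte))

def read_with_check(memory: dict, address: int) -> int:
    # Recompute the parity on read; a mismatch means a bit flipped.
    byte, stored_parity = memory[address]
    if parity(byte) != stored_parity:
        raise RuntimeError(f"single-bit error detected at address {address:#x}")
    return byte

memory = {}
write_with_parity(memory, 0x00, 0b00000000)

# Simulate the kind of error I'm asking about: 0b00000000 becomes 0b00010000.
byte, p = memory[0x00]
memory[0x00] = (byte | 0b00010000, p)

read_with_check(memory, 0x00)  # raises, because the flipped bit no longer matches the parity bit
```

Is this roughly the idea, or do computers rely on something else entirely?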