Thread: True Errors in Memory and Storage

  1. #1
    Ethereal Raccoon Procyon's Avatar
    Join Date
    Aug 2001

    True Errors in Memory and Storage

    This is something I've always wondered about, but as I've never had a hardware CS class I've never been able to answer.

    How often do computers truly make errors? What I mean is: how often, when you instruct your computer to write, say, "00000000" to a certain location in RAM or disk, does it actually write "00010000"? Since I've never run into a problem like this, the error rate must be astoundingly low, but as I can't see how it could be zero, I'm wondering what it is.

    And, as a related question: how (briefly) are computers able to process such huge amounts of data without ever slipping up? Most people couldn't even read a list of 100 numbers aloud without making a mistake somewhere, yet computers can scan through arrays of gigabytes flawlessly. How do they do it? What kind of error-checking mechanisms do they employ?

  2. #2
    Join Date
    Jan 2002
    Answers to your questions might be found here:
    CProgramming FAQ
    Caution: this person may be a carrier of the misinformation virus.

  3. #3
    It's full of stars adrianxw's Avatar
    Join Date
    Aug 2001
    Almost universally now, computer memory blocks have extra bits used for error detection and correction, so a 32-bit word may actually be stored in 36 or more bits. The actual "value" of a memory location is the value calculated from the data bits together with the output of the EDC (Error Detection and Correction) unit. EDC systems vary: some can only detect an error (simple parity schemes), while others can detect multi-bit errors and correct them - you pays your money etc.
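
    The simplest EDC scheme mentioned above - a single parity bit - can be sketched in a few lines of C. This is just an illustration of the idea, not how a memory controller is actually wired: one extra bit records whether the byte had an odd number of 1 bits, so any single flipped bit (an odd number of flips, in general) is detectable, though not correctable or even locatable.

    ```c
    #include <stdio.h>

    /* Return 1 if the byte has an odd number of 1 bits, else 0. */
    static int parity(unsigned char b)
    {
        int p = 0;
        while (b) {
            p ^= 1;
            b &= b - 1;   /* clear the lowest set bit */
        }
        return p;
    }

    int main(void)
    {
        unsigned char data = 0x5A;        /* 01011010: four 1 bits */
        int stored_parity = parity(data); /* stored alongside the data */

        data ^= 0x10;                     /* simulate a single bit flip */

        if (parity(data) != stored_parity)
            printf("single-bit error detected\n");
        return 0;
    }
    ```

    Note that two flips in the same byte would cancel out and go unnoticed - which is exactly why the stronger multi-bit schemes cost extra bits.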

    There are two basic types of error. Hard errors usually arise from manufacturing faults and are always there. Soft errors are transient - sometimes just a borderline section of the crystal lattice that works some of the time and not others (temperature effects etc.). An interesting class of fault is the impact of, for example, a cosmic ray particle on a critical part of the lattice. Certain components are built much larger than they really need to be in order to reduce the chances of an impact on a critical point (i.e. the particle can hit the lattice, but there are enough atoms in it to make the impact negligible). These types of components are often used in space, and in nuclear-hardened military hardware.

    Typical EDC systems employ Hamming codes. Search for this, or start here...
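
    As a toy example of the Hamming idea, here is a Hamming(7,4) encoder and corrector in C: 4 data bits get 3 parity bits, enough to locate and correct any single flipped bit. Real ECC memory scales the same construction up to 64-bit words (SECDED), but the toy version shows the mechanism - recomputing the parity checks yields a "syndrome" that is the position of the bad bit.

    ```c
    #include <stdio.h>

    /* Encode 4 data bits (d3..d0) into a 7-bit codeword.
       Positions are 1..7; parity bits sit at positions 1, 2, 4. */
    static unsigned encode(unsigned d)
    {
        unsigned d0 = d & 1, d1 = (d >> 1) & 1,
                 d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
        unsigned p1 = d0 ^ d1 ^ d3;   /* checks positions 3,5,7 */
        unsigned p2 = d0 ^ d2 ^ d3;   /* checks positions 3,6,7 */
        unsigned p4 = d1 ^ d2 ^ d3;   /* checks positions 5,6,7 */
        /* lowest bit of the return value is position 1 */
        return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3)
                  | (d1 << 4) | (d2 << 5) | (d3 << 6);
    }

    /* Recompute the checks; the syndrome is the 1-based position
       of a single flipped bit, or 0 if the codeword is clean. */
    static unsigned syndrome(unsigned c)
    {
        unsigned s = 0;
        for (int p = 0; p < 3; p++) {
            unsigned mask = 1u << p, par = 0;
            for (unsigned pos = 1; pos <= 7; pos++)
                if (pos & mask)
                    par ^= (c >> (pos - 1)) & 1;
            s |= par << p;
        }
        return s;
    }

    int main(void)
    {
        unsigned code = encode(0xB);  /* data bits 1011 */
        code ^= 1u << 4;              /* flip the bit at position 5 */
        unsigned s = syndrome(code);
        if (s)
            code ^= 1u << (s - 1);    /* flip it back */
        printf("syndrome %u, corrected codeword %#x\n", s, code);
        return 0;
    }
    ```

    The trick is that each parity bit covers exactly the positions whose binary index contains that parity bit's position, so the failing checks spell out the error location in binary.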
    Wave upon wave of demented avengers march cheerfully out of obscurity unto the dream.

  4. #4
    5|-|1+|-|34|) ober's Avatar
    Join Date
    Aug 2001
    Answers to your questions might be found here:
    AH HAHAHAHA... LMAO... good site...
