Salem is definitely on the right track. I/O can slow things down by hours - crummy algorithms can slow things down for weeks.

I vote for:
Linear searches on a growing array or linked list. At the start you are searching a small number of data elements, doing compares, etc. As you add more elements, every search takes longer - an unordered list is O(n) per search, so if you search for every record you add, the total work grows roughly as O(n^2).
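A toy C sketch of what that pattern looks like (array size and values here are made up purely for illustration, not taken from the OP's code):
Code:
/* Toy illustration only - made-up array size and values. */
#include <stdio.h>
#include <stdlib.h>

/* O(n) scan of an unsorted array */
static int contains(const int *a, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return 1;
    return 0;
}

int main(void)
{
    size_t cap = 100000, n = 0;
    int *a = malloc(cap * sizeof *a);
    if (a == NULL)
        return 1;

    /* Every insert does an O(n) duplicate check first, so loading n
       records costs roughly n*n/2 compares in total.  Double cap and
       the run time roughly quadruples - that is how "minutes" turns
       into "weeks" on a big data set. */
    for (int v = 0; n < cap; v++)
        if (!contains(a, n, v))
            a[n++] = v;

    printf("loaded %lu elements\n", (unsigned long)n);
    free(a);
    return 0;
}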

No way does I/O account for weeks of runtime. If that were the case, the system would be unusable for other processes.

I read/write 10+GB files on HPUX 9000 series every day. I can read a whole file with fgets() in less than 120 seconds (a sketch of that kind of loop is at the bottom of this post).
11.2 GB file example:
Code:
/bbp01/CSF_regbills_out/cycle_18> time  hash32 [hiddenfilename].REGB.PDF
23187971334

real    1m57.44s
user    0m34.53s
sys     0m4.34s
That code read and processed every single character in the file. The difference:
Code:
real - (user + sys)
is due primarily to I/O wait time - i.e., the process giving up its quantum while it blocks on direct I/O.
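With the numbers above that works out to 117.44 - (34.53 + 4.34) = about 78.6 seconds of wall-clock time spent blocked on the disk rather than running on a CPU.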

I could see a crummy PIII PC taking a full hour to read a file like that, assuming the OS supported largefiles. Not weeks.
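For what it's worth, the kind of read loop I'm talking about is nothing more exotic than this (the file name and buffer size below are placeholders, not the real hash32 source):
Code:
/* Toy sequential read loop - placeholder file name and buffer size. */
#include <stdio.h>

int main(void)
{
    char buf[65536];
    unsigned long long nchars = 0;
    FILE *fp = fopen("bigfile.dat", "r");   /* placeholder name */

    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    /* fgets() hands back one line-sized chunk at a time; every
       character in the file passes through buf exactly once. */
    while (fgets(buf, sizeof buf, fp) != NULL)
        for (char *p = buf; *p != '\0'; p++)
            nchars++;                        /* "process" each character */

    printf("%llu characters read\n", nchars);
    fclose(fp);
    return 0;
}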