I just ran a quick experiment on my Linux machine (not rocket science, but it shows what I meant) -
Code:
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=1
102400000+0 records in
102400000+0 records out
102400000 bytes (102 MB) copied, 239.271 s, 428 kB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ rm copy
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=2
51200000+0 records in
51200000+0 records out
102400000 bytes (102 MB) copied, 117.278 s, 873 kB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ rm copy
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=4
25600000+0 records in
25600000+0 records out
102400000 bytes (102 MB) copied, 61.4141 s, 1.7 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=8
12800000+0 records in
12800000+0 records out
102400000 bytes (102 MB) copied, 30.6745 s, 3.3 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=16
6400000+0 records in
6400000+0 records out
102400000 bytes (102 MB) copied, 17.4458 s, 5.9 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=32
3200000+0 records in
3200000+0 records out
102400000 bytes (102 MB) copied, 11.4355 s, 9.0 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=64
1600000+0 records in
1600000+0 records out
102400000 bytes (102 MB) copied, 3.99045 s, 25.7 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=128
800000+0 records in
800000+0 records out
102400000 bytes (102 MB) copied, 2.07903 s, 49.3 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=256
400000+0 records in
400000+0 records out
102400000 bytes (102 MB) copied, 1.18204 s, 86.6 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=512
200000+0 records in
200000+0 records out
102400000 bytes (102 MB) copied, 0.687293 s, 149 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=1024
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 0.455564 s, 225 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=2048
50000+0 records in
50000+0 records out
102400000 bytes (102 MB) copied, 0.35289 s, 290 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=4096
25000+0 records in
25000+0 records out
102400000 bytes (102 MB) copied, 0.260405 s, 393 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=8192
12500+0 records in
12500+0 records out
102400000 bytes (102 MB) copied, 0.226622 s, 452 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=16364
6257+1 records in
6257+1 records out
102400000 bytes (102 MB) copied, 0.242798 s, 422 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=32768
3125+0 records in
3125+0 records out
102400000 bytes (102 MB) copied, 0.215378 s, 475 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=65536
1562+1 records in
1562+1 records out
102400000 bytes (102 MB) copied, 0.209652 s, 488 MB/s
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ rm copy
cyberfish@cyberfish-laptop:/tmp/bigfile_test$ dd if=big_file of=copy bs=4194304  # 4 MB
24+1 records in
24+1 records out
102400000 bytes (102 MB) copied, 0.286021 s, 358 MB/s
For non-Linux/UNIX people: dd, in this case, copies big_file (a 100 MB file of random data) to "copy", reading and writing in blocks of the size given by bs. The speed increase becomes negligible beyond 16 KB blocks. (And no, I don't have a 400 MB/s hard drive; that's almost certainly the kernel's caching/buffering at work.)
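If anyone wants to reproduce this without waiting four minutes for the bs=1 run, here's a rough sketch of the same experiment on a smaller 1 MB file (the file names are just placeholders; assumes GNU dd on Linux):

```shell
# Make a 1 MB file of random data
dd if=/dev/urandom of=test_file bs=4096 count=256 2>/dev/null

# Copy it with increasing block sizes; dd prints its stats on stderr,
# and the last line is the throughput summary
for bs in 1 64 4096 65536; do
    printf 'bs=%s: ' "$bs"
    dd if=test_file of=test_copy bs="$bs" 2>&1 | tail -n 1
done

# Sanity check: the copy should be byte-identical to the original
cmp -s test_file test_copy && echo "copies match"
rm -f test_file test_copy
```

Same pattern as above: smaller block size means more read()/write() syscalls for the same amount of data, and the per-syscall overhead dominates long before the disk does.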