I was a bit ambiguous, though... both. brewbuck raises an interesting point, one that I hadn't thought of. However, I've always lived by the advice of giving everything you have when writing, and reading as much as possible when reading. What I was trying to get at with the "Re-evaluate" is that data very rarely comes in 1 byte chunks. An integer is 4 bytes, a string is however long that string is, etc. Often, I know the size of a larger structure is fixed, and I can I/O the whole thing in one large chunk. For example, a file header is 20 bytes - that's one 20 byte I/O - and then that header reveals 2MB worth of data - that's another 2MB I/O. Very rarely do I find myself reading/writing 1 byte chunks.
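For instance, something like this (just a quick sketch - the header layout, field names, and filename are made up for illustration, and a real format would also pin down endianness):
Code:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical 20 byte header: five 4-byte fields, so the
       struct has no padding and sizeof hdr is exactly 20. */
    struct
    {
        uint32_t magic;
        uint32_t version;
        uint32_t flags;
        uint32_t checksum;
        uint32_t data_size; /* bytes of payload after the header */
    } hdr;
    unsigned char *data;
    FILE *fp = fopen("somefile.bin", "rb");

    if(fp == NULL)
        return 1;
    /* One 20 byte I/O for the header... */
    if(fread(&hdr, sizeof hdr, 1, fp) != 1)
        return 1;
    /* ...then one big I/O for however much data the header says follows. */
    data = malloc(hdr.data_size);
    if(data == NULL || fread(data, hdr.data_size, 1, fp) != 1)
        return 1;
    /* ...use the data, then clean up... */
    free(data);
    fclose(fp);
    return 0;
}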
On top of that, calling fwrite() or fread() fewer times means fewer error checks on the file (you do check your errors, don't you? ;-) ), so even when you're not in a tight loop, I particularly appreciate this benefit. It did sound like the OP was in a tight loop of some sort, but he mentions an array - is it possible to I/O that entire array, all at once? Maybe, maybe not - hence "re-evaluate".
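In other words, one call, one check (another sketch - the array size and filename are made up):
Code:
#include <stdio.h>

int main(void)
{
    int array[1024];
    size_t i;
    FILE *fp = fopen("out.bin", "wb");

    if(fp == NULL)
        return 1;
    for(i = 0; i < 1024; ++i)
        array[i] = (int)i;
    /* One fwrite() for the whole array means exactly one error
       check, instead of 1024 separate ones. */
    if(fwrite(array, sizeof array, 1, fp) != 1)
    {
        perror("fwrite");
        return 1;
    }
    fclose(fp);
    return 0;
}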
However, I'll also contend that fread/fwrite'ing more per call is still faster:
Code:
$ gcc -o fwrite fwrite.c
$ ./fwrite -1
1GB took 15.091726 seconds.
$ ./fwrite -2 1024
1GB took 4.674032 seconds.
$ cat fwrite.c
#include <stdio.h>
#include <string.h> /* strcmp */
#include <sys/time.h>
#include <stdint.h>
#include <stdlib.h>

double now()
{
    struct timeval tv = {0};

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

/* Progress output; only used by the commented-out block below. */
void output_time(uint64_t count, double time)
{
    printf(
        "\r\x1b[2K%fMB, %fMB/s",
        count / 1024.0 / 1024.0,
        count / 1024.0 / 1024.0 / time);
    fflush(stdout);
}

int main(int argc, char *argv[])
{
    FILE *fp;
    int c, method;
    size_t i;
    unsigned char *buf = NULL;
    size_t bufsize = 0;
    uint64_t count = 0, stop = 1 << 30;
    double start, end;

    if(argc > 1 && strcmp(argv[1], "-1") == 0)
        method = 1;
    else if(argc > 2 && strcmp(argv[1], "-2") == 0)
    {
        method = 2;
        bufsize = atoi(argv[2]);
        // yes, I know, we leak this...
        buf = malloc(bufsize);
    }
    else
    {
        fprintf(stderr, "Fail.\n");
        return 1;
    }
    fp = fopen("/dev/zero", "rb");
    if(fp == NULL)
    {
        fprintf(stderr, "Fail.\n");
        return 1;
    }
    start = now();
    while(count < stop)
    {
        // if(next_count <= count)
        // {
        //     output_time(count, now() - start);
        //     next_count = count + (1 << 24);
        // }
        if(method == 1)
        {
            /* method 1: one byte per library call */
            c = fgetc(fp);
            ++count;
        }
        else if(method == 2)
        {
            /* method 2: bufsize bytes per call, then touch every
               byte so both methods do comparable per-byte work */
            fread(buf, bufsize, 1, fp);
            for(i = 0; i < bufsize; ++i)
            {
                c = buf[i];
                ++count;
            }
        }
    }
    end = now();
    (void)c; /* the bytes themselves are never used */
    printf("1GB took %f seconds.\n", end - start);
    return 0;
}
There's a big, huge grain of salt in here though - even though the single-byte fgetc()s were slower, they still I/O'd data at 60MB/s, which, for me, is faster than my disk. (The large chunk reads were ~200MB/s.) Keep in mind that stdio buffers internally either way, so each fgetc() costs a function call, not a system call - the per-call overhead only starts to matter once you're already outrunning the hardware.